TL;DR
Vendor slide decks quote ZTNA statistics from market research — adoption rates, average breach costs, industry maturity levels. These are fine for context but they are not your numbers. Your board wants metrics from your own environment: is the ZTNA working, are attackers being kept out, is the team getting better over time. This post lists the 17 metrics that actually matter, with the formula for each, the data source, the collection cadence, and the failure mode to watch for. Start with the recommended seven; expand over quarters; avoid the common trap of instrumenting everything before reporting anything.
Who this is for
CISOs, security leads, and security-operations managers responsible for reporting on ZTNA programme health. Platform engineers who build the dashboards. Board members and audit committee members who read the output and need to know what to ask about. Assumes familiarity with basic security operations concepts.
Table of contents
- Why metrics from your environment beat industry statistics
- The three categories — coverage, health, effectiveness
- Coverage metrics (1-6)
- Health metrics (7-12)
- Effectiveness metrics (13-17)
- How to collect each without custom engineering
- Baseline and thresholds
- Dashboard layout
- Board reporting template
- Common pitfalls
- Recommended starting set
1. Why metrics from your environment beat industry statistics
Industry statistics make for attention-grabbing slides. “Breaches cost an average of $4.88 million” (IBM’s Cost of a Data Breach Report 2024). “81% of organisations have adopted at least partial Zero Trust” (various vendor surveys). These are fine for opening a keynote.
They are not useful for running a programme. Your environment’s breach cost is whatever your specific incidents have cost. Your Zero Trust adoption is whatever percentage of your users and resources are actually behind ZTNA. External averages tell you nothing about either.
Metrics from your own environment — measured over time, compared to your own baseline, actionable by your own team — are what drive improvement and what a board needs to see to trust that the security programme is working.
The 17 metrics below are all measurable from a normally instrumented ZTNA deployment. None requires exotic tooling.
2. The three categories — coverage, health, effectiveness
The categories are listed in the order you should think about them.
- Coverage — is the ZTNA actually protecting what it is supposed to protect? Six metrics.
- Health — is the ZTNA operating correctly? Six metrics.
- Effectiveness — is the ZTNA doing its job of preventing and detecting unauthorised access? Five metrics.
Coverage without health is a brittle deployment. Health without effectiveness is a ZTNA that runs well but does not prevent incidents. All three categories matter.
3. Coverage metrics (1-6)
Metric 1 — Users behind ZTNA (%)
Formula: (users with an active ZTNA identity in the last 30 days) / (total active users in the IdP)
Target: 100% for standard workforce users. Exceptions (service accounts, break-glass) documented.
Failure mode to watch: Shadow accounts bypassing ZTNA via legacy VPN or direct network access.
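The arithmetic is a simple set intersection. A minimal sketch, assuming you can export two user lists — active users from the IdP and identities the ZTNA has seen in the last 30 days; the set contents here are illustrative:

```python
# Metric 1 sketch: % of IdP users covered by ZTNA. Input sets are
# hypothetical exports from the IdP and the ZTNA admin console.
def pct_users_behind_ztna(idp_users: set[str], ztna_users: set[str]) -> float:
    if not idp_users:
        return 0.0
    covered = idp_users & ztna_users   # only count users the IdP knows about
    return 100.0 * len(covered) / len(idp_users)

idp = {"alice", "bob", "carol", "dave"}
ztna = {"alice", "bob", "carol", "mystery-svc"}   # mystery-svc not in IdP

coverage = pct_users_behind_ztna(idp, ztna)       # 3 of 4 → 75.0
```

Accounts in the ZTNA but absent from the IdP (`mystery-svc` above) deserve a separate check — they are adjacent to the shadow-account failure mode this metric warns about.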
Metric 2 — Managed devices behind ZTNA (%)
Formula: (devices with ZTNA agent + in MDM) / (devices in MDM)
Target: 100% minus a small BYOD exception percentage.
Failure mode: Device deployed but agent never installed.
Metric 3 — Critical resources behind ZTNA (%)
Formula: (critical resources accessed only via ZTNA) / (critical resources in scope)
Target: 100% for top-tier resources (production databases, identity systems, code repositories).
Failure mode: Resource reachable via a legacy network path that bypasses ZTNA.
Metric 4 — MFA coverage on privileged accounts (%)
Formula: (privileged accounts with active MFA enrolled) / (total privileged accounts)
Target: 100%. Phishing-resistant MFA (FIDO2/WebAuthn) on 100% of admin accounts.
Failure mode: Legacy admin account with TOTP fallback still accepted.
Metric 5 — Device posture compliance rate (%)
Formula: (devices passing all required posture checks) / (total devices)
Target: Baseline first; aim for 95%+ sustained compliance.
Failure mode: Posture checks that always pass (check is misconfigured) or posture that drifts after deployment.
Metric 6 — Policy coverage ratio
Formula: (resources with explicit ZTNA policy) / (resources reachable via ZTNA)
Target: 100%. Every resource should have an explicit allow or deny — no defaults.
Failure mode: “Default allow” catch-all policy on a subset of resources.
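For Metric 6, the interesting output is less the ratio than the gap list: which reachable resources lack an explicit policy. A sketch, assuming hypothetical sets exported from the ZTNA admin API:

```python
# Metric 6 sketch: resources reachable via ZTNA with no explicit policy.
# Set contents are illustrative, not real resource names.
def policy_gaps(reachable: set[str], with_policy: set[str]) -> set[str]:
    return reachable - with_policy

reachable = {"db-prod", "git", "wiki", "jump-host"}
with_policy = {"db-prod", "git", "wiki"}

gaps = policy_gaps(reachable, with_policy)   # {"jump-host"}
ratio = 100.0 * len(with_policy & reachable) / len(reachable)   # 75.0
```

Anything in `gaps` is being handled by a default — exactly the catch-all failure mode above — and should get an explicit allow or deny.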
4. Health metrics (7-12)
Metric 7 — Authentication success rate (%)
Formula: (successful auth attempts) / (total auth attempts)
Target: Baseline per-user population; typical range 97-99.5% for low-friction deployments.
Failure mode: Rate drops toward 95%, meaning users are struggling with auth or a service disruption is in progress.
Metric 8 — Authentication latency (P95, P99)
Formula: 95th and 99th percentile time from auth request to decision.
Target: P95 < 500ms, P99 < 2s.
Failure mode: Slow IdP, slow ZTNA policy evaluation, network path issues.
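Your monitoring stack will compute percentiles for you; this sketch just shows the nearest-rank arithmetic behind the P95/P99 targets, on illustrative latency samples:

```python
# Nearest-rank percentile for Metric 8, on raw auth latencies in ms.
import math

def percentile(samples: list[float], p: float) -> float:
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered)) - 1   # nearest-rank index
    return ordered[max(k, 0)]

latencies_ms = [120, 150, 90, 480, 200, 2100, 160, 140, 170, 130]
p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
```

With only ten samples, P95 and P99 both land on the single 2100 ms outlier — which is exactly why tail percentiles, not averages, are the right target for auth latency.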
Metric 9 — Tunnel establishment success rate (%)
Formula: (tunnels established successfully) / (tunnel attempts)
Target: >99% in a stable environment.
Failure mode: NAT traversal failing, relay fabric overloaded, control-plane issues.
Metric 10 — Posture check latency (P95)
Formula: 95th percentile time from posture check trigger to response.
Target: <1 second for in-session checks.
Failure mode: Slow EDR integration, slow MDM API.
Metric 11 — Coordination-plane availability (%)
Formula: (minutes ZTNA control plane responsive) / (total minutes in period) × 100
Target: 99.9%+ for managed products (vendor SLA); 99.5%+ for self-hosted (your SLA).
Failure mode: Coordination-plane outage during business hours.
Metric 12 — Agent update lag (days)
Formula: Median days from release of a new ZTNA agent version until a target share of the fleet runs it (e.g., time to 95% adoption).
Target: <14 days for non-critical updates; <72 hours for security-critical.
Failure mode: Fleet stuck on old agent version. Users disable auto-update.
5. Effectiveness metrics (13-17)
Metric 13 — Authorisation denial rate (%)
Formula: (authorisation denials) / (authorisation requests)
Target: 0.5-3% is typical for a well-tuned environment.
Analysis: very low denial rate suggests policy is too permissive; very high rate suggests policy is misconfigured or users are attempting things they should not.
Metric 14 — Posture-triggered session terminations per week
Formula: Count of sessions terminated or restricted due to posture failures in the past 7 days.
Target: Baseline per fleet size; expect a steady-state rate.
Analysis: sudden spike may indicate a fleet-wide posture issue (antivirus update, OS update) or a real compromise.
Metric 15 — Anomalous access detections per month
Formula: Count of access patterns flagged by anomaly detection (impossible travel, unusual time, new geography) per month.
Target: Baseline first; trend is what matters.
Analysis: rising detections with no new users or new resources may indicate reconnaissance.
Metric 16 — Mean time to detect (MTTD) for access-related incidents
Formula: Median time from compromise event to security-team detection, across incidents in the quarter. (Median, despite the "mean" in the name, so one slow incident does not skew the figure.)
Target: <30 minutes for in-session detections; <4 hours for post-session forensic detections.
Failure mode: Incidents discovered only by external notification.
Metric 17 — Mean time to revoke (MTTR) for compromised credentials
Formula: Median time from detection of compromised credential to effective revocation across incidents in the quarter.
Target: <10 minutes for critical systems.
Failure mode: Manual revocation processes; lost time waiting for admin approval.
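Both MTTD and MTTR reduce to medians over timestamp differences. A sketch over hypothetical incident records — the field names and timestamps are illustrative:

```python
# Metrics 16 and 17 sketch: median minutes from compromise to detection
# (MTTD) and from detection to revocation (MTTR) across a quarter.
from datetime import datetime
from statistics import median

incidents = [
    {"compromised": datetime(2026, 4, 3, 9, 0),
     "detected":    datetime(2026, 4, 3, 9, 12),
     "revoked":     datetime(2026, 4, 3, 9, 16)},
    {"compromised": datetime(2026, 5, 8, 14, 0),
     "detected":    datetime(2026, 5, 8, 14, 20),
     "revoked":     datetime(2026, 5, 8, 14, 23)},
]

mttd_min = median((i["detected"] - i["compromised"]).total_seconds() / 60
                  for i in incidents)
mttr_min = median((i["revoked"] - i["detected"]).total_seconds() / 60
                  for i in incidents)
```

The hard part in practice is not this arithmetic but getting trustworthy `compromised` timestamps — usually reconstructed from forensics after the fact.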
6. How to collect each without custom engineering
The six data sources that cover all 17 metrics:
- ZTNA audit logs. Most metrics (1, 3, 6, 7, 9, 13, 14, 15, 16, 17) read directly from ZTNA-exported logs.
- Identity provider logs. Metrics 1, 4, 7, 8. Most IdPs export structured logs to SIEM.
- MDM and EDR reports. Metrics 2, 5. Typically JSON or CSV exports from the admin console.
- Coordination-plane health endpoints. Metric 11. Vendor-provided or built-in if self-hosted.
- Agent update reports. Metric 12. From ZTNA admin console.
- SIEM correlation. Metrics 15, 16, 17. The SIEM is where telemetry converges.
A ZTNA with comprehensive log export, an IdP with JSON/CEF log export, and a SIEM to ingest both — plus MDM/EDR where relevant — covers all 17 without custom engineering.
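As an example of how little engineering "reads directly from ZTNA-exported logs" means in practice, here is Metric 13 computed from newline-delimited JSON audit logs. The `decision` field name is an assumption — check your product's actual log schema:

```python
# Metric 13 sketch: authorisation denial rate from NDJSON audit logs.
# The "decision" field and its values are hypothetical schema details.
import json

def denial_rate(log_lines: list[str]) -> float:
    decisions = [json.loads(line).get("decision") for line in log_lines]
    authz = [d for d in decisions if d in ("allow", "deny")]
    if not authz:
        return 0.0
    return 100.0 * authz.count("deny") / len(authz)
```

The same loop shape — filter events of one type, count, divide — covers most of the log-derived metrics above.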
7. Baseline and thresholds
Start by measuring for 30-60 days with no targets. This is your baseline.
- Identify the stable range for each metric.
- Identify trends — is the number rising, falling, flat.
- Identify outlier weeks — did a known event cause the spike.
After baseline, set thresholds based on your data. Universal absolute thresholds (100% MFA coverage on admins) can be set from day one. Relative thresholds (authorisation denial rate within 2 standard deviations of baseline) require your baseline.
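A relative threshold of the kind just described is a one-liner once the baseline exists. A sketch, with illustrative weekly denial-rate values as the baseline:

```python
# Relative-threshold check: flag a value outside baseline mean ± k·stdev.
from statistics import mean, stdev

def outside_baseline(baseline: list[float], current: float,
                     k: float = 2.0) -> bool:
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(current - mu) > k * sigma

weekly_denial_rate = [1.6, 1.9, 1.7, 2.0, 1.8, 1.7, 1.9, 1.8]   # baseline
```

This is deliberately crude — no seasonality, no trend removal — but it is enough to turn "4,712 denials" into "denial rate outside its baseline band", which is the alert you actually want.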
8. Dashboard layout
A one-page operational dashboard with three sections.
Coverage section (top)
Metrics 1-6. Single-row green/yellow/red status. At-a-glance “is the ZTNA deployed correctly”.
Health section (middle)
Metrics 7-12. Sparkline trends for the past 30 days. At-a-glance “is it operating correctly”.
Effectiveness section (bottom)
Metrics 13-17. Numbers plus click-through to details. “Is it preventing and detecting”.
A second dashboard for compliance specifically — SOC 2 evidence, HIPAA audit-log coverage, NIS2 incident-reporting timeliness — is worth building once the operational dashboard is stable.
9. Board reporting template
A single slide, quarterly.
ZTNA Programme Health — Q2 2026
Coverage: 98% users, 100% critical resources, 100% privileged MFA
(target 100% across the board; 2% user gap under investigation, plan on slide 12)
Incidents: 3 access-related incidents, 0 material impact
MTTD median 12 min (target <30); MTTR median 4 min (target <10)
Denial rate: 1.8% (within baseline band)
No anomalous trend detected in Q2
Top risk: Legacy VPN still in use by finance team (15% of finance traffic)
Mitigation plan: Q3 cutover, on track
Action: approve Q3 budget for finance cutover tooling ($X)
Four numbers. One narrative. One ask. Fits one slide. Board members can read it in 30 seconds and remember the key points next quarter.
10. Common pitfalls
Five mistakes to avoid.
10.1 Reporting every metric to the board
Seventeen metrics are operational. Board gets three to five. Do not cram more.
10.2 Setting targets before baselining
A target of “98% authentication success” is meaningless without your baseline. Measure first, target second.
10.3 Treating metrics as goals rather than signals
When “authentication success rate” becomes a goal, the team is incentivised to relax auth challenges to hit it. Metrics inform; they do not directly evaluate staff.
10.4 Failing to invest in data quality
A dashboard of wrong numbers is worse than no dashboard. Invest in log schema consistency, time synchronisation, and deduplication before building visualisations.
10.5 Counting everything and measuring nothing
Without baseline and trend, raw counts are noise. Report trend, not snapshot. “Authorisation denials up 15% this month” is meaningful; “4,712 denials this month” is not.
11. Recommended starting set
If you are instrumenting from scratch, start with these seven metrics. They cover the highest-value questions and come straight out of normal ZTNA logs.
- Metric 1: Users behind ZTNA (%)
- Metric 3: Critical resources behind ZTNA (%)
- Metric 4: MFA coverage on privileged accounts (%)
- Metric 7: Authentication success rate (%)
- Metric 11: Coordination-plane availability (%)
- Metric 13: Authorisation denial rate (%)
- Metric 17: Mean time to revoke
These seven answer: are the right people covered, is the platform operating, is it catching bad decisions, how fast do we respond. The other ten metrics extend the picture but are not required to ship a useful programme.
Further reading
- NIST SP 800-207 — Zero Trust Architecture.
- CISA Zero Trust Maturity Model v2.0.
- AICPA Trust Services Criteria.
Related reading on this blog
- SOC 2 Remote Access Controls
- What Is ZTNA?
- Device Posture Checks That Actually Work
- NIS2 Remote Access Requirements
Try QuickZTNA
QuickZTNA exports structured audit logs to SIEM in JSON and CEF, covering all events needed for the 17 metrics above. Business-tier dashboards include the seven-metric starting set out of the box. Start on Free.