Clover ERA.
The Evidence

Every claim. Every source. Every limitation acknowledged.

The Silent Degradation thesis rests on research. This page documents the primary sources behind every load-bearing claim, the methodology behind the Manager Gap Index cohort, and the limitations we acknowledge openly. Skeptical readers should start here.

  1. 02 — The Four-Move Thesis Defended
  2. 03 — The Eight External Validators
  3. 04 — The Cohort Methodology
  4. 05 — Objection Defenses
  5. 06 — Acknowledged Limitations
  6. 07 — Citations and Bibliography
  7. 08 — Updates and Version History

02 — The Four-Move Thesis Defended

The structural argument behind Silent Degradation, with primary sources.

Move 01 of 04

Capital and technology drove 75 years of productivity.

For 75 years, productivity gains came from capital and technology. Workers got more productive each hour because the tools around them got better, not because they worked harder.

Primary research

The decomposition of post-war productivity growth into capital deepening, total factor productivity, and labour quality contributions is uncontroversial in macroeconomics. Standard references: the OECD Productivity Statistics database, the U.S. Bureau of Labor Statistics Multifactor Productivity series, and the Penn World Tables.

Across OECD countries between 1970 and 2005, capital deepening contributed 0.8 to 1.2 percentage points annually to labour productivity growth. Total factor productivity contributed an additional 0.7 to 1.0 points. Labour quality improvements contributed 0.2 to 0.4 points.

The U.S. Bureau of Labor Statistics has documented that between 1973 and 2014, labour productivity in the nonfarm business sector grew at 1.7 percent per year, with capital deepening accounting for approximately 60 percent of that growth.
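The decomposition above is plain growth-accounting arithmetic. A minimal sketch using the midpoints of the OECD ranges quoted in the text; all figures are illustrative, and real decompositions vary by methodology, country, and period:

```python
# Growth-accounting sketch: labour productivity growth as the sum of its
# contributions. Midpoints of the OECD ranges quoted above; illustrative only.
capital_deepening = 1.0   # pp/year, midpoint of 0.8-1.2
tfp = 0.85                # pp/year, midpoint of 0.7-1.0
labour_quality = 0.3      # pp/year, midpoint of 0.2-0.4

labour_productivity_growth = capital_deepening + tfp + labour_quality
capital_share = capital_deepening / labour_productivity_growth

print(f"{labour_productivity_growth:.2f} pp/year, "
      f"capital deepening share {capital_share:.0%}")
```

With these midpoints, capital deepening alone accounts for nearly half of labour productivity growth, which is why the thesis treats it as the dominant post-war lever.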

Mechanism

Workers were the platform on which the productivity gains were deployed. The same human, with better machines and better software, produced more output per hour. The cognitive demand on the worker was approximately constant; what changed was the leverage their hour created.

Limitations

The exact percentages of the decomposition vary by methodology, country, and time period. The "75 years" framing anchors to the post-WWII baseline (roughly 1948 onwards) and reflects the period over which the capital-and-technology productivity model held; the highest gains were concentrated in the early decades, with the slowdown beginning around 2005. The Solow growth model that underlies these decompositions has known limitations, particularly around treatment of human capital and intangible investment.

Sources
  • OECD Productivity Statistics database.
  • U.S. Bureau of Labor Statistics, Multifactor Productivity series. bls.gov
  • Penn World Tables.

Move 02 of 04

That model has broken since 2005.

Capital deepening has slowed since 2005. AI is now increasing cognitive demand rather than reducing it. The discretionary effort that used to be quietly available has collapsed.

This claim has three sub-claims, each with its own evidence base.

2.2a — Capital deepening has slowed since 2005.

Primary research

The U.S. Bureau of Labor Statistics has documented the productivity slowdown extensively. The post-2004 productivity growth rate of 1.4 percent per year (through 2014) was 30 percent below the 1948 to 1973 average. The most recent BLS analyses (2023 to 2025) have continued to show weak capital deepening contributions to overall productivity.

The OECD has published multiple studies documenting that across nearly every OECD country, productivity growth slowed sharply after 2005. The cumulative cost of the U.S. productivity slowdown from 2005 to 2018 has been estimated at $10.9 trillion in lost output.

Sources
  • BLS productivity slowdown analysis. bls.gov/opub/mlr/2018
  • OECD Compendium of Productivity Indicators (annual).
  • The Productivity Slowdown. Federal Reserve Bank of San Francisco conference, 2017 and subsequent updates.

2.2b — AI is increasing cognitive demand rather than reducing it.

This sub-claim is the one most likely to be challenged in a sales conversation. It deserves the most detailed defense.

Primary research

Microsoft Research, January 2025. "The Impact of Generative AI on Critical Thinking" (Lee et al., 2025), based on a survey of 319 knowledge workers, found that AI use shifts cognitive work from generation to evaluation. Heavy AI users showed reduced critical thinking engagement during routine tasks, but the cognitive work for non-routine tasks was reported as more demanding when AI was involved. Workers expended more effort applying critical thinking with AI than they would performing the same tasks without it.

Harvard Business Review, February 2026. Eight months of research inside a 200-person U.S. tech firm found that AI tool adoption was correlated with increased work intensity rather than decreased workload. Employees using AI tools did not work less; output expectations rose to consume the time savings.

UC Berkeley Labor Center, 2025. A longitudinal study tracked workers who adopted AI tools through 2025. 67 percent of adopters reported working more hours, not fewer, by the end of the year. This is the single most cited finding in the AI cognitive load literature because it is longitudinal and specifically isolates the adopter population.

Deloitte, 2025 Workforce Intelligence Report. Mental fatigue and cognitive strain have surpassed workload volume as the leading predictors of burnout. This is a methodological shift; previously workload volume was the primary predictor, and the change indicates that the nature of work has shifted toward higher cognitive demand independent of hours worked.

Mechanism

Three structural mechanisms explain why AI increases cognitive load rather than decreasing it.

Elimination of cognitive recovery breaks. Pre-AI knowledge work contained natural cognitive breaks: waiting for reports to compile, manually formatting documents, searching through documents for specific data points. These tasks were not intellectually demanding and served as built-in recovery periods. AI eliminates these breaks. When every task that used to take twenty minutes now takes twenty seconds, the worker moves immediately to the next cognitively demanding task. The result is an uninterrupted stream of high-intensity cognitive work with no natural recovery time.

Multiplication of decisions. AI multiplies decisions rather than eliminating them. Every output requires an accept, reject, or revise decision under uncertainty. Pre-AI, decisions were embedded in the writing process. Post-AI, the worker generates outputs faster but must still evaluate, edit, validate, and approve each one. The cognitive load shifts from generation to evaluation, which is often more draining for sustained periods.

Rising expectation curve. Marketing departments using AI content generation have been expected to produce 3.2x more content pieces per month compared to pre-AI baselines (Content Marketing Institute, 2025). As AI makes individual tasks faster, the volume of expected output rises to consume the time savings. The worker is now responsible for managing 3x the output with no proportional increase in cognitive bandwidth.

Sources
  • Lee, M., et al. (2025). The Impact of Generative AI on Critical Thinking. Microsoft Research. microsoft.com/research
  • Harvard Business Review (2026). 200-person tech firm AI adoption study.
  • UC Berkeley Labor Center (2025). AI Adoption Longitudinal Study.
  • Deloitte (2025). Workforce Intelligence Report.
  • Content Marketing Institute (2025). AI Content Production Benchmarks.

2.2c — Discretionary effort has collapsed.

Primary research

Gallup, State of the Global Workplace 2026. Global engagement is at 20 percent, the lowest measured level since 2020. Manager engagement dropped 9 percentage points between 2022 and 2025, with a 5-point drop between 2024 and 2025 alone. The historical "manager engagement premium" between manager engagement and individual contributor engagement has effectively closed.

ActivTrak, State of the Workplace 2026. Behavioral data from 163,638 employees across 1,111 companies. Disengagement risk now exceeds burnout risk for the first time on record. Focus efficiency dropped to 60 percent.

Perceptyx longitudinal motivation data. Drawing on 20 million employee survey responses, Perceptyx has documented intrinsic motivation collapse over a multi-year window.

Sources
  • Gallup. State of the Global Workplace 2026. gallup.com
  • ActivTrak. State of the Workplace 2026.
  • Perceptyx workforce research archives.
Mechanism, Move 2

If capital deepening has slowed (worker hours less leveraged by capital), AI is increasing cognitive demand (worker hours more cognitively expensive), and discretionary effort has collapsed (workers withholding the extra effort that used to absorb new demands), then the productivity model that worked from 1948 to 2005 has structurally broken. The three sub-claims combine into a coherent argument about why productivity now depends on the human in a way it did not for 75 years.

Limitations

The three sub-claims are individually well-supported, but the integration into a single thesis is our framing rather than a published academic finding. Other framings of the same data are possible.

The capital deepening slowdown is empirically established; the cause is contested. The AI cognitive load research is recent (2025 to 2026) and may be subject to revision as longer-term studies emerge. The discretionary effort collapse is well-documented but the causal relationship to AI is correlational, not proven. We treat the integration as the strongest available framing of the available evidence rather than as a proven causal model.

Move 03 of 04

For the first time in three generations, productivity depends on the human.

Logical derivation

This is the inferential move in the thesis. It is logically derived from Move 2 rather than directly cited from research.

If capital can no longer multiply hours (Move 2.2a), AI is consuming rather than freeing cognitive bandwidth (Move 2.2b), and discretionary effort is no longer quietly available (Move 2.2c), then the only remaining lever for productivity gains is human capacity itself.

This is a logical claim, not an empirical one. A skeptical reader can challenge the framing without needing to dispute the underlying data.

Defense

The claim is defensible because the alternatives have been structurally weakened. Capital deepening has slowed in nearly every OECD country. AI investment has not produced the expected productivity gains; Gallup's Q1 2026 data shows 89 percent of executives report no productivity impact from AI in the past three years. If the traditional productivity multipliers are not working, human capacity is the remaining variable.

The phrase "for the first time in three generations" anchors to the 75-year span over which the post-WWII productivity model has held, with the highest gains concentrated in the early decades. The conversational shorthand is "three generations"; the underlying empirical span is 1948 to roughly 2023.

Limitations

This is the move most vulnerable to the "AI will eventually deliver productivity gains" objection. The current data shows AI has not delivered, but five years of underdelivery is not the same as permanent failure. The thesis describes the present moment, not the long-term trajectory.

Move 04 of 04

The human is depleted.

The human is depleted. The median company in our cohort loses $20 million a year to it.

Primary research

The depletion claim is supported by the engagement collapse data (Gallup 2026), the burnout data (Eagle Hill 2025: 71 percent middle manager burnout), the false-retention data (MetLife 2026: 56 percent staying out of necessity), and the disengagement-exceeds-burnout finding (ActivTrak 2026). The cohort data is Clover ERA's unique contribution, documented in detail in section 04 below.

Mechanism

The combination of higher cognitive demand (Move 2.2b) and reduced discretionary effort (Move 2.2c) produces depletion that is observable at the company level through perception gap measurement. Managers self-report higher capability than their teams report experiencing; that gap is the proxy measurement for depletion the dashboards don't catch.

Limitations

The cohort sample size is small (n=11 as of Q1 2026, growing toward Q2 publication). Single-company variance is high; the median $20M is more reliable than any individual company estimate. The cohort represents companies in the 300 to 1,200 employee range across four sectors; findings may not generalize to smaller or larger companies, or to sectors not represented in the cohort.


03 — The Eight External Validators

Eight independent research firms have named pieces of the same phenomenon.

Validator 01

MetLife 2026 Employee Benefit Trends Study

Published Feb 18, 2026
Methodology
Two quantitative studies conducted October 2025, surveying 2,480 HR decision-makers and 2,541 full-time employees across U.S. organisations.
Key findings
56 percent of employees stay out of necessity, not commitment. Only 18 percent stay because they truly want to. Financial confidence at lowest level since 2012.
Anchor quote
As employees cling to their jobs for security, retention alone can give employers a false sense of stability, even as wellbeing, engagement, and productivity quietly erode.
Todd Katz, Head of U.S. Group Benefits, MetLife
Why it matters
The strongest single external validator. A reputable insurance company with 5,000+ respondents has publicly stated that retention metrics are masking the very phenomenon the Manager Gap Index detects.
Validator 02

Gallup State of the Global Workplace 2026

Published Apr 2026
Methodology
Annual survey of employees across 140+ countries, drawing on millions of survey responses.
Key findings
Global engagement at 20 percent, lowest since 2020. Manager engagement dropped 9 points since 2022, including a 5-point drop between 2024 and 2025. Estimated $10 trillion in lost productivity globally in 2025, equivalent to 9 percent of GDP. 89 percent of executives report no productivity impact from AI in past three years.
Anchor quote
Businesses are investing heavily in AI, but the results are not showing up in the bottom line. Gallup's data points to an answer the corporate world has largely ignored: the manager.
Jon Clifton, CEO, Gallup
Why it matters
The macroeconomic anchor. Gallup's $10T number is the most-cited statistic in workforce productivity discourse. The manager-as-variable framing aligns directly with the Manager Gap Index thesis.
Validator 03

ActivTrak 2026 State of the Workplace

Published Mar 2026
Methodology
Behavioral data from 163,638 employees across 1,111 companies, drawing on actual computer usage patterns rather than self-reported surveys.
Key findings
Disengagement risk exceeds burnout risk for the first time. Focus efficiency at 60 percent. 35 percent of workers show pattern-based indicators of withdrawn effort.
Why it matters
The behavioral evidence layer. Where MetLife provides survey data and Gallup provides macro estimates, ActivTrak provides actual behavioral patterns from worker activity. The disengagement-exceeds-burnout finding directly mirrors the Silent Degradation thesis: people are still showing up but withdrawing.
Validator 04

McKinsey Quarterly

Multiple publications, 2023–2025
Methodology
McKinsey Quarterly has published multiple analyses estimating the cost of disengagement and attrition for the median S&P 500 company.
Key findings
$228 million to $355 million annually for the median S&P 500 company. Cited in ActivTrak 2026 and other downstream sources as the upper-bound estimate for company-level cost of workforce disengagement.
Source
McKinsey Quarterly archives. Specific articles cited in the Manager Gap Index methodology (section 04 below).
Why it matters
The authoritative consultancy estimate. McKinsey's range provides the comparison anchor for the Clover ERA cohort data ($20M median for mid-market companies).
Validator 05

Talent LMS 2025 Workplace Skills Report

Published 2025
Methodology
Survey of more than 1,000 U.S. employees.
Key findings
More than 50 percent of employees show signs of "quiet cracking", a term Talent LMS coined for sustained low engagement without active disengagement signals.
Source
Talent LMS research archives.
Why it matters
One of the earliest published terms for the phenomenon, predating MetLife's "false retention" framing. Documents the prevalence at majority-of-workforce levels.
Validator 06

Korn Ferry / Aflac / ResumeBuilder

Job-hugging research, 2025
Methodology
Multiple surveys conducted independently by each organisation, with consistent findings across the three.
Key findings
The "job hugging" pattern (employees staying in roles out of fear rather than commitment) increased from 45 percent in early 2025 to 57 percent in late 2025. The pattern is more pronounced in mid-career workers.
Anchor quote
Being a job hugger means you're feeling anxious, insecure, more likely to stay but also more likely to want to leave. You often see a self-protective response: nothing to see here, I'm doing a good job, I'm not leaving.
Erin Eatough, PhD, Chief Science Officer, Fractional Insights
Source
Korn Ferry research archives. Aflac WorkForces Report, 15th annual edition. ResumeBuilder workforce survey series.
Why it matters
Names the specific behavioral pattern that managers misread. The self-protective response is exactly what the Manager Gap Index detects as perception gap.
Validator 07

Eagle Hill 2025 Workforce Burnout Survey

Published 2025
Methodology
National survey of U.S. workers across job levels and sectors.
Key findings
71 percent of middle managers report burnout, the highest of any job level. 51 percent of total U.S. workforce reports burnout. Generation Z reports the highest rates ever recorded.
Source
Eagle Hill Consulting research archives.
Why it matters
The middle-manager finding is critical. The 71 percent burnout rate among middle managers means the people responsible for detecting and acting on team-level signals are themselves the most depleted. This explains the action gap finding in the cohort data.
Validator 08

Perceptyx longitudinal motivation data

Multi-year window, ongoing
Methodology
20 million employee survey responses across multi-year window. Perceptyx's research division publishes ongoing analyses.
Key findings
Intrinsic motivation has shown sustained decline across knowledge worker populations since 2022. The decline is sharper in roles with high AI exposure than in roles without.
Source
Perceptyx research archives.
Why it matters
The longest-running longitudinal evidence base. Where other validators offer point-in-time snapshots, Perceptyx demonstrates the trend trajectory.

04 — The Cohort Methodology

How the Manager Gap Index measures what dashboards can't see.

  1. 4.1 — What the Index measures
  2. 4.2 — The cohort composition
  3. 4.3 — The cost calculation across six layers
  4. 4.4 — What the methodology does not do
  5. 4.5 — Data handling and cohort confidentiality
  6. 4.6 — The action gap measurement

4.1

What the Index measures.

The Manager Gap Index measures the perception gap between what managers self-report about team capability and what their teams report anonymously about the same dimensions. The gap is scored across six CLOVER dimensions (Communication, Learning, Opportunity, Vulnerability, Enablement, Reflection) and produces a single MGI score and a six-layer cost estimate.

The Index is deliberately not an engagement survey. Engagement scores capture worker self-report. The MGI captures the gap between manager and team self-report on the same prompts. That gap is the proxy for depletion the dashboards don't catch.
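The gap computation described above can be sketched in a few lines. The six dimension names come from the text; the 1-to-5 scale, the simple averaging, and the function names are illustrative assumptions, not Clover ERA's published MGI formula:

```python
from statistics import mean

# Hypothetical perception-gap sketch. The CLOVER dimensions are from the
# text; the 1-5 scale and the averaging are assumptions for illustration.
DIMENSIONS = ["Communication", "Learning", "Opportunity",
              "Vulnerability", "Enablement", "Reflection"]

def perception_gap(manager_scores, team_scores):
    """Per-dimension gap: manager self-report minus the team's mean report."""
    return {d: manager_scores[d] - mean(team_scores[d]) for d in DIMENSIONS}

def mgi(gaps):
    """Single score: mean absolute gap, scaled to 0-100 on a 1-5 scale."""
    return mean(abs(g) for g in gaps.values()) / 4 * 100

manager = {d: 4.5 for d in DIMENSIONS}          # manager rates the team 4.5 everywhere
team = {d: [3.0, 2.5, 3.5] for d in DIMENSIONS}  # three anonymous team responses
gaps = perception_gap(manager, team)
print(round(mgi(gaps), 1))
```

The point of the sketch is the structure, not the scale: the score is a function of the *difference* between two self-reports on identical prompts, which is why it can move even when engagement scores hold steady.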

4.2

The cohort composition.

The Q1 2026 cohort is eleven companies in the 300 to 1,200 employee range, across four sectors: B2B SaaS, professional services, consumer products, and industrial. Companies provided their data voluntarily, received their own MGI report, and approved cohort inclusion before any aggregate publication.

All cohort data is anonymised and aggregated. No company names appear in any Clover ERA publication. Sample sizes below n=3 in any segment cut are not reported.

4.3

The cost calculation across six layers.

The cost estimate aggregates six research-backed layers: Regrettable Attrition, Disengagement Tax, Manager Drag, Promotion Risk, Innovation Suppression, and Customer Impact. Each layer applies a research-backed multiplier to a company-specific input (headcount, average compensation, manager span, etc.) and produces a dollar contribution to the total.

Full multipliers and source citations are documented in the Q2 2026 Silent Degradation Index. Each multiplier has its own confidence interval; the aggregate cost is directionally correct rather than precise.
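Mechanically, the aggregation reduces to a sum of multiplier-times-base terms. In this sketch the six layer names come from the text, but every multiplier and dollar base is a placeholder chosen only so the example lands near the cohort's $20M median; the research-backed multipliers are documented in the Q2 2026 Index, not here:

```python
# Six-layer cost aggregation sketch. Layer names are from the text; all
# multipliers and bases below are placeholders, not published figures.
layer_inputs = {
    "Regrettable Attrition":  (0.12, 60_000_000),  # (multiplier, company base $)
    "Disengagement Tax":      (0.08, 60_000_000),
    "Manager Drag":           (0.05, 60_000_000),
    "Promotion Risk":         (0.02, 60_000_000),
    "Innovation Suppression": (0.03, 60_000_000),
    "Customer Impact":        (0.04, 60_000_000),
}

def total_cost(inputs):
    """Sum each layer's multiplier times its company-specific base."""
    return sum(m * base for m, base in inputs.values())

print(f"${total_cost(layer_inputs):,.0f}")
```

Because each multiplier carries its own confidence interval, uncertainty compounds across the sum; that is the structural reason the total is presented as directionally correct rather than precise.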

4.4

What the methodology does not do.

Most research methodology sections bury limitations. This one foregrounds them.

Not a clinical diagnostic

The Manager Gap Index is a structured assessment that produces a score and a cost estimate based on documented research multipliers. It does not predict individual outcomes for specific people on specific teams.

Not accounting precision

The cost calculation is an estimate. The six cost layers use research-backed multipliers, but each multiplier has its own confidence interval. Companies should treat the cost number as directionally correct rather than as accounting precision.

Cohort range

The cohort represents companies in the 300 to 1,200 employee range across four sectors; findings may not generalize beyond this range. Sample sizes below n=3 in any segment cut are not reported.

Not causation

A high MGI score correlates with the patterns described in the cohort findings, but the causal pathway from manager perception gap to financial outcome is correlational rather than proven.

4.5

Data handling and cohort confidentiality.

All survey responses are anonymised at intake. Manager and team data are linked at the team level only, never at the individual level. Cohort aggregation requires a minimum n=3 per segment cut. No company names, individual names, or identifying details appear in any cohort-level publication.
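The minimum-cell-size rule is mechanically simple; a sketch, with names and data shapes as illustrative assumptions rather than Clover ERA's actual pipeline:

```python
# Sketch of the n >= 3 suppression rule described above: any segment cut
# with fewer than three responses is withheld from aggregate reporting.
MIN_SEGMENT_N = 3

def reportable(segment_cuts):
    """segment_cuts: {segment_name: list of anonymised responses}."""
    return {name: responses for name, responses in segment_cuts.items()
            if len(responses) >= MIN_SEGMENT_N}

cuts = {"B2B SaaS": [1, 2, 3, 4], "Industrial": [1, 2]}
print(sorted(reportable(cuts)))  # only segments meeting n >= 3 survive
```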

4.6

The action gap measurement.

In addition to the perception gap, the Index measures the action gap: the proportion of managers who can see a falling team-level signal and do not intervene within two weeks. The action gap is the operational consequence of the perception gap. Both metrics are reported separately in every Index edition.
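The action-gap definition can be made concrete with a small sketch. The two-week window and the see-but-not-intervene definition come from the text; the event representation and function name are illustrative assumptions:

```python
from datetime import date, timedelta

# Action-gap sketch: the share of visible falling signals with no manager
# intervention inside two weeks. Data shapes are illustrative assumptions.
WINDOW = timedelta(days=14)

def action_gap(events):
    """events: list of (signal_seen, intervened_on or None) date pairs."""
    missed = sum(1 for seen, acted in events
                 if acted is None or acted - seen > WINDOW)
    return missed / len(events)

events = [
    (date(2026, 1, 5), date(2026, 1, 12)),  # intervened within a week
    (date(2026, 1, 5), date(2026, 2, 20)),  # intervened, but far too late
    (date(2026, 1, 5), None),               # never intervened
]
print(action_gap(events))  # 2 of 3 visible signals missed
```

Separating this metric from the perception gap matters: a manager can score well on perception (they see the signal) and still contribute to the action gap (they do not act on it).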


05 — Objection Defenses

The strongest challenges to the thesis, and how the data responds.

Objection 01

AI will eventually deliver productivity gains. The current cognitive load research describes a transitional moment, not a structural shift.

What the data supports

This is the strongest objection to Move 2 of the thesis. The current data (Microsoft 2025, UC Berkeley 2025, HBR 2026, Deloitte 2025) describes 2025 to 2026. We do not have data for 2028 or 2030. AI adoption through 2026 has been correlated with increased cognitive load rather than decreased load. Three structural mechanisms (recovery break elimination, decision multiplication, rising expectations) explain why this is the case.

What it does not

The data does not support a strong claim that AI will permanently increase cognitive demand. New AI integration patterns, better tool design, and human adaptation could change the picture in coming years.

Honest position

The thesis describes the present moment. Companies making capital allocation decisions today should plan for the present-moment data rather than for hypothetical future improvements. If AI does eventually reduce cognitive load, that is good news for the future. The cost of Silent Degradation is happening now.

Objection 02

Engagement scores have been declining for years. Why is Silent Degradation different?

What the data supports

Engagement scores measure something different from what the Manager Gap Index measures. Engagement scores capture worker self-report; the MGI captures the gap between manager self-assessment and team self-report.

The distinction matters because survey self-selection bias is significant. The most depleted workers are the least likely to complete surveys honestly. Engagement scores can look healthy at companies where the perception gap is severe. The Q1 2026 cohort included companies with above-average engagement scores and 70 percent perception gaps.

Honest position

Silent Degradation is a broader phenomenon than engagement decline. Engagement is one component; the others are the action gap (visibility without intervention), the cost layer split (turnover is less than half of total cost), and the manager capability collapse (the manager engagement premium has closed).

Objection 03

The cohort sample is too small to support strong claims.

Honest position

This is a fair objection. n=11 is small. The Index acknowledges this directly. The cohort findings are not presented as definitive sector-wide truths. They are presented as the best available evidence of what perception gap measurement reveals at the company level for mid-market companies in the 300 to 1,200 employee range.

The Q1 cohort is a starting point. The Q2 publication will include all companies assessed during Q2 2026, with a target of 30+ companies. The Q3 cohort target is larger still. The Index series is designed to grow toward statistical reliability; the Q1 publication is the first data point in a longitudinal research series.

For comparison: many widely cited workforce studies (including some of the validators in section 03) report findings from samples of 300 to 500 employees. The MGI cohort is smaller in company count but larger in depth-of-measurement per company.

Objection 04

The four-move thesis is a marketing construct, not academic research.

Honest position

Correct. The four-move thesis is a framing, not a peer-reviewed finding. Each individual move within the thesis is supported by primary research. The integration of the four moves into a single argument is Clover ERA's construct.

This is a feature rather than a bug. Academic research produces findings; integrators produce frameworks. The thesis is positioned as the strongest available framing of the available evidence rather than as a proven causal model. A CFO or analyst can dispute the framing without disputing the underlying data.

Objection 05

You're a vendor selling a product. Of course your research supports your thesis.

Response 01 of 03

The research published by Clover ERA is methodologically transparent. The cohort sample size, the cost calculation multipliers, and the data handling protocols are documented openly. A skeptical reader can examine the methodology and disagree with the conclusions.

Response 02 of 03

The eight external validators in section 03 are not Clover ERA's research. MetLife, Gallup, ActivTrak, Eagle Hill, and the others are independent organisations with no commercial relationship to Clover ERA. Their findings predate and validate the Silent Degradation thesis.

Response 03 of 03

The cohort data is anonymised and aggregated. No company names appear in any Clover ERA publication. The cohort companies provided their data voluntarily, received their own MGI report, and approved inclusion before any publication. Vendors with stronger commercial motivations would typically demand named testimonials and case studies; the absence of those in Clover ERA's research is itself a credibility marker.

Honest position

This is vendor-published research, and readers should weight it accordingly. The methodology transparency, the external validator network, and the absence of cherry-picked testimonials are the available counterweights.


06 — Acknowledged Limitations

What we don't know, and what we don't claim.

The cohort sample is small (n=11 as of Q1 2026). Single-company variance is high. The median is more reliable than any individual estimate.

The cost calculation is an estimate using research-backed multipliers, not accounting precision. Each multiplier has its own confidence interval.

The cohort represents mid-market companies in the 300 to 1,200 employee range across four sectors. Findings may not generalize beyond this range.

The four-move thesis is a framing, not a peer-reviewed finding. The integration of the four moves into a single argument is Clover ERA's construct, defensible but not proven.

The AI cognitive load research is recent (2025 to 2026). Long-term trajectory is uncertain.

Causation is not established. A high MGI score correlates with the patterns described, but causal pathways are correlational rather than proven.

The "75 years" framing in the thesis is conversational shorthand. The post-WWII productivity model held for approximately 60 to 75 years depending on dating; we anchor to the upper bound (1948 to roughly 2023) because that is the span over which the capital-and-technology model demonstrably operated.

External validators (MetLife, Gallup, ActivTrak, and others) are independent of Clover ERA. The Silent Degradation framing is our integration, not a published academic finding.


07 — Citations and Bibliography

Every source cited on this page, in full.

Productivity decomposition and the post-2005 slowdown

  • OECD Productivity Statistics database; OECD Compendium of Productivity Indicators (annual).
  • U.S. Bureau of Labor Statistics. Multifactor Productivity series and productivity slowdown analysis. bls.gov/opub/mlr/2018
  • Penn World Tables.
  • Federal Reserve Bank of San Francisco (2017 and subsequent updates). The Productivity Slowdown conference.

AI and cognitive load

  • Microsoft Research Lee, M., et al. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers. January 2025. microsoft.com/research
  • Harvard Business Review (2026, February). 200-person tech firm AI adoption study.
  • UC Berkeley Labor Center (2025). AI Adoption Longitudinal Study.
  • Deloitte (2025). Workforce Intelligence Report.
  • Content Marketing Institute (2025). AI Content Production Benchmarks.

Engagement and the workforce phenomenon

  • MetLife (2026, February). Employee Benefit Trends Study 2026. metlife.com/newsroom
  • Gallup (2026). State of the Global Workplace Report 2026. gallup.com
  • ActivTrak (2026, March). State of the Workplace 2026.
  • Talent LMS (2025). Workplace Skills Report 2025.
  • Eagle Hill Consulting (2025). Workforce Burnout Survey.
  • Korn Ferry, Aflac, ResumeBuilder (2025). Job Hugging Research Series.
  • Perceptyx (multiple years). Longitudinal Employee Survey Data.
  • McKinsey Quarterly (2023–2025). Multiple analyses on disengagement and attrition costs at the median S&P 500 company.

Cost calculation methodology

  • Multiple sources cited in the Cost Calculation subsection (4.3 above). Full methodology with multipliers documented in the Q2 2026 Silent Degradation Index, available at silentdegradation.com/q2-2026.

08 — Updates and Version History

When this page was last updated, and what changed.


Next

If the methodology holds up, the next step is your own number.

The page above defends the four-move thesis with primary sources. The Manager Gap Index is the live diagnostic that produces a single score and a six-layer cost estimate for your own organisation, scored against the cohort. It takes ten minutes. Take the Manager Gap Index →

Or, if you already know your exposure: schedule a 15-minute Cohort Conversation → with one of the founders. The full thesis surface for sharing internally is The Productivity Inversion →.