
Bay Area and Silicon Valley: Early Evidence of AI Impact on Labor

GIS Evidence, Displacement Mechanics, and the Resilience Response


The geography with the highest density of AI investment in the world is also producing measurable net employment disruption. Understanding precisely why — and what companies can do about it — is the analytical problem this platform was built to solve.



A natural experiment in economics is rare and valuable: a setting where one variable is held at its theoretical maximum while outcomes are observed cleanly. The San Francisco Bay Area is currently running one of those experiments with respect to artificial intelligence and employment. It concentrates more AI-native investment, frontier research density, and employer presence than any comparable geography. If AI generates net positive labor demand at scale, the signal should be detectable here before anywhere else — and more clearly than anywhere else.


The signal is more nuanced than headlines suggest. Between 2023 and 2025, the Bay Area shed an estimated 137,200 net technology jobs, according to Beacon Economics and California EDD data, even as AI-native employers expanded. That expansion — roughly 34,000 workers across OpenAI, Anthropic, Scale AI, and their peers — is real and significant. But it is not yet large enough to offset the displacement occurring in the legacy technology sector. The resulting gap, approximately 9:1 on a displacement-to-creation basis by current estimates, is not a verdict on AI's long-run employment potential. It is a measurement of where we are in the transition cycle — and a precise measurement is what good strategy requires.


This post documents the analytical framework I built to study that measurement: the data architecture, the simulation models, the geographic intelligence layer, and most practically, what the modeling reveals about corporate resilience strategy for firms navigating this environment.



Why the Geographic Unit of Analysis Matters


National-level AI labor statistics are useful as context and nearly useless for strategy. They aggregate across geographies, sectors, and firm types in ways that obscure the mechanisms driving outcomes at the level where decisions are actually made. A technology company in Santa Clara County is operating in a fundamentally different labor market environment than one in Austin or Boston — even if the national trend lines look similar.

The methodological contribution of geographic disaggregation here draws from Autor, Dorn, and Hanson's work on import competition, which demonstrated that trade-driven displacement concentrates geographically in ways that national averages mask entirely. The same principle applies to AI-driven displacement. The Bay Area case makes this visible because the displacement geography and the creation geography do not overlap — they are spatially distinct economies operating within the same nine-county Metropolitan Statistical Area.


Sub-Regional Net Tech Employment Change · 2022 Peak to 2025 · CA EDD

South Bay — Santa Clara County: -91,200 jobs · -21.1%

SF–San Mateo Corridor: -36,300 jobs · -12.0%

East Bay — Alameda / Contra Costa: -8,300 jobs · -4.9%

North Bay — Marin, Sonoma, Napa, Solano: -1,400 jobs · -2.4%

AI-Native Employment Cluster (SF/Palo Alto): ~33,370 workers est. (2025)

Displacement-to-AI-Creation Ratio (est.): ~9:1 (conservative model estimate)


The South Bay accounts for 66% of total Bay Area tech job losses despite hosting the highest concentration of incumbent tech employers. The SF-San Mateo corridor simultaneously shows the highest AI-native employer density and the second-largest displacement figure. These are not contradictory — they reflect an industry in transition, where the same geographic footprint hosts both the peak activity of the outgoing cycle and the early-stage activity of the incoming one. The analytical challenge is distinguishing the signal in each.


The spatial concentration of AI-native employment is particularly striking. Geocoding employer headquarters data places the overwhelming majority of AI-native workers in a roughly 12-square-mile corridor spanning SoMa, Mission Bay, and the Caltrain corridor in San Francisco, with a secondary node in Palo Alto and Menlo Park. Legacy tech displacement, by contrast, radiates across the full nine-county region — Mountain View, San Jose, Santa Clara, Fremont, Milpitas. The mismatch between where jobs are being lost and where new ones are being created is not random. It reflects differences in the types of firms, the capital structures that support them, and the occupational profiles they hire for.


The creation geography and the destruction geography are not the same place. That spatial mismatch is not a temporary artifact of the cycle — it reflects structural differences in firm type, capital model, and occupational profile that persist across the transition.


The Analytical Platform: GIS, Simulation, and Labor Economics Integrated


The Bay Area AI Displacement Intelligence System (BADIS v1.0) is an application I built to integrate four analytical capabilities that are typically maintained in separate tools: geospatial intelligence, macroeconomic simulation, Monte Carlo uncertainty quantification, and corporate resilience modeling. The architecture draws on decision intelligence frameworks, with data inputs from California EDD reports, CA WARN Act filings, and US Census TIGER geographic data.


The GIS Layer


The geographic component maps four concurrent spatial signals. County-level displacement bubbles are scaled to net job loss and colored by a displacement intensity index — a composite of percentage decline, WARN Act filing density, and AI-native employment gap — using Leaflet with multiple switchable basemap providers including satellite and topo overlays. WARN Act filings are geocoded at employer headquarters addresses, enabling the spatial distribution of the destruction signal to be visualized at the establishment level rather than the county level: Meta in Menlo Park, Google in Mountain View, Cisco and Intel in San Jose and Santa Clara, Tesla in Fremont.
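The composite index described above can be sketched in a few lines. The weights, the min-max normalization, and all the example inputs below (WARN filing densities, AI-native gap shares) are illustrative assumptions for the sketch, not the platform's calibrated formula.

```python
# Illustrative composite "displacement intensity index": a weighted blend of
# percentage decline, WARN Act filing density, and AI-native employment gap,
# each min-max normalized across sub-regions. Weights are assumptions.

def minmax(values):
    """Scale a list of regional values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def displacement_intensity(pct_decline, warn_density, ai_native_gap,
                           weights=(0.5, 0.3, 0.2)):
    """Composite index per region; higher means more intense displacement."""
    layers = [minmax(pct_decline), minmax(warn_density), minmax(ai_native_gap)]
    return [round(sum(w * layer[i] for w, layer in zip(weights, layers)), 3)
            for i in range(len(pct_decline))]

# Sub-regions ordered South Bay, SF-San Mateo, East Bay, North Bay.
# Only pct_decline comes from the table above; the other columns are made up.
scores = displacement_intensity(
    pct_decline=[21.1, 12.0, 4.9, 2.4],
    warn_density=[8.1, 6.4, 2.2, 0.7],     # hypothetical filings per 10k workers
    ai_native_gap=[0.9, 0.3, 0.8, 0.95])   # hypothetical gap shares
```

With these inputs the South Bay scores highest, consistent with it leading on the two heaviest-weighted components.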

AI-native employers are mapped separately as a creation signal layer, revealing the tight geographic clustering described above. Additional layers include coalition risk zone overlays derived from the TFP simulation, a Bay Area corporate resilience landscape mapping eight industry cluster zones by resilience score versus disruption risk, and infrastructure overlays including the research university anchors — Stanford, UC Berkeley, UCSF — that feed the talent pipeline for both the outgoing and incoming employment cycles.


The Simulation Engine


The TFP simulation is calibrated to the Acemoglu-Restrepo task displacement framework, modeling effective labor as L_eff = (1 − automation_t) × L_t, where automation rate follows a linear trajectory from the current 15% baseline toward a plausible 60% target over the decade, and wages evolve with a rigidity parameter calibrated to Blanchard-Galí (2007) estimates. Labor share emerges endogenously from the product of wages and effective employment divided by output.
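The dynamics above can be sketched in a short loop. Only the functional form L_eff = (1 − a_t) × L_t, the linear 15%→60% automation ramp, and the endogenous labor share follow the text; the production function exponent, the rigidity value, and the TFP growth rate below are illustrative placeholders, not the platform's calibration.

```python
def simulate_displacement(years=10, auto_start=0.15, auto_end=0.60,
                          labor=1.0, wage=1.0, rigidity=0.7,
                          tfp_growth=0.012, alpha=0.7):
    """Yearly path of (automation, effective_labor, wage, labor_share)."""
    path, tfp = [], 1.0
    for t in range(years):
        # linear automation ramp from the 15% baseline to the 60% target
        a_t = auto_start + (auto_end - auto_start) * t / (years - 1)
        l_eff = (1.0 - a_t) * labor               # L_eff = (1 - a_t) * L_t
        tfp *= 1.0 + tfp_growth
        output = tfp * l_eff ** alpha             # illustrative production function
        # partial wage adjustment toward the marginal-product benchmark
        wage = rigidity * wage + (1.0 - rigidity) * alpha * output / l_eff
        labor_share = wage * l_eff / output       # emerges endogenously
        path.append((a_t, l_eff, wage, labor_share))
    return path
```

The point of the sketch is the mechanism, not the numbers: effective labor falls mechanically with the automation ramp, and the labor share is a derived quantity rather than a parameter.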



The platform runs five configurable scenarios — Baseline, Slow Automation, Rapid AI Acceleration, Policy Intervention at 6.5% of GDP, and a Bay Area-specific calibration applying a 1.15× displacement multiplier — with N=2,000 Monte Carlo draws per scenario using Latin Hypercube Sampling across 42 parameters. Sobol sensitivity analysis decomposes the variance across those parameters.
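Latin Hypercube Sampling is straightforward to sketch without a library: one stratified draw per equal-width interval per parameter, shuffled independently across dimensions so strata don't align. Rescaling from the unit hypercube to real parameter bounds, and the model run on each draw, are omitted here.

```python
import random

def latin_hypercube(n_draws, n_params, seed=7):
    """n_draws points in the unit hypercube, one per stratum per dimension."""
    rng = random.Random(seed)
    sample = [[0.0] * n_params for _ in range(n_draws)]
    for j in range(n_params):
        # one uniform draw inside each of the n_draws equal-width strata
        column = [(i + rng.random()) / n_draws for i in range(n_draws)]
        rng.shuffle(column)          # break correlation across dimensions
        for i in range(n_draws):
            sample[i][j] = column[i]
    return sample

# e.g. 2,000 draws across 42 parameters, matching the configuration above
draws = latin_hypercube(2000, 42)
```

Compared with plain Monte Carlo, every marginal is guaranteed exactly one sample per stratum, which is why LHS reaches stable variance estimates at smaller N.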


Key Technical Finding — Sobol Variance Decomposition

Automation adoption rate explains approximately 65% of total outcome variance in labor market projections across the 2,000 Monte Carlo simulation runs per scenario — more than wage rigidity (18%), inequality dynamics (15%), and TFP growth rate (8%) combined. This is not a modeling artifact: it reflects the mechanistic dominance of task displacement over all secondary adjustment channels in the near-to-medium term. The practical implication is that the pace of AI deployment, not macroeconomic policy settings, is the primary variable firms and policymakers should be tracking.
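The shape of this finding can be sanity-checked with a crude binned estimator of a first-order sensitivity index: the variance of bin-conditional outcome means divided by total outcome variance. The platform uses a proper Sobol decomposition; this stand-in, with entirely synthetic toy data, only illustrates what "one input dominating the variance" looks like.

```python
import random
import statistics

def first_order_share(xs, ys, n_bins=20):
    """Rough first-order Sobol analogue: variance of bin-conditional means
    of y over bins of one input, divided by total variance of y."""
    lo, hi = min(xs), max(xs)
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        idx = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
        bins[idx].append(y)
    cond_means = [statistics.fmean(b) for b in bins if b]
    return statistics.pvariance(cond_means) / statistics.pvariance(ys)

# Toy check: an outcome dominated by one input (here a made-up "adoption"
# coefficient ten times the "rigidity" coefficient) behaves like the finding.
rng = random.Random(0)
adoption = [rng.random() for _ in range(4000)]
rigidity = [rng.random() for _ in range(4000)]
outcome = [3.0 * a + 0.3 * r for a, r in zip(adoption, rigidity)]
```

On this toy data the adoption input absorbs nearly all the variance share and the rigidity input almost none, mirroring the decomposition reported above.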


The Acemoglu-Restrepo Mechanism: So-So Automation and Why It Matters for Strategy


The task displacement framework provides the clearest theoretical lens for interpreting the Bay Area data. In the Acemoglu-Restrepo model, automation does not universally reduce labor demand — it depends on whether the productivity gains from automation are large enough to trigger what they call the "productivity bandwagon": the expansion of economic activity that absorbs displaced workers into new roles. When that bandwagon is strong, automation and employment can be complements. When it is weak — when automation produces modest productivity gains primarily by shifting work from labor to capital — displacement dominates and net employment falls.

The Bay Area evidence is consistent with the "so-so automation" case. The productivity gains from current-generation generative AI are real: code is produced faster, content moderation scales without linear headcount growth, customer queries are handled at lower marginal cost. But those gains are being captured primarily as margin expansion and stock price recovery for incumbent tech firms, not as wage increases for remaining workers or as labor demand expansion. The WARN Act filing patterns confirm this: the companies cutting headcount most aggressively — Meta, Cisco, Google, PayPal — are simultaneously reporting improved operating margins and sustained or increasing AI R&D investment.


This matters for corporate strategy because it clarifies the type of competitive environment firms are operating in. This is not a crisis to be weathered and waited out. It is a structural transition in what inputs are economically valuable, occurring faster than historical technological transitions and with a different distribution of beneficiaries than past technology cycles produced.


Corporate Resilience: The Modeling Framework and What It Finds


The corporate resilience engine is the most directly actionable component of the platform. It models how firms across ten Bay Area sectors can position themselves for growth and durability through the AI transition — not by avoiding displacement, which is neither realistic nor strategically sound, but by sequencing their investments to build compounding advantages while the transition is still early enough for positioning choices to matter.


The engine takes company profile inputs — sector, Bay Area county, headcount, current AI maturity, and revenue — and scores ten strategic levers across three dimensions against a composite resilience index. The index captures five-year trajectory, displacement risk reduction, talent stability, and estimated financial runway. It draws on sector-specific calibrations: the data moat leverage coefficient for a biotech firm is materially different from that of a hardware company, and the augmentation effectiveness multiplier varies by AI maturity baseline.
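The scoring shape described above can be sketched as follows. The sector coefficients, sub-score formulas, and weights are all hypothetical stand-ins chosen to show the structure (sector-dependent data-moat leverage, maturity-dependent augmentation effectiveness), not the engine's actual calibration.

```python
from dataclasses import dataclass

@dataclass
class CompanyProfile:
    sector: str          # e.g. "biotech", "hardware"
    county: str
    headcount: int
    ai_maturity: float   # 0..1 baseline
    revenue_musd: float

# Hypothetical sector calibrations: data-moat leverage and augmentation
# effectiveness differ by sector, as described in the text.
SECTOR_COEF = {
    "biotech":  {"data_moat": 1.4, "augmentation": 1.0},
    "hardware": {"data_moat": 0.8, "augmentation": 1.1},
}

def composite_resilience(profile, levers):
    """Weighted index over four illustrative sub-scores: five-year trajectory,
    displacement-risk reduction, talent stability, and financial runway."""
    coef = SECTOR_COEF.get(profile.sector,
                           {"data_moat": 1.0, "augmentation": 1.0})
    trajectory = (coef["augmentation"] * levers.get("augmentation", 0.0)
                  * (1.0 + profile.ai_maturity))
    risk_cut = coef["data_moat"] * levers.get("data_moat", 0.0)
    talent = 0.5 * (levers.get("reskilling", 0.0) + levers.get("retention", 0.0))
    runway = min(1.0, profile.revenue_musd / max(1.0, 0.3 * profile.headcount))
    weights = (0.35, 0.25, 0.25, 0.15)
    return sum(w * s for w, s in zip(weights,
                                     (trajectory, risk_cut, talent, runway)))
```

Under these toy coefficients, the same data-moat investment scores higher for a biotech profile than a hardware one, which is the kind of sector asymmetry the calibration exists to capture.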


The Ten Levers and Their Mechanics

AI Role Augmentation
Redesigns jobs around AI as a force multiplier; increases output per worker without headcount reduction.
Time-to-Value: < 6 months · Cost Efficiency: Highest

Strategic AI Partnership
Accesses frontier capabilities without internal build cost; signals market position.
Time-to-Value: 6–12 months · Cost Efficiency: High

Open-Source AI Strategy
Reduces tooling cost; builds internal AI fluency at low marginal expense.
Time-to-Value: 6–9 months · Cost Efficiency: High

Reskilling Investment
Rebuilds workforce capability for AI-adjacent roles; reduces attrition of critical talent.
Time-to-Value: 12–18 months · Cost Efficiency: Medium

Talent Retention Budget
Preserves institutional knowledge and AI-capable employees during market disruption.
Time-to-Value: Immediate · Cost Efficiency: Medium

Platform / Ecosystem Pivot
Repositions the firm as orchestrator or enabler rather than point solution.
Time-to-Value: 18–24 months · Cost Efficiency: Medium

R&D Reinvestment
Generates differentiated capability that competitors cannot license or copy.
Time-to-Value: 24–36 months · Cost Efficiency: Medium

Proprietary Data Moat
Builds structural advantage through domain-specific training corpora and operational telemetry.
Time-to-Value: 3+ years · Cost Efficiency: Medium (compounding)

Geographic Diversification
Reduces county-level displacement exposure; conditional on proximity dependency.
Time-to-Value: 24+ months · Cost Efficiency: Context-dependent

AI Acquisition
Accelerates capability access; high execution risk if integration is not resourced.
Time-to-Value: 12–24 months · Cost Efficiency: Low (cost-heavy)

The Sequencing Finding


The most consistent and counterintuitive finding across all sector calibrations is that resilience is a sequencing problem, not a budget problem. The firms that score highest on five-year resilience trajectory are not the ones with the largest AI transformation budgets — they are the ones that activate the highest time-value levers first, build internal AI fluency before they need it externally, and avoid large-scale acquisition bets before integration capacity is in place.


AI augmentation of existing roles delivers the highest near-term resilience gain per dollar across every sector profile we modeled — at a fraction of the cost and with a sub-six-month payoff horizon. The companies treating augmentation as a stopgap before "real" transformation are sequencing backwards.
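One way to make the sequencing claim concrete is a simple priority rule: rank levers by resilience gain per dollar, discounted per month until payoff, so fast and cheap levers lead. Every gain, cost, and horizon below is an illustrative placeholder keyed loosely to the table above, not a model output, and the discount rate is arbitrary.

```python
# (name, resilience_gain, cost_musd, months_to_value) -- illustrative values
LEVERS = [
    ("AI role augmentation",     8.0,  20,  6),
    ("Strategic AI partnership", 6.0,  40, 12),
    ("Open-source AI strategy",  4.0,  10,  9),
    ("Reskilling investment",    5.0,  60, 18),
    ("Proprietary data moat",    9.0, 120, 36),
    ("AI acquisition",           7.0, 300, 24),
]

def activation_order(levers, monthly_discount=0.03):
    """Highest discounted gain-per-dollar first: fast, cheap levers lead."""
    def priority(lever):
        _, gain, cost, months = lever
        return (gain / cost) / (1.0 + monthly_discount) ** months
    return [name for name, *_ in sorted(levers, key=priority, reverse=True)]
```

Even with these made-up numbers the rule reproduces the qualitative finding: augmentation activates first, the data moat later (its raw gain is the largest, but its payoff is furthest out), and acquisition last.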


The data moat finding deserves particular attention. Proprietary data assets — domain-specific training corpora, customer behavioral datasets, operational telemetry that cannot be replicated from public sources — show the longest payoff horizon of any lever (typically three years before meaningful resilience contribution) but the highest compounding return in years four and five. Firms that invest in this in 2025 and 2026 will hold structural advantages in 2028 and 2029 that competitors cannot close through spending alone. The window for that investment is not indefinitely open: as foundation model capabilities expand and data synthesis techniques improve, the marginal value of proprietary data degrades. The time to build the moat is before the market prices it.


The Five Strategy Archetypes


The model calibrates five discrete strategic archetypes representing coherent combinations of lever settings. Each represents a different theory of how the AI transition unfolds and what competitive positioning survives it.


Incumbent Defender

Protects existing revenue base with moderate AI adoption. Low augmentation rate, high retention spend. Rational short-term; resilience trajectory flattens in years three through five as competitors compound AI fluency.

Score trend: stable → declining · Cost: low · Best for: regulated sectors with high switching costs


AI-First Transformer

Aggressive AI-first rebuild at high capex. Maximum augmentation, AI partnership, significant acquisition budget. Winner-take-most if integration succeeds; high execution variance.

Score trend: high ceiling · Cost: very high · Best for: high-margin SaaS with strong M&A track record


Hybrid Augmenter

Augments human capabilities with AI rather than replacing them. Balanced investment across reskilling, augmentation, data moat, and open-source strategy. Highest resilience-per-dollar on the frontier.

Score trend: strong and compounding · Cost: moderate · Best for: most enterprise tech and services firms


Managed Retreat

Reduces Bay Area footprint, harvests cash, minimizes AI capex. Rational from a near-term financial standpoint; produces the lowest five-year resilience trajectory of any archetype.

Score trend: declining · Cost: low · Risk: strategic obsolescence by year four


Talent Pivot

Invests heavily in reskilling and retention to reposition workforce ahead of disruption. Geographically diversified. Strong resilience trajectory; execution depends on talent program quality.

Score trend: medium-high · Cost: moderate-high · Best for: professional services and consulting firms


The cost-versus-resilience frontier analysis surfaces an important non-linearity. A Hybrid Augmenter strategy at approximately $400M in total investment outperforms an AI-First Transformer at $1.2B+ on resilience-per-dollar through year three. The Transformer archetype catches up only if post-acquisition integration succeeds, which the failure rate data on large-scale AI acquisitions does not support as a reliable baseline assumption. For most firms, the Hybrid Augmenter path offers superior expected value across the distribution of integration outcomes.


What the Bay Area Tells Us About the Broader Transition


The analytical value of the Bay Area case is not that it predicts doom — it doesn't. The AI-native employment cluster in San Francisco is real, growing, and generating some of the highest-wage positions in the global economy. The long-run employment thesis for AI remains plausible: new tasks will emerge, new industries will form, and the productivity gains from AI adoption will eventually create labor demand in categories that don't yet exist. The historical record of general-purpose technology transitions supports that optimism.


What the Bay Area data tells us is where we are in the transition cycle. We are in the displacement phase — the period where automation reduces labor demand in existing task categories faster than new task categories emerge to absorb it. The industrial electrification transition had a comparable phase lasting roughly two decades. The information technology transition had a shorter one. The AI transition appears to be moving faster than both precedents, which compresses the adjustment horizon for firms and workers.


The practical implication for companies headquartered in or dependent on Bay Area labor markets is that the adjustment horizon is measurable rather than infinite. The firms that invest in resilience capabilities now — AI augmentation, data moats, reskilling, strategic positioning — are building compounding advantages during the period when those investments are still affordable and the competitive differentiation is still available. The firms that wait for the transition to "stabilize" before investing will find that stabilization means a new equilibrium in which their competitors have already locked in the structural advantages.


That is the core analytical claim this platform was built to support: not that AI disruption is catastrophic, but that it is directional, it is measurable, and it rewards early movers who understand the sequencing. Decision intelligence at this scale requires integrating geospatial evidence, macroeconomic simulation, and firm-level strategy modeling into a single operational view. That integration is what BADIS v1.0 provides — and what this research program continues to develop.



References & Data Sources

Acemoglu, D. & Restrepo, P. (2018). Artificial Intelligence, Automation, and Work. NBER Working Paper No. 24196.

Acemoglu, D. & Restrepo, P. (2022). Tasks, Automation, and the Rise in U.S. Wage Inequality. Econometrica, 90(5), 1973–2016.

Acemoglu, D. (2024). The Simple Macroeconomics of AI. NBER Working Paper No. 32122.

Autor, D., Dorn, D. & Hanson, G. (2013). The China Syndrome: Local Labor Market Effects of Import Competition. American Economic Review, 103(6), 2121–2168.

Bagherpour, A. (2025). The TFP-Stability Paradox. ADI Working Paper.

Blanchard, O. & Galí, J. (2007). Real Wage Rigidities and the New Keynesian Model. Journal of Money, Credit and Banking, 39(s1), 35–65.

Beacon Economics / California EDD. (2025–2026). Bay Area technology employment estimates derived from seasonally adjusted EDD reports. Various releases.

CBRE Tech Insights Center. (2025). Scoring Tech Talent: San Francisco Bay Area AI Workforce Analysis.

California Employment Development Department. (2022–2026). WARN Act Filings Database. State of California.

Moretti, E. (2010). Local Multipliers. American Economic Review: Papers & Proceedings, 100(2), 373–377.

Piketty, T. (2014). Capital in the Twenty-First Century. Harvard University Press.
