
Stanford AI Index 2026: China catches up, transparency craters


Stanford HAI's ninth annual AI Index lands with a thud: the US-China performance gap has basically vanished, while the biggest labs have quietly stopped publishing almost anything about how their models are built.


Stanford HAI published its 2026 AI Index today, the ninth edition, and it reads like a snapshot of a field growing faster than anyone can measure it.

US and China: effectively tied

The headline: the performance gap between US and Chinese frontier models has effectively disappeared. The leading models trade places at the top of the benchmarks on a rolling basis, and Stanford notes that Anthropic’s best model currently leads its strongest Chinese counterpart by just 2.7%.

The US still has the edge on capital, infrastructure, and chips. But China now leads on patents, scientific publications, and, most strikingly to me, autonomous robotics. South Korea files more AI patents per capita than any other country. And 44 nations now run their own state-backed supercomputing clusters.

Transparency at an all-time low

The part that gives me pause is the Transparency Index: its average score dropped from 58 last year to 40. More than 90% of notable models now come from private companies, and 80 of 95 new top models shipped without training code. Parameter counts, dataset sizes, training duration: all trade secrets.

And the industry hasn’t just won technically; it has won politically too. The number of AI witnesses in US congressional hearings has tripled since 2017, while independent academic voices have gone the other way.

Adoption, trust, environment

53% of the world’s population now uses generative AI regularly, an adoption curve steeper than the PC’s or the internet’s. At the same time, only 31% of Americans trust their government to regulate AI competently. China sits at 27%, the EU at 53%.

And the environmental line hurts: training xAI’s Grok 4 emitted more than 72,000 tons of CO₂, according to the report, and the water consumed by GPT-4o inference workloads would, on paper, cover the annual water needs of 12 million people.
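For a sense of scale, here’s a quick back-of-envelope conversion of those two figures. The per-capita water use and per-car emissions values are my own illustrative assumptions, not numbers from the report:

```python
# Back-of-envelope scale check on the AI Index's environmental figures.
# Report figures: Grok 4 training emitted >72,000 t CO2; GPT-4o inference
# water would cover 12 million people a year.
GROK4_CO2_TONS = 72_000           # from the report
PEOPLE_COVERED = 12_000_000       # from the report
LITERS_PER_PERSON_PER_DAY = 150   # ASSUMPTION: average direct water use
CAR_CO2_TONS_PER_YEAR = 4.6       # ASSUMPTION: one passenger car, one year

implied_water_liters = PEOPLE_COVERED * LITERS_PER_PERSON_PER_DAY * 365
car_equivalents = GROK4_CO2_TONS / CAR_CO2_TONS_PER_YEAR

print(f"Implied GPT-4o inference water: {implied_water_liters / 1e9:.0f} billion liters/year")
print(f"Grok 4 training CO2: roughly {car_equivalents:,.0f} car-years of driving")
```

Swap in your own assumptions; the point is the order of magnitude, not the decimals.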

My take

The report confirms two things I’ve been circling around here on clauding.de for a while: AI development is going global, and it is getting more opaque at the same time. The big labs keep shipping impressive models while quietly closing the door on how they’re built. That the public distrusts its regulators makes sense in that context. So does the fact that experts (73% optimistic about AI’s effect on jobs) and the public (23%) see the near future very differently.

If you want to understand AI in 2026 — not just post about it — this is the report to read.
