Gautam Mukunda: AI isn't built for the black swan era of bad weather

Gautam Mukunda, Bloomberg Opinion

Published in Op Eds

Using artificial intelligence to forecast the weather is getting so good – and so cheap – that meteorological services are starting to retire the expensive physics-based systems they have relied on. That’s a potentially big problem – and not just for weather forecasting.

Models built by Google DeepMind, the European Centre for Medium-Range Weather Forecasts and others now match or outperform the best physics-based models on medium-range forecasting. During the 2025 Atlantic hurricane season, DeepMind’s model outperformed almost every physical one, including the National Hurricane Center’s official forecasts. These models are faster and require a fraction of the computational infrastructure of those based on physics. For the first time, that puts accurate weather forecasting within reach of developing countries that couldn’t afford the supercomputers, satellite networks and trained meteorological workforces the older approach required.

It’s not foolproof. In a May 2025 paper in the Proceedings of the National Academy of Sciences, Pedram Hassanzadeh’s team at the University of Chicago tested what happens when these models meet weather not captured by their training data. They trained the AI weather model FourCastNet on four decades of data after stripping out Category 3, 4 and 5 tropical cyclones. Then they fed it the data that had preceded real Category 5 storms. Every time, the model predicted a Category 2.

AIs struggle to extrapolate outside their training data. That should worry us about every AI model companies are using to help make predictions – and about the infrastructure, systems and people they’re removing because they think they no longer need them. The problem with these models isn’t that they don’t work; it’s that they often work so well that organizations come to rely on them and even reshape themselves around the models. That makes it especially dangerous when they fail.

Think back to the role of Value at Risk (VaR) during the Global Financial Crisis. It was accurate enough in normal markets to displace older judgment-based risk management and become enshrined in regulation. Then the housing market did something the model had never seen, and VaR continued producing reassuring risk numbers while the world burned. Greenlight Capital’s David Einhorn compared it to an airbag that works perfectly except in a car accident.

AI’s specific failure mechanism is different, but the effect is the same. And unlike VaR, AIs will be applied far beyond the world of finance. That could be fine if people knew when to rely on AIs and when to reject them. AIs, after all, like any other mathematical model, are a simplified version of the real world, not a perfect one.

Unfortunately, people have a profound tendency to confuse representations with reality, a mistake called reification. It’s why Army officers are constantly reminded in training that “the map is not the territory.” Alfred North Whitehead called this the fallacy of misplaced concreteness. In a 2009 paper in Parameters, the Army’s in-house peer-reviewed journal, Major General William Troy and I analyzed how reification can lead to disaster in military strategy by causing planners to place unwarranted faith in simulations and strip out redundancies and margins of safety they think they no longer need. The same failure modes show up in many other domains.

The financial crisis, for example, was rooted in credit-rating firms handing out AAA grades (a model of risk) on subprime-mortgage securities, and the institutions buying those securities confusing that measure of risk with the reality of risk – as they did with VaR. An AI weather forecast is just a credit rating in a fancy suit. As University of California at San Diego associate professor Rose Yu describes it, AI models do not just miss the rare event; they miss it blithely, without a hint of doubt.

AI models are so useful because most management decisions are inside the training distribution: the grocery chain forecasting milk demand, the airline pricing seats for next Tuesday, the call center routing inquiries. Improving the efficiency of those predictions is invaluable for anyone dealing with that sort of problem – which is most managers, most of the time.


But the decisions that determine whether an organization survives are not those decisions. Historians and biographers don’t spend much time on the everyday. Instead, they focus on the big moments – wars and crises, hostile takeovers, and revolutionary innovations. The tendency to fail in the biggest moments is what made VaR so dangerous. AI threatens to be the same problem at industrial scale. AIs are great at interpolation across familiar territory, enough to earn the confidence of their users, and awful at extrapolation beyond the known.

Two other factors are escalating the danger. First, organizations may thin their managerial ranks and traditional capabilities out of a misplaced confidence in AI. In doing so they are blinding themselves. In weather forecasting the physical infrastructure that could catch AI’s failures, from supercomputers to satellites, is being hollowed out because it looks like an expensive indulgence on all the normal days when AI does the same job at lower cost.

In companies, human infrastructure is at just as much risk. The middle managers who may be replaced by AI have the human judgment and experience to say, “I know the model says things are fine, but this doesn’t look right.” Lose them, and you lose that vital brake on the system. It will work right up until it doesn’t.

Second, rare events seem to be getting less rare. Black swans are turning grey. Climate change is producing weather no model has trained on (including changing the behavior of hurricanes). The post-Cold War international order is breaking down, and no one knows what the new one will look like. AI’s skill at using the past to predict the future becomes a trap when the future no longer looks like the past.

Hurricanes are on their way. They used to kill tens of thousands or even hundreds of thousands of people. They don’t anymore because we can see them coming. In our rush to embrace AI’s potential, we need to make sure we don’t lose that.

____

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Gautam Mukunda writes about corporate management and innovation. He teaches leadership at the Yale School of Management and is the author of "Indispensable: When Leaders Really Matter."


©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency, LLC.

 
