There’s a lot we don’t know about future climates. Despite continuous improvements, our models still do notoriously poorly with some fairly fundamental dynamics: ocean energy transport, the Indian monsoon, even just reproducing past climate. Worse, we only have the computing resources to run each model a handful of times. We haven’t estimated the distribution of possible temperatures that even one model predicts.
Instead, we treat the range of models as a kind of distribution over beliefs. This is useful, but a poor approximation to an actual distribution. For one, most of the models are incestuously related. For another, climate is chaotic no matter how you model it, so changing the inputs within measurement error can produce a whole range of future temperatures.
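That sensitivity to tiny input changes is easy to see even in toy systems. Here's a minimal sketch using the classic Lorenz-63 equations (not a climate model, just the standard illustration of chaos): two runs whose starting points differ by far less than any measurement error end up in completely different states.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two runs whose initial conditions differ by far less than any
# plausible measurement error.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])
for _ in range(5000):
    a, b = lorenz_step(a), lorenz_step(b)

print(np.abs(a - b))  # the 1e-8 difference has grown to macroscopic size
```

Run an ensemble of these perturbed starts and you get a spread of outcomes, not a single answer; that's the distribution a handful of GCM runs can't map out.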
First, they collected the full range of 44 CMIP5 models (downscaled and adjusted to predict every US county). They extracted each one’s global 2080-2099 temperature prediction.
Then they ran a *hemisphere*-scale model (MAGICC), for which it’s possible to compute enough runs to produce a whole distribution of future temperatures.
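To see why a cheap model changes the game, here's a toy Monte Carlo sketch. MAGICC itself is far richer; this stand-in just computes equilibrium warming for doubled CO2 from a randomly drawn climate feedback parameter (the lognormal prior here is illustrative, not MAGICC's), which is fast enough to run a hundred thousand times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a cheap climate model: equilibrium warming for a
# doubling of CO2 is T = F2x / lam, where F2x is the forcing from
# doubled CO2 (~3.7 W/m^2) and lam is the climate feedback parameter.
# The prior on lam is illustrative, not MAGICC's.
F2X = 3.7                                               # W/m^2
lam = rng.lognormal(mean=0.1, sigma=0.3, size=100_000)  # W/m^2/K

warming = F2X / lam  # K per CO2 doubling, one value per draw

# A cheap model can be run enough times to characterise the whole
# distribution, tails included -- infeasible for a full GCM.
for q in (0.05, 0.5, 0.95):
    print(f"{q:.0%} quantile: {np.quantile(warming, q):.1f} K")
```

The point isn't the numbers; it's that with a model this cheap you get quantiles and tails, not just a handful of point predictions.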
Here’s the result: a matching between the models and the full distribution of end-of-century global temperatures.
[From American Climate Prospectus, Technical Appendix I: Physical Climate Projections]
The opaque maps are each the output of a different GCM. The ghosted maps aren’t, at least not alone. In fact, every ghosted map represents a portion of the probability distribution that is currently missing from our models.
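The matching itself is simple to sketch: place each GCM's point prediction within the full distribution and see which stretches of probability have no model nearby. The ensemble and GCM values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins: a big MAGICC-style ensemble of end-of-century
# global temperatures, and a handful of GCM point predictions.
magicc = rng.lognormal(mean=1.2, sigma=0.3, size=50_000)    # K, made up
gcms = np.array([2.1, 2.6, 2.9, 3.3, 3.5, 3.8, 4.2, 4.9])  # K, made up

# Where does each GCM fall in the full distribution?
quantiles = np.array([np.mean(magicc <= g) for g in gcms])
print(quantiles.round(2))

# Stretches of probability with no GCM nearby are the "ghosted" maps:
# here, for instance, nothing covers the hottest tail.
tail = np.mean(magicc > gcms.max())
print("probability above the hottest GCM:", tail.round(3))
```

Any sizable gap between consecutive quantiles, or mass beyond the extreme models, is a piece of the distribution no existing GCM speaks for.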
To fix this, they made “surrogate” models by decomposing existing GCM outputs into “forced” seasonal patterns, which scale with global temperature, and unforced variability. Then they scaled up the forced pattern, added back the variability, and bingo! A new surrogate model.
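The decompose-scale-recombine recipe can be sketched in a few lines. This is a minimal pattern-scaling illustration on synthetic data, not the actual CMIP5 processing: fit each grid cell's response to global mean temperature, treat the residual as unforced variability, then rescale the fitted pattern to a warming the ensemble never sampled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "GCM output": 20 years of local temperature anomalies on a small
# grid, built from a forced pattern that scales with global mean warming
# plus unforced year-to-year variability. (Synthetic, not CMIP5.)
years = 20
global_T = np.linspace(0.5, 1.5, years)           # global mean warming, K
pattern = rng.uniform(0.8, 2.0, size=(4, 4))      # local warming per K global
noise = rng.normal(0.0, 0.2, size=(years, 4, 4))  # unforced variability
gcm = global_T[:, None, None] * pattern + noise

# Step 1: estimate the forced pattern by regressing each grid cell on
# global mean temperature (least squares through the origin).
est_pattern = np.tensordot(global_T, gcm, axes=(0, 0)) / np.dot(global_T, global_T)

# Step 2: the residual after removing the forced response is the
# unforced variability.
variability = gcm - global_T[:, None, None] * est_pattern

# Step 3: build a "surrogate" by scaling the forced pattern to a global
# warming the original runs never reached, then adding variability back.
target_T = 4.0  # K -- a piece of the missing tail
surrogate = target_T * est_pattern + variability[-1]

print(surrogate.round(1))
```

The surrogate inherits the GCM's spatial structure and its year-to-year noise, but lives at a global temperature no run in the ensemble actually produced.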
Surrogates can’t tell us the true horrors of a high-temperature world, but without them we’d be navigating without a tail.