Meteorologist Mike Smith’s recent blog post provides a good reminder that the concept of global temperature is far more complex than a casual observer might suspect.
If you think about it for a while, you’d probably come up with two of the bigger problems on your own. The first is the uneven distribution of reliable weather stations. Coverage is decent in the economically advanced countries, but goes downhill from there. And as you’d expect, the poles and oceans have incredibly limited coverage.
Of course, this raises the question of how to create a coherent global average, especially since stations come and go over time. How much weight do you give a station whose nearest neighbor is hundreds or thousands of miles away, as opposed to one with many neighbors tens of miles away? What kind of algorithm makes the most sense to fill in the big empty areas? Some of these problems are being addressed by satellites, but these have blank spots as well (this post by Dr. Roy Spencer gives a great sense of the complexity of satellite temperature sensing).
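To make the weighting question concrete, here is a minimal sketch of one common gap-filling idea, inverse-distance weighting: estimate the temperature at an empty grid point by averaging nearby stations, with closer stations counting more. The station coordinates and temperatures are invented for illustration, and the flat-earth distance is a deliberate simplification; real products use spherical distances and far more sophisticated interpolation.

```python
import math

def idw_estimate(target, stations, power=2):
    """Estimate temperature at `target` (lat, lon) from stations given as
    (lat, lon, temp) tuples, using inverse-distance weighting."""
    num, den = 0.0, 0.0
    for lat, lon, temp in stations:
        # Crude planar distance in degrees; fine for a sketch.
        d = math.hypot(target[0] - lat, target[1] - lon)
        if d == 0:
            return temp  # a station sits exactly on the target point
        w = 1.0 / d**power
        num += w * temp
        den += w
    return num / den

# A sparse region: one distant warm station vs. a cluster of near ones.
stations = [(10.0, 10.0, 25.0),   # far away
            (0.5, 0.0, 14.0),     # near
            (0.0, 0.5, 15.0)]     # near
print(round(idw_estimate((0.0, 0.0), stations), 2))  # about 14.5
```

The two nearby stations dominate the estimate, which is exactly the behavior you want; the open question in the real datasets is what to do when *no* station is nearby and the "far away" value is all you have.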
The second problem is the big changes in measurement technology. We’re increasingly reliant on satellites and other modern instruments for measuring temperature, but these are (obviously) recent inventions, which leaves the problem of finding comparable historical temperatures to compare against. This becomes especially tricky when the errors of older thermometers are larger than the temperature trends we’re trying to detect. (This problem hasn’t ever really gone away. To get a feel for the issue, take a look at Anthony Watts’ surfacestations.org, the greatest citizen science project of all time, imo.)
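A quick numeric illustration of why instrument precision matters (the numbers here are hypothetical, chosen only to make the point): suppose the true trend is 0.01 °C per year, but the thermometer only resolves half-degree steps. Over thirty years the recorded series is flat for decades and then shows a spurious half-degree jump, even though nothing abrupt happened.

```python
# Hypothetical: a steady 0.01 °C/yr warming trend read by a thermometer
# that only resolves 0.5 °C increments.
true_temps = [15.0 + 0.01 * year for year in range(30)]   # 15.00 .. 15.29
recorded   = [round(t * 2) / 2 for t in true_temps]       # nearest 0.5 °C

print(min(recorded), max(recorded))  # prints: 15.0 15.5
```

The true change over the whole period (0.29 °C) is smaller than a single instrument step, so the recorded series either shows nothing or shows too much; you can’t recover a fraction-of-a-degree trend from it without statistical assumptions.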
Going way back in time, we don’t even have the benefit of human temperature records, so we’re left with proxies, i.e. natural indicators of temperature over time such as tree rings or ice cores.
Add all of this up, and you find that calculating a global temperature record takes a great deal of complex math. Of course, anything that complex can be done in more than one way, and different approaches mean different results. When you’re trying to figure out whether this year is a fraction of a degree warmer or cooler than some other year, small changes in algorithm can give you very different answers.
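Here is a toy demonstration of how algorithm choice alone moves the answer. The station set is invented: mid and high latitudes are over-sampled, as in the real network. A naive mean counts every station equally; weighting each station by the cosine of its latitude (a crude proxy for the shrinking area of latitude bands toward the poles) gives a noticeably different global figure from the exact same data.

```python
import math

# Hypothetical station anomalies in °C, as (latitude, anomaly) pairs.
# High latitudes are over-represented, mimicking the real network.
stations = [(70.0, 1.2), (60.0, 0.9), (50.0, 0.8), (45.0, 0.7), (10.0, 0.2)]

# Method 1: naive mean -- every station counts equally.
naive = sum(a for _, a in stations) / len(stations)

# Method 2: weight by cos(latitude), approximating the area each
# station's latitude band actually covers on the globe.
weights = [math.cos(math.radians(lat)) for lat, _ in stations]
area_weighted = (sum(w * a for w, (_, a) in zip(weights, stations))
                 / sum(weights))

print(round(naive, 2), round(area_weighted, 2))  # prints: 0.76 0.65
```

Same stations, same readings, two defensible methods, and the "global" anomaly differs by about 0.11 °C, which is larger than many of the year-to-year differences being debated.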
As Mike Smith indicates, concern surfaces when the same people trying to predict future temperatures are also the ones deciding how the global temperature record is computed, and when the adjustments run in a direction that makes past predictions look better. Personally, I can’t judge whether there’s any validity to these concerns, but as a user of weather data I share Mike’s annoyance whenever a temperature record is retroactively recomputed.