This week Roger Pielke Sr. retired his weblog. I just wanted to thank him for the effort he put into it. I suspect he informed and influenced far more people than he knows. I’ve done professional work in both sustainability and energy, and have felt personally compelled to stay on top of climate science. Having spent a career working with data, complex systems, and computer models, it has been obvious to me for a while that a) the climate is a very complex system, and we don’t fully understand it, b) the data we have about our climate covers a very small window of time, and while the quality of that data continues to improve, even simple questions like “what’s the average temperature?” are non-trivial and prone to unexpectedly large error bars, and c) predicting the future climate relies on very complex computer models that have not yet demonstrated that all of their output should be trusted.
Over at Huffington Post my friend Bernard David asks an important question: as we rebuild after Sandy, what are we going to do differently than before? Are we going to simply rebuild what was there previously, or consciously make changes that will reduce the impact of future natural disasters? While the debate will continue about the degree to which Sandy was or wasn’t influenced by human-induced climate change, for the purpose of this discussion I join Roger Pielke, Jr. and others in arguing that it doesn’t matter.
In my last post, I used lessons from the open source software community and the Creative Commons effort to explore what we mean by “open climate science”. In this post I’m going to take the next step and propose a specification for open climate science. In subsequent installments, I will look at how to implement this specification within our current intellectual property legal framework. Before I dive in, it is worth reiterating that I am not a scientist (and, by logical extension, not a climate scientist).
The events that have transpired (physically) at the University of East Anglia and (virtually) around the globe have raised the important question of whether climate science is open and transparent enough. This has led, naturally, to a call for “open source” science. For me, this discussion links two amateur passions of mine: climate science and open source. Coincidentally, these are central themes of Greg Papadopoulos’ and my book, “Citizen Engineer” — not because we miraculously anticipated this particular moment, but because we saw these as the two largest knowledge gaps in today’s engineers.
Roger Pielke, Jr. summarizes the state of climate legislation in Australia, and speculates on what it could mean for the US. If that is the outcome here, I would consider it quite positive: strengthen the renewable energy and efficiency efforts, and drop the poorly designed cap-and-trade for now.
The title may have led you to believe that the temperature is rising worse than expected, but the comment is about the data itself. The various sets of temperature data we have for climate modeling are not very good, especially as you go back in time. This shouldn’t be surprising, given that we’re trying to detect changes in temperature on the order of a few tenths of a degree C per decade using sensors and data collection networks that largely predate the PC and Internet eras.