Surface Temperature Workshop

Global temperature anomaly with respect to the start of the 20th century

I have just returned from an intense three-day stint at the Surface Temperature Workshop hosted by the UK Meteorological Office in Exeter. I attended along with three colleagues from NPL because the small community who produce these iconic ‘temperature anomaly’ graphs (such as the one at the head of this article) have realised that, after Climate-gate, they need to open up proactively. I don’t know what I contributed to the whole affair – I spoke repeatedly about the need for uncertainty assessment – but I certainly learned a great deal. One thing above all stands out for me: the openness of the leading lights of this community to constructive criticism. It took me several days of asking basic questions and confessing my ignorance to appreciate just how the graphs had been constructed. And when I understood, my view of their achievement changed significantly, and so did my view of this community.

The key thing to appreciate about the global temperature anomaly graphs is that they are remarkably insensitive to the uncertainty of measurement at individual weather stations. Historical temperature measurements are, to say the least, variable in their quality, and even modern measurements from the GCOS network are not universally satisfactory. Accessing historical temperature records is a task more akin to archaeology than science. Take a look here at the NOAA database of foreign (non-US) records. Try clicking through to look at the climate data from, say, Guinea Bissau (12 Mb pdf). The data consists of scanned pages from log books – it simply doesn’t exist in computer-readable form. So although this particular data is still not incorporated into the records, thousands of similar records have been investigated. The overall uncertainty of measurement from these stations is probably larger than 1 °C.

However, two things make the data sets useful. The first is their continuity: the same measurement has been taken repeatedly in the same way over an extended period, so changes in the results have some significance. The second is that 60 readings of daily maximum and minimum temperatures are averaged to yield just a single estimate of the monthly mean temperature: this reduces the scatter of the data significantly (but it does nothing to reduce any bias in the measurements). So that is the base data that is available. The question is this: is it possible to say anything meaningful about the temperature of the Earth from this data? The surprising answer is ‘Yes’, due to a process the experts call Homogenisation.
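To make that last point concrete, here is a minimal sketch in Python – my own illustration, not anything presented at the workshop. The 1 °C scatter of a single reading and the 0.5 °C station bias are assumed numbers chosen purely for illustration; the point is that averaging 60 readings shrinks the random scatter by roughly a factor of √60 ≈ 8, while the bias survives untouched.

```python
# A toy simulation (illustrative assumptions only: 1.0 °C random scatter per
# reading, a fixed 0.5 °C station bias, 60 readings per month).
import numpy as np

rng = np.random.default_rng(42)

true_monthly_mean = 15.0   # hypothetical true monthly mean temperature, °C
station_bias = 0.5         # systematic error: unaffected by averaging
daily_scatter = 1.0        # random error of a single reading, °C
n_readings = 60            # ~30 daily maxima + 30 daily minima

# Simulate many months to see the spread of the monthly-mean estimates.
months = 10_000
daily = (true_monthly_mean + station_bias
         + rng.normal(0.0, daily_scatter, size=(months, n_readings)))
monthly_means = daily.mean(axis=1)

print(f"scatter of a single reading : {daily_scatter:.2f} °C")
print(f"scatter of the monthly mean : {monthly_means.std():.2f} °C "
      f"(expect about {daily_scatter / np.sqrt(n_readings):.2f} °C)")
print(f"bias of the monthly mean    : {monthly_means.mean() - true_monthly_mean:.2f} °C "
      "(the 0.5 °C bias survives averaging)")
```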

Homogenisation involves two linked processes. The first is an examination of the data series from each station to look for change-points: times where the data indicate a sudden cooling or warming. Sometimes the cause can be identified from notes in the historical records – for example, when the station was physically moved; even a few hundred metres can make a noticeable difference. And sometimes the cause is not identified. The homogenisation process then looks at data from nearby stations to see if a similar change occurred there. If no change is found in the nearby stations, the data is adjusted to remove the discontinuity. This is a controversial process, since it can look a bit like magic, but the basic principle of the adjustments is sound. And interestingly, the spectrum of adjustments is generally as much positive as negative. Once the data have been homogenised, it is a relatively simple matter (in principle) to work out local averages and then the global average. And when anyone does this, graphs similar to that above just pop out. Now that I understand how this graph is made, I can appreciate why it is insensitive to uncertainty in the original data.
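The sketch below is my own toy illustration of the homogenisation idea, not the operational algorithms applied to the GHCN or CRU data. The station series, the artificial 0.8 °C jump and the crude change-point test are all invented; the point is simply that subtracting a neighbour average cancels the shared climate signal, so a station-specific discontinuity stands out and can be removed.

```python
# A toy homogenisation sketch (invented data; not the operational algorithms).
import numpy as np

rng = np.random.default_rng(1)
n_months = 240                                          # 20 years of monthly anomalies
neighbours = rng.normal(0.0, 0.2, size=(5, n_months))   # 5 nearby stations
target = rng.normal(0.0, 0.2, size=n_months)            # the station being checked

# Introduce an artificial 0.8 °C jump at month 120 (e.g. the station moved).
true_change_point = 120
target[true_change_point:] += 0.8

# Difference series: target minus the neighbour average. Any real, shared
# climate signal would cancel; station-specific discontinuities remain.
diff = target - neighbours.mean(axis=0)

# Crude change-point detection: choose the split that maximises the jump
# between the means of the two halves of the difference series.
jumps = [abs(diff[t:].mean() - diff[:t].mean()) for t in range(12, n_months - 12)]
t_best = int(np.argmax(jumps)) + 12
step = diff[t_best:].mean() - diff[:t_best].mean()

# Adjust the later segment so the series is homogeneous again.
homogenised = target.copy()
homogenised[t_best:] -= step

resid = homogenised - neighbours.mean(axis=0)
print(f"detected change-point at month {t_best} (true: {true_change_point})")
print(f"estimated step: {step:+.2f} °C (true: +0.80 °C)")
print(f"residual step after adjustment: "
      f"{resid[t_best:].mean() - resid[:t_best].mean():+.2f} °C")
```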

So this community – including Phil Jones – have been busy inventing and improving this kind of analysis for decades. Their work is open to criticism, because it is completely open! Anyone can download the data sets (I will blog a page with a compilation of sites in a couple of days) and the analysis codes are available too. But despite the openness of the process, no one has ever produced a version of the above graph which does not show recent significant warming. So attending the meeting changed my view of this graph significantly: previously I had not really taken it seriously, but now I suspect the data reflect a real effect. And I was filled with admiration for the scientists at the heart of this work, who have suffered tremendous abuse and been subject to immense personal stress. I learned a good deal this week.


4 Responses to “Surface Temperature Workshop”

  1. Surface Temperature Data « Protons for Breakfast Blog Says:

    […] Protons for Breakfast Blog Making sense of science « Surface Temperature Workshop […]

  2. PaulM Says:

    Your admiration and praise for the openness of this process is somewhat misplaced. See the comments by Roger Pielke at

    Candid Admissions On Shortcomings In The Land Surface Temperature Data [GHCN and USHCN] At The September Exeter Meeting

    Key people in the field were not invited, and no information has been provided on who was invited or even who attended the meeting.

    On the workshop blog, links to posts “will be limited to views from workshop participants”.

    Also your remark “the spectrum of adjustments is generally as much positive as it is negative” is misleading. The net effect of the adjustments in the US is to introduce a warming of about 0.5 °F over the period 1950-2000, as shown at
    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

  3. Surface Temperature Workshop: Response to Paul M « Protons for Breakfast Blog Says:

    […] left a comment on my Surface Temperature Workshop Blog, and my response was too long to fit in the small ‘reply’ […]
