Posts Tagged ‘Surface Temperature Workshop’

Homogenisation III: It’s complicated…

October 25, 2010
Figure 1 of the Menne and Williams Paper describing their method of homogenisation.

I have now written two blogs on homogenisation of climate data (this one and this one) and really want to get on with blogging about other things – mainly matters less fraught. So let’s finish this off and move on.

I realise that both my previous articles were embarrassingly oversimplified. Matt Menne sent me his paper detailing how he and his colleague Claude Williams homogenised the climate data. On reading the paper I experienced several moments of understanding, several areas of puzzlement, and a familiar feeling which approximates humiliation. Yes, humiliation. Whenever I encounter a topic I feel I should have understood but haven’t, I find myself feeling terrible about my own ignorance.

Precis…

You can read the paper for yourself, but I thought I would try to precis it here because it is not simple. It goes like this:

  • The aim of the paper is to develop an automatic method (an algorithm) that can consider every climate station temperature record in turn and extract an overall ‘climate’ trend reflected in all series.
  • The first step is to average the daily maximum and minimum values to give averaged monthly minimum and maximum values and monthly averages. This averaging reduces the ‘noise’ on the data by a factor of approximately 5.5 (the square root of 30 measurements) for the maximum and minimum data, and by approximately 7.7 (the square root of 60 measurements) for the average.
  • Next we compare each station with a network of ‘nearby’ stations by calculating the difference between the target station data and each of its neighbours. In the paper, example data (Figure 1) is given that shows that these difference series are much less ‘noisy’ than the individual series themselves. This is because the series are correlated: for example, when the monthly average temperature in Teddington is high, the monthly average temperature at nearby stations such as Hounslow is also likely to be high. Because the temperatures tend to go up and down together, the difference between them shows much less variability than either series by itself.
  • The low ‘noise’ levels on the difference series are critically important, because they allow the authors to sensitively spot when ‘something happens’ – a sudden change in one station or the other (or both). Of course at this point in the analysis they don’t know which data set (e.g. Teddington or Hounslow) contains the sudden change. Typically these changes are caused by a change of sensor or a change in the location of a climate station, and over many decades these are actually fairly common occurrences. If they were simply left in the data sets which were averaged to estimate climate changes, they would be an obvious source of error.
  • The authors use a statistical test to detect ‘change points’ in the various difference series, and once all the change points have been identified they seek to identify the series in which the change has occurred. They do this by looking at difference series formed with multiple neighbours (Teddington – Richmond, Teddington – Feltham, Teddington – Kingston etc.) and identifying the ‘culprit’ series which has shifted (a sketch of this pairwise check appears after this list). So consider the Teddington – Hounslow difference series. If Teddington is the ‘culprit’ then all the difference series which have Teddington as a partner will show the shift. However if, say, Hounslow has the shift, then we would not expect to see a shift at that time in the Teddington – Richmond difference series.
  • They then analyse the ‘culprit’ series to determine the type of shift that has taken place. They have four general categories of shift: a step-change; a drift; a step-change imposed on a drift; or a step-change followed by a drift.
  • They then adjust the ‘culprit’ series to estimate what it ‘would have shown’ if the shift had not taken place.
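
To make the pairwise idea above concrete, here is a minimal sketch in Python of how a difference series can expose a change point and point towards the culprit station. It is not the authors’ code: the synthetic station data, the crude mean-shift search and the 0.5 °C threshold are all my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
months = 240                        # 20 years of monthly means

# A shared 'regional climate' signal plus independent local noise per station.
regional = np.cumsum(rng.normal(0.0, 0.05, months))
stations = {name: regional + rng.normal(0.0, 0.3, months)
            for name in ["Teddington", "Hounslow", "Richmond", "Feltham", "Kingston"]}

# Impose an undocumented +1.5 °C step on one station half-way through the record.
stations["Hounslow"][120:] += 1.5

def biggest_step(series, margin=60):
    """Crude change-point search: split the series at every candidate month
    and return the split giving the largest shift in the mean."""
    best_k, best_shift = None, 0.0
    for k in range(margin, len(series) - margin):
        shift = series[k:].mean() - series[:k].mean()
        if abs(shift) > abs(best_shift):
            best_k, best_shift = k, shift
    return best_k, best_shift

target = "Teddington"
for neighbour in stations:
    if neighbour == target:
        continue
    diff = stations[target] - stations[neighbour]     # low-noise difference series
    k, shift = biggest_step(diff)
    flagged = abs(shift) > 0.5                        # arbitrary 0.5 °C threshold
    print(f"{target} - {neighbour}: largest shift {shift:+.2f} °C at month {k}"
          + ("   <-- change point" if flagged else ""))

# Only the Teddington-Hounslow pair is flagged while Teddington's other pairs
# stay quiet, so the shift is attributed to Hounslow rather than to Teddington.
```

With difference series this quiet, even a step much smaller than 1.5 °C would stand out, which is why the pairwise comparison is so sensitive.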

So I hope you can see that this is not simple, and that is why most of the paper is spent checking how well the algorithms they have devised for:

  • spotting change points,
  • identifying ‘culprit’ series,
  • categorising the type of change point (a sketch of this step appears below),
  • and then adjusting the ‘culprit’ series.

are working. Their methods are not perfect. But what I like about this paper is that they are very open about the shortcomings of their technique – it can be fooled, for instance, if change points occur in different series at almost the same time. However, the tests they have run show that it is capable of extracting trends with a fair degree of accuracy.
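
The third of these tasks, categorising the type of change point, can be illustrated by fitting the four candidate shapes to the culprit’s difference series and letting a simple information criterion choose between them. Again this is only a sketch of the idea: the least-squares fits, the BIC comparison and the synthetic series below are my own assumptions, not the statistical machinery actually used in the paper.

```python
import numpy as np

def classify_shift(diff, k):
    """Fit four simple shift models to a difference series `diff` with a
    candidate break at index `k`, and compare them with BIC (smaller is
    better). An illustration of categorising a change point, not the
    statistics actually used in the paper."""
    n = len(diff)
    t = np.arange(n, dtype=float)
    step = (t >= k).astype(float)
    designs = {
        "step-change":      np.column_stack([np.ones(n), step]),
        "drift only":       np.column_stack([np.ones(n), t]),
        "step on a drift":  np.column_stack([np.ones(n), t, step]),
        "step then drift":  np.column_stack([np.ones(n), step, step * (t - k)]),
    }
    scores = {}
    for name, X in designs.items():
        beta, *_ = np.linalg.lstsq(X, diff, rcond=None)
        rss = float(np.sum((diff - X @ beta) ** 2))
        scores[name] = n * np.log(rss / n) + X.shape[1] * np.log(n)
    return scores

# A synthetic difference series that is flat, then steps up and starts drifting.
rng = np.random.default_rng(0)
n, k = 200, 100
series = rng.normal(0.0, 0.2, n)
series[k:] += 1.0 + 0.01 * np.arange(n - k)

for name, bic in sorted(classify_shift(series, k).items(), key=lambda kv: kv[1]):
    print(f"{name:15s} BIC = {bic:8.1f}")
# 'step then drift' should come out with the lowest BIC for this series.
```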

Summarising…

It is a sad fact – almost an inconvenient truth – that most climate data is very reproducible, but often has a large uncertainty of measurement. The homogenisation approach to extracting trends from this data is a positive response to this fact. Some people take exception to the very concept of ‘homogenising’ climate data. And it is indeed a process in which subtle biases could occur. But having spoken with these authors, and having read this paper, I am sure that the authors would be mortified if there were an unidentified major error in their work. They have not made their analysis choices because they ‘cause’ a temperature rise in line with their political or personal desires. They have done the best they can to be fair and honest – and it does seem as though the climate trend they uncover just happens to be a warming trend in most – but not all – regions of the world.

You can read more about their work here.

Homogenisation

September 25, 2010
Raw data: annual average minimum temperature from Reno, Nevada 1895 to 2005

UPDATE: This page contains errors! Please see the comments for clarification. I have posted a second version of this calculation here.

As I have mentioned in several recent posts, the raw data from even relatively sophisticated climate stations is really rather poor quality. Rather than just ignore this data altogether, researchers have looked at it and noted that although the actual temperatures might not be correct to within perhaps ±2 °C, the errors in the measurement are likely to have remained constant for a long time. This is because the methods used to take the data have changed only very slowly. So researchers examine the data not to determine the absolute temperature at a station, but to determine whether the temperatures have changed. There are many problems inherent in this, so I thought it would be interesting to show very explicitly the kind of thing involved in this endeavour. To do so I have extracted data from a slide that Matt Menne showed in his talk at the Surface Temperature Workshop.

The graph at the head of this article shows the raw data for a single station near Reno, Nevada. Each day a thermometer which records the maximum and minimum temperature is read, and the 365 (or 366 in a leap year) measurements of the daily minimum temperature are averaged to produce each data point on the graph. We can see that the year-to-year scatter is rather low. However, two features stand out and I have re-drawn the above graph highlighting these features.

Raw data for the annual mean minimum temperature highlighting significant features.

The first feature is a dramatic shift in the data: the years following 1937 appear to be between 3 °C and 4 °C colder than the years prior to 1936. This is a pretty obvious artefact and occurred because the station was moved from one location to another – microclimates vary by this much even over distances as short as a few metres! (Think about how one side of your car can have frost on it in the morning while the other side doesn’t!) The second feature is a strikingly linear 4 °C rise in temperature since 1975. This looks very much like an Urban Heat Island (UHI) effect – a real effect, but not one caused by a shift in climate. The question that the ‘homogenisation process’ tries to answer positively is this: can we recover any information at all from the above graph? To try to extract the trend of the data, researchers look at the difference between this station and its ten nearest neighbours – many kilometres away from this station. This difference data is shown below.

Difference between minimum temperatures at Reno and the mean from its 10 nearest neighbours.

If we add this difference data to the raw data, then we should be able to compensate for the local anomalies in the data. The compensated graph is shown below in red with the original data shown in grey. It is pretty clear that the adjusted data is a better representation of the climate-related temperature changes at the Reno, Nevada station than the original data.

Data for the mean annual minimum temperature, adjusted by comparison with its neighbours.

The adjusted data do not show a sudden jump in temperature in 1936/1937. And they do not show the real rise in temperatures at the station due to the UHI effect. The data is said to have been homogenised. Now I have simplified the process a little, but not much. The professionals can make a statistical assessment of the uncertainty associated with the process.
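
For anyone who wants to see the arithmetic behind the graphs, here is a minimal sketch of the neighbour comparison described above, using made-up numbers rather than the real Reno data. The noise levels, the size of the step and the drift rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1895, 2006)
n = len(years)

# A shared regional signal (with plenty of year-to-year variability) plus
# independent station noise. All of the numbers here are made up.
regional = 0.005 * (years - 1895) + rng.normal(0.0, 0.8, n)
neighbours = np.array([regional + rng.normal(0.0, 0.3, n) for _ in range(10)])
target = regional + rng.normal(0.0, 0.3, n)

# Local artefacts of the kind visible in the Reno record:
target = target - np.where(years >= 1937, 3.5, 0.0)                    # station move
target = target + np.where(years >= 1975, 0.13 * (years - 1975), 0.0)  # UHI-like drift

# Difference between the target and the mean of its ten nearest neighbours.
diff = target - neighbours.mean(axis=0)

quiet = (years >= 1938) & (years < 1975)   # a stretch with no artefact in progress
print("year-to-year scatter of the raw series:        %.2f °C" % target[quiet].std())
print("year-to-year scatter of the difference series: %.2f °C" % diff[quiet].std())
print("difference-series mean before 1937:            %+.2f °C" % diff[years < 1937].mean())
print("difference-series mean 1938-1974:              %+.2f °C" % diff[quiet].mean())
```

With these made-up numbers the shared year-to-year variability cancels in the difference series, so the roughly 3.5 °C station-move artefact stands out clearly in the segment means, which is exactly the point of the middle graph above.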

When you look closely at the graphs of the temperature of the Earth versus time you will see that they are all labelled ‘temperature anomaly’ rather than temperature change. The data from the land surface portion of the Earth’s temperature record (around one third of the data) have ALL been adjusted in this way to highlight only changes in temperature common to many stations spread over a wide geographical area. The homogenisation analysis extracts from the original data only the portion which corresponds to long-term trends and rejects artefacts which occur only in a single localised station.
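
As an aside, an ‘anomaly’ is just the departure of a station’s temperature from its own average over a fixed base period. Here is a tiny sketch of that calculation; the 1961 to 1990 base period is a common convention but it is my choice here, not something taken from the graphs above.

```python
import numpy as np

def to_anomaly(years, temps, base=(1961, 1990)):
    """Express station temperatures as departures from that station's own
    average over a base period. Any constant bias in the readings cancels,
    which is why anomalies from different stations can be combined even
    though their absolute temperatures cannot."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    in_base = (years >= base[0]) & (years <= base[1])
    return temps - temps[in_base].mean()

# A made-up station that warms slowly: anomalies are negative early on and
# close to zero within the 1961-1990 base period.
years = np.arange(1900, 2006)
temps = 11.0 + 0.008 * (years - 1900)
print(np.round(to_anomaly(years, temps)[[0, 50, -1]], 2))
```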

Is this process fair? Well, I don’t know. From having talked with the scientists involved in this work I am sure that they are doing the best they can with the data which exists. I am 100% sure that they are not surreptitiously ‘fixing’ the maths to make the temperature rise for political ends. The rises they observe are a relatively robust signal. Could the whole process be flawed in some unanticipated way? Well, it’s possible, and that is for you to make up your own mind about. But I feel obliged to point out that whether you are convinced by this process or not, there are plenty of other reasons to be concerned about even the possibility that our climate might be changing.

Surface Temperature Workshop: Response to Paul M

September 21, 2010

PaulM left a comment on my Surface Temperature Workshop Blog, and my response was too long to fit in the small ‘reply’ box.

Your admiration and praise for the openness of this process is somewhat misplaced. See the comments by Roger Pielke at

http://pielkeclimatesci.wordpress.com/2010/09/20/candid-admissions-on-shortcomings-in-the-land-surface-temperature-data-ghcn-and-ushcn-at-the-september-exeter-meeting/

Key people in the field were not invited, and no information has been provided on who was invited or even who attended the meeting.

On the workshop blog, links to posts “will be limited to views from workshop participants”.

Also your remark “the spectrum of adjustments is generally as much positive as it is negative” is misleading. The net effect of the adjustments in the US is to introduce a warming of about 0.5 F over the period 1950-2000, as shown at
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

PaulM

Thanks for that. I disagree, and I think Roger Pielke’s blog comments are unfair. He has disabled comments so I can’t leave a comment there.

I am not a technical expert in this field – I am a general physicist and a specialist in temperature measurement, and my contributions to the workshop were generally along the lines of insisting that uncertainty of measurement estimates be included in the data files from the outset. A pretty mundane input, but hopefully a significant one. I don’t know the details of the invitation process, but the people involved did seem to me to feel slightly traumatised and so probably didn’t invite people they viewed as ‘hostile’. I think they wanted sceptics rather than cynics. What I didn’t (and still don’t) understand is why this community has been the focus of so much negativity. I think the reason they are being so widely criticised is that the output of their work indicates that the Earth is warming, and this is a politically unwelcome result for some people. From my point of view, if their work is wrong then the errors will show up eventually, but actually the ‘signal’ they see appeared to me to be fairly robust. There are plenty of other reasons to be concerned that humans might be affecting the climate and this is just one more, and IMHO, one of the less significant pieces of evidence.

The individuals I spoke with were very open to answering my ‘dumb’ questions. As a group, they seemed to me to be very genuine people who were just trying to communicate clearly what their research revealed. They spoke of their errors – and how any admission of error caused them to be pilloried – and they spoke of the stress of trying to work in the face of that.

You raised the issue of adjustments to the data and I have been slowly working on a blog posting on that specifically – hopefully in the next day or two. The key thing I learned at the meeting concerned this adjustment – the homogenisation process. Historical and current meteorological data was, and is, compiled for reasons other than climate research, and so with the possible exception of the new US climate reference network pretty much all the data from around the globe has large measurement uncertainties, probably greater than 1 °C – but these are mainly Type B (systematic) uncertainties. However, the Type A uncertainty – the reproducibility of the monthly or yearly averaged data – is good, probably less than 0.1 °C. What this community has done is to ask the question ‘Can one do anything with this data?’ and the answer they give is ‘Yes’, providing one can assess the effect of shifts in the Type B terms. I have been slowly reading through the literature on this and their arguments seem sound. The key point is that it is essential to adjust the data to cope with shifts and drifts in the Type B terms. So the adjustments they make to the data do not ‘introduce’ a warming trend, they ‘reveal’ it.
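
As an illustrative aside on the Type A versus Type B point: the little simulation below, with numbers I have invented for the purpose, shows that averaging 60 readings shrinks the random scatter by roughly the square root of 60 but leaves a fixed bias completely untouched.

```python
import numpy as np

rng = np.random.default_rng(7)

true_monthly_mean = 10.0   # °C: the quantity we would like to know
random_scatter = 0.5       # °C: Type A, varies from reading to reading
fixed_bias = 1.0           # °C: Type B, e.g. a poorly sited or drifting sensor

# 60 daily max/min readings for one month, all sharing the same fixed bias.
readings = true_monthly_mean + fixed_bias + rng.normal(0.0, random_scatter, 60)
monthly_average = readings.mean()

print("scatter of a single reading:    +/- %.2f °C" % random_scatter)
print("scatter of the 60-value mean:   +/- %.2f °C" % (random_scatter / np.sqrt(60)))
print("error of this monthly average:  %+.2f °C (still dominated by the 1 °C bias)"
      % (monthly_average - true_monthly_mean))
```

The averaging removes most of the 0.5 °C scatter but none of the 1 °C bias; homogenisation is aimed at the moments when that bias shifts.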

All the best

Michael

Surface Temperature Workshop

September 11, 2010
Global Temperature anomaly with respect to the start of the 20th Century

I have just returned from an intense three-day stint at the Surface Temperature Workshop hosted by the UK Meteorological Office in Exeter. I was attending along with three colleagues from NPL because the small community who have produced these iconic ‘temperature anomaly’ graphs (such as the one at the head of this article) have realised that, after Climate-gate, they need to proactively open up. I don’t know what I contributed to the whole affair – I spoke repeatedly about the need for uncertainty assessment – but I certainly learned a great deal. And one thing above all stands out for me – the openness of the leading lights of this community to constructive criticism. It took me several days of asking basic questions and confessing my ignorance to appreciate just how the graphs had been constructed. And when I understood, my view of their achievement changed significantly, and so did my view of this community.

The key thing to appreciate about the global temperature anomaly graphs is that they are amazingly insensitive to the uncertainty of measurement of individual weather stations. Historical temperature measurements are variable in their quality, to say the least. And even modern measurements from the GCOS network are not universally satisfactory. Accessing historical temperature records is a task more akin to archaeology than science. Take a look here at the NOAA database of foreign (non-US) records. Try clicking through to look at the climate data from, say, Guinea Bissau (12 Mb pdf). The data consists of scanned pages from log books – it simply doesn’t exist in computer-readable form. So although this data is still not incorporated into the records, thousands of records similar to this have been investigated. The overall uncertainty of measurement from these stations is probably larger than 1 °C. However, two things make the data sets useful. The first is their continuity: the same measurement has been taken repeatedly in the same way over an extended period, and so changes in the results have some significance. The second is that 60 readings of daily maximum and minimum temperatures are averaged to yield just a single estimate of monthly mean temperature: this reduces the scatter of the data significantly (but it doesn’t affect any bias in the measurements). So that is the base data that is available. The question is this: is it possible to say anything meaningful about the temperature of the Earth from this data? The surprising answer is ‘Yes’, due to a process the experts call homogenisation.

Homogenisation involves two linked processes. The first is an examination of the data series from each station to look for change-points: times where the data indicate a sudden cooling or warming. Sometimes the cause of these can be identified from notes in the historical records – for example, when the station was physically moved – even a move of a few hundred metres can make a noticeable difference. And sometimes the cause is not identified. The homogenisation process then looks at data from nearby stations to see if a similar change occurred there. If no change is found at the nearby stations, then the data is adjusted to remove the discontinuity. This is a controversial process since it is a bit like magic, but the basic principle of the adjustments is sound. And interestingly, the spectrum of adjustments is generally as much positive as it is negative. Once the data have been homogenised, it is a relatively simple matter (in principle) to work out local averages and the global average. And when anyone does this, graphs similar to that above just pop out. And now that I understand how this graph is made, I can appreciate how it is insensitive to uncertainty in the original data.

So this community – including Phil Jones – have been busy inventing and improving this kind of analysis for decades. Their work is open to criticism, because it is completely open! Anyone can download the data sets (I will blog a page with a compilation of sites in a couple of days) and analysis codes are available too. But despite the openness of the process, no one has ever produced a version of the above graph which does not show recent significant warming. So attending the meeting changed my view of this graph significantly. Previously I had not really taken it seriously, but now I suspect the data reflects a real effect. And secondly I was filled with admiration for the scientists at the heart of this who have suffered tremendous abuse and been subject to immense personal stress. I learned a good deal this week.

