
Would you like to measure the surface temperature of the Earth?

August 18, 2014

Our estimate of global mean temperature is derived from analysis of around 30,000 individual station records such as the one above. This graph shows how a single station record must sometimes be pieced together from fragmentary data in different holdings. Data from Geoscience Data Journal 30 JUN 2014 DOI: 10.1002/gdj3.8

Every few weeks an e-mail filters through to me from NPL’s web enquiry service. One arrived last week:

  • We seem to be taking far reaching conclusions (climate change) from temperature measurements taken around the world over the last 300 years. Taking into account the geographic spread of such readings and the changes in methods of measurement over the years is this scientifically robust?   

I wrote a rather long-winded reply which you can read below, but after I sent it I realised I had forgotten something: I could have invited the enquirer to work out the answer for themselves!

That might have been a bit cheeky, because it is not a simple task. However, the data is freely available and its provenance has never been better established than in the databank released by the International Surface Temperature Initiative (ISTI).

ISTI is an effort to which I have myself contributed minimally by sitting on its steering committee and occasionally answering questions related to calibration and the fundamentals of temperature measurement.

The motivation for ISTI was to make sure the entire databank was publicly available. This was back in 2010, when global warming sceptics were claiming that global warming was a hoax. Since then, sceptics have switched to believing all the data and claiming that its fluctuations are of profound significance. In any case, making the data publicly accessible and free allows even small research groups – or even you, if you are not too busy – to make their own analyses.

The paper linked here describes the structure of the databank and the way it has been constructed. It is a long paper and here I only want to draw your attention to the detailed scrutiny paid to each of the 30,000 station records.

Each station record has a latitude, longitude and altitude associated with it. But one needs to decide whether a short record from one source is a unique record – in which case it must be included – or a duplicate of some fraction of a nearby station’s record. Alternatively – as the graphic at the top shows – different parts of an individual station record may be available – possibly with mis-spelt names – in different data holdings.
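
If you are curious what such a test might look like, here is a minimal sketch in Python of the kind of check involved. This is not the ISTI merge code: the 10 km distance limit, the 0.1 °C agreement threshold and the simple haversine formula are purely illustrative assumptions.

```python
# A minimal sketch (not the ISTI merge code) of deciding whether a candidate
# fragment duplicates part of an existing station record. Thresholds are
# illustrative assumptions only.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres (haversine)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_like_duplicate(station, fragment, max_km=10.0, max_diff=0.1):
    """Treat the fragment as a duplicate if it sits close to the station and
    its overlapping monthly values agree to within max_diff degrees C."""
    if distance_km(station["lat"], station["lon"],
                   fragment["lat"], fragment["lon"]) > max_km:
        return False
    overlap = [(month, temp) for month, temp in fragment["monthly"].items()
               if month in station["monthly"]]
    if not overlap:
        return False  # no common months: cannot call it a duplicate
    mean_abs_diff = sum(abs(temp - station["monthly"][month])
                        for month, temp in overlap) / len(overlap)
    return mean_abs_diff < max_diff
```

Comparing overlapping months like this is what lets one tell a genuinely independent short record from a re-keyed copy of a neighbour.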

This kind of analysis is tedious in the extreme, but is actually essential in creating a databank that people can trust. I am filled with admiration for the achievements of my ISTI colleagues. Now anyone can work out the surface temperature of the Earth for themselves.

So what are you waiting for?

================================================================

My Reply to the NPL Enquiry

Hello. I am an NPL scientist specialising in temperature measurement. Amongst other claims to fame, I have made the most accurate temperature measurements in human history, and I also sit on the steering committee of a group (http://www.surfacetemperatures.org/) that has established an open-source database of surface temperature records. I have been asked to respond to your message, which reads:

  • We seem to be taking far reaching conclusions (climate change) from temperature measurements taken around the world over the last 300 years. Taking into account the geographic spread of such readings and the changes in methods of measurement over the years is this scientifically robust?  

The short answer is ‘Yes, it is scientifically robust’ but it is not at all obvious that it should be so.

Let me give a slightly longer answer.

You are quite right to be concerned but in fact the data are analysed in a careful way which we believe eliminates most (if not all) of the inevitable distortions of the data.

First of all, the data.

Typically this consists of twice-daily readings of the maximum and minimum temperatures taken inside a Stevenson screen. Typically these are averaged to produce a daily ‘average’, and this is then further averaged to produce a ‘monthly mean temperature’. Because of this averaging the monthly mean temperatures have rather low variability and enable trends to be seen more easily than might be expected.
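
For what it is worth, here is a minimal sketch in Python of that chain of averaging – daily maximum and minimum to a daily mean, then daily means to a monthly mean. The readings are invented purely for illustration.

```python
# A minimal sketch of the averaging described above: each day's maximum and
# minimum are averaged to give a daily mean, and the daily means are averaged
# over a calendar month. The sample readings are made up for illustration.
daily_readings = [
    (21.3, 11.2),   # (daily maximum, daily minimum) in degrees C
    (19.8, 10.5),
    (22.1, 12.0),
    # ... one pair per day of the month
]

daily_means = [(t_max + t_min) / 2 for t_max, t_min in daily_readings]
monthly_mean = sum(daily_means) / len(daily_means)
print(f"Monthly mean temperature: {monthly_mean:.2f} °C")
```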

This Met Office web site shows typical data from the UK: http://www.metoffice.gov.uk/climate/uk/stationdata/

Secondly, the analysis.

The monthly mean temperature data from around the world have been analysed by four different teams and – to a very large extent – they all agree that the air temperature above the surface of the Earth has warmed substantially in the last century. One key part of their analysis is that data from a station is compared only with itself. So we look back through a single station record and ask “Does it have a rising or falling trend?”. If life were simple – which it never is – all we would have to do would be to extract the trends and average them over the correct area of the Earth’s surface. But life is never simple.
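
To make ‘extracting a trend’ concrete, here is a minimal sketch in Python that fits a straight line to a single made-up station record of monthly means and reports the slope in °C per decade. It is an illustration of the idea, not any team’s actual code.

```python
# A minimal sketch of extracting a trend from a single station record by
# fitting a straight line (ordinary least squares) to its monthly means.
# The record here is synthetic, purely for illustration.
def trend_per_decade(record):
    """Least-squares slope of temperature against time, in degrees C per decade."""
    n = len(record)
    mean_year = sum(year for year, _ in record) / n
    mean_temp = sum(temp for _, temp in record) / n
    num = sum((year - mean_year) * (temp - mean_temp) for year, temp in record)
    den = sum((year - mean_year) ** 2 for year, _ in record)
    return 10.0 * num / den   # slope is in °C per year; x10 gives °C per decade

# 60 years of invented monthly means warming at 0.07 °C per decade
record = [(1950 + i / 12, 9.5 + 0.007 * i / 12) for i in range(12 * 60)]
print(f"Trend: {trend_per_decade(record):+.2f} °C per decade")
```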

As one look’s back through a record one sees many ‘glitches’ or sudden changes, and typically there is no explanation for why they occurred? Were they real? Should we leave the glitch in the data? Or try to correct it? And if we correct it – which way should we adjust the data? Typically these glitches occur when an instrument is changed, or when a station is re-sited e.g. from one side of an airfield to another. Different teams take different approaches, but all of them take great care not to distort the data.

Let me give you one example. This is called the Pair-wise Homogenisation Algorithm (PHA) and works especially well in places such as Europe or the USA where there is a high density of weather stations. Since we are looking for a signal of ‘climate change’ we would expect this to show up on all station records in an area a few hundred kilometres across. So if we detect a glitch – we can compare that individual record with all nearby station records, one pair at a time. In this way we can see whether the glitch is real or just affects that one station. And we can correct it if we choose to. The PHA can also detect the so-called ‘urban heat island’ effect.
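
Here is a minimal sketch in Python of the pairwise idea – not the operational PHA. Subtracting a neighbouring station’s monthly series from the target’s removes most of the shared climate signal, so a step left over in the difference series points to a glitch at one of the two stations. The 24-month window and 1.0 °C threshold are assumptions made purely for illustration.

```python
# A minimal sketch (not the operational PHA) of pairwise comparison: look for
# a step change in the difference between a target station's monthly series
# and a neighbour's. Window and threshold are illustrative assumptions.
def find_step(target, neighbour, window=24, threshold=1.0):
    """Return the index of the largest step in the difference series, or None."""
    diff = [t - n for t, n in zip(target, neighbour)]
    best_i, best_step = None, 0.0
    for i in range(window, len(diff) - window):
        before = sum(diff[i - window:i]) / window   # mean difference before month i
        after = sum(diff[i:i + window]) / window    # mean difference after month i
        step = abs(after - before)
        if step > best_step:
            best_i, best_step = i, step
    return best_i if best_step > threshold else None
```

Repeating the comparison against many neighbours, one pair at a time, shows whether the step belongs to the target station (it appears in every pair) or to just one of its neighbours.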

Thirdly, geography.

So the PHA allows us to extract a trend from around 30,000 weather station records. But these are not distributed uniformly around the globe and, critically, Africa and the poles have relatively short data records. Nonetheless, based on reasonable assumptions, the warming trend does seem to be widely distributed.
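
Those ‘reasonable assumptions’ include weighting by area rather than by station count. Here is a minimal sketch in Python of one common approach – average the stations inside each 5° × 5° grid cell first, then weight each cell by the cosine of its latitude, since cells near the poles cover less area. The grid size is an illustrative choice, not the method used by any particular team.

```python
# A minimal sketch of area weighting: average stations within 5-degree grid
# cells, then weight each cell by the cosine of its centre latitude.
import math
from collections import defaultdict

def global_mean(stations):
    """stations: list of (latitude, longitude, temperature anomaly) tuples."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        row = min(int((lat + 90) // 5), 35)     # 36 latitude bands of 5 degrees
        col = int((lon % 360) // 5)             # 72 longitude bands of 5 degrees
        cells[(row, col)].append(anom)
    weighted_sum = weight_total = 0.0
    for (row, _), anoms in cells.items():
        cell_mean = sum(anoms) / len(anoms)
        centre_lat = row * 5 - 87.5             # latitude of the cell centre
        weight = math.cos(math.radians(centre_lat))
        weighted_sum += weight * cell_mean
        weight_total += weight
    return weighted_sum / weight_total
```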

And finally.

The weather records that we have were not compiled for the purposes of climatology, but we have adapted them for that purpose. But we can also look at other records, such as the dates of first blooming of particular flowers. This data is not so easy to interpret, but it is also much less prone to numerical manipulation. And critically, we can look at the extent of the melting of Arctic sea ice. Many different things we measure tell a story of warming that is consistent with what we would expect from carbon dioxide emissions.

The Berkeley Earth Project web site http://berkeleyearth.org/ has particularly nice graphs and is well organised if you would like to investigate this question further.

If you have any remaining questions, please don’t hesitate to contact me.

Michael
********************************************

Dr. Michael de Podesta MBE CPhys MInstP

Principal Research Scientist
National Physical Laboratory
Teddington, TW11 0LW
michael.depodesta@npl.co.uk
Telephone +44 (0)20 8943 6476
********************************************