Homogenisation II

Matt Menne's Original Data From Reno

If at first you don’t succeed, try again. I still don’t understand this but I hope someone will explain!

I am trying to understand the process of Homogenisation – the procedure for sensitively extracting trends from meteorological data with large systematic uncertainties. My first post on the subject was rather too simplistic, and a reader was kind enough to point that out. Since then Matt Menne has sent me the original data from his talk at the Surface Temperature Workshop.

So we start as before with the raw data from a single weather station near Reno, Nevada (see the graph at the top of this article). Each day a thermometer recording the maximum and minimum temperatures is read, and the 365 (or 366 in a leap year) measurements of the minimum daily temperature are averaged to produce each data point on the graph (this averaging step is sketched in code after the list below). We can see that the year-to-year scatter is rather low. However, two features stand out:

  • An abrupt change from 1936 to 1937, after which the site apparently became 3 °C colder and stayed that way in following years. This is quite unlikely to be a real change in the local climate.
  • A gradual warming of about 5 °C since 1970. This looks like the development of an ‘urban heat island’: real warming, but not related to a regional climate trend.
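Here is a minimal sketch in Python of the averaging step described above. It is only an illustration of the arithmetic, not Menne's code; the function name and array layout are my own assumptions.

```python
import numpy as np

def annual_mean_tmin(years, tmin):
    """Average daily minimum temperatures into one value per calendar year.

    years : 1-D integer array giving the calendar year of each daily reading
    tmin  : 1-D array of the corresponding daily minimum temperatures (deg C)
    Returns (unique_years, annual_means) - one averaged value per year,
    i.e. one point on the raw-data graph.
    """
    unique_years = np.unique(years)
    means = np.array([tmin[years == y].mean() for y in unique_years])
    return unique_years, means
```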

The key ‘trick’ to homogenisation is to identify the change points – jumps such as the 1936/37 jump. These are identified by looking at the differences from the average of 10 nearby weather stations, where ‘nearby’ means within tens of kilometres rather than hundreds of metres. This data is shown below, and it is clear that these differences also show the 1936/37 jump, indicating that it was caused by something local to the single weather station at Reno, rather than being a regional-level climate event.
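Menne's actual algorithm uses more sophisticated statistical tests for change points. Purely as a rough illustration of the idea – looking for a jump in the difference series between a station and its neighbours – something like the following sketch would do; the function names and the crude two-segment test are my own assumptions, not Menne's method.

```python
import numpy as np

def difference_series(station, neighbours):
    """Difference between a station's annual means and the year-by-year
    mean of its neighbours.

    station    : 1-D array of annual mean temperatures for the target station
    neighbours : 2-D array, one row per neighbouring station, same years
    """
    return station - neighbours.mean(axis=0)

def largest_step(diff):
    """Crude change-point search: find the year at which splitting the
    difference series into 'before' and 'after' segments gives the
    biggest jump in mean level."""
    best_k, best_gap = 0, 0.0
    for k in range(1, len(diff)):
        gap = abs(diff[:k].mean() - diff[k:].mean())
        if gap > best_gap:
            best_k, best_gap = k, gap
    return best_k, best_gap
```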

Difference between data from Reno and the average of ten nearby stations

So we could simply throw out all the post-1936 data. Or we could try to correct it. That is what homogenisation does: on one side of the change point the data are left unmodified, while on the other side the data are replaced entirely by the average of the ten nearest stations.
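My own treatment can be written down very simply. This is only a sketch of the substitution described in the paragraph above; it is certainly not what Menne's algorithm does, which is part of what I am still trying to understand.

```python
import numpy as np

def crude_adjustment(station, neighbours, change_point):
    """Keep the station's annual means before the change point and replace
    everything from the change point onwards with the average of the
    neighbouring stations."""
    adjusted = station.copy()
    adjusted[change_point:] = neighbours.mean(axis=0)[change_point:]
    return adjusted
```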

My Treatment of the Menne Data. Data later than 1936/37 is replaced by the average of neighbouring stations.

It is clear that the 1936/37 discontinuity has been reduced, but it is still there. And when I compare my compensated data with Menne’s compensated data, a couple of features stand out. Firstly, Menne’s adjustment has affected very early data where I made no change. Secondly, Menne’s adjustment seems to have corrected the 1936/37 discontinuity much better: after 1937, his compensated data and my compensated data run almost parallel, but my data are around 2 °C colder.

Comparison of my treatment of the data and Matt Menne's.

So I haven’t solved this problem yet, because I still don’t understand what Matt Menne has done. But I think I am getting close!


2 Responses to “Homogenisation II”

  1. Homogenisation III: It’s complicated… « Protons for Breakfast Blog Says:

    […] Protons for Breakfast Blog Making sense of science « Homogenisation II […]

  2. Nick Day Says:

    >If at first you don’t succeed, try again. I still don’t understand this but I hope someone will explain!

    Er, I may have to make several attempts at that.
