Archive for October, 2010

Why is yellow food colouring red?

October 25, 2010

Not an exposition, but a question. Why is yellow food colouring red?

While making a movie the other day, I purchased some yellow food colouring and some red food colouring. It wasn’t till I got it home that I was astonished to find that the yellow colouring was red! Even the bottle top was red!

Now I can begin to explain this for myself – I am not there yet – but I was surprised, to say the least. So while I try to understand this, can someone please explain to me ‘Why is yellow food colouring red?’

UPDATE

My colleague Dave Lowe kindly sent me a link to a site which discusses a related problem. The answer has to do with the amount of light transmitted by the food colouring at different wavelengths. It is also another example of something we touch upon in Protons for Breakfast: the colour of an object depends on three things – the spectrum of light illuminating the object; the way light travels through or is reflected from the object; and the different sensitivity of our eyes to different wavelengths.

Imagine the spectrum of white light shown above, and imagine that the food colouring molecules absorb all the high frequency light (which we perceive as blue), a middling amount of the middle wavelengths (which we perceive as yellowy-orange), and almost none of the low frequency light (which we perceive as red). Because white light has more of the middle wavelengths present, even though the food colouring transmits a higher fraction of red light, thin layers or small amounts of the colouring appear yellow. But as the thickness of the food colouring increases, something unusual happens – something described by Beer’s law (the Beer-Lambert law).

One thin layer: The blue end of the spectrum is strongly absorbed even by a thin layer of the food colouring, so a thin layer transmits almost no blue light. The yellow middle of the spectrum is quite heavily absorbed – let’s guess 50% absorption by a thin layer – but because white light contains lots of yellow light, there is still lots of yellow light being transmitted. The red light is almost all transmitted, but there is not much of it present in white light.

Two thin layers: Almost no blue light made it through the first thin layer – and even less makes it through the second or subsequent layers. Half (50%) of the yellow middle of the spectrum made it through the first thin layer, and so 50% of that (1/2 x 1/2 = 1/4) gets through the second layer. Nearly all the red light that was transmitted through the first layer also makes it through the second layer.

More thin layers making a thick layer: Since no blue light made it through the first layers, none will make it through a thick layer. For each additional layer, a further 50% of the remaining yellow light is absorbed. After 10 thin layers that amounts to (1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2) = 1/1024, which is approximately one thousandth of the original yellow light. Nearly all the red light that was transmitted through the first and second layers also makes it through the subsequent layers. So after many layers, the initial dominance of the transmitted yellow light is replaced by dominance of the red light.
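
To make the arithmetic concrete, here is a small Python sketch of the argument above. The numbers – how much blue, yellow and red there is in white light, and how much of each band a single thin layer transmits – are my own guesses chosen to match the description, not measured values for any real food colouring.

```python
# A minimal sketch (illustrative numbers only) of the Beer-Lambert argument:
# transmission through n identical thin layers is the single-layer
# transmission raised to the power n.

# Assumed relative amounts of each band in white light (yellow-rich)
white_light = {"blue": 1.0, "yellow": 3.0, "red": 1.0}

# Assumed single-layer transmission: almost no blue, half the yellow,
# nearly all the red
layer_transmission = {"blue": 0.05, "yellow": 0.5, "red": 0.95}

def transmitted(n_layers):
    """Relative intensity of each band after passing through n thin layers."""
    return {band: white_light[band] * layer_transmission[band] ** n_layers
            for band in white_light}

for n in (1, 2, 10):
    out = transmitted(n)
    print(n, {band: round(value, 4) for band, value in out.items()})
# n=1 : yellow (1.5) outweighs red (0.95)  -> a thin layer looks yellow
# n=10: yellow (~0.003) is swamped by red (~0.6) -> a thick layer looks red
```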

 

Illustration of the Beer-Lambert law (Beer's law)

Homogenisation III: It’s complicated…

October 25, 2010
Figure 1 of the Menne and Williams paper describing their method of homogenisation.

I have now written two blogs on homogenisation of climate data (this one and this one) and really want to get on with blogging about other things – mainly matters less fraught. So let’s finish this off and move on.

I realise that both my previous articles were embarrassingly oversimplified. Matt Menne sent me his paper detailing how he and his colleague Claude Williams homogenised the climate data. On reading the paper I experienced several moments of understanding, several areas of puzzlement, and a familiar feeling which approximates humiliation. Yes, humiliation. Whenever I encounter a topic I feel I should have understood but haven’t, I find myself feeling terrible about my own ignorance.

Precis…

You can read the paper for yourself, but I thought I would try to precis the paper because it is not simple. It goes like this:

  • The aim of the paper is to develop an automatic method (an algorithm) that can consider every climate station temperature record in turn and extract an overall ‘climate’ trend reflected in all series.
  • The first step is to average the daily maximum and minimum values to give averaged monthly minimum and maximum values and monthly averages. This averaging reduces the ‘noise’ on the data by a factor of approximately 5 (the square root of 30 measurements) for the maximum and minimum data and 7.5 for the average (the square root of 60 measurements).
  • Next we compare each station with a network of ‘nearby’ stations by calculating the difference between the target station data and each of its neighbours. In the paper, example data (Figure 1) is given that shows that these difference series are much less ‘noisy’ than the individual series themselves. This is because the individual series are correlated with each other: for example, when the monthly average temperature in Teddington is high, then the monthly average temperature at nearby stations such as Hounslow is also likely to be high. Because the temperatures tend to go up and down together, the differences between them show much less variability than either series by itself.
  • The low ‘noise’ levels on the difference series are critically important. This allows the authors to sensitively spot when ‘something happens’ – a sudden change in one station or the other (or both). Of course at this point in the analysis they don’t know which data set (e.g. Teddington or Hounslow) contains the sudden change. Typically these changes are caused by a change of sensor, or location of a climate station, and over many decades these are actually fairly common occurrences. If they were simply left in the data sets which were averaged to estimate climate changes, then they would be an obvious source of error.
  • The authors use a statistical test to detect ‘change points’ in the various difference series, and once all the change points have been identified they seek to identify the series in which the change has occurred. They do this by looking at difference series with multiple neighbours (Teddington – Richmond, Teddington – Feltham, Teddington – Kingston etc.), which allows them to identify the ‘culprit’ series which has shifted. So consider the Teddington – Hounslow difference series. If Teddington is the ‘culprit’ then all the difference series which have Teddington as a partner will show the shift. However if, say, Hounslow has the shift, then we would not expect to see a shift at that time in the Teddington – Richmond difference series. (A toy sketch of this pairwise idea follows this list.)
  • They then analyse the ‘culprit’ series to determine the type of shift that has taken place. They have 4 general categories of shift: a step-change; a drift; a step-change imposed on a drift; or a step-change followed by a drift.
  • They then adjust the ‘culprit’ series to estimate what it ‘would have shown’ if the shift had not taken place.
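
To make the pairwise idea concrete, here is a toy sketch in Python. It is emphatically not the Menne and Williams algorithm – their method uses a formal statistical test and many refinements – and all of the numbers below are invented; only the station names are borrowed from the example above. It simply shows why a step change shows up in every difference series that includes the ‘culprit’ station.

```python
# Toy sketch of the pairwise difference-series idea (not the real algorithm).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
regional = 0.01 * (years - 1900) + rng.normal(0, 0.5, years.size)  # shared regional signal

# Hypothetical stations: each sees the regional signal plus its own noise
teddington = regional + rng.normal(0, 0.3, years.size)
hounslow   = regional + rng.normal(0, 0.3, years.size)
richmond   = regional + rng.normal(0, 0.3, years.size)
teddington[years >= 1950] += 1.5          # an artificial step change in one station

def largest_step(diff):
    """Return (index, size) of the candidate break point with the largest mean shift."""
    best_i, best_shift = None, 0.0
    for i in range(10, diff.size - 10):   # keep both segments reasonably long
        shift = diff[i:].mean() - diff[:i].mean()
        if abs(shift) > abs(best_shift):
            best_i, best_shift = i, shift
    return best_i, best_shift

# The difference series are much less noisy than the series themselves, so the
# step stands out - and it appears in every pair that includes Teddington.
for name, neighbour in [("Hounslow", hounslow), ("Richmond", richmond)]:
    i, shift = largest_step(teddington - neighbour)
    print(f"Teddington - {name}: step of {shift:+.2f} °C around {years[i]}")
```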

So I hope you can see that this is not simple and that is why most of the paper is spent trying to check how well the algorithms they have devised for:

  • spotting change points,
  • identifying ‘culprit’ series,
  • categorising the type of change point
  • and then adjusting the ‘culprit’ series.

are working. Their methods are not perfect. But what I like about this paper is that they are very open about the shortcomings of their technique – it can be fooled, for instance, if change points occur in different series at almost the same time. However, the tests they have run show that it is capable of extracting trends with a fair degree of accuracy.

Summarising…

It is a sad fact – almost an inconvenient truth – that most climate data is very reproducible, but often has large uncertainty of measurement. The homogenisation approach to extracting trends from this data is a positive response to this fact. Some people take exception to the very concept of ‘homogenising’ climate data. And it is indeed a process in which subtle biases could occur. But having spoken with these authors, and having read this paper, I am sure that the authors would be mortified if there were an unidentified major error in their work. They have not made the analysis choices they have because doing so ‘causes’ a temperature rise which is in line with their political or personal desires. They have done the best they can to be fair and honest – and it does seem as though the climate trend they uncover just happens to be a warming trend in most – but not all – regions of the world.

You can read more about their work here.

Homogenisation II

October 4, 2010
Matt Menne's Original Data From Reno

If at first you don’t succeed, try again. I still don’t understand this but I hope someone will explain!

I am trying to understand the process of homogenisation – the procedure for sensitively extracting trends from meteorological data with large systematic uncertainties. My first post on the subject was rather too simplistic and a reader was kind enough to point that out. Since then Matt Menne has sent me the original data from his talk at the Surface Temperature Workshop.

So we start as before with the raw data from a single weather station near Reno, Nevada (see the graph at the top of this article). Each day a thermometer which records the maximum and minimum temperatures is read, and the 365 (or 366 in a leap year) readings of the minimum daily temperature are averaged to produce each data point on the graph. We can see that the year-to-year scatter is rather low. However, two features stand out:

  • An abrupt change from 1936 to 1937 where the site apparently became 3 °C colder and stayed that way in following years. This is quite unlikely.
  • A gradual warming by about 5 °C since 1970. This looks like the development of an ‘urban heat island’: real warming, but not related to a regional climate trend.

The key ‘trick’ to homogenisation is to identify the change points – the jumps such as the 1936/37 jump. These are identified by looking at the differences from the average of 10 nearby weather stations. ‘Nearby’ means within tens of kilometres rather than hundreds of metres. This data is shown below, and it is clear that these differences also show the 1936/37 jump, indicating that this was caused by something local to the single weather station at Reno, rather than being a regional-level climate event.

Difference between data from Reno and the average of ten nearby stations

So we could simply throw out all the post-1936 data. Or, we could try to correct it. That is what homogenisation does: on one side of the change point, the data is left unmodified. On the other side of the change point the data is replaced entirely by the average of the ten nearest stations.
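
For what it is worth, here is a minimal Python sketch of that procedure using made-up numbers (not the real Reno data, and the variable names are just illustrative): form the difference from the neighbour average to reveal the jump, then keep the station’s own data before the change point and substitute the neighbour average after it.

```python
# A minimal sketch (toy numbers, not the real Reno data) of the two steps
# described above: spotting the jump in the station-minus-neighbours
# difference series, then substituting the neighbour average after the
# change point while leaving the earlier data unmodified.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 1980)

neighbour_average = rng.normal(0.0, 0.2, years.size)               # stand-in for 10 nearby stations
reno = neighbour_average + 2.0 + rng.normal(0.0, 0.3, years.size)  # assume Reno runs locally warmer
reno[years >= 1937] -= 3.0                                         # the apparent 3 °C step change

difference = reno - neighbour_average      # much less noisy; the 1936/37 jump stands out
corrected = np.where(years < 1937, reno, neighbour_average)
# Note: if the station sits persistently warmer or colder than its neighbours,
# substituting the neighbour average leaves a residual offset at the join.
```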

My Treatment of the Menne Data. Data later than 1936/37 is replaced by the average of neighbouring stations.

It is clear that the 1936/37 discontinuity has been reduced but it is still there. And comparing my compensated data with Menne’s compensated data, a couple of features stand out. Firstly, Menne’s adjustment has affected very early data where I made no change. Secondly, Menne’s adjustment seems to have corrected the 1936/37 discontinuity much better. After 1937, his compensated data and my compensated data run almost parallel, but my data are around 2 °C colder.

Comparison of my treatment of the data and Matt Menne's.

So I haven’t solved this problem yet, because I still don’t understand what Matt Menne has done. But I think I am getting close!

Wind Turbines: The Reality

October 3, 2010
Small Wind Turbines on test

For alternative methods of energy generation to make sense they need to do more than simply appear to be ‘ecological’: they need to make sound ecological and economic sense. And nothing could appear more ecological than sticking a wind turbine on one’s roof! But does this make either economic or ecological sense?

My colleague Neil Campbell was kind enough to send me a link to a site which conducted extensive tests on small wind turbines. I have analysed the results below, and they are simple to understand: unless you have no connection to the electrical mains, wind power makes no sense at all on a small scale. A basic physics analysis indicates that the extractable power is proportional to the square of the rotor diameter, so a 5 metre diameter turbine can extract 5 x 5 = 25 times more power than a 1 metre diameter turbine. Wind turbines are not cheap, and wasting money and resources on a wind turbine is as wasteful as wasting them on anything else.

Results

The table below shows, in turn: the name of each wind turbine; its rotor diameter in metres; its cost in euros; its actual output in a favourable wind environment (kWh per year); what this output equates to as an average power (W); the cost per unit of electricity if the wind turbine lasts for 10 years (compare with around £0.20 for a maximum unit cost in the UK); a rough estimate of the maximum average output if wind speeds are a consistent 5 metres per second (11 m.p.h.) for 30% of the time; and a record of how much of that plausible maximum the wind turbines actually achieved. I evaluated the plausible maximum using the formula

Power ≈ 0.6 × 0.5 × (air density) × π × (rotor radius)² × (wind speed)³
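
As a check on my arithmetic, here is a short Python sketch that reproduces the ‘plausible maximum’ and ‘cost per kWh’ columns of the table below. The function names are mine, and the assumptions are the ones used above plus an assumed air density of 1.2 kg/m³: a steady 5 m/s wind for 30% of the time and a power coefficient of 0.6 (roughly the Betz limit).

```python
# A sketch reproducing the 'plausible maximum' and 'cost per kWh' columns.
import math

AIR_DENSITY = 1.2     # kg/m^3 (assumed value)
WIND_SPEED = 5.0      # m/s
DUTY_FACTOR = 0.3     # wind blows at this speed 30% of the time
POWER_COEFF = 0.6     # fraction of the wind's kinetic power extracted

def plausible_max_output(rotor_diameter_m):
    """Plausible maximum average power (W) for a rotor of the given diameter."""
    radius = rotor_diameter_m / 2.0
    swept_area = math.pi * radius ** 2
    peak_power = POWER_COEFF * 0.5 * AIR_DENSITY * swept_area * WIND_SPEED ** 3
    return DUTY_FACTOR * peak_power

def cost_per_kwh(cost_euros, annual_output_kwh, lifetime_years=10):
    """Cost of each unit of electricity if the turbine lasts 10 years."""
    return cost_euros / (annual_output_kwh * lifetime_years)

# Example: the 3.7 m Skystream from the table
print(round(plausible_max_output(3.7)))        # ~145 W
print(round(cost_per_kwh(10742, 2109), 2))     # ~0.51 euro per kWh
```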

| Turbine | Rotor diameter (m) | Cost (€) | Output (kWh/year) | Average output (W) | Cost per kWh depreciated over a 10-year life (€) | Plausible maximum average output (W) | Fraction of plausible maximum achieved |
|-------------|------|-------|------|-------|------|-----|------|
| Energy Ball | 1    | 4304  | 73   | 8.3   | 5.90 | 11  | 78%  |
| Ampair      | 1.7  | 8925  | 245  | 28    | 3.64 | 31  | 91%  |
| Turby       | 2    | 21350 | 247  | 28.1  | 8.64 | 42  | 66%  |
| Airdolphin  | 1.8  | 17548 | 393  | 44.8  | 4.47 | 34  | 130% |
| WRE 030     | 2.5  | 29512 | 404  | 46    | 7.30 | 66  | 69%  |
| WRE 060     | 3.3  | 37187 | 485  | 55.4  | 7.67 | 115 | 48%  |
| Passaat     | 3.12 | 9239  | 578  | 66    | 1.60 | 103 | 64%  |
| Skystream   | 3.7  | 10742 | 2109 | 240.7 | 0.51 | 145 | 166% |
| Montana     | 5    | 18508 | 2691 | 307   | 0.69 | 265 | 116% |

The data above teach three lessons:

  • The last column tells me that wind turbines are roughly as efficient as they can be. In this context I take any answer between 50% and 150% to be roughly equal to 100%.
  • To make more economic sense, the only ways forward are (a) to use larger diameter blades or (b) to reduce the cost of each turbine.
  • 4 metre and 5 metre diameter turbines are on the borderline of making economic sense if they are mounted on a good tower in a windy place. But a 5 metre diameter turbine is a serious engineering undertaking and would probably need a 25 metre tower to make sense. Would your neighbours object? You know they would!

So in summary, small-scale wind power doesn’t make sense. And it’s the alignment of ecology and economics that makes that clear.

And physics of course :-)

