Archive for January, 2014

Happy Birthday Andrew Hanson

January 28, 2014

Andrew Hanson (left) and myself standing by an alien landing pod somewhere in London

Today is Andrew Hanson’s birthday: Happy Birthday Andrew!

It would not be polite to disclose Andrew’s actual age. So let me just say that if he were twice this age he would get a message from the Queen. Wow!

Anyone who has met Andrew – and in his role as NPL’s Outreach Supremo that is a very large number of people – will have puzzled as to the source of his energy: it seems boundless.

Of course we all know that our energy comes from food, but as we get older we get less able to turn food energy into kinetic energy of our limbs.

And it was this concern that inspired my gift. I was thinking about the way Andrew is able to draw energy from his environment when I realised that he is, at heart, a heat engine.

So I have bought him a supplementary heat engine. My hope is that, in addition to his internal biological resources, it will allow him to convert environmental energy into energy of motion.

Hopefully it will keep him going for a year or two longer. Keep up the good work!


A Stirling Engine turns a flow of heat into mechanical motion
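For what it is worth, the physics of the gift is easy to state: no heat engine, Stirling or biological, can beat the Carnot limit set by the temperatures it works between. Here is a minimal Python sketch; the 310 K and 293 K figures are illustrative guesses for body and room temperature, not measurements of Andrew.

```python
# Maximum (Carnot) efficiency of any heat engine working between two
# temperatures. Illustrative values only: body heat (~310 K) driving
# a toy Stirling engine that exhausts into room air (~293 K).

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Upper bound on the fraction of the heat flow convertible to work."""
    if t_hot_k <= t_cold_k:
        raise ValueError("hot reservoir must be hotter than cold reservoir")
    return 1.0 - t_cold_k / t_hot_k

body, room = 310.0, 293.0  # kelvin
print(f"Carnot limit: {carnot_efficiency(body, room):.1%}")  # ~5.5%
```

So even in principle only a few percent of the heat flow becomes motion, which perhaps explains why the engine on the desk spins rather gently.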

What would things sound like if we had six ears?

January 26, 2014

If we detected sound in the same way we detected colour we would have six ears. What?

We see things and hear things in quite different ways.

We have two ears so we can sense the direction from which sound emanates. And each ear has two dimensions of hearing. What do I mean by that? Well the first dimension is loudness and quietness. The second is pitch.

As a pure sound tone increases in frequency we detect the sound as changing ‘pitch’. But a pure tone at 440 Hz (‘concert A’) is not qualitatively different from a pure tone at 256 Hz (‘middle C’).

With light things are more complicated. Each eye gives us an image of the world, but each small region within that image elicits an experience we call ‘colour’. We have the experience of brightness, which is akin to loudness. But the sensation of ‘colour’ is quite different from the sensation of pitch.

As the frequency of an electromagnetic wave reaching a patch in your eye increases from:

  • 400,000,000,000,000 Hz to
  • 1,000,000,000,000,000 Hz

…our sensation changes qualitatively. Let’s call 1,000,000,000,000 Hz a terahertz (THz), so this range is from 400 THz to 1000 THz.

As the frequency increases, the light first elicits the sensation of red, then yellow, then green and finally blue and violet. All that has changed is the frequency of the electromagnetic wave, but our sensation has changed qualitatively: red is not just a ‘low blue’; green is not a shade of red. These are completely distinct sensations.

The reason is that at daylight levels of light intensity, each single frequency of light stimulates not one, but three ‘sensors’ (called cone cells) at each location. Simplifying considerably, each type of cell when stimulated individually elicits one of the three basic ‘colour’ sensations: red, green or blue.

As the frequency changes each of the three sensors is excited to different extents, and our overall sensation of ‘colour’ at each location in the image is a combination of the three qualitatively different basic sensations.

But what if sound worked like that?

Well, if our sensation of sound pitch worked in a similar way to our ‘colour’ sensations of red, green or blue, then we would have three ears on each side of our head – or at least three sensors inside each ear. Let’s stick with the six-ear idea because frankly it is more dramatic.

However, each ear would respond to just a single range of frequencies. The ranges would have to overlap, otherwise there would be some frequencies that would elicit no sensation at all, i.e. we would be deaf to those frequencies. But stimulating each sensor would elicit just a single ‘note’ or ‘tone’ or ‘pitch’ no matter what the actual frequency. The notes would be a kind of audio-red, audio-green and audio-blue.

A single pure tone of sound would then elicit an audio-‘colour’ depending on the relative stimulations of the single sensor within each of the three ears on each side of our head.
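For the curious, this hypothetical three-sensor scheme is easy to simulate. In the sketch below the sensors have overlapping Gaussian responses; the centre frequencies and bandwidths are invented purely for illustration and have no physiological basis.

```python
import math

# A sketch of the hypothetical 'three-ear' model: three sensors with
# overlapping Gaussian responses. The centre frequencies and bandwidths
# are made up for illustration only.

SENSORS = {  # name: (centre frequency in Hz, bandwidth in Hz)
    "audio-red":   (250.0, 300.0),
    "audio-green": (1200.0, 1000.0),
    "audio-blue":  (5000.0, 3500.0),
}

def audio_colour(frequency_hz: float) -> dict:
    """Relative stimulation of each sensor by a pure tone."""
    return {name: math.exp(-((frequency_hz - f0) / bw) ** 2)
            for name, (f0, bw) in SENSORS.items()}

# A 440 Hz tone ('concert A') stimulates mostly the low 'audio-red' sensor:
print(audio_colour(440.0))
```

Because the Gaussians overlap, every frequency excites all three sensors to some degree, and it is only the ratio of the three stimulations – the audio-‘colour’ – that would distinguish one pure tone from another.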

Now of course, this whole idea is nonsense. But it did strike me as interesting that we have evolved such distinct ways of seeing and hearing. On reflection one can imagine reasons why the different detection mechanisms might make sense.

  • For light, the frequencies we can detect range from 400 THz to 1000 THz. This is a ratio of only 2.5, and if these were musical tones they would cover little more than a single octave. And yet within this small frequency range our sensation of colour can distinguish millions of distinct colours, giving us a phenomenal ability to discriminate between subtly different colours.
  • For sound, the frequencies we can detect range from 20 Hz to 20,000 Hz. This is a ratio of around 1000, or just under ten octaves. The single sensing system in each ear needs to have as wide a range as possible to let us hear the sounds around us.
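The octave comparison in the two bullet points is just a base-2 logarithm of the frequency ratio, which is easy to check:

```python
import math

# How many octaves does each sense span? One octave is a factor of two
# in frequency, so the span is log2(f_max / f_min).

def octaves(f_min: float, f_max: float) -> float:
    return math.log2(f_max / f_min)

light = octaves(400e12, 1000e12)   # 400 THz to 1000 THz
sound = octaves(20.0, 20_000.0)    # 20 Hz to 20 kHz

print(f"vision:  {light:.2f} octaves")   # about 1.3
print(f"hearing: {sound:.2f} octaves")   # just under 10
```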

Anyway. As Forrest Gump might have said: “That is all I have to say about that”.

BP Energy Outlook 2035: Room for Improvement

January 22, 2014

BP’s prediction of the CO2 Emissions out to 2035. We can take this as a graph showing what will happen if we let things continue as they are. The International Energy Agency consider this a trajectory towards a catastrophic 6 C rise in global temperature. The dotted line shows the emissions path required to keep atmospheric CO2 concentrations below 450ppm. Graph extracted from the BP Energy Outlook 2035.

BP have recently published their Energy Outlook for 2035 and despite its clarity and thoroughness, it makes grim reading. Depressingly, I find it completely believable.

Cutting a very long story short, BP see energy supplies expanding to meet the needs of people in the developing world – in itself, a ‘Good Thing’.

At the conclusion of their presentation they ask: how will the world ‘meet the global energy challenge’? Will energy be…

  • Sufficient and available? ‘Yes’ – they answer – because of new energy sources and efficiency improvements
  • Secure and reliable? ‘Yes and No’ – they answer – it will improve for some, but remain a concern for others
  • Sustainable? Room for improvement

It is the glib understatement of this last remark which I find numbing. Continuing to increase the rate of carbon dioxide emissions to 45 billion tonnes every year does not represent ‘room for improvement’. It represents a global catastrophe.

At the end of this article are links where you can investigate the various parts of the report. It is fascinating, and a great resource, but I will leave you to investigate by yourself. Here I want to ask a simple question: could it be wrong? Is there anything which could stabilise annual carbon dioxide emissions or even reduce them?

This report is about ‘the future’ and so by definition, we just don’t know what is going to happen. The report makes many assumptions and inevitably many of them will be wrong.

However the key assumption underlying all others is that global markets will continue to operate to meet global energy needs. And the report assumes that there will be no intergovernmental action on climate emissions.

If their assumption holds, then there is every chance that what happens in the next 20 years will be something like their prediction. Given the previous success rate of intergovernmental agreements I think we have to conclude this is fairly likely.

The International Energy Agency have their own World Energy Outlook involving various scenarios, three of which are summarised below.

  • The 6 °C Scenario (6DS) is an extension of current trends, i.e. just what BP predict. Average global temperature rise is projected to be at least 6 °C in the long term.
  • The 4 °C Scenario (4DS) takes into account recent pledges made by countries to limit emissions. This is an ambitious scenario that requires significant changes in policy and technologies.
  • The 2 °C Scenario (2DS) is an emissions trajectory that would give an 80% chance of limiting the average global temperature increase to 2 °C. It is broadly consistent with the World Energy Outlook 450 Scenario through 2035.

In my opinion, 2DS is already unachievable, 4DS is the best we can hope for with unprecedented intergovernmental agreements, and 6DS – which is where BP think we are heading – would be a disaster for humankind.

The only way I can see to avoid the catastrophic path charted by BP is by intergovernmental agreement to restrict carbon dioxide emissions.

This will have to allow for growth in non-OECD emissions, and so it will require that countries like the UK – which have emitted carbon dioxide freely for a century – restrict their emissions dramatically.

I am not in any kind of denial of how hard it will be to “restrict our emissions dramatically”, but the alternative is a path of insanity.


BP Energy Outlook Links

  • This page is the main page for the Energy Outlook including a video summary by Christof Rühl, BP’s chief economist.
  • This page contains links to a pdf booklet, Excel tables, and a presentation available as a Powerpoint or pdf.
  • This page contains a link to BP’s statistical energy review – a valuable summary of current data


The Golden Rule of Experimental Science

January 19, 2014

A Golden Rule for Experimental Science: Do it quick: Then, do it right.

I just thought I would share with you a piece of wisdom I acquired around 25 years ago when I worked at Bristol University.

Over tea one day, Don Gugan propounded what he called the ‘Golden Rule of Experimental Physics’. At the time I did not fully appreciate its simple perfection. But now I am utterly convinced of its embodied truth and I pass it on (sometimes repeatedly and tediously) in the manner of an ‘old person’. It is this:

Do it quick. Then, do it right.

The rule encapsulates the simple fact that when one is doing something for the first time, it is hard to anticipate all the things which might happen. Here are two examples:

Example#1 It is common for students in laboratory sessions to begin measurements, carefully noting down everything they anticipate might vary, only to find that nothing changes! Or they find that something they hadn’t thought of changes so much that the experiment is a waste of time. Or they find that they have taken so much data in a ‘dull’ regime that they run out of time to make measurements in the ‘interesting’ regime.

Example#2 The rule applies to professional scientists (such as myself) undertaking complex measurements. For example, before making my recent measurement of the Boltzmann constant, we were able to borrow an apparatus (thank you Laurent: we remember your kindness) and quickly cobble together an experiment.

Carrying out a ‘quick and dirty’ experiment gave us an idea of the relative sensitivity of a large number of factors that we previously did not have the experience to properly assess. And we discovered several things in the process, most importantly that we had massively underestimated the amount of water vapour in our otherwise ultra-pure gas.

And reading the excellent ‘Thinking, Fast and Slow’ by Daniel Kahneman, I realise that Don’s ‘Golden Rule’ is merely a specific example of a wider rule of thumb. Kahneman calls it seeking an ‘outside view’ when planning an activity, and he recommends it as a remedy for a cognitive bias towards optimism that he calls the ‘planning fallacy’.

This refers to the tendency when making a plan to consider all factors one knows about, but not to actively seek knowledge of other factors that one might not have considered. The Golden Rule embodies an experimental approach towards seeking knowledge of relevant unanticipated factors.

Another way is reading about other people’s experience, but this is hard for experimentalists. Mike Moldover from NIST is fond of reminding people of what he calls Dean Ripple’s guideline that:

Two weeks in the laboratory can easily save a whole afternoon in the library.

This quotation sums up perfectly the fact that ‘spending an afternoon in the library’ can feel harder than ‘getting on with the project’. Reading other people’s work is hard – and it takes time to work out the relevance of their experience to yours.

Happy experimenting. 🙂


Don Gugan denies being the original source of this quotation. He writes

“Not my formulation, though I believe it absolutely. First heard it from Cecil Reginald Burch (FRS and much else besides) in about 1960, and suggested it as the guiding principle in the ‘Project Manual’ when I was in charge of third year undergraduate projects.

CRB was an amazing fellow, Bob Chambers was/is a great admirer: they don’t make them like him any more.”

CR Burch is recorded as being a pioneer of the optical design of microscopes and telescopes. And during this research he was frustrated at being unable to create a good enough vacuum to allow the evaporation of aluminium onto a surface to create a mirror.

So in furtherance of making better telescope mirrors, he discovered silicone oils which could be used to make better vacuum pumps. What a range of achievement!

Float or Sink: Don’t believe everything you see on the internet

January 15, 2014

Wow! A can of Diet Coke floats and a can of regular Coca Cola sinks! Really? Well No. Like so many things you see on the internet, it is not so simple.

While discussing recent news stories on the dietary ‘value’ of sugar, a colleague told me he had seen a surprising video on the internet.

It showed Steve Spangler putting cans of Coca Cola and Diet Coke into a tank: the can of Diet Coke floated while the can of Coca Cola sank. The celebrity scientist then related this to the amount of sugar in the drink. See what you think:

I told my colleague that I didn’t believe anything I saw on the internet, but that the demo was convincing: it seemed as though nature itself was voting on the evils of sugar. A sort of ‘Witch Trial’ for harmful additives.

However being the person I am, I thought I would just check. I bought a can of each drink at lunchtime, went to the lab and did the test. This is what I saw:


This is what I saw when I put the two cans into water: they both floated.

Yes, that’s right: both cans floated. If you look closely you can see that the Coca Cola is lower in the water, but it did not sink. So how did I get the picture at the top of the page? Simple: I heated the water.

The density of water falls slightly with increasing temperature (See the graph at the bottom of this article) and when the water was around 36 °C at the top and 32 °C at the bottom, the Coca Cola sank. But when the water cooled to between 33 °C at the top and 30 °C at the bottom, the Coca Cola floated.

I made measurements of the mass of the cans, full and empty, and found that the 330 ml of Coca Cola weighed 340.2 g while 330 ml of Diet Coke weighed 330.8 g. And hence I made an estimate for the density of the two fluids, and yes, Coca Cola appears to be 2.8% denser than Diet Coke.

But then the label tells you that 330 ml of Coca Cola contains 15.9 g of sugar, whereas Diet Coke contains Aspartame, which is weight-for-weight around 200 times sweeter. So there is only about 0.08 g of Aspartame in a can of Diet Coke. So this isn’t really ‘news’ of any kind.
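The arithmetic behind these figures is straightforward; here it is as a short Python check. The 330 ml nominal volume and the label values are taken from the text above.

```python
# Rough density estimates from the masses quoted above, in g/ml.
VOLUME_ML = 330.0
MASS_COKE_G = 340.2   # full can contents, regular Coca Cola
MASS_DIET_G = 330.8   # full can contents, Diet Coke

density_coke = MASS_COKE_G / VOLUME_ML
density_diet = MASS_DIET_G / VOLUME_ML
excess = (density_coke / density_diet - 1.0) * 100.0
print(f"Coca Cola is {excess:.1f}% denser than Diet Coke")  # ~2.8%

# Aspartame is roughly 200 times sweeter than sugar, weight for weight,
# so the sweetener mass in a can of Diet Coke is tiny:
sugar_g = 15.9
aspartame_g = sugar_g / 200.0
print(f"about {aspartame_g:.2f} g of Aspartame per can")  # ~0.08 g
```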

I couldn’t fully account for the 9.4 g difference in mass, because the density of the fluids is affected by all the other ingredients, which could differ between products.

However both fluids were denser than water (by 0.5% and 3.3% respectively). And whether a can floats or sinks depends only partially on the density of the liquid in the container.

It also depends on the ‘air’ gap and the weight of the can. So Steve Spangler’s demo relies on a simple coincidence between the average density of the US-size cans and the density of water at about room temperature.
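To see why the average density is what matters, consider a rough buoyancy estimate. The can mass and displaced volume below are illustrative guesses, not measurements from this post:

```python
# Whether a can floats depends on its *average* density: total mass over
# total displaced volume, headspace included. The can mass and volume
# here are illustrative guesses, not measured values.

FLUID_MASS_G = 340.2          # regular Coca Cola, from the post
CAN_MASS_G = 13.0             # guess: empty aluminium can
DISPLACED_VOLUME_ML = 355.0   # guess: external volume incl. headspace

average_density = (FLUID_MASS_G + CAN_MASS_G) / DISPLACED_VOLUME_ML
print(f"average density ~ {average_density:.3f} g/ml")  # ~0.995

# Water is ~0.998 g/ml at 20 C but only ~0.994 g/ml at 35 C, so a can
# whose average density sits near 0.995 g/ml can float in cool water
# and sink in warm water -- exactly the trick used above.
```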

For larger containers, the mass of the container will make less difference and so I thought that for a 2 litre PET bottle, both fluids would probably sink. Was I right? No.

The Coca Cola weighed in at 2.129 kg and the Diet Coke was 85 g lighter at 2.044 kg. But in fact these bottles have a larger air gap and so – even when heated to 48 °C – both bottles floated.


Two 2 litre bottles of Coca Cola and Diet Coke floating in water – even when heated to 48 Celsius. The blue lines scrawled on the photograph show the water level around each bottle and the value on the thermometer.



The density of pure water plotted as a function of temperature and fitted with a quadratic polynomial. The entire vertical range represents just 1% change of the density.
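For anyone who wants numbers rather than a graph, a quadratic through standard handbook densities reproduces the curve well over this range. The three tabulated values below are standard handbook figures for pure water.

```python
# Quadratic (Lagrange) interpolation through three handbook densities
# of pure water, in g/ml: 0.99821 (20 C), 0.99565 (30 C), 0.99222 (40 C).

POINTS = [(20.0, 0.99821), (30.0, 0.99565), (40.0, 0.99222)]

def water_density(t_c: float) -> float:
    """Quadratic through the three handbook points."""
    (x0, y0), (x1, y1), (x2, y2) = POINTS
    return (y0 * (t_c - x1) * (t_c - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (t_c - x0) * (t_c - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (t_c - x0) * (t_c - x1) / ((x2 - x0) * (x2 - x1)))

# Evaluate at the temperatures used in the can experiment:
for t in (30.0, 33.0, 36.0):
    print(f"{t:.0f} C: {water_density(t):.5f} g/ml")
```

The fall from about 0.9957 g/ml at 30 °C to about 0.9937 g/ml at 36 °C is small, but it is enough to tip a can whose average density sits just below 0.998 g/ml from floating to sinking.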

Why do I punish myself?

January 13, 2014

An image from the GWPF Web Site purporting to show that there was more sea-ice in December 2013 than at any time in history. Can that really be true? (Short Answer: No)

I ought to know better than to visit the web site of the Global Warming Policy Foundation. But I did: these things happen.

And there I saw the graph (above) which purports to show that there is currently (January 2014) more sea ice than there has ever been. The graph looks poorly plotted and came with no explanation, but it makes a very strong claim, so I thought I would check whether it was true.

So I popped over to NSIDC and downloaded the relevant data files for the Northern and Southern hemisphere sea-ice indices which record daily satellite measurements of sea-ice extent since 1978. The first graph below shows the data for the Northern and Southern hemisphere indices plotted together. This graph shows just the last few years of data.


Data from the US NSIDC showing Northern and Southern Hemisphere Sea Ice Extent since 2005. Click for a larger version

The graph below shows all the data taken nearly every day since 23 August 1978. On this graph I have also plotted the yearly averages and fitted a straight line to these to show the long-term trend.


Data from the US NSIDC showing Northern and Southern Hemisphere Sea Ice Extent since 1978. Click for a larger version.

This data highlights interesting differences between the two poles – these are discussed further on this excellent web page at NSIDC. You can see that both the northern and southern hemispheres hold on average about the same amount of sea ice – around 12 million square kilometres (km²) – but every year this grows and shrinks with the seasons.

  • In the southern hemisphere the sea ice falls to around 3 million km² in the antarctic summer and grows to around 18 million km² in the antarctic winter, i.e. around 15 million km² of sea ice re-freezes every year.
  • In the northern hemisphere the sea ice falls to around 5 million km² in the arctic summer and grows to around 15 million km² in the arctic winter, i.e. around 10 million km² of sea ice re-freezes every year.

So what of global sea ice? To find that, I added the two data sets together; they are plotted below along with a yearly average. Because the extents of the antarctic and arctic freezes are different, the global amount of sea ice oscillates through the year. However, the annualised average global sea-ice extent is falling.
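The combination step is simple enough to sketch. In the snippet below a few made-up records stand in for the NSIDC daily index files, which in reality need downloading and parsing first:

```python
from collections import defaultdict

# Sketch of the combination step: add the two hemisphere extents day by
# day, then average within each year. These tiny synthetic records stand
# in for the real NSIDC daily sea-ice index files.
# Each record: (year, day-of-year, extent in million km^2) -- made up.
north = [(2012, 1, 13.0), (2012, 180, 6.0), (2013, 1, 13.2)]
south = [(2012, 1, 10.0), (2012, 180, 14.0), (2013, 1, 9.8)]

# Global extent = northern + southern extent on each day:
global_extent = {(y, d): e for y, d, e in north}
for y, d, e in south:
    global_extent[(y, d)] = global_extent.get((y, d), 0.0) + e

# Annualised average of the global extent:
yearly = defaultdict(list)
for (y, _), e in global_extent.items():
    yearly[y].append(e)
annual_mean = {y: sum(v) / len(v) for y, v in yearly.items()}
print(annual_mean)  # {2012: 21.5, 2013: 23.0}
```

With the real files the procedure is identical, just with a record for nearly every day since 1978.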


Data from US NSIDC showing global sea ice extent versus year. The trend is downward by the difference between the two trends in the previous graphs.

And on the graph below I show how the extents of sea-ice in the northern and southern hemispheres are phased to yield a global maximum near the start of December.


Data from US NSIDC showing the northern hemisphere, southern hemisphere and global sea ice extent plotted versus the day of the year. The data is the same as that shown in the above two graphs, but I have ignored the ‘year’ label. It shows quite clearly the relative phasing of the sea-ice changes in the two hemispheres, and that December is the month in which global sea-ice extent is largest.

So where does the data plotted by the GWPF come from? Well, I can’t tell for sure, but I think it may simply be mislabelled. You can see that it lies between 17 and 18 million km², whereas the global sea-ice extent for December is around 25 million km². So it might correspond simply to the Southern Hemisphere data, which is indeed growing (slowly) and may well have reached its maximum value – but even that does not tie up precisely.

The GWPF do supply two links. The first links to a web site called Real Science, which just reproduces the graph without any analysis. The second links to a data set which looks a bit like the Global Sea Ice Index but appears to be about 5 million km² low. It too shows a trend which falls with time.

And where does the red line plotted on the graph come from? I have no idea.

So overall this is as wrong as a wrong thing. But I have learned a little, so the exercise has not been quite as useless as a useless thing.

Sun Spotted in Teddington

January 9, 2014

A picture of the Sun taken on 9th January 2014 from Teddington UK. Click for more detail. Image is courtesy of Peter Woolliams 2014.

The Sun – as you know well – shines night and day, but direct visibility of the Sun has been in short supply in Teddington these last two weeks.

But at break this morning I caught two of NPL’s astronomical gurus sipping lemon tea and enjoying the sunshine streaming through the windows. I sat with them and it felt so good.

We chatted about telescopes and things, and I must have said something amazing because later in the day Peter Woolliams sent me an e-mail with the picture above.

“Michael, inspired by you (and the unusual presence of the sun) I dashed back home at lunchtime to grab the first bit of solar image data for a few months, just caught the sun before the clouds rolled in… and the netbook battery expired…Rapid processing, see attached….”

I was astonished. First of all at the beauty of the image, and second at the rapidity with which Peter had got to work. Another colleague joined in the discussion and she sent me a link to the astonishing Helioviewer site.

At Helioviewer you can look at satellite images of the Sun for any particular moment in the last few years and create your own movies, such as the one below. It shows 6 hours of images (not much happens) but it is astounding nonetheless – especially when you notice the scale image of the Earth in the lower left corner.

And so I found myself wondering which image to be more moved by: the breathtaking Helioviewer with its movies and whizzy interface, or Peter’s astonishing image – surely the most astounding image produced in Teddington today.

Between the two I would vote for Peter’s, but the winning image of the day is one I can’t share with you: it exists only in my mind.

Inspired by Peter and Andrea Sella I looked out the skylight at Jupiter. Using first binoculars, then a small telescope, and then finally the telescope I bought for Maxwell I saw Jupiter’s disc and its 4 Galilean moons perfectly arranged in a line.

Seeing the image of Jupiter and its satellites which had astonished Galileo, and provided crucial evidence for Newton’s theory of Universal Gravitation, I felt in touch not only with the vastness of the universe, not only with my family who I dragged upstairs to see it, but with history too.


How did Peter create the image above?

“Images taken during my lunchbreak (just before the sun vanished behind cloud where it has been for the past few months!). 80mm William Optics refractor with Lunt B600 CaK filter onto a DMK41 mono camera. 600frame stacks processed in Autostakkert, post processed in Registax6 (wavelets and gamma stretch), Microsoft ICE (to stitch the disk together from 2 images) and GIMP to colorize. AR1944 is very complex and generated an X class flare yesterday. The sun is very low in the sky so higher magnification images were impossible, it’s nice to see the sun putting on a good show given all the warnings of it being past Solar Maximum!”

Looking at clouds from both sides.

January 8, 2014

Looking out of an aeroplane window it is easy to see how significantly clouds reduce the flow of solar energy onto the Earth’s surface.

It’s the New Year, but I can’t get the issue of Climate Change out of my mind.

The UK has been a bit wet over the break and there have been some storms, but nothing compared with the extremes experienced in other places. So although this weather hasn’t struck me as particularly exceptional, it makes me wonder how we would cope if we were faced with really serious climate change.

And over the break several news sources (The Guardian, Ars Technica) reported on a paper in Nature which claims to have resolved some of the uncertainty in predictions of the extent of future climate change. The uncertainty relates to the role of clouds in determining the surface temperature of the Earth.

The sensitivity of the average surface temperature of the Earth to a doubling of carbon dioxide is currently estimated to be in the range 1.5 °C to 4.5 °C. That is a simple statement with a big range of uncertainty – from the serious to the catastrophic.

The ‘doubling’ refers to a change from the historical CO2 concentration of 280 ppm to a hypothetical future state of 560 ppm. Currently the concentration is just approaching 400 ppm and rising at about 2 ppm per year, so if we carry on as we are then this doubling is about 80 years away.
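The ‘about 80 years’ figure is just a linear extrapolation at the current growth rate (a rough assumption, since emissions are themselves rising):

```python
# Years until atmospheric CO2 doubles its pre-industrial concentration,
# assuming (roughly) that the current growth rate simply continues.

PRE_INDUSTRIAL_PPM = 280.0
CURRENT_PPM = 400.0
GROWTH_PPM_PER_YEAR = 2.0

years_to_doubling = (2 * PRE_INDUSTRIAL_PPM - CURRENT_PPM) / GROWTH_PPM_PER_YEAR
print(f"{years_to_doubling:.0f} years")  # 80 years
```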

Currently the Earth appears to have warmed by about 0.8 °C due to an additional 120 ppm of CO2. So if everything were in a steady state (which it most definitely isn’t) one might estimate the doubling sensitivity to be about 1.9 °C – at the low end of the sensitivity range.
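The 1.9 °C figure comes from scaling the observed warming linearly up to the full 280 ppm rise of a doubling. The radiative effect of CO2 actually grows roughly logarithmically with concentration, so this is strictly a back-of-envelope estimate:

```python
# Naive steady-state sensitivity estimate: scale the observed warming
# linearly to the full 280 ppm rise of a doubling. (CO2 forcing is
# roughly logarithmic in reality, so this is only a rough figure.)

observed_warming_c = 0.8
observed_rise_ppm = 120.0     # 400 ppm - 280 ppm
doubling_rise_ppm = 280.0     # 560 ppm - 280 ppm

sensitivity = observed_warming_c * doubling_rise_ppm / observed_rise_ppm
print(f"{sensitivity:.1f} C per doubling")  # ~1.9 C
```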

However, the warming effect of CO2 emissions is expected to be greater than this. You can see why by considering the additional 2 ppm of CO2 emitted last year. This has not yet had time to warm the Earth very much and even if we emitted no more CO2 at all – last year’s CO2 will still keep warming the Earth for decades to come.

The source of uncertainty that the paper refers to is the role of clouds. As CO2 warms the Earth we expect to find extra water vapour in the atmosphere and thus in general we expect to find extra clouds. Depending on where and when these clouds occur they can either add to the warming (for example, if they occur at night) or provide cooling (for example, if they occur at low level in the day).

I have read the paper a couple of times now and I think I can just about understand what they are saying, though many of the details escaped me. As far as I can tell they examined 43 climate models to see how they predicted evaporated water should be re-distributed through the atmosphere, and they noticed two correlations:

  • The models in which the water vapour went higher in the atmosphere predicted greater sensitivity to CO2 doubling.
  • The models in which the water vapour went higher in the atmosphere better matched the experimental data for the distribution of moisture in atmosphere.

In other words, the models that best describe the real distribution of atmospheric moisture (and so presumably are most reliable) also predict sensitivity to CO2 doubling away from the lower end of the range: they think 3 °C is the lowest realistic estimate.

This is just one paper, but the analysis is clever and exploits a much larger body of work used in building the models. But if it is correct it is very bad news, because doubling of CO2 appears to be all but inevitable.

P.S. This is the ‘Editor’s Summary’ from Nature.

This paper offers an explanation for the long-standing uncertainty in predictions of global warming derived from climate models. Uncertainties in predicted climate sensitivity — the magnitude of global warming due to an external influence — range from 1.5 °C to 5 °C for a doubling of atmospheric CO2. It has been assumed that uncertainties in cloud simulations are at the root of the model disparities, and here Steven Sherwood et al. examine the output of 43 climate models and demonstrate that about half of the total uncertainty in climate sensitivity can be traced to the varying treatment of mixing between the lower and middle troposphere — and mostly in the tropics. When constrained by observations, the authors’ modelling suggests that climate sensitivity is likely to exceed 3 °C rather than the currently estimated lower limit of 1.5 °C, thereby constraining model projections towards more severe future warming.

Signs of change

January 6, 2014

An electric car charging in central London. How long might it be before such a sight is commonplace?

I don’t often walk through central London: I find the place mystifying and alienating. But one can sometimes see things there before they become common in other places.

Earlier in 2013 I remember spotting hydrogen cylinders on top of a fuel-cell powered bus. And just before Christmas as I shopped for gifts, I wandered past two electric cars being charged. I had previously seen the charging stations all over the place, but I had never seen them being used.

So how long might it be before such a sight becomes commonplace? Well I don’t know – it’s a question about the future – but it is likely to be decades. And of course electric cars are currently mainly powered by coal and gas burned in power stations, not renewable energy.

Scientific American recently published an article about the slow rate at which ‘new’ sources of energy have historically been adopted. I adapted the data and re-plotted it below.


Graph showing the number of years it took various fuel sources to reach a given share of world energy supply – after they reached 5%. What realistic growth rate can we expect for renewables (3.5% in 2012)?

Notice that throughout the 19th Century, coal was never more than 50% of world energy supply: the world was still burning wood. And notice that the ‘switch to gas’ is still underway.

Each of these transitions represents colossal financial investments from which people will not simply walk away. And since ‘World Energy Supply’ is now vastly larger than it was in 1850, it is inevitable that change will be slow.

But the lesson of this graph is this: Take Heart. Looking back, coal, oil and gas seem like they were somehow ‘obvious’ or inevitable, but that is probably just hindsight. Was it obvious that we would overcome the seemingly impossible engineering challenges required to sink mines, drill wells and capture natural gas?

So when it comes to renewables – and this refers only to ‘modern’ renewables: mainly wind and solar – the rate of rise in usage is unlikely to exceed that seen for coal, oil or gas. But that does not mean that change is not coming.

The slow rate of growth is not something to be proud of, or to rejoice in: but neither is it a cause to berate ourselves and say ‘nothing is happening’. It’s just a measure of how much energy we use, the colossal investment in existing infrastructure, and how much more we need to do.
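To get a feel for what a ‘realistic growth rate’ implies, here is a back-of-envelope sketch. The 3.5% share in 2012 comes from the graph caption; the 7% annual growth of that share and the 25% milestone are my own illustrative assumptions, not figures from the Scientific American data.

```python
import math

# If renewables' share of world energy supply grows exponentially,
# share(t) = start * (1 + r)^t, then the time to reach a target share is
# t = ln(target / start) / ln(1 + r).
start_share = 3.5      # percent of world energy supply in 2012 (from the caption)
target_share = 25.0    # percent: an arbitrary milestone (my assumption)
growth_rate = 0.07     # assumed annual growth of the *share* (my assumption)

years = math.log(target_share / start_share) / math.log(1 + growth_rate)
print(f"Roughly {years:.0f} years to grow from {start_share}% to {target_share}%")
```

Even with a fairly brisk assumed growth rate, the answer comes out at around three decades – consistent with the historical transitions plotted in the graph.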

Hopefully new sights will become visible to us in the decades ahead as we build a new world which doesn’t require fossil fuels to make it work.

Telescope for Christmas?

January 3, 2014
Did you buy a telescope like this for Christmas? Well please don’t expect to ever see images like the ones shown.

A few years ago I bought a telescope for my son: Sorry Maxwell, it wasn’t Santa Claus.

I bought a decent model, a Meade ETX-80, which had a mount that could automatically track stars and conveniently packed away into a special back-pack. It cost about £300.

It was easily the most expensive piece of optics I had ever bought, costing even more than my spectacles. But despite all my knowledge, and all the reviews I read, I really didn’t have any idea what I would be able to see.

Looking at terrestrial targets – distant chimney pots and the like – the telescope was astounding. It was easy to see insects on bricks 100 metres away. But it was much harder to find astronomical targets.

With help from an expert colleague, Maxwell and I managed to attach a web-cam and after many hours we got some nice pictures of the moon. We were really pleased.

But viewing ‘deep space’ objects such as the galaxies shown in the advertisement was another matter. It was just about possible to detect these galaxies and get a sense of them, but it was impossible to see them in anything resembling the detail shown.

It was not really a matter of magnification: it was a matter of faintness. The telescopes just don’t capture enough light, and our eyes are not sensitive enough.
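The light-gathering point can be put in numbers. The 80 mm aperture is that of the ETX-80 mentioned above; the 7 mm dark-adapted pupil is a typical figure I am assuming for illustration. Light grasp scales with the area of the aperture, i.e. with diameter squared.

```python
# Rough comparison of light gathered by the telescope versus the naked eye.
aperture_mm = 80.0   # Meade ETX-80 objective diameter
pupil_mm = 7.0       # typical dark-adapted eye pupil (assumed figure)

# Light grasp scales with collecting area, so with the square of the diameter.
gain = (aperture_mm / pupil_mm) ** 2
print(f"The telescope gathers about {gain:.0f} times more light than the naked eye")
```

A gain of a hundred-odd times sounds like a lot, but distant galaxies are so faint that even this is nowhere near enough to reveal the detail shown in long-exposure photographs.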

I received the advert on an e-mail from a well-known camera shop, and I imagined such telescopes being bought for children who would then be disappointed. So I just thought I would mention it.

Seeing astronomical objects with your own eyes through a telescope is a transformatively positive experience: I remember buying a £40 telescope from pedlars in Greece a few years ago and watching the moons of Jupiter changing from one evening to the next: I was astounded. And the fact that the light wave which reached my eyes was the same light wave that had left Jupiter less than an hour previously was part of the wonder. Seeing it on a screen would not have been the same.
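That light-travel time is easy to check. The 4.2 AU figure I use below is roughly Jupiter’s distance from Earth at its closest approach (it can be as much as about 6.2 AU on the far side of the Sun).

```python
# Light-travel time from Jupiter to Earth, using approximate figures.
AU_KM = 149.6e6          # kilometres per astronomical unit
C_KM_S = 299_792.458     # speed of light in km/s

distance_au = 4.2        # Jupiter at its closest to Earth (approximate)
minutes = distance_au * AU_KM / C_KM_S / 60
print(f"Light from Jupiter takes about {minutes:.0f} minutes to reach us")
```

So the light arriving at the eyepiece left Jupiter between roughly 35 and 50 minutes earlier, depending on where the two planets are in their orbits.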

But failing to see astronomical objects can be a similarly powerful negative experience: confirming one’s anxieties about the difficulty of ‘science’.

My top tip is to buy an astronomy magazine and attend a local astronomy club. In my experience you will find the people a bit ‘odd’: but they will be delighted to let you see what their years of experience have achieved.

Update: For my Birthday, I was given a book on the history of telescopes – An Acre of Glass by J B Zirker. To be honest, it’s not a very well-written book, but I do find the subject interesting and it has many fascinating details which a better-written book might have left out.

The book makes it clear that the last 50 years has seen astonishing progress in astronomical imaging, and this is likely to continue for many decades to come.

But I fear the perfect images of the cosmos which can now be produced routinely have spoiled us. They are beautiful and mysterious, to be sure. But in the same way that an unobtainable super-model can make one’s actual partner seem ordinary, somehow the perfect images which can never be seen directly, make the glimpses of distant galaxies viewed from Teddington seem less astonishing than they should be.
