Archive for the ‘Personal’ Category

Nuclear Fusion is Irrelevant

February 14, 2022

Click for a larger image. News stories last week heralded a major breakthrough in fusion research. Links to the stories can be found below.

Friends, last week we were subjected to a press campaign on behalf of the teams of scientists and engineers who are carrying out nuclear fusion research.

Here are links to some of the stories that reached the press.

  • BBC Story
    • “If nuclear fusion can be successfully recreated on Earth it holds out the potential of virtually unlimited supplies of low-carbon, low-radiation energy.”
  • Guardian Story #1
    • Prof Ian Chapman, the chief executive of the UK Atomic Energy Authority, said: “It’s clear we must make significant changes to address the effects of climate change, and fusion offers so much potential.”
  • Guardian Story #2 (from last year)
    • The race to give nuclear fusion a role in the climate emergency

The journalists add little to these stories – they mainly consist of snippets from press releases spun together to seem ‘newsy’. All these stories are colossally misleading.

Floating in the background of these stories is the idea that this research is somehow relevant to our climate crisis. The aim of this article is to explain to you that this is absolutely not true.

Even with the most optimistic assumptions conceivable, research into nuclear fusion is irrelevant to the climate crisis.

Allow me to explain how I have come to this conclusion.

Research into a practical fusion reactor for electricity generation can be split into two strands: an ‘old’ government-backed one and a ‘new’ privately-financed one.

The state of fusion research: #1 government-backed projects

The ‘old’ strand consists of research funded by many governments at JET in the UK and the colossal new ITER facility being constructed in France.

In this strand, ITER will begin operation in 2025, and after 10 years of ‘background’ experiments they will begin energy-generating experiments with tritium in 2035, experiments which are limited by design to just 4000 hours. If I understand correctly, operation beyond this limit will make ITER too radioactive to dismantle.

The key operating parameter for fusion reactors is called Q: the ratio of the heat produced to the energy input. And the aim is that by 2045 ITER will have achieved a Q of 10 – producing 500 MW of power for 400 seconds with only 50 MW of input energy to the plasma.

However ITER will only generate heat, not electricity. Also, it will not create any tritium but will instead only consume it. Following on from ITER, a DEMO reactor is planned which will have a Q value in the range 30-50, and which will generate electrical power, and be able to breed tritium in the reactor.

So on this ITER-proposed time-line we might expect the first actual electricity generation – maybe 100 MW of electrical power – in perhaps 2050.

And then assuming that these reactors take 10 years to build and that the design will evolve a little, it will be perhaps 2070 before there are ten or so operating around the world.

You may or may not consider research into a technology which will not yield results for 50 years to be a good idea. I am neutral.

But it is definitely irrelevant to our climate crisis: we simply do not have 50 years in which to eliminate carbon dioxide emissions from electricity generation in the UK.

And this is on the ITER-proposed timeline which I consider frankly optimistic. If one considers some of the technical problems, this optimism seems – to put it politely – unjustified.

Here are three of the issues I keep at the top of my file in case I meet some fusion scientists at the folk club.

  • Q is the ratio of the heat energy released by the plasma to the heat energy injected into it. But to be effective, a power plant must generate net ELECTRICAL energy. So we need to take account of the fact that thermodynamics limits electrical generation to ~30% of the thermal energy produced. Additionally we need to include the considerable amounts of energy used to operate the complex machinery of a reactor. So we really need a wider definition of Q: the ratio of output to input energies for the entire reactor. Sabine Hossenfelder has commented on this issue. But basically, Q needs to be a lot bigger than 10.
  • Materials. The inside of the reactor is an extraordinarily hostile place with colossal fluxes of neutrons passing through every part of the structure. After operation has begun, no human can ever enter the environment again – and it is not clear to me that a working lifetime of say 35 years at 90% availability is realistic. Time will tell.
  • Tritium. The reactor consumes tritium – possibly the most expensive substance on Earth – and for each nucleus of tritium destroyed, a single neutron is produced. The neutrons so produced must be captured to produce the heat for electrical generation. But the neutrons are also needed to react with lithium to produce more tritium. Since some neutrons are inevitably lost, the plan is for extra neutrons to be ‘bred’ by bombarding suitable materials with neutrons, which then produce a shower of further neutrons – 2 or 3 for every incident neutron. These neutrons can then, in principle, be used to produce tritium. But aside from being technically difficult, this breeding process also produces long-lived radioactive waste – something fusion reactors are claimed not to do.
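To see why a plasma Q of 10 is far from a working power plant, here is a minimal sketch of the plant-level energy balance. All the figures are illustrative assumptions of mine, not ITER specifications:

```python
# Illustrative plant-level energy balance for a fusion reactor.
# All figures are assumptions for the sake of the sketch, not ITER specifications.

plasma_Q = 10                    # ratio of fusion heat out to heating power injected
P_heating = 50e6                 # W injected into the plasma
P_fusion = plasma_Q * P_heating  # 500 MW of fusion heat

thermal_efficiency = 0.30        # typical steam-cycle conversion of heat to electricity
P_electric_gross = P_fusion * thermal_efficiency  # 150 MW of electricity

# Suppose the plant's own systems (cryogenics, magnets, pumps...) draw 100 MW
P_plant_systems = 100e6
P_electric_net = P_electric_gross - P_heating - P_plant_systems

print(f"Net electrical output: {P_electric_net / 1e6:.0f} MW")
```

On these assumed numbers, a plasma Q of 10 delivers no net electricity at all, which is why Q needs to be a lot bigger than 10.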

In short, when one considers some of these technical problems, optimism that this research path will produce significant power on the grid in 2070 seems to me to be unjustified.

But what about this new ‘breakthrough’?

The breakthrough was not a breakthrough. The experiment was undertaken because the walls of the previous reactor were found to absorb some of the fuel! So this ‘breakthrough’ represented a repeat of a previous experiment, but with new wall materials in place.

You can relive the press conference here.

Starting with a much larger amount of energy, they managed to produce 59 megajoules (MJ) of energy from fusion in about 5 seconds.

59 MJ is about 16.4 kWh of energy, which is sufficient to heat water for around 500 cups of tea, more than a cup of tea each for all the scientists and engineers working on the project.

For comparison, the 12 solar panels on my house will produce this easily in a day during the summer. To generate the energy in 5 seconds rather than 12 hours would require more panels: a field of panels roughly 200 m x 250 m, which would cost a little under 1 million pounds.
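As a sanity check on that comparison, here is the arithmetic. The output per square metre of solar panel (~200 W/m² at peak sun) is my own assumed figure:

```python
# Sanity check of the JET / solar-field comparison.
energy_J = 59e6          # the 59 MJ produced by JET
burn_time_s = 5          # duration of the fusion 'burn'
power_W = energy_J / burn_time_s
print(f"Average fusion power: {power_W / 1e6:.1f} MW")   # 11.8 MW

panel_W_per_m2 = 200     # assumed peak output of solar panels per square metre
area_m2 = power_W / panel_W_per_m2
print(f"Equivalent solar field: {area_m2:,.0f} m^2")     # 59,000 m^2
```

That is roughly 240 m x 250 m of panels, in the same ballpark as the field size quoted above.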

So the breakthrough is modest in absolute terms. But as I mentioned above, after billions more in funding, and another 20 years of research, the scientists expect to extend this generating ‘burn’ from 5 seconds to 400 seconds at a much higher power level.

In my opinion, JET and ITER are a complete waste of money and should be shut down immediately. The resources should be transferred to building out solar and wind energy projects alongside battery storage.

The state of fusion research: #2 privately-backed projects

The ‘new’ strand of fusion research consists of activities carried out primarily by privately-funded companies.

What? If the massive resources of governments can only promise fusion by 2070, how can private companies hope to make progress?

The answer is that JET and ITER were planned before a key technical advance was made, and they are committed to proceeding without incorporating that advance! It’s a multi-billion pound version of “I’ve started so I’ll finish“. It is utter madness, and doubly guarantees the irrelevance of ITER.

The technical advance is the development of superconducting wire which can create magnetic fields twice as large as was previously possible. It turns out that the volume of plasma required to achieve fusion scales as the inverse fourth power of the magnetic field. So doubling the magnetic field makes the final reactor potentially 16 times smaller!
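Assuming that scaling, the size reduction follows in one line:

```python
# Plasma volume for a given fusion power scales as 1/B^4 (the scaling quoted above).
def relative_volume(field_ratio):
    """Relative reactor volume when the magnetic field is multiplied by field_ratio."""
    return field_ratio ** -4

print(relative_volume(2))   # 0.0625, i.e. a reactor 16 times smaller
```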

This also makes reactors dramatically cheaper, requiring funding on the order of $100 million rather than billions of dollars. Critically, such reactors can exploit the concept of Small Modular Reactors (SMRs), which can be mass-produced in a factory and shipped to a site. Potentially the first reactors could be built in years rather than decades, and the technology iterated to produce advances.

I have written about this previously. With some qualifications, I think this activity is generally not crazy (it is certainly much less crazy than JET and ITER) but success is far from guaranteed.

A key unresolved question with this technology concerns its potential timeline for delivery of a working power plant.

The reactors face essentially the same problems as the much larger ITER reactor, and these are not problems that can be solved in months. So let’s suppose that the first demonstration of Q>1 is achieved in just 5 years (2027), and that solving all the technical problems with respect to electricity generation requires only a further 10 years (2037). Given the difficulties in planning, let’s optimistically assume that the first production plant could be built just 5 years after that, in 2042.

The ‘S’ in SMR means the reactors would be small, with a thermal output of perhaps 150 MW and an electrical output of perhaps 50 MW. This is small on the scale of a typical thermal generation plant. For example, Hinkley Point C is designed to output 3,200 MW of electrical power, i.e. more than 60 times more than a hypothetical SMR fusion reactor.

So if we assume a rapid roll-out and no technical or societal problems, then these reactors might deliver significant power to the grid by perhaps 2050. Nominally this is 20 years ahead of ITER.


With optimistic assumptions concerning technical progress, we might hope for fusion reactors to begin to make a significant contribution to the grid somewhere between 2050 and 2070, depending on which route is taken.

That is already too late to make any contribution to our climate crisis.

We need to deploy low-carbon technologies now. And if we have a choice between reducing carbon dioxide emissions now, or in 30 – 50 years, there is no question about what we should do.


We also need to consider the likely cost of the electricity produced by a fusion reactor.

Like conventional fission-based nuclear power, the running costs should be low. Deuterium is cheap and the reactor should generate a surplus of tritium.

The majority of the cost of conventional nuclear power is the cost of the capital used to construct the reactor. If I recall correctly, it amounts to around 95% of the cost of the electricity.

It is hard to imagine that a fusion reactor would be cheaper than a fission reactor – it would be at the limit of manageable engineering complexity. So we might imagine that the cost of fusion-generated electricity would be similar to the cost of nuclear power – which is already the most expensive power on the grid.

In contrast, the cost of renewable energy (solar and wind) has fallen dramatically in recent years. Solar and wind are now the cheapest sources of electricity in history. And their cost – along with the cost of battery storage – is still falling.

So it seems that after waiting all these years, the fusion-based electricity would in all likelihood be extraordinarily expensive.


The idea of generating electricity from nuclear fusion has been seen as a technological fix for climate change. It is not.

Even the most optimistic assumptions possible indicate that fusion will not make any significant contribution to electricity supplies before 2050.

This is too late to help with our climate crisis, which is happening now.

Additionally, the cost of the electricity might be expected to exceed the cost of power from conventional nuclear power stations – the most expensive electricity currently on the UK grid.

If as an alternative, we invested in renewable generation from wind, solar and tidal resources, together with ever cheaper storage, we could begin to address our climate crisis now in the knowledge that the technology we were deploying would likely only ever get better. And cheaper.



The James Webb Telescope: it’s all done with mirrors

January 26, 2022

Click image for a larger version. The James Webb Telescope has reached the L2 point!

Friends, Hurray! The James Webb Space Telescope (JWST) has deployed all its moveable parts and reached its lonely station at the L2 point, far beyond the Moon.

In a previous article I mentioned that back in 2018 I had been fortunate enough to meet with Jon Arenberg from Northrop Grumman, and to see the satellite in its clean room at their facility in Redondo Beach, California.

  • In that article I outlined in broad terms why the satellite is the shape it is.
  • In this article I want to mention two other people who have made key contributions to the JWST.

I was fortunate enough to meet these people during my ‘career’ at NPL. And as I hope to explain, they have taken manufacturing and metrology to the very limits of what is possible in order to make a unique component for the JWST.

It’s all done with Mirrors

The 18 hexagonal mirrors of the JWST are iconic, but in fact there are many more mirrors inside the telescope.

JWST uses mirrors rather than lenses to guide the light it has captured, because at the infrared wavelengths for which the JWST is designed, glass and almost all other materials strongly absorb i.e. they are opaque!

In contrast, during reflection from a metal surface, light only enters the material of the mirror in a very thin layer at its surface.

Consequently, mirror surfaces can guide light of any wavelength with very low absorption.

Form and Smoothness

The creation of a mirror surface requires a machining operation in which a metal component – most commonly made from aluminium – is cut into a specific form with an exceptionally smooth surface.

  • The surface form must be close to the shape it was designed to be.
    • Otherwise the light will not be directed to a focus and the images will be blurred.
  • The surface roughness – the ‘ups and downs’ of the surface – must be much less than the wavelength of the light the mirror must reflect.
    • Otherwise the light will be ‘scattered’ from the surface and very dim objects will be obscured by light scattered from nearby bright objects

The large mirror surfaces on the primary and secondary mirrors are manufactured in a complex process that involves machining the surface of the highly-toxic beryllium metal, and then painstakingly grinding and polishing the surface into shape. Once completed, the finished surface is coated in gold.

Each step in the manufacturing process is interspersed with a measurement procedure to assess the roughness of the surface and the closeness to its ideal form. Ultimately the limit to achievable manufacturing perfection is simply the limit of our ability to make measurements of its surface.

For small mirrors, this polishing process is typically not possible and the surface must be cut directly on a sophisticated lathe.

To achieve the mirror-smooth surfaces, the lathe uses an ultra-sharp diamond-tool which can remove just a few thousandths of a millimetre of material at a time and leave a surface with near-atomic smoothness.

But typically the required surface form is not part of a sphere or a cylinder. To cut such ‘aspheric’ surface forms on a lathe requires that the lathe tool move on a complicated trajectory during a single rotation of the workpiece: the solid geometry and mathematics to achieve this are hellish.

To achieve the required form, the trajectory that the tool must follow relative to the workpiece is calculated millisecond-by-millisecond in MATLAB, and then instructions are downloaded to the lathe.

The JWST is filled with mirrored surfaces of bewildering complexity, channelling infrared light via mirrors to measuring instruments. And slightly to my surprise – and perhaps to yours – I have been told that considering all the mirror surfaces inside JWST, more were made in the UK than in any other country.


One of the key instruments on board the JWST is the Mid-Infrared Instrument (MIRI), which analyses light with wavelengths from 5 thousandths of a millimetre out to 28 thousandths of a millimetre.

And inside MIRI, one of the most complex mirrored-components is a ‘slicer’ or ‘splitter’ component.

Its precise function is hard to describe: the figure below is from an almost incomprehensible paper. My opinion is that it is incomprehensible if you don’t already know how it works!

Click on image for a larger version. Figure 14 of “The European optical contribution to the James Webb Space Telescope”. See the end of the article for the reference. On the right is the ‘splitter’, which redirects light in an almost inconceivably complex pattern.

So let me have a go.

  • Imagine parallel light from the main telescope mirrors falling on to a square section of a parabola. It is a property of a parabola that this light will be directed towards a single point: the focus of the parabola.
  • Now imagine splitting the square into two, and preparing each half of the square as sections of two different parabolas with their foci in two different places. Now parallel light falling on the component will be directed towards two different locations – with 50% of the light proceeding to each focus.

Click on image for a larger version. Illustration of the function of the splitter component. The left-hand panel shows parallel light falling onto a fraction of a parabolic surface and being directed towards a focus. The right-hand panel shows parallel light falling onto sections of two different parabolic surfaces, and being directed towards two different foci. The Cranfield splitter has slices of 21 different parabolas – each within 10 nanometres of its ideal form!
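The claim in the first bullet, that a parabola sends every vertical ray through a single focus, can be checked numerically. Here is a minimal sketch of my own, with an arbitrary focal length: the parabola y = x²/(4F) has its focus at (0, F).

```python
import math

# A parabola y = x^2/(4F) reflects vertical rays through its focus at (0, F).
# Numerical check: reflect a downward ray at several x positions and find
# where the reflected ray crosses the parabola's axis (x = 0).

F = 2.0                                   # focal length (arbitrary choice)
for x in (0.5, 1.0, 1.5):
    y = x**2 / (4 * F)                    # point of incidence on the mirror
    slope = x / (2 * F)                   # dy/dx at that point
    norm = math.hypot(-slope, 1.0)
    n = (-slope / norm, 1.0 / norm)       # unit normal to the surface
    d = (0.0, -1.0)                       # incoming vertical ray direction
    dot = d[0] * n[0] + d[1] * n[1]
    r = (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])   # reflected direction
    t = -x / r[0]                         # parameter where the ray reaches x = 0
    y_cross = y + t * r[1]
    print(f"x = {x}: reflected ray crosses the axis at y = {y_cross:.6f}")
```

Every ray crosses the axis at y = 2.0, the focus, whatever the value of x.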

Now imagine repeating this division and machining a component with sections of 21 different parabolas in thin slices, each just a couple of millimetres across. This is what Paul Morantz and colleagues manufactured at Cranfield University in 2012. There’s a photograph of the component below.

Click on image for a larger version. An early prototype of the splitter mirror for the JWST in the display cabinet at Cranfield University. It is – almost by definition – impossible to photograph a mirror surface. But notice that each of the 21 mirror surfaces reflects light from a different portion of the label in front of it.

Each of the 21 different surfaces had to conform to its specified form within ±10 nanometres (IIRC). And to verify this required measuring that surface with that uncertainty. Measuring a complex surface with this uncertainty is at the limit of what is possible: re-machining the surfaces to correct for detected form errors is just breathtaking!

At each of the 21 different focus points is a separate instrument measuring at slightly different wavelengths.

The outcome of all this ingenuity is a single custom component weighing just a few grams that simplifies the optics of the instrument, allowing more weight and space to be devoted to measuring instruments.

Pride and Wonder

Back in 2013 I was honoured to work with Paul Morantz and his colleague Paul Shore on the creation of the Boltzmann hemispheres which were used to make the most accurate temperature measurements in history.

The two hemispheres they created were assembled to make a cavity with a precisely non-spherical shape with a form uncertainty below 0.001 mm at all points over the surface.

But before the Pauls could get to work on our project, they had to finish the splitter for JWST otherwise its anticipated launch date might be delayed. [As it happened, they probably could have taken a little more time ;-)]

After completing the splitter I remember the disappointed look on Paul Morantz’s face when I explained that the Boltzmann project ‘only’ needed form uncertainty of 0.001 mm.

I cannot imagine their pride at having constructed such a wondrous object that is being sent to this remote point in space to make measurements on the most distant, most ancient light in the Universe.

I feel proud just to have known them as colleagues.

The Physics of Guitar Strings

January 24, 2022

Friends, regular readers may be aware that I play the guitar.

And sleuths amongst you may have deduced that if I play the guitar, then I must occasionally change guitar strings.

The physics of tuning a guitar – the extreme stretching of strings of different diameters – has fascinated me for years, but it is only now that my life has become pointless that I have managed to devote some time to investigating the phenomenon.

What I have found is that the design of guitar strings is extraordinarily clever, leaving me in awe of the companies which make them. They are everyday wonders!

Before I get going, I just want to mention that this article will be a little arcane and a bit physics-y. Apologies.

What’s there to investigate?

The first question is why are guitar strings made the way they are? Some are plain ‘rustless steel’, but others have a steel core around which are wound super-fine wires of another metal, commonly phosphor-bronze.

The second question concerns the behaviour of the thinnest string: when tuned initially, it spontaneously de-tunes, sometimes very significantly, and it does this for several hours after initially being stretched.

Click image for a larger version. The structure of a wire-wound guitar string. Usually the thickest four strings on a guitar are made this way.

Of course, I write these questions out now like they were in my mind when I started! But no, that’s just a narrative style.

When I started my investigation I just wanted to see if I understood what was happening. So I just started measuring things!

Remember, “two weeks in the laboratory can save a whole afternoon in the library“.

Basic Measurements

The frequency with which a guitar string vibrates is related to three things:

  • The length of the string: the longer the string, the lower the frequency of vibration.
  • The tension in the string: for a given length, the tighter the string, the higher the frequency of vibration.
  • The mass per unit length of the string: for a given length and tension, heavier strings lower the frequency of vibration

In these experiments, I can’t measure the tension in the string directly. But I can measure all the other properties:

  • The length of string (~640 mm) can be measured with a tape measure with an uncertainty of 1 mm
  • The frequency of vibration (82 Hz to 330 Hz) can be measured by tuning a guitar against a tuner with an uncertainty of around 0.1 Hz
  • The mass per unit length can be determined by cutting a length of the string and measuring its length (with a tape measure) and its mass with a sensitive scale. So-called Jewellery scales can be bought for £30 which will weigh 50 g to the nearest milligram!

Click image for a larger version. Using a jewellery scale to weigh a pre-measured length of guitar string. Notice the extraordinary resolution of 1 mg. Typically repeatability was at the level of ± 1 mg.

These measurements are enough to calculate the tension in the string, but I am also interested in the stress to which the material of the string is subjected.

The stress in the string is defined as the tension in the string divided by the cross-sectional area of the string. I can work out the area by measuring the diameter of the string with digital callipers. These devices can be bought for under £20 and will measure string diameters with an uncertainty of 0.01 mm.

However, there is a subtlety when it comes to working out the stress in the wire-wound strings. The stress is not carried across the whole cross-section of the wire, but only along its core – so one must measure the diameter of the core of these strings (which can be done at their ends, where the winding is absent) rather than the diameter of the wound playing length.

Click image for a larger version. Formulas relating the tension T in a string (measured in newtons); the frequency of vibration f (measured in hertz); the length of the string L (measured in metres) and the mass per unit length of the string (m/l) measured in kilograms per metre.
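The standard vibrating-string formula is f = (1/2L)·√(T/μ), where μ is the mass per unit length; rearranging gives T = 4L²f²μ. A minimal sketch with illustrative numbers for the low E string (the mass and cut length here are my own assumed values, not the measurements above):

```python
# Vibrating-string formula f = (1/2L)*sqrt(T/mu), rearranged for the tension T.
def string_tension(f_Hz, length_m, mu_kg_per_m):
    """Tension (newtons) of a string of length L vibrating at fundamental frequency f."""
    return 4 * length_m**2 * f_Hz**2 * mu_kg_per_m

# Illustrative numbers for the low E string (E2, 82.4 Hz): a 0.50 m cut of
# string weighing 4.8 g gives mu = 0.0096 kg/m (assumed values).
mu = 0.0048 / 0.50
T = string_tension(82.4, 0.64, mu)
print(f"Tension ~ {T:.0f} N (~ {T / 9.81:.0f} kg hanging on the string)")
```

This lands around 107 N, in the same ballpark as the results below.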

Results#1: Tension

The graph below shows the calculated tension in the strings in their standard tuning.

Click image for a larger version. The calculated tension in the 6 strings of a steel-strung guitar. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The tension is high – roughly 120 newtons per string. If the tension were maintained by a weight stretching the string over a wheel, there would be roughly 12 kg on each string!

Note that the tension is reasonably uniform across the neck of the guitar. This is important. If it were not so, the tension in the strings would tend to bend the neck of the guitar.

Results#2: Core Diameter

The graph below shows the measured diameters of each of the strings.

Click image for a larger version. The measured diameter (mm) of the stainless steel core of the 6 strings of a steel-strung guitar and the diameter of the string including the winding. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

First the obvious. The diameter of the second string is 33% larger than that of the first string, which increases its mass per unit length and causes the second string to vibrate at a frequency a factor of 1.33 lower (i.e. about 25% lower) than the first string. This is just basic physics.

Now we get into the subtlety.

The core of the third string is smaller than that of the second string. And the core diameter of each of the heavier strings is just a little larger than that of the preceding string.

But these changes in core diameter are small compared to changes in the diameter of the wound string.

The density of the phosphor bronze winding (~8,800 kg/m^3) is similar to, but actually around 10% higher than, the density of the stainless steel core (~8,000 kg/m^3). This is not a big difference.

If we simply take the ratio of the outer diameters of the top and bottom strings (1.34/0.31 ≈ 4.3), this is sufficient to explain the required two-octave (factor of 4) change in frequency.
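That check can be done in a line of arithmetic. For a fixed length and tension, f ∝ 1/(d·√ρ), since μ = ρπd²/4. Treating the wound sixth string as if it were solid bronze (a rough assumption of mine):

```python
from math import sqrt

d1, d6 = 0.31e-3, 1.34e-3     # outer diameters (m) of the 1st and 6th strings
rho1, rho6 = 8000, 8800       # kg/m^3: stainless-steel core vs phosphor-bronze winding

# For fixed length and tension, f scales as 1/(d*sqrt(rho)) since mu = rho*pi*d^2/4
ratio = (d6 / d1) * sqrt(rho6 / rho1)
print(f"Frequency ratio ~ {ratio:.2f} (two octaves is a factor of 4)")
```

The slight overshoot above 4 reflects the crude solid-bronze approximation and the small differences in tension between strings.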

Results#3: Why are some strings wire-wound?

The reason that the thicker strings on a guitar are wire-wound can be appreciated if one imagines the alternative.

A piece of stainless steel 1.34 mm in diameter is not ‘a string’, it’s a rod. Think about the properties of the wire used to make a paperclip.

So although one could attach such a solid rod to a guitar, and although it would vibrate at the correct frequency, it would not move very much, and so could not be pressed against the frets, and would not give rise to a loud sound.

The purpose of using wire-wound strings is to increase their flexibility while maintaining a high mass per unit length.

Results#4: Stress?

The first thing I calculated was the tension in each string. By dividing that result by the cross-sectional area of each string I can calculate the stress in the wire.

But it’s important to realise that the tension is carried only within the steel core of each string. The windings provide mass per unit length but add nothing to the resistance to stretching.

The stress has units of newtons per square metre (N/m^2) which in the SI has a special name: the pascal (Pa). The stresses in the strings are very high so the values are typically in the range of gigapascals (GPa).
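As an example, using a tension of about 120 N and the measured 0.31 mm diameter of the plain 1st string:

```python
from math import pi

tension_N = 120                # roughly the tension calculated above
core_diameter_m = 0.31e-3      # measured diameter of the (plain) 1st string

# Stress = tension / cross-sectional area of the load-bearing core
area_m2 = pi * (core_diameter_m / 2) ** 2
stress_GPa = tension_N / area_m2 / 1e9
print(f"Stress ~ {stress_GPa:.1f} GPa")
```

About 1.6 GPa, which is indeed in the gigapascal range.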

Click image for a larger version. The estimated stress within the stainless-steel cores of the 6 strings of a steel-strung guitar. Notice that the first and third strings have considerably higher stress than the other strings. In fact the stress in these cores just exceeds the nominal yield stress of stainless steel. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

This graph contains the seeds of an explanation for some of the tuning behaviour I have observed – that the first and third strings are tricky to tune.

With new strings one finds that – most particularly with the 1st string (the thinnest string) – one can tune the string precisely, but within seconds, the frequency of the string falls, and the string goes out of tune.

What is happening is a phenomenon called creep. The physics is immensely complex, but briefly, when a high stress is applied rapidly, the stress is not uniformly distributed within the microscopic grains that make up the metal.

To distribute the stress uniformly requires the motion of faults within the metal called dislocations. And these dislocations can only move at a finite rate. As the dislocations move they relieve the stress.

After many minutes and eventually hours, the dislocations are optimally distributed and the string becomes stable.

Results#5: Yield Stress

The yield stress of a metal is the stress beyond which the metal is no longer elastic i.e. after being exposed to stresses beyond the yield stress, the metal no longer returns to its prior shape when the stress is removed.

For strong steels, stretching beyond their yield stress will cause them to ‘neck’ and thin and rapidly fail. But stainless steel is not designed for strength – it is designed not to rust! And typically its yield curve is different.

Typically stainless steels have a smooth stress-strain curve, so being beyond the nominal yield stress does not imply imminent failure. It is because of this characteristic that the creep is not a sign of imminent failure. The ultimate tensile strength of stainless steel is much higher.

Results#6: Strain

Knowing the stress  to which the core of the wire is subjected, one can calculate the expected strain i.e. the fractional extension of the wire.

Click image for a larger version. The estimated strain of the 6 strings of a steel-strung guitar. Also shown on the right-hand axis is the actual extension in millimetres. Notice that the first and third strings have considerably higher strain than the other strings. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The calculated fractional string extension (strain) ranges from about 0.4% to 0.8% and the actual string extension from 2.5 mm to 5 mm.
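These figures follow from Hooke's law, strain = stress/E, using an assumed textbook value for the Young's modulus of steel (E ≈ 200 GPa):

```python
E_Pa = 200e9        # Young's modulus of steel (assumed textbook value)
length_m = 0.64     # vibrating length of the string

for stress_GPa in (1.6, 0.8):          # illustrative high- and low-stress cores
    strain = stress_GPa * 1e9 / E_Pa
    extension_mm = strain * length_m * 1000
    print(f"{stress_GPa} GPa -> strain {strain:.2%}, extension {extension_mm:.1f} mm")
```

This reproduces the quoted range: roughly 0.4% to 0.8% strain, or 2.5 mm to 5 mm of extension.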

This is difficult to measure accurately, but I did make an attempt by attaching a small piece of tape to the end of the old string as I removed it, and to the end of the new string as I tightened it.

Click image for a larger version. Method for estimating the strain in a guitar string. A piece of tape is tied to the top of the old string while it is still tight. On loosening, the tape moves with the part of the string to which it is attached.

For the first string my estimate was between 6 mm and 7 mm of extension, so it seems that the calculations are a bit low, but in the right ball-park.


Please forgive me: I have rambled. But I think I have eventually got to a destination of sorts.

In summary, the design of guitar strings is clever. It balances:

  • the tension in each string.
  • the stress in the steel core of each string.
  • the mass-per-unit length of each string.
  • the flexibility of each string.

Starting with the thinnest string, these are typically available in a range of diameters from 0.23 mm to 0.4 mm. The thinnest strings are easy to bend, but reducing the diameter increases the stress in the wire and makes the string more likely to break. Thinner strings also tend to be less loud.

The second string is usually unwound like the first string but the stresses in the string are lower.

The thicker strings are usually wire-wound to increase their flexibility for a given tension and mass-per-unit-length. If the strings were unwound they would be extremely inflexible, impossible to push against the frets, and would vibrate only with a very low amplitude.

How does the flexibility arise? When these wound strings are stretched, small gaps open up between the windings and allow the windings to slide past each other when the string is bent.

Click image for a larger version. Illustration of how stretching a wound string slightly separates the windings. This allows the wound components to slide past each other when the string is bent.

Additionally modern strings are often coated with a very thin layer of polymer which prevents rusting and probably reduces friction between the sliding coils.

Remaining Questions 

I still have many questions about guitar strings.

The first set of questions concerns the behaviour of nylon strings used on classical guitars. Nylon has a very different stress-strain curve from stainless steel, and the contrast in density between the core materials and the windings is much larger.

The second set of questions concerns the ageing of guitar strings. Old strings sound dull, and changing strings makes a big difference to the sound: the guitar sounds brighter and louder. But why? I have a couple of ideas, but none of them feel convincing at the moment.

And now I must get back to playing the guitar, which is quite a different matter. And sadly understanding the physics does not help at all!


The strings I used were Elixir 12-53 with a Phosphor Bronze winding and the guitar is a Taylor model 114ce

COVID-19: Is it really like flu now?

January 22, 2022

Friends, I hope you are well.

In case you’ve been tuning out of the news and relying solely on this blog for information, I feel obliged to inform you that the COVID-19 pandemic is still here. But I think things are getting better!

I last wrote about the pandemic over a month ago on December 14, and there I commented that I didn’t have much to say. And since then I have frankly just not wanted to think about it!

I was prompted to look again at the situation, as the government seems to be moving to a stance that COVID is now ‘like flu’, and we just need to live with it.

I felt rather distrustful of this assertion, so I thought I would see if there really was any justification for this laissez-faire approach.

To my surprise, I have concluded that there is actually a fair amount of justification.


In this article I compare current death rates in Wave#3 with what used to happen in ‘normal’ years. Can you remember normal years?

My conclusion is that death rates from COVID in its omicron variant, spreading in our heavily-vaccinated population seem to be similar to – or less than – winter death rates seen in ‘normal’ years between 2000 and 2017.

So it may be that – with some caveats that I will discuss at the end – yes, COVID is becoming like flu.

Really? Yes. Let me explain.

COVID deaths so far

Click the image for a larger version. Logarithmic graph showing positive cases, hospital admissions and deaths since the start of the pandemic. Numbers in panels highlight the numbers at the peak of Wave#2 in January 2021 and the peak of Wave#3 in January 2022. Also shown are the average values through the months of September, October and November 2021.

Even the most cursory glance at the graph shows that the pandemic is still with us, killing around 270 people per day (1,890 people per week) at the moment.

Infections, hospitalisations and deaths have been at a high level through the last 4 months of 2021, and rose strongly at the start of the new year.

Comparing peaks of the curves in January 2021 and 2022:

  • Cases in 2022 (omicron) are around 3 times higher than in 2021 (delta).
  • Hospitalisations in 2022 are around half the numbers in 2021.
  • Deaths in 2022 are around a fifth of the numbers in 2021.

So back in 2021, around 2% of people testing positive died. Now the equivalent figure is 0.1%.
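These two ratios are enough to check the fatality comparison. A rough sketch using only the peak ratios quoted above:

```python
# Illustrative check of the case-fatality comparison, using the peak
# ratios quoted in the article: cases ~3x higher, deaths ~1/5 as many.
cfr_2021 = 0.02          # ~2% of people testing positive died, Jan 2021
case_ratio = 3.0         # cases at the Jan 2022 peak vs Jan 2021
death_ratio = 1.0 / 5.0  # deaths at the Jan 2022 peak vs Jan 2021

cfr_2022 = cfr_2021 * death_ratio / case_ratio  # ~0.0013, i.e. ~0.1%
```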

These simple figures mask many complications.

  • In 2021 the death rate was for an almost unvaccinated population and the virus was controlled by severe social distancing: a lockdown.
  • Now the most vulnerable parts of the population are multiply vaccinated and there is a great deal of immunity acquired through prior infection.
  • In 2021 we were dealing with the delta variant and now we have omicron.
  • In recent months schools seem to have been the focus of transmission, and that is likely to remain the case for a little while.

So how does this level of COVID infection and death compare to influenza?

As far as I can tell, and rather to my surprise, the death rates from this third wave are similar to those seen in earlier winter flu episodes.

Let me explain how I have come to this conclusion.

I downloaded the weekly rate of deaths in England and Wales from the Office for National Statistics (ONS). The data shown below are extracted from Figure 4 of this document.

Click the image for a larger version. Weekly deaths in England and Wales from mid-1999 to mid-2017. Typically 8,000 to 9,000 people die each week in the summer, but the figure rises in the winter peaking between 1,000 and 3,000 deaths per week above the summer death rate.

Typically 8,000 to 9,000 people die each week in the summer, but the figure rises in the winter, peaking between 1,000 and 3,000 deaths per week above the summer death rate.

A significant fraction of this is due to flu – it correlates well with what the ONS call an index of “influenza-like illness (ILI) consultation rates”.

So I suppose we can consider that this is ‘normal’ for the UK.

The large peak in the winter of 1999/2000 is (I think) caused by flu, because the ONS note elsewhere (this document just after Figure 3) that in 2000, flu vaccination became commonplace.

So in all the years shown except the first, flu is being controlled by mass vaccination of the vulnerable population.

How does the COVID-19 death data look if overlaid on this graph? This is shown below.

Click the image for a larger version. Weekly deaths in England and Wales from mid-1999 to mid-2017. Also shown are the weekly deaths from COVID plotted above a nominal baseline of 9000 deaths per week. This is the same data plotted on the logarithmic graph at the head of the article. The three waves can be clearly seen.

The graph above shows weekly deaths from COVID plotted above a nominal baseline of 9,000 deaths per week. This is the same data plotted in black on the logarithmic graph at the head of the article, but scaled ‘per week’ rather than ‘per day’.

The three waves of the pandemic can be clearly seen.

The data are broadly comparable to what happens in normal years between 2000 and 2017, but larger. And remember, these are deaths from just a single cause.

Additionally we must remember that the peaks from Wave#1 and Wave #2 are only this small because of national lockdowns which wreaked immense disruption to all our lives.

If we had not had lockdowns, then the scale would likely have been something on the order of 10 times this size – quite comparable with the 1918 influenza pandemic. Things would have been truly catastrophic.

However the death rate in Wave#3 – the wave we are currently experiencing – is much smaller than the previous waves and this has been achieved mainly (but not entirely) by vaccinations.

From this, I conclude that deaths from COVID in Wave#3 do appear to be at a level similar to deaths from “Influenza-like illnesses” over the years 2000 to 2017.


Before concluding that ‘it’s all over’, we need to remember that the situation is still serious. People are still becoming seriously ill and dying.

What I am pointing out is that this is now occurring at a rate that is similar to previous ‘normal’ winters.

We should also recall that COVID is a completely new disease and may have other mutations which may surprise us still.

Given this, it seems unwise to throw out all mitigations against transmission while viral prevalence is so high.

For example, I will continue to mask up in shops and I would feel comfortable if other people did too.

However, I can understand that other people may disagree.


But is COVID really on its way to being ‘like flu’?

Yes. Amongst vaccinated populations, that’s how it looks to me.


P.S. As pointed out in a comment: I did not consider the impact of Long COVID – or the disease burden on hospitals.

Me and Tea.

January 20, 2022

Friends, I have given up putting milk in my tea.

Why? Because as I wrote a few days ago, putting milk in my tea gives rise to annual methane emissions equivalent to almost a third of a tonne of carbon dioxide.

On balance, I would rather avoid those emissions than experience the pleasure of putting milk in my tea.

My life in tea

I can still hazily remember being served milky tea with sugar as a child – perhaps I was 6 or 7.

Later on, drinking tea became a habit, and when I was probably 11 or 12, I gave up putting sugar in my tea.

And I have been drinking large amounts of tea each day – maybe 6 cups – ever since.

Around 12 years ago, I was concerned about my son’s seemingly unbreakable attachment to his iPod. To my surprise, he agreed to surrender his iPod if I gave up tea. I agreed, pleased we had reached an amicable bargain.

However I gave him back his iPod after 3 days, because in truth I was – and am – addicted to tea!

So changing the way I drink my tea is changing a life-long habit.

Life-long habits

Carbon dioxide and methane emissions are not very obvious – we generally don’t see them: the gases are invisible and have no smell. And they frequently take place at distant locations such as power stations or farms.

But the emissions are nonetheless real and their long term damage is on a scale that it is scarcely possible to imagine.

Additionally these emissions are entwined with our familiar ways of living.

  • Gas boilers keep us warm.
  • Cars provide mobility.
  • Aeroplanes take us on holiday.
  • Milk and Cheese and Butter taste great.
  • Tea with milk is ‘how normal people have tea’.

So acknowledging the reality of the emissions we give rise to and the harm they cause is hard both intellectually and emotionally.

Writing the article last week it became clear to me that I had to overcome my emotional attachment to milk in my tea.

Breaking these life-long habits is something we will all have to do if we want to create a way of living which does not damage the climate of our children’s future more than we already have.

More than milk

After 10 days I am happy to report that I am enjoying my milk-free tea and have now almost stopped reflexive visits to the fridge each time I make a cup!

I think I taste the tea itself rather more, but it is a very different kind of drink.

I have also been reducing use of butter and cheese and I have found alternatives that are perfectly acceptable in most recipes.

I find it hard to believe my use of dairy products will ever reach zero. But I can easily imagine reducing consumption by 90% or so.

Life is a long journey, and I never thought my journey would take me here: milk-free tea and minimising use of cheese and butter which I love!

It feels strange and unfamiliar.

But here I am.

A Watched Pan…

January 18, 2022

Click on Image for larger version.  A vision of domestic bliss in the de Podesta household. Apparatus in use for measuring the rate of heating of 1 litre of water on an induction hob.

In the beginning…

Friends, my very first blog article (written back on 1st January 2008 and re-posted in 2012) was about whether it is better to boil water with an electric kettle or a gas kettle on a gas hob.

Back then, my focus was simply on energy efficiency rather than carbon dioxide emissions. I had wanted to know how much of the primary energy of methane ended up heating the water. I did this by simply timing how long it took to boil 1 litre of water by various methods.

Prior to doing the experiments I had imagined that heating water with gas was more efficient because the fuel was used directly to heat the water. In contrast, even the best gas-fired power stations are only ~50% efficient.

What I learned back then was that gas cookers are terrible at heating kettles & pans! They were so much worse than I had imagined that I later spent many hours with different size pans, burners, and amounts of water just so I could believe my results!

Typically gas burners only transferred between 36% and 56% of the energy of combustion to the water – the exact fraction depending on the size and power of the burner. Heating things faster with a bigger burner was less efficient. Using a small flame and a very large pan, I could achieve an efficiency of 83%, but of course the water heated only very slowly.

This inefficiency was roughly equivalent to – or worse than – the inefficiency of the power station generating electricity, and so I concluded that electric kettles and gas kettles were similarly inefficient in their use of the primary energy of the gas. But electric kettles made it easier to use the correct amount of water, and so avoided heating water that wasn’t used.

14 years later…

After a recent conversation on Twitter (@Protons4B) I thought I would look at this issue again.

Why? Well two things have changed in the last 14 years.

  • Firstly, electricity generation now incorporates dramatically more renewable sources than in 2008 and so using electricity involves ever decreasing amounts of gas-fired generation.
  • Secondly, I am now concerned about emissions of carbon dioxide resulting from lifestyle choices.

Also being a retired person, I now have a bit more time on my hands and access to fancy instruments such as thermometers.

The way I did the experiments is described at the end of the article, but here are the results.

Results#1: Efficiency

The chart below shows estimates for the efficiency with which the electrical energy or the calorific content of the gas is turned into heat in one litre of water. My guess is these figures all have an uncertainty of around ±5%.

  • The kettle was close to 100% efficient.
  • The induction hob was approximately 86% efficient.
  • The microwave oven was approximately 65% efficient.

In contrast, heating the water in a pan (with a lid) on a gas hob was only around 38% or 39% efficient.

Click on Image for larger version. Chart showing the efficiency of 5 methods of heating 1 litre of water. 100% efficiency means that all the energy input used resulted in a temperature rise. The two gas results were for heating pans with two different diameters (19 cm and 17 cm).

It was particularly striking that the water heated on the gas burner (~1833 W) took 80% longer to boil than on the Induction hob (~1440 W) despite the heating power being ~20% less on the induction hob.

Click on Image for larger version. Chart showing the rate of heating for each of the 5 methods of heating 1 litre of water. Notice that the water heated on the gas burner (~1833 W) took 80% longer to boil than on the Induction hob (~1440 W) despite the heating power being ~20% less on the induction hob. Notice that up until 40 °C, the microwave oven heats water as fast as the gas hob, despite using half the power!
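This observation can be checked with a little arithmetic: the time to heat a fixed mass of water scales inversely with the power actually reaching the water, i.e. efficiency × input power. A sketch using the figures above:

```python
# Cross-check of the boiling-time comparison. The time to heat a fixed
# mass of water scales inversely with the effective power reaching it.
gas_power_W, gas_eff = 1833.0, 0.38              # figures from the article
induction_power_W, induction_eff = 1440.0, 0.86

gas_effective_W = gas_eff * gas_power_W                    # ~700 W into the water
induction_effective_W = induction_eff * induction_power_W  # ~1240 W into the water

# Gas takes longer by the ratio of effective powers:
extra_time_fraction = induction_effective_W / gas_effective_W - 1.0
# ~0.78, consistent with the ~80% longer boiling time observed
```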

Results#2: Carbon Dioxide Emissions 

Based on the average carbon intensity of electricity in 2021 (235 g CO2/kWh), boiling a litre of water by any electrical means results in substantially less CO2 emissions than using a pan (with a lid) on a gas burner.

I performed these experiments on 17th January 2022 between 4 p.m. and 7 p.m., when the carbon intensity of electricity was well above average: ~330 g CO2/kWh. In this case, boiling a litre of water with a kettle or on the induction hob still gave the lowest emissions, but heating water in a microwave oven resulted in similar emissions to those arising from using a pan (with a lid) on a gas burner.

Click on Image for larger version. Charts showing the amount of carbon dioxide released by heating 1 litre of water from 10 °C to 100 °C using either electrical methods or gas. The gas heating is assumed to have a carbon intensity of 200 gCO2/kWh. The left-hand chart is based on the carbon intensity of 330 gCO2/kWh of electricity which was appropriate at the time the experiments were performed. The right-hand chart is based on the carbon intensity of 235 gCO2/kWh of electricity which was the average value for 2021. Electrical methods of heating result in lower CO2 emissions in almost all circumstances.
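The arithmetic behind these charts can be sketched as follows: the energy needed to heat 1 kg of water from 10 °C to 100 °C is divided by the device efficiency and multiplied by the carbon intensity of the energy source.

```python
# Sketch of the emissions arithmetic for heating 1 litre (1 kg) of water
# from 10 °C to 100 °C, using the efficiencies and intensities above.
c_water = 4187.0                           # J/(kg °C)
energy_kWh = 1.0 * c_water * 90.0 / 3.6e6  # ~0.105 kWh per litre

def grams_co2(efficiency, intensity_g_per_kWh):
    """Emissions for one litre at a given efficiency and carbon intensity."""
    return energy_kWh / efficiency * intensity_g_per_kWh

kettle_g = grams_co2(1.00, 235)   # kettle on 2021-average electricity: ~25 g
gas_pan_g = grams_co2(0.38, 200)  # pan with lid on the gas hob: ~55 g
```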

Results#3: Cost 

Currently I am paying 3.83 p/kWh for gas and 16.26 p/kWh for electricity i.e. electricity is around four times more expensive than gas.

These prices are likely to rise substantially in the coming months, but it is not clear whether this ratio will change much.

So sadly, despite gas being the slowest way to heat water and the way which releases the most climate damaging gases, it is still the cheapest way to heat water. It’s about 40% cheaper than using an electric kettle.
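The cost comparison works the same way: divide the unit price by the efficiency to get the price per kWh actually delivered to the water. A sketch using the prices and efficiencies above:

```python
# Price per kWh actually delivered to the water = unit price / efficiency.
gas_price, gas_eff = 3.83, 0.38       # p/kWh and hob efficiency from the article
elec_price, kettle_eff = 16.26, 1.00  # p/kWh; kettle assumed ~100% efficient

gas_cost = gas_price / gas_eff         # ~10 p per useful kWh
kettle_cost = elec_price / kettle_eff  # ~16 p per useful kWh
saving = 1.0 - gas_cost / kettle_cost  # ~0.38, i.e. gas is ~40% cheaper
```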


For the sake of the climate, use an electric kettle if you can.


That was the end of the article and there is no need to read any further unless you want to know how I made these measurements.


Estimating the power delivered to the water + vessel

  • For electrical measurements I paused the heating typically every 30 seconds, and read the plug-in electrical power meter. This registered electrical energy consumed in kWh to 3 decimal places.
    • I fitted a straight line to the energy versus time graph to estimate power.
  • For gas measurements I read the gas meter before and after each experiment. This reads in m^3 to 3 decimal places and I converted this volume reading to kWh by multiplying by 11.19 kWh/m^3.
    • The gas used amounted to only 0.025 m^3, so the uncertainty from digitisation alone is at least 4%.
    • I divided by the time – typically 550 seconds – to estimate the power.
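Putting these steps together, a sketch of the gas power estimate from a typical set of readings:

```python
# Reconstructing the gas power estimate from a typical meter reading.
volume_m3 = 0.025    # change in gas meter reading over a run
kWh_per_m3 = 11.19   # calorific conversion used above
time_s = 550.0       # typical duration of a heating run

energy_kWh = volume_m3 * kWh_per_m3      # ~0.28 kWh
power_W = energy_kWh * 3.6e6 / time_s    # ~1830 W
```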

Mass of water

  • I placed the heating vessel (kettle, pan, jug) on the balance and tared (zeroed) the reading.
  • I then added water until the balance read within 1 g of 1000 g. The uncertainty is probably around 1% or 10 g.

Heating rate with 100% energy conversion

  • Based on the power consumed, I estimated the ideal heating rate assuming that 100% of the supplied power went into raising the temperature of the water.

  • I assumed the average specific heat capacity of water of the range from 10 °C to 100 °C was 4187 J/ (kg °C)
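The calculation sketched, with the induction hob power as an example input:

```python
# Ideal heating rate if every watt supplied went into the water:
# dT/dt = P / (m * c).
c_water = 4187.0  # J/(kg °C), the value assumed above
mass_kg = 1.0

def ideal_rate(power_W):
    """Ideal warming rate in °C per second."""
    return power_W / (mass_kg * c_water)

rate = ideal_rate(1440.0)  # induction hob: ~0.34 °C/s, about 21 °C/minute
```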

Measuring the temperature.

  • For electrical measurements I paused the heating typically every 30 seconds, stirred the liquid with a coffee-stirrer for 2 or 3 seconds, and then took the temperature using a thermocouple probe.
  • For gas measurements it wasn’t possible to pause the heating because of the way I was measuring the power. So about 10 seconds before the reading was due I slipped the coffee stirrer under the lid to mix the water.

Estimating the rate of temperature rise.

  • For all measurements I fitted a straight line to the temperature versus time data, using only data points below approximately 80 °C to avoid the effects of increased temperature losses near to the boiling point.

Mass of the ‘addenda’.

  • The applied power heated not only the water but also its container.
  • The heat capacity of the 19 cm stainless steel pan (572 g) was roughly 6% of the heat capacity of the water.
  • I chose not to take account of this heat capacity because there is no way to heat water without a container. So the container is a somewhat confounding factor, but including it allows a more meaningful comparison of the results.

Efficiency of boiling

  • I estimated efficiency by comparing the actual warming rate with the ideal warming rate.
  • I then calculated the energy required to heat 1 kg of water from 10 °C to 100 °C, and multiplied this by the efficiency.
  • In this way the result is relevant even if all the measurements did not start and stop at the same temperatures.
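A sketch of this efficiency calculation. The measured slope used here is illustrative only, not a value from the article:

```python
# Efficiency from the fitted heating slopes, then the energy to heat
# 1 kg of water from 10 °C to 100 °C at that efficiency.
c_water = 4187.0  # J/(kg °C)

def efficiency(measured_slope, ideal_slope):
    """Ratio of the fitted warming rate to the ideal warming rate."""
    return measured_slope / ideal_slope

def energy_kWh_10_to_100(eff, mass_kg=1.0):
    """Energy (kWh) to heat the water through 90 °C at this efficiency."""
    return mass_kg * c_water * 90.0 / eff / 3.6e6

eff = efficiency(0.296, 0.344)      # hypothetical slopes in °C/s
energy = energy_kWh_10_to_100(eff)  # ~0.12 kWh at ~86% efficiency
```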


  • I heated the water in the microwave in a plastic jug which did not have a tight fitting lid. I am not sure if this had an effect.
  • I did notice that the entire microwave oven was warm to hot at the end of the heating sequence.

Would you like milk with your tea?

January 9, 2022

Every blog article starts with a mug of tea.

Friends, I am addicted to tea.

I like all kinds of tea, but my favourite is a basic brew, with milk.

The ritual of settling down with a mug of hot tea is an essential pre-requisite for any kind of concentration – such as writing this article.

This is the main use of milk in our household and each week my wife and I consume around 2 litres.

So per year we use roughly 100 litres of milk.

Looking online, I find this corresponds to emissions (mainly of methane) which are equivalent to 315 kg of carbon dioxide per year: almost a third of a tonne!

I think there is a lot of uncertainty in that estimate, and it probably varies from country to country. But taking it at face value, it is a truly colossal impact from a very mundane activity.

Click image for a larger version. The graph shows the global warming impact of emissions associated with production of milk and cheese in terms of the equivalent amount of CO2 emissions which would have the same impact. The data is from Our World in Data

After all the work I have had done on the house, annual heating and electrical emissions have fallen from 3.7 tonnes to about 0.7 tonnes.

So emissions from drinking tea alone have become a significant fraction (∼50%) of general household emissions!
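The arithmetic, for anyone who wants to check it (the per-litre figure is simply implied by the 315 kg total above):

```python
# Milk emissions as a fraction of annual household energy emissions.
litres_per_year = 100
kgCO2e_per_litre = 3.15   # implied by the 315 kg figure above
house_kgCO2e = 700        # post-refurbishment heating + electricity

milk_kgCO2e = litres_per_year * kgCO2e_per_litre  # 315 kg
fraction = milk_kgCO2e / house_kgCO2e             # ~0.45, i.e. roughly half
```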

What to do?

As far as I know, the only way to avoid these emissions, is to stop drinking milk –  or to reduce the amount I drink substantially.

The problem

The problem with this solution is that I – like millions of other people – really like having milk in my tea.

I have tried using milk alternatives derived from oats, almonds, and soya. These products look like milk and come in packaging which suggests they are in some way similar to milk.

But they do not taste even remotely like milk.

Additionally, I am emotionally attached to the idea that milk comes from cows that live on farms. For someone who is basically a city-dweller, this connection feels meaningful.

So at the start of this new year I am facing a dilemma.


Switching to the plant-based alternatives is just not acceptable, which leaves me with just two options:

  • Abandoning milk in my tea altogether. This is an extreme option, but one I am keeping under review.
  • Currently I am experimenting with a 50:50 mixture of milk with an oat ‘derivative’ product. It is, predictably, not as nice as just milk, but it is borderline acceptable. But the emissions are still substantial.

I will let you know how it goes when I have a few more weeks under my belt.

The wider problem

The wider problem with this solution is that I haven’t even mentioned butter or cheese, other dairy-based staples of my diet.

Because of the large amount of raw milk used, each kilogram of cheese is apparently associated with 24 kg of CO2-equivalent emissions (mostly as methane).

My wife and I eat – and enjoy prodigiously – about 0.5 kg of Davidstow Cheddar each week. This corresponds to around 25 kg per year, and emissions with the equivalent impact of 600 kg of carbon dioxide per year.

Basically the emissions associated with our cheese consumption have an impact roughly equivalent to all the electricity we use to heat and run the house for a year!
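The cheese arithmetic sketched (I have used 52 weeks, which gives 26 kg rather than the rounded 25 kg quoted above):

```python
# The cheese arithmetic from the paragraphs above.
kg_per_week = 0.5      # cheddar consumed per week
kgCO2e_per_kg = 24     # CO2-equivalent emissions per kg of cheese

annual_kg = kg_per_week * 52               # 26 kg, "around 25 kg per year"
annual_kgCO2e = annual_kg * kgCO2e_per_kg  # ~620 kg, close to the ~600 kg quoted
```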

Fortunately Our World in Data does not have information about butter. I say ‘fortunately’ because I feel sure it will be bad.

The wider truth is that in regards to my house, all the changes I have made to reduce carbon dioxide emissions have been expensive, but they have not really affected my quality of life.

But it seems that emissions from some of the basics of my diet, foods I love and have eaten all my life, are apparently responsible for more annual carbon dioxide emissions than my entire house!

Reducing these emissions is going to be much tougher and feel much more like a personal sacrifice with a very direct and (at least initially) negative impact on my quality of life.

I guess nobody said it would be easy.

I am going to sit down now with a nice cup of tea to think about this…






January 8, 2022

Friends, it’s been a fraught week. I have had two computer failures and bought two new computers.

Many of the details of this trauma are irrelevant, but I have learned a lot. And there are two things I feel compelled to share.


Firstly, no matter how beautiful your computer; no matter how big its screen; no matter how fast its processor; it is in fact bound to end up as scrap.

And secondly, no matter how much you think you like this object, all you actually care about is the data it is storing.

The Failures

Just before Christmas, my son returned home from university with a broken laptop which would not switch on at all.

It contained data he had just obtained in the course of his PhD research, and he was outwardly stoical – but profoundly depressed.

Using the super-power that parents have: I took him out and bought him a new computer. But I could not recover his data.

And then just three days ago I returned home to find my Lenovo Ideacenter in distress. Apropos of nothing at all, it displayed a black screen with some obscure text.

After running diagnostics, the primitive brain of the computer informed me gruffly that I had had a hard drive failure.

The Successes

I took the broken computers to our local PC repair shop, the wonderful PC Home.

Within a few hours they told me that the Lenovo diagnostics program was wrong: the hard disk and all my data were fine: it was the computer that was kaput.

They returned the hard disk to me along with a portable drive containing the data they had extracted.

Then miraculously, PC Home found that my son’s computer had a random short-circuit on its main board. They sorted it and the computer returned to life, all data intact.

Lessons learned#1

What I have learned from these episodes is simple.

No matter how wonderful, or fashionable your computer, it is destined to become scrap.

The attic room in our house is filled with devices which were once sources of pride and wonder. They are now – at best – relics and items of occasional fascination.

In actual fact it is the data on your computer that is valuable: the words you have written as poems or remembrances, novels or reports; the photographs or videos; the spreadsheets or data files.

So please, whatever you doing right now, please stop it. And go and back up your computer!

As the saying goes: there are two types of computer: those which have failed, and those which have not failed yet.

Lessons learned#2

I am now 62 years old and I have been programming computers and using them for work since I was 18. I consider myself modestly competent: I have for example successfully replaced internal hard drives in iMac computers. But I am having difficulty coping with the complexity and mode of operation of modern computers.

For example, while sorting through the hard drive recovered from my computer, I was shocked to find that many files that I thought I had saved were not in fact present on the hard drive.

Many of the files were recoverable from an on-line vault known as the Microsoft OneDrive. They were there even though I had repeatedly and explicitly saved the files on what was apparently just masquerading as my own hard drive.

I am relieved that I have any access to these documents, but these were my documents on my computer: I do not consider it acceptable for Microsoft to effectively steal my documents in order to inveigle me into using their on-line storage service.

Consequently I will no longer be using Windows. So far, I am not missing it.








I love Greta Thunberg

December 30, 2021

Click image for a larger version. My son gave me a Christmas Tree decoration in the likeness of Greta Thunberg.

Friends, love is a strong word.

Back in 2012 I wrote that I loved James Hansen. If you haven’t heard it, I strongly recommend his TED talk.

I wrote:

When I hear him speak I feel I am listening to a human being who understands enough to feel compelled to shout ‘Fire’ in the ‘cinema’ of the modern world. He feels that no matter what the consequences, we must face up to the climate challenge ahead. Being prepared to be arrested for his insistence that the US government should listen to what the science (they have paid for!) has to say seems like an act of great bravery to me.

And today I would like to declare a similar – but different – admiration for Greta Thunberg. Greta is not, after all, a world-leading scientist.

The fact that my son gave me a Christmas Tree decoration in the likeness of Greta Thunberg – but not James Hansen – is testament to their different roles. He also gave me a book of Greta’s speeches and not a copy of James Hansen’s papers (e.g. this one from 1981).

Greta Thunberg has unintentionally become a global cultural phenomenon. But having read her book, I can assure you it is not because of her oratory. It is because of her unflinching honesty.

Reading her words addressed to old people like me, I do not feel inspired: I feel shamed.

I will leave you with a quote from the book: Greta’s speech to the UK Parliament in 2019. I hope you too will be as affected by her honesty as I have been.

UK Parliament 2019

23 April 2019:

Is my microphone on? Can you hear me?

Around the year 2030, 10 years 252 days and 10 hours away from now, we will be in a position where we set off an irreversible chain reaction beyond human control, that will most likely lead to the end of our civilisation as we know it. That is unless in that time, permanent and unprecedented changes in all aspects of society have taken place, including a reduction of CO2 emissions by at least 50%.

And please note that these calculations are depending on inventions that have not yet been invented at scale, inventions that are supposed to clear the atmosphere of astronomical amounts of carbon dioxide.

Furthermore, these calculations do not include unforeseen tipping points and feedback loops like the extremely powerful methane gas escaping from rapidly thawing arctic permafrost.

Nor do these scientific calculations include already locked-in warming hidden by toxic air pollution. Nor the aspect of equity – or climate justice – clearly stated throughout the Paris agreement, which is absolutely necessary to make it work on a global scale.

We must also bear in mind that these are just calculations. Estimations. That means that these “points of no return” may occur a bit sooner or later than 2030. No one can know for sure. We can, however, be certain that they will occur approximately in these timeframes, because these calculations are not opinions or wild guesses.

These projections are backed up by scientific facts, concluded by all nations through the IPCC. Nearly every single major national scientific body around the world unreservedly supports the work and findings of the IPCC.

Did you hear what I just said? Is my English OK? Is the microphone on? Because I’m beginning to wonder.

During the last six months I have travelled around Europe for hundreds of hours in trains, electric cars and buses, repeating these life-changing words over and over again. But no one seems to be talking about it, and nothing has changed. In fact, the emissions are still rising.

When I have been travelling around to speak in different countries, I am always offered help to write about the specific climate policies in specific countries. But that is not really necessary. Because the basic problem is the same everywhere. And the basic problem is that basically nothing is being done to halt – or even slow – climate and ecological breakdown, despite all the beautiful words and promises.

The UK is, however, very special. Not only for its mind-blowing historical carbon debt, but also for its current, very creative, carbon accounting.

Since 1990 the UK has achieved a 37% reduction of its territorial CO2 emissions, according to the Global Carbon Project. And that does sound very impressive. But these numbers do not include emissions from aviation, shipping and those associated with imports and exports. If these numbers are included the reduction is around 10% since 1990 – or an average of 0.4% a year, according to Tyndall Manchester.

And the main reason for this reduction is not a consequence of climate policies, but rather a 2001 EU directive on air quality that essentially forced the UK to close down its very old and extremely dirty coal power plants and replace them with less dirty gas power stations. And switching from one disastrous energy source to a slightly less disastrous one will of course result in a lowering of emissions.

But perhaps the most dangerous misconception about the climate crisis is that we have to “lower” our emissions. Because that is far from enough. Our emissions have to stop if we are to stay below 1.5-2 °C of warming. The “lowering of emissions” is of course necessary but it is only the beginning of a fast process that must lead to a stop within a couple of decades, or less. And by “stop” I mean net zero – and then quickly on to negative figures. That rules out most of today’s politics.

The fact that we are speaking of “lowering” instead of “stopping” emissions is perhaps the greatest force behind the continuing business as usual. The UK’s active current support of new exploitation of fossil fuels – for example, the UK shale gas fracking industry, the expansion of its North Sea oil and gas fields, the expansion of airports as well as the planning permission for a brand new coal mine – is beyond absurd.

This ongoing irresponsible behaviour will no doubt be remembered in history as one of the greatest failures of humankind.

People always tell me and the other millions of school strikers that we should be proud of ourselves for what we have accomplished. But the only thing that we need to look at is the emission curve. And I’m sorry, but it’s still rising. That curve is the only thing we should look at.

Every time we make a decision we should ask ourselves: how will this decision affect that curve? We should no longer measure our wealth and success in the graph that shows economic growth, but in the curve that shows the emissions of greenhouse gases. We should no longer only ask: “Have we got enough money to go through with this?” but also: “Have we got enough of the carbon budget to spare to go through with this?” That should and must become the centre of our new currency.

Many people say that we don’t have any solutions to the climate crisis. And they are right. Because how could we? How do you “solve” the greatest crisis that humanity has ever faced? How do you “solve” a war? How do you “solve” going to the moon for the first time? How do you “solve” inventing new inventions?

The climate crisis is both the easiest and the hardest issue we have ever faced. The easiest because we know what we must do. We must stop the emissions of greenhouse gases. The hardest because our current economics are still totally dependent on burning fossil fuels, and thereby destroying ecosystems in order to create everlasting economic growth.

“So, exactly how do we solve that?” you ask us – the schoolchildren striking for the climate.

And we say: “No one knows for sure. But we have to stop burning fossil fuels and restore nature and many other things that we may not have quite figured out yet.”

Then you say: “That’s not an answer!”

So we say: “We have to start treating the crisis like a crisis – and act even if we don’t have all the solutions.”

“That’s still not an answer,” you say.

Then we start talking about circular economy and rewilding nature and the need for a just transition. Then you don’t understand what we are talking about.

We say that all those solutions needed are not known to anyone and therefore we must unite behind the science and find them together along the way. But you do not listen to that. Because those answers are for solving a crisis that most of you don’t even fully understand. Or don’t want to understand.

You don’t listen to the science because you are only interested in solutions that will enable you to carry on like before. Like now. And those answers don’t exist any more. Because you did not act in time.

Avoiding climate breakdown will require cathedral thinking. We must lay the foundation while we may not know exactly how to build the ceiling.

Sometimes we just simply have to find a way. The moment we decide to fulfil something, we can do anything. And I’m sure that the moment we start behaving as if we were in an emergency, we can avoid climate and ecological catastrophe. Humans are very adaptable: we can still fix this. But the opportunity to do so will not last for long. We must start today. We have no more excuses.

We children are not sacrificing our education and our childhood for you to tell us what you consider is politically possible in the society that you have created.

We have not taken to the streets for you to take selfies with us, and tell us that you really admire what we do.

We children are doing this to wake the adults up.

We children are doing this for you to put your differences aside and start acting as you would in a crisis.

We children are doing this because we want our hopes and dreams back.

I hope my microphone was on. I hope you could all hear me.

The James Webb Space Telescope

December 24, 2021

Friends, a gift to humanity!

On Christmas Day at 12:20 GMT/UTC, the James Webb Space Telescope will finally be launched.

You can follow the countdown here and watch the launch live via NASA or on YouTube – below.

In May 2018 I was fortunate enough to visit the telescope at the Northrop Grumman facility where it was built, and to speak with the project’s former engineering director Jon Arenberg.

Everything about this telescope is extraordinary, and so as the launch approaches I thought that it might be an idea to re-post the article I wrote back in those pre-pandemical days.

As a bonus, if you read to the end you can find out what I was doing in California back in 2018!

Happy Christmas and all that.


Last week I was on holiday in Southern California. Lucky me.

Lucky me indeed. During my visit I had – by extreme good fortune – the opportunity to meet with Jon Arenberg – former engineering director of the James Webb Space Telescope (JWST).

And by even more extreme good fortune I had the opportunity to speak with him while overlooking the JWST itself – held upright in a clean room at the Northrop Grumman campus in Redondo Beach, California.

[Sadly, photography was not allowed, so I will have to paint you a picture in words and use some stock images.]


In case you don’t know, the JWST will be the successor to the Hubble Space Telescope (HST), and has been designed to exceed the operational performance of the HST in two key areas.

  • Firstly, it is designed to gather more light than the HST. This will allow the JWST to see very faint objects.
  • Secondly, it is designed to work better with infrared light than the HST. This will allow the JWST to see objects whose light has been extremely red-shifted from the visible.

A full-size model of the JWST is shown below and it is clear that the design is extraordinary, and at first sight, rather odd-looking. But the structure – and much else besides – is driven by these two requirements.

JWST and people

Requirement#1: Gather more light.

To gather more light, the main light-gathering mirror in the JWST is 6.5 metres across rather than just 2.5 metres in the HST. That means it gathers around 7 times more light than the HST and so can see fainter objects and produce sharper images.
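The ‘around 7 times’ figure follows because light-gathering power scales with the mirror’s area, which goes as the diameter squared. A quick sanity check, using the diameters quoted above:

```python
# Light-gathering power scales with collecting area, i.e. with diameter squared.
# Diameters as quoted in the article (HST's mirror is often quoted as 2.4 m;
# 2.5 m follows the article's rounding).
jwst_diameter_m = 6.5
hst_diameter_m = 2.5

area_ratio = (jwst_diameter_m / hst_diameter_m) ** 2
print(f"JWST gathers about {area_ratio:.1f} times more light than HST")  # ~6.8
```

So the true ratio with these figures is about 6.8, which the article sensibly rounds to 7.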


Image courtesy of Wikipedia

But in order to launch a mirror this size from Earth on a rocket, it is necessary to use a mirror which can be folded for launch. This is why the mirror is made in hexagonal segments.

To cope with the alignment requirements of a folding mirror, the mirror segments have actuators to enable fine-tuning of the shape of the mirror.

To reduce the weight of such a large mirror it had to be made of beryllium – a highly toxic metal which is difficult to machine. It is however 30% less dense than aluminium and also has a much lower coefficient of thermal expansion.

The ‘deployment’ or ‘unfolding’ sequence of the JWST is shown below.

Requirement#2: Improved imaging of infrared light.

The wavelength of visible light varies from roughly 0.000 4 mm for light which elicits the sensation we call violet, to 0.000 7 mm for light which elicits the sensation we call red.

Light with a wavelength longer than 0.000 7 mm does not elicit any visible sensation in humans and is called ‘infrared’ light.

Imaging so-called ‘near’ infrared light (with wavelengths from 0.000 7 mm to 0.005 mm) is relatively easy.

Hubble can ‘see’ at wavelengths as long as 0.002 5 mm. To achieve this, the detector in HST was cooled. But to work at longer wavelengths the entire telescope needs to be cold.

This is because every object emits infrared light and the amount of infrared light it emits is related to its temperature. So a warm telescope ‘glows’ and offers no chance to image dim infrared light from the edge of the universe!

The JWST is designed to ‘see’ at wavelengths as long as 0.029 mm – 10 times longer wavelengths than the HST – and that means that typically the telescope needs to be on the order of 10 times colder.
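That ‘10 times colder’ scaling follows from Wien’s displacement law, which says the peak wavelength of an object’s thermal glow is inversely proportional to its temperature. A rough sketch of the estimate follows; note this peak-wavelength criterion is only a crude proxy (in practice the telescope must be colder still, and JWST is designed to operate near 40 K – a figure not from this article):

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898 mm*K.
# Crudely, for a telescope's own thermal glow to peak beyond the longest
# wavelength it images, its temperature must satisfy T < b / lambda.
B_MM_KELVIN = 2.898  # Wien's displacement constant in mm*K

def max_temperature_k(longest_wavelength_mm):
    """Rough upper limit on telescope temperature for imaging at this wavelength."""
    return B_MM_KELVIN / longest_wavelength_mm

t_hst = max_temperature_k(0.0025)  # HST's longest imaged wavelength: 0.002 5 mm
t_jwst = max_temperature_k(0.029)  # JWST's longest imaged wavelength: 0.029 mm

print(f"HST-style limit: ~{t_hst:.0f} K")   # ~1159 K
print(f"JWST limit:      ~{t_jwst:.0f} K")  # ~100 K
# The wavelength ratio 0.029 / 0.0025 is ~11.6, i.e. roughly the
# factor-of-10 temperature reduction the article describes.
```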

To cool the entire telescope requires a breathtaking – but logical – design. There were two parts to the solution.

  • The first part involved the design of the satellite itself.
  • The second part involved the positioning of the satellite.

Cooling the telescope part#1: design

The telescope and detectors were separated from the rest of the satellite that contains elements such as the thrusters, cryo-coolers, data transmission equipment and solar cells. These parts need to be warm to operate correctly.

The telescope is separated from the ‘operational’ part of the satellite with a sun-shield roughly the size of a tennis court. When shielded from the Sun, the telescope is exposed to the chilly universe, and cooled gas from the cryo-coolers cools some of the detectors to just a few degrees above absolute zero.

Cooling the telescope part#2: location

The HST is only 300 miles or so from Earth, and orbits every 97 minutes. It travels into and out of full sunshine on each orbit. This type of orbit is not compatible with keeping a gigantic telescope cold.

So the second part of the cooling strategy is to position the JWST approximately 1 million miles from Earth, beyond the orbit of the Moon, at a location known as the second Lagrange point L2. But JWST does not orbit the Earth like Hubble: it orbits the Sun.

Normally the period of an orbit around the Sun gets longer the further a satellite is from the Sun. But at the L2 position, the gravitational attraction of the Earth and Moon adds to the gravitational attraction of the Sun and speeds up the orbit of the JWST so that it orbits the Sun with a period of one Earth year – and so JWST stays in the same position relative to the Earth.

  • The advantage of orbiting at L2 is that the satellite can maintain the same orientation with respect to the Sun for long periods. And so the sun-shield can shield the telescope very effectively, allowing it to stay cool.
  • The disadvantage of orbiting at L2 is that it is beyond the orbit of the Moon and no manned spacecraft has ever travelled so far from Earth. So once launched, there is absolutely no possibility of a rescue mission.
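The ‘adds to the gravitational attraction of the Sun’ argument can be turned into a number. Requiring that the combined pull of the Sun and the Earth–Moon system supplies exactly the centripetal acceleration for a one-year orbit, and solving for the extra distance beyond Earth’s orbit, recovers the roughly 1-million-mile figure. Here is a sketch (the constants are standard textbook values, not taken from the article):

```python
import math

# Standard gravitational parameters (G*M) in m^3/s^2, plus orbital constants.
GM_SUN = 1.32712e20
GM_EARTH_MOON = 4.035e14      # Earth + Moon combined
R_EARTH_ORBIT = 1.496e11      # metres (1 au)
YEAR_S = 3.156e7              # seconds in one year
OMEGA = 2 * math.pi / YEAR_S  # angular speed of a one-year orbit

def residual(d):
    """Gravitational minus centripetal acceleration at distance d beyond Earth.

    At L2 the Sun's pull plus the Earth-Moon pull exactly supplies the
    centripetal acceleration OMEGA**2 * (R + d) for a one-year orbit,
    so this residual is zero there.
    """
    r = R_EARTH_ORBIT + d
    return GM_SUN / r**2 + GM_EARTH_MOON / d**2 - OMEGA**2 * r

# Bisection: the residual is positive close to Earth (Earth's pull dominates)
# and negative far away, so the root lies between these brackets.
lo, hi = 1e8, 1e10
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid

d_l2 = 0.5 * (lo + hi)
print(f"L2 is about {d_l2/1.609e9:.1f} million miles from Earth")  # ~0.9
```

The answer comes out at about 1.5 million kilometres – just under a million miles – matching the figure above.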

The most expensive object on Earth?

I love the concept of the JWST. At an estimated cost of $8 billion to $10 billion, if this is not the most expensive single object on Earth, then I would be interested to know what is.

But it has not been created to make money or as an act of aggression.

Instead, it has been created to answer the simple question:

“I wonder what we would see if we looked into deep space at infrared wavelengths.”

Ultimately, we just don’t know until we look.

In a year or two, engineers will place the JWST on top of an Ariane rocket and fire it into space. And the most expensive object on Earth will then – hopefully – become the most expensive object in space.

Personally I find the mere existence of such an enterprise a bastion of hope in a world full of worry.


Many thanks to Jon Arenberg and Stephanie Sandor-Leahy for the opportunity to see this apogee of science and engineering.


Breathtaking photographs are available in galleries linked to from this page

Christmas Bonus

Re-posting this article, I remembered why I was in Southern California back in May 2018 – I was attending Dylanfest – a marathon celebration of Bob Dylan’s music as performed by people who are not Bob Dylan.

The pandemic hit Dylanfest like a Hard Rain, but in 2020 they went on-line and produced a superb cover of Subterranean Homesick Blues which I gift to you this Christmas. Look out for the fantastic guitar solo at 1’18” into the video.

And since I am randomly posting performances inspired by Dylan songs, I can’t quite leave without reminding you of the entirely palindromic (!) version of the song by Weird Al Yankovic.
