Archive for the ‘Personal’ Category

Why global warming affects the poles more than the equator

May 8, 2022

Friends, welcome to Episode 137 in the occasional series of “Things I really should have known a long time ago, but have somehow only realised just now”.

In this case, the focus of my ignorance is the observation that the warming resulting from our emissions of carbon dioxide affects higher latitudes more than lower latitudes.

This is a feature of both our observations and our models. But what I learned this week from reading the 1965 White House Report on Environmental Pollution (link) was the simple reason why.

[Note added after feedback: In this article I am describing an effect that makes the direct effect of increased CO2 levels more important at high latitudes. There are also many feedback effects that amplify the direct effect, and some of these are also more important at high latitudes. Carbon Brief has an excellent article on these feedback effects here, but that is not what I am talking about in this article.]

Why?

The two gases responsible for the majority of greenhouse warming of the Earth’s surface are water vapour and carbon dioxide. But the distribution of these two gases around the planet differs significantly.

  • The concentration of water vapour in the atmosphere depends on the temperature of the liquid surfaces from which the water evaporates. Because the Equator is much hotter than the poles, there is much more water vapour in the atmosphere at the Equator compared with the poles.
  • The concentration of carbon dioxide in the atmosphere is pretty uniform around the globe.

This is illustrated schematically in the figure below.

Click on the image for a larger version. There is more water vapour in the atmosphere in lower latitudes (near the Equator) than at higher latitudes (near the poles). In contrast, carbon dioxide is rather uniformly distributed around the globe.

Of course the truth is a bit more complex than the simplistic figure above might imply.

  • Water vapour from the Equator is transported throughout the atmosphere. But nonetheless, the generality is correct. And the effect is large: the atmosphere above water at 15 °C contains roughly twice as much moisture as the atmosphere above water at 5 °C (there is a short check of this number after this list).
  • Carbon dioxide is mainly emitted in the northern hemisphere, and is then uniformly mixed in the northern hemisphere within a year or so. The mixing with the Southern Hemisphere usually takes two or three years. The variability around the globe is usually within ±2%.
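
That factor of two can be checked with a minimal sketch using the Magnus approximation for the saturation vapour pressure of water. The coefficients below are standard published values rather than anything taken from this article, so treat the result as a rough check only.

```python
import math

def saturation_vapour_pressure_hpa(t_celsius):
    """Approximate saturation vapour pressure over water (hPa),
    using the widely quoted Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

e_5 = saturation_vapour_pressure_hpa(5.0)    # ~8.7 hPa
e_15 = saturation_vapour_pressure_hpa(15.0)  # ~17.0 hPa
print(f"Moisture capacity at 15 °C vs 5 °C: {e_15 / e_5:.2f} x")  # ~1.95, i.e. roughly double
```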

The uniformity of the carbon dioxide distribution can be seen in the figure below from the Scripps Institution of Oceanography, showing the carbon dioxide concentrations measured at (a) Mauna Loa in the Northern Hemisphere, and (b) the South Pole.

Click on the image for a larger version. The carbon dioxide concentrations measured at (a) Mauna Loa in the Northern Hemisphere, and (b) the South Pole. Notice that the data from the South Pole show only small seasonal variations and lag behind the Northern Hemisphere data by a couple of years.

Because of this difference in geographical distribution, the greenhouse effect due to carbon dioxide is relatively more important at higher latitudes where the water vapour concentration is low.

And that is why the observed warming at these latitudes is inevitably higher.

Click on the image for a larger version. The observed temperature anomalies shown as a function of location around the Earth for four recent years. Notice the extreme warming at the highest latitudes in the Northern Hemisphere (Source: UEA CRU).

Once I had read this explanation it seemed completely obvious, and yet somehow I had neither figured it out myself nor knowingly read it in almost 20 years of study!

2022 to 1978: Looking Back and Looking Forwards

May 3, 2022

Friends, it’s been two years since I retired, and since leaving the chaos and bullying at NPL, retirement has felt like the gift of a new life.

I now devote myself to pastimes befitting a man of my advanced years:

  • Drinking coffee and eating Lebanese pastries for breakfast.
  • Frequenting Folk Clubs
  • Proselytising about the need for action on Climate Change
  • Properly disposing of the 99% of my possessions that will have no meaning to my children or my wife after I die.

It was while engaged in this last activity that I came across some old copies of Scientific American magazine.

Last year I abandoned my 40-year subscription to the magazine because it had become almost content-free. But in its day, Scientific American occupied a unique niche that allowed enthusiasts in science and engineering to read detailed articles by authors at the forefront of their fields.

In the January 1978 edition there were a number of fascinating articles:

  • The Surgical Replacement of the Human Knee Joint
  • How Bacteria Stick
  • The Efficiency of Algorithms
  • Roman Carthage
  • The Visual Characteristics of Words

and…

  • The Carbon Dioxide Question

You can read a scanned pdf copy of the article here.

This article was written by George M. Woodwell, a pioneering ecologist. The particular carbon dioxide question he asked was this:

Will enough carbon be stored in forests and the ocean to avert a major change in climate?

The article did not definitively answer this question. Instead it highlighted the uncertainties in our understanding of some of the key processes required to answer it.

In 1978 the use of satellite analysis to assess the rate of loss of forests was in its infancy. And there were large uncertainties in estimates of the difference in storage capacity between native forests, and managed forests and croplands.

The article drew up a global ‘balance sheet’ for carbon, and concluded that there were major uncertainties in our understanding of many of the physical processes by which carbon and carbon dioxide were captured by (or cycled through) Earth’s systems.

Some uncertainty still remains in these areas, but the basic picture has become clearer in the subsequent 44 years of intense study.

So what can we learn from this ‘out of date’ paper?

Three things struck me.

Thing#1

Firstly, from a 2022 perspective, I noticed that there are important things missing from the article!

In considering likely future carbon dioxide emissions, the author viewed the choices as being simply between coal and nuclear power.

Elsewhere in the magazine, the Science and the Citizen column discusses electricity generation by coal with no mention of CO2 emissions. Instead the article simply laments that coal will be in short supply and concludes that:

“There is no question… coal will supply a large part of the nation’s energy future. The required trade-offs will be costly however, particularly in terms of human life and disease.”

Neither article mentions generation of electricity by gas turbines. And neither makes any mention of either wind or solar power generation – now the cheapest and fastest growing sources of electricity generation.

From this I note that, in its specific details, the future is very hard to see.

Thing#2

Despite the difficulties, the author did make predictions and it is fascinating from the perspective of 2022 to look back and see how those predictions from 1978 have worked out!

The article included predictions for 

  • The atmospheric concentration of CO2
  • CO2 emissions from Fossil Fuels

Click on image for a larger version. Figure from the 1978 article by George Woodwell. The curves in green (to be read against the right-hand axis) show two predictions for the atmospheric concentration of CO2. The curves in black (to be read against the left-hand axis) show two predictions for fossil fuel emissions of CO2. In each case, the difference between the two curves represents the uncertainty caused by changes in the way CO2 would be cycled through (or captured by) the oceans and forests. See the article for a detailed rubric.

The current atmospheric concentration of carbon dioxide is roughly 420 ppm and the lowest projection from 1978 is very close.

The fossil fuel emissions estimates are given in terms of the equivalent change in atmospheric CO2, and I am not exactly sure how to interpret this correctly.

Atmospheric concentration of CO2 is currently rising at approximately 2.5 ppm per year, and roughly 56% of fossil fuel emissions end up in the atmosphere. So actual annual emissions in 2022 are equivalent to around 2.5/0.56 ~ 4.5 ppm/year, which is rather lower than the lowest prediction of around 6 ppm/year.
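
Here is that arithmetic as a two-line sketch, using only the figures quoted above:

```python
atmospheric_rise_ppm_per_year = 2.5   # current observed rise in CO2 concentration
airborne_fraction = 0.56              # share of emissions that stays in the atmosphere

# Emissions expressed in the article's units: the atmospheric-equivalent ppm per year
implied_emissions = atmospheric_rise_ppm_per_year / airborne_fraction
print(f"Implied emissions: {implied_emissions:.1f} ppm/year")  # ~4.5, versus ~6 in the lowest 1978 prediction
```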

The article also predicts that this will be the peak in annual emissions, but that has yet to be seen.

The predictions did not cover the warming effect of carbon dioxide emissions, the science of which was in the process of being formulated. ‘Modern’ predictions can be dated to 1981, when James Hansen and colleagues published a landmark paper in Science (Climate Impact of Increasing Atmospheric Carbon Dioxide) which predicted:

A 2 °C global warming is exceeded in the 21st century in all the CO2 scenarios we considered, except no growth and coal phaseout.

This is the path we are still on.

From this I note that the worst predictions don’t always happen, but sometimes they do.

Thing#3

The final observation concerns the prescience of the author’s conclusion in spite of his ignorance of the details.

Click on the image for a larger version. This is the author’s final conclusion in 1978

His last two sentences could not be truer:

There is almost no aspect of national or international policy that can remain unaffected by the prospect of global climatic change.

Carbon dioxide, until now an apparently innocuous trace gas in the atmosphere may be moving rapidly toward a central role as a major threat to the present world order.

 

First Winter with a Heat Pump

April 27, 2022

Friends, our first winter with a heat pump is over.

Last week:

  • I switched off the space heating, and…
  • I changed the heating cycle for domestic hot water (DHW) from night-time (using cheap-rate electricity) to day-time (using free solar electricity).

From now until the end of July, I am hopeful that we will be substantially off-grid.

Let me explain…

No Space Heating 

The figure below shows the temperatures relevant to our heating system for the week commencing Saturday 9th April.

The week started cold, with overnight temperatures close to 0 °C and daytime temperatures peaking at 12 °C.

But the week ended with much warmer temperatures, and even in the absence of any heating flow, the household temperature rose above 21 °C. At this point I decided to switch off the space heating. You can see this on the monitoring data below.

Up to the 15th April, the heat pump would operate each evening – you can see this because radiator temperatures oscillated overnight as the heating circuit struggled to deliver a very low heating power.

From the 16th April – with the space-heating off – you can see the radiator temperatures simply fell after the DHW water heating cycle.

Click image for a larger version. Graph showing four temperatures during the week beginning 9th April 2022. The upper graph shows the temperature of radiator flow and the domestic hot water (DHW). The lower graph shows the internal and external temperatures. In the colder weather at the start of the week, the radiator flow temperatures cycled on and off. In the warmer temperatures at the end of the week, heating stopped automatically. On 16th April I switched the space heating circuit off.

Heating DHW during the day 

The next graph shows the same data for the following week. Now there is no space-heating in the house, but the insulation is good enough that household temperature does not fall very much overnight.

On the 20th April I switched from heating the domestic hot water at night (using cheap rate electricity) to heating during the afternoon (using electricity generated using solar PV).

My plan was that by 2:00 p.m., the battery would be substantially re-charged, and heating the hot water at that time would:

  1. Minimise exports to the grid and maximise self-use of solar-generated electricity.
  2. Heat the domestic hot water using air that was ~ 10 °C hotter than it would be at night – improving the efficiency of the heat pump.

Click image for a larger version. Graph showing four temperatures during the week beginning 16th April 2022. The upper graph shows the temperature of radiator flow and the domestic hot water (DHW). The lower graph shows the internal and external temperatures. The radiator flow was switched off. On 20th April I switched from heating the domestic hot water at night to heating during the day.

One can see that household temperature has fallen a little during the week, but only to around 19 °C, which feels quite ‘spring-like’ in the sunshine.

The big picture 

The graph below shows:

  1. The amount of electricity used by the household
  2. The amount of electricity drawn from the grid

It covers the whole of 2021 and the start of 2022, up to today – almost the end of April. The graphs show running averages over ±2 weeks.

Click image for a larger version. Graph showing the amount of electricity used by the household each day (kWh/day) and the amount of electricity drawn from the grid each day (kWh/day). Over the 8 months of the winter heating season, 27% was supplied by solar generated electricity.

The 4 kWp solar PV system was installed in November 2020 and was just beginning to make a noticeable difference to our electricity consumption in the spring of 2021.

In March 2021 we installed the Powerwall and immediately dropped off the grid for just over 2 months! In mid-summer we had a run of very poor solar days and we began to draw from the grid again.

In July 2021 we installed a heat pump and this extra load (for DHW) coupled with the decline in solar generation caused us to need to draw a few kWh from the grid each day.

Over the 8 month heating season from the start of August to the end of April, the household used 4,226 kWh of electricity for all the normal activities (~ 2,200 kWh) plus heating using the heat pump (~2,000 kWh). Over this period the heat pump delivered just over 7,000 kWh of heat for a seasonally averaged COP of around 3.5.

However, even in this winter season, only 3,067 kWh were drawn from the grid – mostly at low cost. The balance (27%) was solar generated.
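
As a check on these headline figures, here is a minimal sketch using the numbers quoted above (the split between ‘normal’ use and heat pump use, and the heat delivered, are the approximate values given in the text):

```python
household_total_kwh = 4226         # electricity used over the heating season
grid_import_kwh = 3067             # electricity drawn from the grid over the same period
heat_pump_electricity_kwh = 2000   # approximate electricity used by the heat pump
heat_delivered_kwh = 7000          # approximate heat delivered by the heat pump

solar_fraction = 1 - grid_import_kwh / household_total_kwh
seasonal_cop = heat_delivered_kwh / heat_pump_electricity_kwh

print(f"Solar-supplied fraction: {solar_fraction:.0%}")  # ~27%
print(f"Seasonal COP: {seasonal_cop:.1f}")               # ~3.5
```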

Summer and Winter Settings

The optimal strategy for the Powerwall is now becoming clear.

In the Winter season, daily consumption can reach 25 kWh/day and solar generation is only ~ 2 kWh/day. So in this season:

  • We operate the household from the grid during the off-peak hours.
  • We time heavy loads (dishwashing, tumble drying and DHW heating) to take place in the off peak hours.
  • We buy electricity from the grid to fill the battery (13.5 kWh) with cheap rate electricity – and then run the household from the battery for as long as possible. Typically we would need to draw full price electricity from the grid only late in the day.

Click image for a larger version. Images showing the time of day that we have drawn power from the grid (kW) in half-hour periods through the day. Each image shows the average for one month. The graph was assembled using data from the fabulous Powershaper software (link).

In the ‘summer’ season, daily household consumption is ~11 kWh and average solar generation is typically 15 kWh/day. So given that the battery has 13.5 kWh of storage, we can still stay ‘off-grid’ even during a period of two or three dull days.

So during this period:

  • We switch the battery from ‘time-based’ mode to ‘self-powered’ mode.
  • We time heavy loads (dishwashing, tumble drying and DHW heating) to take place in the afternoon.

This year and last year 

Last year (2021), as soon as we installed the Tesla Powerwall battery, we dropped off-grid within days.

But this year (2022) we have an additional daily electrical load. Now we are heating DHW electrically with a heat pump which requires ~ 1.5 kWh/day.

Nonetheless, I hope it will be possible to remain substantially ‘off-grid’ for the next few months. Time will tell.

What Size Heat Pump Do I Need? A Rule of Thumb

April 5, 2022

Friends, a few weeks ago I wrote four articles about using the idea of Heating Degree Days to make simple calculations about heat losses from one’s home.

  • Article 1 was an introduction to the idea of Heating Degree Days as a general measure of the heating demand from a dwelling.
  • Article 2 explained how and why the idea of Heating Degree Days works.
  • Article 3 looked at the variability of Heating Degree Days across the UK, at locations around London, and from year to year.
  • Article 4 introduced some rules of thumb for estimating the Heat Transfer Coefficient for a dwelling and the size of heat pump it requires.

The Rule of Thumb for Heat Pump Sizing is dramatically simple:

The video above is about using these ‘Rules of Thumb’.

I feel these rules could be helpful to both heat pump installers and their clients.

The Powerpoint slides (.pptx) I use in the presentation can be found here.

 

 

 

 

 

Heating Degree Days: 1: A Brilliant Idea

March 15, 2022

Friends, on learning recently about the wonderful idea behind Heating Degree Days (HDDs) I found myself torn between two conflicting emotions.

  • On the one hand, I feel delighted at the cleverness of the concept and I rejoice in my new-found ability to save so much time on calculations about heating houses.
  • But on the other hand, I feel like an idiot for not having known about the idea previously!

Being the positive person that I am, I am writing this gripped by the more positive sentiment and have written four articles on the subject. This is the first article, in which I try to keep things simple-ish. I deal with more complicated questions in the next article and the one that follows that. In the final article I summarise the calculations that HDDs make easy.

HDDs and HTCs?

Heating Degree Days (HDDs) make it easy to calculate the ‘thermal leakiness’ of a dwelling – a quantity technically called its overall Heat Transfer Coefficient (HTC).

The HTC is the most important number to know if you are considering any type of retrofit – insulation, draught-proofing or installing a heat pump. It allows you to answer the question:

“When it’s (say) 8 °C outside, how much heating power (in watts or kWh/day) do I need to keep my house at (say) 20 °C?”

The answer is just the temperature difference (12 °C in this example) multiplied by the HTC.

So if the HTC of a dwelling is 300 W/°C then it would require 12 °C x 300 W/°C = 3,600 W or 3.6 kW to keep that dwelling warm.
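
As a minimal illustration of that sum (using the same numbers as the example above):

```python
def heating_power_watts(htc_w_per_degc, inside_c, outside_c):
    """Steady-state heating power needed to hold a dwelling at inside_c."""
    return htc_w_per_degc * (inside_c - outside_c)

# Example from the text: HTC = 300 W/°C, 20 °C inside, 8 °C outside
power = heating_power_watts(300, 20, 8)
print(f"{power:.0f} W, i.e. {power / 1000:.1f} kW")  # 3600 W, i.e. 3.6 kW
```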

But how do you find the HTC? This is normally quite hard work. It usually requires either extensive surveys and calculations or prolonged measurements. But the idea of Heating Degree Days (HDDs) makes it really easy. There is just one sum to do. Let me explain.

Degree Days in General

The idea of degree-days  is commonplace in agriculture.

For example, in viticulture, the number of Growing Degree Days (GDD) is calculated to allow farmers to estimate when the grapes will flower, or ripen, and when certain pests will emerge.

Click on Image for a larger version. Illustration of the concept of Growing Degree Days (GDDs). See text for more details.

GDDs are calculated as follows:

  • If the average daily temperature is below some Base Temperature – usually 10 °C – then one adds 0 to the number of GDDs
  • If the average daily temperature on a day is above the Base Temperature, then one subtracts the base temperature from the average temperature, and adds the result to the number of GDDs.
    • So if the average temperature on a particular day is 15 °C, and the base temperature is 10 °C, then one adds 5 °C to the GDD total.

Each geographic region has a characteristic number of GDDs available per year, and each grape-type requires a certain number of GDDs for a successful harvest. So using GDDs is a simple way to match vines to regions.

Alternatively, in any particular year, one can use the number of GDDs to discuss whether the grapes are likely to mature earlier or later.
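
As a concrete illustration of this bookkeeping, here is a minimal sketch of how degree-days accumulate from daily average temperatures. The same function counts GDDs (degrees above a base) or HDDs (degrees below a base); the temperatures are made-up example values.

```python
def degree_days(daily_average_temps_c, base_c, heating=False):
    """Accumulate degree-days from a list of daily average temperatures.

    heating=False counts degrees above the base (GDDs);
    heating=True counts degrees below the base (HDDs).
    """
    total = 0.0
    for t in daily_average_temps_c:
        diff = (base_c - t) if heating else (t - base_c)
        total += max(diff, 0.0)   # days on the 'wrong' side of the base add nothing
    return total

# Example: a week of made-up daily average temperatures
week = [8.0, 12.0, 15.0, 11.0, 9.0, 14.0, 16.0]
print(degree_days(week, base_c=10.0))                 # GDDs: 2+5+1+4+6 = 18 °C-days
print(degree_days(week, base_c=16.5, heating=True))   # HDD(16.5)s for the same week: 30.5 °C-days
```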

Heating Degree Days

Heating Degree Days (HDDs) work in a similar way to GDDs, but count days when the temperature falls below a base temperature.

Over a winter season, the number of HDDs provides an estimate for the overall ‘heating demand’ that you want your heating system to meet.

Click on Image for a larger version. Illustration of the concept of Heating Degree Days (HDDs). See text for more details.

To keep things specific in this article I will mainly work with a base temperature of 16.5 °C, and the heating degree days are then known as HDD(16.5)s.

I’ll explain the choice of base temperature in the next article, but the choice corresponds to a thermostat setting of approximately 20 °C which is typical of many UK dwellings.

  • For much of the south of the UK – basically anywhere south of Manchester – the number of HDD(16.5)s per year typically lies in the range 2,150 ± 150 °C-days/year.
  • For regions north of Manchester up to Edinburgh in Scotland, the number of HDD(16.5)s per year typically lies in the range 2,350 ± 150 °C-days/year.
  • You can look up the exact number of HDD(16.5)s for your location for the last three years using the outstanding Heating Degree Days web site. At my home in Teddington, the annual number of HDD(16.5)s is typically 2,000 °C-days/year.

What now?

In order to estimate the heat leak from a dwelling – its Heat Transfer Coefficient (HTC) – you also need to know one more number: how many kWh of heating the dwelling requires in a year.

  • If it’s heated with gas, you can use the annual number of kWh of gas used.
  • If it’s heated with oil, multiply the number of litres of oil used annually by 10.
    • e.g. 2,500 litres of heating oil per year is ~25,000 kWh.

So for example, before I did any work on our home, we used 15,000 kWh of gas each year. I looked up the annual number of HDD(16.5)s for my location: roughly 2,000 °C-days/year.

So to calculate the HTC for my home I divide 15,000 kWh/year by 2,000 °C days/year to give 7.5 kWh/day/°C. This tells me that:

  • To heat my home 1 °C above the outside temperature required an additional 7.5 kWh of gas per day.
  • Or if I reduced the temperature in my home by 1 °C, I would save 7.5 kWh of gas per day.

Equivalently, if we divide by 24 and multiply by 1000, we can convert 7.5 kWh/day/°C into the more common units of watts i.e. 313 watts/°C.

  • So to heat my home 1 °C above the outside temperature required an additional continuous 313 W of heating.
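
Putting these numbers together, here is a minimal sketch of the whole calculation (15,000 kWh/year and 2,000 °C-days/year are the figures quoted above):

```python
annual_heating_kwh = 15000   # annual gas use before any insulation work
annual_hdd_16p5 = 2000       # HDD(16.5) total for Teddington, in °C-days/year

htc_kwh_per_day_per_degc = annual_heating_kwh / annual_hdd_16p5   # 7.5 kWh/day/°C
htc_watts_per_degc = htc_kwh_per_day_per_degc * 1000 / 24         # ~313 W/°C

print(f"HTC ~ {htc_kwh_per_day_per_degc:.1f} kWh/day/°C ~ {htc_watts_per_degc:.0f} W/°C")
```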

Thinking about a heat pump?

Knowing the HTC, one can change a qualitative sense that “it’s a really cold house” into a quantitative measurement – “it has an HTC of 400 W/°C” – that can help one to choose which refurbishments are likely to be effective.

Suppose, for example, we want to work out the size of heat pump required to heat our dwelling in the depths of winter.

Typically the coldest temperatures encountered routinely in the UK are around -3.5 °C i.e. around 20 °C colder than the base temperature.

So to estimate the heat pump power required for my house before insulation, I would simply multiply the heating demand (20 °C) by the HTC (313 W/°C) to yield 6.26 kW.
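
Continuing the sketch above, the sizing estimate is a single multiplication:

```python
htc_watts_per_degc = 313                          # HTC estimated above
design_temperature_difference_c = 16.5 - (-3.5)   # base temperature minus a routinely cold UK day

heat_pump_size_kw = htc_watts_per_degc * design_temperature_difference_c / 1000
print(f"Heat pump size needed ~ {heat_pump_size_kw:.2f} kW")   # ~6.26 kW
```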

Additionally, if we make changes to the dwelling, such as adding triple-glazing, we can estimate the change in HTC by dividing the new annual fuel use (in kWh) by the number of HDD(16.5)s – a number which can be found for any location at the Heating Degree Days web site.

Summary

This article introduced the idea of using Heating Degree Days as an estimate of overall demand.

When combined with a measure of heating energy supplied over the same period, dividing one by the other magically yields the Heat Transfer Coefficient (HTC) for a dwelling.

Knowing the HTC one can measure the effect of any improvements one makes – such as triple-glazing or insulation. And additionally, one can calculate the amount of heating required on a cold winter day.

But you may have some questions. For example:

  • I set my thermostat to 20 °C: Why did I recommend using 16.5 °C as a base temperature?
  • Does it really work?
  • How do HDD(16.5)s vary from one location to another and from year-to-year?

Will aviation eventually become electrified?

March 2, 2022

Friends, I ‘have a feeling’ that aviation will eventually become electrified. At first sight this seems extraordinarily unlikely, but I just have this feeling…

Obviously, I could just be wrong, but let me explain my thinking.

Basics 

The current technology for aviation – jet engines on aluminium/composite frames with wings – relies on the properties of jet fuel – kerosene.

There are two basic parameters for aviation ‘fuel’.

  • Energy density – which characterises the volume required to carry fuel with a certain energy content. It can be expressed in units of megajoules per litre (MJ/l).
  • Specific energy – which characterises the mass required to carry fuel with a certain energy content. It can be expressed in units of megajoules per kilogram (MJ/kg).

Wikipedia have helpfully charted these quantities for a wide range of ‘fuels’ and this figure is shown above with five technologies highlighted:

  • Lithium batteries,
  • Liquid and Gaseous Hydrogen,
  • Kerosene and diesel.

Click on image for a larger version. Chart from Wikipedia showing the specific energy and energy density of various fuels and energy technologies.

A general observation is that hydrocarbon fuels have a much higher energy density and specific energy than any current battery technology. Liquid hydrogen, on the other hand, has an exceptionally high specific energy, but poor energy density: better than batteries but much worse than hydrocarbon fuels.

Lessons from the EVs transition:#1

I think the origin of my feeling about the aviation transition stems from the last 20 years of watching the development of battery electric vehicles (BEVs). What is notable is that the pioneers of BEVs – Tesla and Chinese companies such as Xpeng or BYD – are “all in” on BEVs – they have no interest in Internal Combustion Engine (ICE) vehicles or hybrids. They have no legacy market in ICE vehicles to protect.

‘Legacy Auto’ (shorthand for VW, GM, Ford, Toyota etc.) had poked their toe in the waters of alternative drive-trains quite a few years ago. GM’s Volt and Bolt were notable and Toyota’s Mirai hydrogen fuel cell car was a wonder. But Legacy Auto were comfortable manufacturing ICE vehicles and making profits from it, and saw these alternative energy projects as ‘insurance’ in case things eventually changed.

As I write in early 2022, all the legacy auto makers are in serious trouble. They can generally manufacture BEVs, but not very well – and none of them are making money from BEVs. They have very poor market penetration in China, the world’s largest EV car-market. In contrast, Tesla are popular in China, America and Europe, and make roughly 20% profit on every car they sell.

So one lesson from the BEV transition is that the legacy industry who have invested billions in an old technology, may not be the pioneers of a new way of doing things.

Lessons from the EVs transition:#2

How did BEVs overcome the awesome advantages of hydrocarbon fuels over lithium batteries in terms of energy density and specific energy?

First of all, ICEs throw away about 75% of their advantage because of the way they operate as heat engines. This reduces their energy density advantage over batteries to just a factor of 10 or so.

Secondly, there is the fact that ICE cars contain many heavy components – such as engines, gearboxes and transmissions – that aren’t needed in a BEV.

Despite this, BEV cars still generally have a weight and volume disadvantage compared with ICE cars. But this disadvantage has been overcome by careful BEV-specific design.

By placing the large, heavy, battery pack low down, designers can create pleasant vehicle interiors with good handling characteristics. And because the ability to draw power rapidly from batteries is so impressive, the extra mass doesn’t materially affect the acceleration of the vehicle.

EV range is still not as good as that of a diesel car with a full tank. But it is now generally ‘good enough’.

And once EVs became good enough to compete with ICE vehicles, the advantages of EVs could come to the fore – their ability to charge at home, low-running costs, quietness, potential low carbon emissions and of course, zero in situ emissions.

And significantly, BEVs are now software platforms and full electronic control of the car allows for some capabilities that ICE vehicles will likely never have.

Lessons from the EV transition:#3

Despite Toyota’s massive and long-term investment in Hydrogen Fuel Cell (HFC) cars, it is now clear that hydrogen will be irrelevant in the transition away from ICE vehicles. Before moving on to look at aviation, it is interesting to look at why this is so.

The reason was not technological. HFC cars using compressed hydrogen fuel were excellent – I have driven one – with ranges in excess of 320 km (200 miles). And they were excellent long before BEVs were excellent. But the very concept of re-fuelling with hydrogen was the problem. Hydrogen is difficult to deal with, and fundamentally, if one starts with a certain amount of electrical power, much less of it gets to the wheels with an HFC-EV than with a BEV.

The very idea of an HFC car is – I think – a product of imagining that there would be companies akin to petrochemical companies who could sell ‘a commodity’ in something like the way oil companies sold petrol in the 20th Century. BEVs just don’t work that way.

Interestingly, the engineering problems of handling high-pressure hydrogen were all solved in principle. But this just became irrelevant.

Cars versus Aeroplanes

So let’s look at how energy density and specific energy affect the basic constraints on designs of cars and aeroplanes.

50 litres of diesel contains roughly 1,930 MJ of energy. The table below shows the mass and volume of other fuels or batteries which contain this same energy.

                   Mass (kg)   Volume (l)
Kerosene                  45           55
Diesel                    42           50
Hydrogen HP               14          364
Hydrogen Liquid           14          193
Lithium Battery        4,825        1,930

We see that batteries look terrible – the equivalent energy storage would require 4.8 tonnes of batteries occupying almost 2 cubic metres! Surely BEVs are impossible?!

But as I mentioned earlier, internal combustion engines waste around 75% of their fuel’s embodied energy in the form of heat. So a battery only needs to store 25% of the energy – and hence has only 25% of the mass and volume in the table above – to deliver the same energy to the wheels.

                   Mass (kg)   Volume (l)
Lithium Battery        1,206          483

So, we see that the equivalent battery pack is about a tonne heavier than the fuel for a diesel car.
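
Here is a minimal sketch of where these numbers come from. The specific energies and energy densities below are round values consistent with the tables above and with the Wikipedia chart referenced earlier; treat them as approximate.

```python
# Approximate properties: (specific energy MJ/kg, energy density MJ/l)
fuels = {
    "Kerosene":        (43.0, 35.0),
    "Diesel":          (45.6, 38.6),
    "Hydrogen HP":     (142.0, 5.3),
    "Hydrogen Liquid": (142.0, 10.0),
    "Lithium Battery": (0.4, 1.0),
}

target_mj = 1930   # the energy in 50 litres of diesel
for name, (mj_per_kg, mj_per_l) in fuels.items():
    mass_kg = target_mj / mj_per_kg
    volume_l = target_mj / mj_per_l
    print(f"{name:16s} {mass_kg:8.0f} kg {volume_l:8.0f} l")

# An ICE wastes ~75% of its fuel's energy as heat, so a battery only needs to
# store ~25% of the energy to deliver the same work at the wheels.
battery_mass_kg = 0.25 * target_mj / fuels["Lithium Battery"][0]
battery_volume_l = 0.25 * target_mj / fuels["Lithium Battery"][1]
print(f"Battery with 25% correction: {battery_mass_kg:.0f} kg, {battery_volume_l:.0f} l")
```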

But this doesn’t include the engine required to make the diesel fuel work. So one can see how, by clever design and by exploiting the fact that electric motors are lighter than engines, one can create a BEV that, while heavier than an ICE car, is still competitive.

Indeed, BEVs now outperform ICE cars on almost every metric that anyone cares about, and will continue to get better for many years yet.

Let’s do the same analysis for aeroplanes. A modern jet aeroplane typically carries 100 tonnes of kerosene with an energy content of around 43 × 10⁵ MJ. This is sufficient to fly a modern jet (200 tonnes plus 30 tonnes of passengers) around 5,000 miles or so.

The table below shows the mass and volume of other fuels or batteries which contain this same energy. Notice that the units are no longer kilograms and litres but tonnes and cubic metres.

                Mass (tonnes)   Volume (m³)
Kerosene                  100           123
Diesel                     94           111
Hydrogen HP                31           811
Hydrogen Liquid            31           430
Lithium Battery        10,750         4,300

Now things look irrecoverably impossible for batteries! The batteries would weigh over 10,000 tonnes and occupy a ridiculous volume. Jet turbines are more thermodynamically efficient than ICEs, but even assuming (say) 50% efficiency, the equivalent batteries would still weigh ~5,000 tonnes and occupy ~2,000 m³.

Even with a factor of 10 increase in battery energy density – which is just about conceivable but not happening any time soon – the battery would still weigh 1,000 tonnes!

Does it get any better for shorter ranges? Not much. Consider how much energy is stored in 10 tonnes of kerosene (~43 × 10⁴ MJ). This is sufficient to fly a modern jet – weighing around 50 tonnes unladen and carrying 20 tonnes of passengers – around 500 miles or so.

                Mass (tonnes)   Volume (m³)
Kerosene                   10            12
Diesel                      9            11
Hydrogen HP                 3            81
Hydrogen Liquid             3            43
Lithium Battery         1,075           430

Even assuming 50% jet efficiency, batteries with equivalent energy would still weigh ~500 tonnes and occupy ~200 m³. Even after a factor of 10 increase in battery energy density, things still look pretty hopeless.

So can we conclude that battery electric aviation is impossible? Surprisingly, No.

And yet, it flies.

Jet engines burning kerosene have now reached an astonishing state of technological refinement.

But jet engines are also very expensive, which makes the economics of airlines challenging. And despite improvements, jets are also noisy. And of course, they emit CO2 and create condensation trails that affect the climate.

In contrast, electric motors are relatively cheap, which means that electric aeroplanes (if they are possible) would be much cheaper, and require dramatically less engine maintenance. These features are very attractive for airlines – the people who buy planes. And the planes would be quiet and have zero emissions – attractive for people who fly or live near airports.

And several companies are seeking to exploit these potential advantages. Obviously, given the fundamental problem of energy density I outlined above, all the projects have limitations. Mostly the aeroplanes proposed have limited numbers of passengers and limited range. But the companies all impress me as being serious about the engineering and commercial realities that they face. And I have been surprised by what appears to be possible.

Here are a few of the companies that have caught my attention.

Contenders

In the UK, Rolls Royce and partners have built an impressive single-engined aircraft which flies much faster than equivalent ICE powered aircraft.

Their Wikipedia page states the batteries have a specific energy of 0.58 MJ/kg, about 50% higher than I had assumed earlier in the article. The range of this plane is probably tiny – a few tens of kilometres – but this number will only increase in the coming years.

This aeroplane is really a technology demonstrator rather than a seedling commercial project. But I found it striking to see the plane actually flying.

In Sweden, Heart Aerospace have plans for a 19-seater short-hop passenger craft with 400 km of range. Funded by Bill Gates amongst others, they have a clear and realistic engineering target.

In an interview, the founder explained that he was focussing on the profitability of the plane. In this sense the enterprise differs from the Rolls Royce project. He stated that as planned, 2 minutes in the air will require 1 minute of re-charging. He had clear markets in mind in (Sweden, Norway, and New Zealand) where air travel involves many ‘short hops’ via transport hubs. And the expected first flights will be soon – 2025 if I have it correct.

In Germany, Lilium are building innovative ducted-fan planes. Whereas Heart’s planes and Rolls Royce’s demonstrator projects are conventional air-frames powered by electric motors, Lilium have settled on completely novel engineering possibilities enabled by electrical propulsion technology. Seeing their ‘Star Wars’ style aircraft take off and land is breathtaking.

Back in the UK, the Electric Aviation Group are advertising HERA as a 90-seater short route airliner with battery and hydrogen fuel-cell technology (not a turbine). This doesn’t seem to be as advanced as the other projects I have mentioned but illustrates the way that different technologies may be incorporated into electric aviation.

What about Hydrogen Turbines?

Legacy Aeromaker Airbus are advertising development of a hydrogen turbine demonstrator. It’s a gigantic A380 conventional jet airliner with a single hydrogen turbine attached. (Twitter video)

Stills from a video showing how the hydrogen turbine demonstrator will work. A single hydrogen turbine will be attached to a kerosene-driven aeroplane by 2035.

The demonstrator looks very clever, but I feel deeply suspicious of this for two sets of reasons: Technical reasons and ‘Feelings’.

Technical.

  • Fuel Volume: To have the same range and capabilities as an existing jet – the promise that seems to be being advertised – the cryogenic (roughly -250 °C) liquid hydrogen would occupy 4 times the volume of the equivalent kerosene. It likely could not be stored in the wings because of heat leakage, and so a big fraction of the useful volume within an aeroplane would be sacrificed for fuel.
  • Fuel Mass: Although the liquid hydrogen fuel itself would not weigh much, the tanks to hold it would likely be much heavier than their kerosene equivalents. My guess is that that there would not be much net benefit in terms of mass.
  • Turbine#1: Once the stored liquid hydrogen is evaporated to create gas, its energy density plummets. To operate at a similar power level to a conventional turbine, the volume of hydrogen entering the combustion chamber per second will have to be (very roughly) 40 times greater.
  • Turbine#2: Hydrogen burns in a different way from kerosene. For example embrittlement issues around the high pressure, high temperature hydrogen present at the inlets to the combustion chamber are likely to be very serious.
  • I don’t doubt that a hydrogen turbine is possible, but the 2035 target advertised seems about right given the difficulties.
  • Performance: And finally, assuming it all works as planned, the aircraft will still emit NOx, and will still be noisy.

Feelings.

  • I feel this is a legacy aero-maker trying to create a future world in which they are still relevant despite the changed reality of the climate crisis.
  • I suspect these engines – assuming the technical problems are all solved – will be even more expensive than current engines.
  • I feel the timeline of 2035 might as well be ‘never’. It allows Airbus to operate as normal for the next 13 years – years in which it is critical that we cut CO2 emissions.
  • I suspect that in 13 years – if electric aviation ‘gets off the ground’ (sorry!) – then it will have developed to the point where the short-haul end of the aviation business will be in their grasp. And once people have flown on smaller quieter aircraft, they will not want to go back.
  • Here is Rolls Royce’s short ‘Vision’ video.

And so…

I wrote this article in response to a Twitter discussion with someone who was suggesting that cryogenic liquid-hydrogen-fuelled jets would be the zero-emission future of aviation.

I feel that the idea of a cryogenic hydrogen aircraft is the last gasp of a legacy engine industry that is trying to stay relevant in the face of a fundamentally changed reality.

In contrast, electrical aviation is on the verge of becoming a reality: planes are already flying. And motors and batteries will only get better over coming decades.

At some point, I expect that electrical aviation will reach the point where its capabilities will make conventional kerosene-fuelled aeroplanes uneconomic, first on short-haul routes, and then eventually – though I have no idea how! – on longer routes.

But…

…I could be completely wrong.

Heat Pump Explainer

February 24, 2022

Friends, everyone is talking about heat pumps!

But many people are still unfamiliar with the principles behind their ‘engineering magic’.

This ‘explainer’ video was shot on location in my kitchen and back garden, and uses actual experiments together with state-of-the-art Powerpoint animations (available here) to sort-of explain how they work.

I hope it helps!

 

Nuclear Fusion is Irrelevant

February 14, 2022

Click for a larger image. News stories last week heralded a major breakthrough in fusion research. Links to the stories can be found below.

Friends, last week we were subjected to a press campaign on behalf of the teams of scientists and engineers who are carrying out nuclear fusion research.

Here are links to some of the stories that reached the press.

  • BBC Story
    • “If nuclear fusion can be successfully recreated on Earth it holds out the potential of virtually unlimited supplies of low-carbon, low-radiation energy.”
  • Guardian Story #1
    • Prof Ian Chapman, the chief executive of the UK Atomic Energy Authority said “It’s clear we must make significant changes to address the effects of climate change, and fusion offers so much potential.”
  • Guardian Story#2 (from last year)
    • The race to give nuclear fusion a role in the climate emergency

The journalists add little to these stories – they mainly consist of snippets from press releases spun together to seem ‘newsy’. All these stories are colossally misleading.

Floating in the background of these stories is the idea that this research is somehow relevant to our climate crisis. The aim of this article is to explain to you that this is absolutely not true.

Even with the most optimistic assumptions conceivable, research into nuclear fusion is irrelevant to the climate crisis.

Allow me to explain how I have come to this conclusion.

Research into a practical fusion reactor for electricity generation can be split into two strands: an ‘old’ government-backed one and a ‘new’ privately-financed one.

The state of fusion research: #1 government-backed projects

The ‘old’ strand consists of research funded by many governments at JET in the UK and the colossal new ITER facility being constructed in France.

In this strand, ITER will begin operation in 2025, and after 10 years of ‘background’ experiments they will begin energy-generating experiments with tritium in 2035, experiments which are limited by design to just 4000 hours. If I understand correctly, operation beyond this limit will make ITER too radioactive to dismantle.

The key operating parameter for fusion reactors is called Q: the ratio of the heat produced to the energy input. And the aim is that by 2045 ITER will have achieved a Q of 10 – producing 500 MW of heat for 400 seconds with only 50 MW of input power to the plasma.

However ITER will only generate heat, not electricity. Also, it will not create any tritium but will instead only consume it. Following on from ITER, a DEMO reactor is planned which will have a Q value in the range 30-50, and which will generate electrical power, and be able to breed tritium in the reactor.

So on this ITER-proposed time-line we might expect the first actual electricity generation – maybe 100 MW of electrical power – in perhaps 2050.

And then assuming that these reactors take 10 years to build and that the design will evolve a little, it will be perhaps 2070 before there are ten or so operating around the world.

You may consider that research into a technology which will not yield results for 50 years may or may not be a good idea. I am neutral.

But it is definitely irrelevant to our climate crisis: we simply do not have 50 years in which to eliminate carbon dioxide emissions from electricity generation in the UK.

And this is on the ITER-proposed timeline which I consider frankly optimistic. If one considers some of the technical problems, this optimism seems – to put it politely – unjustified.

Here are three of the issues I keep at the top of my file in case I meet some fusion scientists at the folk club.

  • Q is the ratio of the heat energy released by the plasma to the heat energy injected into it. But in order to be effective we have to generate net ELECTRICAL energy. So we really need to take account of the fact that thermodynamics limits electrical generation to ~30% of the thermal energy produced. Additionally we need to include the considerable amounts of energy used to operate the complex machinery of a reactor. So we really need to consider a wider definition of Q: the ratio of output to input energies for the entire reactor. Sabine Hossenfelder has commented on this issue. But basically, Q needs to be a lot bigger than 10. (A rough illustrative sketch of this point follows this list.)
  • Materials. The inside of the reactor is an extraordinarily hostile place with colossal fluxes of neutrons passing through every part of the structure. After operation has begun, no human can ever enter the environment again – and it is not clear to me that a working lifetime of say 35 years at 90% availability is realistic. Time will tell.
  • Tritium. The reactor consumes tritium – possibly the most expensive substance on Earth – and for each nucleus of tritium destroyed a single neutron is produced. The neutrons so produced must be captured to produce the heat for electrical generation. But the neutrons are also needed to react with lithium to produce more tritium. Since some neutrons are inevitably lost, the plan is for extra neutrons to be ‘bred’ by bombarding suitable materials with neutrons, which then produce a shower of further neutrons – 2 or 3 for every incident neutron. And these neutrons can then in principle be used to produce tritium. But aside from being technically difficult, this breeding process also produces long-lived radioactive waste – something fusion reactors claim not to do.
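
To illustrate why a plasma Q of 10 is not enough, here is a rough sketch. The conversion efficiencies and plant loads below are my own illustrative round numbers, not figures from ITER, so treat the result as indicative only.

```python
# Illustrative round numbers only – not ITER's official figures
fusion_heat_mw = 500          # heat released by the plasma (plasma Q = 10)
plasma_heating_mw = 50        # heating power delivered into the plasma
thermal_to_electric = 0.30    # typical steam-cycle conversion efficiency
heating_efficiency = 0.5      # assumed wall-plug efficiency of the plasma heating systems
plant_loads_mw = 50           # assumed cryogenics, magnets, pumps, etc.

gross_electric_mw = thermal_to_electric * fusion_heat_mw
electricity_consumed_mw = plasma_heating_mw / heating_efficiency + plant_loads_mw
net_electric_mw = gross_electric_mw - electricity_consumed_mw

print(f"Gross electrical output: {gross_electric_mw:.0f} MW")        # 150 MW
print(f"Electricity consumed:    {electricity_consumed_mw:.0f} MW")  # 150 MW
print(f"Net electrical output:   {net_electric_mw:.0f} MW")          # ~0 MW
```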

In short, when one considers some of these technical problems, optimism that this research path will produce significant power on the grid in 2070 seems to me to be unjustified.

But what about this new ‘breakthrough’?

The breakthrough was not a breakthrough. It was undertaken because, in previous experiments, the walls of the reactor were found to absorb some of the fuel! So this ‘breakthrough’ represented a repeat of a previous experiment, but with new wall materials in place.

You can relive the press conference here.

Starting with a much larger amount of energy, they managed to produce 59 megajoules (MJ) of energy from fusion in about 5 seconds.

59 MJ is about 16.4 kWh of energy, which is sufficient to heat water for around 500 cups of tea, more than a cup of tea each for all the scientists and engineers working on the project.

For comparison, the 12 solar panels on my house will produce this easily in a day during the summer. To generate the energy in 5 seconds rather than 12 hours would require more panels: a field of panels roughly 200 m x 250 m, which would cost a little under 1 million pounds.
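
Here is a rough sketch of that comparison. The assumed panel output per square metre is my own round number, not a figure from the article:

```python
energy_mj = 59            # energy released in the JET experiment
burn_time_s = 5           # approximate duration of the burn

average_power_w = energy_mj * 1e6 / burn_time_s        # ~11.8 MW

panel_output_w_per_m2 = 200   # assumed output of modern panels in good sunshine
area_m2 = average_power_w / panel_output_w_per_m2      # ~59,000 m²

print(f"Average fusion power: {average_power_w / 1e6:.1f} MW")
print(f"Equivalent solar field: ~{area_m2:,.0f} m², e.g. roughly 240 m x 250 m")
```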

So the breakthrough is modest in absolute terms. But as I mentioned above, after billions more in funding, and another 20 years of research, the scientists expect to extend this generating ‘burn’ from 5 seconds to 400 seconds at a much higher power level.

In my opinion, JET and ITER are a complete waste of money and should be shut down immediately. The resources should be transferred to building out solar and wind energy projects alongside battery storage.

The state of fusion research: #2 privately-backed projects

The ‘new’ strand of fusion research consists of activities carried out primarily by privately-funded companies.

What? If the massive resources of governments can only promise fusion by 2070, how can private companies hope to make progress?

The answer is that JET and ITER were planned before a key technical advance was made, and they are committed to proceeding without incorporating that advance! It’s a multi-billion pound version of “I’ve started so I’ll finish”. It is utter madness, and doubly guarantees the irrelevance of ITER.

The technical advance is the achievement of superconducting wire which can create magnetic fields twice as large as was previously possible. It turns out that the volume of plasma required to achieve fusion scales inversely with the fourth power of the magnetic field. So doubling the magnetic field makes the final reactor potentially 16 times smaller!

This also makes it dramatically cheaper, requiring investment on the order of $100 million rather than billions of dollars. Critically, reactors can exploit the concept of Small Modular Reactors (SMRs) which can be mass-produced in a factory and shipped to a site. Potentially the first reactors could be built in years rather than decades, and the technology iterated to produce advances.

I have written about this previously. With some qualifications, I think this activity is generally not crazy  (it is certainly much less crazy than JET and ITER) but success is far from guaranteed.

A key unresolved question with this technology concerns its potential timeline for delivery of a working power plant.

The reactors face essentially the same problems as the much larger ITER reactor, and these are not problems that can be solved in months. So let’s suppose that the first demonstration of Q>1 is achieved in just 5 years (2027), and that all the technical problems with respect to electricity generation required only a further 10 years (2037). Given the difficulties in planning, let’s optimistically assume that the first production plant could get built just 5 years after that, in 2042.

The ‘S’ in SMR means reactors would be small, with a thermal output of perhaps 150 MW and an electrical output of perhaps 50 MW. This is small on the scale of typical thermal generation plant. For example, Hinkley Point C is designed to output 3,200 MW of electrical power, i.e. more than 60 times larger than a hypothetical SMR fusion reactor.

So if we assume a rapid roll out and no technical or societal problems, then perhaps these reactors might generate significant power onto the grid in perhaps 2050. Nominally this is 20 years ahead of ITER.

Relevance.

With optimistic assumptions concerning technical progress, we might hope for fusion reactors to begin to make a significant contribution to the grid somewhere between 2050 and 2070, depending on which route is taken.

That is already too late to make any contribution to our climate crisis.

We need to deploy low-carbon technologies now. And if we have a choice between reducing carbon dioxide emissions now, or in 30 – 50 years, there is no question about what we should do.

Cost.

We also need to consider the likely cost of the electricity produced by a fusion reactor.

Like conventional fission-based nuclear power, the running costs should be low. Deuterium is cheap and the reactor should generate a surplus of tritium.

The majority of the cost of conventional nuclear power is the cost of the capital used to construct the reactor. If I recall correctly, it amounts to around 95% of the cost of the electricity.

It is hard to imagine that a fusion reactor would be cheaper than a fission reactor – it would be at the limit of manageable engineering complexity. So we might imagine that the costs of fusion-generated electricity would be similar to the cost of nuclear power – which is already the most expensive power on the grid.

In contrast, the cost of renewable energy (solar and wind) has fallen dramatically in recent years. Solar and wind are now the cheapest ways of making electricity there have ever been. And their cost – along with the cost of battery storage – is still falling.

So it seems that after waiting all these years, the fusion-based electricity would in all likelihood be extraordinarily expensive.

Summary.

The idea of generating electricity from nuclear fusion has been seen as a technological fix for climate change. It is not.

Even the most optimistic assumptions possible indicate that fusion will not make any significant contribution to electricity supplies before 2050.

This is too late to help us out in our climate crisis, which is happening now.

Additionally, the cost of the electricity might be expected to exceed the cost of power from conventional nuclear power stations – the most expensive electricity currently on the UK grid.

If, as an alternative, we invested in renewable generation from wind, solar and tidal resources, together with ever cheaper storage, we could begin to address our climate crisis now in the knowledge that the technology we were deploying would likely only ever get better. And cheaper.

 

 

The James Webb Telescope: it’s all done with mirrors

January 26, 2022

Click image for a larger  version. The James Webb Telescope has reached the L2 point!

Friends, Hurray! The James Webb Space Telescope (JWST) has deployed all its moveable parts and reached its lonely station at the L2 point, far beyond the Moon.

In a previous article I mentioned that back in 2018 I had been fortunate enough to meet with Jon Arenberg from Northrop Grumman, and to see the satellite in its clean room at their facility in Redondo Beach, California.

  • In that article I outlined in broad terms why the satellite is the shape it is.
  • In this article I want to mention two other people who have made key contributions to the JWST.

I was fortunate enough to meet these people during my ‘career’ at NPL. And as I hope to explain, they have taken manufacturing and metrology to the very limits of what is possible in order to make a unique component for the JWST.

It’s all done with Mirrors

The 18 hexagonal mirrors of the JWST are iconic, but in fact there are many more mirrors inside the telescope.

JWST uses mirrors rather than lenses to guide the light it has captured, because at the infrared wavelengths for which the JWST is designed, glass and almost all other materials strongly absorb i.e. they are opaque!

In contrast, during reflection from a metal surface, light only enters the material of the mirror in a very thin layer at its surface.

Consequently, mirror surfaces can guide light of any wavelength with very low absorption.

Form and Smoothness

The creation of a mirror surface requires a machining operation in which a metal component – most commonly made from aluminium – is cut into a specific form with an exceptionally smooth surface.

  • The surface form must be close to the shape it was designed to be.
    • Otherwise the light will not be directed to a focus and the images will be blurred.
  • The surface roughness – the ‘ups and downs’ of the surface – must be much less than the wavelength of the light the mirror must reflect.
    • Otherwise the light will be ‘scattered’ from the surface and very dim objects will be obscured by light scattered from nearby bright objects

The large mirror surfaces on the primary and secondary mirrors are manufactured in a complex process that involves machining the surface of the highly-toxic beryllium metal, and then painstakingly grinding and polishing the surface into shape. Once completed, the finished surface is coated in gold.

Each step in the manufacturing process is interspersed with a measurement procedure to assess the roughness of the surface and the closeness to its ideal form. Ultimately the limit to achievable manufacturing perfection is simply the limit of our ability to make measurements of its surface.

For small mirrors, this polishing process is typically not possible and the surface must be cut directly on a sophisticated lathe.

To achieve the mirror-smooth surfaces, the lathe uses an ultra-sharp diamond-tool which can remove just a few thousandths of a millimetre of material at a time and leave a surface with near-atomic smoothness.

But typically the required surface form is not part of a sphere or a cylinder. To cut such ‘aspheric’ surface forms on a lathe requires that the lathe tool move on a complicated trajectory during a single rotation of the workpiece: the solid geometry and mathematics to achieve this are hellish.

To achieve the required form, the trajectory that the tool must follow relative to the workpiece is calculated millisecond-by-millisecond in MATLAB, and then instructions are downloaded to the lathe.

The JWST is filled with mirrored surfaces of bewildering complexity, channelling infrared light via mirrors to measuring instruments. And slightly to my surprise – and perhaps to yours – I have been told that considering all the mirror surfaces inside JWST, more were made in the UK than in any other country.

Splitter

One of the key instruments onboard the JWST is the Mid-Infrared Instrument (MIRI) which analyses light with wavelengths from 5 thousandths of a millimetre out to 28 thousandths of a millimetre.

And inside MIRI, one of the most complex mirrored-components is a ‘slicer’ or ‘splitter’ component.

Its precise function is hard to describe: the figure below is from an almost incomprehensible paper. My opinion is that it is incomprehensible if you don’t already know how it works!

Click on image for a larger version. Figure 14 of “The European optical contribution to the James Webb Space Telescope”. See the end of the article for reference. On the right is the ‘splitter’, which redirects light in an almost inconceivably complex pattern.

So let me have a go.

  • Imagine parallel light from the main telescope mirrors falling on to a square section of a parabola. It is a property of a parabola that this light will be directed towards a single point: the focus of the parabola.
  • Now imagine splitting the square into two, and preparing each half of the square as sections of two different parabolas with their foci in two different places. Now parallel light falling on the component will be directed towards two different locations – with 50% of the light proceeding to each focus.

Click on image for a larger version. Illustration of the function of the splitter component. The left-hand panel shows parallel light falling onto a fraction of a parabolic surface being directed towards a focus. The right-hand panel shows parallel light falling onto fractions of two different parabolic surfaces, and being directed towards two different foci. The Cranfield splitter has slices of 21 different parabolas – each within 10 nanometres of its ideal form!
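
If, like me, you find a property easier to believe after checking it numerically, here is a minimal sketch in Python – with arbitrary illustrative numbers – confirming the key fact each slice exploits: a ray arriving parallel to the axis of a parabola is reflected through its focus.

```python
import numpy as np

# Check (illustrative numbers) that a ray travelling parallel to the axis of a
# parabola z = r^2 / (4*f) is reflected through the focus at (r, z) = (0, f).
f = 0.25  # focal length, arbitrary units

for r in [0.02, 0.05, 0.10]:                   # where the incoming ray hits the surface
    z = r**2 / (4.0 * f)
    normal = np.array([-r / (2.0 * f), 1.0])   # surface normal (unnormalised)
    normal /= np.linalg.norm(normal)
    d_in = np.array([0.0, -1.0])               # ray travelling straight down, parallel to the axis
    d_out = d_in - 2.0 * np.dot(d_in, normal) * normal   # law of reflection
    t = -r / d_out[0]                          # distance along the reflected ray to the axis (r = 0)
    print(f"ray at r = {r:.2f} crosses the axis at z = {z + t * d_out[1]:.3f} (focus at {f})")
```

Every ray, wherever it strikes the surface, crosses the axis at the focus – and a slice prepared as part of a different parabola sends its share of the light to a different focus.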

Now imagine repeating this division and machining a component with sections of 21 different parabolas in thin slices each just a couple of millimetres across. This is what Paul Morantz and colleagues manufactured at Cranfield University in 2012. There’s a photograph of the component below.

Click on image for a larger version. An early prototype of the splitter mirror for the JWST in the display cabinet at Cranfield University. It is – almost by definition – impossible to photograph a mirror surface. But notice that each of the 21 mirror surfaces reflects light from a different portion of the label in front of it.

Each of the 21 different surfaces had to conform to its specified form within ±10 nanometres (IIRC). And to verify this required measuring that surface with that uncertainty. Measuring a complex surface with this uncertainty is at the limit of what is possible: re-machining the surfaces to correct for detected form errors is just breathtaking!

At each of the 21 different focus points is a separate instrument measuring at slightly different wavelengths.

The outcome of all this ingenuity is a single custom component weighing just a few grams that simplifies the optics of the instrument, allowing more weight and space to be devoted to measuring instruments.

Pride and Wonder

Back in 2013 I was honoured to work with Paul Morantz and his colleague Paul Shore on the creation of the Boltzmann hemispheres which were used to make the most accurate temperature measurements in history.

The two hemispheres they created were assembled to make a cavity with a precisely non-spherical shape with a form uncertainty below 0.001 mm at all points over the surface.

But before the Pauls could get to work on our project, they had to finish the splitter for JWST otherwise its anticipated launch date might be delayed. [As it happened, they probably could have taken a little more time ;-)]

After completing the splitter I remember the disappointed look on Paul Morantz’s face when I explained that the Boltzmann project ‘only’ needed form uncertainty of 0.001 mm.

I cannot imagine their pride at having constructed such a wondrous object that is being sent to this remote point in space to make measurements on the most distant, most ancient light in the Universe.

I feel proud just to have known them as colleagues.

The Physics of Guitar Strings

January 24, 2022

Friends, regular readers may be aware that I play the guitar.

And sleuths amongst you may have deduced that if I play the guitar, then I must occasionally change guitar strings.

The physics of tuning a guitar – the extreme stretching of strings of different diameters – has fascinated me for years, but it is only now that my life has become pointless that I have managed to devote some time to investigating the phenomenon.

What I have found is that the design of guitar strings is extraordinarily clever, leaving me in awe of the companies which make them. They are everyday wonders!

Before I get going, I just want to mention that this article will be a little arcane and a bit physics-y. Apologies.

What’s there to investigate?

The first question is why are guitar strings made the way they are? Some are plain ‘rustless steel’, but others have a steel core around which are wound super-fine wires of another metal, commonly phosphor-bronze.

The second question concerns the behaviour of the thinnest string: when first tuned, it spontaneously de-tunes – sometimes very significantly – and it continues to do so for several hours after first being stretched.

Click image for a larger version. The structure of a wire-wound guitar string. Usually the thickest four strings on a guitar are made this way.

Of course, I write these questions out now like they were in my mind when I started! But no, that’s just a narrative style.

When I started my investigation I just wanted to see if I understood what was happening. So I just started measuring things!

Remember, “two weeks in the laboratory can save a whole afternoon in the library“.

Basic Measurements

The frequency with which a guitar string vibrates is related to three things:

  • The length of the string: the longer the string, the lower the frequency of vibration.
  • The tension in the string: for a given length, the tighter the string, the higher the frequency of vibration.
  • The mass per unit length of the string: for a given length and tension, heavier strings lower the frequency of vibration.

In these experiments, I can’t measure the tension in the string directly. But I can measure all the other properties:

  • The length of string (~640 mm) can be measured with a tape measure with an uncertainty of 1 mm
  • The frequency of vibration (82 Hz to 330 Hz) can be measured by tuning a guitar against a tuner with an uncertainty of around 0.1 Hz
  • The mass per unit length can be determined by cutting a length of the string and measuring its length (with a tape measure) and its mass with a sensitive scale. So-called ‘jewellery scales’ can be bought for around £30 and will weigh up to 50 g to the nearest milligram!

Click image for a larger version. Using a jewellery scale to weigh a pre-measured length of guitar string. Notice the extraordinary resolution of 1 mg. Typically repeatability was at the level of ± 1 mg.

These measurements are enough to calculate the tension in the string, but I am also interested in the stress to which the material of the string is subjected.

The stress in the string is defined as the tension in the string divided by the cross-sectional area of the string. I can work out the area by measuring the diameter of the string with digital callipers. These devices can be bought for under £20 and will measure string diameters with an uncertainty of 0.01 mm.

However there is a subtlety when it comes to working out the stress in the wire-wound strings. The stress is not carried across the whole cross-section of the wire, but only by its steel core – so one must measure the diameter of the core of these strings (which can be done at their ends), not the overall diameter of the playing length.

Click image for a larger version. Formulas relating the tension T in a string (measured in newtons); the frequency of vibration f (measured in hertz); the length of the string L (measured in metres) and the mass per unit length of the string (m/l) measured in kilograms per metre.
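
For readers who prefer the relation written out, the standard textbook formula summarised in the figure – and its rearrangement to give the tension – is:

```latex
f = \frac{1}{2L}\sqrt{\frac{T}{(m/l)}}
\qquad \Longrightarrow \qquad
T = 4 L^{2} f^{2} \, (m/l)
```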

Results#1: Tension

The graph below shows the calculated tension in the strings in their standard tuning.

Click image for a larger version. The calculated tension in the 6 strings of a steel-strung guitar. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The tension is high – roughly 120 newtons per string. If the tension were maintained by a weight stretching the string over a wheel, there would be roughly 12 kg on each string!
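
As a quick sanity check, here is the calculation for the thickest string using round, illustrative numbers rather than my exact measurements:

```python
# Sanity check for the thickest string (E2) using round, illustrative numbers.
g = 9.81        # gravitational acceleration, m/s^2
L = 0.640       # vibrating length, metres
f = 82.4        # frequency of E2, hertz
mu = 10.6e-3    # mass per unit length, kg/m (roughly 10.6 g per metre of wound string - assumed)

T = 4.0 * L**2 * f**2 * mu    # tension from T = 4 L^2 f^2 (m/l), in newtons
print(f"tension ≈ {T:.0f} N, equivalent to a hanging mass of ≈ {T / g:.1f} kg")
```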

Note that the tension is reasonably uniform across the six strings. This is important: if it were not so, the uneven tension would tend to bend the neck of the guitar.

Results#2: Core Diameter

The graph below shows the measured diameters of each of the strings.

Click image for a larger version. The measured diameter (mm) of the stainless steel core of the 6 strings of a steel-strung guitar and the diameter of the string including the winding. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

First the obvious. The diameter of the second string is 33% larger than that of the first string, which increases its mass per unit length and causes the second string to vibrate at a frequency lower by the same factor of 1.33 – about 25% lower – than the first string. This is just the basic physics.

Now we get into the subtlety.

The core of the third string is smaller than that of the second string. And the core diameter of each of the heavier strings is just a little larger than that of the preceding string.

But these changes in core diameter are small compared to changes in the diameter of the wound string.

The density of the phosphor bronze winding (~8,800 kg/m^3) is similar to, but actually around 10% higher than, the density of the stainless steel core (~8,000 kg/m^3). This is not a big difference.

If we simply take the ratio of the outer diameters of the thickest and thinnest strings (1.34/0.31 ≈ 4.3), this is roughly sufficient to explain the required two-octave (factor 4) change in frequency.
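
A quick check using the measured diameters and the standard tuning frequencies shows how close the two ratios are:

```python
# Measured outer diameters and standard tuning frequencies.
d_thin, d_thick = 0.31, 1.34    # millimetres
f_E4, f_E2 = 329.6, 82.4        # hertz

print(f"diameter ratio       : {d_thick / d_thin:.2f}")
print(f"frequency ratio E4/E2: {f_E4 / f_E2:.2f}  (two octaves = 4)")
```

The small excess is plausibly absorbed by the winding not being solid metal all the way through, and by small differences in tension between the strings.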

Results#3: Why are some strings wire-wound?

The reason that the thicker strings on a guitar are wire-wound can be appreciated if one imagines the alternative.

A piece of stainless steel 1.34 mm in diameter is not ‘a string’, it’s a rod. Think about the properties of the wire used to make a paperclip.

So although one could attach such a solid rod to a guitar, and although it would vibrate at the correct frequency, it would not move very much, and so could not be pressed against the frets, and would not give rise to a loud sound.

The purpose of using wire-wound strings is to increase their flexibility while maintaining a high mass per unit length.

Results#4: Stress?

The first thing I calculated was the tension in each string. By dividing that result by the cross-sectional area of each string I can calculate the stress in the wire.

But it’s important to realise that the tension is carried only within the steel core of each string. The windings only provide mass-per-unit-length and add nothing to the resistance to stretching.

The stress has units of newtons per square metre (N/m^2) which in the SI has a special name: the pascal (Pa). The stresses in the strings are very high so the values are typically in the range of gigapascals (GPa).
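
To give a feel for the size of these numbers, here is the estimate for the thinnest string, again with round illustrative values rather than my exact measurements:

```python
import math

# Illustrative stress estimate for the thinnest (plain steel) string.
T = 110.0          # tension in newtons - a round number of the order calculated earlier
d_core = 0.31e-3   # core diameter in metres

area = math.pi * d_core**2 / 4.0    # cross-sectional area, m^2
print(f"stress ≈ {T / area / 1e9:.1f} GPa")
```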

Click image for a larger version. The estimated stress within the stainless-steel cores of the 6 strings of a steel-strung guitar. Notice that the first and third strings have considerably higher stress than the other strings. In fact the stress in these cores just exceeds the nominal yield stress of stainless steel. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

This graph contains the seeds of an explanation for some of the tuning behaviour I have observed – that the first and third strings are tricky to tune.

With new strings one finds that – most particularly with the 1st string (the thinnest string) – one can tune the string precisely, but within seconds, the frequency of the string falls, and the string goes out of tune.

What is happening is a phenomenon called creep. The physics is immensely complex, but briefly, when a high stress is applied rapidly, the stress is not uniformly distributed within the microscopic grains that make up the metal.

To distribute the stress uniformly requires the motion of faults within the metal called dislocations. And these dislocations can only move at a finite rate. As the dislocations move they relieve the stress.

After many minutes and eventually hours, the dislocations are optimally distributed and the string becomes stable.

Results#5: Yield Stress

The yield stress of a metal is the stress beyond which the metal is no longer elastic i.e. after being exposed to stresses beyond the yield stress, the metal no longer returns to its prior shape when the stress is removed.

For strong steels, stretching beyond their yield stress will cause them to ‘neck’ and thin and rapidly fail. But stainless steel is not designed for strength – it is designed not to rust! And typically its yield curve is different.

Typically stainless steels have a smooth stress-strain curve, so exceeding the nominal yield stress does not imply imminent failure: the ultimate tensile strength of stainless steel is much higher. It is because of this characteristic that the creep described above is not a sign that the string is about to break.

Results#6: Strain

Knowing the stress to which the core of the wire is subjected, one can calculate the expected strain – i.e. the fractional extension of the wire – by dividing the stress by the Young’s modulus of the steel (roughly 200 GPa).

Click image for a larger version. The estimated strain of the 6 strings of a steel-strung guitar. Also shown on the right-hand axis is the actual extension in millimetres. Notice that the first and third strings have considerably higher strain than the other strings. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The calculated fractional string extension (strain) ranges from about 0.4% to 0.8% and the actual string extension from 2.5 mm to 5 mm.
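
Here is the same estimate made with round numbers, assuming a Young’s modulus of roughly 200 GPa for the steel core:

```python
# Illustrative strain and extension estimate for a heavily-stressed string.
E = 200e9        # Young's modulus of steel, pascals (assumed round value)
stress = 1.5e9   # stress in the core, pascals - of the order estimated above
L = 0.640        # vibrating length, metres

strain = stress / E
print(f"strain ≈ {100 * strain:.2f} %, extension ≈ {1000 * strain * L:.1f} mm")
```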

This is difficult to measure accurately, but I did make an attempt by attaching a small piece of tape to the end of the old string as I removed it, and to the end of the new string as I tightened it.

Click image for a larger version. Method for estimating the strain in a guitar string. A piece of tape is tied to the top of the old string while it is still tight. On loosening, the tape moves with the part of the string to which it is attached.

For the first string my estimate was between 6 mm and 7 mm of extension, so it seems that the calculations are a bit low, but in the right ball-park.

Summary

Please forgive me: I have rambled. But I think I have eventually got to a destination of sorts.

In summary, the design of guitar strings is clever. It balances:

  • the tension in each string.
  • the stress in the steel core of each string.
  • the mass-per-unit length of each string.
  • the flexibility of each string.

The thinnest string is typically available in diameters ranging from 0.23 mm to 0.4 mm. Thinner strings are easier to bend, but reducing the diameter increases the stress in the wire and makes the string more likely to break. Thinner strings also tend to be less loud.

The second string is usually unwound like the first string but the stresses in the string are lower.

The thicker strings are usually wire-wound to increase their flexibility for a given mass-per-unit-length. If solid strings of the same mass were used instead, they would be extremely inflexible, impossible to push against the frets, and would vibrate only with a very low amplitude.

How does the flexibility arise? When these wound strings are stretched, small gaps open up between the windings and allow the windings to slide past each other when the string is bent.

Click image for a larger version. Illustration of how stretching a wound string slightly separates the windings. This allows the wound components to slide past each other when the string is bent.

Additionally modern strings are often coated with a very thin layer of polymer which prevents rusting and probably reduces friction between the sliding coils.

Remaining Questions 

I still have many questions about guitar strings.

The first set of questions concerns the behaviour of nylon strings used on classical guitars. Nylon has a very different stress-strain curve from stainless steel, and the contrast in density between the core materials and the windings is much larger.

The second set of questions concerns the ageing of guitar strings. Old strings sound dull, and changing strings makes a big difference to the sound: the guitar sounds brighter and louder. But why? I have a couple of ideas, but none of them feel convincing at the moment.

And now I must get back to playing the guitar, which is quite a different matter. And sadly understanding the physics does not help at all!

P.S.

The strings I used were Elixir 12-53 with a Phosphor Bronze winding, and the guitar is a Taylor model 114ce.

