Archive for the ‘Protons for Breakfast’ Category

Why global warming affects the poles more than the equator

May 8, 2022

Friends, welcome to Episode 137 in the occasional series of “Things I really should have known a long time ago, but have somehow only realised just now”.

In this case, the focus of my ignorance is the observation that the warming resulting from our emissions of carbon dioxide affects higher latitudes more than lower latitudes.

This is a feature of both our observations and our models. But what I learned this week from reading the 1965 White House Report on Environmental Pollution (link) was the simple reason why.

[Note added after feedback: In this article I am describing an effect that makes the direct effect of increased CO2 levels more important at high latitudes. There are also many feedback effects that amplify the direct effect, and some of these are also more important at high latitudes. Carbon Brief has an excellent article on these feedback effects here, but that is not what I am talking about in this article.]

Why?

The two gases responsible for the majority of greenhouse warming of the Earth’s surface are water vapour and carbon dioxide. But the distribution of these two gases around the planet differs significantly.

  • The concentration of water vapour in the atmosphere depends on the temperature of the liquid surfaces from which the water evaporates. Because the Equator is much hotter than the poles, there is much more water vapour in the atmosphere at the Equator compared with the poles.
  • The concentration of carbon dioxide in the atmosphere is pretty uniform around the globe.

This is illustrated schematically in the figure below.

Click on the image for a larger version. There is more water vapour in the atmosphere at lower latitudes (near the Equator) than at higher latitudes (near the poles). In contrast, carbon dioxide is rather uniformly distributed around the globe.

Of course the truth is a bit more complex than the simplistic figure above might imply.

  • Water vapour from the Equator is transported throughout the atmosphere. But nonetheless, the generality is correct. And the effect is large: the atmosphere above water at 15 °C contains roughly twice as much moisture as the atmosphere above water at 5 °C.
  • Carbon dioxide is mainly emitted in the northern hemisphere, and is then uniformly mixed in the northern hemisphere within a year or so. The mixing with the Southern Hemisphere usually takes two or three years. The variability around the globe is usually within ±2%.
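The factor of two quoted above follows from the strong (roughly exponential) dependence of saturation vapour pressure on temperature. A minimal sketch, using the Magnus approximation — the coefficients are a standard parameterisation, not necessarily the exact formula behind the figures in this article:

```python
from math import exp

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * exp(17.62 * t_celsius / (243.12 + t_celsius))

# Compare the moisture-holding capacity of air above water at 15 °C and 5 °C.
ratio = saturation_vapour_pressure(15) / saturation_vapour_pressure(5)
print(f"15 °C vs 5 °C moisture ratio: {ratio:.2f}")  # roughly 2
```

The same steepness explains why the equatorial atmosphere is so much moister than the polar atmosphere.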

The uniformity of the carbon dioxide distribution can be seen in the figure below from the Scripps Institution of Oceanography showing the carbon dioxide concentrations measured at (a) Mauna Loa in the Northern Hemisphere, and (b) the South Pole.

Click on the image for a larger version. The carbon dioxide concentrations measured at (a) Mauna Loa in the Northern Hemisphere, and (b) the South Pole. Notice that the data from the South Pole shows only small seasonal variations and lags behind the Northern Hemisphere data by a couple of years.

Because of this difference in geographical distribution, the greenhouse effect due to carbon dioxide is relatively more important at higher latitudes where the water vapour concentration is low.

And that is why the observed warming at these latitudes is inevitably higher.

Click on the image for a larger version. The observed temperature anomalies shown as a function of location around the Earth for four recent years. Notice the extreme warming at the highest latitudes in the Northern Hemisphere (Source: UEA CRU).

Once I had read this explanation it seemed completely obvious, and yet somehow I had neither figured it out myself nor knowingly read it in almost 20 years of study!

2022 to 1978: Looking Back and Looking Forwards

May 3, 2022

Friends, it’s been two years since I retired, and since leaving the chaos and bullying at NPL, retirement has felt like the gift of a new life.

I now devote myself to pastimes befitting a man of my advanced years:

  • Drinking coffee and eating Lebanese pastries for breakfast.
  • Frequenting Folk Clubs
  • Proselytising about the need for action on Climate Change
  • Properly disposing of the 99% of my possessions that will have no meaning to my children or my wife after I die.

It was while engaged in the last of these activities that I came across some old copies of Scientific American magazine.

Last year I abandoned my 40-year subscription to the magazine because it had become almost content-free. But in its day, Scientific American occupied a unique niche that allowed enthusiasts in science and engineering to read detailed articles by authors at the forefront of their fields.

In the January 1978 edition there were a number of fascinating articles:

  • The Surgical Replacement of the Human Knee Joint
  • How Bacteria Stick
  • The Efficiency of Algorithms
  • Roman Carthage
  • The Visual Characteristics of Words

and…

  • The Carbon Dioxide Question

You can read a scanned pdf copy of the article here.

This article was written by George M. Woodwell, a pioneering ecologist. The particular carbon dioxide question he asked was this:

Will enough carbon be stored in forests and the ocean to avert a major change in climate?

The article did not definitively answer this question. Instead it highlighted the uncertainties in our understanding of some of the key processes required to answer the question.

In 1978 the use of satellite analysis to assess the rate of loss of forests was in its infancy. And there were large uncertainties in estimates of the difference in storage capacity between native forests, and managed forests and croplands.

The article drew up a global ‘balance sheet’ for carbon, and concluded that there were major uncertainties in our understanding of many of the physical processes by which carbon and carbon dioxide were captured by (or cycled through) Earth’s systems.

Some uncertainty still remains in these areas, but the basic picture has become clearer in the subsequent 44 years of intense study.

So what can we learn from this ‘out of date’ paper?

Three things struck me.

Thing#1

Firstly, from a 2022 perspective, I noticed that there are important things missing from the article!

In considering likely future carbon dioxide emissions, the author viewed the choices as being simply between coal and nuclear power.

Elsewhere in the magazine, the Science and the Citizen column discusses electricity generation by coal with no mention of CO2 emissions. Instead the article simply laments that coal will be in short supply and concludes that:

“There is no question… coal will supply a large part of the nation’s energy future. The required trade-offs will be costly however, particularly in terms of human life and disease.”

Neither article mentions generation of electricity by gas turbines. And neither makes any mention of either wind or solar power generation – now the cheapest and fastest growing sources of electricity generation.

From this I note that in its specific details, the future is very hard to see.

Thing#2

Despite the difficulties, the author did make predictions and it is fascinating from the perspective of 2022 to look back and see how those predictions from 1978 have worked out!

The article included predictions for 

  • The atmospheric concentration of CO2
  • CO2 emissions from Fossil Fuels

Click on image for a larger version. Figure from the 1978 article by George Woodwell. The curves in green (to be read against the right-hand axis) show two predictions for the atmospheric concentration of CO2. The curves in black (to be read against the left-hand axis) show two predictions for fossil fuel emissions of CO2. In each case, the difference between the two curves represents the uncertainty caused by changes in the way CO2 would be cycled through (or captured by) the oceans and forests. See the article for a detailed rubric.

The current atmospheric concentration of carbon dioxide is roughly 420 ppm and the lowest projection from 1978 is very close.

The fossil fuel emissions estimates are given in terms of the equivalent change in atmospheric CO2, and I am not exactly sure how to interpret this correctly.

Atmospheric concentration of CO2 is currently rising at approximately 2.5 ppm per year, and roughly 56% of fossil fuel emissions end up in the atmosphere. So actual annual emissions in 2022 are equivalent to around 2.5/0.56 ≈ 4.5 ppm/year, which is rather lower than the lowest 1978 prediction of around 6 ppm/year.
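The arithmetic above can be checked in two lines; the 2.5 ppm/year growth rate and 56% ‘airborne fraction’ are the figures quoted in the text:

```python
# Infer annual fossil-fuel emissions (expressed as ppm of atmospheric CO2)
# from the observed growth rate and the airborne fraction.
growth_rate_ppm_per_year = 2.5   # observed rise in atmospheric CO2
airborne_fraction = 0.56         # share of emissions staying in the atmosphere

implied_emissions = growth_rate_ppm_per_year / airborne_fraction
print(f"Implied emissions: {implied_emissions:.1f} ppm-equivalent per year")
```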

The article also predicts that this will be the peak in annual emissions, but that has yet to be seen.

The predictions did not cover the warming effect of carbon dioxide emissions, the science of which was in the process of being formulated. ‘Modern’ predictions can be dated to 1981, when James Hansen and colleagues published a landmark paper in Science (Climate Impact of Increasing Atmospheric Carbon Dioxide) which predicted:

A 2 °C global warming is exceeded in the 21st century in all the CO2 scenarios we considered, except no growth and coal phaseout.

This is the path we are still on.

From this I note that the worst predictions don’t always happen, but sometimes they do.

Thing#3

The final observation concerns the prescience of the author’s conclusion in spite of his ignorance of the details.

Click on the image for a larger version. This is the author’s final conclusion in 1978

His last two sentences could not be truer:

There is almost no aspect of national or international policy that can remain unaffected by the prospect of global climatic change.

Carbon dioxide, until now an apparently innocuous trace gas in the atmosphere may be moving rapidly toward a central role as a major threat to the present world order.

 

Will aviation eventually become electrified?

March 2, 2022

Friends, I ‘have a feeling’ that aviation will eventually become electrified. At first sight this seems extraordinarily unlikely, but I just have this feeling…

Obviously, I could just be wrong, but let me explain my thinking.

Basics 

The current technology for aviation – jet engines on aluminium/composite frames with wings – relies on the properties of jet fuel – kerosene.

There are two basic parameters for aviation ‘fuel’.

  • Energy density – which characterises the volume required to carry fuel with a certain energy content. It can be expressed in units of megajoules per litre (MJ/l).
  • Specific energy – which characterises the mass required to carry fuel with a certain energy content. It can be expressed in units of megajoules per kilogram (MJ/kg).
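The two parameters are linked by the fuel’s density. As a sketch, using rough textbook figures for kerosene (assumed values, chosen only to illustrate the relationship):

```python
# Specific energy (MJ/kg) x density (kg/l) gives energy density (MJ/l).
specific_energy_mj_per_kg = 43.0   # kerosene, approximate
density_kg_per_l = 0.81            # kerosene, approximate

energy_density_mj_per_l = specific_energy_mj_per_kg * density_kg_per_l
print(f"Kerosene energy density: ~{energy_density_mj_per_l:.0f} MJ/l")
```

A dense fuel can have a high energy density even with a modest specific energy, and vice versa — which is exactly the trade-off the chart below illustrates.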

Wikipedia have helpfully charted these quantities for a wide range of ‘fuels’ and this figure is shown below with five technologies highlighted:

  • Lithium batteries,
  • Liquid and Gaseous Hydrogen,
  • Kerosene and diesel.

Click on image for a larger version. Chart from Wikipedia showing the specific energy and energy density of various fuels and energy-storage technologies.

A general observation is that hydrocarbon fuels have a much higher energy density and specific energy than any current battery technology. Liquid hydrogen, on the other hand, has an exceptionally high specific energy but poor energy density: better than batteries but much worse than hydrocarbon fuels.

Lessons from the EV transition: #1

I think the origin of my feeling about the aviation transition stems from the last 20 years of watching the development of battery electric vehicles (BEVs). What is notable is that the pioneers of BEVs – Tesla and Chinese companies such as Xpeng or BYD – are “all in” on BEVs: they have no interest in Internal Combustion Engine (ICE) vehicles or hybrids, and no legacy market in ICE vehicles to protect.

‘Legacy Auto’ (shorthand for VW, GM, Ford, Toyota etc.) poked their toes into the waters of alternative drive-trains quite a few years ago. GM’s Volt and Bolt were notable and Toyota’s Mirai hydrogen fuel cell car was a wonder. But Legacy Auto were comfortable manufacturing ICE vehicles and making profits from them, and saw these alternative energy projects as ‘insurance’ in case things eventually changed.

As I write in early 2022, all the legacy auto makers are in serious trouble. They can generally manufacture BEVs, but not very well – and none of them are making money from BEVs. They also have very poor market penetration in China, the world’s largest EV market. In contrast, Tesla are popular in China, America and Europe, and make roughly 20% profit on every car they sell.

So one lesson from the BEV transition is that a legacy industry that has invested billions in an old technology may not pioneer a new way of doing things.

Lessons from the EV transition: #2

How did BEVs overcome the awesome advantages of hydrocarbon fuels over lithium batteries in terms of energy density and specific energy?

First of all, ICEs throw away about 75% of their advantage because of the way they operate as heat engines. This reduces their energy density advantage over batteries to just a factor 10 or so.

Secondly, there is the fact that ICE cars contain many heavy components – such as engines, gearboxes, and transmissions that aren’t needed in a BEV.

Despite this, BEV cars still generally have a weight and volume disadvantage compared with ICE cars. But this disadvantage has been overcome by careful BEV-specific design.

By placing the large, heavy, battery pack low down, designers can create pleasant vehicle interiors with good handling characteristics. And because the ability to draw power rapidly from batteries is so impressive, the extra mass doesn’t materially affect the acceleration of the vehicle.

EV range is still not as good as a diesel car with a full tank. But it is now generally ‘good enough’.

And once EVs became good enough to compete with ICE vehicles, the advantages of EVs could come to the fore – their ability to charge at home, low-running costs, quietness, potential low carbon emissions and of course, zero in situ emissions.

And significantly, BEVs are now software platforms, and full electronic control of the car allows for some capabilities that ICE vehicles will likely never have.

Lessons from the EV transition: #3

Despite Toyota’s massive and long-term investment in Hydrogen Fuel Cell (HFC) cars, it is now clear that hydrogen will be irrelevant in the transition away from ICE vehicles. Before moving on to look at aviation, it is interesting to look at why this is so.

The reason was not technological. HFC cars using compressed hydrogen fuel were excellent – I have driven one – with ranges in excess of 320 km (200 miles). And they were excellent long before BEVs were excellent. But the very concept of re-fuelling with hydrogen was the problem. Hydrogen is difficult to deal with, and fundamentally, if one starts with a certain amount of electrical power, much less of it gets to the wheels with an HFC-EV than with a BEV.

The very idea of an HFC car is – I think – a product of imagining that there would be companies akin to petrochemical companies who could sell ‘a commodity’ in something like the way oil companies sold petrol in the 20th Century. BEVs just don’t work that way.

Interestingly, the engineering problems of handling high-pressure hydrogen were all solved in principle. But this just became irrelevant.

Cars versus Aeroplanes

So let’s look at how energy density and specific energy affect the basic constraints on designs of cars and aeroplanes.

50 litres of diesel contains roughly 1,930 MJ of energy. The table below shows the mass and volume of other fuels or batteries which contain this same energy.

Fuel              Mass (kg)  Volume (l)
Kerosene                 45          55
Diesel                   42          50
Hydrogen HP              14         364
Hydrogen Liquid          14         193
Lithium Battery       4,825       1,930
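These equivalents follow directly from each fuel’s specific energy and energy density. A sketch — the figures below are rough values chosen to reproduce the table, not authoritative data:

```python
# Mass and volume needed to store the energy in 50 l of diesel (~1,930 MJ).
target_mj = 1930

fuels = {
    #                    (MJ/kg, MJ/l) -- approximate assumed values
    "Kerosene":          (43.0, 35.0),
    "Diesel":            (45.8, 38.6),
    "Hydrogen (700 bar)": (142.0, 5.3),
    "Hydrogen (liquid)": (142.0, 10.0),
    "Li-ion battery":    (0.4, 1.0),
}

for name, (mj_per_kg, mj_per_l) in fuels.items():
    mass_kg = target_mj / mj_per_kg
    volume_l = target_mj / mj_per_l
    print(f"{name:20s} {mass_kg:7.0f} kg {volume_l:7.0f} l")
```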

We see that batteries look terrible – the equivalent energy storage would require 4.8 tonnes of batteries occupying almost 2 cubic metres! Surely BEVs are impossible?!

But as I mentioned earlier, internal combustion engines waste around 75% of their fuel’s embodied energy in the form of heat. So a battery with the required stored energy would only need 25% of the mass and volume in the table above.

Fuel              Mass (kg)  Volume (l)
Lithium Battery       1,206         483

So, we see that the equivalent battery pack is about a tonne heavier than the fuel for a diesel car.

But this doesn’t include the engine required to make the diesel fuel work. So one can see how by clever design and exploiting the fact that electric motors are lighter than engines, one can create a BEV that, while heavier than an ICE car, is still competitive.

Indeed, BEVs now outperform ICE cars on almost every metric that anyone cares about, and will continue to get better for many years yet.

Let’s do the same analysis for aeroplanes. A modern jet aeroplane typically carries 100 tonnes of kerosene with an energy content of around 43 × 10^5 MJ. This is sufficient to fly a modern jet (200 tonnes plus 30 tonnes of passengers) around 5,000 miles or so.

The table below shows the mass and volume of other fuels or batteries which contain this same energy. Notice that the units are no longer kilograms and litres but tonnes and cubic metres.

Fuel            Mass (tonnes)  Volume (m^3)
Kerosene                  100           123
Diesel                     94           111
Hydrogen HP                31           811
Hydrogen Liquid            31           430
Lithium Battery        10,750         4,300

Now things look irrecoverably impossible for batteries! The batteries would weigh over 10,000 tonnes! And occupy a ridiculous volume. Also, turbines are more thermodynamically efficient than ICEs, so assuming say 50% efficiency, batteries would still weigh ~5,000 tonnes and occupy over 2,000 m^3.

Even with a factor 10 increase in battery energy density – which is just about conceivable but not happening any time soon – the battery would still weigh over 1,000 tonnes!
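The long-haul numbers can be reproduced in a few lines. The 43 MJ/kg and 0.4 MJ/kg figures are the rough values used throughout this article, and the 50% turbine efficiency is the assumption made above:

```python
# Battery-mass equivalents for a long-haul jet carrying 100 t of kerosene.
jet_fuel_energy_mj = 100_000 * 43       # 100 tonnes at ~43 MJ/kg -> 4.3e6 MJ
battery_mj_per_kg = 0.4                 # current Li-ion, approximate

raw_mass_t = jet_fuel_energy_mj / battery_mj_per_kg / 1000
efficiency_adjusted_t = raw_mass_t * 0.5   # credit for ~50% turbine losses
ten_x_battery_t = raw_mass_t / 10          # hypothetical 10x better cells

print(f"Raw equivalent:          {raw_mass_t:,.0f} t")   # ~10,750 t
print(f"With 50% turbine credit: {efficiency_adjusted_t:,.0f} t")
print(f"With 10x better cells:   {ten_x_battery_t:,.0f} t")
```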

Does it get any better for shorter ranges? Not much. Consider how much energy is stored in 10 tonnes of kerosene (~43 × 10^4 MJ). This is sufficient to fly a modern jet – weighing around 50 tonnes unladen and carrying 20 tonnes of passengers – around 500 miles or so.

Fuel            Mass (tonnes)  Volume (m^3)
Kerosene                   10            12
Diesel                      9            11
Hydrogen HP                 3            81
Hydrogen Liquid             3            43
Lithium Battery         1,075           430

Even assuming 50% jet efficiency, batteries with equivalent energy would still weigh ~500 tonnes and occupy over 200 m^3. Even after a factor 10 increase in battery energy density, things still look pretty hopeless.

So can we conclude that battery electric aviation is impossible? Surprisingly, No.

And yet, it flies.

Jet engines burning kerosene have now reached an astonishing state of technological refinement.

But jet engines are also very expensive, which makes the economics of airlines challenging. And despite improvements, jets are also noisy. And of course, they emit CO2 and create condensation trails that affect the climate.

In contrast, electric motors are relatively cheap, which means that electric aeroplanes (if they are possible) would be much cheaper, and require dramatically less engine maintenance. These features are very attractive for airlines – the people who buy planes. And the planes would be quiet and have zero emissions – attractive for people who fly or live near airports.

And several companies are seeking to exploit these potential advantages. Obviously, given the fundamental problem of energy density I outlined above, all the projects have limitations. Mostly the aeroplanes proposed have limited numbers of passengers and limited range. But the companies all impress me as being serious about the engineering and commercial realities that they face. And I have been surprised by what appears to be possible.

Here are a few of the companies that have caught my attention.

Contenders

In the UK, Rolls Royce and partners have built an impressive single-engined aircraft which flies much faster than equivalent ICE powered aircraft.

Their Wikipedia page states the batteries have a specific energy of 0.58 MJ/kg, about 50% higher than I had assumed earlier in the article. The range of this plane is probably tiny – a few tens of kilometres – but this number will only increase in the coming years.

This aeroplane is really a technology demonstrator rather than a seedling commercial project. But I found it striking to see the plane actually flying.

In Sweden, Heart Aerospace have plans for a 19-seater short-hop passenger craft with 400 km of range. Funded by Bill Gates amongst others, they have a clear and realistic engineering target.

In an interview, the founder explained that he was focussing on the profitability of the plane. In this sense the enterprise differs from the Rolls Royce project. He stated that, as planned, 2 minutes in the air will require 1 minute of re-charging. He had clear markets in mind (Sweden, Norway, and New Zealand) where air travel involves many ‘short hops’ via transport hubs. And the expected first flights will be soon – 2025 if I have it correct.

In Germany, Lilium are building innovative ducted-fan planes. Whereas Heart’s planes and Rolls Royce’s demonstrator are conventional air-frames powered by electric motors, Lilium have settled on completely novel engineering possibilities enabled by electrical propulsion technology. Seeing their ‘Star Wars’ style aircraft take off and land is breathtaking.

Back in the UK, the Electric Aviation Group are advertising HERA as a 90-seater short route airliner with battery and hydrogen fuel-cell technology (not a turbine). This doesn’t seem to be as advanced as the other projects I have mentioned but illustrates the way that different technologies may be incorporated into electric aviation.

What about Hydrogen Turbines?

Legacy Aeromaker Airbus are advertising development of a hydrogen turbine demonstrator. It’s a gigantic A380 conventional jet airliner with a single hydrogen turbine attached. (Twitter video)

Stills from a video showing how the hydrogen turbine demonstrator will work. A single hydrogen turbine will be attached to a kerosene-powered aeroplane by 2035.

The demonstrator looks very clever, but I feel deeply suspicious of this for two sets of reasons: Technical reasons and ‘Feelings’.

Technical.

  • Fuel Volume: To have the same range and capabilities as an existing jet – the promise that seems to be being advertised – the cryogenic (roughly -250 °C) liquid hydrogen would occupy 4 times the volume of the equivalent kerosene. It likely could not be stored in the wings because of heat leakage, and so a big fraction of the useful volume within an aeroplane would be sacrificed for fuel.
  • Fuel Mass: Although the liquid hydrogen fuel itself would not weigh much, the tanks to hold it would likely be much heavier than their kerosene equivalents. My guess is that that there would not be much net benefit in terms of mass.
  • Turbine#1: Once the stored liquid hydrogen is evaporated to create gas, its energy density plummets. To operate at a similar power level to a conventional turbine, the volume of hydrogen entering the combustion chamber per second will have to be (very roughly) 40 times greater.
  • Turbine#2: Hydrogen burns in a different way from kerosene. For example, embrittlement issues around the high-pressure, high-temperature hydrogen present at the inlets to the combustion chamber are likely to be very serious.
  • I don’t doubt that a hydrogen turbine is possible, but the 2035 target advertised seems about right given the difficulties.
  • Performance: And finally, assuming it all works as planned, the aircraft will still emit NOx, and will still be noisy.

Feelings.

  • I feel this is a legacy aero-maker trying to create a future world in which they are still relevant despite the changed reality of climate crisis.
  • I suspect these engines – assuming the technical problems are all solved – will be even more expensive than current engines.
  • I feel the timeline of 2035 might as well be ‘never’. It allows Airbus to operate as normal for the next 13 years – years in which it is critical that we cut CO2 emissions.
  • I suspect that in 13 years – if electric aviation ‘gets off the ground’ (sorry!) – then it will have developed to the point where the short-haul end of the aviation business will be in their grasp. And once people have flown on smaller quieter aircraft, they will not want to go back.
  • Here is Rolls Royce’s short ‘Vision’ video.

And so…

I wrote this article in response to a Twitter discussion with someone who was suggesting that cryogenic liquid-hydrogen-fuelled jets would be the zero-emission future of aviation.

I feel that the idea of a cryogenic hydrogen aircraft is the last gasp of a legacy engine industry that is trying to stay relevant in the face of a fundamentally changed reality.

In contrast, electrical aviation is on the verge of becoming a reality: planes are already flying. And motors and batteries will only get better over coming decades.

At some point, I expect that electrical aviation will reach the point where its capabilities will make conventional kerosene-fuelled aeroplanes uneconomic, first on short-haul routes, and then eventually – though I have no idea how! – on longer routes.

But…

…I could be completely wrong.

Heat Pump Explainer

February 24, 2022

Friends, everyone is talking about heat pumps!

But many people are still unfamiliar with the principles behind their ‘engineering magic’.

This ‘explainer’ video was shot on location in my kitchen and back garden, and uses actual experiments together with state-of-the-art Powerpoint animations (available here) to sort-of explain how they work.

I hope it helps!

 

Nuclear Fusion is Irrelevant

February 14, 2022

Click for a larger image. News stories last week heralded a major breakthrough in fusion research. Links to the stories can be found below.

Friends, last week we were subjected to a press campaign on behalf of the teams of scientists and engineers who are carrying out nuclear fusion research.

Here are links to some of the stories that reached the press.

  • BBC Story
    • “If nuclear fusion can be successfully recreated on Earth it holds out the potential of virtually unlimited supplies of low-carbon, low-radiation energy.”
  • Guardian Story #1
    • Prof Ian Chapman, the chief executive of the UK Atomic Energy Authority, said: “It’s clear we must make significant changes to address the effects of climate change, and fusion offers so much potential.”
  • Guardian Story#2 (from last year)
    • The race to give nuclear fusion a role in the climate emergency

The journalists add little to these stories – they mainly consist of snippets from press releases spun together to seem ‘newsy’. All these stories are colossally misleading.

Floating in the background of these stories is the idea that this research is somehow relevant to our climate crisis. The aim of this article is to explain to you that this is absolutely not true.

Even with the most optimistic assumptions conceivable, research into nuclear fusion is irrelevant to the climate crisis.

Allow me to explain how I have come to this conclusion.

Research into a practical fusion reactor for electricity generation can be split into two strands: an ‘old’ government-backed one and a ‘new’ privately-financed one.

The state of fusion research: #1 government-backed projects

The ‘old’ strand consists of research funded by many governments at JET in the UK and the colossal new ITER facility being constructed in France.

In this strand, ITER will begin operation in 2025, and after 10 years of ‘background’ experiments they will begin energy-generating experiments with tritium in 2035, experiments which are limited by design to just 4,000 hours. If I understand correctly, operating beyond this limit would make ITER too radioactive to dismantle.

The key operating parameter for fusion reactors is called Q: the ratio of the heat power produced to the heating power input. And the aim is that by 2045 ITER will have achieved a Q of 10 – producing 500 MW of power for 400 seconds with only 50 MW of input power to the plasma.

However ITER will only generate heat, not electricity. Also, it will not create any tritium but will instead only consume it. Following on from ITER, a DEMO reactor is planned which will have a Q value in the range 30-50, and which will generate electrical power, and be able to breed tritium in the reactor.

So on this ITER-proposed time-line we might expect the first actual electricity generation – maybe 100 MW of electrical power – in around 2050.

And then assuming that these reactors take 10 years to build and that the design will evolve a little, it will be perhaps 2070 before there are ten or so operating around the world.

You may consider that research into a technology which will not yield results for 50 years may or may not be a good idea. I am neutral.

But it is definitely irrelevant to our climate crisis: we simply do not have 50 years in which to eliminate carbon dioxide emissions from electricity generation in the UK.

And this is on the ITER-proposed timeline which I consider frankly optimistic. If one considers some of the technical problems, this optimism seems – to put it politely – unjustified.

Here are three of the issues I keep at the top of my file in case I meet some fusion scientists at the folk club.

  • Q is the ratio of the heat energy released by the plasma to the heat energy injected into it. But in order to be effective we have to generate net ELECTRICAL energy. So we really need to take account of the fact that thermodynamics limits the electrical generation to ~30% of the thermal energy produced. Additionally we need to include the considerable amounts of energy used to operate the complex machinery of a reactor. So we really need to consider a wider definition of Q: the ratio of output to input energies for the entire reactor. Sabine Hossenfelder has commented on this issue. But basically, Q needs to be a lot bigger than 10.
  • Materials. The inside of the reactor is an extraordinarily hostile place with colossal fluxes of neutrons passing through every part of the structure. After operation has begun, no human can ever enter the environment again – and it is not clear to me that a working lifetime of say 35 years at 90% availability is realistic. Time will tell.
  • Tritium. The reactor consumes tritium – possibly the most expensive substance on Earth – and for each nucleus of tritium destroyed a single neutron is produced. The neutrons so produced must be captured to produce the heat for electrical generation. But the neutrons are also needed to react with lithium to produce more tritium. Since some neutrons are inevitably lost, the plan is for extra neutrons to be ‘bred’ by bombarding suitable materials with neutrons, which then produce a shower of further neutrons – 2 or 3 for every incident neutron. These neutrons can then in principle be used to produce tritium. But aside from being technically difficult, this breeding process also produces long-lived radioactive waste – something fusion reactors claim not to do.
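The first bullet point can be made concrete with a rough power-balance sketch. The 500 MW / 50 MW plasma figures are from the ITER plan quoted above; everything else (conversion efficiency, auxiliary power, heating-system efficiency) is my own illustrative assumption, not a published reactor parameter:

```python
# Why a plasma Q of 10 does not mean a tenfold energy gain at the plug.
plasma_q = 10.0
heating_power_mw = 50.0                         # power injected into the plasma
fusion_power_mw = plasma_q * heating_power_mw   # 500 MW of heat

conversion_efficiency = 0.33    # heat -> electricity (assumed)
auxiliary_power_mw = 100.0      # magnets, cryogenics, pumps (assumed)
heating_wall_plug_mw = heating_power_mw / 0.4   # heating systems ~40% efficient (assumed)

gross_electric_mw = fusion_power_mw * conversion_efficiency
net_electric_mw = gross_electric_mw - auxiliary_power_mw - heating_wall_plug_mw
print(f"Gross electric: {gross_electric_mw:.0f} MW, net: {net_electric_mw:.0f} MW")
```

With these assumptions the net electrical output is negative even at Q = 10 — which is the sense in which Q needs to be a lot bigger than 10.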

In short, when one considers some of these technical problems, optimism that this research path will produce significant power on the grid in 2070 seems to me to be unjustified.

But what about this new ‘breakthrough’?

The breakthrough was not a breakthrough. It was undertaken because the walls of the previous reactor were found to absorb some of the fuel! So this ‘breakthrough’ represented a repeat of a previous experiment, but with new materials in place.

You can relive the press conference here.

Starting with a much larger amount of energy, they managed to produce 59 megajoules (MJ) of energy from fusion in about 5 seconds.

59 MJ is about 16.4 kWh of energy, which is sufficient to heat water for around 500 cups of tea, more than a cup of tea each for all the scientists and engineers working on the project.

For comparison, the 12 solar panels on my house will produce this easily in a day during the summer. To generate the energy in 5 seconds rather than 12 hours would require more panels: a field of panels roughly 200 m × 250 m, which would cost a little under 1 million pounds.
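The comparison can be checked directly. The 200 W/m² panel output below is my own rough assumption; with it, the required field comes out at around 59,000 m², broadly consistent with the 200 m × 250 m quoted above:

```python
# Put JET's 59 MJ 'burn' in everyday units.
energy_mj = 59.0
burn_time_s = 5.0

kwh = energy_mj / 3.6                        # 1 kWh = 3.6 MJ
average_power_mw = energy_mj / burn_time_s   # MJ/s = MW

# Solar-panel area needed to match that power, assuming ~200 W/m^2 output.
panel_area_m2 = average_power_mw * 1e6 / 200

print(f"{kwh:.1f} kWh at {average_power_mw:.1f} MW average power")
print(f"~{panel_area_m2:,.0f} m^2 of panels to match it in real time")
```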

So the breakthrough is modest in absolute terms. But as I mentioned above, after billions more in funding, and another 20 years of research, the scientists expect to extend this generating ‘burn’ from 5 seconds to 400 seconds at a much higher power level.

In my opinion, JET and ITER are a complete waste of money and should be shut down immediately. The resources should be transferred to building out solar and wind energy projects alongside battery storage.

The state of fusion research: #2 privately-backed projects

The ‘new’ strand of fusion research consists of activities carried out primarily by privately-funded companies.

What? If the massive resources of governments can only promise fusion by 2070, how can private companies hope to make progress?

The answer is that JET and ITER were planned before a key technical advance was made, and they are committed to proceeding without incorporating that advance! It’s a multi-billion pound version of “I’ve started so I’ll finish“. It is utter madness, and doubly guarantees the irrelevance of ITER.

The technical advance is the development of superconducting wire which can create magnetic fields twice as large as was previously possible. It turns out that the volume of plasma required to achieve fusion scales inversely with the fourth power of the magnetic field. So doubling the magnetic field makes the final reactor potentially 16 times smaller!
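This scaling can be sketched in one line, assuming the required plasma volume varies as 1/B^4:

```python
# Sketch of the scaling: the plasma volume required for fusion is assumed
# to vary as 1/B^4, where B is the magnetic field strength.
def relative_volume(field_ratio):
    """Volume relative to the original design when B is scaled by field_ratio."""
    return 1.0 / field_ratio ** 4

print(relative_volume(2.0))  # doubling the field -> 0.0625, i.e. 1/16 of the volume
```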

This also makes it dramatically cheaper, requiring sums on the order of $100 million rather than billions of dollars. Critically, reactors can exploit the concept of Small Modular Reactors (SMRs) which can be mass-produced in a factory and shipped to a site. Potentially the first reactors could be built in years rather than decades, and the technology iterated to produce advances.

I have written about this previously. With some qualifications, I think this activity is generally not crazy (it is certainly much less crazy than JET and ITER), but success is far from guaranteed.

A key unresolved question with this technology concerns its potential timeline for delivery of a working power plant.

The reactors face essentially the same problems as those in the much larger ITER reactor, and these are not problems that can be solved in months. So let’s suppose that the first demonstration of Q>1 is achieved in just 5 years (2027), and that all the technical problems with respect to electricity generation require only a further 10 years (2037). Given the difficulties in planning, let’s optimistically assume that the first production plant could be built just 5 years after that, in 2042.

The ‘S’ in SMR means the reactors would be small, with a thermal output of perhaps 150 MW and an electrical output of perhaps 50 MW. This is small on the scale of a typical thermal generation plant. For example, Hinkley Point C is designed to output 3,200 MW of electrical power i.e. more than 60 times more than a hypothetical SMR fusion reactor.

So if we assume a rapid roll-out and no technical or societal problems, then these reactors might perhaps deliver significant power to the grid by 2050. Nominally this is 20 years ahead of ITER.

Relevance.

With optimistic assumptions concerning technical progress, we might hope for fusion reactors to begin to make a significant contribution to the grid somewhere between 2050 and 2070, depending on which route is taken.

That is already too late to make any contribution to our climate crisis.

We need to deploy low-carbon technologies now. And if we have a choice between reducing carbon dioxide emissions now, or in 30 – 50 years, there is no question about what we should do.

Cost.

We also need to consider the likely cost of the electricity produced by a fusion reactor.

Like conventional fission-based nuclear power, the running costs should be low. Deuterium is cheap and the reactor should generate a surplus of tritium.

The majority of the cost of conventional nuclear power is the cost of the capital used to construct the reactor. If I recall correctly, it amounts to around 95% of the cost of the electricity.

It is hard to imagine that a fusion reactor would be cheaper than a fission reactor – it would be at the limit of manageable engineering complexity. So we might imagine that the costs of fusion-generated electricity would be similar to the cost of nuclear power – which is already the most expensive power on the grid.

In contrast, the cost of renewable energy (solar and wind) has fallen dramatically in recent years. Solar and wind are now the cheapest ways of generating electricity ever devised. And their cost – along with the cost of battery storage – is still falling.

So it seems that, after waiting all these years, fusion-based electricity would in all likelihood be extraordinarily expensive.

Summary.

The idea of generating electricity from nuclear fusion has been seen as a technological fix for climate change. It is not.

Even the most optimistic assumptions possible indicate that fusion will not make any significant contribution to electricity supplies before 2050.

This is too late to help with our climate crisis, which is happening now.

Additionally the cost of the electricity might be expected to exceed the cost of conventional nuclear power stations – the most expensive electricity currently on the UK grid.

If as an alternative, we invested in renewable generation from wind, solar and tidal resources, together with ever cheaper storage, we could begin to address our climate crisis now in the knowledge that the technology we were deploying would likely only ever get better. And cheaper.


Reducing Carbon Dioxide Emissions from my home: Video and Slides

February 4, 2022

Friends, Good Evening.

As I mentioned in my previous post I gave a talk to Richmond & Twickenham Friends of the Earth on Wednesday, 2nd February 2022.

The video above is the dullest of the dull repetition of that presentation.

It lasts 45 minutes, so make yourself a cup of tea before you start!

You can also download the PowerPoint slides from this presentation using this link.

The Physics of Guitar Strings

January 24, 2022

Friends, regular readers may be aware that I play the guitar.

And sleuths amongst you may have deduced that if I play the guitar, then I must occasionally change guitar strings.

The physics of tuning a guitar – the extreme stretching of strings of different diameters – has fascinated me for years, but it is only now that my life has become pointless that I have managed to devote some time to investigating the phenomenon.

What I have found is that the design of guitar strings is extraordinarily clever, leaving me in awe of the companies which make them. They are everyday wonders!

Before I get going, I just want to mention that this article will be a little arcane and a bit physics-y. Apologies.

What’s there to investigate?

The first question is why are guitar strings made the way they are? Some are plain ‘rustless steel’, but others have a steel core around which are wound super-fine wires of another metal, commonly phosphor-bronze.

The second question concerns the behaviour of the thinnest string: when tuned initially, it spontaneously de-tunes, sometimes very significantly, and it does this for several hours after initially being stretched.

Click image for a larger version. The structure of a wire-wound guitar string. Usually the thickest four strings on a guitar are made this way.

Of course, I write these questions out now like they were in my mind when I started! But no, that’s just a narrative style.

When I started my investigation I just wanted to see if I understood what was happening. So I just started measuring things!

Remember, “two weeks in the laboratory can save a whole afternoon in the library“.

Basic Measurements

The frequency with which a guitar string vibrates is related to three things:

  • The length of the string: the longer the string, the lower the frequency of vibration.
  • The tension in the string: for a given length, the tighter the string, the higher the frequency of vibration.
  • The mass per unit length of the string: for a given length and tension, heavier strings lower the frequency of vibration.

In these experiments, I can’t measure the tension in the string directly. But I can measure all the other properties:

  • The length of string (~640 mm) can be measured with a tape measure with an uncertainty of 1 mm
  • The frequency of vibration (82 Hz to 330 Hz) can be measured by tuning a guitar against a tuner with an uncertainty of around 0.1 Hz
  • The mass per unit length can be determined by cutting a length of the string and measuring its length (with a tape measure) and its mass with a sensitive scale. So-called Jewellery scales can be bought for £30 which will weigh 50 g to the nearest milligram!

Click image for a larger version. Using a jewellery scale to weigh a pre-measured length of guitar string. Notice the extraordinary resolution of 1 mg. Typically repeatability was at the level of ± 1 mg.

These measurements are enough to calculate the tension in the string, but I am also interested in the stress to which the material of the string is subjected.

The stress in the string is defined as the tension in the string divided by the cross-sectional area of the string. I can work out the area by measuring the diameter of the string with digital callipers. These devices can be bought for under £20 and will measure string diameters with an uncertainty of 0.01 mm.

However there is a subtlety when it comes to working out the stress in the wire-wound strings. The stress is not carried across the whole cross-section of the wire, but only along its core – so one must measure the diameter of the core of these strings (which can be done at the ends) rather than the diameter of the playing length.

Click image for a larger version. Formulas relating the tension T in a string (measured in newtons); the frequency of vibration f (measured in hertz); the length of the string L (measured in metres) and the mass per unit length of the string (m/l) measured in kilograms per metre.
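The formula in the figure can be rearranged to give the tension directly: T = (m/l) × (2Lf)². A minimal sketch, using illustrative values for the thinnest (E4) string rather than my exact measurements:

```python
# Sketch of the tension calculation, rearranging f = (1/2L) * sqrt(T / (m/l))
# to give T = (m/l) * (2*L*f)^2. Illustrative values, not exact measurements.
def string_tension(frequency_hz, length_m, mass_per_length_kg_per_m):
    """Tension in newtons from frequency, length and mass per unit length."""
    return mass_per_length_kg_per_m * (2 * length_m * frequency_hz) ** 2

# 330 Hz, 0.64 m length, and ~0.55 g/m (roughly a 0.30 mm plain steel string)
T = string_tension(330, 0.64, 0.55e-3)
print(f"Tension ≈ {T:.0f} N, equivalent to ≈ {T / 9.81:.0f} kg")  # ≈ 98 N, ≈ 10 kg
```

The result is of the same order as the ~120 N calculated from my actual measurements.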

Results#1: Tension

The graph below shows the calculated tension in the strings in their standard tuning.

Click image for a larger version. The calculated tension in the 6 strings of a steel-strung guitar. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The tension is high – roughly 120 newtons per string. If the tension were maintained by a weight stretching the string over a wheel, there would be roughly 12 kg on each string!

Note that the tension is reasonably uniform across the neck of the guitar. This is important. If it were not so, the tension in the strings would tend to bend the neck of the guitar.

Results#2: Core Diameter

The graph below shows the measured diameters of each of the strings.

Click image for a larger version. The measured diameter (mm) of the stainless steel core of the 6 strings of a steel-strung guitar and the diameter of the string including the winding. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

First the obvious. The diameter of the second string is 33% larger than that of the first string, which increases its mass per unit length by ~77%. Since frequency varies inversely with diameter (at the same tension and length), the second string vibrates at a frequency lower by a factor of 1.33 – that is, 25% lower, taking E4 (330 Hz) down to B3 (247 Hz). This is just the basic physics.

Now we get into the subtlety.

The core of the third string is smaller than that of the second string. And the core diameter of each of the heavier strings is just a little larger than that of the preceding string.

But these changes in core diameter are small compared to changes in the diameter of the wound string.

The density of the phosphor bronze winding (~8,800 kg/m^3) is similar to, but actually around 10% higher than, the density of the stainless steel core (~8,000 kg/m^3). This is not a big difference.

If we simply take the ratios of outer diameters of the top and bottom strings (1.34/0.31 ≈ 4.3) this is sufficient to explain the required two octave (factor 4) change in frequency.
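This diameter argument can be sketched numerically. The assumption (only roughly true, given the ~10% density difference noted above) is that frequency varies inversely with diameter at fixed tension and length:

```python
# Sketch of the diameter argument: at (roughly) equal tension and length,
# frequency varies as 1/sqrt(mass per unit length), and mass per unit length
# varies as diameter^2, so frequency varies inversely with diameter.
d_string1_mm = 0.31   # outer diameter of the thinnest string (E4)
d_string6_mm = 1.34   # outer diameter of the thickest string (E2)

ratio = d_string6_mm / d_string1_mm
print(f"Diameter ratio ≈ {ratio:.1f}")              # ≈ 4.3
print(f"Required frequency ratio: {330 / 82.4:.2f}")  # two octaves ≈ 4
```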

Results#3: Why are some strings wire-wound?

The reason that the thicker strings on a guitar are wire-wound can be appreciated if one imagines the alternative.

A piece of stainless steel 1.34 mm in diameter is not ‘a string’, it’s a rod. Think about the properties of the wire used to make a paperclip.

So although one could attach such a solid rod to a guitar, and although it would vibrate at the correct frequency, it would not move very much, and so could not be pressed against the frets, and would not give rise to a loud sound.

The purpose of using wire-wound strings is to increase their flexibility while maintaining a high mass per unit length.

Results#4: Stress?

The first thing I calculated was the tension in each string. By dividing that result by the cross-sectional area of each string I can calculate the stress in the wire.

But it’s important to realise that the tension is carried only within the steel core of each string. The windings provide mass-per-unit-length but add nothing to the resistance to stretching.

The stress has units of newtons per square metre (N/m^2) which in the SI has a special name: the pascal (Pa). The stresses in the strings are very high so the values are typically in the range of gigapascals (GPa).
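As a sketch, the stress calculation for the thinnest string looks like this. The ~100 N tension is an illustrative value; 0.31 mm is the measured core diameter:

```python
import math

# Sketch of the stress calculation for the thinnest string. The ~100 N tension
# is an illustrative value, not my exact measurement.
def core_stress_GPa(tension_N, core_diameter_mm):
    """Stress = tension / cross-sectional area of the steel core, in GPa."""
    radius_m = core_diameter_mm * 1e-3 / 2
    area_m2 = math.pi * radius_m ** 2
    return tension_N / area_m2 / 1e9

print(f"{core_stress_GPa(100, 0.31):.1f} GPa")  # ≈ 1.3 GPa
```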

Click image for a larger version. The estimated stress within the stainless-steel cores of the 6 strings of a steel-strung guitar. Notice that the first and third strings have considerably higher stress than the other strings. In fact the stress in these cores just exceeds the nominal yield stress of stainless steel. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

This graph contains the seeds of an explanation for some of the tuning behaviour I have observed – that the first and third strings are tricky to tune.

With new strings one finds that – most particularly with the 1st string (the thinnest string) – one can tune the string precisely, but within seconds, the frequency of the string falls, and the string goes out of tune.

What is happening is a phenomenon called creep. The physics is immensely complex, but briefly, when a high stress is applied rapidly, the stress is not uniformly distributed within the microscopic grains that make up the metal.

To distribute the stress uniformly requires the motion of faults within the metal called dislocations. And these dislocations can only move at a finite rate. As the dislocations move they relieve the stress.

After many minutes and eventually hours, the dislocations are optimally distributed and the string becomes stable.

Results#5: Yield Stress

The yield stress of a metal is the stress beyond which the metal is no longer elastic i.e. after being exposed to stresses beyond the yield stress, the metal no longer returns to its prior shape when the stress is removed.

For strong steels, stretching beyond their yield stress will cause them to ‘neck’ and thin and rapidly fail. But stainless steel is not designed for strength – it is designed not to rust! And typically its yield curve is different.

Typically stainless steels have a smooth stress-strain curve, so being beyond the nominal yield stress does not imply imminent failure. It is because of this characteristic that the creep is not a sign of imminent failure. The ultimate tensile strength of stainless steel is much higher.

Results#6: Strain

Knowing the stress to which the core of the wire is subjected, one can calculate the expected strain, i.e. the fractional extension of the wire.

Click image for a larger version. The estimated strain of the 6 strings of a steel-strung guitar. Also shown on the right-hand axis is the actual extension in millimetres. Notice that the first and third strings have considerably higher strain than the other strings. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The calculated fractional string extension (strain) ranges from about 0.4% to 0.8% and the actual string extension from 2.5 mm to 5 mm.

This is difficult to measure accurately, but I did make an attempt by attaching a small piece of tape to the end of the old string as I removed it, and to the end of the new string as I tightened it.

Click image for a larger version. Method for estimating the strain in a guitar string. A piece of tape is tied to the top of the old string while it is still tight. On loosening, the tape moves with the part of the string to which it is attached.

For the first string my estimate was between 6 mm and 7 mm of extension, so it seems that the calculations are a bit low, but in the right ball-park.
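For anyone who wants to check the arithmetic, here is a sketch of the strain estimate. The Young’s modulus of ~200 GPa is an assumed textbook value for stainless steel, not something I measured:

```python
# Sketch of the strain estimate, assuming a textbook Young's modulus for
# stainless steel of ~200 GPa (an assumed value, not a measurement).
youngs_modulus_Pa = 200e9
stress_Pa = 1.3e9        # illustrative core stress for the first string
length_m = 0.64          # vibrating length of the string

strain = stress_Pa / youngs_modulus_Pa
extension_mm = strain * length_m * 1000
print(f"Strain ≈ {strain:.2%}, extension ≈ {extension_mm:.1f} mm")  # ≈ 0.65%, ≈ 4.2 mm
```

This sits within the calculated 0.4% to 0.8% range, and a little below my measured 6 mm to 7 mm.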

Summary

Please forgive me: I have rambled. But I think I have eventually got to a destination of sorts.

In summary, the design of guitar strings is clever. It balances:

  • the tension in each string.
  • the stress in the steel core of each string.
  • the mass-per-unit length of each string.
  • the flexibility of each string.

The thinnest string is typically available in diameters from 0.23 mm to 0.4 mm. Thinner strings are easier to bend, but reducing the diameter increases the stress in the wire and makes it more likely to break. They also tend to be less loud.

The second string is usually unwound like the first string but the stresses in the string are lower.

The thicker strings are usually wire-wound to increase their flexibility for a given tension and mass-per-unit-length. If the strings were unwound they would be extremely inflexible, impossible to push against the frets, and would vibrate only with a very low amplitude.

How does the flexibility arise? When these wound strings are stretched, small gaps open up between the windings and allow the windings to slide past each other when the string is bent.

Click image for a larger version. Illustration of how stretching a wound string slightly separates the windings. This allows the wound components to slide past each other when the string is bent.

Additionally modern strings are often coated with a very thin layer of polymer which prevents rusting and probably reduces friction between the sliding coils.

Remaining Questions 

I still have many questions about guitar strings.

The first set of questions concerns the behaviour of nylon strings used on classical guitars. Nylon has a very different stress-strain curve from stainless steel, and the contrast in density between the core materials and the windings is much larger.

The second set of questions concerns the ageing of guitar strings. Old strings sound dull, and changing strings makes a big difference to the sound: the guitar sounds brighter and louder. But why? I have a couple of ideas, but none of them feel convincing at the moment.

And now I must get back to playing the guitar, which is quite a different matter. And sadly understanding the physics does not help at all!

P.S.

The strings I used were Elixir 12-53 with a Phosphor Bronze winding, and the guitar is a Taylor model 114ce.

A Watched Pan…

January 18, 2022

Click on Image for larger version.  A vision of domestic bliss in the de Podesta household. Apparatus in use for measuring the rate of heating of 1 litre of water on an induction hob.

In the beginning…

Friends, my very first blog article (written back on 1st January 2008 and re-posted in 2012) was about whether it is better to boil water with an electric kettle or a gas kettle on a gas hob.

Back then, my focus was simply on energy efficiency rather than carbon dioxide emissions. I had wanted to know how much of the primary energy of methane ended up heating the water. I did this by simply timing how long it took to boil 1 litre of water by various methods.

Prior to doing the experiments I had imagined that heating water with gas was more efficient because the fuel was used directly to heat the water. In contrast, even the best gas-fired power stations are only ~50% efficient.

What I learned back then was that gas cookers are terrible at heating kettles & pans! They were so much worse than I had imagined that I later spent many hours with different size pans, burners, and amounts of water just so I could believe my results!

Typically gas burners only transferred between 36% and 56% of the energy of combustion to the water – the exact fraction depending on the size and power of the burner. Heating things faster with a bigger burner was less efficient. Using a small flame and a very large pan, I could achieve an efficiency of 83%, but of course the water heated only very slowly.

This inefficiency was roughly equivalent to or worse than the inefficiency of the power station generating electricity, and so I concluded that electric kettles and gas kettles made similarly inefficient use of the primary energy of the gas. But an electric kettle made it easier to heat just the right amount of water, and so avoided heating water that wasn’t used.

14 years later…

After a recent conversation on Twitter (@Protons4B) I thought I would look at this issue again.

Why? Well two things have changed in the last 14 years.

  • Firstly, electricity generation now incorporates dramatically more renewable sources than in 2008 and so using electricity involves ever decreasing amounts of gas-fired generation.
  • Secondly, I am now concerned about emissions of carbon dioxide resulting from lifestyle choices.

Also being a retired person, I now have a bit more time on my hands and access to fancy instruments such as thermometers.

The way I did the experiments is described at the end of the article, but here are the results.

Results#1: Efficiency

The chart below shows estimates for the efficiency with which the electrical energy or the calorific content of the gas is turned into heat in one litre of water. My guess is these figures all have an uncertainty of around ±5%.

  • The kettle was close to 100% efficient.
  • The induction hob was approximately 86% efficient.
  • The microwave oven was approximately 65% efficient.

In contrast, heating the water in a pan (with a lid) on a gas hob was only around 38% or 39% efficient.

Click on Image for larger version. Chart showing the efficiency of 5 methods of heating 1 litre of water. 100% efficiency means that all the energy input used resulted in a temperature rise. The two gas results were for heating pans with two different diameters (19 cm and 17 cm).

It was particularly striking that the water heated on the gas burner (~1833 W) took 80% longer to boil than on the Induction hob (~1440 W) despite the heating power being ~20% less on the induction hob.

Click on Image for larger version. Chart showing the rate of heating for each of the 5 methods of heating 1 litre of water. Notice that the water heated on the gas burner (~1833 W) took 80% longer to boil than on the Induction hob (~1440 W) despite the heating power being ~20% less on the induction hob. Notice that up until 40 °C, the microwave oven heats water as fast as the gas hob, despite using half the power!

Results#2: Carbon Dioxide Emissions 

Based on the average carbon intensity of electricity in 2021 (235 g CO2/kWh), boiling a litre of water by any electrical means results in substantially less CO2 emissions than using a pan (with a lid) on a gas burner.

I performed these experiments on 17th January 2022 between 4 p.m. and 7 p.m., when the carbon intensity of electricity was well above average: ~330 g CO2/kWh. In this case, boiling a litre of water in a kettle or on the induction hob still gave the lowest emissions, but heating water in a microwave oven resulted in similar emissions to those arising from using a pan (with a lid) on a gas burner.

Click on Image for larger version. Charts showing the amount of carbon dioxide released by heating 1 litre of water from 10 °C to 100 °C using either electrical methods or gas. The gas heating is assumed to have a carbon intensity of 200 gCO2/kWh. The left-hand chart is based on the carbon intensity of 330 gCO2/kWh of electricity which was appropriate at the time the experiments were performed. The right-hand chart is based on the carbon intensity of 235 gCO2/kWh of electricity which was the average value for 2021. Electrical methods of heating result in lower CO2 emissions in almost all circumstances.
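The chart values can be reproduced approximately with a back-of-envelope calculation, using the efficiencies and carbon intensities quoted above:

```python
# Back-of-envelope CO2 per litre boiled (10 °C -> 100 °C), using the
# efficiencies and carbon intensities quoted above.
energy_kWh = 1.0 * 4187 * 90 / 3.6e6   # ~0.105 kWh to heat 1 kg of water by 90 °C

methods = {
    # name: (efficiency, carbon intensity in gCO2/kWh)
    "kettle (electricity, 2021 average)": (1.00, 235),
    "induction hob (electricity)":        (0.86, 235),
    "microwave oven (electricity)":       (0.65, 235),
    "pan with lid on gas hob":            (0.38, 200),
}
for name, (efficiency, intensity_g_per_kWh) in methods.items():
    grams_CO2 = energy_kWh / efficiency * intensity_g_per_kWh
    print(f"{name}: {grams_CO2:.0f} g CO2")
```

On these numbers the gas hob releases roughly twice the CO2 of the kettle per litre boiled.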

Results#3: Cost 

Currently I am paying 3.83 p/kWh for gas and 16.26 p/kWh for electricity i.e. electricity is around four times more expensive than gas.

These prices are likely to rise substantially in the coming months, but it is not clear whether this ratio will change much.

So sadly, despite gas being the slowest way to heat water and the way which releases the most climate damaging gases, it is still the cheapest way to heat water. It’s about 40% cheaper than using an electric kettle.
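The arithmetic behind that figure can be sketched as follows, using my current tariffs and the measured efficiencies:

```python
# Sketch of the cost comparison at my current tariffs (16.26 p/kWh for
# electricity, 3.83 p/kWh for gas) and the measured efficiencies.
energy_kWh = 1.0 * 4187 * 90 / 3.6e6   # energy to heat 1 litre from 10 °C to 100 °C

kettle_cost_p = energy_kWh / 1.00 * 16.26   # kettle is ~100% efficient
gas_cost_p = energy_kWh / 0.38 * 3.83       # gas pan is ~38% efficient

print(f"Kettle: {kettle_cost_p:.2f} p per litre boiled")        # ≈ 1.70 p
print(f"Gas pan: {gas_cost_p:.2f} p per litre boiled")          # ≈ 1.06 p
print(f"Gas is ~{1 - gas_cost_p / kettle_cost_p:.0%} cheaper")  # ≈ 38%
```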

Conclusion 

For the sake of the climate, use an electric kettle if you can.

=========================

That was the end of the article and there is no need to read anymore unless you want to know how I made these measurements.

Method 

Estimating the power delivered to the water + vessel

  • For electrical measurements I paused the heating typically every 30 seconds, and read the plug-in electrical power meter. This registered electrical energy consumed in kWh to 3 decimal places.
    • I fitted a straight line to the energy versus time graph to estimate power.
  • For gas measurements I read the gas meter before and after each experiment. This reads in m^3 to 3 decimal places and I converted this volume reading to kWh by multiplying by 11.19 kWh/m^3.
    • The gas used amounted to only 0.025 m^3, so the uncertainty from digitisation alone is at least 4%.
    • I divided by the time – typically 550 seconds – to estimate the power.

Mass of water

  • I placed the heating vessel (kettle, pan, jug) on the balance and tared (zeroed) the reading.
  • I then added water until the vessel read within 1 g of 1000 g. Uncertainty is probably around 1% or 10 g.

Heating rate with 100% energy conversion

  • Based on the power consumed, I estimated the ideal heating rate assuming 100% of the supplied power went into raising the temperature of the water.

  • I assumed the average specific heat capacity of water of the range from 10 °C to 100 °C was 4187 J/ (kg °C)
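These two bullets amount to a one-line formula, dT/dt = P / (m × c), which can be sketched as:

```python
# Sketch of the ideal heating rate: dT/dt = P / (m * c), assuming 100% of the
# supplied power goes into the water.
def ideal_rate_C_per_s(power_W, mass_kg=1.0, specific_heat=4187):
    """Ideal heating rate in °C per second for a given applied power."""
    return power_W / (mass_kg * specific_heat)

print(f"{ideal_rate_C_per_s(1440):.3f} °C/s")  # induction hob (~1440 W): ≈ 0.344
print(f"{ideal_rate_C_per_s(1833):.3f} °C/s")  # gas burner (~1833 W): ≈ 0.438
```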

Measuring the temperature.

  • For electrical measurements I paused the heating typically every 30 seconds, stirred the liquid with a coffee-stirrer for 2 or 3 seconds, and then took the temperature using a thermocouple probe.
  • For gas measurements it wasn’t possible to pause the heating because of the way I was measuring the power. So about 10 seconds before a reading was due, I slipped the coffee stirrer under the lid to mix the water.

Estimating the rate of temperature rise.

  • For all measurements I fitted a straight line to the temperature versus time data, using only data points below approximately 80 °C to avoid the effect of increased heat losses near the boiling point.

Mass of the ‘addenda’.

  • The applied power heated not only the water but also its container.
  • The heat capacity of the 19 cm stainless steel pan (572 g) was roughly 6% of the heat capacity of the water.
  • I chose not to take account of this heat capacity because there was no way to heat the water without a container. The container is thus a somewhat confounding factor, but treating it this way allows a more meaningful comparison of the results.

Efficiency of boiling

  • I estimated efficiency by comparing the actual warming rate with the ideal warming rate.
  • I then calculated the energy required to heat 1 kg of water from 10 °C to 100 °C, and multiplied this by the efficiency.
  • In this way the result is relevant even if all the measurements did not start and stop at the same temperatures.

Oddities

  • I heated the water in the microwave in a plastic jug which did not have a tight fitting lid. I am not sure if this had an effect.
  • I did notice that the entire microwave oven was warm to hot at the end of the heating sequence.

Our Old Car is Dead. Long Live The New Car!

July 28, 2021

Click for larger image. Our old and new Zafira cars

After 20 years and 96,000 miles, our 2001 Vauxhall Zafira is close to death.

We bought it for £7000 when it was three years old in 2004. Back then it was all shiny and new, but over the last 17 years it has developed a very long list of faults.

Alexei Sayle once said of his car: “When one door closes, another one opens.”. This was one of the faults our car did not have. But it did have a feature such that: “When one door closes, all the electric windows operate simultaneously.”

Over the last few weeks the engine has begun making horrific noises, the engine warning light is on permanently, and there is an acrid stench of burning oil in the cabin.

After much deliberation, we have replaced it with a closely similar car, a 2010 Zafira with only 52,000 miles on its ‘clock’. The new car lacks our old car’s charmingly idiosyncratic list of faults, but what can you expect for £3,200?

In this post I would like to explain the thinking behind our choice of car.

Do we need a car?

Strictly speaking, no. We could operate with a combination of bikes and taxis and hire cars. But my wife and I do find having a car extremely convenient.

Having a car available simplifies a large number of mundane tasks and gives us the sense of – no irony intended – freedom.

Further, although I am heavily invested in reducing my carbon dioxide emissions, I do not want to live the life of a ‘martyr’. I am keen to show that a life with low carbon dioxide emissions can be very ‘normal’.

So why not an electric car? #1: Cost

Given the effort and expense I have gone to in reducing carbon dioxide emissions from the house, I confess that I did want to get an electric car.

I have come to viscerally hate the idea of burning a few kilograms of hydrocarbon fuel in order to move myself around. It feels dirty.

But sadly buying a new electric car didn’t really make financial sense.

There are lots of excellent electric family cars available in the UK, but they all cost in the region of £30,000.

There are not many second-hand models available but amongst those that were available, there appeared to be very few for less than £15,000.

Click for larger version. Annual Mileage of our family cars since 1995 taken from their MOT certificates. The red dotted line is the Zafira’s average over its lifetime.

Typically my wife and I drive between 3,000 and 5,000 miles per year, and we found ourselves unable to enthuse about the high cost of these cars.

And personally, I feel like I have spent a fortune on the house. Indeed I have spent a fortune! And I now need to just stop spending money for a while. But Michael: What about the emissions?

So why not an electric car? #2: Carbon Dioxide

Sadly, buying an electric car didn’t quite make sense in terms of carbon emissions either.

Electric cars have very low emissions of carbon dioxide per kilometre. But they have – like conventional cars – quite large amounts of so-called ’embedded’ carbon dioxide arising from their manufacture.

As a consequence, at low annual mileages, it takes several years for the carbon dioxide emissions of an electric car to beat the carbon dioxide emissions from an already existing internal combustion engine car.

The graph below compares the anticipated carbon dioxide emissions from our old car, our new car, and a hypothetical EV over the next 10 years. The assumptions I have made are listed at the end of the article.

Click for larger version. Projected carbon dioxide emissions from driving 5,000 miles per year in: Our current car (2001 Zafira); Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

For an annual mileage of 5000 miles, the breakeven point for carbon dioxide emissions is 6 or 7 years away. If we reduced our mileage to 3000 miles per year, then the breakeven point would be even further away.

Click for larger version. Projected carbon dioxide emissions from driving 3,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

However, we are a low mileage household. If we drove a more typical 10,000 miles per year then the breakeven point would be just a couple of years away. Over 10 years, the Zafira would emit roughly 12 tonnes more carbon dioxide than the EV.

If we took account of embodied carbon dioxide in a combustion engine car, i.e. if we were considering buying a new car, the case for an EV would be very compelling.

Click for larger version. Projected carbon dioxide emissions from driving 10,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

So…

By replacing our old car with a closely similar model we have minimised the cognitive stress of buying a new car. Hopefully it will prove to be reliable.

And however many miles we drive in the coming years, our new car will reduce our carbon dioxide emissions compared to what they would have been in the old car by about 17%. And no new cars will have been built to achieve that saving.

Assuming that our new car will last us for (say) 5 years, I am hopeful that by then the cost of electric cars will have fallen to the point where an electric car – new or second-hand – might make sense to us.

Additionally, if the electricity used to both manufacture and charge electric cars increasingly comes from renewable sources, then the reduction in carbon dioxide emissions associated with driving electric cars will (year-on-year) become ever more compelling.

However, despite being able to justify this decision to myself, I must confess that I am sad not to be able to join the electric revolution just yet.

Assumptions

For the Zafiras:

  • I used the official CO2 emissions per kilometre (190 and 157 gCO2/km respectively) from the standard government database.

For the hypothetical EV

  • I took a typical high-efficiency figure of 16 kWh per 100 km from this article.
  • I assumed a charging inefficiency of 10%, and a grid carbon intensity of 200 gCO2/kWhe reducing to 100 gCO2/kWhe in 10 years time.
  • I assumed that the battery size was 50 kWh and that embodied carbon emissions were 65 kg per kWh (link) of battery storage yielding 3.3 tonnes of embodied carbon dioxide.
  • I assumed the embodied carbon dioxide in the chassis and other components was 4.6 tonnes.
  • For comparison, the roughly 8 tonnes of embodied carbon dioxide in an EV is only just less than the combined embodied carbon dioxide in all the other emission reduction technology I have bought recently:
    • Triple Glazing, External Wall Insulation, Solar Panels, Powerwall Battery, Heat Pump, Air Conditioning

I think all these numbers are quite uncertain, but they seem plausible and broadly in line with other estimates one can find on the web.
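As a quick sanity check on these assumptions, the implied per-kilometre intensity of the EV at today’s grid intensity is about 35 gCO2/km, a factor of 4–5 below the new Zafira’s 157 gCO2/km. A minimal sketch (variable names are mine):

```python
# Per-kilometre CO2 intensity implied by the EV assumptions above.

kwh_per_km = 16 / 100      # EV consumption: 16 kWh per 100 km
charging_loss = 0.10       # 10% charging inefficiency
grid_g_per_kwh = 200       # current grid intensity, gCO2 per kWh

ev_g_per_km = kwh_per_km * (1 + charging_loss) * grid_g_per_kwh
print(f"EV: {ev_g_per_km:.0f} gCO2/km vs new Zafira: 157 gCO2/km")
```

As the grid intensity halves over the next decade, this figure would fall towards roughly 18 gCO2/km.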

 

Rocket Science

January 14, 2021

One of my lockdown pleasures has been watching SpaceX launches.

I find the fact that they are broadcast live inspiring. And the fact they will (and do) stop launches even at T-1 second shows that they do not operate on a ‘let’s hope it works’ basis. It speaks to me of confidence built on the application of measurement science and real engineering prowess.

Aside from the thrill of the launch and the beautiful views, one of the brilliant features of these launches is that the screen view gives lots of details about the rocket: specifically its time, altitude and speed.

When coupled with a little (public) knowledge about the rocket one can get to really understand the launch. One can ask and answer questions such as:

  • What is the acceleration during launch?
  • What is the rate of fuel use?
  • What is Max Q?

Let me explain.

Rocket Science#1: Looking at the data

To do my study I watched the video above starting at launch, about 19 minutes 56 seconds into the video. I then repeatedly paused it – at first every second or so – and wrote down the time, altitude (km) and speed (km/h) in my notebook. Later I wrote down data for every kilometre or so in altitude, then later every 10 seconds or so.

In all I captured around 112 readings, and then entered them into a spreadsheet (Link). This made it easy to convert the speeds to metres per second.

Then I plotted graphs of the data to see how they looked: overall I was quite pleased.

Click for a larger image. Speed (m/s) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The velocity graph clearly showed the stage separation. In fact looking in detail, one can see the Main Engine Cut Off (MECO), after which the rocket slows down for stage separation, and then the Second Engine Start (SES) after which the rocket’s second stage accelerates again.

Click for a larger image. Detail from graph above showing the speed (m/s) of Falcon 9 versus time (s) after launch. After MECO the rocket is flying upwards without power and so slows down. After stage separation, the second stage then accelerates again.

It is also interesting that acceleration – the slope of the speed-versus-time graph – increases up to stage separation, then falls and then rises again.

The first stage acceleration increases because the thrust of the rocket is almost constant – but its mass is decreasing at an astonishing 2.5 tonnes per second as it burns its fuel!

After stage separation, the second stage mass is much lower, but there is only one rocket engine!

Then I plotted a graph of altitude versus time.

Click for a larger image. Altitude (km) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The interesting thing about this graph is that much of the second stage burn is devoted to increasing the speed of the spacecraft at almost constant altitude – roughly 164 km above the Earth. It’s not pushing the spacecraft higher and higher – but faster and faster.

About 30 minutes into the flight the second stage engine re-started, speeding up again and raising the altitude further to put the spacecraft on a trajectory towards a geostationary orbit at 35,786 km.

Rocket Science#2: Analysing the data for acceleration

To estimate the acceleration I subtracted each speed reading from the subsequent one and then divided by the time between the two readings. This gives acceleration in units of metres per second per second, but I thought it would be more meaningful to plot the acceleration as a multiple of the strength of Earth’s gravitational field g (9.81 m/s/s).

The data as I calculated them had spikes because the small time differences between speed measurements (of the order of a second) were not recorded very accurately. So I smoothed the data by averaging 5 data points together.
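This difference-then-smooth procedure can be sketched as follows. The sample times and speeds below are illustrative stand-ins, not my transcribed readings.

```python
# Finite-difference acceleration from (time, speed) readings, followed by
# a 5-point moving average to suppress spikes from the ~1 s timing resolution.

G = 9.81  # m/s/s

def acceleration_in_g(times_s, speeds_ms):
    """Acceleration between successive readings, as a multiple of g."""
    return [(v2 - v1) / (t2 - t1) / G
            for (t1, v1), (t2, v2) in zip(zip(times_s, speeds_ms),
                                          zip(times_s[1:], speeds_ms[1:]))]

def smooth(values, window=5):
    """Centred moving average over `window` points (shorter at the ends)."""
    half = window // 2
    return [sum(values[max(0, i - half): i + half + 1]) /
            len(values[max(0, i - half): i + half + 1])
            for i in range(len(values))]

times = [0, 1, 2, 3, 4, 5]        # s (illustrative)
speeds = [0, 12, 25, 37, 50, 63]  # m/s (illustrative)
accel = smooth(acceleration_in_g(times, speeds))
```

With real transcribed data the raw differences are noisy, and the moving average recovers a curve smooth enough to compare with the theoretical model below.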

Click for a larger image. Smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming fuel is used up at a uniform rate.

The acceleration increased as the rocket’s mass reduced, reaching approximately 3.5g just before stage separation.

I then wondered if I could explain that behaviour.

  • To do that I looked up the launch mass of a Falcon 9 (data sources at the end of the article) and saw that it was 549 tonnes (549,000 kg).
  • I then looked up the mass of the second stage 150 tonnes (150,000 kg).
  • I then assumed that the mass of the first stage was almost entirely fuel and oxidiser and guessed that the mass would decrease uniformly from T = 0 to MECO at T = 156 seconds. This gave a burn rate of 2558 kg/s – over 2.5 tonnes per second!
  • I then looked up the launch thrust from the 9 rocket engines and found it was 7,600,000 newtons (7.6 MN)
  • I then calculated the ‘theoretical’ net acceleration using Newton’s Second Law (a = F/m − g) at each time step – remembering to decrease the mass by 2,558 kilograms per second. And also remembering that the thrust-to-weight ratio has to exceed 1 before the rocket can leave the ground!
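The steps above can be sketched in a few lines of Python. The figures are those quoted in the bullets, and this simple model (constant thrust, uniform burn rate, net acceleration F/m minus g) reproduces an acceleration of around 3.5–3.7g just before MECO.

```python
# 'Theoretical' first-stage acceleration: constant thrust, mass falling
# uniformly from launch to MECO, net acceleration a = F/m - g.

G = 9.81                  # m/s/s
THRUST_N = 7.6e6          # combined thrust of the 9 first-stage engines
LAUNCH_MASS_KG = 549_000  # full launch mass
STAGE2_MASS_KG = 150_000  # mass remaining at MECO (second stage)
T_MECO_S = 156            # time of Main Engine Cut Off

BURN_RATE = (LAUNCH_MASS_KG - STAGE2_MASS_KG) / T_MECO_S  # ~2558 kg/s

def acceleration_in_g(t):
    """Net acceleration (in multiples of g) at time t after launch."""
    mass = LAUNCH_MASS_KG - BURN_RATE * t
    return (THRUST_N / mass - G) / G

for t in (0, 50, 100, 150):
    print(f"t = {t:3d} s: a = {acceleration_in_g(t):.2f} g")
```

At launch the thrust-to-weight ratio is only about 1.4, so the net acceleration starts at roughly 0.4g and climbs steeply as fuel burns off.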

The theoretical line (– – –) catches the trend of the data pretty well. But one interesting feature caught my eye – a period of constant acceleration around 50 seconds into the flight.

This is caused by the Falcon 9 throttling back its engines to reduce stresses on the rocket as it experiences maximum aerodynamic pressure – so-called Max Q – around 80 seconds into flight.

Click for a larger image. Detail from the previous graph showing smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming fuel is used up at a uniform rate. Highlighted in red is the region around 50 seconds into flight when the engines are throttled back to limit aerodynamic stresses as the craft experiences maximum aerodynamic pressure (Max Q) about 80 seconds into flight.

Rocket Science#3: Maximum aerodynamic pressure

Rockets look like they do – rocket-shaped – because they have to get through Earth’s atmosphere rapidly, pushing the air in front of them as they go.

The amount of work needed to do that is generally proportional to three factors:

  • The cross-sectional area A of the rocket. Narrower rockets require less force to push through the air.
  • The speed of the rocket squared (v²). One factor of v arises from the fact that travelling faster requires one to move the same amount of air out of the way faster. The second factor arises because moving air more quickly out of the way is harder due to the viscosity of the air.
  • The air pressure P. The pressure (and density) of the air in the atmosphere falls roughly exponentially with height, reducing by approximately 63% every 8.5 km.

The work done by the rocket on the air results in so-called aerodynamic stress on the rocket. These stresses – forces – are expected to vary as the product of the above three factors: APv². The cross-sectional area of the rocket A is constant, so in what follows I will just look at the variation of the product Pv².

As the rocket rises, the pressure falls and the speed increases. So the product Pv² will naturally have a maximum value.

The importance of the maximum of the product Pv² (known as Max Q) as a point in flight is that if the aerodynamic forces are not uniformly distributed, then the rocket trajectory can easily become unstable – and Max Q marks the point at which the danger of this is greatest.

The graph below shows the variation of pressure P with time during flight. The pressure is calculated using:

P = 1000 × exp(−h/h₀)

where the ‘1000’ is the approximate pressure at the ground (in mbar), h is the altitude at a particular time, and h₀ is called the scale height of the atmosphere and is typically 8.5 km.

Click for a larger image. The atmospheric pressure calculated from the altitude h versus time after launch (s) during the Turksat 5A launch.

I then calculated the product Pv², and divided by 10 million to make it plot easily.
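This calculation can be sketched as follows. The (time, altitude, speed) triples below are illustrative stand-ins for the transcribed data, chosen only to show the shape of the calculation; with them, the maximum of Pv² falls at around 80 seconds.

```python
# Estimate Max Q: exponential-atmosphere pressure P = 1000*exp(-h/8.5)
# (mbar, h in km), multiplied by speed squared at each reading.

import math

H0_KM = 8.5  # scale height of the atmosphere

def pressure_mbar(h_km):
    """Approximate atmospheric pressure at altitude h_km (in mbar)."""
    return 1000 * math.exp(-h_km / H0_KM)

# (time s, altitude km, speed m/s) - illustrative values, not SpaceX data
flight = [(20, 1.2, 120), (40, 4.0, 260), (60, 8.5, 420),
          (80, 14.0, 600), (100, 21.0, 820), (120, 30.0, 1100)]

# Pv^2, divided by 10 million so the numbers are easy to plot
stress = [(t, pressure_mbar(h) * v**2 / 1e7) for t, h, v in flight]
t_maxq = max(stress, key=lambda s: s[1])[0]
print(f"Max Q at about t = {t_maxq} s")
```

The maximum arises because pressure falls exponentially while v² rises only polynomially: eventually the falling pressure always wins.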

Click for a larger image. The aerodynamic stresses calculated from the altitude and speed versus time after launch during the Turksat 5A launch.

This calculation predicts that Max Q occurs about 80 seconds into flight, long after the engines throttled down, and in good agreement with SpaceX’s more sophisticated calculation.

Summary 

I love watching the SpaceX launches, and having analysed one of them just a little bit, I feel like I understand better what is going on.

These calculations are well within the capability of advanced school students – and there are many more questions to be addressed.

  • What is the pressure at stage separation?
  • What is the altitude of Max Q?
  • The vertical velocity can be calculated by measuring the rate of change of altitude with time.
  • The horizontal velocity can be calculated from the speed and the vertical velocity.
  • How does the speed vary from one mission to another?
  • Why does the craft aim for a particular speed?

And then there’s the satellites themselves to study!

Good luck with your investigations!

Resources

And finally thanks to Jon for pointing me towards ‘Flight Club – One-Click Rocket Science‘. This site does what I have done but with a good deal more attention to detail! Highly Recommended.

 

