Archive for the ‘Protons for Breakfast’ Category

The Physics of Guitar Strings

January 24, 2022

Friends, regular readers may be aware that I play the guitar.

And sleuths amongst you may have deduced that if I play the guitar, then I must occasionally change guitar strings.

The physics of tuning a guitar – the extreme stretching of strings of different diameters – has fascinated me for years, but it is only now that my life has become pointless that I have managed to devote some time to investigating the phenomenon.

What I have found is that the design of guitar strings is extraordinarily clever, leaving me in awe of the companies which make them. They are everyday wonders!

Before I get going, I just want to mention that this article will be a little arcane and a bit physics-y. Apologies.

What’s there to investigate?

The first question is why are guitar strings made the way they are? Some are plain ‘rustless steel’, but others have a steel core around which are wound super-fine wires of another metal, commonly phosphor-bronze.

The second question concerns the behaviour of the thinnest string: when first tuned, it spontaneously de-tunes, sometimes very significantly, and it keeps doing so for several hours after being stretched.

Click image for a larger version. The structure of a wire-wound guitar string. Usually the thickest four strings on a guitar are made this way.

Of course, I write these questions out now like they were in my mind when I started! But no, that’s just a narrative style.

When I started my investigation I just wanted to see if I understood what was happening. So I just started measuring things!

Remember, “two weeks in the laboratory can save a whole afternoon in the library”.

Basic Measurements

The frequency with which a guitar string vibrates is related to three things:

  • The length of the string: the longer the string, the lower the frequency of vibration.
  • The tension in the string: for a given length, the tighter the string, the higher the frequency of vibration.
  • The mass per unit length of the string: for a given length and tension, heavier strings lower the frequency of vibration.

In these experiments, I can’t measure the tension in the string directly. But I can measure all the other properties:

  • The length of string (~640 mm) can be measured with a tape measure with an uncertainty of 1 mm
  • The frequency of vibration (82 Hz to 330 Hz) can be measured by tuning a guitar against a tuner with an uncertainty of around 0.1 Hz
  • The mass per unit length can be determined by cutting a length of the string and measuring its length (with a tape measure) and its mass (with a sensitive scale). So-called jewellery scales, which can be bought for £30, will weigh up to 50 g to the nearest milligram!

Click image for a larger version. Using a jewellery scale to weigh a pre-measured length of guitar string. Notice the extraordinary resolution of 1 mg. Typically repeatability was at the level of ± 1 mg.

These measurements are enough to calculate the tension in the string, but I am also interested in the stress to which the material of the string is subjected.

The stress in the string is defined as the tension in the string divided by the cross-sectional area of the string. I can work out the area by measuring the diameter of the string with digital callipers. These devices can be bought for under £20 and will measure string diameters with an uncertainty of 0.01 mm.

However, there is a subtlety when it comes to working out the stress in the wire-wound strings. The tension is not carried across the whole cross-section of the wire, but only along its core – so one must measure the diameter of the core of these strings (which can be done at their ends) rather than the overall diameter along the playing length.

Click image for a larger version. Formulas relating the tension T in a string (measured in newtons); the frequency of vibration f (measured in hertz); the length of the string L (measured in metres) and the mass per unit length of the string (m/l) measured in kilograms per metre.
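For anyone who wants to check the arithmetic, here is a minimal Python sketch of the tension formula. The mass-per-unit-length value is illustrative (a notional 10.8 g per metre for the low E string), not one of my actual measurements:

```python
# Tension from the formula f = (1/(2L)) * sqrt(T/mu)
# rearranged to give T = 4 * L**2 * f**2 * mu

def string_tension(L, f, mu):
    """Tension T (newtons) in a string of vibrating length L (m),
    fundamental frequency f (Hz) and mass per unit length mu (kg/m)."""
    return 4 * L**2 * f**2 * mu

# Illustrative values for the low E (E2) string:
# 640 mm scale length, 82.41 Hz, a notional 10.8 g per metre.
mu = 0.0108                       # kg/m (assumed, not measured)
T = string_tension(0.640, 82.41, mu)
print(f"Tension ≈ {T:.0f} N")     # ≈ 120 N
```

With these assumed numbers the tension comes out at roughly 120 N, in line with the results in the next section.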

Results#1: Tension

The graph below shows the calculated tension in the strings in their standard tuning.

Click image for a larger version. The calculated tension in the 6 strings of a steel-strung guitar. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The tension is high – roughly 120 newtons per string. If the tension were maintained by a weight stretching the string over a wheel, there would be roughly 12 kg on each string!

Note that the tension is reasonably uniform across the neck of the guitar. This is important. If it were not so, the tension in the strings would tend to bend the neck of the guitar.

Results#2: Core Diameter

The graph below shows the measured diameters of each of the strings.

Click image for a larger version. The measured diameter (mm) of the stainless steel core of the 6 strings of a steel-strung guitar and the diameter of the string including the winding. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

First the obvious. The diameter of the second string is 33% larger than that of the first string. This increases its mass per unit length and – for the same tension – causes the second string to vibrate at a frequency a factor of 1.33 lower than the first string. This is just basic physics.

Now we get into the subtlety.

The core of the third string is smaller than that of the second string. And the core diameter of each of the heavier strings is just a little larger than that of the preceding string.

But these changes in core diameter are small compared to changes in the diameter of the wound string.

The density of the phosphor bronze winding (~8,800 kg/m^3) is similar to, but actually around 10% higher than, the density of the stainless steel core (~8,000 kg/m^3). This is not a big difference.

If we simply take the ratios of outer diameters of the top and bottom strings (1.34/0.31 ≈ 4.3) this is sufficient to explain the required two octave (factor 4) change in frequency.
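This can be made quantitative: at (roughly) fixed tension and density, frequency varies inversely with diameter. A small Python check, using the diameters read from the graph above:

```python
import math

d_first, d_sixth = 0.31, 1.34        # outer diameters in mm (from the graph)
ratio = d_sixth / d_first            # frequency ratio, if f is proportional to 1/d
semitones = 12 * math.log2(ratio)    # the same ratio as a musical interval

print(f"ratio {ratio:.2f} ≈ {semitones:.1f} semitones")
```

The ratio comes out at about 25 semitones – close to the required two octaves (24 semitones).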

Results#3: Why are some strings wire-wound?

The reason that the thicker strings on a guitar are wire-wound can be appreciated if one imagines the alternative.

A piece of stainless steel 1.34 mm in diameter is not ‘a string’, it’s a rod. Think about the properties of the wire used to make a paperclip.

So although one could attach such a solid rod to a guitar, and although it would vibrate at the correct frequency, it would not move very much, and so could not be pressed against the frets, and would not give rise to a loud sound.

The purpose of using wire-wound strings is to increase their flexibility while maintaining a high mass per unit length.

Results#4: Stress?

The first thing I calculated was the tension in each string. By dividing that result by the cross-sectional area of the core of each string I could calculate the stress in the wire.

But it’s important to realise that the tension is carried only within the steel core of each string. The windings provide mass per unit length but add almost nothing to the resistance to stretching.

The stress has units of newtons per square metre (N/m^2) which in the SI has a special name: the pascal (Pa). The stresses in the strings are very high so the values are typically in the range of gigapascals (GPa).
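The stress calculation itself is a one-liner. A minimal sketch in Python, with illustrative values (a ~120 N tension in the 0.31 mm plain first string):

```python
import math

def core_stress(tension_n, core_diameter_mm):
    """Stress (Pa) in the load-bearing core: tension divided by
    the cross-sectional area of the core alone."""
    area_m2 = math.pi * (core_diameter_mm * 1e-3 / 2) ** 2
    return tension_n / area_m2

# Illustrative: ~120 N tension in the 0.31 mm plain first string.
sigma = core_stress(120, 0.31)
print(f"Stress ≈ {sigma / 1e9:.1f} GPa")   # ≈ 1.6 GPa
```

A stress of order 1.6 GPa: comfortably in the gigapascal range, as the graph below shows.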

Click image for a larger version. The estimated stress within the stainless-steel cores of the 6 strings of a steel-strung guitar. Notice that the first and third strings have considerably higher stress than the other strings. In fact the stress in these cores just exceeds the nominal yield stress of stainless steel. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

This graph contains the seeds of an explanation for some of the tuning behaviour I have observed – that the first and third strings are tricky to tune.

With new strings one finds that – most particularly with the 1st string (the thinnest string) – one can tune the string precisely, but within seconds, the frequency of the string falls, and the string goes out of tune.

What is happening is a phenomenon called creep. The physics is immensely complex, but briefly, when a high stress is applied rapidly, the stress is not uniformly distributed within the microscopic grains that make up the metal.

To distribute the stress uniformly requires the motion of faults within the metal called dislocations. And these dislocations can only move at a finite rate. As the dislocations move they relieve the stress.

After many minutes and eventually hours, the dislocations are optimally distributed and the string becomes stable.

Results#5: Yield Stress

The yield stress of a metal is the stress beyond which the metal is no longer elastic i.e. after being exposed to stresses beyond the yield stress, the metal no longer returns to its prior shape when the stress is removed.

For strong steels, stretching beyond their yield stress will cause them to ‘neck’ and thin and rapidly fail. But stainless steel is not designed for strength – it is designed not to rust! And typically its yield curve is different.

Typically stainless steels have a smooth stress-strain curve, so being beyond the nominal yield stress does not imply imminent failure. It is because of this characteristic that the creep is not a sign of imminent failure. The ultimate tensile strength of stainless steel is much higher.

Results#6: Strain

Knowing the stress to which the core of the wire is subjected, one can calculate the expected strain i.e. the fractional extension of the wire.

Click image for a larger version. The estimated strain of the 6 strings of a steel-strung guitar. Also shown on the right-hand axis is the actual extension in millimetres. Notice that the first and third strings have considerably higher strain than the other strings. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The calculated fractional string extension (strain) ranges from about 0.4% to 0.8% and the actual string extension from 2.5 mm to 5 mm.
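The strain follows from the stress via Young’s modulus. A short sketch, assuming a textbook value of 200 GPa for stainless steel (an assumption – the actual wire may differ):

```python
E_STEEL = 200e9      # Young's modulus of stainless steel (Pa), assumed
LENGTH = 0.640       # vibrating length of the string (m)

def strain_and_extension(stress_pa):
    """Fractional strain, and absolute extension (m), for a given core stress."""
    strain = stress_pa / E_STEEL
    return strain, strain * LENGTH

# Illustrative first-string core stress of ~1.6 GPa
strain, dL = strain_and_extension(1.6e9)
print(f"strain {strain:.1%}, extension {dL * 1000:.1f} mm")
```

This gives a strain of 0.8% and an extension of about 5 mm – the top of the calculated range above.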

This is difficult to measure accurately, but I did make an attempt by attaching a small piece of tape to the end of the old string as I removed it, and to the end of the new string as I tightened it.

Click image for a larger version. Method for estimating the strain in a guitar string. A piece of tape is tied to the top of the old string while it is still tight. On loosening, the tape moves with the part of the string to which it is attached.

For the first string my estimate was between 6 mm and 7 mm of extension, so it seems that the calculations are a bit low, but in the right ball-park.

Summary

Please forgive me: I have rambled. But I think I have eventually got to a destination of sorts.

In summary, the design of guitar strings is clever. It balances:

  • the tension in each string.
  • the stress in the steel core of each string.
  • the mass per unit length of each string.
  • the flexibility of each string.

Starting with the thinnest string: these are typically available in a range of diameters from 0.23 mm to 0.4 mm. The thinnest strings are easy to bend, but reducing the diameter increases the stress in the wire and makes it more likely to break. They also tend to be less loud.

The second string is usually unwound, like the first string, but the stress in it is lower.

The thicker strings are usually wire-wound to increase their flexibility for a given tension and mass per unit length. If these strings were solid they would be extremely inflexible, impossible to push against the frets, and would vibrate only with a very low amplitude.

How does the flexibility arise? When these wound strings are stretched, small gaps open up between the windings and allow the windings to slide past each other when the string is bent.

Click image for a larger version. Illustration of how stretching a wound string slightly separates the windings. This allows the wound components to slide past each other when the string is bent.

Additionally modern strings are often coated with a very thin layer of polymer which prevents rusting and probably reduces friction between the sliding coils.

Remaining Questions 

I still have many questions about guitar strings.

The first set of questions concerns the behaviour of nylon strings used on classical guitars. Nylon has a very different stress-strain curve from stainless steel, and the contrast in density between the core materials and the windings is much larger.

The second set of questions concerns the ageing of guitar strings. Old strings sound dull, and changing strings makes a big difference to the sound: the guitar sounds brighter and louder. But why? I have a couple of ideas, but none of them feel convincing at the moment.

And now I must get back to playing the guitar, which is quite a different matter. And sadly understanding the physics does not help at all!

P.S.

The strings I used were Elixir 12-53 with a Phosphor Bronze winding and the guitar is a Taylor model 114ce.

A Watched Pan…

January 18, 2022

Click on Image for larger version. A vision of domestic bliss in the de Podesta household. Apparatus in use for measuring the rate of heating of 1 litre of water on an induction hob.

In the beginning…

Friends, my very first blog article (written back on 1st January 2008 and re-posted in 2012) was about whether it is better to boil water with an electric kettle or a gas kettle on a gas hob.

Back then, my focus was simply on energy efficiency rather than carbon dioxide emissions. I had wanted to know how much of the primary energy of methane ended up heating the water. I did this by simply timing how long it took to boil 1 litre of water by various methods.

Prior to doing the experiments I had imagined that heating water with gas was more efficient because the fuel was used directly to heat the water. In contrast, even the best gas-fired power stations are only ~50% efficient.

What I learned back then was that gas cookers are terrible at heating kettles & pans! They were so much worse than I had imagined that I later spent many hours with different size pans, burners, and amounts of water just so I could believe my results!

Typically gas burners only transferred between 36% and 56% of the energy of combustion to the water – the exact fraction depending on the size and power of the burner. Heating things faster with a bigger burner was less efficient. Using a small flame and a very large pan, I could achieve an efficiency of 83%, but of course the water heated only very slowly.

This inefficiency was roughly equivalent to or worse than the inefficiency of the power station generating electricity, and so I concluded that electric kettles and gas kettles were similarly inefficient in their use of the primary energy of the gas – but that electric kettles made it easier to use just the right amount of water, and so avoid heating water that wasn’t used.

14 years later…

After a recent conversation on Twitter (@Protons4B) I thought I would look at this issue again.

Why? Well two things have changed in the last 14 years.

  • Firstly, electricity generation now incorporates dramatically more renewable sources than in 2008 and so using electricity involves ever decreasing amounts of gas-fired generation.
  • Secondly, I am now concerned about emissions of carbon dioxide resulting from lifestyle choices.

Also being a retired person, I now have a bit more time on my hands and access to fancy instruments such as thermometers.

The way I did the experiments is described at the end of the article, but here are the results.

Results#1: Efficiency

The chart below shows estimates for the efficiency with which the electrical energy or the calorific content of the gas is turned into heat in one litre of water. My guess is these figures all have an uncertainty of around ±5%.

  • The kettle was close to 100% efficient.
  • The induction hob was approximately 86% efficient.
  • The microwave oven was approximately 65% efficient.

In contrast, heating the water in a pan (with a lid) on a gas hob was only around 38% or 39% efficient.

Click on Image for larger version. Chart showing the efficiency of 5 methods of heating 1 litre of water. 100% efficiency means that all the energy input used resulted in a temperature rise. The two gas results were for heating pans with two different diameters (19 cm and 17 cm).

It was particularly striking that the water heated on the gas burner (~1833 W) took 80% longer to boil than on the Induction hob (~1440 W) despite the heating power being ~20% less on the induction hob.

Click on Image for larger version. Chart showing the rate of heating for each of the 5 methods of heating 1 litre of water. Notice that the water heated on the gas burner (~1833 W) took 80% longer to boil than on the Induction hob (~1440 W) despite the heating power being ~20% less on the induction hob. Notice that up until 40 °C, the microwave oven heats water as fast as the gas hob, despite using half the power!

Results#2: Carbon Dioxide Emissions 

Based on the average carbon intensity of electricity in 2021 (235 g CO2/kWh), boiling a litre of water by any electrical means results in substantially less CO2 emissions than using a pan (with a lid) on a gas burner.

I performed these experiments on 17th January 2022 between 4 p.m. and 7 p.m., when the carbon intensity of electricity was well above average: ~330 g CO2/kWh. In this case, boiling a litre of water in a kettle or on the induction hob still gave the lowest emissions, but heating water in a microwave oven resulted in similar emissions to those arising from using a pan (with a lid) on a gas burner.

Click on Image for larger version. Charts showing the amount of carbon dioxide released by heating 1 litre of water from 10 °C to 100 °C using either electrical methods or gas. The gas heating is assumed to have a carbon intensity of 200 gCO2/kWh. The left-hand chart is based on the carbon intensity of 330 gCO2/kWh of electricity which was appropriate at the time the experiments were performed. The right-hand chart is based on the carbon intensity of 235 gCO2/kWh of electricity which was the average value for 2021. Electrical methods of heating result in lower CO2 emissions in almost all circumstances.
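The emissions arithmetic is simple enough to sketch in a few lines of Python, using the illustrative efficiencies and carbon intensities quoted above:

```python
# Energy to heat 1 kg of water from 10 °C to 100 °C, in kWh
IDEAL_KWH = 1.0 * 4187 * 90 / 3.6e6     # ≈ 0.105 kWh

def co2_grams(efficiency, intensity_g_per_kwh):
    """CO2 (g) to boil 1 litre: energy drawn = ideal energy / efficiency."""
    return IDEAL_KWH / efficiency * intensity_g_per_kwh

# Kettle ~100% efficient on 2021-average electricity (235 gCO2/kWh);
# pan on gas ~38% efficient at 200 gCO2/kWh.
print(f"kettle {co2_grams(1.00, 235):.0f} g, gas pan {co2_grams(0.38, 200):.0f} g")
```

On these figures the kettle releases about 25 g of CO2 per litre boiled, and the gas pan about 55 g.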

Results#3: Cost 

Currently I am paying 3.83 p/kWh for gas and 16.26 p/kWh for electricity i.e. electricity is around four times more expensive than gas.

These prices are likely to rise substantially in the coming months, but it is not clear whether this ratio will change much.

So sadly, despite gas being the slowest way to heat water and the way which releases the most climate damaging gases, it is still the cheapest way to heat water. It’s about 40% cheaper than using an electric kettle.

Conclusion 

For the sake of the climate, use an electric kettle if you can.

=========================

That was the end of the article and there is no need to read any further unless you want to know how I made these measurements.

Method 

Estimating the power delivered to the water + vessel

  • For electrical measurements I paused the heating typically every 30 seconds, and read the plug-in electrical power meter. This registered electrical energy consumed in kWh to 3 decimal places.
    • I fitted a straight line to the energy versus time graph to estimate power.
  • For gas measurements I read the gas meter before and after each experiment. This reads in m^3 to 3 decimal places and I converted this volume reading to kWh by multiplying by 11.19 kWh/m^3.
    • The gas used amounted to only about 0.025 m^3, so the uncertainty from digitisation alone is at least 4%.
    • I divided by the time – typically 550 seconds – to estimate the power.
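As a sketch of the gas arithmetic, using the ~0.025 m^3 and ~550 s figures above:

```python
def gas_power_w(volume_m3, seconds, kwh_per_m3=11.19):
    """Average gas power (W) from before/after meter readings."""
    energy_j = volume_m3 * kwh_per_m3 * 3.6e6   # kWh -> joules
    return energy_j / seconds

print(f"{gas_power_w(0.025, 550):.0f} W")   # ≈ 1831 W
```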

Mass of water

  • I placed the heating vessel (kettle, pan, jug) on the balance and tared (zeroed) the reading.
  • I then added water until the vessel read within 1 g of 1000 g. Uncertainty is probably around 1% or 10 g.

Heating rate with 100% energy conversion

  • Based on the power consumed, I estimated the ideal heating rate that would occur if 100% of the supplied power went into raising the temperature of the water.

  • I assumed the average specific heat capacity of water over the range from 10 °C to 100 °C was 4187 J/(kg °C).
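Put together, the ideal heating rate is just the power divided by the heat capacity. A one-function sketch:

```python
C_WATER = 4187   # J/(kg °C), average specific heat capacity 10-100 °C

def ideal_heating_rate(power_w, mass_kg=1.0):
    """Heating rate (°C/s) if 100% of the power went into the water."""
    return power_w / (mass_kg * C_WATER)

# Illustrative: the ~1440 W induction hob heating 1 kg of water
print(f"{ideal_heating_rate(1440):.3f} °C/s")   # ≈ 0.344 °C/s
```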

Measuring the temperature.

  • For electrical measurements I paused the heating typically every 30 seconds, stirred the liquid with a coffee-stirrer for two or three seconds, and then took the temperature using a thermocouple probe.
  • For gas measurements it wasn’t possible to pause the heating because of the way I was measuring the power. So about 10 seconds before each reading was due I slipped the coffee stirrer under the lid to mix the water.

Estimating the rate of temperature rise.

  • For all measurements I fitted a straight line to the temperature versus time data, using only data points below approximately 80 °C to avoid the effects of increased temperature losses near to the boiling point.

Mass of the ‘addenda’.

  • The applied power heated not only the water but also its container.
  • The heat capacity of the 19 cm stainless steel pan (572 g) was roughly 6% of the heat capacity of the water.
  • I chose not to take account of this heat capacity because there was no way to heat the water without a container. So the container is a somewhat confounding factor, but treating it this way allows a more meaningful comparison of the results.

Efficiency of boiling

  • I estimated efficiency by comparing the actual warming rate with the ideal warming rate.
  • I then calculated the energy required to heat 1 kg of water from 10 °C to 100 °C, and divided this by the efficiency to estimate the energy each method would actually consume.
  • In this way the result is relevant even if all the measurements did not start and stop at the same temperatures.
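The method above can be condensed into a short Python sketch. The 0.296 °C/s slope is an illustrative fitted value, not one of my recorded numbers:

```python
C_WATER = 4187   # J/(kg °C)

def efficiency(actual_rate_c_per_s, power_w, mass_kg=1.0):
    """Ratio of the fitted warming rate to the ideal warming rate."""
    ideal_rate = power_w / (mass_kg * C_WATER)
    return actual_rate_c_per_s / ideal_rate

def energy_to_boil_kwh(eff, mass_kg=1.0, delta_t=90):
    """Energy (kWh) actually drawn to heat water by delta_t at this efficiency."""
    return mass_kg * C_WATER * delta_t / eff / 3.6e6

eff = efficiency(0.296, 1440)   # illustrative slope on the ~1440 W hob
print(f"efficiency {eff:.0%}, energy {energy_to_boil_kwh(eff):.3f} kWh")
```

With these numbers the hob comes out at about 86% efficient, drawing roughly 0.12 kWh per litre boiled.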

Oddities

  • I heated the water in the microwave in a plastic jug which did not have a tight-fitting lid. I am not sure if this had an effect.
  • I did notice that the entire microwave oven was warm to hot at the end of the heating sequence.

Our Old Car is Dead. Long Live The New Car!

July 28, 2021

Click for larger image. Our old and new Zafira cars

After 20 years and 96,000 miles, our 2001 Vauxhall Zafira is close to death.

We bought it for £7000 when it was three years old in 2004. Back then it was all shiny and new, but over the last 17 years it has developed a very long list of faults.

Alexei Sayle once said of his car: “When one door closes, another one opens.” This was one of the faults our car did not have. But it did have a feature such that: “When one door closes, all the electric windows operate simultaneously.”

Over the last few weeks the engine has begun making horrific noises, the engine warning light is on permanently, and there is an acrid stench of burning oil in the cabin.

After much deliberation, we have replaced it with a closely similar car, a 2010 Zafira with only 52,000 miles on its ‘clock’. The new car lacks our old car’s charmingly idiosyncratic list of faults, but what can you expect for £3,200?

In this post I would like to explain the thinking behind our choice of car.

Do we need a car?

Strictly speaking, no. We could operate with a combination of bikes and taxis and hire cars. But my wife and I do find having a car extremely convenient.

Having a car available simplifies a large number of mundane tasks and gives us the sense of – no irony intended – freedom.

Further, although I am heavily invested in reducing my carbon dioxide emissions, I do not want to live the life of a ‘martyr’. I am keen to show that a life with low carbon dioxide emissions can be very ‘normal’.

So why not an electric car? #1: Cost

Given the effort and expense I have gone to in reducing carbon dioxide emissions from the house, I confess that I did want to get an electric car.

I have come to viscerally hate the idea of burning a few kilograms of hydrocarbon fuel in order to move myself around. It feels dirty.

But sadly buying a new electric car didn’t really make financial sense.

There are lots of excellent electric family cars available in the UK, but they all cost in the region of £30,000.

There are not many second-hand models available but amongst those that were available, there appeared to be very few for less than £15,000.

Click for larger version. Annual Mileage of our family cars since 1995 taken from their MOT certificates. The red dotted line is the Zafira’s average over its lifetime.

Typically my wife and I drive between 3,000 and 5,000 miles per year, and we found ourselves unable to enthuse about the high cost of these cars.

And personally, I feel like I have spent a fortune on the house. Indeed I have spent a fortune! And I now need to just stop spending money for a while. But Michael: What about the emissions?

So why not an electric car? #2: Carbon Dioxide

Sadly, buying an electric car didn’t quite make sense in terms of carbon emissions either.

Electric cars have very low emissions of carbon dioxide per kilometre. But they have – like conventional cars – quite large amounts of so-called ‘embodied’ carbon dioxide arising from their manufacture.

As a consequence, at low annual mileages, it takes several years for the carbon dioxide emissions of an electric car to beat the carbon dioxide emissions from an already existing internal combustion engine car.

The graph below compares the anticipated carbon dioxide emissions from our old car, our new car, and a hypothetical EV over the next 10 years. The assumptions I have made are listed at the end of the article.

Click for larger version. Projected carbon dioxide emissions from driving 5,000 miles per year in: Our current car (2001 Zafira); Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

For an annual mileage of 5000 miles, the breakeven point for carbon dioxide emissions is 6 or 7 years away. If we reduced our mileage to 3000 miles per year, then the breakeven point would be even further away.

Click for larger version. Projected carbon dioxide emissions from driving 3,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

However, we are a low mileage household. If we drove a more typical 10,000 miles per year then the breakeven point would be just a couple of years away. Over 10 years, the Zafira would emit roughly 12 tonnes more carbon dioxide than the EV.

If we took account of embodied carbon dioxide in a combustion engine car, i.e. if we were considering buying a new car, the case for an EV would be very compelling.

Click for larger version. Projected carbon dioxide emissions from driving 10,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

So…

By replacing our old car with a closely similar model we have minimised the cognitive stress of buying a new car. Hopefully it will prove to be reliable.

And however many miles we drive in the coming years, our new car will reduce our carbon dioxide emissions compared to what they would have been in the old car by about 17%. And no new cars will have been built to achieve that saving.

Assuming that our new car will last us for (say) 5 years, I am hopeful that by then the cost of electric cars will have fallen to the point where an electric car – new or second-hand – might make sense to us.

Additionally, if the electricity used to both manufacture and charge electric cars increasingly comes from renewable sources, then the reduction in carbon dioxide emissions associated with driving electric cars will (year-on-year) become ever more compelling.

However, despite being able to justify this decision to myself, I must confess that I am sad not to be able to join the electric revolution just yet.

Assumptions

For the Zafiras:

  • I used the standard CO2 emissions per kilometre (190 and 157 gCO2/km respectively) from the standard government database.

For the hypothetical EV

  • I took a typical good-efficiency figure of 16 kWh per 100 km from this article.
  • I assumed a charging inefficiency of 10%, and a grid carbon intensity of 200 gCO2/kWhe reducing to 100 gCO2/kWhe in 10 years time.
  • I assumed that the battery size was 50 kWh and that embodied carbon emissions were 65 kg per kWh (link) of battery storage yielding 3.3 tonnes of embodied carbon dioxide.
  • I assumed the embodied carbon dioxide in the chassis and other components was 4.6 tonnes.
  • For comparison, the roughly 8 tonnes of embodied carbon dioxide in an EV is only just less than the combined embodied carbon dioxide in all the other emission reduction technology I have bought recently:
    • Triple Glazing, External Wall Insulation, Solar Panels, Powerwall Battery, Heat Pump, Air Conditioning

I think all these numbers are quite uncertain, but they seem plausible and broadly in line with other estimates one can find on the web.
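Under these assumptions the breakeven arithmetic is straightforward. A Python sketch (note this assumes a static grid intensity of 200 gCO2/kWh, so it is slightly more pessimistic than the falling-intensity curves in the graphs above):

```python
def breakeven_years(ice_g_per_km, ev_g_per_km, ev_embodied_kg, miles_per_year):
    """Years for an EV's embodied CO2 to be paid back by its lower
    per-kilometre emissions, versus keeping an existing ICE car."""
    km_per_year = miles_per_year * 1.609
    saving_kg_per_year = (ice_g_per_km - ev_g_per_km) * km_per_year / 1000
    return ev_embodied_kg / saving_kg_per_year

# EV: 16 kWh/100 km, 10% charging loss, 200 gCO2/kWh grid
ev_g_per_km = 0.16 * 1.10 * 200          # ≈ 35 gCO2/km
print(f"{breakeven_years(157, ev_g_per_km, 8000, 5000):.1f} years")
```

At 5,000 miles per year this static-grid estimate gives roughly 8 years, consistent with the 6 or 7 years read off the graphs once the falling grid intensity is included.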

 

Rocket Science

January 14, 2021

One of my lockdown pleasures has been watching SpaceX launches.

I find the fact that they are broadcast live inspiring. And the fact they will (and do) stop launches even at T-1 second shows that they do not operate on a ‘let’s hope it works’ basis. It speaks to me of confidence built on the application of measurement science and real engineering prowess.

Aside from the thrill of the launch and the beautiful views, one of the brilliant features of these launches is that the screen view gives lots of details about the rocket: specifically it gives time, altitude and speed.

When coupled with a little (public) knowledge about the rocket one can get to really understand the launch. One can ask and answer questions such as:

  • What is the acceleration during launch?
  • What is the rate of fuel use?
  • What is Max Q?

Let me explain.

Rocket Science#1: Looking at the data

To do my study I watched the video above starting at launch, about 19 minutes 56 seconds into the video. I then repeatedly paused it – at first every second or so – and wrote down the time, altitude (km) and speed (km/h) in my notebook. Later I wrote down data for every kilometre or so in altitude, then later every 10 seconds or so.

In all I captured around 112 readings, and then entered them into a spreadsheet (Link). This made it easy to convert the speeds to metres per second.

Then I plotted graphs of the data to see how they looked: overall I was quite pleased.

Click for a larger image. Speed (m/s) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The velocity graph clearly showed the stage separation. In fact looking in detail, one can see the Main Engine Cut Off (MECO), after which the rocket slows down for stage separation, and then the Second Engine Start (SES) after which the rocket’s second stage accelerates again.

Click for a larger image. Detail from graph above showing the speed (m/s) of Falcon 9 versus time (s) after launch. After MECO the rocket is flying upwards without power and so slows down. After stage separation, the second stage then accelerates again.

It is also interesting that acceleration – the slope of the speed-versus-time graph – increases up to stage separation, then falls and then rises again.

The first stage acceleration increases because the thrust of the rocket is almost constant – but its mass is decreasing at an astonishing 2.5 tonnes per second as it burns its fuel!

After stage separation, the second stage mass is much lower, but there is only one rocket engine!

Then I plotted a graph of altitude versus time.

Click for a larger image. Altitude (km) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The interesting thing about this graph is that much of the second-stage burn is devoted to increasing the speed of the spacecraft at almost constant altitude – roughly 164 km above the Earth. It’s not pushing the spacecraft higher and higher – but faster and faster.

About 30 minutes into the flight the second stage engine re-started, speeding up again and raising the altitude further to put the spacecraft on a trajectory towards a geostationary orbit at 35,786 km.

Rocket Science#2: Analysing the data for acceleration

To estimate the acceleration I subtracted the previous measurement of speed from each measurement of speed and then divided by the time between the two readings. This gives acceleration in units of metres per second per second, but I thought it would be more meaningful to plot the acceleration as a multiple of the strength of Earth’s gravitational field g (9.81 m/s/s).

The data as I calculated them had spikes in them because the small time differences between speed measurements (of the order of a second) were not very accurately recorded. So I smoothed the data by averaging 5 data points together.
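The differencing and smoothing steps can be sketched in a few lines of Python. This is a minimal sketch of the method described above, not my actual spreadsheet:

```python
# Estimate acceleration from (time, speed) readings by finite differences,
# then smooth with a centred 5-point moving average.

def acceleration(times, speeds, g=9.81):
    """Accelerations (in multiples of g) between successive readings."""
    return [(v2 - v1) / ((t2 - t1) * g)
            for (t1, v1), (t2, v2) in zip(zip(times, speeds),
                                          zip(times[1:], speeds[1:]))]

def smooth(values, window=5):
    """Centred moving average; trims the ends where the window doesn't fit."""
    half = window // 2
    return [sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)]
```

With real transcribed data the raw output is spiky, and the 5-point average tames it at the cost of slightly blurring sharp events like MECO.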

Click for a larger image. Smoothed Acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as blue dotted line is a ‘theoretical’ estimate for the acceleration assuming it used up fuel as a uniform rate.

The acceleration increased as the rocket’s mass reduced, reaching approximately 3.5g just before stage separation.

I then wondered if I could explain that behaviour.

  • To do that I looked up the launch mass of a Falcon 9 (data sources at the end of the article) and saw that it was 549 tonnes (549,000 kg).
  • I then looked up the mass of the second stage 150 tonnes (150,000 kg).
  • I then assumed that the mass of the first stage was almost entirely fuel and oxidiser and guessed that the mass would decrease uniformly from T = 0 to MECO at T = 156 seconds. This gave a burn rate of 2558 kg/s – over 2.5 tonnes per second!
  • I then looked up the launch thrust from the 9 rocket engines and found it was 7,600,000 newtons (7.6 MN).
  • I then calculated the ‘theoretical’ acceleration using Newton’s Second Law (a = F/m) at each time step – remembering to decrease the mass by 2,558 kilograms per second. And also remembering that the thrust has to exceed the rocket’s weight (m × g) before the rocket can leave the ground!
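Those bullet points can be put together in a short calculation using the figures quoted above (a sketch of the uniform-burn model, not SpaceX’s own numbers):

```python
# 'Theoretical' net acceleration during the first-stage burn, a = F/m - g,
# with the mass falling linearly from launch to MECO as assumed above.

G = 9.81            # m/s^2
THRUST = 7.6e6      # N, launch thrust of the nine engines (figure quoted above)
M_LAUNCH = 549_000  # kg, launch mass
M_MECO = 150_000    # kg, roughly the second stage plus payload at MECO
T_MECO = 156        # s, time of Main Engine Cut Off
BURN_RATE = (M_LAUNCH - M_MECO) / T_MECO  # ~2,558 kg/s

def accel_in_g(t):
    """Predicted net acceleration (in multiples of g) at time t after launch."""
    mass = M_LAUNCH - BURN_RATE * t
    return (THRUST / mass - G) / G

print(round(BURN_RATE))           # burn rate, kg/s
print(round(accel_in_g(0), 2))    # modest net acceleration at lift-off
print(round(accel_in_g(150), 2))  # much larger just before MECO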

The theoretical line (– – –) catches the trend of the data pretty well. But one interesting feature caught my eye – a period of constant acceleration around 50 seconds into the flight.

This is caused by the Falcon 9 throttling back its engines to reduce stresses on the rocket as it experiences maximum aerodynamic pressure – so-called Max Q – around 80 seconds into flight.

Click for a larger image. Detail from the previous graph showing smoothed Acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as blue dotted line is a ‘theoretical’ estimate for the acceleration assuming it used up fuel as a uniform rate. Highlighted in red are the regions around 50 seconds into flight when the engines are throttled back to reduce the speed as the craft experience maximum aerodynamic pressure (Max Q) about 80 seconds into flight.

Rocket Science#3: Maximum aerodynamic pressure

Rockets look like they do – rocket-shaped – because they have to get through Earth’s atmosphere rapidly, pushing the air in front of them as they go.

The amount of work needed to do that is generally proportional to three factors:

  • The cross-sectional area A of the rocket. Narrower rockets require less force to push through the air.
  • The speed of the rocket squared (v²). One factor of v arises from the fact that travelling faster requires one to move the same amount of air out of the way faster. The second factor arises because moving air more quickly out of the way is harder due to the viscosity of the air.
  • The air pressure P. The density of the air in the atmosphere falls roughly exponentially with height, reducing by approximately 63% every 8.5 km.

The work done by the rocket on the air results in so-called aerodynamic stress on the rocket. These stresses – forces – are expected to vary as the product of the three factors above: A P v². The cross-sectional area of the rocket A is constant, so in what follows I will just look at the variation of the product P v².

As the rocket rises, the pressure falls and the speed increases. So the product P v² will naturally have a maximum value at some point during the flight.

The importance of the maximum of the product P v² (known as Max Q) as a point in flight is that if the aerodynamic forces are not uniformly distributed, then the rocket trajectory can easily become unstable – and Max Q marks the point at which the danger of this is greatest.

The graph below shows the variation of pressure P with time during flight. The pressure is calculated using:

P = 1000 × exp(−h/h₀)

where the ‘1000’ is the approximate pressure at the ground (in mbar), h is the altitude at a particular time, and h₀ is called the scale height of the atmosphere and is typically 8.5 km.

Click for a larger image. The atmospheric pressure calculated from the altitude h versus time after launch (s) during the Turksat 5A launch.

I then calculated the product P v², and divided by 10 million to make it plot easily.
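The exponential pressure model and the P v² product can be sketched as follows. The flight profile here is made up purely to show that the product peaks mid-flight, since the transcribed spreadsheet is not reproduced in this article:

```python
import math

H0 = 8.5e3       # scale height of the atmosphere, m
P_GROUND = 1000  # approximate ground-level pressure, mbar

def pressure(h):
    """Atmospheric pressure (mbar) at altitude h (m), exponential model."""
    return P_GROUND * math.exp(-h / H0)

def dynamic_stress(h, v):
    """The P*v^2 product used above as a proxy for aerodynamic stress."""
    return pressure(h) * v ** 2

# Illustrative (t, altitude, speed) profile: pressure falls while speed
# rises, so P*v^2 inevitably peaks somewhere in between.
profile = [(t, 1.5 * t ** 2, 9.0 * t) for t in range(0, 160, 10)]
stresses = [(t, dynamic_stress(h, v)) for t, h, v in profile]
t_maxq = max(stresses, key=lambda tv: tv[1])[0]
print(t_maxq)  # time of the Max Q proxy for this made-up profile
```

Running the same calculation over the real transcribed altitude and speed data is what produces the graph below.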

Click for a larger image. The aerodynamic stresses calculated from the altitude and speed versus time after launch during the Turksat 5A launch.

This calculation predicts that Max Q occurs about 80 seconds into flight, long after the engines throttled down, and in good agreement with SpaceX’s more sophisticated calculation.

Summary 

I love watching the SpaceX launches, and having analysed one of them just a little bit, I feel like I understand better what is going on.

These calculations are well within the capability of advanced school students – and there are many more questions to be addressed.

  • What is the pressure at stage separation?
  • What is the altitude of Max Q?
  • The vertical velocity can be calculated by measuring the rate of change of altitude with time.
  • The horizontal velocity can be calculated from the speed and the vertical velocity.
  • How does the speed vary from one mission to another?
  • Why does the craft aim for a particular speed?

And then there’s the satellites themselves to study!

Good luck with your investigations!

Resources

And finally thanks to Jon for pointing me towards ‘Flight Club – One-Click Rocket Science‘. This site does what I have done but with a good deal more attention to detail! Highly Recommended.


Research into Nuclear Fusion is a waste of money

November 24, 2019

I used to be a Technological Utopian, and there has been no greater vision for a Technical Utopia than the prospect of limitless energy at low cost promised by Nuclear Fusion researchers.

But glowing descriptions of the Utopia which awaits us all, and statements by fusion Utopians such as:

Once harnessed, fusion has the potential to be nearly unlimited, safe and CO2-free energy source.

are deceptive. And I no longer believe this is just the self-interested optimism characteristic of all institutions.

It is a damaging deception, because money spent on nuclear fusion research could be spent on actual solutions to the problem of climate change. Solutions which exist right now and which could be implemented within a decade in the UK.

Reader: Michael? Are you OK? You seem to have come over a little over-rhetorical?

Me: Thanks. Just let me catch my breath and I’ll be fine. Ahhhhhh. Breathe…..

What’s the problem?

Well let’s just suppose that the current generation of experiments at JET and ITER are ‘successful’. If so, then having started building in 2013:

  • By 2025 the plant should be ready for initial plasma experiments.
  • Unbelievably, full deuterium–tritium fusion experiments will not start until 2035!
    • I could not believe this so I checked. Here’s the link.
    • I can’t find a source for it, but I have been told that the running lifetime of ITER with deuterium and tritium is just 4000 hours.
  • The cost of this experiment is hard to find written down – ITER has its own system of accounting! – but will probably be around 20 billion dollars.

And at this point, without having ever generated a single kilowatt of electricity, ITER will be decommissioned and its intensely radioactive core will be allowed to cool down until it can be buried.

The ‘fusion community’ would then ask for another 20 billion dollars or so to fund a DEMO power station which might be operational around 2050. At which point after a few years of DEMO operation, commercial designs would become available.

So the overall proposal is to spend about 40 billion dollars over the next 30 years to find out if a ‘commercial’ fusion power station is viable.

This plan is the embodiment of madness that could only be advocated by Technological Utopians who have lost track of the reason that fusion might once have been a good idea.

Let’s look at the problems in the most general terms.

1. Cost

Fusion will not be cheap. If we look at the current generation of nuclear fission stations, such as Hinkley C, then these will cost around £20 billion each.

Despite the fact that the technology for building nuclear fission reactors is now half a century old, earlier examples of the Hinkley C reactor design being built at Olkiluoto and Flamanville are many years late, massively over-budget and may in fact never be allowed to operate.

Assuming Hinkley C does eventually become operational, the cost of the electricity it produces will be barely affected by the fuel it uses. More than 90% of the cost of the electricity is paying back the debt used to finance the reactor. It will produce the most expensive electricity ever supplied in the UK.

Nuclear fusion reactors designed to produce a gigawatt of electricity would definitely be engineering behemoths in the same category of engineering challenge as Hinkley C, but with much greater complexity and many more unknown failure modes. 

The ITER Torus. The scale and complexity is hard to comprehend. Picture produced by Oak Ridge National Laboratory [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)]

Even in the most optimistic case – an optimism which we will see is not easy to justify – it is inconceivable that fusion technology could ever produce low cost electricity.

I don’t want to live in a world with
nuclear fusion reactors, because
I don’t want to live in a world
where electricity is that expensive.
Unknown author

2. Sustainable

One of the components of the fuel for a nuclear fusion reactor – deuterium – is readily available on Earth. It can be separated from sea water at modest cost.

The other component – tritium – is extraordinarily rare and expensive. It is radioactive with a half-life of about 12 years.

To become <irony>sustainable</irony>, a major task of a fusion reactor is to manufacture tritium.

The ‘plan’ is to do this by bombarding lithium-6 with neutrons causing a reaction yielding tritium and helium.

Ideally, every single neutron produced in the fusion reaction would be captured, but in fact many of them will be lost. To compensate, a ‘neutron multiplication’ process is conceived of, despite the intense radioactive waste this will produce.

3. Technical Practicality

I have written enough here and so I will just refer you to this article published on the web site of the Bulletin of Atomic Scientists.

This article considers:

  • The embedded carbon and costs
  • Optimistic statements of energy balance that fail to recognise the difference between:
    • The thermal energy of particles in the plasma
    • The thermal energy extracted – or extractable.
    • The electrical energy supplied for operation
  • Other aspects of the tritium problem I mentioned above.
  • Radiation and radioactive waste
  • The materials problems caused by – putatively – decades of neutron irradiation.
  • The cooling water required.

I could add my own concerns about neutron damage to the immense superconducting magnets that are just a metre or so away from the hottest place in the solar system.

In short, there are really serious problems that have no obvious solution.

4. Alternatives

If there were no alternative, then I would think it worthwhile to face down all these challenges and struggle on.

But there are really good alternatives based on that fusion reactor in the sky – the Sun.

We can extract energy directly from sunlight, and from the winds that the Sun drives around the Earth.

We need to capture only 0.02% of the energy in the sunlight reaching Earth to power our entire civilisation!
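The 0.02% figure is easy to sanity-check. A rough sketch, assuming a solar constant of about 1361 W/m², that roughly half of the intercepted sunlight reaches the surface, and a world primary power demand of about 18 TW – all round numbers of mine, not figures from this article:

```python
import math

SOLAR_CONSTANT = 1361   # W/m^2 at the top of the atmosphere (assumed)
SURFACE_FRACTION = 0.5  # rough fraction of sunlight reaching the ground (assumed)
EARTH_RADIUS = 6.371e6  # m
DEMAND = 18e12          # W, approximate world primary power use (assumed)

# Earth intercepts sunlight over its cross-sectional disc, pi * R^2.
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2
at_surface = intercepted * SURFACE_FRACTION
fraction_percent = 100 * DEMAND / at_surface
print(f"{fraction_percent:.3f} %")  # of order 0.02 %
```

The answer lands right around the 0.02% quoted: a tiny fraction of the sunlight already arriving for free.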

The complexity and cost of fusion reactors even make fission reactors look good!

And all the technology that we require to address what is acknowledged as a climate emergency exists here and now.

By 2050, when (optimistically?) the first generation of fusion reactors might be ready to be built – carbon-free electricity production could be a solved problem.

Nuclear fusion research is, at its best, a distraction from the problem at hand. At worst, it sucks money and energy away from genuinely renewable energy technologies which need it.

We should just stop it all right now.

Hazards of Flying

November 17, 2019

Radiation Dose

Radeye in Cabin

RadEye Geiger Counter on my lap in the plane.

It is well-known that by flying in commercial airliners, one exposes oneself to increased intensity of ionising radiation.

But it is one thing to know something in the abstract, and another to watch it in front of you.

Thus on a recent flight from Zurich I was fascinated to use a Radeye B20-ER survey meter to watch the intensity of radiation rise with altitude as I flew home.

Slide1

Graph showing the dose rate in microsieverts per hour as a function of time before and after take off. The dose rate at cruising altitude was around 25 times that on the ground.

Slide2

During the flight from Zurich, the accumulated radiation dose was almost equal to my entire daily dose in the UK.

The absolute doses are not very great (some typical doses). The dose on the flight from Zurich (about 2.2 microsieverts) was roughly equivalent to the dose from a dental X-ray, or one whole day’s dose in the UK.
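These figures hang together arithmetically. A minimal sketch, assuming a typical UK background rate of about 0.1 µSv/h and a rough time spent at altitude – both my round numbers, not readings from the meter:

```python
GROUND_RATE = 0.1       # microsieverts per hour, typical UK background (assumed)
CRUISE_MULTIPLIER = 25  # dose rate at altitude relative to ground, from the graph
CRUISE_HOURS = 0.8      # rough time spent near cruising altitude (assumed)

cruise_rate = GROUND_RATE * CRUISE_MULTIPLIER  # dose rate at altitude, uSv/h
flight_dose = cruise_rate * CRUISE_HOURS       # accumulated in-flight dose, uSv
daily_ground_dose = GROUND_RATE * 24           # a day's background dose, uSv
print(flight_dose, daily_ground_dose)
```

With these round numbers the flight dose comes out close to the 2.2 µSv measured, and close to one day’s dose on the ground, as stated above.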

But for people who fly regularly the effects mount up.

Given how skittish people are about exposing themselves to any hazard I am surprised that more is not made of this – it is certainly one more reason to travel by train!

CO2 Exposure

Although I knew that by flying I was exposing myself to higher levels of radiation – I was not aware of how high the levels of carbon dioxide can become in the cabin.

I have been using a portable detector for several months. I was sceptical that it really worked well, and needed to re-assure myself that it reads correctly. I am now more or less convinced and the insights it has given have been very helpful.

In fresh air the meter reads around 400 parts per million (ppm) – but in the house, levels can exceed this by a factor of two – especially if I have been cooking using gas.

One colleague plotted levels of CO2 in the office as a function of the number of people using the office. We were then able to make a simple airflow model based on standard breathing rates and the specified number of air changes per hour.
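A model along those lines can be sketched as a single well-mixed box, in which CO2 is generated by the occupants’ breathing and flushed out by the air changes. All the parameters here are illustrative, not the actual office figures:

```python
# Single-box CO2 model: at equilibrium, generation balances ventilation.

OUTDOOR_PPM = 400       # fresh-air concentration, ppm
ROOM_VOLUME = 150.0     # m^3 (illustrative)
ACH = 2.0               # air changes per hour (illustrative)
CO2_PER_PERSON = 0.03   # m^3 of CO2 exhaled per person per hour (typical resting rate)

def steady_state_ppm(people):
    """Equilibrium CO2 concentration for a given number of occupants."""
    generation_ppm_per_hour = people * CO2_PER_PERSON / ROOM_VOLUME * 1e6
    return OUTDOOR_PPM + generation_ppm_per_hour / ACH

for n in (1, 4, 8):
    print(n, round(steady_state_ppm(n)))
```

Even this crude model shows how quickly an under-ventilated room full of people climbs past 1000 ppm.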

Slide5

However I was surprised at just how high the levels became in the cabin of an airliner.

The picture below shows CO2 levels in the bridge leading to the plane in Zurich Airport. Levels around 1500 ppm are indicative of very poor air quality.

Slide3

Carbon dioxide concentration on the bridge leading to the plane – notice the rapid rise.

The picture below shows that things were even worse in the aeroplane cabin as we taxied on the tarmac.

Slide4

Carbon dioxide concentration measured in the cabin while we taxied on the ground in Zurich.

Once airborne, levels quickly fell to around 1000 ppm – still a high level – but much more comfortable.

I have often felt preternaturally sleepy on aircraft and now I think I know why – the spike in carbon dioxide concentrations at this level can easily induce drowsiness.

One more reason not to fly!


Getting there…

November 14, 2019

Life is a journey to a well-known destination. It’s the ‘getting there’ that is interesting.

The journey has been difficult these last few weeks. But I feel like I am ‘getting there’.

Work and non-work

At the start of 2019 I moved to a 3-day working week, and at first I managed to actually work around 3-days a week, and felt much better for it.

But as the year wore on, I have found it more difficult to limit my time at work. This has been particularly intense these last few weeks.

My lack of free time has been making me miserable. It has limited my ability to focus on things I want to do for personal, non-work reasons.

Any attention I pay to a personal project – such as writing this blog – feels like a luxurious indulgence. In contrast, work activities acquire a sense of all-pervading numinous importance.

But despite this difficulty – I feel like I am better off than last year – and making progress towards the mythical goal of work-life balance on the way to a meaningful retirement.

I am getting there!

Travelling 

Mainly as a result of working too much, I am still travelling too much by air. But on some recent trips to Europe I was able to travel in part by train, and it was surprisingly easy and enjoyable.

I am getting there! By train.

My House

The last of the triple-glazing has been installed in the house. Nine windows and a door (around £7200 since you asked) have been replaced.

Many people have knowingly asked: What’s the payback time?

  • Using financial analysis the answer is many years.
  • Using moral and emotional analysis, the payback has been instantaneous.

It would be shameful to have a house which spilt raw sewage onto the street. I feel the same way about the 2.5 tonnes of carbon dioxide my house currently emits every winter.

This triple-glazing represents the first steps in bringing my home up to 21st Century Standards and it is such a relief to have begun this journey.

I will monitor the performance over the winter to see if it coincides with my expectations, and then proceed to take the next steps in the spring of 2020.

I am getting there! And emitting less carbon dioxide in the process.

Talking… and listening

Physics in Action 3

Yesterday I spoke about the SI to more than 800 A level students at the Emmanuel Centre in London. I found the occasion deeply moving.

  • Firstly, the positivity and curiosity of this group of young people was palpable.
  • Secondly, their interest in the basics of metrology was heartwarming.
  • Thirdly, I heard Andrea Sella talk about ‘ice’.

Andrea’s talk linked the extraordinary physical properties of water ice to the properties of ice on Earth: the dwindling glaciers and the retreat of sea-ice.

He made the connection between our surprise that water ice was in any way unusual with the journalism of climate change denial perpetrated by ‘newspapers’ such as the Daily Mail.

This link between the academic and the political was shocking to hear in this educational context – but essential as we all begin our journey to a new world in which we acknowledge what we have done to Earth’s climate.

We have a long way to go. But hearing Andrea clearly and truthfully denounce the lies to which we are being exposed was personally inspiring.

We really really are getting there. 

What it takes to heat my house: 280 watts per degree Celsius above ambient

August 16, 2019

Slide1

The climate emergency calls on us to “Think globally and act locally“. So moving on from distressing news about the Climate, I have been looking to reduce energy losses – and hence carbon dioxide emissions – from my home.

One of the problems with doing this is that one is often working ‘blind’ – one makes choices – often expensive choices – but afterwards it can be hard to know precisely what difference that choice has made.

So the first step is to find out the thermal performance of the house as it is now. This is as tedious as it sounds – but the result is really insightful and will help me make rational decisions about how to improve the house.

Using the result from the end of the article I found out that to keep my house comfortable in the winter, for each degree Celsius that the average temperature falls below 20 °C, I currently need to use around 280 W of heating. So when the temperature is 5 °C outside, I need to use 280 × (20 – 5) = 4200 watts of heating.
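That rule of thumb is easy to encode. A minimal sketch using the 280 W/°C figure derived later in this article:

```python
HEAT_LOSS_COEFF = 280  # W per degree C below 20 degC, the figure from this article
INDOOR = 20.0          # target indoor temperature, degrees C

def heating_power(outdoor_temp):
    """Steady-state heating power (W) needed to hold the house at 20 degC."""
    return max(0.0, HEAT_LOSS_COEFF * (INDOOR - outdoor_temp))

print(heating_power(5))   # the worked example above: 280 x 15 = 4200 W
print(heating_power(25))  # no heating needed when it's warmer outside
```

The same one-liner, with your own house’s coefficient, lets you predict a winter’s gas demand from a weather forecast.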

Is this a lot? Well that depends on the size of my house. By measuring the wall area and window area of the house, this figure allows me to work out the thermal performance of the walls and windows. And then I can estimate how much I could reasonably hope to improve the performance by using extra insulation or replacing windows. These details will be the topic of my next article.

In the rest of this article I describe how I made the estimate for my home which uses gas for heating, hot water, and cooking. My hope is it will help you make similar estimates for your own home.

Overall Thermal Performance

The first step to assessing the thermal performance of the house was to read the gas meter – weekly: I did say it was tedious. I began doing that last November.

One needs to do this in the winter and the summer. Gas consumption in winter is dominated by heating, and the summer reading reveals the background rate of consumption for the other uses.

My meter reads gas consumption in units of ‘hundreds of cubic feet’. This archaic unit can be converted to energy units – kilowatt-hours – using the formula below.

Energy used in kilowatt-hours = Gas Consumption in 100’s of cubic feet × 31.4

So if you consume 3 gas units per day i.e. 300 cubic feet of gas, then that corresponds to 3 × 31.4 = 94.2 kilowatt hours of energy per day, and an average power of 94.2 / 24 = 3 925 watts.
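The conversion can be written as a pair of one-line functions using the factor quoted above:

```python
KWH_PER_GAS_UNIT = 31.4  # kWh per 100 cubic feet of gas, the factor quoted above

def daily_energy_kwh(units_per_day):
    """Convert daily gas consumption (meter units) to kilowatt-hours."""
    return units_per_day * KWH_PER_GAS_UNIT

def average_power_watts(units_per_day):
    """The same consumption expressed as a continuous average power in watts."""
    return daily_energy_kwh(units_per_day) / 24 * 1000

print(daily_energy_kwh(3))            # ~94.2 kWh per day
print(round(average_power_watts(3)))  # ~3925 W average
```

Expressing gas use as an average power in watts makes it directly comparable with the heat-loss figure per degree Celsius.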

The second step is to measure the average external temperature each week. This sounds hard but is surprisingly easy thanks to Weather Underground.

Look up their ‘Wundermap‘ for your location – you can search by UK postcode. They have data from thousands of weather stations available.

To get historical data I clicked on a nearby weather station (it was actually the one in my garden [ITEDDING4] but any of the neighbouring ones would have done just as well). I then selected ‘weekly’ mode and noted down the average weekly temperature for each week in the period from November 2018 to August 2019.

Slide3

Weather history for my weather station. Any nearby station would have done just as well. Select ‘Weekly Mode’ and then just look at the ‘Average temperature’. You can navigate to any week using the ‘Next’ and ‘Previous’ buttons, or by selecting a date from the drop down menus

Once I had the average weekly temperature, I then worked out the difference between the internal temperature in the house – around 20 °C and the external temperature.

I expected the gas consumption to be correlated with the difference from 20 °C, but I was surprised by how close the correlation was.

Slide2

Averaging the winter data in the above graph I estimate that it takes approximately 280 watts to keep my house at 20 °C for each 1 °C that the temperature falls below 20 °C.
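The averaging step amounts to fitting a straight line through the origin: weekly average heating power against the temperature deficit below 20 °C. A sketch with made-up weekly numbers (not my actual meter readings), chosen to give a slope near the measured one:

```python
# Through-origin least squares: power (W) = slope * (20 - T_outside).
# The (temperature, power) pairs below are illustrative, not real readings.

weeks = [(4.0, 4450), (7.5, 3500), (10.0, 2800), (13.0, 2000), (16.0, 1100)]

deficits = [20.0 - t for t, _ in weeks]   # degrees below 20 degC
powers = [p for _, p in weeks]            # average heating power, W

# slope = sum(x*y) / sum(x*x) for a line forced through the origin
slope = sum(x * y for x, y in zip(deficits, powers)) / sum(x * x for x in deficits)
print(round(slope))  # W per degree C below 20 degC
```

The through-origin form encodes the physical assumption that a house at 20 °C in 20 °C weather needs no heating at all.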

Discussion

I have ignored many complications in arriving at this estimate.

  • I ignored the variability in the energy content of gas
  • I ignored the fact that less than 100% of the energy of the gas is used in heating

But nonetheless, I think it fairly represents the thermal performance of my house with an uncertainty of around 10%.

In the next article I will show how I used this figure to estimate the thermal performance – the so-called ‘U-values’ – of the walls and windows.

Why this matters

As I end, please let me explain why this arcane and tedious stuff matters.

Assuming that the emissions of CO2 were around 0.2 kg of CO2 per kWh of thermal energy, my meter readings enable me to calculate the carbon dioxide emissions from heating my house last winter.

The graph below shows the cumulative CO2 emissions…

Slide4

Through the winter I emitted 17 kg of CO2 every day – amounting to around 2.5 tonnes of CO2 emissions in total.
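A quick check of that arithmetic, assuming the 0.2 kg CO2 per kWh emission factor used above and a heating-season length which is my own round guess:

```python
EMISSION_FACTOR = 0.2  # kg CO2 per kWh of gas, the factor assumed above

winter_days = 150      # approximate length of the heating season (my assumption)
daily_gas_kwh = 85     # daily winter gas use implied by 17 kg/day at 0.2 kg/kWh

daily_co2 = daily_gas_kwh * EMISSION_FACTOR        # kg of CO2 per day
total_tonnes = daily_co2 * winter_days / 1000      # whole-winter total
print(daily_co2, total_tonnes)  # ~17 kg/day, ~2.5 tonnes
```

Seventeen kilograms a day for a whole winter really does add up to roughly two and a half tonnes.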

2.5 tonnes????!!!!

This is around a factor of 10 more than the waste we dispose of or recycle. I am barely conscious that 2.5 tonnes of ANYTHING have passed through my house!

I am stunned and appalled by this figure.

Without stealing the thunder from the next article, I think I can see a way to reduce this by a factor of three at least – and maybe even six.

Gravity Wave Detector#1

July 6, 2017
Me and Albert Einstein

Not Charlie Chaplin: That’s me and Albert Einstein. A special moment for me. Not so much for him.

I belong to an exclusive club! I have visited two gravity wave detectors in my life.

Neither of the detectors has ever detected gravity waves, but nonetheless, both of them filled me with admiration for their inventors.

Bristol, 1987 

In 1987, the buzz of the discovery of high-temperature superconductors was still intense.

I was in my first post-doctoral appointment at the University of Bristol and I spent many late late nights ‘cooking’ up compounds and carrying out experiments.

As I wandered around the H. H. Wills Physics department late at night I opened a door and discovered a secret corridor underneath the main corridor.

Stretching for perhaps 50 metres along the subterranean hideout was a high-tech arrangement of vacuum tubing, separated every 10 metres or so by a ‘castle’ of vacuum apparatus.

It lay dormant and dusty and silent in the stillness of the night.

The next day I asked about the apparatus at morning tea – a ritual amongst the low-temperature physicists.

It was Peter Aplin who smiled wryly and claimed ownership. Peter was a kindly antipodean physicist, a generalist – and an expert in electronics.

New Scientist article from 1975

New Scientist article from 1975

He explained that it was his new idea for a gravity wave detector.

In each of the ‘castles’ was a mass suspended in vacuum from a spring made of quartz.

He had calculated that by detecting ‘ringing’ in multiple masses, rather than in a single mass, he could make a detector whose sensitivity scaled as its length² rather than as its length.

He had devised the theory; built the apparatus; done the experiment; and written the paper announcing that gravity waves had not been detected with a new limit of sensitivity.

He then submitted the paper to Physical Review. It was at this point that a referee had reminded him that:

When a term in L² is taken from the left-hand side of the equation to the right-hand side, it changes sign. You will thus find that in your Equation 13, the term in L² will cancel.

And so his detector was not any more sensitive than anyone else’s.

And so…

If it had been me, I think I might have cried.

But as Peter recounted this tale, he did not cry. He smiled and put it down to experience.

Peter was – and perhaps still is – a brilliant physicist. And amongst the kindest and most helpful people I have ever met.

And I felt inspired by his screw up. Or rather I was inspired by his ability to openly acknowledge his mistake. Smile. And move on.

30 years later…

…I visited Geo 600. And I will describe this dramatically scaled-up experiment in my next article.

P.S. (Aplin)

Peter S Aplin wrote a review of gravitational wave experiments in 1972 and had a paper at a conference called “A novel gravitational wave antenna“. Sadly, I don’t have easy access to either of these sources.


Not everything is getting worse!

April 19, 2017

Carbon Intensity April 2017

Friends, I find it hard to believe, but I think I have found something happening in the world which is not bad. Who knew such things still happened?

The news comes from the fantastic web site MyGridGB which charts the development of electricity generation in the UK.

On the site I read that:

  • At lunchtime on Sunday 9th April 2017, 8 GW of solar power was generated.
  • On Friday all coal power stations in the UK were off.
  • On Saturday, strong winds and solar combined with low demand to briefly provide 73% of power.

All three of these facts fill me with hope. Just think:

  • 8 gigawatts of solar power. In the UK! IN APRIL!!!
  • And no coal generation at all!
  • And renewable energy providing 73% of our power!

Even a few years ago each of these facts would have been unthinkable!

And even more wonderfully: nobody noticed!

Of course, these were just transients, but they show we have the potential to generate electricity with a significantly lower carbon intensity.

Carbon Intensity is a measure of the amount of carbon dioxide emitted into the atmosphere for each unit (kWh) of electricity generated.

Wikipedia tells me that electricity generated from:

  • Coal has a carbon intensity of about 1.0 kg of CO2 per kWh
  • Gas has a carbon intensity of about 0.47 kg of CO2 per kWh
  • Biomass has a carbon intensity of about 0.23 kg of CO2 per kWh
  • Solar PV has a carbon intensity of about 0.05 kg of CO2 per kWh
  • Nuclear has a carbon intensity of about 0.02 kg of CO2 per kWh
  • Wind has a carbon intensity of about 0.01 kg of CO2 per kWh

The graph at the head of the page shows that in April 2017 the generating mix in the UK has a carbon intensity of about 0.25 kg of CO2 per kWh.
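The blended figure follows from weighting each source’s intensity by its share of generation. A sketch using the intensities listed above and an illustrative, made-up generating mix:

```python
# Weighted-average carbon intensity of a generating mix, kg CO2 per kWh.
# Intensities are the figures quoted above; the mix shares are illustrative.

INTENSITY = {"coal": 1.0, "gas": 0.47, "biomass": 0.23,
             "solar": 0.05, "nuclear": 0.02, "wind": 0.01}

def grid_intensity(mix):
    """mix: mapping of source -> share of generation (shares should sum to 1)."""
    return sum(share * INTENSITY[src] for src, share in mix.items())

example_mix = {"gas": 0.45, "coal": 0.10, "nuclear": 0.20,
               "wind": 0.15, "solar": 0.05, "biomass": 0.05}
print(round(grid_intensity(example_mix), 2))  # kg CO2 per kWh for this mix
```

Playing with the shares shows why squeezing out the last of the coal, as happened that April weekend, moves the average so much.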

MyGridGB’s mastermind is Andrew Crossland. On the site he has published a manifesto outlining a plan which would actually reduce our carbon intensity to less than 0.1 kg of CO2 per kWh.

What I like about the manifesto is that it is eminently doable.

And who knows? Perhaps we might actually do it?

Ahhhh. Thank you Andrew.

Even thinking that a good thing might still be possible makes me feel better.

 

