Archive for the ‘Simple Science’ Category

The Physics of Guitar Strings

January 24, 2022

Friends, regular readers may be aware that I play the guitar.

And sleuths amongst you may have deduced that if I play the guitar, then I must occasionally change guitar strings.

The physics of tuning a guitar – the extreme stretching of strings of different diameters – has fascinated me for years, but it is only now that my life has become pointless that I have managed to devote some time to investigating the phenomenon.

What I have found is that the design of guitar strings is extraordinarily clever, leaving me in awe of the companies which make them. They are everyday wonders!

Before I get going, I just want to mention that this article will be a little arcane and a bit physics-y. Apologies.

What’s there to investigate?

The first question is why are guitar strings made the way they are? Some are plain ‘rustless steel’, but others have a steel core around which are wound super-fine wires of another metal, commonly phosphor-bronze.

The second question concerns the behaviour of the thinnest string: when tuned initially, it spontaneously de-tunes, sometimes very significantly, and it does this for several hours after initially being stretched.

Click image for a larger version. The structure of a wire-wound guitar string. Usually the thickest four strings on a guitar are made this way.

Of course, I write these questions out now like they were in my mind when I started! But no, that’s just a narrative style.

When I started my investigation I just wanted to see if I understood what was happening. So I just started measuring things!

Remember, “two weeks in the laboratory can save a whole afternoon in the library”.

Basic Measurements

The frequency with which a guitar string vibrates is related to three things:

  • The length of the string: the longer the string, the lower the frequency of vibration.
  • The tension in the string: for a given length, the tighter the string, the higher the frequency of vibration.
  • The mass per unit length of the string: for a given length and tension, heavier strings lower the frequency of vibration.

In these experiments, I can’t measure the tension in the string directly. But I can measure all the other properties:

  • The length of string (~640 mm) can be measured with a tape measure with an uncertainty of 1 mm
  • The frequency of vibration (82 Hz to 330 Hz) can be measured by tuning a guitar against a tuner with an uncertainty of around 0.1 Hz
  • The mass per unit length can be determined by cutting a length of the string and measuring its length (with a tape measure) and its mass with a sensitive scale. So-called Jewellery scales can be bought for £30 which will weigh 50 g to the nearest milligram!

Click image for a larger version. Using a jewellery scale to weigh a pre-measured length of guitar string. Notice the extraordinary resolution of 1 mg. Typically repeatability was at the level of ± 1 mg.

These measurements are enough to calculate the tension in the string, but I am also interested in the stress to which the material of the string is subjected.

The stress in the string is defined as the tension in the string divided by the cross-sectional area of the string. I can work out the area by measuring the diameter of the string with digital callipers. These devices can be bought for under £20 and will measure string diameters with an uncertainty of 0.01 mm.

However there is a subtlety when it comes to working out the stress in the wire-wound strings. The stress is not carried across the whole cross-section of the wire, but only by its core – so one must measure the diameter of the core of these strings (which can be done at their ends) rather than the outer diameter of the wound playing length.

Click image for a larger version. Formulas relating the tension T in a string (measured in newtons); the frequency of vibration f (measured in hertz); the length of the string L (measured in metres) and the mass per unit length of the string (m/l) measured in kilograms per metre.
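
For concreteness, here is a minimal Python sketch of that calculation, rearranging the formula in the figure to give the tension. The mass per unit length below is an illustrative value chosen to be plausible for the thickest string – it is not one of my measurements.

```python
# Tension from the formula above: f = (1/2L) * sqrt(T / mu)  =>  T = 4 * L^2 * f^2 * mu
L  = 0.640    # length of the vibrating string in metres
f  = 82.4     # frequency of the low E (E2) string in hertz
mu = 0.011    # mass per unit length in kg/m -- an illustrative value, not a measurement

T = 4 * L**2 * f**2 * mu
print(f"Tension ≈ {T:.0f} N (equivalent to hanging ~{T/9.81:.0f} kg on the string)")
```

With these illustrative numbers the tension comes out at roughly 120 N, consistent with the graph below.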

Results#1: Tension

The graph below shows the calculated tension in the strings in their standard tuning.

Click image for a larger version. The calculated tension in the 6 strings of a steel-strung guitar. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The tension is high – roughly 120 newtons per string. If the tension were maintained by a weight stretching the string over a wheel, there would be roughly 12 kg on each string!

Note that the tension is reasonably uniform across the neck of the guitar. This is important. If it were not so, the tension in the strings would tend to bend the neck of the guitar.

Results#2: Core Diameter

The graph below shows the measured diameters of each of the strings.

Click image for a larger version. The measured diameter (mm) of the stainless steel core of the 6 strings of a steel-strung guitar and the diameter of the string including the winding. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

First the obvious. The diameter of the second string is 33% larger than that of the first string, which increases its mass per unit length and lowers its frequency of vibration by a factor of 1.33 – i.e. the first string vibrates at a frequency 33% higher than the second. This is just the basic physics.

Now we get into the subtlety.

The core of the third string is smaller than that of the second string. And the core diameter of each of the heavier strings is just a little larger than that of the preceding string.

But these changes in core diameter are small compared to changes in the diameter of the wound string.

The density of the phosphor bronze winding (~8,800 kg/m³) is similar to, but actually around 10% higher than, the density of the stainless steel core (~8,000 kg/m³). This is not a big difference.

If we simply take the ratio of the outer diameters of the thickest and thinnest strings (1.34/0.31 ≈ 4.3), this is sufficient to explain the required two-octave (factor 4) change in frequency.

Results#3: Why are some strings wire-wound?

The reason that the thicker strings on a guitar are wire-wound can be appreciated if one imagines the alternative.

A piece of stainless steel 1.34 mm in diameter is not ‘a string’, it’s a rod. Think about the properties of the wire used to make a paperclip.

So although one could attach such a solid rod to a guitar, and although it would vibrate at the correct frequency, it would not move very much, and so could not be pressed against the frets, and would not give rise to a loud sound.

The purpose of using wire-wound strings is to increase their flexibility while maintaining a high mass per unit length.

Results#4: Stress?

The first thing I calculated was the tension in each string. By dividing that result by the cross-sectional area of each string I can calculate the stress in the wire.

But it’s important to realise that the tension is carried only within the steel core of each string. The windings provide mass per unit length but add almost nothing to the resistance to stretching.

The stress has units of newtons per square metre (N/m²), which in the SI has a special name: the pascal (Pa). The stresses in the strings are very high, so the values are typically in the range of gigapascals (GPa).
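
As a rough illustration – again with plausible made-up numbers rather than my measurements – the stress follows directly from the tension and the core diameter:

```python
import math

T      = 120        # tension in newtons, roughly as calculated earlier
d_core = 0.33e-3    # core diameter in metres -- an illustrative value, not a measurement

area   = math.pi * d_core**2 / 4     # cross-sectional area of the steel core
stress = T / area
print(f"Stress ≈ {stress/1e9:.1f} GPa")
```

A tension of 120 N on a core a third of a millimetre across corresponds to a stress of around 1.4 GPa, which is why the values in the graph below sit in the gigapascal range.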

Click image for a larger version. The estimated stress within the stainless-steel cores of the 6 strings of a steel-strung guitar. Notice that the first and third strings have considerably higher stress than the other strings. In fact the stress in these cores just exceeds the nominal yield stress of stainless steel. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

This graph contains the seeds of an explanation for some of the tuning behaviour I have observed – that the first and third strings are tricky to tune.

With new strings one finds that – most particularly with the 1st string (the thinnest string) – one can tune the string precisely, but within seconds, the frequency of the string falls, and the string goes out of tune.

What is happening is a phenomenon called creep. The physics is immensely complex, but briefly, when a high stress is applied rapidly, the stress is not uniformly distributed within the microscopic grains that make up the metal.

To distribute the stress uniformly requires the motion of faults within the metal called dislocations. And these dislocations can only move at a finite rate. As the dislocations move they relieve the stress.

After many minutes and eventually hours, the dislocations are optimally distributed and the string becomes stable.

Results#5: Yield Stress

The yield stress of a metal is the stress beyond which the metal is no longer elastic i.e. after being exposed to stresses beyond the yield stress, the metal no longer returns to its prior shape when the stress is removed.

For strong steels, stretching beyond their yield stress will cause them to ‘neck’ and thin and rapidly fail. But stainless steel is not designed for strength – it is designed not to rust! And typically its yield curve is different.

Typically stainless steels have a smooth stress-strain curve, so being beyond the nominal yield stress does not imply imminent failure. It is because of this characteristic that the creep is not a sign of imminent failure. The ultimate tensile strength of stainless steel is much higher.

Results#6: Strain

Knowing the stress to which the core of the wire is subjected, one can calculate the expected strain, i.e. the fractional extension of the wire.

Click image for a larger version. The estimated strain of the 6 strings of a steel-strung guitar. Also shown on the right-hand axis is the actual extension in millimetres. Notice that the first and third strings have considerably higher strain than the other strings. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The calculated fractional string extension (strain) ranges from about 0.4% to 0.8% and the actual string extension from 2.5 mm to 5 mm.
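
The strain estimate is just the stress divided by the Young's modulus of the steel. A minimal sketch, assuming a textbook modulus of about 200 GPa (the value for a particular string alloy may differ):

```python
E      = 200e9    # Young's modulus of steel in pascals -- an assumed textbook value
stress = 1.4e9    # stress in the core in pascals (illustrative value from earlier)
L      = 0.640    # vibrating length of the string in metres

strain    = stress / E
extension = strain * L
print(f"Strain ≈ {strain:.2%}, extension ≈ {extension*1000:.1f} mm")
```

With these numbers the strain is about 0.7% and the extension about 4.5 mm – in the middle of the ranges quoted above.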

This is difficult to measure accurately, but I did make an attempt by attaching a small piece of tape to the end of the old string as I removed it, and to the end of the new string as I tightened it.

Click image for a larger version. Method for estimating the strain in a guitar string. A piece of tape is tied to the top of the old string while it is still tight. On loosening, the tape moves with the part of the string to which it is attached.

For the first string my estimate was between 6 mm and 7 mm of extension, so it seems that the calculations are a bit low, but in the right ball-park.

Summary

Please forgive me: I have rambled. But I think I have eventually got to a destination of sorts.

In summary, the design of guitar strings is clever. It balances:

  • the tension in each string.
  • the stress in the steel core of each string.
  • the mass-per-unit length of each string.
  • the flexibility of each string.

Starting with the thinnest string: these are typically available in a range of diameters from 0.23 mm to 0.4 mm. The thinnest strings are easy to bend, but reducing the diameter increases the stress in the wire and makes it more likely to break. They also tend to be less loud.

The second string is usually unwound like the first string but the stresses in the string are lower.

The thicker strings are usually wire-wound to increase their flexibility for a given tension and mass-per-unit-length. If the strings were unwound they would be extremely inflexible, impossible to push against the frets, and would vibrate only with a very low amplitude.

How does the flexibility arise? When these wound strings are stretched, small gaps open up between the windings and allow the windings to slide past each other when the string is bent.

Click image for a larger version. Illustration of how stretching a wound string slightly separates the windings. This allows the wound components to slide past each other when the string is bent.

Additionally modern strings are often coated with a very thin layer of polymer which prevents rusting and probably reduces friction between the sliding coils.

Remaining Questions 

I still have many questions about guitar strings.

The first set of questions concerns the behaviour of nylon strings used on classical guitars. Nylon has a very different stress-strain curve from stainless steel, and the contrast in density between the core materials and the windings is much larger.

The second set of questions concerns the ageing of guitar strings. Old strings sound dull, and changing strings makes a big difference to the sound: the guitar sounds brighter and louder. But why? I have a couple of ideas, but none of them feel convincing at the moment.

And now I must get back to playing the guitar, which is quite a different matter. And sadly understanding the physics does not help at all!

P.S.

The strings I used were Elixir 12-53 with a phosphor bronze winding, and the guitar is a Taylor model 114ce.

The James Webb Space Telescope

December 24, 2021

Friends, a gift to humanity!

On Christmas Day at 12:20 GMT/UTC, the James Webb Space Telescope will finally be launched.

You can follow the countdown here and watch the launch live via NASA or on YouTube – below.

In May 2018 I was fortunate enough to visit the telescope at the Northrop Grumman facility where it was built, and to speak with the project’s former engineering director Jon Arenberg.

Everything about this telescope is extraordinary, and so as the launch approaches I thought that it might be an idea to re-post the article I wrote back in those pre-pandemical days.

As a bonus, if you read to the end you can find out what I was doing in California back in 2018!

Happy Christmas and all that.

===================================

Last week I was on holiday in Southern California. Lucky me.

Lucky me indeed. During my visit I had – by extreme good fortune – the opportunity to meet with Jon Arenberg – former engineering director of the James Webb Space Telescope (JWST).

And by even more extreme good fortune I had the opportunity to speak with him while overlooking the JWST itself – held upright in a clean room at the Northrop Grumman campus in Redondo Beach, California.

[Sadly, photography was not allowed, so I will have to paint you a picture in words and use some stock images.]

The JWST

In case you don’t know, the JWST will be the successor to the Hubble Space Telescope (HST), and has been designed to exceed the operational performance of the HST in two key areas.

  • Firstly, it is designed to gather more light than the HST. This will allow the JWST to see very faint objects.
  • Secondly, it is designed to work better with infrared light than the HST. This will allow the JWST to see objects whose light has been extremely red-shifted from the visible.

A full-size model of the JWST is shown below and it is clear that the design is extraordinary, and at first sight, rather odd-looking. But the structure – and much else besides – is driven by these two requirements.

JWST and people

Requirement#1: Gather more light.

To gather more light, the main light-gathering mirror in the JWST is 6.5 metres across rather than just 2.4 metres in the HST. That means it gathers around 7 times more light than the HST and so can see fainter objects and produce sharper images.

Image comparing the primary mirrors of the JWST and HST. Image courtesy of Wikipedia.

But in order to launch a mirror this size from Earth on a rocket, it is necessary to use a mirror which can be folded for launch. This is why the mirror is made in hexagonal segments.

To cope with the alignment requirements of a folding mirror, the mirror segments have actuators to enable fine-tuning of the shape of the mirror.

To reduce the weight of such a large mirror it had to be made of beryllium – a highly toxic metal which is difficult to machine. It is however 30% less dense than aluminium and also has a much lower coefficient of thermal expansion.

The ‘deployment’ or ‘unfolding’ sequence of the JWST is shown below.

Requirement#2: Improved imaging of infrared light.

The wavelength of visible light varies from roughly 0.000 4 mm for light which elicits the sensation we call violet, to 0.000 7 mm for light which elicits the sensation we call red.

Light with a wavelength longer than 0.000 7 mm does not elicit any visible sensation in humans and is called ‘infrared’ light.

Imaging so-called ‘near’ infrared light (with wavelengths from 0.000 7 mm to 0.005 mm) is relatively easy.

Hubble can ‘see’ at wavelengths as long as 0.002 5 mm. To achieve this, the detector in HST was cooled. But to work at longer wavelengths the entire telescope needs to be cold.

This is because every object emits infrared light and the amount of infrared light it emits is related to its temperature. So a warm telescope ‘glows’ and offers no chance to image dim infrared light from the edge of the universe!

The JWST is designed to ‘see’ at wavelengths as long as 0.029 mm – 10 times longer wavelengths than the HST – and that means that typically the telescope needs to be on the order of 10 times colder.
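
One way to see where that factor of ten comes from is Wien's displacement law: the wavelength at which an object's thermal glow peaks is inversely proportional to its temperature. The sketch below is my own illustration, using roughly 40 K as the approximate temperature of the telescope's cold side.

```python
b = 2.898e-3   # Wien's displacement constant in metre-kelvin

for T in (290, 40):   # roughly room temperature, and roughly the cold JWST mirror
    peak_wavelength_um = b / T * 1e6
    print(f"T = {T:3d} K : thermal glow peaks near {peak_wavelength_um:.0f} micrometres")

# Because the peak wavelength scales as 1/T, observing at ~10x longer wavelengths
# without being swamped by the telescope's own glow requires a telescope ~10x colder.
```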

To cool the entire telescope requires a breathtaking – but logical – design. There were two parts to the solution.

  • The first part involved the design of the satellite itself.
  • The second part involved the positioning of the satellite.

Cooling the telescope part#1: design

The telescope and detectors were separated from the rest of the satellite that contains elements such as the thrusters, cryo-coolers, data transmission equipment and solar cells. These parts need to be warm to operate correctly.

The telescope is separated from the ‘operational’ part of the satellite with a sun-shield roughly the size of a tennis court. When shielded from the Sun, the telescope is exposed to the chilly universe, and cooled gas from the cryo-coolers cools some of the detectors to just a few degrees above absolute zero.

Cooling the telescope part#2: location

The HST is only 300 miles or so from Earth, and orbits every 97 minutes. It travels into and out of full sunshine on each orbit. This type of orbit is not compatible with keeping a gigantic telescope cold.

So the second part of the cooling strategy is to position the JWST approximately 1 million miles from Earth, beyond the orbit of the Moon, at a location known as the second Lagrange point, L2. But JWST does not orbit the Earth like Hubble: it orbits the Sun.

Normally the periods of orbits around the Sun get longer as satellites orbit at greater distances from the Sun. But at the L2 position, the gravitational attraction of the Earth and Moon adds to the gravitational attraction of the Sun and speeds up the orbit of the JWST so that it orbits the Sun with a period of one Earth year – and so JWST stays in the same position relative to the Earth.
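
A rough numerical check of that argument is sketched below. The numbers are approximate, and the real JWST follows a halo orbit around L2 rather than sitting exactly at it.

```python
import math

GM_SUN   = 1.327e20   # gravitational parameter of the Sun (m^3/s^2)
GM_EARTH = 4.0e14     # Earth plus Moon, roughly (m^3/s^2)
R_EARTH  = 1.496e11   # Earth-Sun distance (m)
D_L2     = 1.5e9      # distance from Earth to L2 (m)
r = R_EARTH + D_L2    # distance of the satellite from the Sun (m)

def period_days(centripetal_accel, radius):
    omega = math.sqrt(centripetal_accel / radius)   # angular velocity for a circular orbit
    return 2 * math.pi / omega / 86400

print(f"Sun alone:       {period_days(GM_SUN / r**2, r):.0f} days")
print(f"Sun plus Earth:  {period_days(GM_SUN / r**2 + GM_EARTH / D_L2**2, r):.0f} days")
```

The Sun alone would give an orbit a few days longer than a year; adding the pull of the Earth and Moon brings the period back to roughly 365 days.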

  • The advantage of orbiting at L2 is that the satellite can maintain the same orientation with respect to the Sun for long periods. And so the sun-shade can shield the telescope very effectively, allowing it to stay cool.
  • The disadvantage of orbiting at L2 is that it is beyond the orbit of the moon and no manned space-craft has ever travelled so far from Earth. So once launched, there is absolutely no possibility of a rescue mission.

The most expensive object on Earth?

I love the concept of the JWST. At an estimated cost of $8 billion to $10 billion, if this is not the most expensive single object on Earth, then I would be interested to know what is.

But it has not been created to make money or as an act of aggression.

Instead, it has been created to answer a simple question:

“I wonder what we would see if we looked into deep space at infrared wavelengths.”

Ultimately, we just don’t know until we look.

In a year or two, engineers will place the JWST on top of an Ariane rocket and fire it into space. And the most expensive object on Earth will then – hopefully – become the most expensive object in space.

Personally I find the mere existence of such an enterprise a bastion of hope in a world full of worry.

Thanks

Many thanks to Jon Arenberg and Stephanie Sandor-Leahy for the opportunity to see this apogee of science and engineering.

Resources

Breathtaking photographs are available in galleries linked to from this page.

Christmas Bonus

Re-posting this article, I remembered why I was in Southern California back in May 2018 – I was attending Dylanfest – a marathon celebration of Bob Dylan’s music as performed by people who are not Bob Dylan.

The pandemic hit Dylanfest like a Hard Rain, but in 2020 they went on-line and produced a superb cover of Subterranean Homesick Blues which I gift to you this Christmas. Look out for the fantastic guitar solo at 1’18” into the video.

And since I am randomly posting performances inspired by Dylan songs, I can’t quite leave without reminding you of the entirely palindromic (!) version of the song by Weird Al Yankovic.

Our Old Car is Dead. Long Live The New Car!

July 28, 2021

Click for larger image. Our old and new Zafira cars

After 20 years and 96,000 miles, our 2001 Vauxhall Zafira is close to death.

We bought it for £7000 when it was three years old in 2004. Back then it was all shiny and new, but over the last 17 years it has developed a very long list of faults.

Alexei Sayle once said of his car: “When one door closes, another one opens.” This was one of the faults our car did not have. But it did have a feature such that: “When one door closes, all the electric windows operate simultaneously.”

Over the last few weeks the engine has begun making horrific noises, the engine warning light is on permanently, and there is an acrid stench of burning oil in the cabin.

After much deliberation, we have replaced it with a closely similar car, a 2010 Zafira with only 52,000 miles on its ‘clock’. The new car lacks our old car’s charmingly idiosyncratic list of faults, but what can you expect for £3,200?

In this post I would like to explain the thinking behind our choice of car.

Do we need a car?

Strictly speaking, no. We could operate with a combination of bikes and taxis and hire cars. But my wife and I do find having a car extremely convenient.

Having a car available simplifies a large number of mundane tasks and gives us the sense of – no irony intended – freedom.

Further, although I am heavily invested in reducing my carbon dioxide emissions, I do not want to live the life of a ‘martyr’. I am keen to show that a life with low carbon dioxide emissions can be very ‘normal’.

So why not an electric car? #1: Cost

Given the effort and expense I have gone to in reducing carbon dioxide emissions from the house, I confess that I did want to get an electric car.

I have come to viscerally hate the idea of burning a few kilograms of hydrocarbon fuel in order to move myself around. It feels dirty.

But sadly buying a new electric car didn’t really make financial sense.

There are lots of excellent electric family cars available in the UK, but they all cost in the region of £30,000.

There are not many second-hand models available but amongst those that were available, there appeared to be very few for less than £15,000.

Click for larger version. Annual Mileage of our family cars since 1995 taken from their MOT certificates. The red dotted line is the Zafira’s average over its lifetime.

Typically my wife and I drive between 3,000 and 5,000 miles per year, and we found ourselves unable to enthuse about the high cost of these cars.

And personally, I feel like I have spent a fortune on the house. Indeed I have spent a fortune! And I now need to just stop spending money for a while. But Michael: What about the emissions?

So why not an electric car? #2: Carbon Dioxide

Sadly, buying an electric car didn’t quite make sense in terms of carbon emissions either.

Electric cars have very low emissions of carbon dioxide per kilometre. But they have – like conventional cars – quite large amounts of so-called ‘embedded’ carbon dioxide arising from their manufacture.

As a consequence, at low annual mileages, it takes several years for the carbon dioxide emissions of an electric car to beat the carbon dioxide emissions from an already existing internal combustion engine car.

The graph below compares the anticipated carbon dioxide emissions from our old car, our new car, and a hypothetical EV over the next 10 years. The assumptions I have made are listed at the end of the article.

Click for larger version. Projected carbon dioxide emissions from driving 5,000 miles per year in: Our current car (2001 Zafira); Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

For an annual mileage of 5000 miles, the breakeven point for carbon dioxide emissions is 6 or 7 years away. If we reduced our mileage to 3000 miles per year, then the breakeven point would be even further away.

Click for larger version. Projected carbon dioxide emissions from driving 3,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

However, we are a low mileage household. If we drove a more typical 10,000 miles per year then the breakeven point would be just a couple of years away. Over 10 years, the Zafira would emit roughly 12 tonnes more carbon dioxide than the EV.

If we took account of embodied carbon dioxide in a combustion engine car, i.e. if we were considering buying a new car, the case for an EV would be very compelling.

Click for larger version. Projected carbon dioxide emissions from driving 10,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

So…

By replacing our old car with a closely similar model we have minimised the cognitive stress of buying a new car. Hopefully it will prove to be reliable.

And however many miles we drive in the coming years, our new car will reduce our carbon dioxide emissions compared to what they would have been in the old car by about 17%. And no new cars will have been built to achieve that saving.

Assuming that our new car will last us for (say) 5 years, I am hopeful that by then the cost of electric cars will have fallen to the point where an electric car – new or second-hand – might make sense to us.

Additionally, if the electricity used to both manufacture and charge electric cars increasingly comes from renewable sources, then the reduction in carbon dioxide emissions associated with driving electric cars will (year-on-year) become ever more compelling.

However, despite being able to justify this decision to myself, I must confess that I am sad not to be able to join the electric revolution just yet.

Assumptions

For the Zafiras:

  • I used the standard CO2 emissions per kilometre (190 and 157 gCO2/km respectively) from the standard government database.

For the hypothetical EV

  • I took a typical high efficiency figure of 16 kWh per 100 km taken from this article.
  • I assumed a charging inefficiency of 10%, and a grid carbon intensity of 200 gCO2/kWhe reducing to 100 gCO2/kWhe in 10 years time.
  • I assumed that the battery size was 50 kWh and that embodied carbon emissions were 65 kg per kWh (link) of battery storage yielding 3.3 tonnes of embodied carbon dioxide.
  • I assumed the embodied carbon dioxide in the chassis and other components was 4.6 tonnes.
  • For comparison, the roughly 8 tonnes of embodied carbon dioxide in an EV is only just less than the combined embodied carbon dioxide in all the other emission reduction technology I have bought recently:
    • Triple Glazing, External Wall Insulation, Solar Panels, Powerwall Battery, Heat Pump, Air Conditioning

I think all these numbers are quite uncertain, but they seem plausible and broadly in line with other estimates one can find on the web.
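
For anyone who wants to check the arithmetic, here is a rough Python sketch of the breakeven calculation using the assumptions above. The embodied carbon dioxide of the two Zafiras is ignored because those cars already exist, and the linear fall in grid intensity is my own simplification.

```python
miles_per_year = 5000
km_per_year    = miles_per_year * 1.609
old_g_per_km, new_g_per_km = 190, 157      # gCO2/km for the 2001 and 2010 Zafiras
ev_kwh_per_km  = 0.16 * 1.10               # 16 kWh per 100 km plus 10% charging loss
ev_embodied_t  = 50 * 65 / 1000 + 4.6      # battery plus chassis, tonnes of CO2

old_t, new_t, ev_t = 0.0, 0.0, ev_embodied_t
for year in range(1, 11):
    grid = 200 - 10 * (year - 1)           # gCO2/kWh, falling from ~200 towards ~100
    old_t += km_per_year * old_g_per_km / 1e6
    new_t += km_per_year * new_g_per_km / 1e6
    ev_t  += km_per_year * ev_kwh_per_km * grid / 1e6
    print(f"Year {year:2d}: old Zafira {old_t:4.1f} t, new Zafira {new_t:4.1f} t, EV {ev_t:4.1f} t")
```

Changing miles_per_year to 3,000 or 10,000 reproduces the other two scenarios discussed above.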

 

The Last Artifact – At Last!

May 20, 2021

Friends, at last a film to which I made a minor contribution – The Last Artifact – is available in full online!

It’s the story of the redefinition of the kilogram which took place on this day back in May 2019.

The director Ed Watkins and his team carried out interviews at NPL back in August 2017 (link) and then headed off on a globe-trotting tour of National Metrology Laboratories.

Excerpts from the film were released last year (link), but somehow the entire film was unavailable – until now!

So set aside 90 minutes or so, put it onto as big a screen as you can manage, and relax as film-making professionals explain what it was all about!

 

The Last Artifact

Gas Boilers versus Heat pumps

May 18, 2021

Click for a larger version. A recent quote for gas and electricity from Octopus Energy. The electricity is six times more expensive than gas.

We are receiving strong messages from the Government and the International Energy Agency telling us that we must stop installing new gas boilers in just a year or two.

And I myself will be getting rid of mine within the month, replacing it with an Air Source Heat Pump (ASHP).

But when a friend told me his gas boiler was failing, and asked for my advice, I paused.

Then after considering things carefully, I recommended he get another gas boiler rather than install an ASHP.

Why? It’s the cost, stupid!

Air Source Heat Pumps:

  • cost more to buy than a gas boiler,
  • cost more to install than a gas boiler,
  • cost more to run than a gas boiler.

I am prepared to spend my own money on this type of project because I am – slightly neurotically – intensely focused on reducing my carbon dioxide emissions.

But I could not in all conscience recommend it to someone else.

More to Buy

Using the services of Messrs. Google, Google and Google I find that:

And this does not even touch upon the costs of installing a domestic hot water tank if one is not already installed.

More to Install

Having experienced this, please accept my word that the installation costs of an ASHP exceed those of replacing an existing boiler by a large factor – though probably by less than a factor of 10.

More to Run

I have a particularly bad tariff from EDF, so I got a quote from Octopus Energy, a popular supplier at the moment.

They offered me the following rates: 19.1 p/kWh for electricity and 3.2 p/kWh for gas.

Using an ASHP my friend would be likely to generate around 3 units of heat for every 1 unit of electricity he used: a so-called Coefficient of Performance (COP) of 3.

But electricity costs 19.1/3.2 = 6.0 times as much as gas. So heating his house would cost twice as much!
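
A minimal Python sketch of that comparison, using the quoted tariff and an assumed COP of 3. The annual heating demand is an illustrative figure, and boiler losses are ignored, just as in the simple estimate above.

```python
ELEC_P_PER_KWH = 19.1    # pence per kWh of electricity (Octopus quote above)
GAS_P_PER_KWH  = 3.2     # pence per kWh of gas (Octopus quote above)
COP            = 3.0     # assumed heat-pump coefficient of performance

heat_kwh  = 15_000       # illustrative annual heating demand in kWh -- an assumption
gas_cost  = heat_kwh * GAS_P_PER_KWH / 100
ashp_cost = heat_kwh * ELEC_P_PER_KWH / COP / 100
print(f"Gas boiler ≈ £{gas_cost:.0f}/year, heat pump ≈ £{ashp_cost:.0f}/year")
```

Whatever the actual heating demand, the ratio is (19.1/3)/3.2 ≈ 2 – the ‘twice as much’ above.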

More to buy, install and run and they don’t work as well!

Without reducing the heating demand within a house – by insulation – it is quite possible that my friend would not be able to heat his house at all with an ASHP!

Radiator output is specified assuming that water flowing through the radiators is 50 °C warmer than the room. For rooms at 20 °C, this implies water flowing at 70 °C.

A gas boiler has no problem with this, but an ASHP can normally only heat water to 55 °C i.e. the radiators would be just 35 °C above room temperature.

As this document explains, the heating output would be reduced (de-rated to use the correct terminology) to just 63% of its output with 70 °C flow.
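
That de-rating factor can be reproduced with the standard radiator relationship, in which heat output varies roughly as the temperature difference raised to the power of about 1.3 (an exponent I am assuming here; the linked document gives the authoritative figures).

```python
RATED_DT = 50            # radiators are rated at a flow 50 °C above room temperature
ashp_dt  = 55 - 20       # a 55 °C flow into a 20 °C room

derating = (ashp_dt / RATED_DT) ** 1.3
print(f"Output falls to about {derating:.0%} of the rated figure")   # ≈ 63%
```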

What would you do?

Now consider that my friend is not – as you probably imagined – a member of the global elite, a metropolitan intellectual with a comfortable income and savings. I have friends outside that circle too.

Imagine perhaps that my friend was elderly and on a limited pension.

Or imagine that they were frail or confused?

Or imagine perhaps that they had small children and were on a tight budget.

Or imagine that they were just hard up.

Could you in all honesty have recommended anything different? 

These problems are well known (BBC story) but until this cost landscape changes the UK doesn’t stand a chance of reaching net-zero.

 

Recalescence

May 18, 2021

When I used to work at NPL, I remember being really impressed by the work of my colleague Andrew Levick.

Magnetic Resonance Imaging (MRI) machines are able to image many physical details inside human bodies.

One of their little-used features is that they can also be used to image the variation of temperatures throughout bodies.

Andrew was working on a temperature standard that could be used to calibrate temperature measurements in MRI imaging machines.

This would be a device placed inside the imager that could create a volume of imageable organic material at a known temperature. But one of the difficulties was that there must be no metal parts – so it could not contain any heaters or conventional temperature sensors.

So Andrew had the idea of using a vessel containing a supercooled organic liquid. If the transition to a solid was initiated, then the released latent heat – the recalescence –  would warm the liquid back to the melting/freezing temperature, creating a region of liquid-solid mixture at a stable, known and reproducible temperature, ideal for calibrating MRI machines.

Anyway…

Early on in the research he was doing experiments on the different ways in which the liquid crystallised.

He was supercooling organic liquids and then seeding the solidification, and making videos of the solidification process using a thermal imaging camera.

I thought the results were beautiful and put them to music, and that’s what the movie is.

The music is Bach’s Jesu Joy of Man’s Desiring arranged for 12-string guitar by the inimitable Leo Kottke.

If you are interested, YouTube has many versions and lessons!

That’s all. I hope you enjoy it.

How Many Naturally-Occurring Elements are there? Corrigendum

January 28, 2021

And in non-COVID news…

… I received an e-mail from ‘Claire’ the other day pointing out that there was an error in one of my blog articles.

I try quite hard to be ‘right’ on this blog, so despite her politeness, I was distressed to hear this.

The error was in an article written on 15th February 2010 – yes, more than 10 years ago – entitled: Just How Many Naturally Occurring Elements are there?

Reading it again after all these years I was pleased with it. The gist of the article is that there is not a clear answer to the question.

It turns out that the nuclei of the atoms of some elements are so radioactively unstable that even though they do exist on Earth naturally, at any one time there are only a handful of atoms of the substance in existence.

These elements seemed to be in a different category from, say, carbon (which has some stable isotopes) or uranium (which has no stable isotopes). But some of the isotopes of uranium have very long half-lives: 238U has a half-life of 4.468 billion years – roughly the length of time that the Earth has existed.

So of all the 238U which was donated to the Earth at its formation – very roughly half of it has decayed (warming the Earth in the process) and half of it is still left.

So I had no problem saying that 238U was ‘naturally-occurring’, but that it was a moot point whether Francium, of which there are just a few atoms in existence on Earth at one time, could really be said to be ‘naturally-occurring’.

So in the article I stated that I had stopped giving an exact number for the number of ‘naturally-occurring’ elements – I just say it is ‘about 100’ – and then discuss the details should anyone ask for them.

What was my error?

In the article I stated that Bismuth – atomic number 83 – is the heaviest element which has at least one stable isotope. For elements with larger atomic numbers than Bismuth, every isotope is radioactively unstable.

What Claire told me was that in fact the one apparently stable isotope of bismuth (209Bi, the one which occurs naturally) had been found to be unstable against alpha decay, but with an exceedingly long half-life. The discovery had been announced in Nature in 2003: link

Click image for a larger version. Link to Nature here

What I want to comment on here is the length of the half-life: the authors estimated that the half-life of 209Bi was:

  • 1.9 (± 0.2) × 10¹⁹ years.
  • 19 billion billion years

This is an extraordinarily long time. For comparison, the age of the Universe – the time since the Big Bang – is estimated to be about:

  • 1.4 × 10¹⁰ years.
  • 14 billion years

Imagining that 1 kilogram of pure 209Bi was gifted to the Earth when it was formed roughly…

  • 0.4 × 10¹⁰ years.
  • 4 billion years

…ago. Then since that time less than 1 in a billion atoms (0.15 micrograms) of the 209Bi would have decayed.

We might expect a single nuclear decay in 1 kilogram of pure 209Bi every 5 minutes.
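
That ‘one decay every 5 minutes’ figure is easy to check. A minimal Python sketch:

```python
import math

N_AVOGADRO    = 6.022e23
half_life_min = 1.9e19 * 365.25 * 24 * 60      # half-life of 209Bi converted to minutes
atoms_per_kg  = 1000 / 209 * N_AVOGADRO        # atoms in 1 kg of 209Bi (molar mass ~209 g)

decays_per_minute = math.log(2) / half_life_min * atoms_per_kg
print(f"{decays_per_minute:.2f} decays per minute, i.e. about one every "
      f"{1/decays_per_minute:.0f} minutes")
```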

How could one measure such a decay? The authors used a transparent crystal of Bismuth Germanate (Bi4Ge3O12) which scintillates when a radioactive particle – such as an alpha particle – passes through it. In this case, the crystal would ‘self-scintillate’.

But the background rate of scintillation due to other sources of radiation is much higher than the count due to the decay of the 209Bi.

To improve the discrimination against the background the authors cooled the crystal down to just 0.1 K. At this very low temperature its heat capacity becomes a tiny fraction of its heat capacity at room temperature, and the energy of even a single radioactive decay can be detected with a thermometer!

Combining light detection and heat detection (scintillating bolometry) helps to discriminate against spurious events.

And my point was…?

For all practical purposes 209Bi is stable. Anything with a half-life a billion times longer than the age of the Universe is at least stable-ish!

But Claire’s e-mail caused me to reflect that the apparently binary distinction between ‘stable’ and ‘unstable’ is not as obvious as I had assumed.

By this extraordinary measurement, the authors have reminded me that instead of saying that something is ‘stable’ we should really state that it may be stable, but that if it decays, its rate of decay is beyond our current limit of detectability.

So for example, we know that neutrons – outside a nucleus – decay with a radioactive half-life of just 10.2 minutes. But what about protons? Are they really unconditionally stable?

People have searched for the decay of the proton and established that protons may be stable, but if they do decay, their half-life is greater than 1.7 × 10³⁴ years – or more than a million, billion, billion times the age of the Universe.

So now we know.

Rocket Science

January 14, 2021

One of my lockdown pleasures has been watching SpaceX launches.

I find the fact that they are broadcast live inspiring. And the fact they will (and do) stop launches even at T-1 second shows that they do not operate on a ‘let’s hope it works’ basis. It speaks to me of confidence built on the application of measurement science and real engineering prowess.

Aside from the thrill of the launch and the beautiful views, one of the brilliant features of these launches is that the screen view gives lots of details about the rocket: specifically it gives time, altitude and speed.

When coupled with a little (public) knowledge about the rocket one can get to really understand the launch. One can ask and answer questions such as:

  • What is the acceleration during launch?
  • What is the rate of fuel use?
  • What is Max Q?

Let me explain.

Rocket Science#1: Looking at the data

To do my study I watched the video above starting at launch, about 19 minutes 56 seconds into the video. I then repeatedly paused it – at first every second or so – and wrote down the time, altitude (km) and speed (km/h) in my notebook. Later I wrote down data for every kilometre or so in altitude, then later every 10 seconds or so.

In all I captured around 112 readings, and then entered them into a spreadsheet (Link). This made it easy to convert the speeds to metres per second.

Then I plotted graphs of the data to see how they looked: overall I was quite pleased.

Click for a larger image. Speed (m/s) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The velocity graph clearly showed the stage separation. In fact looking in detail, one can see the Main Engine Cut Off (MECO), after which the rocket slows down for stage separation, and then the Second Engine Start (SES) after which the rocket’s second stage accelerates again.

Click for a larger image. Detail from graph above showing the speed (m/s) of Falcon 9 versus time (s) after launch. After MECO the rocket is flying upwards without power and so slows down. After stage separation, the second stage then accelerates again.

It is also interesting that acceleration – the slope of the speed-versus-time graph – increases up to stage separation, then falls and then rises again.

The first stage acceleration increases because the thrust of the rocket is almost constant – but its mass is decreasing at an astonishing 2.5 tonnes per second as it burns its fuel!

After stage separation, the second stage mass is much lower, but there is only one rocket engine!

Then I plotted a graph of altitude versus time.

Click for a larger image. Altitude (km) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The interesting thing about this graph is that much of the second-stage burn is devoted to increasing the speed of the spacecraft at almost constant altitude – roughly 164 km above the Earth. It’s not pushing the spacecraft higher and higher – but faster and faster.

About 30 minutes into the flight the second stage engine re-started, speeding up again and raising the altitude further to put the spacecraft on a trajectory towards a geostationary orbit at 35,786 km.

Rocket Science#2: Analysing the data for acceleration

To estimate the acceleration I subtracted each measurement of speed from the previous measurement of speed and then divided by the time between the two readings. This gives acceleration in units of metres per second per second, but I thought it would be more meaningful to plot the acceleration as a multiple of the strength of Earth’s gravitational field g (9.81 m/s/s).

The data as I calculated them had spikes in them because the small time differences between speed measurements (of the order of a second) were not very accurately recorded. So I smoothed the data by averaging 5 data points together.

Click for a larger image. Smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming it used up fuel at a uniform rate.

The acceleration increased as the rocket’s mass reduced reaching approximately 3.5g just before stage separation.

I then wondered if I could explain that behaviour.

  • To do that I looked up the launch mass of a Falcon 9 (data sources at the end of the article) and saw that it was 549 tonnes (549,000 kg).
  • I then looked up the mass of the second stage 150 tonnes (150,000 kg).
  • I then assumed that the mass of the first stage was almost entirely fuel and oxidiser and guessed that the mass would decrease uniformly from T = 0 to MECO at T = 156 seconds. This gave a burn rate of 2558 kg/s – over 2.5 tonnes per second!
  • I then looked up the launch thrust from the 9 rocket engines and found it was 7,600,000 newtons (7.6 MN)
  • I then calculated the ‘theoretical’ acceleration using Newton’s Second Law (a = F/m) at each time step – remembering to decrease the mass by 2,558 kilograms every second, and remembering that the thrust has to exceed the rocket’s weight before it will leave the ground! (A minimal sketch of this calculation follows this list.)
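
Here is that sketch in Python. The thrust and masses are the published figures quoted above; drag and the gradual pitch-over of the rocket are ignored, just as in my simple estimate.

```python
G         = 9.81                      # m/s^2
THRUST    = 7.6e6                     # launch thrust of the 9 engines in newtons
M_LAUNCH  = 549_000                   # launch mass in kg
M_STAGE2  = 150_000                   # second-stage mass in kg
T_MECO    = 156                       # time of Main Engine Cut Off in seconds
burn_rate = (M_LAUNCH - M_STAGE2) / T_MECO    # ~2,558 kg of propellant per second

for t in range(0, 160, 20):
    mass  = M_LAUNCH - burn_rate * t
    accel = THRUST / mass - G         # net acceleration once the thrust exceeds the weight
    print(f"t = {t:3d} s : a ≈ {accel / G:4.2f} g")
```

The printed values rise from about 0.4 g at lift-off to roughly 3 g shortly before MECO, matching the shape of the dotted ‘theoretical’ line in the graphs.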

The theoretical line (– – –) catches the trend of the data pretty well. But one interesting feature caught my eye – a period of constant acceleration around 50 seconds into the flight.

This is caused by the Falcon 9 throttling back its engines to reduce stresses on the rocket as it experiences maximum aerodynamic pressure – so-called Max Q – around 80 seconds into flight.

Click for a larger image. Detail from the previous graph showing smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming it used up fuel at a uniform rate. Highlighted in red is the region around 50 seconds into flight when the engines are throttled back to reduce stresses as the craft experiences maximum aerodynamic pressure (Max Q) about 80 seconds into flight.

Rocket Science#3: Maximum aerodynamic pressure

Rockets look like they do – rocket-shaped – because they have to get through Earth’s atmosphere rapidly, pushing the air in front of them as they go.

The amount of work needed to do that is generally proportional to the product of three factors:

  • The cross-sectional area A of the rocket. Narrower rockets require less force to push through the air.
  • The speed of the rocket squared (v²). One factor of v arises from the fact that travelling faster requires one to move the same amount of air out of the way faster. The second factor arises because each parcel of air must be given a momentum proportional to the rocket’s speed.
  • The air pressure P. The density of the air in the atmosphere falls roughly exponentially with height, reducing by approximately 63% every 8.5 km.

The work done by the rocket on the air results in so-called aerodynamic stress on the rocket. These stresses – forces – are expected to vary as the product of the above three factors: A × P × v². The cross-sectional area of the rocket A is constant, so in what follows I will just look at the variation of the product P × v².

As the rocket rises, the pressure falls and the speed increases. So their product P × v, and functions like P × v², will naturally have a maximum value.

The importance of the maximum of the product P × v² (known as Max Q) as a point in flight is that if the aerodynamic forces are not uniformly distributed, then the rocket trajectory can easily become unstable – and Max Q marks the point at which the danger of this is greatest.

The graph below shows the variation of pressure P with time during flight. The pressure is calculated using:

P = 1000 × exp(-h/h0)

where the ‘1000’ is the approximate pressure at the ground (in mbar), h is the altitude at a particular time, and h0 is called the scale height of the atmosphere and is typically 8.5 km.

Click for a larger image. The atmospheric pressure calculated from the altitude h versus time after launch (s) during the Turksat 5A launch.

I then calculated the product P × v², and divided by 10 million to make it plot easily.

Click for a larger image. The aerodynamic stresses calculated from the altitude and speed versus time after launch during the Turksat 5A launch.

This calculation predicts that Max Q occurs about 80 seconds into flight, long after the engines throttled down, and in good agreement with SpaceX’s more sophisticated calculation.
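
For anyone repeating the exercise, the whole estimate boils down to a few lines of Python. The function below expects the time, altitude and speed columns transcribed from the webcast (the variable names are mine):

```python
import math

def max_q(times_s, altitudes_km, speeds_ms, h0_km=8.5):
    """Return (time, value) at which the product P * v^2 peaks,
    with P = 1000 * exp(-h / h0) millibar."""
    q = [1000 * math.exp(-h / h0_km) * v**2 / 1e7
         for h, v in zip(altitudes_km, speeds_ms)]
    i = q.index(max(q))
    return times_s[i], q[i]

# Usage, with your own transcribed data:
# t_max_q, q_max = max_q(times, altitudes, speeds)
```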

Summary 

I love watching the SpaceX launches, and having analysed one of them just a little bit, I feel like I understand better what is going on.

These calculations are well within the capability of advanced school students – and there are many more questions to be addressed.

  • What is the pressure at stage separation?
  • What is the altitude of Max Q?
  • The vertical velocity can be calculated by measuring the rate of change of altitude with time.
  • The horizontal velocity can be calculated from the speed and the vertical velocity.
  • How does the speed vary from one mission to another?
  • Why does the craft aim for a particular speed?

And then there’s the satellites themselves to study!

Good luck with your investigations!

Resources

And finally thanks to Jon for pointing me towards ‘Flight Club – One-Click Rocket Science‘. This site does what I have done but with a good deal more attention to detail! Highly Recommended.

 

Thinking about domestic batteries

January 3, 2021

My External Wall Insulation project is complete and the solar panels are installed, so I am left to simply gather data on how things are working: a retired metrologist’s work is never done!

So inevitably my mind is moving on to the ‘next thing’, which is possibly a battery, and I am left with nothing to do but write over-long articles about the possibilities.

  • [Note added on 9/1/2021: If you like this article, then try also the next article on the same subject – link – I think it is a little clearer and the spreadsheet has been improved.]

The idea of using a battery is very simple: store solar electricity and use it later! But as I tried to think about it, I found myself intermittently perplexed. This could be an age thing, or just due to my lack of familiarity with solar power installations, but it was not at all obvious to me how to operate the battery in harmony with the solar panels.

This is because energy can flow in several directions.

  • For example electricity from the solar panels could charge the battery, operate the domestic load, or be exported to the grid.
  • Similarly, the battery could charge itself from the grid, operate the domestic load or export energy to the grid.

Understanding these things matters because domestic scale batteries are not cheap.

  • A rechargeable AA battery with 5 Wh of capacity (3.3 Ah @ 1.5V) costs around £5.
  • If we scale that up to 13.5 kWh (the size of a Tesla Powerwall battery) then 2700 rechargeable AA batteries would cost about £13,500.
  • In fact there are some economies of scale, but the likely cost is still around £10,000.

After making several simulations I think I have a clearer idea how the scheme would work, so please allow me to explain.

Mode#1: Storing in the day.

At the moment the solar panels generate at the whim of the weather gods – and the iron diktats of celestial geometry.

In sunshine – even at mid-winter – the panels can generate at more than 2 kW and unless we are using that electricity in the house at the moment the Sun is shining, the power is exported to the grid.

Click for a larger version. Solar electricity (in kWh) generated daily since the solar panels were installed.

  • Over the last 50 winter days the panels have generated about 136 kWh
  • I have used about 60% of that, saving around 81.6 × 24.3 pence ≈ £19.83
  • But I have given away about 40% of the electricity I have generated.
  • I can arrange to sell that electricity to EDF, my electricity and gas supplier, for the grand price of 1.8 pence per unit i.e. the 54.4 units I have donated would be worth £0.98
  • However, if I could have stored those units and used them later I would have saved approximately £13.22.

So using a battery to store solar energy and then use it later to displace buying full-price electricity makes some financial sense. It also makes carbon sense, displacing grid electricity with low-carbon solar energy.

In winter, a battery would make the most of the meagre solar supply and in summer it would allow us to be effectively ‘off grid’ for many days at a time.

Mode#2: Storing at night.

But batteries can also be used to store electricity generated at night time – when it is cheap. EDF charges me 24.31 pence for each unit I use between 6:30 a.m. and 11:30 p.m. (‘peak’ rate), but only 4.75 pence for each unit I use overnight (‘off peak’ rate).

On average, we use around 11 kWh/day of electricity, around 9 kWh of which is used during ‘peak’ time. So if I could buy that electricity at the ‘off peak’ rate (costing 9 x 4.75 = 42.75 p), store it in a battery, and then use it the next day, then I would avoid spending 9 x 24.31 pence = £2.19.

This strategy would save me around £1.76 per day, or around £640 per year – a truly staggering amount of money!

It would also be slightly greener. The exact amount of carbon dioxide emitted for each unit of electricity – a quantity known as the carbon intensity – depends on how the electricity is generated:

  • Electricity generated from coal has a carbon intensity of around 900 gCO2/kWh
  • Electricity generated from gas has a carbon intensity of around 500 gCO2/kWh
  • Electricity generated from nuclear, solar or wind has a carbon intensity of a few 10’s of gCO2/kWh

Depending on mix of generating sources, the carbon intensity of electricity varies from hour-to-hour, day-to-day and from month-to-month.

To estimate the difference in carbon intensity between ‘peak’ and ‘off peak’ electricity is quite a palaver.

  • I went to the site CarbonIntensity.org.uk and downloaded the data for the carbon intensity of electricity assessed every 30 minutes for the last three years.
  • I then went through the data and found out the average carbon intensity for ‘Off Peak’ and ‘Peak’ electricity.
  • I averaged these figures monthly. (A rough sketch of these steps in code follows this list.)
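
Here is that sketch in Python. The file name and column names are assumptions – the actual download format from carbonintensity.org.uk may differ – and the off-peak window is taken as 11:30 p.m. to 6:30 a.m. to match my tariff.

```python
import pandas as pd

df = pd.read_csv("carbon_intensity_half_hourly.csv", parse_dates=["datetime"])

# Off-peak on my tariff runs from 11:30 pm to 6:30 am
minutes = df["datetime"].dt.hour * 60 + df["datetime"].dt.minute
df["period"] = ["off-peak" if (m >= 23 * 60 + 30 or m < 6 * 60 + 30) else "peak"
                for m in minutes]

monthly = (df.groupby([df["datetime"].dt.to_period("M"), "period"])["intensity"]
             .mean()
             .unstack())
print(monthly)
```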

The data are graphed below.

Click for a larger version. Carbon intensity (grams of CO2 per kWh of electricity) for UK electricity evaluated each month since the start of 2018. The red curve uses data for ‘Peak Rate’ electricity and the blue curve shows data for ‘off peak’ electricity’. The black curve shows the difference between ‘peak’ and ‘off-peak’ and the dotted red line shows the average value of the difference.

The average ‘Peak Rate’ carbon intensity over the last two years is approximately 191 g CO2 per kWh, and the ‘Off-peak’ average is approximately 25 g (or 13%) lower.

I calculated that over the last year if I used 9 peak units and 2 off-peak units per day then the carbon emissions associated with my electricity use would have been 749 kg (~three quarters of a tonne) and the cost would have been £822.

If I had instead bought all those units at night, stored them in a battery, and used them the next day the carbon emissions would have been 661 kg – a saving of 88 kg and the cost would have been just £188 – a saving of £634.

Summary so far

So these two strategies involve using the battery to:

  • Store solar electricity in the day (which maximises my personal use of my personal solar electricity)
  • Store grid electricity at night (which appears to be amazingly cost effective and has about 13% lower carbon emissions)

Understanding how these two strategies can be combined had been hurting my head, but I think I have got there!

I think the operating principles I need are these:

  • Whenever solar electricity is available, use it.
  • If the solar power exceeds immediate demand,
    • If the battery is not full, store it.
    • If the battery is full, export it for whatever marginal gain may be made.
  • At night, charge the battery from the mains so that it is full before the start of the next day.

I have run a few simulations below assuming a Tesla Powerwall 2 battery with a capacity of 13.5 kWh. If you want, you can download the Excel™ spreadsheet here, or view typical outputs below.

  • Note: I hate sharing spreadsheets because, as Jean-Paul Sartre might once have said, “Hell is other people’s spreadsheets”. Please forgive me for any errors. Thanks
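
For those who prefer code to spreadsheets, the operating principles listed above can be sketched in a few lines of Python. This is a simplified half-hourly model of my own, not the actual spreadsheet, and it ignores charging losses and export bookkeeping.

```python
CAPACITY_KWH   = 13.5    # Tesla Powerwall 2 capacity
CHARGE_RATE_KW = 2.0     # assumed overnight charging power
STEP_H         = 0.5     # half-hour time steps

def simulate(solar_kw, load_kw, is_off_peak):
    """Yield the battery state of charge (kWh) for each half-hour step."""
    battery = 0.0
    for solar, load, off_peak in zip(solar_kw, load_kw, is_off_peak):
        net = solar - load                       # surplus (+) or shortfall (-) in kW
        if off_peak:
            net += CHARGE_RATE_KW                # top the battery up on cheap electricity
        battery += net * STEP_H                  # convert power to energy for this step
        battery = max(0.0, min(CAPACITY_KWH, battery))   # clamp between empty and full
        yield battery
```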

Battery only: No Solar

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both are zero in this graph. Both should be read against the right-hand axis.

In the first simulation the battery charges from empty using 2 kW of ‘Off Peak’ electricity and fills up just before morning. It then discharges through the day (at 0.4 kW) and is about half empty – or half full depending on your disposition – the next evening.

So the next day the battery starts charging from about 50% full and then discharges through the day and is again about 50% full at the end of the day.

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both are zero in this graph. Both should be read against the right-hand axis.

So based on this simulation, it looks like a stable daily charge and discharge rate could effectively eliminate the need to use ‘Peak-Rate’ electricity.

Each night the battery would store however much electricity had been used the day before.

Battery and solar in harmony 

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and should be read against the right-hand axis.

The simulation above shows what would happen if there were weak solar generation, typical of this wintry time of year. As the solar electricity is generated, the rate of discharge of the battery slows – reverses briefly – and then resumes as the solar generation fades away.

A modest generation day – typical of a bright winter day or a normal spring day – is shown below.

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both should be read against the right-hand axis.

At its peak the solar generation reaches 2 kW, and in the middle of the day it re-charges the battery to capacity. Once the battery is full, the solar generation covers the domestic load and the excess electricity is exported (blue curve).

On a long summer day solar generation might reach 3.6 kW but here I assume just a 2.5 kW peak. In this scenario, the battery barely discharges and solar generation covers the domestic load and exports to the grid during the day. Only in the evening does the battery discharge.

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both should be read against the right-hand axis.

Battery and heat pump and solar 

The battery and the solar panels are just a part of the wider project to reduce carbon emissions which – if you have been paying attention – involves replacing my gas boiler with an air source heat pump. This uses electricity to move heat from outside into the house.

Back in the Winter of 2018/19 the gas boiler supplied up to 100 kWh/day of heating. In the slightly milder winter of 2019/20 the boiler used on average 70 kWh/day of gas for heating. This winter the External Wall Insulation and the Triple Glazing seem to have reduced this average to about 40 kWh/day – with a peak requirement around 72 kWh on the very coldest days.

Using a heat pump with a coefficient of performance (COP) of about 3, supplying these 40 kWh of heat will require 40/3 ≈ 13.3 kWh/day of electrical energy. Spread over 24 hours, this amounts to an additional 0.55 kW running continuously.
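The same sum, written out as a minimal check (the COP of 3 is the assumption stated above):

```python
heat_demand_kwh_per_day = 40   # post-insulation average heat requirement
cop = 3                        # assumed coefficient of performance

electrical_kwh_per_day = heat_demand_kwh_per_day / cop   # ≈ 13.3 kWh/day
continuous_kw = electrical_kwh_per_day / 24              # ≈ 0.55 kW

print(f"{electrical_kwh_per_day:.1f} kWh/day of electricity, "
      f"or {continuous_kw:.2f} kW running continuously")
```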

I have simulated this situation below by increasing the load to 1.0 kW. In this case the battery will be fully discharged a couple of hours early, and we will have to buy a couple of units of full-price electricity.

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both should be read against the right-hand axis.

And finally we come to the reasonable worst-case scenario. Here there would be effectively no solar power (dull winter days!) and the external temperature would be around 0 °C, requiring around 72 kWh of heating per day, i.e. 3 kW of continuous heating power. With a COP of 3, this will require 1 kW of electrical power to operate the heat pump, on top of the 0.4 kW of domestic load.

Click for a larger version. The dotted (—-) red line shows the battery capacity of a Tesla Powerwall 2 and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both should be read against the right-hand axis.

In this scenario we would require about 8 hours of full-price electricity at 1.4 kW, i.e. 11.2 kWh, which at 24.3 p/kWh would cost around £2.70 per day. So if there were 10 of these days a year, it would cost roughly £27/year.
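A sketch of that worst-case arithmetic. The figure of 8 uncovered hours follows from the 13.5 kWh battery capacity and an assumed ~17-hour peak period:

```python
heat_kwh_per_day = 72                  # heat required on the very coldest days
cop = 3
heat_pump_kw = heat_kwh_per_day / cop / 24    # 1.0 kW of electrical power for the heat pump
total_load_kw = heat_pump_kw + 0.4            # 1.4 kW including the domestic load

battery_hours = 13.5 / total_load_kw          # ≈ 9.6 h of coverage from a full Powerwall
full_price_hours = 8                          # of the ~17 peak hours, roughly 8 are uncovered
full_price_kwh = full_price_hours * total_load_kw   # 11.2 kWh
cost_per_day = full_price_kwh * 0.243               # ≈ £2.72
cost_per_year = 10 * cost_per_day                   # ≈ £27 for ten such days a year

print(f"£{cost_per_day:.2f} per day, or about £{cost_per_year:.0f} across ten cold days")
```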

I could avoid purchasing this full-price electricity by buying two Tesla Powerwall batteries to give a capacity of 27 kWh. But spending an additional £8000 to avoid paying £27 a year does not look like a sound investment.

Click for a larger version. The dotted (—-) red line shows the battery capacity of two Tesla Powerwall 2 batteries and the green curve shows the state of charge of the battery. Both should be read against the left-hand axis. The yellow curve shows the electrical power generated from the solar panels and the blue curve shows the power exported to the grid. Both should be read against the right-hand axis.

Summary

Overall, I think I now understand how a battery would integrate with the way we use energy in this house, and I think it makes sense.

Regarding money:

  • Using a battery, it appears I could save many hundreds of pounds each year by purchasing off-peak electricity instead of peak electricity.

Regarding carbon:

  • Without solar panels, the switch to ‘Off Peak’ electricity should reduce annual emissions from roughly 749 kg to about 661 kg – a saving of 88 kg.
  • With solar panels we should generate roughly 3700 kWh of low carbon electricity, all of which will be used either by me or by someone else, displacing carbon-producing generation. This would be true with or without a battery. But the battery allows me to personally benefit.
    • During the summer the battery should allow me to benefit from the full amount of solar energy generated, reducing grid use (and expenditure) to almost zero.
    • During the winter, when only about 2 kWh of solar generation is available each day, it should reduce carbon emissions by about 20% compared with using ‘Off Peak’ grid electricity.
    • In the worst case – when using a heat pump to heat the house on very cold days with negligible solar power – I will need to buy full price electricity for a few hours a day.

So when I replace the gas boiler with an air-source heat pump, we will inevitably rely on the grid for some full-price electricity on the few coldest days of the year. That is why I have been so keen to reduce the amount of heating required.

Solar Power in Teddington

November 18, 2020

An Accidental Installation

Slightly to my surprise, I became the owner of a solar power installation last week.

I had planned the works on my house in this order:

  • Triple-glazing
  • External Wall Insulation (EWI)

Wait for a Winter

  • Heat pump
  • Solar Panels
  • Battery

This plan was partly rational. The largest carbon emissions are associated with heating, and so tackling those first – and evaluating their performance over winter – was sensible.

But I was irrationally averse to getting solar panels because they felt like an indulgence – something I would “allow myself as a treat” after the hard work had been done. However, I changed my mind.

The main reason was that for the EWI work I needed my neighbour’s permission to put scaffolding in the side passage by their building. My neighbour is an NHS clinic, and it took several weeks to locate the person responsible and submit the appropriate safety documentation. Although it was all perfectly pleasant, it was long-winded and not a process I wanted to repeat.

With that in mind, once the EWI scaffolding was erected, I called up a local solar power installer, Andy Powell from GreenCap Energy, and asked whether he could install a system in the next two weeks using the EWI scaffolding. He visited the next day and said that he could indeed use the scaffolding, and that a 12-panel system would cost £4200. Importantly, he was able to install it the following week.

I had been thinking of all kinds of clever arrangements of panels, but it turns out that if you want to put more than 12 panels on your roof, you need a special licence – which takes quite a bit of work, and time.

I reflected that if I installed the panels now, I would save roughly £1000 by using the existing EWI scaffolding. And that £4200 was much less than I had expected. So I put myself in Andy Powell’s capable hands and let him get on with it!

And there was one more piece of serendipity. At that point in the EWI work, it was possible to run the power cable from the panels to the main distribution board by running an armoured cable outside the house, buried underneath the EWI. This saved a lot of mess inside the house.

#1: The Components

The system consists of 12 solar panels, a device called an inverter, some isolation switches and a generation meter.

  • The 1.7 m x 1 m panels are from Q-cells. The choice of panels available is bewildering so I just accepted Andy’s recommendation. They look beautiful and seem to work just fine.
  • Each panel generates roughly 40 V and up to 10 A of DC current. The panels are connected in two banks to the inverter through two cables that poke through a small hole in the roof.
  • The inverter is a SOLIS 4G 3.6 kW model. It takes the DC voltage and turns it into 220 V AC that can be used around the house or exported to the grid. There is also an add-on that enables the system to be monitored from a phone.
  • In my installation this AC output then goes back outside via an armoured cable buried in the wall, and then comes back inside under the floor, through a power meter, to the distribution board.

The gallery below shows some pictures of the process: click a picture for a larger version.

#2: The Site

Google Maps view of my home showing the shape and orientation of the available roofs. And photographs before and after the installation. Click for a larger image.

There were two roofs available for solar panels on my house – a smaller triangular roof facing 25° east of south, and a larger roof facing 65° west of south.

My initial plan was to cram as many solar panels onto the south-facing roof as possible but, because the roof is triangular, the large 1.7 m x 1 m panels do not fill the space very efficiently. I could have squeezed 7 panels on, but in the end I opted for 6 on each roof – it seemed to look a little less ugly.

To my surprise, experimentation with the Easy PV software (more details later in the article) seemed to show that the orientation wouldn’t make much difference to the overall energy generated.

The path of the Sun at the summer and winter solstices and the equinoxes. In the summer when most solar energy is generated, the sun sets up to 30° north of west, and so the west-facing panels continue generating later in the day after the south facing panels are in shadow. Having panels on both roofs allows for generation for a longer fraction of the long summer days. Click for a larger version.

I think the reason is that – as Andy Powell pointed out – no panel can generate for more than 12 hours because it doesn’t work when the Sun is behind it!

But in the summer, when most solar energy is generated, the Sun is above the horizon for up to 16 hours and at the solstice it sets more than 30° north of due west.

And so the west-facing panels continue generating later in the day after the south-facing panels are in shadow. Having panels on both roofs allows for generation for a longer fraction of the long summer days.

The path of the Sun at the summer and winter solstices and the equinoxes. The yellow zone shows sun orientations at which only the south-facing panels generate. The red zone shows sun orientations at which only the west-facing panels generate. The purple zone shows sun orientations at which both sets of panels generate.
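To illustrate those zones, here is a minimal sketch that checks which bank can ‘see’ the Sun from its azimuth alone. The panel azimuths (155° and 245°) follow from the roof orientations given earlier; the ‘within 90°’ rule and the neglect of elevation are crude simplifying assumptions:

```python
# A crude check of which bank of panels can see the Sun, based on azimuth alone.
# Assumptions: the south bank faces 155° (25° east of south), the west bank faces
# 245° (65° west of south); a panel is treated as generating whenever the Sun's
# azimuth is within 90° of the direction the panel faces (elevation is ignored).
SOUTH_BANK_AZ = 180 - 25   # 155°
WEST_BANK_AZ = 180 + 65    # 245°

def lit(panel_az, sun_az):
    """True if the Sun is in front of a panel facing panel_az (degrees from north)."""
    diff = abs((sun_az - panel_az + 180) % 360 - 180)
    return diff < 90

for sun_az in range(60, 331, 30):
    south = "on " if lit(SOUTH_BANK_AZ, sun_az) else "off"
    west = "on " if lit(WEST_BANK_AZ, sun_az) else "off"
    print(f"Sun azimuth {sun_az:3d}°: south bank {south}  west bank {west}")
```

This reproduces the pattern in the figure: a south-only zone in the morning, an overlap around the middle of the day, and a west-only zone in the evening.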

Of course it is not just the east-west position of the Sun – the so-called azimuthal angle – that affects generation: the height of the Sun in the sky – its elevation – is also important.

Sites such as this one will plot maps showing the course of the Sun through the sky on any particular date from your particular location.

The blue line shows the path of the Sun through the sky on November 11th for my location at 53° latitude and 0° longitude. 180° corresponds to due South. Click for a larger version. The shaded yellow boxes indicate the azimuthal angles at which the two banks of panels generate. And the green line shows the optimum elevation of the Sun.

From the figure above we see that the Sun is low in the sky at this time of year (Doh!) – it rises only about 20° above the horizon at midday – but that even now, the generation from the west-facing panels in the afternoon prolongs the useful generation time. From initial observations, the power from the two banks of panels is equal at about 1 p.m.

#3: Expected Performance

Frustratingly, working out the expected performance of a solar installation is complicated. One needs to:

  • calculate the Sun path diagram for each day of the year,
  • factor in the weather,
  • consider each bank of panels separately.

Fortunately, approved installers such as GreenCap can run standard calculations, or, using software like Easy PV, you can – after some messing about – come up with your own estimate.

Output from Easy PV software allows one to calculate how much electricity is likely to be generated by each bank of solar panels in a year.

For my installation, both estimates suggested that I should expect to generate roughly 3700 kWh of electricity each year. This figure is roughly how much electricity my house uses each year.

If I could capture every one of those generated kilowatt hours and use it to displace one that I buy from EDF, I would save more than £800/year. However, things are not so simple.

Looking around, one can find a few records (such as this one) of people’s generated power. They seem to indicate that in the UK I should expect roughly 5 times as much daily generation in the summer as in the winter.

Putting that information together with the fact that I can expect 3700 kWh over the year, I concocted a function (sine squared with an offset, in case you care) to guide my expectations.

So in the summer I can expect the panels to generate perhaps 15 kWh/day – much more than I need – but in the winter the panels might only generate perhaps 3 kWh/day – much less than I need.

My guess at how many kWh per day I can expect from my solar panels. The average household consumption is shown as a red dotted line (– – –). Click for a larger version.
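For anyone who does care, here is a minimal sketch of that function. The parameter values are illustrative choices – picked so that winter days give roughly 3 kWh, summer days roughly 15 to 17 kWh, and the year totals about 3700 kWh – rather than a fit to real data:

```python
import math

# A sketch of a "sine squared with an offset" guess at daily generation.
# The parameters are illustrative: chosen so that winter days give ~3 kWh,
# summer days ~17 kWh, and the year totals roughly 3700 kWh.
WINTER_MIN_KWH = 3.4
SUMMER_PEAK_KWH = 17.0

def daily_generation_kwh(day_of_year):
    """Guessed generation on a given day (1 = 1 January)."""
    # sin^2 is zero near the winter solstice (around 21 December)
    # and peaks near the summer solstice, six months later.
    phase = math.pi * (day_of_year + 10) / 365
    return WINTER_MIN_KWH + (SUMMER_PEAK_KWH - WINTER_MIN_KWH) * math.sin(phase) ** 2

annual_kwh = sum(daily_generation_kwh(d) for d in range(1, 366))
print(f"Mid-June:     {daily_generation_kwh(172):.1f} kWh/day")   # ~17
print(f"Mid-December: {daily_generation_kwh(355):.1f} kWh/day")   # ~3.4
print(f"Annual total: {annual_kwh:.0f} kWh")                      # ~3700
```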

#4: Actual Performance

I have only 10 days of data for the panels; these are plotted on the graph above and seem broadly in line with my expectations.

I can already see the effect on my electricity usage. As can be seen on the graph below, my daily average use is 2.1 kWh below the average before the panels were installed.

Daily electricity usage (from a smart meter) before and after solar panel installation. Click for a larger version.

This may not sound like much but, even if the panels only ever performed at that level, it would prevent the emission of 73 kg of CO2 per year and (at £0.24/unit) save me £175 per year. For those of you who are interested, that’s a 4.2% return on the investment. But I expect the panel performance to be much better than this when averaged over the year.

But what is hard to capture in words is the sheer wonder of the installation. In bright November sunshine the panels generate more than 2 kW of electrical power – so I can boil a kettle and still see the smart meter read zero usage.

#5: What next?

For the next few weeks I intend to let the dust settle, and try to get my head around how the system is working.

But one obvious difficulty – which will become more pressing as we move into spring – is that when the Sun shines the panels produce kilowatts of electricity, whereas the house itself generally consumes just a few hundred watts.

At the moment any excess electricity is exported to the grid – my smart meter says 13 kWh so far – as a gift to the nation!

So in the next few weeks I will sign up with a company that will buy this electricity. There are several companies that will pay between £0.03 and £0.055 per kWh.

In the longer term it may well make sense to get a battery as well. But batteries are expensive, and the more I have thought about it, the clearer it has become that the main use of a battery in my situation would not be to store solar electricity, but to switch the time at which I buy electricity from the day (when electricity is carbon-intensive and expensive) to the night (when electricity is cheap and generally less carbon-intensive). But that is a question for another time.

