Archive for the ‘Simple Science’ Category

Is a UK grid-scale battery feasible?

April 26, 2019

This is quite a technical article, so here is the TL;DR: it would make excellent sense for the UK to build a distributed battery facility to enable renewable power to be used more effectively.


Energy generated from renewable sources – primarily solar and wind – varies from moment-to-moment and day-to-day.

The charts below are compiled from data available at Templar Gridwatch. They show the hourly, daily and seasonal fluctuations in solar and wind generation, plotted every 5 minutes, for (a) 30 days and (b) a whole year from April 21st 2018. Yes, that is more than 100,000 data points!


Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for 30 days following April 21st 2018. The annual average (~6 GW) is shown as black dotted line.


Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for the 365 days since April 21st 2018. The annual average (~6 GW) is shown as black dotted line.

An average of 6 GW is a lot of power. But suppose we could store some of this energy and use it when we wanted to rather than when nature supplied it. In other words:

Why don’t we just build a big battery?

It turns out we need quite a big battery!

How big a battery would we need?

The graphs below show a nominal ‘demand’ for electrical energy (blue) and the electrical energy made available by the vagaries of nature (red) over periods of 30 days and 100 days respectively. I didn’t draw the whole-year graph because one cannot see anything clearly on it!

The demand curve is a continuous demand for 3 GW of electrical power with a daily peak demand of 9 GW. This choice of demand curve is arbitrary, but it represents the kind of contribution we would like to be able to get from any energy source – its availability would ideally follow typical demand.
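This demand curve can be sketched in a few lines of Python. The 3 GW base and 9 GW peak are the figures used in this article; the half-sine shape of the daytime bump and its 06:00–22:00 window are my own illustrative assumptions, since only the base and peak levels are specified.

```python
import math

def demand_gw(hour_of_day):
    """Nominal demand: 3 GW of baseload plus a daytime bump peaking at 9 GW.

    The base and peak are the article's figures; the half-sine shape of
    the bump and the 06:00-22:00 window are assumptions for illustration.
    """
    if 6 <= hour_of_day <= 22:
        return 3.0 + 6.0 * math.sin(math.pi * (hour_of_day - 6) / 16)
    return 3.0
```

Any similarly shaped curve would do for the analysis that follows; what matters is the 3 GW floor and the 9 GW daily peak.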



We can see that the renewable supply already has daily peaks in spring and summer due to the solar energy contribution.

The role of a big battery would be to cope with the difference between demand and supply. The figures below show the difference between my putative demand curve and supply, over periods of 30 days and a whole year.



I have drawn black dotted lines showing when the difference between demand and supply exceeds 5 GW one way or another. In spring and summer this catches most of the variations. So let’s imagine a battery that could store or release energy at a rate of 5 GW.

What storage capacity would the battery need to have? As a guess, I have done calculations for a battery that could store or release 5 GW of generated power for 5 hours, i.e. a battery with a capacity of 5 GW × 5 hours = 25 GWh. We’ll look later to see if this is too much or too little.

How would such a battery perform?

So, how would such a battery affect the ability of wind and solar to deliver a specified demand?

To assess this I used the nominal ‘demand’ I sketched at the top of this article – a continuous demand for 3 GW, with a daily peak of 9 GW – quite a severe challenge.

The two graphs below show the energy that would be stored in the battery for 30 days after 21 April 2018, and then for the whole following year.

  • When the battery is full then supply is exceeding demand and the excess is available for immediate use.
  • When the battery is empty then supply is simply whatever the elements have given us.
  • When the battery is in-between fully-charged and empty, then it is actively storing or supplying energy.
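The three bullets above amount to a simple clipping rule for the stored energy. Here is a minimal sketch in Python, using the 5 GW rate and 25 GWh capacity of the trial battery; the hourly time step and the perfect (100%) round-trip efficiency are simplifying assumptions of mine, not specified in the article.

```python
def simulate_battery(supply_gw, demand_gw, capacity_gwh=25.0, rate_gw=5.0):
    """Track the state of charge of a grid battery, hour by hour.

    Surplus power charges the battery, deficits discharge it, with the
    flow clipped at the 5 GW rate and the stored energy clipped between
    empty and the 25 GWh capacity. Hourly steps and perfect round-trip
    efficiency are simplifying assumptions.
    """
    stored = 0.0
    history = []
    for supply, demand in zip(supply_gw, demand_gw):
        flow = max(-rate_gw, min(rate_gw, supply - demand))  # GW into the battery
        stored = max(0.0, min(capacity_gwh, stored + flow))  # 1-hour step: GW -> GWh
        history.append(stored)
    return history

# Toy example: three hours of 10 GW supply, then three hours of none,
# against a constant 3 GW demand.
charge = simulate_battery([10, 10, 10, 0, 0, 0], [3, 3, 3, 3, 3, 3])
```

Running this over the real 5-minute Gridwatch series (with a 1/12-hour step) rather than a toy series would reproduce the state-of-charge curves shown in the figures.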


Over 30 days (above) the battery spends most of its time empty, but over a full year (below), the battery is put to extensive use.


How to measure performance?

To assess the performance of the battery I looked at how the renewable energy available last year would meet levels of constant demand from 1 GW up to 10 GW with different sizes of battery. I considered battery sizes from zero (no storage) up to our 25 GWh battery, in 5 GWh steps. The results are shown below:

It is clear that the first 5 GWh of storage makes the biggest difference.

Then I tried modelling several levels of variable demand: a combination of 3 GW of continuous demand with an increasingly large daily variation – up to a peak of 9 GW. This is a much more realistic demand curve.

Once again the first 5 GWh of storage makes a big difference for all the demand curves and the incremental benefit of bigger batteries is progressively smaller.

So based on the above analysis, I am going to consider a battery with 5 GWh of storage – but able to charge or discharge at a rate of 5 GW. But here is the big question:

Is such a battery even feasible?

Hornsdale Power Reserve


The Hornsdale Power Reserve Facility occupies an area about the size of a football pitch. Picture from the ABC site

The biggest battery grid storage facility on Earth was built a couple of years ago in Hornsdale, Australia (Wiki Link, Company Site). It seems to have been a success (link).

Here are its key parameters:

  • It can store or supply power at a rate of 100 MW or 0.1 GW
    • This is 50 times smaller than our planned battery
  • It can store 129 MWh of energy.
    • This is just under 40 times smaller than our planned battery
  • Tesla were reportedly paid 50 million US dollars
  • It was supplied in 100 days.
  • It occupies an area the size of a football pitch.

So why don’t we just build lots of similar things in the UK?

UK Requirements

So if we built 50 Hornsdale-size facilities, the cost would be roughly 2.5 billion dollars, i.e. about £2 billion.

If we could build 5 a year, our 5 GWh battery would be built in 10 years at a cost of around £200 million per year. This is a lot of money. But it is not a ridiculous amount of money in the context of the National Grid infrastructure.
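The cost arithmetic can be checked in a few lines. The 50 units, $50M price and 10-year build come from the figures above; the 0.8 dollar-to-pound conversion is an assumed rough 2019 exchange rate.

```python
# Back-of-envelope costs: 50 Hornsdale-sized units at ~$50M each,
# built over 10 years, spread across ~30 million UK households.
# The 0.8 dollar-to-pound rate is an assumption, not from the article.
cost_usd = 50 * 50e6                                 # ~$2.5 billion
cost_gbp = cost_usd * 0.8                            # ~£2 billion
gbp_per_year = cost_gbp / 10                         # ~£200 million per year
gbp_per_household_per_year = cost_gbp / 30e6 / 10    # ~£6.70
```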

Why this might actually make sense

The key benefits of this kind of investment are:

  • It makes the most of all the renewable energy we generate.
    • By time-shifting the energy from when it is generated to when we need it, it allows renewable energy to be sold at a higher price and improves the economics of all renewable generation
  • The capital costs are predictable and, though large, are not extreme.
  • The capital generates an income within a year of commitment.
    • In contrast, a 3.2 GW nuclear power station like Hinkley Point C is currently estimated to cost about £20 billion but does not generate any return on investment for perhaps 10 years and carries a very high technical and political risk.
  • The plant lifetime appears to be reasonable and many elements of the plant would be recyclable.
  • If distributed into 50 separate Hornsdale-size facilities, the battery would be resilient against a single catastrophic failure.
  • Battery costs still appear to be falling year on year.
  • Spread across 30 million UK households, the cost is about £6 per year.


I performed these calculations for my own satisfaction. I am aware that I may have missed things, and that electrical grids are complicated, and that contracts to supply electricity are of labyrinthine complexity. But broadly speaking – more storage makes the grid more stable.

I can also think of some better modelling techniques. But I don’t think that they will affect my conclusion that a grid scale battery is feasible.

  • It would occupy about 50 football pitches worth of land spread around the country.
  • It would cost about £2 billion, about £6 per household per year for 10 years.
    • This is one tenth of the current projected cost of the Hinkley Point C nuclear power station.
  • It would deliver benefits immediately construction began, and the benefits would improve as the facility grew.

But I cannot comment on whether this makes economic sense. My guess is that when it does, it will be done!


Data came from Templar Gridwatch


Cloud in a bottle!

March 22, 2019

One of the best parts of the FREE! ‘Learn About Weather’ course was the chance to make a cloud in a bottle. Here’s my video!

The demonstration involves squeezing a bottle partly filled with water and then letting go. One can see a cloud form as one lets go, and then disappear again when one squeezes. Wow!

But there is a trick! You need to drop a burning match into the bottle first!

Heterogeneous versus homogeneous nucleation

How does the smoke make the trick work? It’s to do with the way droplets form – a process called nucleation.

There are two ways for droplets to nucleate. An easy way and a hard way. But those words are too short for scientists. Instead we call them heterogeneous and homogeneous nucleation!

  • ‘Heterogeneous nucleation’ means that the water droplets in a cloud form around dust or smoke particles. The ‘hetero-’ prefix means ‘different’, because there is more than one type of entity involved in forming droplets – dust and water.
  • ‘Homogeneous nucleation’ means that the water droplets in a cloud form spontaneously without any other type of particle being present. The ‘homo-’ prefix means ‘the same’, because there is just one substance present – water.

The experiment shows that hetero-gen-e-ous nucleation is dramatically easier than homo-gen-e-ous nucleation. And in reality – in real clouds – practically all droplet formation is heterogeneous – involving dust particles.

The reason is easy to appreciate.

  • To form a tiny droplet by homogeneous nucleation requires a few water molecules to meet and stick together. It’s easy to imagine three or four molecules might do this, but as new molecules collide, some will have higher than average energy and tend to break the proto-droplet apart.
  • But a dust or smoke particle, though small by human standards (about 0.001 mm in diameter), is roughly 10,000 times larger than an individual molecule, so its surface provides billions of locations for water molecules to stick. When the average energy of the water molecules is at the appropriate level to form a liquid, the water molecules can quickly stick to the surface and cause a droplet to grow.

How big is the temperature change?

Squeezing the bottle compresses the air quickly – in much less than 1 second. Because air is a poor conductor of heat, there is no time for the heat of compression to flow from the gas into the walls and the water (that takes a few seconds), and so the air warms transiently.

I was curious about the size of the temperature change that brought about this cloud formation.

I calculated that if the air in the bottle changed volume by 5%, there should be a temperature change of around 6 °C – really quite large!
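This estimate can be reproduced with the standard relation for a reversible adiabatic change, T·V^(γ−1) = constant. The 5% volume change is the assumption stated above; the 20 °C starting temperature is mine.

```python
# Adiabatic temperature rise for a rapid 5% compression of air,
# using T * V**(gamma - 1) = constant.
gamma = 1.4                 # ratio of specific heats for dry air
T1 = 293.0                  # assumed starting temperature (~20 °C), in kelvin
compression = 0.95          # volume falls to 95% of its initial value

T2 = T1 * (1.0 / compression) ** (gamma - 1.0)
delta_T = T2 - T1           # ~6 kelvin, i.e. ~6 °C
```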

Squeezing the bottle warms the air rapidly – and then over a few seconds the temperature slowly returns to the temperature of the walls of the bottle and the water.

If one lets go at this point, the volume increases by an equivalent amount and the air transiently cools below the ambient temperature. It is this fall which is expected to precipitate the water droplets.

To get the biggest temperature change one needs a large fractional change in volume. I couldn’t do the calculation of the optimum filling fraction so I did an experiment instead.

I poked a thin thermocouple through a bottle top and made it air tight using lots of epoxy resin.


I then squeezed the bottle and measured the maximum temperature rise. The results are shown below.

Delta T versus Filling Fraction

The results indicate that for a bottle filled to around three quarters with water, the temperature change is about 6 °C.

But as you can see in the video – it takes a few seconds to reach this maximum temperature, so I suspect the instantaneous change in air temperature is much larger, but that even this small thermocouple takes a couple of seconds to warm up.

Happy Experimenting

The Met Office have more cloud-forming tricks here.




Learning about weather

March 17, 2019

I have just completed a FREE! ‘Learn About Weather’ course, and slightly to my surprise I think I have learned some things about the weather!


Being an autodidact in the fields of Weather and Climate, I have been taught by an idiot. So ‘attending’ online courses is a genuine pleasure.

All I have to do is to listen – and re-listen – and then answer the questions. Someone else has selected the topics they feel are most important and determined the order of presentation.

Taking a course on-line allows me to expose my ignorance to no-one but myself and the course-bot. And in this low-stress environment it is possible to remember the sheer pleasure of just learning stuff.

Previously I have used the FutureLearn platform, for courses on Global Warming, Soil, and Programming in Python. These courses have been relatively non-technical and excellent introductions to subjects of which I have little knowledge. I have also used the Coursera platform for a much more thorough course on Global Warming.

So what did I learn? Well, several things about why Global Circulation Cells are the size they are, the names of the clouds, and how tornadoes start to spin. But perhaps the best bit was finally getting my head around ‘weather fronts’.

Fronts: Warm and Cold

I had never understood the terms ‘warm front’ and ‘cold front’ on weather forecasts. I had looked at the charts with the isobars and thought that somehow the presence or absence of ‘a front’ could be deduced by the shapes of the lines. I was wrong. Allow me to try to explain my new insight.

Air Mixing

Air in the atmosphere doesn’t mix like air in a room. Air in a room generally mixes quite thoroughly and quite quickly. If someone sprays perfume in one corner of the room, the perfume spreads through the air quickly.

But on a global scale, air doesn’t mix quickly. Air moves around as ‘big blobs’ and mixing takes place only where the blobs meet. These areas of mixing between air in different blobs are called ‘fronts’.


In the ‘mixing region’ between the two blobs, the warm – generally wet – air meets the cold air and the water vapour condenses to make clouds and rain. So fronts are rain-forming regions.

Type of front

However it is unusual for two blobs of air to sit still. In general one ‘blob’ of air is ‘advancing’ and the other is ‘retreating’.

This insight was achieved just after the First World War and so the interfaces between the blobs were referred to as ‘fronts’ after the name for the interface between fighting armies. 

  • If the warm air is advancing, then the front is called a warm front, and
  • if the cold air is advancing, then the front is called a cold front.

Surprisingly, cold fronts and warm fronts are quite different in character.

Warm Fronts 

When a blob of warm air advances, because it tends to be less dense than the cold air, it rises above the cold air.

Thus the mixing region extends ahead of the location on the ground where the temperature of the air will change.

The course told me the slope of the mixing region was shallow, as low as 1 in 150. So as the warm air advances, there is a region of low, rain-forming cloud that can extend for hundreds of kilometres ahead of it.


So on the ground, what we experience is hours of steady rain, and then the rain stops as the temperature rises.

Cold Fronts 

When a blob of cold air advances, because it tends to be more dense than the warm air, it slides below it. But sliding under an air mass is harder than gliding above it – I think this is because of friction with the ground.

As a result there is a steep mixing region which extends a little bit ahead, and a short distance behind the location on the ground where the temperature of the air changes.


So as the cold air advances, there is a region of intense rain just before and for a short time after.

So on the ground what we experience are stronger, but much shorter, rain events at just about the same time as the temperature falls. There generally follows some clearer air – at least for a short while.


I had assumed that because of the messy nature of reality compared to theory, real weather data would look nothing like what the simple models above might lead me to expect. I was wrong!

As I was learning about warm and cold fronts last weekend (10 March 2019) by chance I looked at my weather station data and there – in a single day – was evidence for what I was learning – a warm front passing over at about 6:00 a.m. and then a cold front passing over at about 7:00 p.m.

  • You can look at the data from March 10th and zoom in using this link to Weather Underground.

This is the general overview of the air temperature, humidity, wind speed, rainfall and air pressure data. The left-hand side represents midnight on Saturday/Sunday and the right-hand side represents midnight on Sunday/Monday.


The warm front approaches overnight and reaches Teddington at around 6:00 a.m.:

  • Notice the steady rainfall from midnight onwards, and then as the rain eases off, the temperature rises by about 3 °C within half an hour.

The cold front reaches Teddington at around 7:00 p.m.:

  • There is no rain in advance of the front, but just as the rain falls – the temperature falls by an astonishing 5 °C!


Of course there is a lot of other stuff going on. I don’t understand how these frontal changes relate to the pressure changes and the sudden rise and fall of the winds as the fronts pass.

But I do feel I have managed to link what I learned on the course to something I have seen in the real world. And that is always a good feeling.

P.S. Here’s what the Met Office have to say about fronts…

Global Oxygen Depletion

February 4, 2019

While browsing over at the two degrees institute, I came across this figure for atmospheric oxygen concentrations measured at a station at the South Pole.

Graph 1

The graph shows the difference between:

  • the ratio of oxygen to nitrogen molecules in samples of air taken at a particular date, and

  • the ratio of oxygen to nitrogen molecules in samples of air taken in the 1980’s.

The sentence above is complicated, but it can be interpreted without too many caveats as simply the change in oxygen concentration in air measured at the South Pole.

We see an annual variation – the Earth ‘breathing’ – but more worryingly we see that:

  • The amount of oxygen in the atmosphere is declining.

It’s a small effect, and will only reach a 0.1% decline – 1000 parts per million – in 2035 or so. So it won’t affect our ability to breathe. Phewww. But it is nonetheless interesting.

Averaging the data from the South Pole over the years since 2010, the oxygen concentration appears to be declining at roughly 25 parts per million per year.


The reason for the decline in oxygen concentration is that we are burning carbon to make carbon dioxide…

C + O2 = CO2

…and as we burn carbon, we consume oxygen.

I wondered if I could use the measured rate of decline in oxygen concentration to estimate the rate of emission of carbon dioxide.

How much carbon is that?

First I needed to know how much oxygen there was in the atmosphere. I considered a number of ways to calculate that, but it being Sunday, I just looked it up in Wikipedia. There I learned that the atmosphere has a mass of about 5.15 × 10^18 kg.

I also learned the molar fractional concentration of the key gases:

  • nitrogen (molecular weight 28): 78.08%
  • oxygen (molecular weight 32): 20.95%
  • argon (molecular weight 40): 0.93%

From this I estimated that the mass of 1 mole of the atmosphere was 0.02896 kg/mol. And so the mass of the atmosphere corresponded to…

5.15 × 10^18 / 0.02896 = 1.78 × 10^20

…moles of atmosphere. This would correspond to roughly…

1.78 × 10^20 × 0.2095 = 3.73 × 10^19

…moles of oxygen molecules. This is the number that appears to be declining by 25 parts per million per year i.e.

3.73 × 10^19 × 0.000 025 = 9.32 × 10^14

…moles of oxygen molecules are being consumed per year. From the chemical equation, this must correspond to exactly the same number of moles of carbon: 9.32 × 10^14. Since 1 mole of carbon weighs 12 g, this corresponds to…

  • 1.12 × 10^16 g of C,
  • 1.12 × 10^13 kg of C
  • 1.12 × 10^10 tonnes of C
  • 11.2 gigatonnes (Gt) of C
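The whole chain of arithmetic above can be reproduced in a few lines of Python, using the molecular weights (28, 32 and 40 g/mol) and molar fractions from the list above:

```python
# Reproduce the chain of estimates, step by step.
atm_mass_kg = 5.15e18                  # mass of the atmosphere, from Wikipedia
molar_mass_air = (0.7808 * 0.028       # nitrogen contribution
                  + 0.2095 * 0.032     # oxygen contribution
                  + 0.0093 * 0.040)    # argon contribution; total ~0.02896 kg/mol

moles_air = atm_mass_kg / molar_mass_air   # ~1.78e20 mol of air
moles_o2 = moles_air * 0.2095              # ~3.73e19 mol of O2
moles_o2_lost = moles_o2 * 25e-6           # 25 ppm decline per year
carbon_kg = moles_o2_lost * 0.012          # 1 mol C consumes 1 mol O2; 12 g/mol
carbon_gt = carbon_kg / 1e12               # ~11.2 Gt of carbon per year
```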

Looking up the sources, I obtained the following estimate of global carbon emissions, which indicates that emissions are currently running at about 10 Gt of carbon per year.

Carbon Emissions


So Wikipedia tells me that humanity emits roughly 10 Gt of carbon per year, but based on measurements at the South Pole, we infer that 11.2 Gt of carbon per year is being emitted and consuming the concomitant amount of oxygen. Mmmmm.

First of all, we notice that these figures actually agree within roughly 10%, which is pleasing.

  • But what is the origin of the disagreement?
  • Could it be that the data from the South Pole is not representative?

I downloaded data from the Scripps Institute for a number of sites and the graph below shows recent data from Barrow in Alaska alongside the South Pole data. These locations are roughly half a world – about 20,000 km – apart.

Graph 2

Fascinatingly, the ‘breathing’ parts of the data are out of phase! Presumably this arises from the phasing of summer and winter in the northern and southern hemispheres.

But significantly, the slopes of the trend lines differ by only 1%. So global variability doesn’t seem to be able to explain the 10% difference between the rate of carbon burning predicted from the decline of atmospheric oxygen (11.2 Gt C per year) and the number I got off Wikipedia (10 Gt C per year).

Wikipedia’s number was obtained from the Carbon Dioxide Information and Analysis Centre (CDIAC), which bases its estimate on the stated oil, gas and coal consumption of countries around the world.

My guess is that there is considerable uncertainty – on the order of a few percent – in both the CDIAC estimate and the Scripps Institute estimates. So agreement at the level of about 10% is actually – in the context of a blog article – acceptable.


My conclusion is that – as they say so clearly over at the two degrees project – we are in deep trouble. Oxygen depletion is actually just an interesting diversion.

The most troubling graph they present shows

  • the change in CO2 concentration over the last 800,000  years, shown against the left-hand axis,


  • the estimated change in Earth’s temperature over the last 800,000  years, shown  along the right-hand axis.

The correlation between the two quantities is staggering, and the conclusion is terrifying.

We’re cooked…


Weather Station Comparison

January 7, 2019

My new weather station is on the top left of the picture. The old weather station is in the middle of the picture on the right.

Back in October 2015 I installed a weather station at the end of my back garden and wrote about my adventures at length (Article 1 and Article 2)

Despite costing only £89, it was wirelessly linked to a computer in the house which uploaded data to weather aggregation sites run by the Met Office and Weather Underground. Using these sites, I could compare my readings with stations nearby.

I soon noticed that my weather station seemed to report temperatures which tended to be slightly higher than other local stations. Additionally, I noticed that as sunshine first struck the station in the morning, the reported temperature seemed to rise suddenly, indicating that the thermometer was being directly heated by the sunlight rather than sensing the air temperature.

So I began to think that the reported temperatures might sometimes be in error. Of course, I couldn’t prove that because I didn’t have a trusted weather station that I could place next to it.

So in October 2018 I ordered a new Youshiko Model YC9390 weather station, costing a rather extravagant £250.

Youshiko YC9390

The new station is – unsurprisingly – rather better constructed than the old one. It has a bigger, brighter, internal display and it links directly to Weather Underground via my home WI-FI and so does not require a PC. Happily it is possible to retrieve the data from Weather Underground.

The two weather stations are positioned about 3 metres apart and at slightly different heights, but in broad terms, their siting is similar.

Over the last few days of the New Year break, and the first few days of my three-day week, I took a look at how the two stations compared. And I was right! The old station is affected by sunshine, but the effect was significantly larger than I suspected.


I compared the temperature readings of the two stations over the period January 4th, 5th and 6th. The 4th was a bright, almost cloudless, cold winter day. The other two days were duller, but warmer, and all three days were almost windless.

The graphs below (all drawn to the same scale) show the data from each station versus time-of-day with readings to be compared against the left-hand axis.

Let’s look at the data from the 4th January 2019

4th January 2019

Data from the 4th January 2019. The red curve shows air temperature data from the old station and the blue curve shows data from the new station. Also shown in yellow is data showing the intensity of sunshine (to be read from the right-hand axis) taken from a station located 1 km away.

Two things struck me about this graph:

  • Firstly I was surprised by the agreement between the two stations during the night. Typically the readings are within ±0.2 °C and with no obvious offset.
  • Secondly I was shocked by the extent of the over-reading. At approximately 10 a.m. the old station was over-reading by more than 4 °C!

To check that this was indeed a solar effect I downloaded data from a weather station used for site monitoring at NPL – just over a kilometre away from my back garden.

This station is situated on top of the NPL building and the intensity of sunlight there will not be directly applicable to the intensity of sunshine in my back garden. But hopefully, it is indicative.

The solar intensity reached just over 200 watts per square metre, about 20% of the solar intensity on a clear midsummer day. And it clearly correlated with the magnitude of the excess heating.

Let’s look at the data from the 5th January 2019


Data from 5th January 2019. See previous graph and text for key.

The night-time 5th January data also shows agreement between the two stations as was seen on the 4th January.

However I was surprised to see that even on this dismally dull January day – with insolation failing to reach even 100 watts per square metre – there was a noticeable warming of the old station, amounting to typically 0.2 °C.

The timing of this weak warming again correlated with the recorded sunlight.

Finally let’s look at data from 6th January 2019


Data from 6th January 2019. See previous graph and text for key.

Once again the pleasing night-time agreement between the two station readings is striking.

And with an intermediate level of solar intensity the over-reading of the old station is less than on the 4th, but more than on the 5th.


I chose these dates for a comparison because on all three days wind speeds were low. This exacerbates the solar heating effect and makes it easier to detect.

The figures below show the same temperature data as in the graphs above, but now with the wind speed data plotted in green against the right-hand axis.

Almost every wind speed reading was 0 kilometres per hour, and during the nights there were only occasional flurries. During the day the flurries were slightly more frequent, but as a pedestrian, the day seemed windless.


Data from 4th of January 2019 now showing wind speed on the right-hand axis.


Data from 5th of January 2019 now showing wind speed on the right-hand axis.


Data from the 6th January 2019 showing wind speed against the right-hand axis.


My conclusion is that the new weather station shows a much smaller solar-heating effect than the old one.

It is unlikely that the new station is itself perfect. In fact there is no accepted procedure for determining what the ‘right answer’ is in a meteorological setting!

The optimal air temperature measurement strategy is usually to use a fan to suck air across a temperature sensor at a steady speed of around 5 metres per second – roughly 18 kilometres per hour! But stations that employ such arrangements are generally quite expensive.

Anyway, it is pleasing to have resolved this long-standing question.

Where to see station data

On Weather Underground the station ID is ITEDDING4 and its readings can be monitored using this link.

The Weather Underground ‘Wundermap’ showing worldwide stations can be found here. On a large scale the map shows local averages of station data, but as you zoom in, you can see the individual reporting stations.

The Met Office WOW site is here. Search on ‘Teddington’ if you would like to view the station data.

Christmas Bubbles

December 23, 2018

A time-lapse photograph of a glass of fizzy wine.

Recently I encountered the fantastic:

Effervescence in champagne and sparkling wines:
From grape harvest to bubble rise

This is a 115-page review article by Gérard Liger-Belair about bubbles in Champagne, my most favourite type of carbon dioxide emission.

Until January 30th 2019 it is freely downloadable using this link

Since the bubbles in champagne arguably add £10 to the price of a bottle of wine, I guess it is worth understanding exactly how that value is added.

I found GLB’s paper fascinating, with a delightful attention to detail. From amongst the arcane studies in the paper, here are three things I learned.

Thing 1: Amount of Gas

Champagne (and Prosecco and Cava) contains about 9 grams of carbon dioxide in each 750 ml bottle [1].

Since the molar mass of carbon dioxide is 44 g/mol, each bottle contains approximately 9/44 ≈ 0.2 moles of carbon dioxide.

If released as gas at atmospheric pressure and 10 °C, it would have a volume of approximately 4.75 litres – more than six times the volume of the bottle!
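This figure follows from the ideal gas law, V = nRT/p:

```python
# Volume of 9 g of CO2 released as gas at 10 °C and atmospheric pressure.
R = 8.314                  # gas constant, J/(mol K)
n = 9.0 / 44.0             # moles of CO2 in a bottle
T = 283.15                 # 10 °C in kelvin
p = 101325.0               # atmospheric pressure, Pa

volume_litres = 1000 * n * R * T / p   # ~4.75 litres
```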

This large volume of gas is said to be “dissolved” in the wine. The molecules can only leave when, by chance, they encounter the free surface of the wine.

Because the free-surface area of wine in a wine glass is usually larger than the combined surface area of bubbles, about 80% of the de-gassing happens through the liquid surface [2].

Thing 2: Bubble Size and Speed 

But fizzy wine is called “fizzy” because of the bubbles that seem to ceaselessly form on the inner surface of the glass.

Sadly, in a perfectly clean glass, such as one which has repeatedly been through a dishwasher, very few bubbles will form [3].

But if there are tiny cracks in the glass, or small specks of dust from, for example, a drying cloth, then these can trap tiny air bubbles and provide free-surfaces at which carbon dioxide can leave the liquid.

At first a bubble is just tens of nanometres in size, but it grows at a rate which depends upon the rate at which carbon dioxide enters the bubble.

As the bubble grows, its surface area increases allowing the rate at which carbon dioxide enters the bubble to increase.

Eventually the buoyancy of the bubble causes it to detach from its so-called ‘nucleation site’ (birthplace) and rise through the liquid.  This typically happens when bubbles are between 0.01 and 0.1 mm in diameter.

To such tiny bubbles, the wine is highly viscous, and at first the bubbles rise slowly. But as more carbon dioxide enters the bubble, the bubble grows [4] and its speed of rise increases. The rising speed is close to the so-called ‘Stokes’ terminal velocity. [5]
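For a sense of scale, the Stokes terminal velocity, v = 2r²g(ρ_liquid − ρ_gas)/9μ, can be evaluated for a typical bubble. The bubble radius and the liquid properties below are rough assumptions of mine, not figures from GLB’s paper, and the gas density is neglected next to the liquid’s.

```python
# Stokes terminal velocity for a small rising bubble (assumed values).
g = 9.81               # m/s^2
rho_liquid = 998.0     # kg/m^3, roughly that of water
mu = 1.5e-3            # Pa s, a little more viscous than water
r = 0.25e-3            # 0.5 mm diameter bubble, radius in metres

v = 2 * r**2 * g * rho_liquid / (9 * mu)   # metres per second
```

This gives a speed on the order of a few centimetres per second, consistent with the leisurely rise of bubbles in a glass.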

So when you look at a stream of bubbles you will see that at the bottom, the bubbles are small and close together and relatively slow-moving. As they rise through the glass, they grow, and their speed increases.

If you can bear to leave your glass undrunk for long enough, you should be able to see the rate of bubble formation slow as the carbon dioxide concentration falls.

This will be visible as an increase in the spacing of bubbles near the nucleation site of a rising ‘bubble train’.

Thing 3: Number of bubbles

Idle speculation often accompanies the consumption of fizzy wine.

And one common topic of speculation is the number of bubbles which can be formed in a glass of champagne [6]. We can now add to that speculation.

If a bubble has a typical diameter of approximately 1 mm as it reaches the surface, then each bubble will have a volume of approximately 0.5 cubic millimetres, or 0.0005 millilitres.

So the 4.75 litres of carbon dioxide in a bottle could potentially form 4750/0.0005 = 9.5 million bubbles per bottle!

If a bottle is used for seven standard servings then there are potentially 1.3 million bubbles per glass.
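
The arithmetic above is simple enough to check. Using the exact volume of a 1 mm sphere (π/6 ≈ 0.52 mm³) rather than the rounded 0.5 mm³ gives a slightly smaller bottle total:

```python
import math

gas_volume_ml = 4750                          # ml of CO2 per bottle
d_mm = 1.0                                    # typical bubble diameter
bubble_ml = (math.pi / 6) * d_mm**3 / 1000    # sphere volume in ml

bubbles_per_bottle = gas_volume_ml / bubble_ml
print(f"{bubbles_per_bottle/1e6:.1f} million bubbles per bottle")
print(f"{bubbles_per_bottle/7/1e6:.1f} million bubbles per glass")
```

This gives about 9.1 million bubbles per bottle and 1.3 million per glass, in line with the estimates above.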

In fact the number is generally smaller than this because as the concentration of carbon dioxide in the liquid falls, the rate of bubble formation falls also. And below approximately 4 grams of carbon dioxide per litre of wine, bubbles cease to form [7].

Thing 4: BONUS THING! Cork Speed

When the bottle is sealed there is a high pressure of carbon dioxide in the space above the wine. The pressure depends strongly on temperature [8], rising from approximately 5 atmospheres (500 kPa) if the bottle is opened at 10 °C to approximately 10 atmospheres (1 MPa) if the bottle is opened at 25 °C.

GLB uses high-speed photography to measure the velocity of the exiting cork, and gets results which vary from around 10 metres per second for a bottle at 4 °C to 14 metres per second for a bottle at 18 °C. [9]

I made my own measurements using my iPhone (see below) and the cork seems to move roughly 5 ± 2 cm in the 1/240th of a second between frames. So my estimate of the speed is about 12 ± 5 metres per second, roughly in line with GLB’s estimates.
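
The frame-by-frame estimate is just displacement multiplied by frame rate:

```python
# Estimating cork speed from successive slow-motion video frames.
frame_rate = 240          # frames per second (iPhone slow-motion mode)
displacement_m = 0.05     # m, ~5 cm cork movement between frames
uncertainty_m = 0.02      # m, ~2 cm reading uncertainty per frame

speed = displacement_m * frame_rate
speed_err = uncertainty_m * frame_rate
print(f"cork speed = {speed:.0f} +/- {speed_err:.0f} m/s")
```

Note that the uncertainty in reading the cork position off a single blurry frame dominates the estimate.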

Why this matters

When we look at absolutely any phenomenon, there is a perspective from which that phenomenon – no matter how mundane or familiar – can appear profound and fascinating.

This paper has opened my eyes, and I will never look at a glass of Champagne again in quite the same way.

Wishing you happy experimentation over the Christmas break.



[1] Page 8 Paragraph 2

[2] Page 85 Section 6.3

[3] Page 42 Section 5.2

[4] Page 78 Figure 59

[5] Page 77 Figure 58

[6] Page 84 Section 6.3 & Figure 66

[7] Page 64

[8] Page 10 Figure 3

[9] Page 24 Figure 16

Ignorance: Eggs & Weather Forecasts

November 26, 2018

Every so often I learn something so simple and shocking that I find myself asking:

“How can I possibly not have known that already?”




While listening to Farming Today the other morning, I learned that:

Large eggs come from old hens

In order to produce large eggs – the most popular size with consumers – farmers need to allow hens to reach three years old.

So during the first and second years of their lives they will first lay small eggs, then medium eggs, and finally large eggs.

On Farming Today a farmer was explaining that egg production naturally resulted in a range of egg sizes, and that it was a challenge to find a market for small eggs. Then came the second bomb-‘shell’.

The yolk is roughly the same size in all eggs

What varies between small and large eggs is mainly the amount of egg white (albumen).

How could I have reached the age of 58 and not known that? Or not have even been curious about it?

Since learning this I have become a fan of small eggs: more yolk, fewer calories, more taste!

But my deep ignorance extends beyond everyday life and into the professional realm. And even my status as ‘an expert’ cannot help me.

Weather Forecasts & Weather Stations

Professionally I have become interested in weather stations and their role in both Numerical Weather Prediction (NWP, or just weather forecasting) and in Climate Studies.

And as I went about my work I had imagined that data from weather stations were used as inputs to NWP algorithms that forecast the weather.

But in September I attended CIMO TECO-2018 (Technical Conference on Meteorological and Environmental Instruments and Methods of Observation) in Amsterdam.

And there I learned in passing from an actual expert, that I had completely misunderstood their role.

Weather station data is not considered in the best weather forecasts.

And, on a moment’s reflection, it was completely obvious why.

Weather forecasting works like this:

  • First one gathers as much data as possible about the state of the atmosphere ‘now’. The key inputs to this are atmospheric ‘soundings’:
    • Balloon-borne ‘sondes’ fly upwards through the atmosphere sending back data on temperature, humidity and wind (speed and direction) versus height.
    • Satellites using infrared and microwave sensors probe downwards to work out the temperature and humidity at all points in the atmosphere in a swathe below the satellite’s orbit.
  • The NWP algorithms accept this vast amount of data about the state of the atmosphere, and then use basic physics to predict how the state of the entire atmosphere will evolve over the coming hours and days

And then, after working out the state of the entire atmosphere, the expected weather at ground level is extracted.

Visualisation of the amount of moisture distributed across different heights in the atmosphere based on a single pass of a ‘microwave sounding’ satellite. The data gathered at ground level is just a tiny fraction of the data input to NWP models. Image credit: NASA/JPL-Caltech

Ground-based weather stations are still important:

  • They are used to check the outputs of the NWP algorithms.
  • But they are not used as inputs to the NWP algorithms.

So why did I not realise this ‘obvious’ fact earlier? I think it was because amongst the meteorologists and climate scientists with whom I spoke, it was so obvious as to not require any explanation.

Life goes on

So I reached the age of 58 without knowing about hens’ eggs or the role of weather stations in weather forecasting.

I don’t know how it happened. But it did. And I suspect that many people have similar areas of ignorance, even regarding aspects of life with which we are totally familiar – such as eggs – or where one is nominally an expert.

And so life goes on. Anyway…

This pleasing Met Office video shows the importance of understanding the three-dimensional state of the atmosphere…

And here is a video of some hens


Mug Cooling: Salty fingers

November 23, 2018

You wait years for an article about heat transfer at beverage-air interfaces and then four come along at once!

When I began writing these articles (1, 2, 3) I was just curious about the effect of insulation and lids.

But as I wrote more I had two further insights.

  • Firstly the complexity of the processes at the interface was mind-boggling!
  • Secondly, I realised that cooling beverages are just one example of the general problem of energy and material transfer at interfaces.

This is one of the most important processes that occurs on Earth. For example, it is how the top layer of the oceans – where most of the energy arriving on Earth from the Sun is absorbed – exchanges energy with the deeper ocean and the atmosphere.

But in the oceans there is another factor: salinity.


Sea water typically contains 35 grams of salt per litre of water, and is about 2.4% denser than pure water.

So pure water – such as rain water falling onto the ocean surface – will tend to float above the brine.

This effect is exacerbated if the pure water is warm. For example, water at 60 °C is approximately 1.5% less dense than water at around 20 °C.
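
A quick comparison of the densities involved, using textbook values for pure water and an assumed figure for seawater consistent with the 2.4% quoted above:

```python
# Rough densities (kg/m^3); pure-water values from standard tables,
# the seawater value is assumed (35 g salt per litre, ~20 C).
rho_pure_20C = 998.2      # pure water at 20 C
rho_pure_60C = 983.2      # pure water at 60 C
rho_sea_20C = 1022.0      # seawater around 20 C (assumed)

salinity_effect = (rho_sea_20C - rho_pure_20C) / rho_pure_20C
warming_effect = (rho_pure_20C - rho_pure_60C) / rho_pure_20C

print(f"salt makes water ~{salinity_effect*100:.1f}% denser")
print(f"heating 20->60 C makes water ~{warming_effect*100:.1f}% less dense")
```

So in the experiment below, both salinity and warmth push in the same direction: warm fresh water floats stably on cold brine.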


In the video at the top of the article I added warm pure water (with added red food colouring) to a glass of cold pure water (on the left) and a glass of cold salty water (on the right).

[For the purposes of this article I hope you will allow that glasses are a type of mug]

The degree to which the pure and salty water spontaneously separated surprised me.

But more fascinating was the mechanism of eventual mixing – a variant on ‘salt fingering’.


The formation of ‘salty fingers’ of liquid is ubiquitous in the oceans and arises from density changes caused by salt diffusion and heat transfer.

As the time-lapse section of the movie shows – eventually the structure is lost and we just see ‘mixed fluid’ – but the initial stages, filmed in real time, are eerily beautiful.

Now I can’t quite explain what is happening in this movie – so I am not going to try.

But the web has articles, home-made videos and fancy computer simulations.


Mug Cooling: The Lid Effect

November 12, 2018

Droplets collect near the rim of a mug filled with hot water.

During my mug cooling experiment last week, I was surprised to find that taking the lid off a vacuum-insulated mug increased its initial cooling rate by a factor of 7.5.

Removing the lid allowed air from the room to flow across the surface of the water, cooling it in two ways.

  • Firstly, the air would warm up when it contacted the hot water, and then carry heat away in a convective flow.
  • Secondly, some hot water would evaporate into the moving air and carry away so-called ‘latent heat’.

I wondered which of these two effects was more important.

I decided to work out the answer by calculating how much evaporation would be required to explain ALL the cooling. I could then check my calculation against the measured mass of water that was lost to evaporation.

Where to start?

I started with the cooling curve from the previous blog.


Graph#1: Temperature (°C) versus time (minutes) for water cooling in an insulated mug with and without a lid. Without a lid, the water cools more than 7 times faster.

Because I knew the mass of water (g) and its heat capacity (joule per gram per °C), I could calculate the rate of heat loss in watts required to cool the water at the observed rate.
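
This conversion is worth making explicit: the heat-loss rate is P = m × c × (dT/dt). The temperature samples below are illustrative values I have made up, chosen to give roughly the 40 watt initial rate discussed below, not the measured data:

```python
# Converting a cooling curve into a heat-loss rate: P = m * c * dT/dt.
m = 300.0        # g of water
c = 4.18         # J/(g K), specific heat capacity of water

# (time in seconds, temperature in C) - illustrative points only
samples = [(0, 85.0), (60, 83.1), (120, 81.4), (180, 79.8)]

for (t0, T0), (t1, T1) in zip(samples, samples[1:]):
    dTdt = (T1 - T0) / (t1 - t0)       # K/s (negative while cooling)
    power = -m * c * dTdt              # watts lost to the surroundings
    print(f"t = {t0}-{t1} s: {power:.1f} W")   # first interval ~40 W
```

Because the calculation differentiates noisy temperature data, the resulting power estimates are noisy too, which is why the graphs below need trend lines.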

In Graph#2 below I have plotted this versus the difference in temperature between the water and the room temperature, which was around 20 °C.


Graph#2: The rate of heat flow (in watts) calculated from the cooling curve versus the temperature difference (°C) from the ambient environment. The raw estimates are very noisy so the dotted lines are ‘best fit lines’ which approximately capture the trend of the data.

I was struck by two things: 

  • Firstly, without the lid, the rate of heat loss was initially 40 watts – which seemed very high.
  • Secondly:
    • When the lid was on, the rate of heat loss was almost a perfect straight line. This is broadly what one expects in a wide range of heat flow problems – the rate of heat flow is proportional to the temperature difference. But…
    • When the lid was off, the heat flow varied non-linearly with temperature difference.

To find out the effect of the lid, I subtracted the two curves from each other to get the difference in heat flow versus the temperature of the water above ambient (Graph#3).

[Technical Note: Because the data in Graph#2 is very noisy and irregularly spaced, I used Excel™ to work out a ‘trend line’ that describes the underlying ‘trend’ of the data. I then subtracted the two trend lines from each other.]


Graph#3: The dotted line shows the difference in power (watts) between the two curves in the previous graph. This should be a fair estimate for the heat loss across the liquid surface.

This curve now told me the extra rate of cooling caused by removing the lid.

If this was ALL due to evaporative cooling, then I could work out the expected loss of mass by dividing by the latent heat of vaporisation of water (approximately 2260 joules per gram) (Graph#4).
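
The conversion from heat flow to evaporation rate is a one-line division. The power values below are illustrative, spanning the range seen in the graphs:

```python
# If ALL the extra heat loss were carried by evaporation, the required
# evaporation rate is the power divided by the latent heat of
# vaporisation of water.
L_vap = 2260.0                          # J/g

def evaporation_rate_mg_per_s(power_watts):
    return power_watts / L_vap * 1000   # milligrams per second

for p in (40, 30, 20, 10):              # extra heat-loss power in watts
    print(f"{p} W -> {evaporation_rate_mg_per_s(p):.1f} mg/s")
```

So a 40 watt heat loss carried entirely by evaporation would require the water to evaporate at nearly 18 milligrams per second.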


Graph#4. The calculated rate of evaporation (in milligrams per second) that would be required to explain the increased cooling rate caused by removing the lid.

Graph#4 told me the rate at which water would need to evaporate to explain ALL the cooling caused by removing the lid.

Combining that result with the data in Graph#1, I worked out the cumulative amount of water that would need to evaporate to explain ALL the observed extra cooling (Graph#5).


Graph#5: The red dashed line shows the cumulative mass loss (g) required to explain all the extra cooling caused by removing the lid. The green dashed lines show the amount of water that actually evaporated in each of the two ‘lid off’ experiments. The green data shows additional measurements of mass loss versus time from a third experiment.

In Lid-Off Experiments #1 and #2, I had weighed the water before and after the cooling experiment, and so I knew that with the lid off I had lost 25 g and 31 g of water respectively – just under 10% of the water.

But Graph #5 really needed some data on the rate of mass loss, so I did an additional experiment where I didn’t measure the temperature, but instead just weighed the mug every few minutes. This is the data plotted on Graph#5 as discrete points.


In Graph#5, it’s clear that the measured rate of evaporation can’t explain all of the increased cooling rate, but it can explain about a third of it.

So evaporation is responsible for about a third of the extra cooling, with two thirds being driven by heat transfer to the flowing air above the cup.

It is also interesting that even though the cooling curves in Graph#1 are very similar, the amount of evaporation in Graph#5 is quite variable.

The video below is backlit to show the ‘steam’ rising above the mug, and it is clear that the particular patterns of air flow are very variable.

The actual amount of evaporation depends on the rate of air flow across the water surface, and that is driven both by

  1. natural convection – driven by the hot low-density air rising, but also by…
  2. forced convection – draughts flowing above the cup.

I don’t know, but I suspect it is this variability in air flow that caused the variability in the amount of evaporation.


I have spent several hours on these calculations. And I don’t really know why.

Partly, I was just curious about the answer.

Partly, I wanted to share my view that it is simply amazing how much subtle physics is taking place around us all the time.

And partly, I am still trying to catch my breath after deciding to go ‘part-time’ from next year. Writing blog articles such as this is part of just keeping on keeping on until something about the future becomes clearer.

P.S. Expensive Mugs

Finally, on the off-chance that (a) anybody is still reading and (b) they actually care passionately about the temperature of their beverages, and (c) they are prepared to spend £80 on a mug, then the Ember temperature-controlled ceramic mug may be just the thing for you. Enjoy 🙂


Mug Cooling: Initial Results

November 7, 2018

One of life’s greatest pleasures is a nice cup of tea or coffee.

  • But what temperature makes the drink ‘nice’?
  • And how long after making the beverage should we wait to drink it?
  • And what type of mug is optimal?

To answer these questions I devised a research proposal involving temperature measurements made inside mugs during the cooling process.

I am pleased to tell you that my proposal was fully-funded in its initial stage by the HBRC*, having scored highly on its societal impact.

Experimental Method

The basic experiment consisted of pouring approximately 300 ml of water (pre-stabilised at 90 °C) into a mug sitting on a weighing scale. Weighing allowed a low-uncertainty assessment of the amount of water added.

The temperature of the water was measured every 10 seconds using four thermocouples held in place by a wooden splint. The readings were generally very similar and so in the graphs below I have just plotted the average of the four readings.

Experiments were conducted for a fancy vacuum-insulated mug (with and without its lid) and a conventional thick-walled ceramic mug. The results for the vacuum-insulated mug without its lid were so surprising that I repeated them.



The average temperature of the water in the mugs is shown in the two graphs below.

The first graph shows all the data – more than 8 hours for the vacuum-insulated mug – and the second graph shows the initial behaviour.

Also shown are horizontal lines at various temperatures that I determined (in a separate series of experiments) to be the optimal drinking range.


The average temperature of the water in the mugs versus time.


The first 120 minutes of the cooling curves. The water was poured in at 4 minutes.


The most striking feature of the cooling curves is the massive difference between the results for the vacuum insulated mug with, and without, its lid.

As I mentioned at the start, the result was so striking that I repeated the measurements (marked as #1 and #2) on the graphs.

The table below shows how many minutes it took for the water to cool to the three states highlighted on the graphs above:

  • Too hot to drink, but just sippable
  • Mmmm. A nice hot cuppa.
  • I’ll finish this quickly otherwise it’ll be too cold.

Minutes to reach status

                          Vacuum-Insulated Mug          Ceramic Mug
                          No Lid       With Lid
  Just Sippable              –             –                 –
  Upper Drinkable Limit     12            24                 –
  Lower Drinkable Limit      –             –                 –

The insulating prowess of the vacuum insulated mug (with lid) is outstanding.

But the purpose of a mug is not simply to prevent cooling. It is to enable drinking! 

So to me this data raises a profound question about the raison d’être for vacuum insulated mugs.

  • Who makes a cup of coffee and then thinks “Mmm, that’ll be just right to drink in two and a half hours’ time!”?

Admittedly, the coffee will then stay in the drinkable range for an impressive two hours. But still.

In contrast, the ceramic mug cools the hot liquid initially and allows it to reach the optimal drinking temperature after just a few minutes.

Further work

The review committee rated this research very highly and suggested two further research proposals.

  • The first concerned the explanation for the very large effect of removing the lid from the vacuum-insulated mug. That research has already been carried out and will be the subject of a further report in this journal.
  • The second concerned the effect of milk addition which could significantly affect the time to reach the optimal drinking temperature. That research proposal is currently being considered by HBRC.


*HBRC = Hot Beverage Research Council
