Archive for the ‘Personal’ Category

Learning about weather

March 17, 2019

I have just completed a FREE! ‘Learn About Weather’ course, and slightly to my surprise I think I have learned some things about the weather!

Learning

Being an autodidact in the fields of Weather and Climate, I have been taught by an idiot. So ‘attending’ online courses is a genuine pleasure.

All I have to do is to listen – and re-listen – and then answer the questions. Someone else has selected the topics they feel are most important and determined the order of presentation.

Taking a course on-line allows me to expose my ignorance to no-one but myself and the course-bot. And in this low-stress environment it is possible to remember the sheer pleasure of just learning stuff.

Previously I have used the FutureLearn platform for courses on Global Warming, Soil, and Programming in Python. These courses have been relatively non-technical and excellent introductions to subjects of which I have little knowledge. I have also used the Coursera platform for a much more thorough course on Global Warming.

So what did I learn? Well, several things about why Global Circulation Cells are the size they are, the names of the clouds, and how tornadoes start to spin. But perhaps the best bit was finally getting my head around ‘weather fronts’.

Fronts: Warm and Cold

I had never understood the terms ‘warm front’ and ‘cold front’ on weather forecasts. I had looked at the charts with the isobars and thought that somehow the presence or absence of ‘a front’ could be deduced by the shapes of the lines. I was wrong. Allow me to try to explain my new insight.

Air Mixing

Air in the atmosphere doesn’t mix like air in a room. Air in a room generally mixes quite thoroughly and quite quickly. If someone sprays perfume in one corner of the room, the perfume spreads through the air quickly.

But on a global scale, air doesn’t mix quickly. Air moves around as ‘big blobs’ and mixing takes place only where the blobs meet. These areas of mixing between air in different blobs are called ‘fronts’.

[Figure: two blobs of air meeting, with a mixing region – a front – between them]

In the ‘mixing region’ between the two blobs, the warm – generally wet – air meets the cold air and the water vapour condenses to make clouds and rain. So fronts are rain-forming regions.

Type of front

However it is unusual for two blobs of air to sit still. In general one ‘blob’ of air is ‘advancing’ and the other is ‘retreating’.

This insight was achieved just after the First World War and so the interfaces between the blobs were referred to as ‘fronts’ after the name for the interface between fighting armies. 

  • If the warm air is advancing, then the front is called a warm front, and
  • if the cold air is advancing, then the front is called a cold front.

Surprisingly cold fronts and warm fronts are quite different in character.

Warm Fronts 

When a blob of warm air advances, because it tends to be less dense than the cold air, it rises above the cold air.

Thus the mixing region extends ahead of the location on the ground where the temperature of the air will change.

The course told me the slope of the mixing region was shallow, as low as 1 in 150. So as the warm air advances, there is a region of low, rain-forming cloud that can extend for hundreds of kilometres ahead of it.
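As a quick sanity check on that ‘hundreds of kilometres’ claim, here is a minimal Python sketch. The 3 km cloud-top height is my own assumption, not a figure from the course:

```python
# Horizontal extent of a warm front's rain band, assuming the mixing
# region slopes at 1 in 150 and rain-forming cloud reaches ~3 km up.
slope = 1 / 150       # vertical rise per unit horizontal distance
cloud_top_m = 3000    # assumed height of the rain-forming cloud (m)

extent_km = (cloud_top_m / slope) / 1000
print(f"Rain band extends roughly {extent_km:.0f} km ahead of the front")
# -> roughly 450 km: 'hundreds of kilometres', as the course said
```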

[Figure: a warm front – warm air gliding up over cold air along a shallow mixing region that extends far ahead of the surface front]

So on the ground, what we experience is hours of steady rain, and then the rain stops as the temperature rises.

Cold Fronts 

When a blob of cold air advances, because it tends to be more dense than the warm air, it slides below it. But sliding under an air mass is harder than gliding above it – I think this is because of friction with the ground.

As a result there is a steep mixing region which extends a little bit ahead, and a short distance behind the location on the ground where the temperature of the air changes.

[Figure: a cold front – cold air pushing under warm air along a steep mixing region]

So as the cold air advances, there is a region of intense rain just before and for a short time after.

So on the ground what we experience are stronger, but much shorter, rain events at just about the same time as the temperature falls. There generally follows some clearer air – at least for a short while.

Data

I had assumed that because of the messy nature of reality compared to theory, real weather data would look nothing like what the simple models above might lead me to expect. I was wrong!

As I was learning about warm and cold fronts last weekend (10 March 2019), I happened to look at my weather station data and there – in a single day – was evidence of exactly what I was learning: a warm front passing over at about 6:00 a.m. and a cold front passing over at about 7:00 p.m.

  • You can look at the data from March 10th and zoom in using this link to Weather Underground.

This is the general overview of the air temperature, humidity, wind speed, rainfall and air pressure data. The left-hand side represents midnight on Saturday/Sunday and the right-hand side represents midnight on Sunday/Monday.

[Figure: overview of the air temperature, humidity, wind speed, rainfall and air pressure data for 10 March 2019]

The warm front approaches overnight and reaches Teddington at around 6:00 a.m.:

  • Notice the steady rainfall from midnight onwards, and then as the rain eases off, the temperature rises by about 3 °C within half an hour.

The cold front reaches Teddington at around 7:00 p.m.:

  • There is no rain in advance of the front, but just as the rain falls – the temperature falls by an astonishing 5 °C!

[Figure: detail of the temperature and rainfall data as the two fronts pass]

Of course there is a lot of other stuff going on. I don’t understand how these frontal changes relate to the pressure changes and the sudden rise and fall of the winds as the fronts pass.

But I do feel I have managed to link what I learned on the course to something I have seen in the real world. And that is always a good feeling.

P.S. Here’s what the Met Office have to say about fronts…

Global Oxygen Depletion

February 4, 2019

While browsing over at the two degrees institute, I came across this figure for atmospheric oxygen concentrations measured at a station at the South Pole.

[Graph: the change in atmospheric oxygen concentration measured at the South Pole]

The graph shows the change in:

  • the ratio of oxygen to nitrogen molecules in samples of air taken at a particular date

to

the ratio of oxygen to nitrogen molecules in samples of air taken in the 1980s.

The sentence above is complicated, but it can be interpreted without too many caveats as simply the change in oxygen concentration in air measured at the South Pole.

We see an annual variation – the Earth ‘breathing’ – but more worryingly we see that:

  • The amount of oxygen in the atmosphere is declining.

It’s a small effect, and will only reach a 0.1% decline – 1000 parts per million – in 2035 or so. So it won’t affect our ability to breathe. Phewww. But it is nonetheless interesting.

Averaging the data from the South Pole over the years since 2010, the oxygen concentration appears to be declining at roughly 25 parts per million per year.

Why?

The reason for the decline in oxygen concentration is that we are burning carbon to make carbon dioxide…

C + O₂ = CO₂

…and as we burn carbon, we consume oxygen.

I wondered if I could use the measured rate of decline in oxygen concentration to estimate the rate of emission of carbon dioxide.

How much carbon is that?

First I needed to know how much oxygen there was in the atmosphere. I considered a number of ways to calculate that, but it being Sunday, I just looked it up in Wikipedia. There I learned that the atmosphere has a mass of about 5.15×10¹⁸ kg.

I also learned the molar fractional concentration of the key gases:

  • nitrogen (molecular weight 28): 78.08%
  • oxygen (molecular weight 32): 20.95%
  • argon (molecular weight 40): 0.93%

From this I estimated that the mass of 1 mole of the atmosphere was 0.02896 kg/mol. And so the mass of the atmosphere corresponded to…

5.15×10¹⁸ / 0.02896 = 1.78×10²⁰

…moles of atmosphere. This would correspond to roughly…

1.78×10²⁰ × 0.2095 = 3.73×10¹⁹

…moles of oxygen molecules. This is the number that appears to be declining by 25 parts per million per year i.e.

3.73×10¹⁹ × 0.000 025 = 9.32×10¹⁴

…moles of oxygen molecules are being consumed per year. From the chemical equation, this must correspond to exactly the same number of moles of carbon: 9.32×10¹⁴. Since 1 mole of carbon weighs 12 g, this corresponds to…

  • 1.12×10¹⁶ g of C,
  • 1.12×10¹³ kg of C
  • 1.12×10¹⁰ tonnes of C
  • 11.2 gigatonnes (Gt) of C
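For anyone who wants to check the arithmetic, the whole chain can be written as a few lines of Python:

```python
# Recreating the back-of-envelope sum above. Inputs: the mass and
# composition of the atmosphere (from Wikipedia) and the ~25 ppm/year
# oxygen decline read off the South Pole data.
atmosphere_kg = 5.15e18
molar_mass = 0.7808 * 0.028 + 0.2095 * 0.032 + 0.0093 * 0.040  # ~0.02896 kg/mol

moles_of_air = atmosphere_kg / molar_mass       # ~1.78e20 mol
moles_of_O2 = moles_of_air * 0.2095             # ~3.73e19 mol
O2_consumed_per_year = moles_of_O2 * 25e-6      # ~9.3e14 mol/year

# C + O2 = CO2, so one mole of carbon burns per mole of oxygen consumed
carbon_kg = O2_consumed_per_year * 0.012        # 12 g per mole of carbon
print(f"Implied carbon burned: {carbon_kg / 1e12:.1f} Gt per year")  # ~11.2
```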

Looking up the sources behind the sources, I obtained the following estimate for global carbon emissions, which indicates that emissions are currently running at about 10 Gt of carbon per year.

[Graph: estimated global carbon emissions]

Analysis

So Wikipedia tells me that humanity emits roughly 10 Gt of carbon per year, but based on measurements at the South Pole, we infer that 11.2 Gt of carbon per year is being emitted, consuming the concomitant amount of oxygen. Mmmmm.

First of all, we notice that these figures actually agree within roughly 10% – which is pleasing.

  • But what is the origin of the disagreement?
  • Could it be that the data from the South Pole is not representative?

I downloaded data from the Scripps Institute for a number of sites and the graph below shows recent data from Barrow in Alaska alongside the South Pole data. These locations are roughly half a world – about 20,000 km – apart.

[Graph: recent oxygen data from Barrow, Alaska alongside the South Pole data]

Fascinatingly, the ‘breathing’ parts of the data are out of phase! Presumably this arises from the phasing of summer and winter in the northern and southern hemispheres.

But significantly the slopes of the trend lines differ by only 1%. So global variability doesn’t seem able to explain the 10% difference between the rate of carbon burning predicted from the decline of atmospheric oxygen (11.2 Gt C per year), and the number I got off Wikipedia (10 Gt C per year).

Wikipedia’s number was obtained from the Carbon Dioxide Information and Analysis Centre (CDIAC), which bases its estimate on statistics of stated oil, gas and coal consumption from countries around the world.

My guess is that there is considerable uncertainty – on the order of a few percent – in both the CDIAC estimate and the Scripps Institute estimates. So agreement at the level of about 10% is actually – in the context of a blog article – acceptable.

Conclusions

My conclusion is that – as they say so clearly over at the two degrees project – we are in deep trouble. Oxygen depletion is actually just an interesting diversion.

The most troubling graph they present shows

  • the change in CO₂ concentration over the last 800,000 years, shown against the left-hand axis,

alongside

  • the estimated change in Earth’s temperature over the last 800,000 years, shown along the right-hand axis.

The correlation between the two quantities is staggering, and the conclusion is terrifying.

We’re cooked…

 

The Death Knell for SI Base Units?

January 30, 2019

I love the International System of Units – the SI. 

Rooted in humanity’s ubiquitous need to measure things, the SI represents a hugely successful global human enterprise – a triumph of cooperation over competition, and accord over discord.

Day-by-day it enables measurements made around the world to be meaningfully compared with low uncertainty. And by doing this it underpins all of the sciences, every branch of engineering, and trade.

But changes are coming to the SI, and even after having worked on these changes for the last 12 years or so, in my recent reflections I have been surprised at how profound the changes will be.

Let me explain…

The Foundations of the SI

The SI is built upon the concept of ‘base units’. Unit amounts of any quantity are defined in terms of combinations of unit quantities of just a few ‘base units’. For example:

  • The SI unit of speed is the ‘metre per second’, where one metre and one second are the base units of length and time respectively.
    • The ‘metre per second’ is called a derived unit.
  • The SI unit of acceleration is the ‘metre per second per second’
    • Notice how the same base units are combined differently to make this new derived unit.
  • The SI unit of force is the ‘kilogram metre per second per second’.
    • This is such a complicated phrase that this derived unit is given a special name – newton. But notice that it is still a combination of base units.

And so on. All the SI units required for science and engineering can be derived from just seven base units: the kilogram, metre, second, ampere, kelvin, mole and candela.

So these seven base units in a very real sense form the foundations of the SI.

The seven base units of the SI

This Hierarchical Structure is Important.

Measurement is the quantitative comparison of a thing against a standard.

So, for example, when we measure a speed, we are comparing the unknown speed against our unit of speed which in the SI is the metre per second.

So a measurement of speed can never be more accurate than our ability to create a standard speed – a known number of ‘metres per second‘ – against which we can compare our unknown speed.

FOR EXAMPLE: Imagine calibrating a speedometer in a car. The only way we can know if it indicates correctly is if we can check the reading of the speedometer when the car is travelling at a known speed – which we would have to verify with measurements of distance (in metres) and time (in seconds).

To create a standard speed, we need to create known distances and known time intervals. So a speed can never be more accurately known than our ability to create standard ‘metres’ and ‘seconds’.

So the importance of the base units is that the accuracy with which they can be created represents a limit to the accuracy with which we could conceivably measure anything! Or at least, anything expressed in terms of derived units in the SI.

This fact has driven the evolution of the SI. Since its founding in 1960, the definitions of what we mean by ‘one’ of the base units have changed only rarely. And the aim has always been the same – to create definitions which will allow more accurate realisations of the base units. This improved accuracy would then automatically affect all the derived units in the SI.

Changes are coming to the SI.

In my earlier articles (e.g. here) I have mentioned that on 20th May 2019 the definitions of four of the base units will change. Four base units changing at the same time!? Radical.

Much has been made of the fact that the base units will now be defined in terms of constants of nature. And this is indeed significant.

But in fact I think the re-definitions will lead to a broader change in the structure of the SI.

Eventually, I think they will lead to the abandonment of the concept of a ‘base unit’, and the difference between ‘base’ units and ‘derived’ units will slowly disappear.

The ‘New’ SI.


The seven defining constants of the ‘New’ SI.

In the ‘New’ SI, the values of seven natural constants have been defined to have exact values with no measurement uncertainty.

These are constants of nature that we had previously measured in terms of the SI base units. The choice to give them an exact value is based on the belief – backed up by experiments – that the constants are truly constant!

In fact, some of the constants appear to be the most unchanging features of the universe that we have ever encountered.

Here are four of the constants that will have fixed numerical values in the New SI:

  • the speed of light in a vacuum, conventionally given the symbol c,
  • the frequency of microwaves absorbed by a particular transition in caesium atoms, conventionally given the symbol ΔνCs (this funny vee-like symbol ν is the Greek letter ‘n’ pronounced as ‘nu’),
  • the Planck constant, conventionally given the symbol h,
  • the magnitude of the charge on the electron, conventionally given the symbol e.

Electrical Units in the ‘Old’ SI and the ‘New’ SI.

In the Old SI the base unit referring to electrical quantities was the ampere.

If one were to make a measurement of a voltage (in the derived unit volt) or electrical resistance (in the derived unit ohm), then one would have to establish a sequence of comparisons that would eventually refer to combinations of base units. So:

  • one volt was equal to one kg m² s⁻³ A⁻¹ (or one watt per ampere)
  • one ohm was equal to one kg m² s⁻³ A⁻² (or one volt per ampere)

Please don’t be distracted by this odd combination of seconds, metres and kilograms. The important thing is that in the Old SI, volts and ohms were derived units with special names.

To make ‘one volt’ one needed experiments that combined the base units for the ampere, the kilogram, the second and the metre in a clever way to create a voltage known in terms of the base units.

But in the New SI things are different.

  • We can use an experiment to create volts directly in terms of the exactly-known constants ΔνCs×h/e.
  • And similarly we can create resistances directly in terms of the exactly-known constant h/e².

Since h and e and ΔνCs have exact values in the New SI, we can now create volts and ohms without any reference to amperes or any other base units.
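To get a feel for the numbers, here is a small Python sketch using the exact values fixed in the New SI. (Real voltage and resistance standards – Josephson junctions and quantum Hall devices – involve extra integer factors which I am glossing over here, just as I have above.)

```python
# Exact values of three of the defining constants in the New SI
h = 6.62607015e-34          # Planck constant, joule seconds
e = 1.602176634e-19         # elementary charge, coulombs
delta_nu_Cs = 9192631770    # caesium transition frequency, hertz

# A voltage scale built only from exactly-known constants...
print(f"delta_nu_Cs x h / e = {delta_nu_Cs * h / e:.4e} V")  # ~3.80e-05 V

# ...and a resistance scale built the same way
print(f"h / e^2 = {h / e**2:.3f} ohm")                       # 25812.807 ohm
```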

This change is not just a detail. In an SI based on physical constants with exactly-known values, the ability to create accurate realisations of units no longer discriminates between base units and derived units – they all have the same status.

It’s not just electrical units

Consider the measurement of speed that I discussed earlier.

In the Old SI we would measure speed in derived units of metres per second i.e. in terms of the base units the metre and the second. And so we could never measure a speed with a lower fractional uncertainty than we could realise the composite base units, the metre or the second.

But in the New SI,

  • one metre can be realised in terms of the exactly-known constants c/ΔνCs
  • one second can be realised in terms of the exactly-known constant ΔνCs

So as a consequence,

  • one metre per second can be realised in terms of the exactly-known constant c

Since these constants are all exactly known, there is no reason why speeds in metres per second cannot be measured with an uncertainty which is lower than or equal to the uncertainty with which we can measure distances (in metres) or times (in seconds).

This doesn’t mean that it is currently technically possible to measure speeds with lower uncertainty than distances or times. What it means is that there is now nothing in the structure of the SI that would stop that being the case at some point in the future.

Is this good or bad?

So in the new SI, any unit – a derived unit or a base unit – can be expressed in terms of exactly-known constants. So there will no longer be any intrinsic hierarchy of uncertainty in the SI.

On 20th May 2019 as the new system comes into force, nothing will initially change. We will still talk about base units and derived units.

But as measurement science evolves, I expect that – as is already the case for electrical units – the distinction between base units and derived units will slowly disappear.

And although I feel slightly surprised by this conclusion, and slightly shocked, it seems to be only a good thing – making the lowest uncertainty measurements available in the widest possible range of physical quantities.

Weather Station Comparison

January 7, 2019

My new weather station is on the top left of the picture. The old weather station is in the middle of the picture on the right.

Back in October 2015 I installed a weather station at the end of my back garden and wrote about my adventures at length (Article 1 and Article 2).

Despite costing only £89, it was wirelessly linked to a computer in the house which uploaded data to weather aggregation sites run by the Met Office and Weather Underground. Using these sites, I could compare my readings with stations nearby.

I soon noticed that my weather station seemed to report temperatures which tended to be slightly higher than other local stations. Additionally, I noticed that as sunshine first struck the station in the morning, the reported temperature seemed to rise suddenly, indicating that the thermometer was being directly heated by the sunlight rather than sensing the air temperature.

So I began to think that the reported temperatures might sometimes be in error. Of course, I couldn’t prove that because I didn’t have a trusted weather station that I could place next to it.

So in October 2018 I ordered a new Youshiko Model YC9390 weather station, costing a rather extravagant £250.


The new station is – unsurprisingly – rather better constructed than the old one. It has a bigger, brighter internal display and it links directly to Weather Underground via my home Wi-Fi, and so does not require a PC. Happily, it is possible to retrieve the data from Weather Underground.

The two weather stations are positioned about 3 metres apart and at slightly different heights, but in broad terms, their siting is similar.

Over the last few days of the New Year break, and the first few days of my three-day week, I took a look at how the two stations compared. And I was right! The old station is affected by sunshine, but the effect was significantly larger than I suspected.

Comparison 

I compared the temperature readings of the two stations over the period January 4th, 5th and 6th. The 4th was a bright, almost cloudless, cold winter day. The other two days were duller but warmer, and all three days were almost windless.

The graphs below (all drawn to the same scale) show the data from each station versus time-of-day with readings to be compared against the left-hand axis.

Let’s look at the data from the 4th January 2019


Data from the 4th January 2019. The red curve shows air temperature data from the old station and the blue curve shows data from the new station. Also shown in yellow is data showing the intensity of sunshine (to be read from the right-hand axis) taken from a station located 1 km away.

Two things struck me about this graph:

  • Firstly I was surprised by the agreement between the two stations during the night. Typically the readings are within ±0.2 °C and with no obvious offset.
  • Secondly I was shocked by the extent of the over-reading. At approximately 10 a.m. the old station was over-reading by more than 4 °C!

To check that this was indeed a solar effect I downloaded data from a weather station used for site monitoring at NPL – just over a kilometre away from my back garden.

This station is situated on top of the NPL building and the intensity of sunlight there will not be directly applicable to the intensity of sunshine in my back garden. But hopefully, it is indicative.

The solar intensity reached just over 200 watts per square metre, about 20% of the solar intensity on a clear midsummer day. And it clearly correlated with the magnitude of the excess heating.
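For anyone who wants to try a comparison like this at home, it only takes a few lines of Python with pandas. The file names and column names below are my own placeholders – not Weather Underground’s actual export format:

```python
import pandas as pd

# Load the two station records and the solar data (hypothetical files)
old = pd.read_csv("old_station.csv", parse_dates=["time"], index_col="time")
new = pd.read_csv("new_station.csv", parse_dates=["time"], index_col="time")
sun = pd.read_csv("npl_solar.csv", parse_dates=["time"], index_col="time")

# Put everything onto a common 5-minute grid
df = pd.DataFrame({
    "t_old": old["temp_c"].resample("5min").mean(),
    "t_new": new["temp_c"].resample("5min").mean(),
    "solar": sun["w_per_m2"].resample("5min").mean(),
}).dropna()

# The 'excess' reading of the old station, and how it tracks the sunshine
df["excess"] = df["t_old"] - df["t_new"]
print(df[["excess", "solar"]].corr())
```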

Let’s look at the data from the 5th January 2019


Data from 5th January 2019. See previous graph and text for key.

The night-time 5th January data also shows agreement between the two stations as was seen on the 4th January.

However I was surprised to see that even on this dismally dull January day – with insolation failing to reach even 100 watts per square metre – there was a noticeable warming of the old station, amounting to typically 0.2 °C.

The timing of this weak warming again correlated with the recorded sunlight.

Finally let’s look at data from 6th January 2019


Data from 6th January 2019. See previous graph and text for key.

Once again the pleasing night-time agreement between the two station readings is striking.

And with an intermediate level of solar intensity the over-reading of the old station is less than on the 4th, but more than on the 5th.

Wind.

I chose these dates for a comparison because on all three days wind speeds were low. This exacerbates the solar heating effect and makes it easier to detect.

The figures below show the same temperature data as in the graphs above, but now with the wind speed data plotted in green against the right-hand axis.

Almost every wind-speed reading is 0 kilometres per hour: during the nights there were only occasional flurries. During the days the flurries were slightly more frequent, but to a pedestrian the days seemed windless.


Data from 4th of January 2019 now showing wind speed on the right-hand axis.


Data from 5th of January 2019 now showing wind speed on the right-hand axis.


Data from the 6th January 2019 showing wind speed against the right-hand axis.

Conclusions 

My conclusion is that the new weather station shows a much smaller solar-heating effect than the old one.

It is unlikely that the new station is itself perfect. In fact there is no accepted procedure for determining what the ‘right answer’ is in a meteorological setting!

The optimal air temperature measurement strategy is usually to use a fan to suck air across a temperature sensor at a steady speed of around 5 metres per second – roughly 18 kilometres per hour! But stations that employ such arrangements are generally quite expensive.

Anyway, it is pleasing to have resolved this long-standing question.

Where to see station data

On Weather Underground the station ID is ITEDDING4 and its readings can be monitored using this link.

The Weather Underground ‘Wundermap’ showing worldwide stations can be found here. On a large scale the map shows local averages of station data, but as you zoom in, you can see the individual reporting stations.

The Met Office WOW site is here. Search on ‘Teddington’ if you would like to view the station data.

Getting off our eggs!

January 5, 2019

While listening to the radio last week, I heard a description of an astonishing experiment, apparently well known since the 1930s, but new to me.

Niko Tinbergen conducted experiments in which he replaced birds’ eggs with replicas and then studied how the birds responded to differently-sized replicas with modified markings.

Deirdre Barrett describes the results in her book Supernormal Stimuli:

Song birds abandoned their pale blue eggs dappled with grey to hop on black polka-dot day-glo blue dummies so large that the birds constantly slid off and had to climb back on.

Hearing this for the first time, I was shocked. But the explanation is simple enough.

The birds are hard-wired to respond to egg-like objects with specific patterns, and Tinbergen’s modified replicas triggered the nesting response more strongly than the bird’s own eggs.

Tinbergen coined the term ‘super-normal stimulus’ to describe stimuli that exceeded anything conceivable in the natural world.

Deirdre Barrett uses this shocking experimental result to reflect on some human responses to what are effectively super-normal stimuli in the world around us.

Using this insight, she points out that many of our responses are as simple and self-harming as the birds’ responses to the replica eggs.

The Book

In her short book Barrett writes clearly, makes her point, and then stops. It was a pleasure to read.

I will not attempt to replicate her exposition, but I was powerfully struck by the sad image of a bird condemned to waste its reproductive energy on a plaster egg, when its own eggs lay quietly in view.

I found it easy to find analogous instinctive self-harming patterns in my own life. Surely we all can.

But Barrett does not rant. She is not saying that we are all going to hell in a handcart.

She makes the point that super-normal stimuli are not necessarily negative. The visual arts, dance, music, theatre and literature can all be viewed as tricks/skills to elicit powerful and even life-changing responses to non-existent events.

In discussing television, her point is not that television is ‘bad’ per se, but that the intensity and availability of vicarious experiences exceeds anything a normal person is likely to encounter in real life.

If watching television enhances, educates and inspires, then great. But frequently we respond to the stimuli by just seeking more. In the UK on average we watch more than four hours of television per day.

Such a massive expenditure of time is surely equivalent to sitting on a giant plaster egg.

Barrett’s key point is the ubiquity of these super-normal stimuli in modern life – stimuli with which our instincts are ill-equipped to cope.

A rational response feels ‘un-natural’ because it requires conscious thought and reflection. For example, rather than just feeling that an image is ‘cute’, are we able to notice our own response and ask why someone might use caricatures which elicit a ‘cute’ response?

Barrett ends by pointing out that we humans are the only animals that can notice that we are sitting on metaphorical polka-dotted plaster eggs.

Even in adult life, having sat on polka-dotted plaster eggs for many years, we can come to an understanding that will allow us to get off the egg, reflect on the experience, and get on with something more meaningful.

I am clambering off some eggs as I write.

1000 days of weighing myself

January 1, 2019

Thank you for stopping by: Happy New Year!

Well, it’s the first of January 2019 and I am filled with trepidation as I begin a new phase of my career: working 3 days a week.

My plan is to take things one day at a time, and take heart from the fact that it will take only three days to go from one weekend to the next!

Weight

Anyway, the turn of the year means it is time to look at my weight again.

The graph below shows my weight in 2018 based on daily weighings. Also shown are monthly averages (green squares) with error bars drawn at ± one standard deviation. The red dotted line shows the yearly average.

My weight through 2018 based on daily weighings. Monthly averages are shown as green squares and the yearly average is shown as a dotted line.

My specific aim for 2018 had been to stay the same weight, and I did not quite manage that: the December monthly average was 0.5 kg above the January monthly average.

And it is clear that there was weight loss through the first part of the year and weight gain through the second part of the year.

But looking on the broader scale, the changes aren’t very significant.

The graph below shows the data for the last three years. This amounts to just over 1000 days or roughly one twentieth of my life.

[Graph: weight data for the last three years]

When viewed on this larger scale there doesn’t seem to be much to fuss about.

But I am sure that if I had not weighed myself daily, my weight would have crept back on considerably faster than it has.

A weight gain of 1 kg over one year represents an energy imbalance of only about 20 kilocalories of food per day – which is less than a mouthful of almost any food worth eating! (Link)
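The sum behind that claim, assuming the common rule of thumb that body tissue stores roughly 7,700 kilocalories per kilogram (my assumption – see the link for the details):

```python
# Daily energy imbalance implied by gaining 1 kg over one year
kcal_per_kg = 7700            # assumed energy content of body tissue
kg_gained_per_year = 1

imbalance = kg_gained_per_year * kcal_per_kg / 365
print(f"Energy imbalance: about {imbalance:.0f} kcal per day")  # ~21 kcal
```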

Anyway. Another year of weighing begins!

 

Christmas Bubbles

December 23, 2018

A time-lapse photograph of a glass of fizzy wine.

Recently I encountered the fantastic:

Effervescence in champagne and sparkling wines:
From grape harvest to bubble rise

This is a 115-page review article by Gérard Liger-Belair about bubbles in Champagne, my most favourite type of carbon dioxide emission.

Until January 30th 2019 it is freely downloadable using this link

Since the bubbles in champagne arguably add £10 to the price of a bottle of wine, I guess it is worth understanding exactly how that value is added.

I found GLB’s paper fascinating with a delightful attention to detail. From amongst the arcane studies in the paper, here are three things I learned.

Thing 1: Amount of Gas

Champagne (and Prosecco and Cava) contains about 9 grams of carbon dioxide in each 750 ml bottle [1].

Since the molar mass of carbon dioxide is 44 g/mol, each bottle contains approximately 9/44 ~ 0.2 moles of carbon dioxide.

If released as gas at atmospheric pressure and 10 °C, it would have a volume of approximately 4.75 litres – more than six times the volume of the bottle!
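The 4.75 litre figure follows from the ideal gas law, V = nRT/p. As a Python sketch:

```python
# Volume of the dissolved CO2 if released as gas at 10 degrees C and 1 atm
R = 8.314      # gas constant, J/(mol K)
n = 9 / 44     # ~0.2 moles of CO2 per bottle
T = 283.15     # 10 degrees C in kelvin
p = 101325     # atmospheric pressure, Pa

V_litres = n * R * T / p * 1000
print(f"{V_litres:.2f} litres = {V_litres / 0.75:.1f} bottle volumes")
# -> 4.75 litres, more than six times the 750 ml bottle
```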

This large volume of gas is said to be “dissolved” in the wine. The molecules can only leave when, by chance, they encounter the free surface of the wine.

Because the free-surface area of wine in a wine glass is usually larger than the combined surface area of bubbles, about 80% of the de-gassing happens through the liquid surface [2].

Thing 2: Bubble Size and Speed 

But fizzy wine is called ‘fizzy’ because of the bubbles that seem to ceaselessly form on the inner surface of the glass.

Sadly, in a perfectly clean glass, such as one which has repeatedly been through a dishwasher, very few bubbles will form [3].

But if there are tiny cracks in the glass, or small specks of dust from, for example, a drying cloth, then these can trap tiny air bubbles and provide free-surfaces at which carbon dioxide can leave the liquid.

At first a bubble is just tens of nanometres in size, but it grows at a rate which depends upon the rate at which carbon dioxide enters the bubble.

As the bubble grows, its surface area increases allowing the rate at which carbon dioxide enters the bubble to increase.

Eventually the buoyancy of the bubble causes it to detach from its so-called ‘nucleation site’ (birthplace) and rise through the liquid.  This typically happens when bubbles are between 0.01 and 0.1 mm in diameter.

To such tiny bubbles, the wine is highly viscous, and at first the bubbles rise slowly. But as more carbon dioxide enters the bubble, the bubble grows [4] and its speed of rise increases. The rising speed is close to the so-called ‘Stokes’ terminal velocity. [5]
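Here is a rough Python sketch of that terminal velocity for different bubble sizes. The liquid properties are my own assumptions for chilled fizzy wine, not values from the paper:

```python
# Stokes' terminal velocity for a small sphere: v = (2/9) * drho * g * r^2 / mu
g = 9.81       # gravitational acceleration, m/s^2
drho = 1000    # density difference, liquid minus gas, kg/m^3 (approx.)
mu = 1.5e-3    # dynamic viscosity, Pa s (assumed for chilled wine)

for diameter_mm in (0.05, 0.2, 0.5):
    r = diameter_mm * 1e-3 / 2
    v = (2 / 9) * drho * g * r**2 / mu
    print(f"d = {diameter_mm} mm: v = {v * 1000:.1f} mm/s")
# -> ~0.9 mm/s, ~15 mm/s and ~91 mm/s: bigger bubbles rise much faster
```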

So when you look at a stream of bubbles you will see that at the bottom, the bubbles are small and close together and relatively slow-moving. As they rise through the glass, they grow, and their speed increases.

If you can bear to leave your glass undrunk for long enough, you should be able to see the rate of bubble formation slow as the carbon dioxide concentration falls.

This will be visible as an increase in the spacing of bubbles near the nucleation site of a rising ‘bubble train’.

Thing 3: Number of bubbles

Idle speculation often accompanies the consumption of fizzy wine.

And one common topic of speculation is the number of bubbles which can be formed in a glass of champagne [6]. We can now add to that speculation.

If a bubble has a typical diameter of approximately 1 mm as it reaches the surface, then each bubble will have a volume of approximately 0.5 cubic millimetres, or 0.000 5 millilitres.

So the 4.75 litres of carbon dioxide in a bottle could potentially form 4750/0.0005 = 9.5 million bubbles per bottle!

If a bottle is used for seven standard servings then there are potentially 1.3 million bubbles per glass.
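Here is that arithmetic as a Python sketch – an upper bound, for the reason given in the next paragraph:

```python
import math

# An upper bound on the number of bubbles in a bottle
gas_volume_ml = 4750        # dissolved CO2, expressed as gas (Thing 1)
bubble_diameter_mm = 1.0    # typical diameter on reaching the surface

bubble_volume_ml = (math.pi / 6) * bubble_diameter_mm**3 / 1000  # ~0.0005 ml
bubbles_per_bottle = gas_volume_ml / bubble_volume_ml
print(f"{bubbles_per_bottle / 1e6:.1f} million bubbles per bottle")  # ~9
print(f"{bubbles_per_bottle / 7 / 1e6:.1f} million per glass")       # ~1.3
```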

In fact the number is generally smaller than this because as the concentration of carbon dioxide in the liquid falls, the rate of bubble formation falls also. And below approximately 4 grams of carbon dioxide per litre of wine, bubbles cease to form [7].

Thing 4: BONUS THING! Cork Speed

When the bottle is sealed there is a high pressure of carbon dioxide in the space above the wine. The pressure depends strongly on temperature [8], rising from approximately 5 atmospheres (500 kPa) if the bottle is opened at 10 °C to approximately 10 atmospheres (1 MPa) if the bottle is opened at 25 °C.

GLB uses high-speed photography to measure the velocity of the exiting cork, and gets results which vary from around 10 metres per second for a bottle at 4 °C to 14 metres per second for a bottle at 18 °C. [9]

I made my own measurements using my iPhone (see below) and the cork seems to move roughly 5 ± 2 cm in the 1/240th of a second between frames. So my estimate of the speed is about 12 ± 5 metres per second, roughly in line with GLB’s estimates.
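The sums are trivial, but here they are anyway as a Python sketch:

```python
# Cork speed from slow-motion iPhone footage at 240 frames per second
frame_interval = 1 / 240    # seconds between frames
displacement = 0.05         # metres moved between frames (5 +/- 2 cm)

speed = displacement / frame_interval
uncertainty = 0.02 / frame_interval
print(f"Cork speed: {speed:.0f} +/- {uncertainty:.0f} m/s")  # 12 +/- 5 m/s
```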

Why this matters

When we look at absolutely any phenomenon, there is a perspective from which that phenomenon – no matter how mundane or familiar – can appear profound and fascinating.

This paper has opened my eyes, and I will never look at a glass of Champagne again in quite the same way.

Wishing you happy experimentation over the Christmas break.

Santé!

References

[1] Page 8 Paragraph 2

[2] Page 85 Section 6.3

[3] Page 42 Section 5.2

[4] Page 78 Figure 59

[5] Page 77 Figure 58

[6] Page 84 Section 6.3 & Figure 66

[7] Page 64

[8] Page 10 Figure 3

[9] Page 24 Figure 16

Ignorance: Eggs & Weather Forecasts

November 26, 2018

Every so often I learn something so simple and shocking that I find myself asking:

“How can I possibly not have known that already?”

Eggs


While listening to Farming Today the other morning, I learned that:

Large eggs come from old hens

In order to produce large eggs – the most popular size with consumers – farmers need to allow hens to reach three years old.

So during the first and second years of their lives they will first lay small eggs, then medium eggs, and finally large eggs.

On Farming Today a farmer was explaining that egg production naturally resulted in a range of egg sizes, and that it was a challenge to find a market for small eggs. Then came the second bomb-‘shell’.

The yolk is roughly same size in all eggs

What varies between small and large eggs is mainly the amount of egg white (albumen).

How could I have reached the age of 58 and not known that? Or not have even been curious about it?

Since learning this I have become a fan of small eggs: more yolk, fewer calories, more taste!

But my deep ignorance extends beyond everyday life and into the professional realm. And even my status as ‘an expert’ cannot help me.

Weather Forecasts & Weather Stations

Professionally I have become interested in weather stations and their role in both Numerical Weather Prediction (NWP, or just weather forecasting) and in Climate Studies.

And as I went about my work I had imagined that data from weather stations were used as inputs to NWP algorithms that forecast the weather.

But in September I attended CIMO TECO-2018 (Technical Conference on Meteorological and Environmental Instruments and Methods of Observation) in Amsterdam.

And there I learned in passing, from an actual expert, that I had completely misunderstood their role.

Weather station data is not considered in the best weather forecasts.

And, on a moment’s reflection, it was completely obvious why.

Weather forecasting works like this:

  • First one gathers as much data as possible about the state of the atmosphere ‘now’. The key inputs to this are atmospheric ‘soundings’:
    • Balloon-borne ‘sondes’ fly upwards through the atmosphere sending back data on temperature, humidity and wind (speed and direction) versus height.
    • Satellites using infrared and microwave sensors probe downwards to work out the temperature and humidity at all points in the atmosphere in a swathe below the satellite’s orbit.
  • The NWP algorithms accept this vast amount of data about the state of the atmosphere, and then use basic physics to predict how the state of the entire atmosphere will evolve over the coming hours and days.

And then, after working out the state of the entire atmosphere, the expected weather at ground level is extracted.


Visualisation of the amount of moisture distributed across different heights in the atmosphere based on a single pass of a ‘microwave sounding’ satellite. The data gathered at ground level is just a tiny fraction of the data input to NWP models. Image credit: NASA/JPL-Caltech

Ground-based weather stations are still important:

  • They are used to check the outputs of the NWP algorithms.
  • But they are not used as inputs to the NWP algorithms.

So why did I not realise this ‘obvious’ fact earlier? I think it was because amongst the meteorologists and climate scientists with whom I spoke, it was so obvious as to not require any explanation.

Life goes on

So I have reached the age of 58 without knowing about hens’ eggs or the role of weather stations in weather forecasting.

I don’t know how it happened. But it did. And I suspect that many people have similar areas of ignorance, even regarding aspects of life with which we are totally familiar – such as eggs – or where one is nominally an expert.

And so life goes on. Anyway…

This pleasing Met Office video shows the importance of understanding the three-dimensional state of the atmosphere…

And here is a video of some hens

 

Mug Cooling: Salty fingers

November 23, 2018

You wait years for an article about heat transfer at beverage-air interfaces and then four come along at once!

When I began writing these articles (1, 2, 3) I was just curious about the effect of insulation and lids.

But as I wrote more I had two further insights.

  • Firstly the complexity of the processes at the interface was mind-boggling!
  • Secondly, I realised that cooling beverages are just one example of the general problem of energy and material transfer at interfaces.

This is one of the most important processes that occurs on Earth. For example, it is how the top layer of the oceans – where most of the energy arriving on Earth from the Sun is absorbed – exchanges energy with the deeper ocean and the atmosphere.

But in the oceans there is another factor: salinity.

Salinity 

Sea water typically contains 35 grams of salt per litre of water, and is about 2.4% denser than pure water.

So pure water – such as rain water falling onto the ocean surface – will tend to float above the brine.

This effect is exacerbated if the pure water is warm. For example, water at 60 °C is approximately 1.5% less dense than water at around 20 °C.

Video 

In the video at the top of the article I added warm pure water (with added red food colouring) to a glass of cold pure water (on the left) and a glass of cold salty water (on the right).

[For the purposes of this article I hope you will allow that glasses are a type of mug]

The degree to which the pure and salty water spontaneously separated surprised me.

But more fascinating was the mechanism of eventual mixing – a variant on ‘salt fingering‘.


The formation of ‘salty fingers’ of liquid is ubiquitous in the oceans and arises from density changes caused by salt diffusion and heat transfer.

As the time-lapse section of the movie shows – eventually the structure is lost and we just see ‘mixed fluid’ – but the initial stages, filmed in real time, are eerily beautiful.

Now I can’t quite explain what is happening in this movie – so I am not going to try.

But the web has articles, home-made videos and fancy computer simulations.

 

Mug Cooling: The Lid Effect

November 12, 2018

Droplets collect near the rim of a mug filled with hot water.

During my mug cooling experiment last week, I was surprised to find that taking the lid off a vacuum-insulated mug increased its initial cooling rate by a factor of 7.5.

Removing the lid allowed air from the room to flow across the surface of the water, cooling it in two ways.

  • Firstly, the air would warm up when it contacted the hot water, and then carry heat away in a convective flow.
  • Secondly, some hot water would evaporate into the moving air and carry away so-called ‘latent heat’.

I wondered: which of these two effects is more important?

I decided to work out the answer by calculating how much evaporation would be required to explain ALL the cooling. I could then check my calculation against the measured mass of water that was lost to evaporation.

Where to start?

I started with the cooling curve from the previous blog.


Graph#1: Temperature (°C) versus time (minutes) for water cooling in an insulated mug with and without a lid. Without a lid, the water cools more than 7 times faster.

Because I knew the mass of water (g) and its heat capacity (joule per gram per °C), I could calculate the rate of heat loss in watts required to cool the water at the observed rate.
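Here is a sketch of that calculation in Python. The 300 g of water is my assumption (all I say below is that the 25–31 g which evaporated was ‘just under 10%’ of it), and the temperature readings are purely illustrative:

```python
import numpy as np

# Heat loss from a cooling curve: P = m * c * dT/dt
m = 300     # grams of water (assumed)
c = 4.18    # heat capacity, joules per gram per degree C

time_s = np.array([0, 120, 240, 360, 480])          # seconds
temp_C = np.array([88.0, 84.5, 81.4, 78.6, 76.1])   # illustrative readings

power_watts = -m * c * np.gradient(temp_C, time_s)
print(np.round(power_watts, 1))  # tens of watts early on, falling with time
```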

In Graph#2 below I have plotted this versus the difference in temperature between the water and the room temperature, which was around 20 °C.


Graph#2: The rate of heat flow (in watts) calculated from the cooling curve versus the temperature difference (°C) from the ambient environment. The raw estimates are very noisy so the dotted lines are ‘best fit lines’ which approximately capture the trend of the data.

I was struck by two things: 

  • Firstly, without the lid, the rate of heat loss was initially 40 watts – which seemed very high.
  • Secondly:
    • When the lid was on, the rate of heat loss was almost a perfect straight line. This is broadly what one expects in a wide range of heat flow problems – the rate of heat flow is proportional to the temperature difference. But…
    • When the lid was off, the heat flow varied non-linearly with temperature difference.

To find out the effect of the lid, I subtracted the two curves from each other to get the difference in heat flow versus the temperature of the water above ambient (Graph#3).

[Technical Note: Because the data in Graph#2 is very noisy and irregularly spaced, I used Excel™ to work out a ‘trend line’ that describes the underlying ‘trend’ of the data. I then subtracted the two trend lines from each other.]


Graph#3: The dotted line shows the difference in power (watts) between the two curves in the previous graph. This should be a fair estimate for the heat loss across the liquid surface.

This curve now told me the extra rate of cooling caused by removing the lid.

If this was ALL due to evaporative cooling, then I could work out the expected loss of mass by dividing by the latent heat of vaporisation of water (approximately 2260 joules per gram) (Graph#4).


Graph#4. The calculated rate of evaporation (in milligrams per second) that would be required to explain the increased cooling rate caused by removing the lid.

Graph#4 told me the rate at which water would need to evaporate to explain ALL the cooling caused by removing the lid.
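Expressed in Python, the conversion from excess power to evaporation rate is a one-liner. (I use the initial ~40 W lid-off figure here, so this is an upper bound on the initial rate.)

```python
# Evaporation rate needed to carry away a given heat flow as latent heat
latent_heat = 2260    # joules per gram of water evaporated
excess_power = 40     # watts: initial lid-off heat loss (upper bound)

evap_rate_mg_per_s = excess_power / latent_heat * 1000
print(f"About {evap_rate_mg_per_s:.0f} mg of water per second")  # ~18 mg/s
```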

Combining that result with the data in Graph#1, I worked out the cumulative amount of water that would need to evaporate to explain ALL the observed extra cooling (Graph#5).


Graph#5: The red dashed line shows the cumulative mass loss (g) required to explain all the extra cooling caused by removing the lid. The green dashed lines show the amount of water that actually evaporated in each of the two ‘lid off’ experiments. The green data shows additional measurements of mass loss versus time from a third experiment.

In Lid-Off Experiments #1 and #2, I had weighed the water before and after the cooling experiment and so I knew that in each experiment with the lid off I had lost respectively 25 g and 31 g of water – just under 10% of the water.

But Graph #5 really needed some data on the rate of mass loss, so I did an additional experiment where I didn’t measure the temperature, but instead just weighed the mug every few minutes. This is the data plotted on Graph#5 as discrete points.

Conclusions#1

In Graph#5, it’s clear that the measured rate of evaporation can’t explain all of the increased rate of heat loss, but it can explain ‘about a third of it’.

So evaporation is responsible for about a third of the extra cooling, with two thirds being driven by heat transfer to the flowing air above the cup.

It is also interesting that even though the cooling curves in Graph#1 are very similar, the amount of evaporation in Graph#5 is quite variable.

The video below is backlit to show the ‘steam’ rising above the mug, and it is clear that the particular patterns of air flow are very variable.

The actual amount of evaporation depends on the rate of air flow across the water surface, and that is driven both by

  1. natural convection – driven by the hot low-density air rising, but also by…
  2. forced convection – draughts flowing above the cup.

I don’t know, but I suspect it is this variability in air flow that caused the variability in the amount of evaporation.

Conclusions#2

I have wasted – sorry, spent – several hours on these calculations. And I don’t really know why.

Partly, I was just curious about the answer.

Partly, I wanted to share my view that it is simply amazing how much subtle physics is taking place around us all the time.

And partly, I am still trying to catch my breath after deciding to go ‘part-time’ from next year. Writing blog articles such as this is part of just keeping on keeping on until something about the future becomes clearer.

P.S. Expensive Mugs

Finally, on the off-chance that (a) anybody is still reading and (b) they actually care passionately about the temperature of their beverages, and (c) they are prepared to spend £80 on a mug, then the Ember temperature-controlled ceramic mug may be just the thing for you. Enjoy 🙂

 

