Global Oxygen Depletion

February 4, 2019

While browsing over at the two degrees institute, I came across this figure for atmospheric oxygen concentrations measured at a station at the South Pole.

Graph 1

The graph shows the change in:

  • the ratio of oxygen to nitrogen molecules in samples of air taken at a particular date

to

  • the ratio of oxygen to nitrogen molecules in samples of air taken in the 1980s.

The sentence above is complicated, but it can be interpreted without too many caveats as simply the change in oxygen concentration in air measured at the South Pole.

We see an annual variation – the Earth ‘breathing’ – but more worryingly we see that:

  • The amount of oxygen in the atmosphere is declining.

It’s a small effect, and will only reach a 0.1% decline – 1000 parts per million – in 2035 or so. So it won’t affect our ability to breathe. Phewww. But it is nonetheless interesting.

Averaging the data from the South Pole over the years since 2010, the oxygen concentration appears to be declining at roughly 25 parts per million per year.

Why?

The reason for the decline in oxygen concentration is that we are burning carbon to make carbon dioxide…

C + O2 = CO2

…and as we burn carbon, we consume oxygen.

I wondered if I could use the measured rate of decline in oxygen concentration to estimate the rate of emission of carbon dioxide.

How much carbon is that?

First I needed to know how much oxygen there was in the atmosphere. I considered a number of ways to calculate that, but it being Sunday, I just looked it up in Wikipedia. There I learned that the atmosphere has a mass of about 5.15×10¹⁸ kg.

I also learned the molar fractional concentration of the key gases:

  • nitrogen (molecular weight 28): 78.08%
  • oxygen (molecular weight 32): 20.95%
  • argon (molecular weight 40): 0.93%

From this I estimated that the mass of 1 mole of the atmosphere was 0.02896 kg/mol. And so the mass of the atmosphere corresponded to…

5.15×10¹⁸ / 0.02896 = 1.78×10²⁰

…moles of atmosphere. This would correspond to roughly…

1.78×10²⁰ × 0.02095 = 3.73×10¹⁹

…moles of oxygen molecules. This is the number that appears to be declining by 25 parts per million per year i.e.

3.73×10¹⁹ × 0.000 025 = 9.32×10¹⁴

…moles of oxygen molecules are being consumed per year. From the chemical equation, this must correspond to exactly the same number of moles of carbon: 9.32×10¹⁴. Since 1 mole of carbon weighs 12 g, this corresponds to…

  • 1.12×10¹⁶ g of C,
  • 1.12×10¹³ kg of C,
  • 1.12×10¹⁰ tonnes of C,
  • 11.2 gigatonnes (Gt) of C.
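
For anyone who wants to check the arithmetic, here is a short Python sketch of the whole calculation. The inputs are just the figures quoted above.

```python
# Estimate of annual carbon emissions from the observed decline in atmospheric O2.
# All input values are the figures quoted in the text above.

M_ATMOSPHERE = 5.15e18        # mass of the atmosphere (kg)
MOLAR_MASS_AIR = 0.02896      # mean molar mass of air (kg/mol)
O2_FRACTION = 0.2095          # molar fraction of O2 in air
O2_DECLINE_PER_YEAR = 25e-6   # fractional decline: 25 parts per million per year
MOLAR_MASS_C = 0.012          # molar mass of carbon (kg/mol)

moles_air = M_ATMOSPHERE / MOLAR_MASS_AIR        # ~1.78e20 mol of air
moles_o2 = moles_air * O2_FRACTION               # ~3.73e19 mol of O2
moles_o2_lost = moles_o2 * O2_DECLINE_PER_YEAR   # ~9.3e14 mol of O2 per year

# C + O2 -> CO2: one mole of carbon is burned for every mole of O2 consumed
carbon_gt_per_year = moles_o2_lost * MOLAR_MASS_C / 1e12   # 1 Gt = 1e12 kg

print(f"O2 consumed:   {moles_o2_lost:.2e} mol per year")
print(f"Carbon burned: {carbon_gt_per_year:.1f} Gt per year")   # ~11.2 Gt
```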

Looking up the sources behind these figures, I obtained the following estimate for global carbon emissions, which indicates that emissions are currently running at about 10 Gt of carbon per year.

Carbon Emissions

Analysis

So Wikipedia tells me that humanity emits roughly 10 Gt of carbon per year, but based on measurements at the South Pole, we infer that 11.2 Gt of carbon per year is being emitted and consuming the concomitant amount of oxygen. Mmmmm.

First of all, we notice that these figures actually agree within roughly 10%. Which is pleasing.

  • But what is the origin of the disagreement?
  • Could it be that the data from the South Pole is not representative?

I downloaded data from the Scripps Institute for a number of sites and the graph below shows recent data from Barrow in Alaska alongside the South Pole data. These locations are roughly half a world – about 20,000 km – apart.

Graph 2

Fascinatingly, the ‘breathing’ parts of the data are out of phase! Presumably this arises from the phasing of summer and winter in the northern and southern hemispheres.

But significantly, the slopes of the trend lines differ by only 1%. So global variability doesn’t seem able to explain the 10% difference between the rate of carbon burning predicted from the decline of atmospheric oxygen (11.2 Gt C per year) and the number I got off Wikipedia (10 Gt C per year).

Wikipedia’s number was obtained from the Carbon Dioxide Information and Analysis Centre (CDIAC), which bases its estimate on statistics of stated oil, gas and coal consumption from countries around the world.

My guess is that there is considerable uncertainty – on the order of a few percent –  on both the CDIAC estimate, and also on the Scripps Institute estimates. So agreement at the level of about 10% is actually – in the context of a blog article – acceptable.

Conclusions

My conclusion is that – as they say so clearly over at the two degrees project – we are in deep trouble. Oxygen depletion is actually just an interesting diversion.

The most troubling graph they present shows

  • the change in CO2 concentration over the last 800,000  years, shown against the left-hand axis,

alongside

  • the estimated change in Earth’s temperature over the last 800,000 years, shown against the right-hand axis.

The correlation between the two quantities is staggering, and the conclusion is terrifying.

We’re cooked…

 

The Death Knell for SI Base Units?

January 30, 2019

I love the International System of Units – the SI. 

Rooted in humanity’s ubiquitous need to measure things, the SI represents a hugely successful global human enterprise – a triumph of cooperation over competition, and accord over discord.

Day-by-day it enables measurements made around the world to be meaningfully compared with low uncertainty. And by doing this it underpins all of the sciences, every branch of engineering, and trade.

But changes are coming to the SI, and even after having worked on these changes for the last 12 years or so, in my recent reflections I have been surprised at how profound the changes will be.

Let me explain…

The Foundations of the SI

The SI is built upon the concept of ‘base units’. Unit amounts of any quantity are defined in terms of combinations of unit quantities of just a few ‘base units’. For example:

  • The SI unit of speed is the ‘metre per second’, where one metre and one second are the base units of length and time respectively.
    • The ‘metre per second’ is called a derived unit.
  • The SI unit of acceleration is the ‘metre per second per second’
    • Notice how the same base units are combined differently to make this new derived unit.
  • The SI unit of force  is the ‘kilogram metre per second per second’.
    • This is such a complicated phrase that this derived unit is given a special name – newton. But notice that it is still a combination of base units.

And so on. All the SI units required for science and engineering can be derived from just seven base units: the kilogram, metre, second, ampere, kelvin, mole and candela.

So these seven base units in a very real sense form the foundations of the SI.

The seven base units of the SI

This Hierarchical Structure is Important.

Measurement is the quantitative comparison of a thing against a standard.

So, for example, when we measure a speed, we are comparing the unknown speed against our unit of speed which in the SI is the metre per second.

So a measurement of speed can never be more accurate than our ability to create a standard speed – a known number of ‘metres per second‘ – against which we can compare our unknown speed.

FOR EXAMPLE: Imagine calibrating a speedometer in a car. The only way we can know if it indicates correctly is if we can check the reading of the speedometer when the car is travelling at a known speed – which we would have to verify with measurements of distance (in metres) and time (in seconds).

To create a standard speed, we need to create known distances and known time intervals. So a speed can never be more accurately known than our ability to create standard ‘metres’ and ‘seconds’.

So the importance of the base units is that the accuracy with which they can be created represents a limit to the accuracy with which we could conceivably measure anything! Or at least, anything expressed in terms of derived unit quantities in the SI.

This fact has driven the evolution of the SI. Since its founding in 1960, the definitions of what we mean by ‘one’ of the base units have changed only rarely. And the aim has always been the same – to create definitions which will allow more accurate realisations of the base units. This improved accuracy would then automatically affect all the derived units in the SI.

Changes are coming to the SI.

In my earlier articles (e.g. here) I have mentioned that on 20th May 2019 the definition of four of the base units will change. Four base units changing at the same time!? Radical.

Much has been made of the fact that the base units will now be defined in terms of constants of nature. And this is indeed significant.

But in fact I think the re-definitions will lead to a broader change in the structure of the SI.

Eventually, I think they will lead to the abandonment of the concept of a ‘base unit’, and the difference between ‘base‘ units and ‘derived‘ units will slowly disappear.

The ‘New’ SI.

The seven defining constants of the ‘New’ SI.

In the ‘New’ SI, the values of seven natural constants have been defined to have exact values with no measurement uncertainty.

These are constants of nature that we had previously measured in terms of the SI base units. The choice to give them an exact value is based on the belief – backed up by experiments – that the constants are truly constant!

In fact, some of the constants appear to be the most unchanging features of the universe that we have ever encountered.

Here are four of the constants that will have fixed numerical values in the New SI:

  • the speed of light in a vacuum, conventionally given the symbol c,
  • the frequency of microwaves absorbed by a particular transition in caesium atoms, conventionally given the symbol ΔνCs (this funny vee-like symbol ν is the Greek letter ‘n’, pronounced ‘nu’),
  • the Planck constant, conventionally given the symbol h,
  • the magnitude of the charge on the electron, conventionally given the symbol e.

Electrical Units in the ‘Old’ SI and the ‘New’ SI.

In the Old SI the base unit referring to electrical quantities was the ampere.

If one were to make a measurement of a voltage (in the derived unit volt) or electrical resistance (in the derived unit ohm), then one would have to establish a sequence of comparisons that would eventually refer to combinations of base units. So:

  • one volt was equal to one kg m² s⁻³ A⁻¹ (or one watt per ampere)
  • one ohm was equal to one kg m² s⁻³ A⁻² (or one volt per ampere)

Please don’t be distracted by this odd combination of seconds, metres and kilograms. The important thing is that in the Old SI, volts and ohms were derived units with special names.

To make ‘one volt’ one needed experiments that combined the base units for the ampere, the kilogram, the second and the metre in a clever way to create a voltage known in terms of the base units.

But in the New SI things are different.

  • We can use an experiment to create volts directly in terms of the exactly-known constants ΔνCs×h/e.
  • And similarly, we can create resistances directly in terms of the exactly-known constants e²/h.

Since h and e and ΔνCs have exact values in the New SI, we can now create volts and ohms without any reference to amperes or any other base units.
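
To get a feel for the numbers, here is a short Python sketch that evaluates these combinations using the exact values assigned to the defining constants. It is purely illustrative – real electrical experiments use closely related combinations – but it shows that these quantities really are fixed by the definitions, with no measurement uncertainty.

```python
# Exact values of three SI defining constants (as fixed on 20th May 2019)
DELTA_NU_CS = 9_192_631_770       # Hz   caesium hyperfine transition frequency
H = 6.626_070_15e-34              # J s  Planck constant
E = 1.602_176_634e-19             # C    elementary charge

# The combinations mentioned above: a voltage and a conductance (1/resistance)
# built entirely from exactly-known constants.
volt_combination = DELTA_NU_CS * H / E    # ~3.8e-5 volts
conductance = E**2 / H                    # ~3.9e-5 siemens

print(f"dNu_Cs x h / e = {volt_combination:.4e} V")
print(f"e^2 / h        = {conductance:.4e} S  (i.e. {1 / conductance:.1f} ohm)")
```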

This change is not just a detail. In an SI based on physical constants with exactly-known values, the ability to create accurate realisations of units no longer discriminates between base units and derived units – they all have the same status.

It’s not just electrical units

Consider the measurement of speed that I discussed earlier.

In the Old SI we would measure speed in derived units of metres per second i.e. in terms of the base units the metre and the second. And so we could never measure a speed with a lower fractional uncertainty than we could realise the composite base units, the metre or the second.

But in the New SI,

  • one metre can be realised in terms of the exactly-known constants c/ΔνCs
  • one second can be realised in terms of the exactly-known constant ΔνCs

So as a consequence,

  • one metre per second can be realised in terms of the exactly-known constant c

Since these constants are all exactly known, there is no reason why speeds in metres per second cannot be measured with an uncertainty which is lower than or equal to the uncertainty with which we can measure distances (in metres) or times (in seconds).

This doesn’t mean that it is currently technically possible to measure speeds with lower uncertainty than distances or times. What it means is that there is now nothing in the structure of the SI that would stop that being the case at some point in the future.

Is this good or bad?

So in the new SI, any unit – a derived unit or a base unit – can be expressed in terms of  exactly-known constants. So there will no longer be any intrinsic hierarchy of uncertainty in the SI.

On 20th May 2019 as the new system comes into force, nothing will initially change. We will still talk about base units and derived units.

But as measurement science evolves, I expect that – as is already the case for electrical units – the distinction between base units and derived units will slowly disappear.

And although I feel slightly surprised by this conclusion, and slightly shocked, it seems to be only a good thing – making the lowest uncertainty measurements available in the widest possible range of physical quantities.

Weather Station Comparison

January 7, 2019

My new weather station is on the top left of the picture. The old weather station is in the middle of the picture on the right.

Back in October 2015 I installed a weather station at the end of my back garden and wrote about my adventures at length (Article 1 and Article 2)

Despite costing only £89, it was wirelessly linked to a computer in the house which uploaded data to weather aggregation sites run by the Met Office and Weather Underground. Using these sites, I could compare my readings with stations nearby.

I soon noticed that my weather station seemed to report temperatures which tended to be slightly higher than other local stations. Additionally, I noticed that as sunshine first struck the station in the morning, the reported temperature seemed to rise suddenly, indicating that the thermometer was being directly heated by the sunlight rather than sensing the air temperature.

So I began to think that the reported temperatures might sometimes be in error. Of course, I couldn’t prove that because I didn’t have a trusted weather station that I could place next to it.

So in October 2018 I ordered a new Youshiko Model YC9390 weather station, costing a rather extravagant £250.

Youshiko YC9390

The new station is – unsurprisingly – rather better constructed than the old one. It has a bigger, brighter, internal display and it links directly to Weather Underground via my home WI-FI and so does not require a PC. Happily it is possible to retrieve the data from Weather Underground.

The two weather stations are positioned about 3 metres apart and at slightly different heights, but in broad terms, their siting is similar.

Over the last few days of the New Year break, and the first few days of my three-day week, I took a look at how the two stations compared. And I was right! The old station is affected by sunshine, and the effect was significantly larger than I had suspected.

Comparison 

I compared the temperature readings of the two stations over the period January 4th, 5th and 6th. The 4th was a bright, almost cloudless, cold winter day. The other two days were duller but warmer, and all three days were almost windless.

The graphs below (all drawn to the same scale) show the data from each station versus time-of-day with readings to be compared against the left-hand axis.

Let’s look at the data from the 4th January 2019

4th January 2019

Data from the 4th January 2019. The red curve shows air temperature data from the old station and the blue curve shows data from the new station. Also shown in yellow is data showing the intensity of sunshine (to be read from the right-hand axis) taken from a station located 1 km away.

Two things struck me about this graph:

  • Firstly, I was surprised by the agreement between the two stations during the night. Typically the readings are within ±0.2 °C, with no obvious offset.
  • Secondly, I was shocked by the extent of the over-reading. At approximately 10 a.m. the old station was over-reading by more than 4 °C!

To check that this was indeed a solar effect I downloaded data from a weather station used for site monitoring at NPL – just over a kilometre away from my back garden.

This station is situated on top of the NPL building and the intensity of sunlight there will not be directly applicable to the intensity of sunshine in my back garden. But hopefully, it is indicative.

The solar intensity reached just over 200 watts per square metre, about 20% of the solar intensity on a clear midsummer day. And it clearly correlated with the magnitude of the excess heating.

Let’s look at the data from the 5th January 2019

Data from 5th January 2019. See previous graph and text for key.

The night-time 5th January data also shows agreement between the two stations as was seen on the 4th January.

However I was surprised to see that even on this dismally dull January day – with insolation failing to reach even 100 watts per square metre – there was a noticeable warming of the old station, amounting to typically 0.2 °C.

The timing of this weak warming again correlated with the recorded sunlight.

Finally let’s look at data from 6th January 2019

Data from 6th January 2019. See previous graph and text for key.

Once again the pleasing night-time agreement between the two station readings is striking.

And with an intermediate level of solar intensity the over-reading of the old station is less than on the 4th, but more than on the 5th.

Wind.

I chose these dates for a comparison because on all three days wind speeds were low. This exacerbates the solar heating effect and makes it easier to detect.

The figures below show the same temperature data as in the graphs above, but now with the wind speed data plotted in green against the right-hand axis.

Almost every wind speed reading was 0 kilometres per hour, and during the nights there were only occasional flurries. During the day the flurries were slightly more frequent, but to a pedestrian the days seemed windless.

Data from 4th of January 2019 now showing wind speed on the right-hand axis.

Data from 5th of January 2019 now showing wind speed on the right-hand axis.

Data from the 6th January 2019 showing wind speed against the right-hand axis.

Conclusions 

My conclusion is that the new weather station shows a much smaller solar-heating effect than the old one.

It is unlikely that the new station is itself perfect. In fact there is no accepted procedure for determining what the ‘right answer’ is in a meteorological setting!

The optimal air temperature measurement strategy is usually to use a fan to suck air across a temperature sensor at a steady speed of around 5 metres per second – roughly 18 kilometres per hour! But stations that employ such arrangements are generally quite expensive.

Anyway, it is pleasing to have resolved this long-standing question.

Where to see station data

On Weather Underground the station ID is ITEDDING4 and its readings can be monitored using this link.

The Weather Underground ‘Wundermap’ showing worldwide stations can be found here. On a large scale the map shows local averages of station data, but as you zoom in, you can see the individual reporting stations.

The Met Office WOW site is here. Search on ‘Teddington’ if you would like to view the station data.

Getting off our eggs!

January 5, 2019

While listening to the radio last week, I heard a description of an astonishing experiment, apparently well known since the 1930s, but new to me.

Niko Tinbergen conducted experiments in which he replaced birds’ eggs with replicas and then studied how the birds responded to differently-sized replicas with modified markings.

Deirdre Barrett describes the results in her book Supernormal Stimuli

Song birds abandoned their pale blue eggs dappled with grey to hop on black polka-dot day-glo blue dummies so large that the birds constantly slid off and had to climb back on.

Hearing this for the first time, I was shocked. But the explanation is simple enough.

The birds are hard-wired to respond to egg-like objects with specific patterns, and Tinbergen’s modified replicas triggered the nesting response more strongly than the bird’s own eggs.

Tinbergen coined the term ‘super-normal stimulus’ to describe stimuli that exceeded anything conceivable in the natural world.

Deirdre Barrett uses this shocking experimental result to reflect on some human responses to what are effectively super-normal stimuli in the world around us.

Using this insight, she points out that many of our responses are as simple and self-harming as the birds’ responses to the replica eggs.

The Book

In her short book Barrett writes clearly, makes her point, and then stops. It was a pleasure to read.

I will not attempt to replicate her exposition, but I was powerfully struck by the sad image of a bird condemned to waste its reproductive energy on a plaster egg, when its own eggs lay quietly in view.

I found it easy to find analogous instinctive self-harming patterns in my own life. Surely we all can.

But Barrett does not rant. She is not saying that we are all going to hell in a handcart.

She makes the point that super-normal stimuli are not necessarily negative. The visual arts, dance, music, theatre and literature can all be viewed as tricks/skills to elicit powerful and even life-changing responses to non-existent events.

In discussing television, her point is not that television is ‘bad’ per se, but that the intensity and availability of vicarious experiences exceeds anything a normal person is likely to encounter in real life.

If watching television enhances, educates and inspires, then great. But frequently we respond to the stimuli by just seeking more. In the UK on average we watch more than four hours of television per day.

Such a massive expenditure of time is surely the equivalent of sitting on a giant plaster egg.

Barrett’s key point is the ubiquity of these super-normal stimuli in modern life – stimuli with which our instincts are ill-equipped to cope.

A rational response feels ‘un-natural’ because it requires conscious thought and reflection. For example, rather than just feeling that an image is ‘cute’, are we able to notice our own response and ask why someone might use caricatures which elicit a ‘cute’ response?

Barrett ends by pointing out that we humans are the only animals that can notice that we are sitting on metaphorical polka-dotted plaster eggs.

Even in adult life, having sat on polka-dotted plaster eggs for many years, we can come to an understanding that will allow us to get off the egg, reflect on the experience, and get on with something more meaningful.

I am clambering off some eggs as I write.

1000 days of weighing myself

January 1, 2019

Thank you for stopping by: Happy New Year!

Well, it’s the first of January 2019 and I am filled with trepidation as I begin a new phase of my career: working 3 days a week.

My plan is to take things one day at a time, and take heart from the fact that it will take only three days to go from one weekend to the next!

Weight

Anyway, the turn of the year means it is time to look at my weight again.

The graph below shows my weight in 2018 based on daily weighings. Also shown are monthly averages (green squares) with error bars drawn at ± one standard deviation. The red dotted line shows the yearly average.

My weight through 2018 based on daily weighings. Monthly averages are shown as green squares and the yearly average is shown as a dotted line.

My specific aim for 2018 had been to stay the same weight, but I did not quite manage that. The December monthly average was 0.5 kg above the January monthly average.

And it is clear that there was weight loss through the first part of the year and weight gain through the second part of the year.

But looking on the broader scale, the changes aren’t very significant.

The graph below shows the data for the last three years. This amounts to just over 1000 days or roughly one twentieth of my life.

Slide2

When viewed on this larger scale there doesn’t seem to be much to fuss about.

But I am sure that if I had not weighed myself daily, my weight would have crept back on considerably faster than it has.

A weight gain of 1 kg over one year represents an energy imbalance of only about 20 kilocalories of food per day – which is less than a mouthful of almost any food worth eating! (Link)
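
A quick way to check that figure is to assume roughly 7,700 kilocalories of food energy per kilogram of body fat – a commonly quoted rule of thumb rather than a precise number:

```python
# Rough check of the '1 kg per year ~ 20 kcal per day' figure.
KCAL_PER_KG_FAT = 7700          # assumed food energy per kg of body fat (rule of thumb)
weight_change_kg_per_year = 1.0

daily_imbalance_kcal = weight_change_kg_per_year * KCAL_PER_KG_FAT / 365
print(f"Energy imbalance: ~{daily_imbalance_kcal:.0f} kcal per day")   # ~21 kcal
```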

Anyway. Another year of weighing begins!

 

Christmas Bubbles

December 23, 2018
Champagne Time Lapse

A time-lapse photograph of a glass of fizzy wine.

Recently I encountered the fantastic:

Effervescence in champagne and sparkling wines:
From grape harvest to bubble rise

This is a 115-page review article by Gérard Liger-Belair about bubbles in Champagne, my most favourite type of carbon dioxide emission.

Until January 30th 2019 it is freely downloadable using this link

Since the bubbles in champagne arguably add £10 to the price of a bottle of wine, I guess it is worth understanding exactly how that value is added.

I found GLB’s paper fascinating with a delightful attention to detail. From amongst the arcane studies in the paper, here are three things I learned.

Thing 1: Amount of Gas

Champagne (and Prosecco and Cava) contains about 9 grams of carbon dioxide in each 750 ml bottle [1].

Since the molar mass of carbon dioxide is 44 g, each bottle contains approximately 9/44 ~ 0.2 moles of carbon dioxide.

If released as gas at atmospheric pressure and 10 °C, it would have a volume of approximately 4.75 litres – more than six times the volume of the bottle!
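
As a quick check, the ideal gas law reproduces that volume from the figures above:

```python
# Volume of the dissolved CO2 if released as gas at 10 °C and atmospheric pressure.
R = 8.314             # J/(mol K), molar gas constant
M_CO2 = 44.0          # g/mol, molar mass of CO2

n = 9.0 / M_CO2       # ~0.2 mol of CO2 per 750 ml bottle
T = 283.15            # K (10 °C)
P = 101_325           # Pa (1 standard atmosphere)

volume_litres = n * R * T / P * 1000   # ideal gas law: V = nRT/P, converted to litres
print(f"{n:.2f} mol of CO2 -> {volume_litres:.2f} litres of gas")   # ~4.75 litres
```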

This large volume of gas is said to be “dissolved” in the wine. The molecules can only leave when, by chance, they encounter the free surface of the wine.

Because the free-surface area of wine in a wine glass is usually larger than the combined surface area of bubbles, about 80% of the de-gassing happens through the liquid surface [2].

Thing 2: Bubble Size and Speed 

But fizzy wine is called “fizzy” because of the bubbles that seem to ceaselessly form on the inner surface of the glass.

Sadly, in a perfectly clean glass, such as one which has repeatedly been through a dishwasher, very few bubbles will form [3].

But if there are tiny cracks in the glass, or small specks of dust from, for example, a drying cloth, then these can trap tiny air bubbles and provide free-surfaces at which carbon dioxide can leave the liquid.

At first a bubble is just tens of nanometres in size, but it grows at a rate which depends upon the rate at which carbon dioxide enters the bubble.

As the bubble grows, its surface area increases allowing the rate at which carbon dioxide enters the bubble to increase.

Eventually the buoyancy of the bubble causes it to detach from its so-called ‘nucleation site’ (birthplace) and rise through the liquid.  This typically happens when bubbles are between 0.01 and 0.1 mm in diameter.

To such tiny bubbles, the wine is highly viscous, and at first the bubbles rise slowly. But as more carbon dioxide enters the bubble, the bubble grows [4] and its speed of rise increases. The rising speed is close to the so-called ‘Stokes’ terminal velocity. [5]
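
To get a feel for the speeds involved, here is a rough Stokes’-law sketch. The bubble sizes, and the (roughly water-like) density and viscosity I have assumed for the wine, are my own illustrative guesses, and the formula is only trustworthy while the bubble is small and slow-moving:

```python
# Stokes terminal velocity for a small rising bubble, treated as a rigid sphere.
# Density and viscosity are illustrative, water-like assumptions.
g = 9.81              # m/s^2
rho_wine = 1000.0     # kg/m^3 (assumed)
mu_wine = 1.5e-3      # Pa s   (assumed dynamic viscosity)

def stokes_rise_speed(diameter_m):
    """Buoyancy-driven terminal speed; gas density is negligible next to the liquid's."""
    r = diameter_m / 2
    return 2 * g * r**2 * rho_wine / (9 * mu_wine)

for d_mm in (0.1, 0.5, 1.0):
    v = stokes_rise_speed(d_mm * 1e-3)
    print(f"{d_mm:>4} mm bubble rises at ~{100 * v:.1f} cm/s")
```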

So when you look at a stream of bubbles you will see that at the bottom, the bubbles are small and close together and relatively slow-moving. As they rise through the glass, they grow, and their speed increases.

If you can bear to leave your glass undrunk for long enough, you should be able to see the rate of bubble formation slow as the carbon dioxide concentration falls.

This will be visible as an increase in the spacing of bubbles near the nucleation site of a rising ‘bubble train’.

Thing 3: Number of bubbles

Idle speculation often accompanies the consumption of fizzy wine.

And one common topic of speculation is the number of bubbles which can be formed in a glass of champagne [6]. We can now add to that speculation.

If a bubble has a typical diameter of approximately 1 mm as it reaches the surface, then each bubble will have a volume of approximately 0.5 cubic millimetres, or 0.000 5 millilitres.

So the 4.75 litres of carbon dioxide in a bottle could potentially form 4750/0.0005 = 9.5 million bubbles per bottle!

If a bottle is used for seven standard servings then there are potentially 1.3 million bubbles per glass.
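
The same arithmetic in a few lines of Python:

```python
# Potential number of bubbles per bottle and per glass, using the figures above.
co2_gas_volume_ml = 4750       # dissolved CO2, expressed as gas (ml per bottle)
bubble_volume_ml = 0.0005      # ~pi/6 x (1 mm)^3, rounded as in the text

bubbles_per_bottle = co2_gas_volume_ml / bubble_volume_ml
bubbles_per_glass = bubbles_per_bottle / 7        # seven standard servings per bottle

print(f"~{bubbles_per_bottle / 1e6:.1f} million bubbles per bottle")
print(f"~{bubbles_per_glass / 1e6:.1f} million bubbles per glass")
```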

In fact the number is generally smaller than this because as the concentration of carbon dioxide in the liquid falls, the rate of bubble formation falls also. And below approximately 4 grams of carbon dioxide per litre of wine, bubbles cease to form [7].

Thing 4: BONUS THING! Cork Speed

When the bottle is sealed there is a high pressure of carbon dioxide in the space above the wine. The pressure depends strongly on temperature [8], rising from approximately 5 atmospheres (500 kPa) if the bottle is opened at 10 °C to approximately 10 atmospheres (1 MPa) if the bottle is opened at 25 °C.

GLB uses high-speed photography to measure the velocity of the exiting cork, and gets results which vary from around 10 metres per second for a bottle at 4 °C to 14 metres per second for a bottle at 18 °C. [9]

I made my own measurements using my iPhone (see below) and the cork seems to move roughly 5 ± 2 cm in the 1/240th of a second between frames. So my estimate of the speed is about 12 ± 5 metres per second, roughly in line with GLB’s estimates.
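
The arithmetic behind that estimate is just the displacement between frames divided by the frame interval:

```python
# Cork speed from successive slow-motion frames (iPhone, 240 frames per second).
frame_interval_s = 1 / 240
displacement_m = 0.05       # ~5 cm movement between frames
uncertainty_m = 0.02        # ±2 cm reading uncertainty

speed = displacement_m / frame_interval_s                 # ~12 m/s
speed_uncertainty = uncertainty_m / frame_interval_s      # ~5 m/s
print(f"Cork speed ~ {speed:.0f} ± {speed_uncertainty:.0f} m/s")
```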

Why this matters

When we look at absolutely any phenomenon, there is a perspective from which that phenomenon – no matter how mundane or familiar – can appear profound and fascinating.

This paper has opened my eyes, and I will never look at a glass of Champagne again in quite the same way.

Wishing you happy experimentation over the Christmas break.

Santé!

References

[1] Page 8 Paragraph 2

[2] Page 85 Section 6.3

[3] Page 42 Section 5.2

[4] Page 78 Figure 59

[5] Page 77 Figure 58

[6] Page 84 Section 6.3 & Figure 66

[7] Page 64

[8] Page 10 Figure 3

[9] Page 24 Figure 16

What can we learn from The American President?

December 12, 2018

The American President

I love the American President. It’s a weakness of mine of which I am not proud. No. Not that one: the film.

The American President was an Oscar-nominated film made in 1995 starring Michael Douglas as the eponymous hero and Annette Bening as a lobbyist who comes to Washington to campaign for a 20% cut in US greenhouse gas emissions.

The film is unremarkable in many ways. But the fact that cutting greenhouse gas emissions was a mainstream idea 25 years ago (albeit in a light-hearted romantic comedy-drama) puts into perspective just how slowly political reality has changed.

Constant

During the period from the fictional 1995 American President to the present 2018 incumbent, one thing has remained constant: the science.

Since 1981, when James Hansen and colleagues wrote a landmark paper in Science, the complexity of our models of the Earth’s climate has increased dramatically.

And our understanding of the way our Climate System works has improved, increasing our confidence in future projections.

But the core science has barely changed. Indeed, it hasn’t changed that much since Svante Arrhenius’ insight back in 1896.

Climate Change: My part in its downfall

I have been speaking and writing about Climate Change since 2004 or so. I think I have spoken to a few thousand people directly, and I guess each web article has been read a few hundred times. So perhaps I have helped a little to ‘raise consciousness’.

But regular readers will have noticed that recently I haven’t written about Climate Change as often as I used to. The reason is that I am lost for words.

Back in 2004 (9 years after The American President), I thought there was a genuine public education requirement. But now, I don’t believe any rational human on Earth seriously doubts the reality of Climate Change or its causes.

[But just in case: if there is a rational human out there who doubts the reality of Climate Change, please drop me a line: I am happy to discuss any questions you have.]

Political Science

I still believe that despite The American President (yes that one, not the film) and his supporters, humanity will act collectively and decisively on Climate Change. Eventually.

I expect this because ultimately I think we will collectively understand that the alternative is in nobody’s best interest.

The ‘Natural Sciences’ have identified the existence of Climate Change, worked out its causes, laid out clear paths for how to combat it, and estimated the consequences of inaction.

But the path to action involves what Charles Lane, writing in the Washington Post, has called ‘Political Science’. He identified the impasse as arising from the fact that we are asking the rich world (us) to pay now to solve a problem which will (mainly) occur in the future.

  • If the spending is effective, then the worst aspects of Climate Change will be abated and that expenditure may then appear to be a waste – the disaster was averted!
  • But if the spending is ineffective, then the worst aspects of Climate Change will be experienced anyway!

These difficulties (and many others) are real, and they are readily exploited by people who are acting – frankly – in bad faith.

So I expect we will act, but too late to avoid bad consequences for communities world-wide. And the political path we will take to action is not at all clear to me.

Reasons to be hopeful

But there are plenty of reasons to be hopeful. Renewable energy alternatives to fossil fuels are now feasible in a large and growing number of sectors. And once the transition begins, I think it will move quickly.

The speed with which coal has been (and is continuing to be) phased out in the UK has shocked and surprised me. You can check the current grid generating mix at Gridwatch.

The chart below shows the last 12 months of generation on top and the previous 12 months below that. You can see that coal use has almost disappeared in summer and is now only used on the coldest darkest days.


UK Electricity Generating Mix for the last 12 months. Notice that coal generation – in black – is only significant for a few months of the year, and has declined this year (top) compared with last year (bottom)

Science is our greatest cause for hope.

Imagine if we were observing changes in climate and had no idea what was happening. We would be doomed to confusion and inaction. This has been the situation in which humanity has existed since the dawn of time.

But now, our collective scientific understanding has allowed us to quantify Climate Change, to discover its root cause, and to identify the practical steps we can take to minimise the harm.

Humanity has never been in this position before. We have never previously known in advance the hand which nature will deal us.

So I see our inability to act collectively – as exemplified by the slowness of progress in the 23 years since the debut of the celluloid American President – as a temporary state.

I take hope from the fact that when the political reality permits, science will guide us to the best available solution in the circumstances.

I just wish I could figure out what I can do to make that happen faster.

Links

On this site:

On Variable Variability

On IPCC web site

Refrigerators: Part#1

December 2, 2018

A month ago our refrigerator stopped working. A repair didn’t seem possible, so we headed to the shops to search for something as similar as possible to what we had just lost.

Thankfully, the snappily-named Bosch KGN33NW3AG fridge-freezer has proved to be entirely adequate.

Of course a new refrigerator requires testing (obviously) and an assessment of how close to specification it is performing. So…

How much energy should a fridge use?

Fridge Freezer Pictures

I made a ‘guess-timate’ by estimating the rate at which heat would flow into the fridge. My thought was that this should be similar to the rate at which the fridge would use energy.

[Aside: the actual calculation is tricky, but I’ll come back to it in a later post]

To estimate the heat flow into the fridge I measured the size of the fridge and freezer compartments and the thickness of the insulation.

Then I calculated the area of each compartment that faced the room which I assumed to be at a nominal 20 °C.

Heat will constantly flow from the room, through the insulation, into the cold compartments and a simple rule (called Fourier’s Law) allows me to calculate the rate at which energy flows (watts).

I assumed that a perfect ‘heat pump’ – the scientific name for a refrigerator – would pump all this heat back out again, but would (unrealistically) not require any energy to operate.

By multiplying the rate of energy flowing into the refrigerator (in watts) by an amount of time (in seconds) I could work out how much energy (in joules) even a perfect refrigerator of this size must use.

I could then convert the energy used (in joules) into kilowatt-hours – the charging unit used by electricity companies – by dividing by 3.6 million (the product of 3600 seconds in an hour and 1000 watts in a kilowatt).

My calculations indicated that heat flows would be:

  • About 16.4 W into the refrigerator, amounting to around 144 kW-h over a year.
  • About 14.8 W into the freezer, amounting to around 130 kW-h over a year.

So if the device were perfect, I calculated it would use 274 kW-h per year.
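
For anyone who wants to play with the numbers, here is a minimal Python sketch of this kind of estimate. The areas, insulation thicknesses and internal temperatures are my own illustrative guesses (chosen to be plausible for a fridge-freezer of this size), not the measurements I actually made, and the conductivity is the 0.03 W/K/m figure discussed below.

```python
# Fourier's-law estimate of the steady heat leak into each compartment, and the
# corresponding annual energy use of a notionally 'perfect' heat pump.
# Areas, thicknesses and internal temperatures are illustrative assumptions.

K_INSULATION = 0.03     # W/(K m), assumed conductivity of the insulating foam
T_ROOM = 20.0           # °C, nominal room temperature

def heat_leak_watts(area_m2, thickness_m, t_inside_c):
    """Fourier's law: Q = k x A x (T_room - T_inside) / d."""
    return K_INSULATION * area_m2 * (T_ROOM - t_inside_c) / thickness_m

fridge_w = heat_leak_watts(area_m2=1.5, thickness_m=0.04, t_inside_c=5.0)     # ~17 W
freezer_w = heat_leak_watts(area_m2=0.8, thickness_m=0.06, t_inside_c=-18.0)  # ~15 W

hours_per_year = 24 * 365
annual_kwh = (fridge_w + freezer_w) * hours_per_year / 1000

print(f"Fridge leak ~{fridge_w:.1f} W, freezer leak ~{freezer_w:.1f} W")
print(f"Annual energy for a 'perfect' fridge: ~{annual_kwh:.0f} kW-h")
```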

EU Label

The specification for the fridge says that it will use 290 kW-h per year, just 6% more energy than I estimated a perfect fridge would use. This indicates a fridge performing surprisingly well.

I assume that Bosch’s estimated consumption is realistic. So how wrong could my estimate be?

Well, I assumed that the thermal insulation around the fridge had a thermal conductivity of 0.03 W/K/m – just three times greater than that of still air. This is exceptionally good insulation. But my estimate could easily be wrong by 10% or so if improved insulation had been used.

Opening the door.

Many people think that opening the door of the fridge will affect its energy consumption, but my calculations indicate that it is not really a very big problem.

I assumed that, at worst, opening the door could replace all the air in the fridge with room-temperature air. If this were the case then:

  • opening the fridge door 10 times a day every day would use an additional 3.7 kW-h of energy per year which is just over 1% of the annual expected usage.
  • opening the freezer door once a day would use an additional 0.4 kW-h of energy per year which is much less than 1% of the annual consumption.

So my calculations indicate that as long as the door is not left open for many minutes at a time, perhaps by careless children tired of their parents’ nagging, then it will have relatively little effect on the energy consumption of the fridge.
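
For illustration, here is the sort of sum involved, assuming (my guesses, for this sketch) that the fridge compartment holds about 0.2 cubic metres of air and that each opening warms that air by 15 °C:

```python
# Energy cost of replacing all the cold air in the fridge with room-temperature air.
RHO_AIR = 1.2        # kg/m^3, density of air
CP_AIR = 1005        # J/(kg K), specific heat capacity of air
volume_m3 = 0.2      # assumed fridge compartment air volume
delta_t_k = 15       # assumed warming per opening (20 °C room, 5 °C fridge)

energy_per_opening_j = RHO_AIR * volume_m3 * CP_AIR * delta_t_k      # ~3.6 kJ
openings_per_year = 10 * 365

annual_kwh = energy_per_opening_j * openings_per_year / 3.6e6        # J -> kW-h
print(f"~{energy_per_opening_j / 1000:.1f} kJ per opening")
print(f"~{annual_kwh:.1f} kW-h per year for 10 openings a day")      # ~1% of annual use
```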

Data

I logged data at four locations in the fridge/freezer over a day or so last weekend.

The figure below shows a composite view of the data from the top of the fridge and the freezer over a period from 7 p.m. on Saturday to 4:30 p.m. on Sunday.

Composite Data

I’ll analyse this data more in the next article, but here I will just note that the data show:

  • The basic cycle of the heat pump which switches on around once every 45 minutes.
  • The more rapid cycling of the air within the fridge – every 10 minutes or so.
  • The effect of leaving the door open.

It is pretty clear that when my son and his friend arrived home at approximately 5 a.m. on Sunday morning (!) they contrived to leave the door open for the best part of an hour!

Composite Data Close up

I thought this was impressive detective work on my part and it could well be the start of a new mode of behaviour analysis: Forensic Thermometry.

Perhaps I should propose °CSI Teddington. 😉

Anyway. More on the temperature and humidity data in the next article.

Ignorance: Eggs & Weather Forecasts

November 26, 2018

Every so often I learn something so simple and shocking that I find myself asking:

“How can I possibly not have known that already?”

Eggs


While listening to Farming Today the other morning, I learned that:

Large eggs come from old hens

In order to produce large eggs – the most popular size with consumers – farmers need to allow hens to reach three years old.

So during the first and second years of their lives they will first lay small eggs, then medium eggs, and finally large eggs.

On Farming Today a farmer was explaining that egg production naturally resulted in a range of egg sizes, and it was a challenge to find a market for small eggs. Then came the second bomb’shell’.

The yolk is roughly same size in all eggs

What varies between small and large eggs is mainly the amount of egg white (albumen).

How could I have reached the age of 58 and not known that? Or not have even been curious about it?

Since learning this I have become a fan of small eggs: more yolk, less calories, more taste!

But my deep ignorance extends beyond everyday life and into the professional realm. And even my status as ‘an expert’ cannot help me.

Weather Forecasts & Weather Stations

Professionally I have become interested in weather stations and their role in both Numerical Weather Prediction (NWP, or just weather forecasting) and in Climate Studies.

And as I went about my work I had imagined that data from weather stations were used as inputs to NWP algorithms that forecast the weather.

But in September I attended CIMO TECO-2018 (Technical Conference on Meteorological and Environmental Instruments and Methods of Observation) in Amsterdam.

And there I learned in passing from an actual expert, that I had completely misunderstood their role.

Weather station data is not considered in the best weather forecasts.

And, on a moment’s reflection, it was completely obvious why.

Weather forecasting works like this:

  • First one gathers as much data as possible about the state of the atmosphere ‘now’. The key inputs to this are atmospheric ‘soundings’:
    • Balloon-borne ‘sondes’ fly upwards through the atmosphere sending back data on temperature, humidity and wind (speed and direction) versus height.
    • Satellites using infrared and microwave sensors probe downwards to work out the temperature and humidity at all points in the atmosphere in a swathe below the satellite’s orbit.
  • The NWP algorithms accept this vast amount of data about the state of the atmosphere, and then use basic physics to predict how the state of the entire atmosphere will evolve over the coming hours and days

And then, after working out the state of the entire atmosphere, the expected weather at ground level is extracted.

Visualisation of the amount of moisture distributed across different heights in the atmosphere based on a single pass of a ‘microwave sounding’ satellite. The data gathered at ground level is just a tiny fraction of the data input to NWP models. Image credit: NASA/JPL-Caltech

Ground-based weather stations are still important:

  • They are used to check the outputs of the NWP algorithms.
  • But they are not used as inputs to the NWP algorithms.

So why did I not realise this ‘obvious’ fact earlier? I think it was because amongst the meteorologists and climate scientists with whom I spoke, it was so obvious as to not require any explanation.

Life goes on

So I have reached the age of 58 without knowing about hen’s eggs and the role of weather stations in weather forecasting?

I don’t know how it happened. But it did. And I suspect that many people have similar areas of ignorance, even regarding aspects of life with which we are totally familiar – such as eggs – or where one is nominally an expert.

And so life goes on. Anyway…

This pleasing Met Office video shows the importance of understanding the three-dimensional state of the atmosphere…

And here is a video of some hens

 

Mug Cooling: Salty fingers

November 23, 2018

You wait years for an article about heat transfer at beverage-air interfaces and then four come along at once!

When I began writing these articles (1, 2, 3) I was just curious about the effect of insulation and lids.

But as I wrote more I had two further insights.

  • Firstly the complexity of the processes at the interface was mind-boggling!
  • Secondly, I realised that cooling beverages are just one example of the general problem of energy and material transfer at interfaces.

This is one of the most important processes that occurs on Earth. For example, it is how the top layer of the oceans – where most of the energy arriving on Earth from the Sun is absorbed – exchanges energy with the deeper ocean and the atmosphere.

But in the oceans there is another factor: salinity.

Salinity 

Sea water typically contains 35 grams of salt per litre of water, and is about 2.4% denser than pure water.

So pure water – such as rain water falling onto the ocean surface – will tend to float above the brine.

This effect is exacerbated if the pure water is warm. For example, water at 60 °C is approximately 1.5% less dense than water at around 20 °C.

Video 

In the video at the top of the article I added warm pure water (with added red food colouring) to a glass of cold pure water (on the left) and a glass of cold salty water (on the right).

[For the purposes of this article I hope you will allow that glasses are a type of mug]

The degree to which the pure and salty water spontaneously separated surprised me.

But more fascinating was the mechanism of eventual mixing – a variant on ‘salt fingering‘.

Salt Fingers Picture

The formation of ‘salty fingers’ of liquid is ubiquitous in the oceans and arises from density changes caused by salt diffusion and heat transfer.

As the time-lapse section of the movie shows – eventually the structure is lost and we just see ‘mixed fluid’ – but the initial stages, filmed in real time, are eerily beautiful.

Now I can’t quite explain what is happening in this movie – so I am not going to try.

But the web has articles, home-made videos and fancy computer simulations.

 

