Archive for the ‘Personal’ Category

World Metrology Day 2019

May 19, 2019


Monday 20 May – World Metrology Day 2019 – is a day towards which I have been working for the last 14 years.

Back in 2005, my NPL colleagues Richard Rusby,  Jonathan Williams and I compiled a report on possible methods for measuring the Boltzmann constant. The aim of the measurement would be to obtain an estimate of the Boltzmann constant with sufficiently small uncertainty that the International Bureau of Weights and Measures (BIPM) would feel able to redefine what we mean by ‘one kelvin’ and ‘one degree Celsius’ in terms of this new estimate.

To cut a very very long story short, we succeeded. And tomorrow, that project comes to fruition.

Of course it wasn’t just me. Or even me and my immediate colleagues in the thermal team at NPL. We were helped by colleagues from across the laboratory, and from other institutions. Notably:

  • Cranfield University who manufactured the key component in the experiment,
  • The Korean National Laboratory KRISS and the Scottish Universities Environmental Research Council who helped with isotopic analysis of argon gas.
  • We were also helped by colleagues from:
    • LNE-CNAM in France,
    • INRIM in Italy,
    • NIST in the USA,
    • PTB in Germany,
    • CEM in Spain.

And I have probably missed an important institution or partner from this list because – frankly – it has been a long haul!

But even this list doesn’t include all the other teams involved in the wider kelvin re-definition project.

Several other institutions also sought to independently measure the Boltzmann constant using a range of different techniques and the value chosen by the International Bureau of Weights and Measures (BIPM) was the weighted average of estimates from this international effort.

In all, hundreds of scientists, engineers and technical staff around the world have supported this effort and I feel humbled to have had the opportunity to take part in a project of this scale.

And it is not just the kelvin: tomorrow three other units will also be redefined – the mole, the ampere and the kilogram.

In this troubled world, it is a real comfort to me to feel the friendships built and professional relationships created during these last 14 years.

I think it shows that the International System of Units is a living international institution which really works; which brings people together from around the globe to make measurements better. The SI is an institution of which the whole world can feel proud.

Happy World Metrology Day 🙂

Is a UK grid-scale battery feasible?

April 26, 2019

This is quite a technical article, so here is the TL;DR: it would make excellent sense for the UK to build a distributed battery facility to enable renewable power to be used more effectively.

=========================================

Energy generated from renewable sources – primarily solar and wind – varies from moment-to-moment and day-to-day.

The charts below are compiled from data available at Templar Gridwatch. They show the hourly, daily and seasonal fluctuations in solar and wind generation, plotted every 5 minutes, for (a) 30 days and (b) a whole year from April 21st 2018. Yes, that is more than 100,000 data points!

Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for 30 days following April 21st 2018. The annual average (~6 GW) is shown as a black dotted line.

Slide7

Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for the 365 days since April 21st 2018. The annual average (~6 GW) is shown as a black dotted line.

An average of 6 GW is a lot of power. But suppose we could store some of this energy and use it when we wanted to rather than when nature supplied it. In other words:

Why don’t we just build a big battery?

It turns out we need quite a big battery!

How big a battery would we need?

The graphs below show a nominal ‘demand’ for electrical energy (blue) and the electrical energy made available by the vagaries of nature (red) over periods of 30 days and 100 days respectively. I didn’t draw the whole-year graph because one cannot see anything clearly on it!

The demand curve is a continuous demand for 3 GW of electrical power with a daily peak demand of 9 GW. This choice of demand curve is arbitrary, but it represents the kind of contribution we would like to be able to get from any energy source – its availability would ideally follow typical demand.

Slide8

Slide9

We can see that the renewable supply already has daily peaks in spring and summer due to the solar energy contribution.

The role of a big battery would be to cope with the difference between demand and supply. The figures below show the difference between my putative demand curve and supply, over periods of 30 days and a whole year.

Slide10

Slide11

I have drawn black dotted lines showing when the difference between demand and supply exceeds 5 GW one way or another. In spring and summer this catches most of the variations. So let’s imagine a battery that could store or release energy at a rate of 5 GW.

What storage capacity would the battery need to have? As a guess, I have done calculations for a battery that could store or release 5 GW of generated power for 5 hours i.e. a battery with a capacity of 5 GW x 5 hours = 25 GWh. We’ll look later to see if this is too much or too little.

How would such a battery perform?

So, how would such a battery affect the ability of wind and solar to deliver a specified demand?

To assess this I used the nominal ‘demand’ I sketched at the top of this article – a demand for 3 GW continuously, but with a daily peak in demand of 9 GW – quite a severe challenge.

The two graphs below show the energy that would be stored in the battery for 30 days after 21 April 2018, and then for the whole following year.

  • When the battery is full then supply is exceeding demand and the excess is available for immediate use.
  • When the battery is empty then supply is simply whatever the elements have given us.
  • When the battery is in-between fully-charged and empty, then it is actively storing or supplying energy.

Slide12

Over 30 days (above) the battery spends most of its time empty, but over a full year (below), the battery is put to extensive use.

Slide13
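
For anyone who wants to reproduce these curves, here is a minimal sketch – in Python – of the book-keeping involved. It is not the exact code I used: it assumes the supply and demand are available as arrays of power (in GW) sampled every 5 minutes, as in the Gridwatch download, and the function name and defaults are mine.

import numpy as np

def battery_state(supply_gw, demand_gw, power_gw=5.0, capacity_gwh=25.0, dt_hours=5/60):
    """Track the energy stored in a battery that charges when supply exceeds
    demand and discharges when demand exceeds supply, limited by both its
    charge/discharge rate (power_gw) and its capacity (capacity_gwh)."""
    stored = 0.0                                    # battery starts empty (GWh)
    history = np.zeros(len(supply_gw))
    for i, (s, d) in enumerate(zip(supply_gw, demand_gw)):
        flow = np.clip(s - d, -power_gw, power_gw)  # GW into (+) or out of (-) the battery
        stored = np.clip(stored + flow * dt_hours, 0.0, capacity_gwh)
        history[i] = stored
    return history                                  # energy stored (GWh) at each time step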

How to measure performance?

To assess the performance of the battery I looked at how the renewable energy available last year would meet levels of constant demand from 1 GW up to 10 GW with different sizes of battery. I considered battery sizes from zero (no storage) in 5 GWh steps up to our 25 GWh battery. The results are shown below:

Slide15

It is clear that the first 5 GWh of storage makes the biggest difference.

Then I tried modelling several levels of variable demand: a combination of 3 GW of continuous demand with an increasingly large daily variation – up to a peak of 9 GW. This is a much more realistic demand curve.

Slide17

Once again the first 5 GWh of storage makes a big difference for all the demand curves and the incremental benefit of bigger batteries is progressively smaller.
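
The sweep over battery sizes can be done in the same spirit as the sketch above. The ‘fraction of demanded energy actually delivered’ is my choice of performance measure here – I am not claiming it is exactly the quantity plotted above, but it behaves in the same way:

def fraction_of_demand_met(supply_gw, demand_gw, power_gw=5.0, capacity_gwh=25.0, dt_hours=5/60):
    """Fraction of the total demanded energy that supply plus battery can deliver."""
    stored, met, wanted = 0.0, 0.0, 0.0
    for s, d in zip(supply_gw, demand_gw):
        if s >= d:                                            # surplus: meet demand, store the excess
            delivered = d
            charge = min(s - d, power_gw) * dt_hours
            stored = min(stored + charge, capacity_gwh)
        else:                                                 # shortfall: discharge to cover what we can
            discharge = min(d - s, power_gw, stored / dt_hours) * dt_hours
            stored -= discharge
            delivered = s + discharge / dt_hours
        met += delivered * dt_hours
        wanted += d * dt_hours
    return met / wanted

# e.g. constant 3 GW demand, battery capacity in 5 GWh steps:
# [fraction_of_demand_met(supply, [3.0] * len(supply), 5.0, c) for c in (0, 5, 10, 15, 20, 25)]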

So based on the above analysis, I am going to consider a battery with 5 GWh of storage – but able to charge or discharge at a rate of 5 GW. But here is the big question:

Is such a battery even feasible?

Hornsdale Power Reserve

The Hornsdale Power Reserve Facility occupies an area about the size of a football pitch. Picture from the ABC site

The biggest battery grid storage facility on Earth was built a couple of years ago in Hornsdale, Australia (Wiki Link, Company Site). It seems to have been a success (link).

Here are its key parameters:

  • It can store or supply power at a rate of 100 MW or 0.1 GW
    • This is 50 times smaller than our planned battery
  • It can store 129 MWh of energy.
    • This is just under 40 times smaller than our planned battery
  • Tesla were reportedly paid 50 million US dollars
  • It was supplied in 100 days.
  • It occupies an area about the size of a football pitch.

So why don’t we just build lots of similar things in the UK?

UK Requirements

So if we built 50 Hornsdale-sized facilities, the cost would be roughly 2.5 billion US dollars: i.e. about £2 billion.

If we could build 5 a year, our 5 GWh battery would be built in 10 years at a cost of around £200 million per year. This is a lot of money. But it is not a ridiculous amount of money in the context of National Grid infrastructure.
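
For the record, the scaling sums are simple (assuming a rough exchange rate of 1.25 US dollars to the pound):

hornsdale_cost_usd = 50e6                    # reported cost of one Hornsdale-sized facility
n_facilities = 50                            # enough for 5 GW of charge/discharge capability
total_usd = n_facilities * hornsdale_cost_usd     # 2.5 billion US dollars
total_gbp = total_usd / 1.25                      # about £2 billion (assumed exchange rate)
per_year_gbp = total_gbp / 10                     # built over 10 years: about £200 million per year
per_household = per_year_gbp / 30e6               # spread over 30 million UK households
print(f"£{per_household:.2f} per household per year")    # ≈ £6.70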

Why this might actually make sense

The key benefits of this kind of investment are:

  • It makes the most of all the renewable energy we generate.
    • By time-shifting the energy from when it is generated to when we need it, it allows renewable energy to be sold at a higher price and improves the economics of all renewable generation
  • The capital costs are predictable and, though large, are not extreme.
  • The capital generates an income within a year of commitment.
    • In contrast, a 3.2 GW nuclear power station like Hinkley Point C is currently estimated to cost about £20 billion but does not generate any return on investment for perhaps 10 years and carries a very high technical and political risk.
  • The plant lifetime appears to be reasonable and many elements of the plant would be recyclable.
  • If distributed into 50 separate Hornsdale-size facilities, the battery would be resilient against a single catastrophic failure.
  • Battery costs still appear to be falling year on year.
  • Spread across 30 million UK households, the cost is about £6 per year.

Conclusion

I performed these calculations for my own satisfaction. I am aware that I may have missed things, and that electrical grids are complicated, and that contracts to supply electricity are of labyrinthine complexity. But broadly speaking – more storage makes the grid more stable.

I can also think of some better modelling techniques. But I don’t think that they will affect my conclusion that a grid scale battery is feasible.

  • It would occupy about 50 football pitches worth of land spread around the country.
  • It would cost about £2 billion, about £6 per household per year for 10 years.
    • This is one tenth of the current projected cost of the Hinkley Point C nuclear power station.
  • It would begin to deliver benefits as soon as construction began, and the benefits would improve as the facility grew.

But I cannot comment on whether this makes economic sense. My guess is that when it does, it will be done!

Resources

Data came from Templar Gridwatch

 

Cloud in a bottle!

March 22, 2019

One of the best parts of the FREE! ‘Learn About Weather’ course was the chance to make a cloud in a bottle. Here’s my video!

The demonstration involves squeezing a bottle partly filled with water and then letting go. One can see a cloud form as one lets go, and then disappear again when one squeezes. Wow!

But there is a trick! You need to drop a burning match into the bottle first!

Heterogeneous versus homogeneous nucleation

How does the smoke make the trick work? It’s to do with the way droplets form – a process called nucleation.

There are two ways for droplets to nucleate. An easy way and a hard way. But those words are too short for scientists. Instead we call them heterogeneous and homogeneous nucleation!

  • ‘Heterogeneous nucleation’ means that the water droplets in a cloud form around dust or smoke particles. The ‘hetero-’ prefix means ‘different’, because there is more than one type of entity involved in forming droplets – dust and water.
  • ‘Homogeneous nucleation’ means that the water droplets in a cloud form spontaneously without any other type of particle being present. The ‘homo-’ prefix means ‘the same’, because there is just one substance present – water.

The experiment shows that hetero-gen-e-ous nucleation is dramatically easier than homo-gen-e-ous nucleation. And in reality – in real clouds – practically all droplet formation is heterogeneous – involving dust particles.

The reason is easy to appreciate.

  • To form a tiny droplet by homogeneous nucleation requires a few water molecules to meet and stick together. It’s easy to imagine three or four molecules might do this, but as new molecules collide, some will have higher than average energy and tend to break the proto-droplet apart.
  • But a dust or smoke particle, though small by human standards (about 0.001 mm in diameter), is roughly 10,000 times larger than individual molecules. So its surface provides billions of locations for water molecules to stick. So when the average energy of the water molecules is at the appropriate level to form a liquid, the water molecules can quickly stick to the surface and cause a droplet to grow.

How big is the temperature change?

Squeezing the bottle compresses the air quickly – in much less than 1 second. Because air is a poor conductor of heat, there is no time for the heat of compression to flow from the gas into the walls and the water (that takes a few seconds), and so the air warms transiently.

I was curious about the size of the temperature change that brought about this cloud formation.

I calculated that if the air in the bottle changed volume by 5%, there should be a temperature change of around 6 °C – really quite large!
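
The sum behind that estimate is a reversible adiabatic compression of air, for which T × V^(γ−1) stays constant. Here is a sketch, assuming γ ≈ 1.4 for air and a starting temperature of 20 °C:

gamma = 1.4              # ratio of specific heats for air (diatomic gas)
T1 = 293.15              # starting temperature in kelvin (20 °C)
squeeze = 0.05           # fractional reduction in volume when the bottle is squeezed

# For a fast (adiabatic) compression T * V**(gamma - 1) is constant,
# so T2 = T1 * (V1 / V2)**(gamma - 1)
T2 = T1 * (1 / (1 - squeeze)) ** (gamma - 1)
print(f"Temperature rise ≈ {T2 - T1:.1f} °C")   # ≈ 6 °C for a 5% change in volume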

Squeezing the bottle warms the air rapidly – and then over a few seconds the temperature slowly returns to the temperature of the walls of the bottle and the water.

If one lets go at this point, the volume increases by an equivalent amount and the temperature transiently falls below ambient. It is this fall which is expected to precipitate the water droplets.

To get the biggest temperature change one needs a large fractional change in volume. I couldn’t do the calculation of the optimum filling fraction so I did an experiment instead.

I poked a thin thermocouple through a bottle top and made it air tight using lots of epoxy resin.

Bottle

I then squeezed the bottle and measured the maximum temperature rise. The results are shown below.

Delta T versus Filling Fraction

The results indicate that for a bottle filled to around three quarters with water, the temperature change is about 6 °C.

But as you can see in the video – it takes a few seconds to reach this maximum temperature, so I suspect the instantaneous change in air temperature is much larger, but that even this small thermocouple takes a couple of seconds to warm up.

Happy Experimenting

The Met office have more cloud forming tricks here.

 

 

 

Learning about weather

March 17, 2019

I have just completed a FREE! ‘Learn About Weather‘ course, and slightly to my surprise I think I have learned some things about the weather!

Learning

Being an autodidact in the fields of Weather and Climate, I have been taught by an idiot. So ‘attending’ online courses is a genuine pleasure.

All I have to do is to listen – and re-listen – and then answer the questions. Someone else has selected the topics they feel are most important and determined the order of presentation.

Taking a course on-line allows me to expose my ignorance to no-one but myself and the course-bot. And in this low-stress environment it is possible to remember the sheer pleasure of just learning stuff.

Previously I have used the FutureLearn platform, for courses on Global Warming, Soil, and Programming in Python. These courses have been relatively non-technical and excellent introductions to subjects of which I have little knowledge. I have also used the Coursera platform for a much more thorough course on Global Warming.

So what did I learn? Well, several things about why Global Circulation Cells are the size they are, the names of the clouds, and how tornadoes start to spin. But perhaps the best bit was finally getting my head around ‘weather fronts’.

Fronts: Warm and Cold

I had never understood the terms ‘warm front’ and ‘cold front’ on weather forecasts. I had looked at the charts with the isobars and thought that somehow the presence or absence of ‘a front’ could be deduced by the shapes of the lines. I was wrong. Allow me to try to explain my new insight.

Air Mixing

Air in the atmosphere doesn’t mix like air in a room. Air in a room generally mixes quite thoroughly and quite quickly. If someone sprays perfume in one corner of the room, the perfume spreads through the air quickly.

But on a global scale, air doesn’t mix quickly. Air moves around as ‘big blobs’ and mixing takes place only where the blobs meet. These areas of mixing between air in different blobs are called ‘fronts’.

Slide1

In the ‘mixing region’ between the two blobs, the warm – generally wet – air meets the cold air and the water vapour condenses to make clouds and rain. So fronts are rain-forming regions.

Type of front

However it is unusual for two blobs of air to sit still. In general one ‘blob’ of air is ‘advancing’ and the other is ‘retreating’.

This insight was achieved just after the First World War and so the interfaces between the blobs were referred to as ‘fronts’ after the name for the interface between fighting armies. 

  • If the warm air is advancing, then the front is called a warm front, and
  • if the cold air is advancing, then the front is called a cold front.

Surprisingly cold fronts and warm fronts are quite different in character.

Warm Fronts 

When a blob of warm air advances, because it tends to be less dense than the cold air, it rises above the cold air.

Thus the mixing region extends ahead of the location on the ground where the temperature of the air will change.

The course told me the slope of the mixing region was shallow, as low as 1 in 150. So as the warm air advances, there is a region of low, rain-forming cloud that can extend for hundreds of kilometres ahead of it.

Slide2

So on the ground, what we experience is hours of steady rain, and then the rain stops as the temperature rises.

Cold Fronts 

When a blob of cold air advances, because it tends to be more dense than the warm air, it slides below it. But sliding under an air mass is harder than gliding above it – I think this is because of friction with the ground.

As a result there is a steep mixing region which extends a little bit ahead, and a short distance behind the location on the ground where the temperature of the air changes.

Slide3

So as the cold air advances, there is a region of intense rain just before and for a short time after.

So on the ground what we experience are stronger, but much shorter, rain events at just about the same time as the temperature falls. There generally follows some clearer air – at least for a short while.

Data

I had assumed that because of the messy nature of reality compared to theory, real weather data would look nothing like what the simple models above might lead me to expect. I was wrong!

As I was learning about warm and cold fronts last weekend (10 March 2019), by chance I looked at my weather station data and there – in a single day – was evidence of what I was learning: a warm front passing over at about 6:00 a.m. and then a cold front passing over at about 7:00 p.m.

  • You can look at the data from March 10th and zoom in using this link to Weather Underground.

This is the general overview of the air temperature, humidity, wind speed, rainfall and air pressure data. The left-hand side represents midnight on Saturday/Sunday and the right-hand side represents midnight on Sunday/Monday.

Slide4

The warm front approaches overnight and reaches Teddington at around 6:00 a.m.:

  • Notice the steady rainfall from midnight onwards, and then as the rain eases off, the temperature rises by about 3 °C within half an hour.

The cold front reaches Teddington at around 7:00 p.m.:

  • There is no rain in advance of the front, but just as the rain falls – the temperature falls by an astonishing 5 °C!

Slide5

Of course there is a lot of other stuff going on. I don’t understand how these frontal changes relate to the pressure changes and the sudden rise and fall of the winds as the fronts pass.

But I do feel I have managed to link what I learned on the course to something I have seen in the real world. And that is always a good feeling.

P.S. Here’s what the Met Office have to say about fronts…

Global Oxygen Depletion

February 4, 2019

While browsing over at the two degrees institute, I came across this figure for atmospheric oxygen concentrations measured at a station at the South Pole.

Graph 1

The graph shows the change in:

  • the ratio of oxygen to nitrogen molecules in samples of air taken at a particular date

to

  • the ratio of oxygen to nitrogen molecules in samples of air taken in the 1980’s.

The sentence above is complicated, but it can be interpreted without too many caveats as simply the change in oxygen concentration in air measured at the South Pole.

We see an annual variation – the Earth ‘breathing’ – but more worryingly we see that:

  • The amount of oxygen in the atmosphere is declining.

It’s a small effect, and will only reach a 0.1% decline – 1000 parts per million – in 2035 or so. So it won’t affect our ability to breathe. Phewww. But it is nonetheless interesting.

Averaging the data from the South pole over the years since 2010, the oxygen concentration appears to be declining at roughly 25 parts per million per year.

Why?

The reason for the decline in oxygen concentration is that we are burning carbon to make carbon dioxide…

C + O2 = CO2

…and as we burn carbon, we consume oxygen.

I wondered if I could use the measured rate of decline in oxygen concentration to estimate the rate of emission of carbon dioxide.

How much carbon is that?

First I needed to know how much oxygen there was in the atmosphere. I considered a number of ways to calculate that, but it being Sunday, I just looked it up in Wikipedia. There I learned that the atmosphere has a mass of about 5.15×10¹⁸ kg.

I also learned the molar fractional concentration of the key gases:

  • nitrogen (molecular weight 28): 78.08%
  • oxygen (molecular weight 32): 20.95%
  • argon (molecular weight 40): 0.93%

From this I estimated that the mass of 1 mole of the atmosphere was 0.02896 kg/mol. And so the mass of the atmosphere corresponded to…

5.15×10¹⁸ / 0.02896 = 1.78×10²⁰

…moles of atmosphere. This would correspond to roughly…

1.78×10²⁰ × 0.2095 = 3.73×10¹⁹

…moles of oxygen molecules. This is the number that appears to be declining by 25 parts per million per year i.e.

3.73×10¹⁹ × 0.000 025 = 9.32×10¹⁴

…moles of oxygen molecules are being consumed per year. From the chemical equation, this must correspond to exactly the same number of moles of carbon: 9.32×10¹⁴. Since 1 mole of carbon weighs 12 g, this corresponds to…

  • 1.12×10¹⁶ g of C,
  • 1.12×10¹³ kg of C
  • 1.12×10¹⁰ tonnes of C
  • 11.2 gigatonnes (Gt) of C
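
For anyone who wants to check the arithmetic, here it is as a short script using the same input numbers:

mass_atmosphere = 5.15e18       # mass of the atmosphere in kg (Wikipedia)
molar_mass_air = 0.02896        # kg per mole of air, from the composition above
o2_fraction = 0.2095            # molar fraction of oxygen
decline_per_year = 25e-6        # oxygen declining by ~25 parts per million per year

moles_air = mass_atmosphere / molar_mass_air      # ≈ 1.78e20 mol
moles_o2 = moles_air * o2_fraction                # ≈ 3.73e19 mol
moles_o2_lost = moles_o2 * decline_per_year       # ≈ 9.3e14 mol per year

# C + O2 -> CO2 : one mole of carbon burned per mole of oxygen consumed
carbon_kg = moles_o2_lost * 0.012                 # 12 g per mole of carbon
print(f"Implied carbon burned ≈ {carbon_kg / 1e12:.1f} Gt per year")   # ≈ 11.2 Gt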

Looking up published sources, I obtained the following estimate for global carbon emissions, which indicates that emissions are currently running at about 10 Gt of carbon per year.

Carbon Emissions

Analysis

So Wikipedia tells me that humanity emits roughly 10 Gt of carbon per year, but based on measurements at the South pole, we infer that 11.2 Gt of carbon per year is being emitted and consuming the concomitant amount of oxygen. Mmmmm.

First of all, we notice that these figures actually agree within roughly 10%. Which is pleasing.

  • But what is the origin of the disagreement?
  • Could it be that the data from the South Pole is not representative?

I downloaded data from the Scripps Institute for a number of sites and the graph below shows recent data from Barrow in Alaska alongside the South Pole data. These locations are roughly half a world – about 20,000 km – apart.

Graph 2

Fascinatingly, the ‘breathing’ parts of the data are out of phase! Presumably this arises from the phasing of summer and winter in the northern and southern hemispheres.

But significantly the slopes of the trend lines differ by only 1%. So global variability doesn’t seem able to explain the 10% difference between the rate of carbon burning predicted from the decline of atmospheric oxygen (11.2 Gt C per year), and the number I got off Wikipedia (10 Gt C per year).

Wikipedia’s number was obtained from the Carbon Dioxide Information and Analysis Centre (CDIAC), which bases its estimate on national statistics of stated oil, gas and coal consumption.

My guess is that there is considerable uncertainty – on the order of a few percent –  on both the CDIAC estimate, and also on the Scripps Institute estimates. So agreement at the level of about 10% is actually – in the context of a blog article – acceptable.

Conclusions

My conclusion is that – as they say so clearly over at the two degrees project – we are in deep trouble. Oxygen depletion is actually just an interesting diversion.

The most troubling graph they present shows

  • the change in CO2 concentration over the last 800,000  years, shown against the left-hand axis,

alongside

  • the estimated change in Earth’s temperature over the last 800,000  years, shown  along the right-hand axis.

The correlation between the two quantities is staggering, and the conclusion is terrifying.

chart

We’re cooked…

 

The Death Knell for SI Base Units?

January 30, 2019

I love the International System of Units – the SI. 

Rooted in humanity’s ubiquitous need to measure things, the SI represents a hugely successful global human enterprise – a triumph of cooperation over competition, and accord over discord.

Day-by-day it enables measurements made around the world to be meaningfully compared with low uncertainty. And by doing this it underpins all of the sciences, every branch of engineering, and trade.

But changes are coming to the SI, and even after having worked on these changes for the last 12 years or so, in my recent reflections I have been surprised at how profound the changes will be.

Let me explain…

The Foundations of the SI

The SI is built upon the concept of ‘base units’. Unit amounts of any quantity are defined in terms of combinations of unit quantities of just a few ‘base units’. For example:

  • The SI unit of speed is the ‘metre per second’, where one metre and one second are the base units of length and time respectively.
    • The ‘metre per second’ is called a derived unit.
  • The SI unit of acceleration is the ‘metre per second per second’
    • Notice how the same base units are combined differently to make this new derived unit.
  • The SI unit of force  is the ‘kilogram metre per second per second’.
    • This is such a complicated phrase that this derived unit is given a special name – newton. But notice that it is still a combination of base units.

And so on. All the SI units required for science and engineering can be derived from just seven base units: the kilogram, metre, second, ampere, kelvin, mole and candela.

So these seven base units in a very real sense form the foundations of the SI.

The seven base units of the SI

This Hierarchical Structure is Important.

Measurement is the quantitative comparison of a thing against a standard.

So, for example, when we measure a speed, we are comparing the unknown speed against our unit of speed which in the SI is the metre per second.

So a measurement of speed can never be more accurate than our ability to create a standard speed – a known number of ‘metres per second‘ – against which we can compare our unknown speed.

FOR EXAMPLE: Imagine calibrating a speedometer in a car. The only way we can know if it indicates correctly is if we can check the reading of the speedometer when the car is travelling at a known speed – which we would have to verify with measurements of distance (in metres) and time (in seconds).

To create a standard speed, we need to create known distances and known time intervals. So a speed can never be more accurately known than our ability to create standard ‘metres’ and ‘seconds’.

So the importance of the base units is that the accuracy with which they can be created represents a limit to the accuracy with which we could conceivably measure anything! Or at least anything expressed in terms of derived unit quantities in the SI.

This fact has driven the evolution of the SI. Since its founding in 1960, the definitions of what we mean by ‘one’ of the base units have changed only rarely. And the aim has always been the same – to create definitions which will allow more accurate realisations of the base units. This improved accuracy would then automatically affect all the derived units in the SI.

Changes are coming to the SI.

In my earlier articles (e.g. here) I have mentioned that on 20th May 2019 the definition of four of the base units will change. Four base units changing at the same time!? Radical.

Much has been made of the fact that the base units will now be defined in terms of constants of nature. And this is indeed significant.

But in fact I think the re-definitions will lead to a broader change in the structure of the SI.

Eventually, I think they will lead to the abandonment of the concept of a ‘base unit’, and the difference between ‘base‘ units and ‘derived‘ units will slowly disappear.

The ‘New’ SI.

si illustration only defining constants full colour

The seven defining constants of the ‘New’ SI.

In the ‘New’ SI, the values of seven natural constants have been defined to have exact values with no measurement uncertainty.

These are constants of nature that we had previously measured in terms of the SI base units. The choice to give them an exact value is based on the belief – backed up by experiments – that the constants are truly constant!

In fact, some of the constants appear to be the most unchanging features of the universe that we have ever encountered.

Here are four of the constants that will have fixed numerical values in the New SI:

  • the speed of light in a vacuum, conventionally given the symbol c,
  • the frequency of microwaves absorbed by a particular transition in Caesium atoms, conventionally given the symbol ΔνCs. (This funny vee-like symbol ν is the Greek letter ‘n’, pronounced ‘nu’.)
  • the Planck constant, conventionally given the symbol h,
  • the magnitude of the charge on the electron, conventionally given the symbol e.

Electrical Units in the ‘Old’ SI and the ‘New’ SI.

In the Old SI the base unit referring to electrical quantities was the ampere.

If one were to make a measurement of a voltage (in the derived unit volt) or electrical resistance (in the derived unit ohm), then one would have to establish a sequence of comparisons that would eventually refer to combinations of base units. So:

  • one volt was equal to one kg m² s⁻³ A⁻¹ (or one watt per ampere)
  • one ohm was equal to one kg m² s⁻³ A⁻² (or one volt per ampere)

Please don’t be distracted by this odd combination of seconds, metres and kilograms. The important thing is that in the Old SI, volts and ohms were derived units with special names.

To make ‘one volt’ one needed experiments that combined the base units for the ampere, the kilogram, the second and the metre in a clever way to create a voltage known in terms of the base units.

But in the New SI things are different.

  • We can use an experiment to create volts directly in terms of the exactly-known constants ΔνCs×h/e.
  • And similarly we can create resistances directly in terms of the exactly-known constants h/e².

Since h and e and ΔνCs have exact values in the New SI, we can now create volts and ohms without any reference to amperes or any other base units.
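
As a concrete illustration – my own numerical check rather than anything from the official SI texts – the combinations used to realise electrical units, the Josephson constant (2e/h) and the von Klitzing constant (h/e²), now follow exactly from the fixed values of h and e:

h = 6.626_070_15e-34     # Planck constant in J s, exact in the New SI
e = 1.602_176_634e-19    # elementary charge in C, exact in the New SI

K_J = 2 * e / h          # Josephson constant: converts a frequency to a voltage
R_K = h / e**2           # von Klitzing constant: the quantum Hall resistance

print(f"K_J = {K_J:.6e} Hz/V")    # ≈ 4.835978e14 Hz/V
print(f"R_K = {R_K:.3f} ohm")     # ≈ 25812.807 ohm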

This change is not just a detail. In an SI based on physical constants with exactly-known values, the ability to create accurate realisations of units no longer discriminates between base units and derived units – they all have the same status.

It’s not just electrical units

Consider the measurement of speed that I discussed earlier.

In the Old SI we would measure speed in derived units of metres per second i.e. in terms of the base units the metre and the second. And so we could never measure a speed with a lower fractional uncertainty than we could realise the composite base units, the metre or the second.

But in the New SI,

  • one metre can be realised in terms of the exactly-known constants c /ΔνCs
  • one second can be realised in terms of the exactly-known constant ΔνCs

So as a consequence,

  • one metre per second can be realised in terms of the exactly-known constant c

Since these constants are all exactly known, there is no reason why speeds in metres per second cannot be measured with an uncertainty which is lower than or equal to the uncertainty with which we can measure distances (in metres) or times (in seconds).

This doesn’t mean that it is currently technically possible to measure speeds with lower uncertainty than distances or times. What it means is that there is now nothing in the structure of the SI that would stop that being the case at some point in the future.

Is this good or bad?

So in the new SI, any unit – a derived unit or a base unit – can be expressed in terms of  exactly-known constants. So there will no longer be any intrinsic hierarchy of uncertainty in the SI.

On 20th May 2019 as the new system comes into force, nothing will initially change. We will still talk about base units and derived units.

But as measurement science evolves, I expect that – as is already the case for electrical units – the distinction between base units and derived units will slowly disappear.

And although I feel slightly surprised by this conclusion, and slightly shocked, it seems to be only a good thing – making the lowest uncertainty measurements available in the widest possible range of physical quantities.

Weather Station Comparison

January 7, 2019
img_7898

My new weather station is on the top left of the picture. The old weather station is in the middle of the picture on the right.

Back in October 2015 I installed a weather station at the end of my back garden and wrote about my adventures at length (Article 1 and Article 2)

Despite costing only £89, it was wirelessly linked to a computer in the house which uploaded data to weather aggregation sites run by the Met Office and Weather Underground. Using these sites, I could compare my readings with stations nearby.

I soon noticed that my weather station seemed to report temperatures which tended to be slightly higher than other local stations. Additionally, I noticed that as sunshine first struck the station in the morning, the reported temperature seemed to rise suddenly, indicating that the thermometer was being directly heated by the sunlight rather than sensing the air temperature.

So I began to think that the reported temperatures might sometimes be in error. Of course, I couldn’t prove that because I didn’t have a trusted weather station that I could place next to it.

So in October 2018 I ordered a new Youshiko Model YC9390 weather station, costing a rather extravagant £250.

Youshiko YC9390

The new station is – unsurprisingly – rather better constructed than the old one. It has a bigger, brighter internal display and it links directly to Weather Underground via my home Wi-Fi and so does not require a PC. Happily it is possible to retrieve the data from Weather Underground.

The two weather stations are positioned about 3 metres apart and at slightly different heights, but in broad terms, their siting is similar.

Over the last few days of the New Year break, and the first few days of my three-day week, I took a look at how the two stations compared. And I was right! The old station is affected by sunshine – and the effect is significantly larger than I had suspected.

Comparison 

I compared the temperature readings of the two stations over the period January 4th, 5th and 6th. The 4th was a bright, almost cloudless, cold winter day. The other two days were duller, but warmer, and all three days were almost windless.

The graphs below (all drawn to the same scale) show the data from each station versus time-of-day with readings to be compared against the left-hand axis.

Let’s look at the data from the 4th January 2019

4th January 2019

Data from the 4th January 2019. The red curve shows air temperature data from the old station and the blue curve shows data from the new station. Also shown in yellow is data showing the intensity of sunshine (to be read from the right-hand axis) taken from a station located 1 km away.

Two things struck me about this graph:

  • Firstly, I was surprised by the agreement between the two stations during the night. Typically the readings are within ±0.2 °C, with no obvious offset.
  • Secondly, I was shocked by the extent of the over-reading. At approximately 10 a.m. the old station was over-reading by more than 4 °C!

To check that this was indeed a solar effect I downloaded data from a weather station used for site monitoring at NPL – just over a kilometre away from my back garden.

This station is situated on top of the NPL building and the intensity of sunlight there will not be directly applicable to the intensity of sunshine in my back garden. But hopefully, it is indicative.

The solar intensity reached just over 200 watts per square metre, about 20% of the solar intensity on a clear midsummer day. And it clearly correlated with the magnitude of the excess heating.

Let’s look at the data from the 5th January 2019

slide2

Data from 5th January 2019. See previous graph and text for key.

The night-time 5th January data also shows agreement between the two stations as was seen on the 4th January.

However I was surprised to see that even on this dismally dull January day – with insolation failing to reach even 100 watts per square metre – there was a noticeable warming of the old station, amounting to typically 0.2 °C.

The timing of this weak warming again correlated with the recorded sunlight.

Finally let’s look at data from 6th January 2019

slide3

Data from 6th January 2019. See previous graph and text for key.

Once again the pleasing night-time agreement between the two station readings is striking.

And with an intermediate level of solar intensity the over-reading of the old station is less than on the 4th, but more than on the 5th.

Wind.

I chose these dates for a comparison because on all three days wind speeds were low. This exacerbates the solar heating effect and makes it easier to detect.

The figures below show the same temperature data as in the graphs above, but now with the wind speed data plotted in green against the right-hand axis.

Almost every wind speed reading was 0 kilometres per hour, and during the nights there were only occasional flurries. During the days there were slightly more frequent flurries, but to a pedestrian the days seemed windless.

slide4

Data from 4th of January 2019 now showing wind speed on the right-hand axis.

slide5

Data from 5th of January 2019 now showing wind speed on the right-hand axis.

slide6

Data from the 6th January 2019 showing wind speed against the right-hand axis.

Conclusions 

My conclusion is that the new weather station shows a much smaller solar-heating effect than the old one.

It is unlikely that the new station is itself perfect. In fact there is no accepted procedure for determining what the ‘right answer’ is in a meteorological setting!

The optimal air temperature measurement strategy is usually to use a fan to suck air across a temperature sensor at a steady speed of around 5 metres per second – roughly 18 kilometres per hour! But stations that employ such arrangements are generally quite expensive.

Anyway, it is pleasing to have resolved this long-standing question.

Where to see station data

On Weather Underground the station ID is ITEDDING4 and its readings can be monitored using this link.

The Weather Underground ‘Wundermap’ showing worldwide stations can be found here. On a large scale the map shows local averages of station data, but as you zoom in, you can see the individual reporting stations.

The Met Office WOW site is here. Search on ‘Teddington’ if you would like to view the station data.

Getting off our eggs!

January 5, 2019

While listening to the radio last week, I heard a description of an astonishing experiment, apparently well known since the 1930’s, but new to me.

Niko Tinbergen conducted experiments in which he replaced birds’ eggs with replicas and then studied how the birds responded to differently-sized replicas with modified markings.

Deirdre Barrett describes the results in her book Supernormal Stimuli:

Song birds abandoned their pale blue eggs dappled with grey to hop on black polka-dot day-glo blue dummies so large that the birds constantly slid off and had to climb back on.

Hearing this for the first time, I was shocked. But the explanation is simple enough.

The birds are hard-wired to respond to egg-like objects with specific patterns, and Tinbergen’s modified replicas triggered the nesting response more strongly than the bird’s own eggs.

Tinbergen coined the term ‘super-normal stimulus’ to describe stimuli that exceeded anything conceivable in the natural world.

Deirdre Barrett uses this shocking experimental result to reflect on some human responses to what are effectively super-normal stimuli in the world around us.

Using this insight, she points out that many of our responses are as simple and self-harming as the birds’ responses to the replica eggs.

The Book

In her short book Barrett writes clearly, makes her point, and then stops. It was a pleasure to read.

I will not attempt to replicate her exposition, but I was powerfully struck by the sad image of a bird condemned to waste its reproductive energy on a plaster egg, when its own eggs lay quietly in view.

I found it easy to find analogous instinctive self-harming patterns in my own life. Surely we all can.

But Barrett does not rant. She is not saying that we are all going to hell in a handcart.

She makes the point that super-normal stimuli are not necessarily negative. The visual arts, dance, music, theatre and literature can all be viewed as tricks/skills to elicit powerful and even life-changing responses to non-existent events.

In discussing television, her point is not that television is ‘bad’ per se, but that the intensity and availability of vicarious experiences exceeds anything a normal person is likely to encounter in real life.

If watching television enhances, educates and inspires, then great. But frequently we respond to the stimuli by just seeking more. In the UK on average we watch more than four hours of television per day.

Four Hours

Such a massive expenditure of time is surely the equivalent of sitting on a giant plaster egg.

Barrett’s key point is the ubiquity of these super-normal stimuli in modern life – stimuli with which our instincts are ill-equipped to cope.

A rational response feels ‘un-natural’ because it requires conscious thought and reflection. For example, rather than just feeling that an image is ‘cute’, are we able to notice our own response and ask why someone might use caricatures which elicit a ‘cute’ response?

Barrett ends by pointing out that we humans are the only animals that can notice that we are sitting on metaphorical polka-dotted plaster eggs.

Even in adult life, having sat on polka-dotted plaster eggs for many years, we can come to an understanding that will allow us to get off the egg, reflect on the experience, and get on with something more meaningful.

I am clambering off some eggs as I write.

1000 days of weighing myself

January 1, 2019

Thank you for stopping by: Happy New Year!

Well, it’s the first of January 2019 and I am filled with trepidation as I begin a new phase of my career: working 3 days a week.

My plan is to take things one day at a time, and take heart from the fact that it will take only three days to go from one weekend to the next!

Weight

Anyway, the turn of the year means it is time to look at my weight again.

The graph below shows my weight in 2018 based on daily weighings. Also shown are monthly averages (green squares) with error bars drawn at ± one standard deviation. The red dotted line shows the yearly average.

My weight through 2018 based on daily weighings. Monthly averages are shown as green squares and the yearly average is shown as a dotted line.

My specific aim for 2018 had been to stay the same weight, and I did not quite manage that: the December monthly average was 0.5 kg above the January monthly average.

And it is clear that there was weight loss through the first part of the year and weight gain through the second part of the year.

But looking on the broader scale, the changes aren’t very significant.

The graph below shows the data for the last three years. This amounts to just over 1000 days or roughly one twentieth of my life.

Slide2

When viewed on this larger scale there doesn’t seem to be much to fuss about.

But I am sure that if I had not weighed myself daily, my weight would have crept back on considerably faster than it has.

A weight gain of 1 kg over one year represents an energy imbalance of only about 20 kilocalories of food per day – which is less than a mouthful of almost any food worth eating! (Link)
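
For the record, here is the sum behind that figure, assuming the usual rule-of-thumb that a kilogram of body fat stores roughly 7,700 kilocalories:

energy_per_kg = 7700          # kilocalories stored in 1 kg of body fat (rule-of-thumb assumption)
weight_gain_kg = 1.0          # weight gained over the year
days = 365

imbalance = weight_gain_kg * energy_per_kg / days
print(f"Daily energy imbalance ≈ {imbalance:.0f} kcal")   # ≈ 21 kcal per day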

Anyway. Another year of weighing begins!

 

Christmas Bubbles

December 23, 2018
Champagne Time Lapse

A time-lapse photograph of a glass of fizzy wine.

Recently I encountered the fantastic:

Effervescence in champagne and sparkling wines:
From grape harvest to bubble rise

This is a 115-page review article by Gérard Liger-Belair about bubbles in Champagne, my most favourite type of carbon dioxide emission.

Until January 30th 2019 it is freely downloadable using this link

Since the bubbles in champagne arguably add £10 to the price of a bottle of wine, I guess it is worth understanding exactly how that value is added.

I found GLB’s paper fascinating with a delightful attention to detail. From amongst the arcane studies in the paper, here are three things I learned.

Thing 1: Amount of Gas

Champagne (and Prosecco and Cava) have about 9 grams of carbon dioxide in each 750 ml bottle [1].

Since the molar mass of carbon dioxide is 44 g, each bottle contains approximately 9/44 ~ 0.2 moles of carbon dioxide.

If released as gas at atmospheric pressure and 10 °C, it would have a volume of approximately 4.75 litres – more than six times the volume of the bottle!
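
That 4.75 litre figure is just the ideal gas law. Here is a quick check, assuming all 9 grams come out of solution at 10 °C and atmospheric pressure:

R = 8.314           # gas constant in J/(mol K)
mass_co2 = 9.0      # grams of dissolved carbon dioxide per 750 ml bottle
molar_mass = 44.0   # grams per mole of CO2
T = 283.15          # 10 °C in kelvin
P = 101_325         # atmospheric pressure in Pa

n = mass_co2 / molar_mass          # ≈ 0.20 mol
V = n * R * T / P                  # ideal gas law: volume in cubic metres
print(f"Gas volume ≈ {V * 1000:.2f} litres")   # ≈ 4.75 litres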

This large volume of gas is said to be “dissolved” in the wine. The molecules can only leave when, by chance, they encounter the free surface of the wine.

Because the free-surface area of wine in a wine glass is usually larger than the combined surface area of bubbles, about 80% of the de-gassing happens through the liquid surface [2].

Thing 2: Bubble Size and Speed 

But fizzy wine is called “fizzy” because of the bubbles that seem to ceaselessly form on the inner surface of the glass.

Sadly, in a perfectly clean glass, such as one which has repeatedly been through a dishwasher, very few bubbles will form [3].

But if there are tiny cracks in the glass, or small specks of dust from, for example, a drying cloth, then these can trap tiny air bubbles and provide free-surfaces at which carbon dioxide can leave the liquid.

At first a bubble is just tens of nanometres in size, but it grows at a rate which depends upon the rate at which carbon dioxide enters the bubble.

As the bubble grows, its surface area increases allowing the rate at which carbon dioxide enters the bubble to increase.

Eventually the buoyancy of the bubble causes it to detach from its so-called ‘nucleation site’ (birthplace) and rise through the liquid.  This typically happens when bubbles are between 0.01 and 0.1 mm in diameter.

To such tiny bubbles, the wine is highly viscous, and at first the bubbles rise slowly. But as more carbon dioxide enters the bubble, the bubble grows [4] and its speed of rise increases. The rising speed is close to the so-called ‘Stokes’ terminal velocity. [5]

So when you look at a stream of bubbles you will see that at the bottom, the bubbles are small and close together and relatively slow-moving. As they rise through the glass, they grow, and their speed increases.

If you can bear to leave your glass undrunk for long enough, you should be able to see the rate of bubble formation slow as the carbon dioxide concentration falls.

This will be visible as an increase in the spacing of bubbles near the nucleation site of a rising ‘bubble train’.

Thing 3: Number of bubbles

Idle speculation often accompanies the consumption of fizzy wine.

And one common topic of speculation is the number of bubbles which can be formed in a glass of champagne [6]. We can now add to that speculation.

If a bubble has a typical diameter of approximately 1 mm as it reaches the surface, then each bubble will have a volume of approximately 0.5 cubic millimetres, or 0.000 5 millilitres.

So the 4.75 litres of carbon dioxide in a bottle could potentially form 4750/0.0005 = 9.5 million bubbles per bottle!

If a bottle is used for seven standard servings then there are potentially 1.3 million bubbles per glass.

In fact the number is generally smaller than this because as the concentration of carbon dioxide in the liquid falls, the rate of bubble formation falls also. And below approximately 4 grams of carbon dioxide per litre of wine, bubbles cease to form [7].

Thing 4: BONUS THING! Cork Speed

When the bottle is sealed there is a high pressure of carbon dioxide in the space above the wine. The pressure depends strongly on temperature [8], rising from approximately 5 atmospheres (500 kPa) if the bottle is opened at 10 °C to approximately 10 atmospheres (1 MPa) if the bottle is opened at 25 °C.

GLB uses high-speed photography to measure the velocity of the exiting cork, and gets results which vary from around 10 metres per second for a bottle at 4 °C to 14 metres per second for a bottle at 18 °C. [9]

I made my own measurements using my iPhone (see below) and the cork seems to move roughly 5 ± 2 cm in the 1/240th of a second between frames. So my estimate of the speed is about 12 ± 5 metres per second, roughly in line with GLB’s estimates.

Why this matters

When we look at absolutely any phenomenon, there is a perspective from which that phenomenon – no matter how mundane or familiar – can appear profound and fascinating.

This paper has opened my eyes, and I will never look at a glass of Champagne again in quite the same way.

Wishing you happy experimentation over the Christmas break.

Santé!

References

[1] Page 8 Paragraph 2

[2] Page 85 Section 6.3

[3] Page 42 Section 5.2

[4] Page 78 Figure 59

[5] Page 77 Figure 58

[6] Page 84 Section 6.3 & Figure 66

[7] Page 64

[8] Page 10 Figure 3

[9] Page 24 Figure 16

