Archive for the ‘The Future’ Category

Why does heating my house require 280 watts per degree Celsius above ambient?

August 18, 2019

Previously I explained how I learned that for each degree Celsius the outside temperature falls below 20 °C, it takes 280 watts of heating to keep my house at 20 °C.

In order to provide this heating, I burn gas which last winter resulted in the emission of around 17 kg of carbon dioxide per day – around 2.5 tonnes in all.

I would really like to reduce this shameful figure, but I have only finite resources. In order to act I need to know where best to spend my money.

In this article I will explain how I came to understand the relative significance of the windows, roof and walls in this heat loss.

Windows

It is easier to estimate the heat loss from windows than it is from walls.

This is because walls are opaque and (without expert knowledge) it is not obvious what a wall is made of. Moreover, different walls in the house can have different construction and thickness. Windows, by contrast, are transparent, so their type and construction can be seen directly.

The heat flow through a window (or wall) is characterised by a U-value: the amount of heat that flows across 1 square metre of the window for each degree Celsius of temperature difference across it.

The units for U-values are watts per metre squared per degree Celsius (W/m2/°C) or watts per metre squared per kelvin (W/m2/K). These two units are equal to each other.

Roughly speaking U-values for windows are [Link]:

  • Old single-glazed windows: 6 W/m2/°C
  • Old double-glazed windows: 4 W/m2/°C
  • New double-glazed windows: 1.5 W/m2/°C
  • The best triple-glazed windows: 1.0 W/m2/°C

I proceeded as follows:

  • I made a list of the 21 windows, skylights and glazed doors in my house.
  • I measured their area – width × height in metres.
  • I multiplied their area by their U-value to get the transmission per degree Celsius through that window.
  • I then added them all up.

For each window in the house I multiplied the area by the estimated U-value to get the heat transmitted per degree Celsius of temperature difference. I colour-coded the column to highlight which windows were the worst. Adding up all the windows came to 75.7 watts per degree Celsius. If I replaced all the windows with the best available I might be able to reduce this to 24.0 watts per degree Celsius.
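If you want to repeat this exercise, the whole calculation is just a sum of area × U-value over the glazing. Here is a minimal Python sketch of that sum. The window list is an invented placeholder rather than my actual survey, and the U-values are the rough figures quoted above.

```python
# Heat transmission through glazing: sum of (area x U-value) in W/degC.
# The window entries below are invented placeholders - substitute your own survey.

U_VALUES = {                 # W/m2/degC, rough values quoted above
    "single": 6.0,
    "old_double": 4.0,
    "new_double": 1.5,
    "best_triple": 1.0,
}

# (name, width in m, height in m, glazing type) - placeholder entries only
windows = [
    ("living room bay", 2.4, 1.5, "old_double"),
    ("kitchen door",    0.9, 2.0, "single"),
    ("bedroom 1",       1.2, 1.2, "old_double"),
]

def transmission(windows, u_values, replacement=None):
    """Total heat flow per degree Celsius through all the windows.

    If 'replacement' is given, every window is assumed to be re-glazed
    to that type, which gives the 'what if?' comparison in the table."""
    total = 0.0
    for name, width, height, glazing in windows:
        u = u_values[replacement or glazing]
        total += width * height * u
    return total

print(f"Current glazing: {transmission(windows, U_VALUES):.1f} W/degC")
print(f"Best triple    : {transmission(windows, U_VALUES, 'best_triple'):.1f} W/degC")
```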

The estimated total transmission through all the windows and doors came to about 76 watts per degree Celsius. I concluded that:

  • Firstly, I could see which windows lose the most energy – they are colour-coded red, amber, and green in the figure above. There are no surprises: the windows with the largest area lose the most energy.
  • Secondly, I could see that if I replaced all the old windows with modern ones (U = 1.5 W/m2/°C), I might hope to reduce the window losses by roughly half their current value, to around 36 watts per degree Celsius. If I spent a lot – on triple-glazed windows and insulating blinds – I might hope to achieve U = 1.0 W/m2/°C and reduce the losses to 24 watts per degree Celsius.
  • Thirdly, since the house as a whole is losing 280 watts per degree Celsius, I could see that windows and doors account for about a quarter of the energy lost from the house.
  • And finally, logically, the remaining 75% of the losses (280 – 76 = 204 watts per degree Celsius) must be going through the roof, walls, and floors, or be lost in draughts.

Roof and Walls 

By analysing the thermal transmission of the windows and doors (transmission = 76 watts per degree Celsius), I concluded that the roof and walls must be transmitting about 204 watts per degree Celsius.

  • Is this estimate reasonable?

To answer this question I embarked on yet another tedious and difficult exercise.

  • The tediousness arises because I need to add up all the areas of the roof and walls, subtract the areas of the windows and skylights, and then estimate the U-value of each.
  • The difficulty arises because I don’t know the materials from which the walls of the house are constructed!

Most of the walls date from the 1930s (I think) and are probably solid brick. A 1970s extension is probably not much better thermally, but I don’t know. However, the extension we built 10 years ago was built to the building regulations of the time, and I have a pretty good idea of the appropriate U-value.

So I made measurements of the wall areas. And then I assumed (link) that:

  • The old walls had a U-value of 2 W/m2/°C – a value appropriate for a double-skin solid brick wall.
  • The new walls had a U-value of 0.3 W/m2/°C – a value specified by current building regulations.

For each wall or roof, I multiplied the area by the estimated U-value to get the heat transmitted per degree Celsius of temperature difference. I colour-coded the column to highlight which were the worst. Adding it up came to about 229 watts per degree Celsius. If I clad all the walls to achieve a U-value of 0.3 watts per metre squared per degree Celsius, I might be able to reduce this to 54 watts per degree Celsius.

With these assumptions I estimated the heat transmission through the roof and walls. As shown in the table above, I arrived at an estimate of 229 watts per degree Celsius. This should be compared with the estimate of 204 watts per degree Celsius that I arrived at by analysing:

  • My gas meter readings
  • The average weekly temperature
  • The estimated properties of the windows.

Given all the uncertainties, I take this as confirmation that, to within about 10%, I understand the thermal properties of my house.
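As a quick worked check of that statement, using only the numbers quoted above:

```latex
\underbrace{280}_{\text{whole house}} \;-\; \underbrace{76}_{\text{windows and doors}} \;\approx\; 204~\mathrm{W/^{\circ}C}
\qquad \text{versus} \qquad
229~\mathrm{W/^{\circ}C}\ \text{(from the wall and roof areas)},
\qquad
\frac{229-204}{229} \approx 11\%.
```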

Summary


Currently my house loses 280 watts for each degree Celsius the external temperature falls below the internal temperature of 20 °C. Of those 280 watts,

  • roughly 76 watts flow through the windows and doors
  • the remaining 204 watts flow through the walls, floors and roof.

With modern double-glazing I could reasonably hope to reduce the glazing losses from 76 watts to around 36 watts, or possibly even lower with triple-glazing and thermal blinds.

By cladding the entire house, I could hope to reduce the losses from around 204 watts to around 50 watts.

  • What should I do?

In the next article I will discuss my strategy.

BAMS State of the Climate 2018

August 14, 2019

Reading the annual ‘State of the Climate’ report in the Bulletin of the American Meteorological Society (BAMS) has done nothing to help with my anxiety.

If you dare, you too can read it here:

Summary 

Imagine learning that your friend was in hospital. You race to the hospital and find your friend hooked up to every conceivable monitoring device.

If your friend is “the Climate”, then reading the BAMS State of the Climate report is like reading their autopsy before they have died.

You can foresee every tiny detail of their future suffering.

And yet the doctors don’t seem to be doing anything. Your friend is on the table, haemorrhaging, and the doctors are in an endless series of meetings!

The alarms on the monitors are beeping and flashing. But nobody comes to attend your friend.

You bang on the windows of the doctors’ meeting room and the doctors turn and glance at you, and then turn back to their conference.

You ask to see the hospital administrator. But they are too busy. An assistant assures you that they understand your distress.

You explain that this is not just A. N. Other Climate. This is the Climate, the one we all depend on for our food and air and water.

And the assistant agrees with you, sympathetically. But they patiently explain that the administrator is busy with IMPORTANT budget meetings right now.

And then you realise that your friend has been on the table for years…

…and that the doctors meeting has been going on all this time.

With each passing year the doctors become more and more certain of the exact manner in which your friend will die. But no treatment has begun.

You begin to feel angry. And depressed. And frustrated. And you consider acting irrationally.

You begin to consider that acting – rationally or irrationally – is the only chance to save the friend you love.

Being alive at the peak of the carbon age

July 22, 2019

View from aeroplane

Friends, we collectively wish the best for our families, friends and the wider communities to which we belong. But how do we avoid having conversations like this with our grandchildren?

“Granny, what was it like to live at the peak of the carbon age?”

“Our teacher said that back in the 2020s you could still fly around the world for the cost of a few weeks’ wages, and that planes then emitted hundreds of TONNES of carbon dioxide on every flight?”

“And she said that those old aeroplanes left clouds that changed the look of the sky!”

“Is that true Granny? Did the planes really do that?”

“Yes, darling, that’s what it was like back then.”

“But why, Granny? In History we learned that everyone knew for decades that carbon dioxide emissions would melt the Arctic ice. And now that the Greenland Ice Sheet has begun its strong melt, we have rising sea levels and strange weather and it’s harder to grow food.”

“Didn’t you know what you were doing?”

“Darling, yes, we knew, but, we sort of didn’t really want to think about it.”

“For example, Michael, your grandad, wanted your parents to see what the Mediterranean was like, so we flew to Greece one year. It was so good to swim in the warm clear water and we all had a great time. We just didn’t discuss the extreme heat or the carbon.”

“And Michael wanted to show your parents California where his friend from school lived. We had a couple of great holidays there. It was so, so beautiful. We even saw the Sequoias before the Great Fire.”

“And more recently, I ached to see you and your parents again. After your parents left the UK in their twenties, the thought of not seeing them again felt like a death sentence.”

“And the tele-screens weren’t like the tele-presence systems we have now, so we both needed to travel for work.” 

“Everyone knew we were storing up problems for the future, but it wasn’t as socially unacceptable as it is now. Now everyone boasts about how far under-quota they live. But back then some people took exotic holidays several times a year. Even Climate Scientists flew on aeroplanes – everyone did it.

“A few people went on and on and on and on about it, but while flying was easy and cheap we just tried not to talk about it.”

“And there didn’t seem to be an alternative.” 

“But there were alternatives, Granny! If you had just begun to really do something twenty years earlier, things would be so different for us now.”

(C) Tina Meyer https://www.pinterest.co.uk/tmeyersd/

The Moon as a symbol of hope

July 19, 2019

Eclipse July 2019

I sat out by the Diana Fountain in Bushy Park on Tuesday night and took a picture of the eclipsed Moon.

As I sat in the peaceful darkness, I thought about the fact that when I was nine years old, human beings had sent a rocket ship to the Moon, and men had walked about and collected some rocks.

Even as technology has advanced since the 1960s, the engineering of the Apollo programme has not been eclipsed. Indeed, it seems ever more remarkable.

And amongst the moths and the bats, I reflected that “…if human beings can do that, then we can do anything that can be done…”. 

That qualification “…that can be done…” is there because although the aim of the Apollo programme was built on a whimsical folly, the engineers who made it happen could only use practical steps to make it real.

Some of the steps they took seem astonishing, but there was – obviously – nothing ‘impossible’. No steps relied on wishful thinking.

The excellent book “How Apollo Flew to the Moon” (my review is here) highlighted some of the most astonishing facts:

  • The total mechanical output power of the five first-stage engines was 60 GW. This is equivalent to the peak electrical supply of the entire United Kingdom.
  • On its return from the Moon, the spacecraft’s speed just before entry into the Earth’s atmosphere was more than 11 kilometres per second.
  • Since Apollo 17 returned in 1972, no human being has been more than 700 kilometres from the Earth’s surface.

And sitting in the dark I reflected that if we could achieve all those things, then surely we can – and eventually will – get our act together on Climate Change.

It may seem impossible now, but even the most politically deaf regimes will eventually dance to the theme of climate change – they have no choice.

And if the US were to devote to this problem even a small fraction of the energy and enterprise that it devoted to Apollo, they could yet inspire us all again, and leave a legacy to be proud of for all our children.

Is a UK grid-scale battery feasible?

April 26, 2019

This is quite a technical article, so here is the TL;DR: It would make excellent sense for the UK to build a distributed battery facility to enable renewable power to be used more effectively.

=========================================

Energy generated from renewable sources – primarily solar and wind – varies from moment-to-moment and day-to-day.

The charts below are compiled from data available at Templar Gridwatch. They show the hourly, daily and seasonal fluctuations in solar and wind generation, plotted every 5 minutes, for (a) 30 days and (b) a whole year from April 21st 2018. Yes, that is more than 100,000 data points!

Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for 30 days following April 21st 2018. The annual average (~6 GW) is shown as black dotted line.


Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for the 365 days since April 21st 2018. The annual average (~6 GW) is shown as black dotted line.

An average of 6 GW is a lot of power. But suppose we could store some of this energy and use it when we wanted to rather than when nature supplied it. In other words:

Why don’t we just build a big battery?

It turns out we need quite a big battery!

How big a battery would we need?

The graphs below show a nominal ‘demand’ for electrical energy (blue) and the electrical energy made available by the vagaries of nature (red) over periods of 30 days and 100 days respectively. I didn’t draw the whole-year graph because one cannot see anything clearly on it!

The demand curve is a continuous demand for 3 GW of electrical power with a daily peak demand of 9 GW. This choice of demand curve is arbitrary, but it represents the kind of contribution we would like to be able to get from any energy source – its availability would ideally follow typical demand.
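The shape of the daily peak is not specified above, so purely as an illustration, here is a minimal Python sketch of one way such a demand curve could be constructed. It assumes a smooth, sinusoidal daytime peak on top of the 3 GW base load, sampled every 5 minutes to match the Gridwatch data; the real daily demand profile is of course not sinusoidal.

```python
import numpy as np

# Nominal demand: 3 GW base load plus a daily peak reaching 9 GW.
# The sinusoidal shape of the peak is an assumption for illustration only.
STEP_HOURS = 5 / 60                       # 5-minute samples, as in the Gridwatch data
hours = np.arange(0, 24 * 365, STEP_HOURS)

base_gw = 3.0
extra_gw = 6.0                            # 3 GW base + 6 GW extra = 9 GW daily peak
hour_of_day = hours % 24
daytime = np.clip(np.sin((hour_of_day - 6) / 12 * np.pi), 0, None)   # zero overnight
demand_gw = base_gw + extra_gw * daytime

print(f"Demand ranges from {demand_gw.min():.1f} GW to {demand_gw.max():.1f} GW")
```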


We can see that the renewable supply already has daily peaks in spring and summer due to the solar energy contribution.

The role of a big battery would be to cope with the difference between demand and supply. The figures below show the difference between my putative demand curve and supply, over periods of 30 days and a whole year.


I have drawn black dotted lines showing when the difference between demand and supply exceeds 5 GW one way or another. In spring and summer this catches most of the variations. So let’s imagine a battery that could store or release energy at a rate of 5 GW.

What storage capacity would the battery need to have? As a guess, I have done calculations for a battery that could store or release 5 GW of generated power for 5 hours i.e. a battery with a capacity of 5 GW x 5 hours = 25 GWh. We’ll look later to see if this is too much or too little.

How would such a battery perform?

So, how would such a battery affect the ability of wind and solar to deliver a specified demand?

To assess this I used the nominal ‘demand’ I sketched at the top of this article – a demand for 3 GW continuously, but with a daily peak rising to 9 GW – quite a severe challenge.

The two graphs below show the energy that would be stored in the battery for 30 days after 21 April 2018, and then for the whole following year.

  • When the battery is full then supply is exceeding demand and the excess is available for immediate use.
  • When the battery is empty then supply is simply whatever the elements have given us.
  • When the battery is in-between fully-charged and empty, then it is actively storing or supplying energy.
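The bullet points above amount to a simple book-keeping loop. Below is a minimal Python sketch of that loop, assuming 5-minute supply and demand series in gigawatts, a perfectly efficient (lossless) battery, and the 5 GW / 25 GWh limits discussed above; round-trip losses and grid constraints are ignored.

```python
import numpy as np

def simulate_battery(supply_gw, demand_gw, power_gw=5.0, capacity_gwh=25.0,
                     step_hours=5 / 60):
    """Track the energy stored in an idealised (lossless) battery.

    Surplus supply charges the battery and shortfalls discharge it, each
    limited by the charge/discharge power and by the battery being full
    or empty. Returns the stored energy (GWh) and the power actually
    delivered to the demand (GW) at each time step."""
    stored = 0.0
    stored_series, delivered = [], []
    for s, d in zip(supply_gw, demand_gw):
        flow = np.clip(s - d, -power_gw, power_gw)              # GW into (+) / out of (-) battery
        new_stored = np.clip(stored + flow * step_hours, 0.0, capacity_gwh)
        actual_flow = (new_stored - stored) / step_hours        # limited by full/empty battery
        stored = new_stored
        stored_series.append(stored)
        delivered.append(min(d, s - actual_flow))               # what the demand actually receives
    return np.array(stored_series), np.array(delivered)
```

Sweeping capacity_gwh from zero up to 25 GWh and asking what fraction of the demand the delivered series meets gives the kind of comparison plotted in the next section (I am assuming here that the performance measure is the fraction of demand met).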


Over 30 days (above) the battery spends most of its time empty, but over a full year (below), the battery is put to extensive use.


How to measure performance?

To assess the performance of the battery, I looked at how well the renewable energy available last year would meet levels of constant demand from 1 GW up to 10 GW with different sizes of battery. I considered battery sizes from zero (no storage) up to our 25 GWh battery, in 5 GWh steps. The results are shown below:

It is clear that the first 5 GWh of storage makes the biggest difference.

Then I tried modelling several levels of variable demand: a combination of 3 GW of continuous demand with an increasingly large daily variation – up to a peak of 9 GW. This is a much more realistic demand curve.

Once again the first 5 GWh of storage makes a big difference for all the demand curves and the incremental benefit of bigger batteries is progressively smaller.

So based on the above analysis, I am going to consider a battery with 5 GWh of storage – but able to charge or discharge at a rate of 5 GW. But here is the big question:

Is such a battery even feasible?

Hornsdale Power Reserve

The Hornsdale Power Reserve Facility occupies an area about the size of a football pitch. Picture from the ABC site

The biggest battery grid storage facility on Earth was built a couple of years ago in Hornsdale, Australia (Wiki Link, Company Site). It seems to have been a success (link).

Here are its key parameters:

  • It can store or supply power at a rate of 100 MW or 0.1 GW
    • This is 50 times smaller than our planned battery
  • It can store 129 MWh of energy.
    • This is just under 40 times smaller than our planned battery
  • Tesla were reportedly paid 50 million US dollars
  • It was supplied in 100 days.
  • It occupies an area about the size of a football pitch.

So why don’t we just build lots of similar things in the UK?

UK Requirements

So if we built 50 Hornsdale-sized facilities, the cost would be roughly 2.5 billion US dollars, i.e. about £2 billion.

If we could build five a year, our 5 GWh battery would be complete in 10 years at a cost of around £200 million per year. This is a lot of money. But it is not a ridiculous amount when set against the scale of the National Grid infrastructure.
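For clarity, the arithmetic behind those figures is simply:

```latex
50 \times \$50\,\mathrm{M} \approx \$2.5\,\mathrm{bn} \approx \pounds 2\,\mathrm{bn},
\qquad
\frac{\pounds 2\,\mathrm{bn}}{10~\mathrm{years}} \approx \pounds 200\,\mathrm{M~per~year},
\qquad
\frac{\pounds 2\,\mathrm{bn}}{30\times 10^{6}~\mathrm{households}\times 10~\mathrm{years}} \approx \pounds 6.7~\text{per household per year}.
```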

Why this might actually make sense

The key benefits of this kind of investment are:

  • It makes the most of all the renewable energy we generate.
    • By time-shifting the energy from when it is generated to when we need it, it allows renewable energy to be sold at a higher price and improves the economics of all renewable generation
  • The capital costs are predictable and, though large, are not extreme.
  • The capital generates an income within a year of commitment.
    • In contrast, a 3.2 GW nuclear power station like Hinkley Point C is currently estimated to cost about £20 billion, but does not generate any return on investment for perhaps 10 years and carries a very high technical and political risk.
  • The plant lifetime appears to be reasonable and many elements of the plant would be recyclable.
  • If distributed into 50 separate Hornsdale-size facilities, the battery would be resilient against a single catastrophic failure.
  • Battery costs still appear to be falling year on year.
  • Spread across 30 million UK households, the cost is about £6 per year.

Conclusion

I performed these calculations for my own satisfaction. I am aware that I may have missed things, and that electrical grids are complicated, and that contracts to supply electricity are of labyrinthine complexity. But broadly speaking – more storage makes the grid more stable.

I can also think of some better modelling techniques. But I don’t think that they will affect my conclusion that a grid scale battery is feasible.

  • It would occupy about 50 football pitches worth of land spread around the country.
  • It would cost about £2 billion, about £6 per household per year for 10 years.
    • This is one tenth of the current projected cost of the Hinkley Point C nuclear power station.
  • It would deliver benefits as soon as construction began, and the benefits would improve as the facility grew.

But I cannot comment on whether this makes economic sense. My guess is that when it does, it will be done!

Resources

Data came from Templar Gridwatch

 

Here and there. Now and then

April 21, 2019

Note: Reflecting on what matters to me most, I feel increasingly conscious that the only issue I care about deeply is Climate Change. In my mind, all other issues pale in comparison to the devastation to which we – you, reader and me – are condemning future generations because of our indifference and wilful ignorance.

But even so, I find it hard to know how to act…

On the one hand… 

It has been a beautiful April day.

On the other hand… 

Today, Sea Ice Extent in the Arctic is lower than it has ever been on this date since satellite measurements began in 1979. (Link)

Arctic Sea Ice Extent for March to May from every year since 1979.

On the one hand… 

I strongly support the aims of Climate protesters in London. I share their profound frustration.

On the other hand… 

I feel the protesters are not being honest about the impact of the actions they advocate.

For example, I think if their wishes were granted, we would all be obliged to use much less energy and I only know two ways to do that.

  • The first method is to increase the price of energy – famously not a route to popularity.
  • The second method is to ration energy, which has not been attempted in the UK (that I can recall) since the 1974 Oil Crisis.

One could use some combination of these two methods, but I don’t know of any fundamentally different ways.

We are all in favour of ‘Saving the Planet’, but higher energy costs or rationing would be wildly unpopular: either would increase the cost of almost all products and services.

I would vote for climate action and an impoverishment of my life and my future in a heartbeat. But I am well off.

Unless other people are convinced, and until we find a way to address this problem which is acceptable to those who will be most hurt in the short term – poorer people –  it will never actually happen. And all I care about is that it actually happens.

On the one hand… 

I strongly support the goal of a zero-carbon economy.

On the other hand… 

If the existing carbon-intensive economy reduces in scope too fast, then we will lack the resources to create the new economy.

On the one hand… 

David Attenborough spoke movingly on television this week about ‘Climate Change: The Facts’.

David Attenborough

I watched his programme and, while it’s not the story I would have told, it seemed to me to be a pretty straightforward and fair presentation.

On the other hand… 

Not everyone thought it fair. Here are some specific comments (1, 2), or follow these links for torrents of similar material (Link#1, Link#2, Link#3). I disagree with these people, and their specific points are broadly irrelevant. But their votes are worth just as much as mine.

On the one hand… 

I am trying hard to lower the amount of energy I personally use.

I am measuring the energy use of appliances, reading my meters once a week and switching things off.

My aim is to reduce the electrical power being used by an average of more than 200 watts.

Over one year this will reduce my carbon dioxide emissions by around 0.35 tonnes. (Link).

On the other hand… 

Last year I was invited to give a keynote talk at a conference in New Zealand. I was honoured and said ‘Yes’.

This will cause an additional 7.4 tonnes of carbon dioxide to be emitted. (Link)

CO2 flight to New Zealand

Andrea Sella has written about this issue and perhaps we are at the end of the era of hypermobility.

On the one hand… 

I felt sad when I saw Notre Dame in flames.

On the other hand… 

I feel sad about droughts and floods and wild fires and destroyed livelihoods and brothers and sisters in poverty around the world.

If billions of euros can be found ‘in an instant’ for Notre Dame, why can’t we address these much more serious and urgent problems as dynamically?

And on this Easter day, I think:

What would Jesus do? 

 

Air Temperature

April 1, 2018

Recently, two disparate strands of my work produced publications within a week of each other.

Curiously they both concerned one of the commonest measurements made on Earth today – the measurement of air temperature.

  • One of the papers was the result of a humbling discovery I made last year concerning a common source of error in air temperature measurements. (Link to open access paper)
  • In the other paper, I was just one amongst 17 authors calling for the establishment of a global reference network to monitor the climate. My guess is that most people imagine such a network already exists – but it doesn’t! (Link to open access paper)

I am writing this article because I was struck by the contrasting styles of these papers: one describing an arcane experimental detail; and the other proposing a global inter-governmental initiative.

And yet the aim of both papers was identical: to improve measurement so that we can more clearly see what is happening in the world.

Paper 1

In the middle of 2017 I was experimenting with a new device for measuring air temperature by measuring the speed of sound in air.

It’s an ingenious device, but it obviously needed to be checked. We had previously carried out tests inside environmental chambers, but the temperature stability and uniformity inside the chambers was not as good as we had hoped for.

So we decided to test the device in one of NPL’s dimensional laboratories. In these laboratories, there is a gentle, uniform flow of air from ceiling to floor, and the temperature is stable to within a hundredth of a degree Celsius (0.01 °C) indefinitely.

However, when I tried to measure the temperature of the air using conventional temperature sensors I got widely differing answers – varying by a quarter of a degree depending on where I placed the thermometer. I felt utterly depressed and humiliated.

Eventually I realised what the problem was. This involved stopping. Thinking carefully. And talking with colleagues. It was a classic case of eliminating the impossible leaving only the improbable.

After believing I understood the effect, I devised a simple experiment to test my understanding – a photograph of the apparatus is shown below.


The apparatus consisted of a set of stainless steel tubes held in a clamp stand. It was almost certainly the cheapest experiment I have ever conducted.

I placed the tubes in the laboratory, exposed to the downward air flow, and left them for several hours to equilibrate with the air.

Prior to this experience, I would have bet serious amounts of money on the ‘fact’ that all these tubes would be at the same temperature. My insight had led me to question this assumption.

And my insight was correct. Every one of the tubes was at a different temperature and none of them were at the temperature of the air! The temperature of the tubes depended on:

  • the brightness of the lights in the room – which was understandable but a larger effect than I expected, and
  • the diameter of the tubes – which was the truly surprising result.


I was shocked. But although the reason for this is not obvious, it is also not complicated to understand.

When air flows around a cylindrical (or spherical) sensor, only a very small amount of air actually makes contact with the sensor.

Air reaching the sensor first is stopped (it ‘stagnates’ to use the jargon). At this point heat exchange is very effective. But this same air is then forced to flow around the sensor in a ‘boundary layer’ which effectively insulates the sensor from the rest of the air.


For small sensors, the sensor acquires a temperature close to that of the air. But the air is surprisingly ineffective at changing the temperature of larger sensors.
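The scale of the effect can be illustrated with a toy energy balance: the sensor settles at the temperature where convective cooling by the air stream balances the net radiative heating it receives from the lights and walls. The Python sketch below uses a textbook-style correlation in which the convective heat-transfer coefficient of a cylinder falls as its diameter grows. The flow speed, radiative flux and correlation constants are all illustrative assumptions of mine; this is not the analysis from the paper.

```python
# Toy energy balance for a cylindrical temperature sensor in a gentle air stream.
# All numbers below are illustrative assumptions, not values from the paper.
K_AIR = 0.026      # thermal conductivity of air, W/(m K)
NU_AIR = 1.5e-5    # kinematic viscosity of air, m^2/s
PR_AIR = 0.71      # Prandtl number of air
V = 0.1            # assumed downward air speed in the laboratory, m/s
Q_RAD = 5.0        # assumed net radiative flux absorbed from lights and walls, W/m^2

def equilibrium_offset(diameter_m):
    """Steady-state temperature offset of the sensor above the air, in kelvin.

    Convection: h = Nu * k / d, with a Hilpert-style correlation Nu ~ Re^0.466,
    so h falls as the diameter grows and the offset Q_RAD / h rises."""
    re = V * diameter_m / NU_AIR
    nu = 0.683 * re ** 0.466 * PR_AIR ** (1 / 3)
    h = nu * K_AIR / diameter_m
    return Q_RAD / h

for d_mm in (3, 6, 12, 25):
    print(f"{d_mm:>3} mm tube: offset ~ {equilibrium_offset(d_mm / 1000):.2f} K")
```

With these invented numbers the offset roughly triples as the diameter goes from 3 mm to 25 mm; the qualitative point – that larger sensors sit further from the air temperature – is what matters.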

The effect matters in two quite distinct realms.

Metrology

In metrology – the science of measurement – it transpires that knowledge of the temperature of the air is important for the most accurate length measurements.

This is because we measure the dimensions of objects in terms of the wavelength of light, and this wavelength is slightly affected by the temperature of the air through which the light passes.
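To give a sense of scale – this is a commonly quoted rule of thumb rather than a figure from either paper – the refractive index of air changes by roughly one part in a million per degree Celsius, so an error in the assumed air temperature produces a proportional error in a length measured in terms of the wavelength of light:

```latex
\frac{\Delta L}{L} \;\sim\; 1\times 10^{-6}~\text{per}~^{\circ}\mathrm{C}
\quad\Rightarrow\quad
0.25~^{\circ}\mathrm{C} \times 1\times 10^{-6}\,/^{\circ}\mathrm{C} \times 1~\mathrm{m} \;\approx\; 0.25~\mu\mathrm{m}.
```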

In a dimensional laboratory such as the one illustrated below, the thermometer will indicate a temperature which is:

  • different from the temperature of artefacts placed in the room, and
  • different from the temperature of the air.


Unless the effect is accounted for – which it generally isn’t – length measurements will be slightly incorrect.

Climatology

The effect is also important in climatology. If a sensor is changed at a meteorological station, people check that the new sensor is calibrated, but they rarely record its diameter.

If a calibrated sensor is replaced by another calibrated sensor with a different diameter, then there will be a systematic effect on the temperatures recorded by the station. Such effects won’t matter for weather forecasting, but they will matter for people using the stations for a climate record.

And that brings me to Paper 2

Paper 2

Hadcrut4 Global Temperature

When we see graphs of ‘global temperatures’ over time, many people assume that the data is derived from satellites or some ‘high-tech’ network of sensors. Not so.

The ‘surface’ temperature of the Earth is generally estimated in two quite distinct parts – sea surface temperature and land surface temperature. But both these terms are slight misnomers.

Considering just the land measurements, the actual temperature measured is the air temperature above the land surface. In the jargon, the measurement is called LSAT – the Land Surface Air Temperature.

LSAT is the temperature which human beings experience and satellites can’t measure it.

LSAT data is extracted from temperature measurements made in thousands of meteorological stations around the world. We have data records from some stations extending back for 150 years.

However, it is well known that this data is less than ideal: it is biased and unrepresentative in many ways.

The effect described in Paper 1 is just one of many such biases which have been extensively studied. And scientists have devised many ways to check that the overall trend they have extracted – what we now call global warming – is real.

Nonetheless, it is slightly shocking that a global network of stations designed specifically for climate monitoring does not exist.

And that is what we were calling for in Paper 2. Such a climate network would consist of fewer than 200 stations world-wide and cost less than a modest satellite launch. But it would add confidence to the measurements extracted from meteorological stations.

Perhaps the most important reason for creating such a network is that we don’t know how meteorological technology will evolve over the coming century.

Over the last century, the technology has remained reasonably stable. But it is quite possible that the nature of data acquisition for meteorological applications will change  in ways we cannot anticipate.

It seems prudent to me that we establish a global climate reference network as soon as possible.

References

Paper 1

Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology
Michael de Podesta, Stephanie Bell and Robin Underwood

Published 28 February 2018
Metrologia, Volume 55, Number 2 https://doi.org/10.1088/1681-7575/aaaa52

Paper 2

Towards a global land surface climate fiducial reference measurements network
P. W. Thorne, H. J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N. B. Ingleby, P. D. Jones, J. H. Lawrimore, D. H. Lister, A. Merlone, T. Oakley, M. Palecki, T. C. Peterson, M. de Podesta, C. Tassone, V. Venema and K. M. Willett

Published: 1 March 2018
Int. J. Climatol. 2018; 1–15. https://doi.org/10.1002/joc.5458

How do we know anything?

November 18, 2017


This is an edited video of a talk I gave recently to the NPL Postgraduate Institute about the forthcoming changes to the International System of Units, the SI.

It’s 36 minutes long and you can download the PowerPoint slides here.

It features the first ever public display of the Standard Michael – the artefact defining length in the SM, le systeme de moi, or My System of Units. (It’s shown at about 6 minutes 40 seconds into the video).

The central thesis of the talk is summarised in the slide below:

Measurement

In the talk I explain how the forthcoming changes to the SI will improve future measurements.

I hope you enjoy it.

 

The Past, Present and Future of Measurement

October 22, 2017

Measurement, the simple process of comparing an unknown quantity with a standard quantity, is the essential component of all scientific endeavours. We are about to enter a new epoch of metrology, one which will permit the breath-taking progress of the last hundred years to continue unimpeded into the next century and beyond.

The dawn of this new age has been heralded this week by the publication of an apparently innocuous paper in the journal Metrologia. The paper is entitled:

Data and Analysis for the CODATA 2017 Special Fundamental Constants Adjustment

and its authors, Peter Mohr, David Newell, Barry Taylor and Eite Tiesinga, constitute the Committee on Data for Science and Technology, commonly referred to as CODATA. In this article I will try to explain the relevance of CODATA’s paper to developments in the science of metrology.

The Past

The way human beings began to make sense of their environment was by measuring it. We can imagine that our agrarian ancestors might have wondered whether crops were taller or heavier this year than last. Or whether plants grew better in one field rather than another. And they would have answered these questions by creating standard weights and measuring rods.

But to effectively communicate their findings, the standard units of measurement would need to be shared. First between villages, and then towns, and then counties and kingdoms. Eventually entire empires would share a system of measurement.

First, units of weight and length were shared. Then, as time became more critical for scientific and technical endeavours, units of time were added to systems of measurement. And these three quantities – mass, length and time – are shared by all systems of units.

These quantities formed the so-called ‘base units’ of a system of measurement. Many other quantities could be described in terms of these ‘base units’. For example, speeds would be described in multiples of [the base unit of length] divided by [the base unit of time]. They might be [feet] per [second] in one system, or [metres] per [second] in another.

Over the last few hundred years, the consistent improvement in measurement techniques has enabled measurements with reduced uncertainty. And since no measurement can ever have a lower uncertainty than the standard quantity in that system of units, there has been a persistent drive to have the most stable, most accurately-known standards, so that they do not form a barrier to improved measurements.

The Present

Presently, all scientific and technical measurements on Earth are made using the International System of Units, the SI. The naming of this system – as an explicitly international system – represented a profound change in conception. It is not an ‘imperial’ system or an ‘English’ system, but a shared enterprise administered by the International Bureau of Weights and Measures (BIPM), a laboratory located in diplomatically-protected land in Sèvres, near Paris, France. Its operation is internationally funded by the dozens of nations who have signed the international treaty known as the Convention of the Metre.

In essence, the SI is humanity’s standard way of giving quantitative descriptions of the world around us. It is really an annex to all human languages, allowing all nationalities and cultures to communicate unambiguously in the realms of science and engineering.

Founded in 1960, the SI was based upon the system of measurement using the metre as the unit of length, the kilogram as the unit of mass, and the second as the unit of time. It also included three more base units.

The kelvin and degree Celsius were adopted as units of temperature, and the ampere was adopted as the unit of electric current. The candela was adopted as the unit of luminous intensity – a measure of how bright lights of different colours appear to human beings. And then in 1971 the often qualitative science of chemistry was included in the fold with the introduction of the mole as a unit of amount of substance, a recognition of the increasing importance of analytical measurements.

SI Circle - no constants

The SI is administered by committees of international experts that seek to make sure that the system evolves to meet humanity’s changing needs. Typically these changes are minor and technical, but in 1984 an important conceptual change was made.

Since the foundation of the SI, the ability to measure time had improved more rapidly than the ability to measure length. It was realised that if the metre was defined differently, then length measurements could be improved.

The change proposed was to define what we mean by ‘one metre’ in terms of the distance travelled by light, in a vacuum, in a fixed time. Based on Einstein’s insights, the speed of light in a vacuum, c, is thought to be a universal constant, but at the time it had to be measured in terms of metres and seconds, i.e. human-scale measurement standards. This proposal defined a metre in terms of a natural constant – something we believe is truly constant.

The re-definition went well, and set metrologists thinking about whether the change could be adopted more widely.

The Future

Typically every four years, CODATA examine the latest measurements of the natural constants and propose the best estimates of their values.


This is strange. We believe that the natural constants are really constant, not having changed measurably since the first few seconds of our universe’s existence, whereas our human standards are at most a few decades old and (as with all human standards) are subject to slow changes. Surely it would make more sense to base our measurement standards on these fundamental constants of the natural world? This insight is at the heart of the changes which are about to take place. The CODATA publication this week is the latest in a series of planned steps that will bring about this change on 20th May 2019.


After years of work by hundreds of scientists, the values of the natural constants recommended by the CODATA committee will be fixed – and will form the basis for the new definitions of the seven SI base units.

What will happen on 20th May 2019?

On the 20th May 2019, revised definitions of four of the base units of the SI will come into force. More than 10 years of careful measurements by scientists world-wide will ensure that the new definitions are, as closely as possible, equivalent to the old definitions.

The change is equivalent to removing the foundations underneath a structure and then inserting new foundations which should leave the structure supported in exactly the same way. However the new foundations – being based on natural constants rather than human artefacts – should be much more stable than the old foundations.

If the past is any guide to the future, then in the coming decades and centuries, we can anticipate that measurement technology will improve dramatically. However we cannot anticipate exactly how and where these improvements will take place. By building the SI on foundations based on the natural constants, we are ensuring that the definitions of the unit quantities of the SI will place no restriction whatever on these future possible improvements.

The kilogram

The kilogram is the SI unit of mass. It is currently defined as the mass of the International Prototype of the Kilogram (IPK), a cylinder of platinum-iridium alloy held in a safe at the BIPM. Almost every weighing in the world is, indirectly, a comparison against the mass of this artefact.

On 20th May 2019, this will change. Instead, the kilogram will be defined in terms of a combination of fundamental constants including the Planck constant, h, and the speed of light, c. Although more abstract than the current definition, the new definition is thought to be at least one million times more stable.

The new definition will enable a new kind of weighing technology called a Kibble balance. Instead of balancing the weight of a mass against another object whose mass is known by comparison with the IPK, the weight will be balanced by a force which is calculable in terms of electrical power, and which can be expressed as a multiple of the fundamental constants e, h and c.
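In outline – this is the standard description of the principle rather than of any particular instrument – the Kibble balance equates mechanical and electrical power, and the electrical quantities are measured against quantum standards whose values are expressed in terms of h and e:

```latex
m g v = U I
\quad\Rightarrow\quad
m = \frac{U I}{g v},
\qquad
U~\text{via the Josephson effect}~\Bigl(K_J = \tfrac{2e}{h}\Bigr),
\qquad
I = \frac{U'}{R}~\text{with}~R_K = \frac{h}{e^{2}}~\text{(quantum Hall effect)}.
```

So the mass is ultimately expressed in terms of h, measured frequencies, the local acceleration due to gravity g and the velocity v of the moving coil.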

The ampere

The ampere is the SI unit of electrical current. It is presently defined in terms of the current which, if it flowed in two infinitely thin, infinitely long, parallel wires, would (in a vacuum) produce a specified force between the wires. This definition, arcane even by metrologists’ standards, was intended to allow the measurement of the ampere in terms of the force between carefully constructed coils of wire. Sadly, it was out of date shortly after it was implemented.

On 20th May 2019, this will change. Instead, the ampere will be defined in terms of a particular number of electrons per second, each with an exactly specified electrical charge e, flowing past a point on a wire. This definition finally corresponds to the way electric current is described in textbooks.

The new definition will give impetus to techniques which create known electrical currents by using electrical devices which can output an exactly countable number of electrons per second. At the moment these devices are limited to approximately 1 billion (a thousand million) electrons per second, but in future this is likely to increase substantially.
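For scale, using the exactly fixed value of the elementary charge in the revised SI, e = 1.602 176 634 × 10⁻¹⁹ C:

```latex
\frac{1~\mathrm{A}}{e} = \frac{1~\mathrm{C\,s^{-1}}}{1.602\,176\,634\times 10^{-19}~\mathrm{C}} \approx 6.24\times 10^{18}~\text{electrons per second},
\qquad
10^{9}~\text{electrons per second} \times e \approx 1.6\times 10^{-10}~\mathrm{A} \approx 0.16~\mathrm{nA}.
```

So today’s single-electron devices produce currents of only a fraction of a nanoampere.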

The kelvin

The kelvin is the SI unit of temperature. It is currently defined as the temperature of the ‘triple point of water’. This temperature – at which liquid water, solid water (ice) and water vapour (but no air) co-exist in equilibrium – is defined to be 273.16 kelvin exactly. Glass cells re-creating this conjunction are located in every temperature calibration lab in the world, and every temperature measurement is a comparison of how much hotter a temperature is than the temperature at one position within a ‘triple point of water cell’.

On 20th May 2019, this will change. Instead, the kelvin will be defined in terms of a particular amount of energy per molecule as specified by the Boltzmann constant, kB. This definition finally corresponds to the way thermal energy is described in textbooks.
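Concretely, the revised definition fixes the Boltzmann constant exactly, so one kelvin corresponds to a definite amount of characteristic thermal energy per particle; at the triple-point temperature, for example:

```latex
k_B = 1.380\,649\times 10^{-23}~\mathrm{J\,K^{-1}}~\text{(exact)},
\qquad
k_B \times 273.16~\mathrm{K} \approx 3.77\times 10^{-21}~\mathrm{J}.
```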

The requirement to compare every measurement of temperature with the temperature of the triple point of water adds uncertainty to measurements at extremely low temperatures (below about 20 K) and at high temperatures (above about 1300 K). The new definition will immediately allow small improvements in these measurement ranges, and further improvements are expected to follow.

The definition of the degree Celsius (°C) in terms of the kelvin will remain unchanged.

The mole

The mole is the SI unit of ‘amount of substance’. It is currently defined as the amount of substance which contains the same number of ‘elementary entities’ as there are atoms in 12 grams of carbon-12. The change in the definition of the kilogram required a re-think of this definition.

On 20th May 2019, it will change. The mole will be defined as the amount of substance which contains a particular, exactly specified, number of elementary entities. This number – known as the Avogadro number, NA – is currently estimated experimentally, but in future it will have a fixed value.
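In the revised SI this number is fixed exactly, so one mole becomes an exactly specified count of entities:

```latex
N_A = 6.022\,140\,76\times 10^{23}~\mathrm{mol^{-1}}~\text{(exact)}.
```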

The specification of an exact number of entities effectively links the masses of microscopic entities such as atoms and molecules to the new definition of the kilogram.

The ‘New’ SI

On 20th May 2019 four of the seven base units of the SI will be re-defined. But what of the other three?

The second is already defined in terms of the natural frequency of microwaves emitted by atoms of a particular caesium isotope. The metre is defined in terms of the second and the speed of light in vacuum – a natural constant. And the candela is defined in terms of Kcd, the only natural constant in the SI that relates to human beings. So from 20th May 2019 the entire SI will be defined in terms of natural constants.

SI Circle - with constants

The SI is not perfect. And it will not be perfect even after the redefinitions come into force. This is because it is a system devised by human beings, for human beings. But by incorporating natural constants into the definitions of all its base units, the SI has taken a profound step towards being a system of measurement which will enable ever greater precision in metrology.

And who knows what features of the Universe these improved measurements will reveal.

Talking about the ‘New’ SI

July 3, 2017

I was asked to give a talk about the SI to some visitors tomorrow morning, and so I have prepared some PowerPoint slides.

If you are interested, you can download them using this link (.pptx 13 Mb!): please credit me and NPL if you use them.

But I also experimentally narrated my way through the talk and recorded the result as a movie.

The result is… well, a bit dull. But if you’re interested you can view the results below.

I have split the talk into three parts, which I have called Part 1, Part 2 and Part 3.

Part 1: My System of Units

This 14 minute section is the fun part. It describes a hypothetical system of units which is a bit like the SI, but in which all the units are named after my family and friends.

The idea is to show the structure of any system of units and to highlight some potential shortcomings.

It also emphasises the fact that systems of units are not ‘natural’. They have been created by people to meet our needs.

Part 2: The International System of Units

This 22 minute section – the dullest and most rambling part of the talk – explains the subtle rationale for the changes in the SI upon which we have embarked.

There are two key ideas in this part of the talk:

  • Firstly, there is a description of how the definition of a unit is separated from the way in which copies of the unit are ‘realised’.
  • And secondly, there is a description of the role of natural constants in the new definitions of the units of the SI.

Part 3: The Kilogram Problem

This 11 minute section is a description of one of the two ways of solving the kilogram problem: the Kibble balance. It has three highlights!

  • It features a description of the balance by none other than Bryan Kibble himself.
  • There is an animation of a Kibble balance which takes just seconds to play but which took hours to create!
  • And there are also some nice pictures of the Mark II Kibble Balance installed in its new home in Canada, including a short movie of the coil going up and down.

Overall

This is all a bit dull, and I apologise. It’s an experiment and please don’t feel obliged to listen to all or any of it.

When I talk to a live audience I hope it will all be a little punchier – and that the 2800 seconds it took to record this will be reduced to something nearer to its target of 2100 seconds.

 

 

 

