Archive for the ‘The Future’ Category

Is a UK grid-scale battery feasible?

April 26, 2019

This is quite a technical article, so here is the TL;DR: it would make excellent sense for the UK to build a distributed battery facility to enable renewable power to be used more effectively.

=========================================

Energy generated from renewable sources – primarily solar and wind – varies from moment-to-moment and day-to-day.

The charts below are compiled from data available at Templar Gridwatch. They show the hourly, daily and seasonal fluctuations in solar and wind generation, plotted every 5 minutes, for (a) 30 days and (b) a whole year from April 21st 2018. Yes, that is more than 100,000 data points!


Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for 30 days following April 21st 2018. The annual average (~6 GW) is shown as black dotted line.


Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for the 365 days since April 21st 2018. The annual average (~6 GW) is shown as black dotted line.

An average of 6 GW is a lot of power. But suppose we could store some of this energy and use it when we wanted to rather than when nature supplied it. In other words:

Why don’t we just build a big battery?

It turns out we need quite a big battery!

How big a battery would we need?

The graphs below show a nominal ‘demand’ for electrical energy (blue) and the electrical energy made available by the vagaries of nature (red) over periods of 30 days and 100 days respectively. I didn’t draw the whole-year graph because one cannot see anything clearly on it!

The demand curve is a continuous demand for 3 GW of electrical power with a daily peak demand of 9 GW. This choice of demand curve is arbitrary, but it represents the kind of contribution we would like to be able to get from any energy source – its availability would ideally follow typical demand.
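
For concreteness, here is a minimal sketch of how such a demand curve could be constructed. The half-sine shape of the daily peak and the 5-minute sampling interval are my assumptions for illustration; the article does not specify the exact profile.

```python
import numpy as np

# Sketch of the nominal demand curve: a constant 3 GW base plus a daily peak
# reaching 9 GW. The half-sine shape of the peak is an assumption made only
# for illustration; the article does not specify the exact profile.
SAMPLES_PER_DAY = 24 * 12          # 5-minute samples, matching the Gridwatch data
days = 30
t = np.arange(days * SAMPLES_PER_DAY) / SAMPLES_PER_DAY   # time in days

base_gw = 3.0
peak_gw = 9.0
# Daily variation: zero overnight, rising to (peak - base) in the middle of the day.
daily_shape = np.clip(np.sin(2 * np.pi * (t % 1.0) - np.pi / 2), 0, None)
demand_gw = base_gw + (peak_gw - base_gw) * daily_shape

print(f"min demand: {demand_gw.min():.1f} GW, max demand: {demand_gw.max():.1f} GW")
```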

[Chart: nominal demand versus renewable supply over 30 days]

[Chart: nominal demand versus renewable supply over 100 days]

We can see that the renewable supply already has daily peaks in spring and summer due to the solar energy contribution.

The role of a big battery would be to cope with the difference between demand and supply. The figures below show the difference between my putative demand curve and supply, over periods of 30 days and a whole year.

[Chart: demand minus supply over 30 days]

[Chart: demand minus supply over a whole year]

I have drawn black dotted lines showing when the difference between demand and supply exceeds 5 GW in either direction. In spring and summer this catches most of the variations. So let’s imagine a battery that could store or release energy at a rate of 5 GW.

What storage capacity would the battery need to have? As a guess, I have done calculations for a battery that could store or release 5 GW of generated power for 5 hours i.e. a battery with a capacity of 5 GW x 5 hours = 25 GWh. We’ll look later to see if this is too much or too little.

How would such a battery perform?

So, how would such a battery affect the ability of wind and solar to deliver a specified demand?

To assess this I used the nominal ‘demand’ I sketched at the top of this article – a demand for 3 GW continuously, but with a daily peak in demand of 9 GW – quite a severe challenge.

The two graphs below show the energy that would be stored in the battery for 30 days after 21 April 2018, and then for the whole following year.

  • When the battery is full then supply is exceeding demand and the excess is available for immediate use.
  • When the battery is empty then supply is simply whatever the elements have given us.
  • When the battery is in-between fully-charged and empty, then it is actively storing or supplying energy.
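
As a rough illustration of this bookkeeping, here is a minimal sketch of the state-of-charge calculation. It is my reconstruction of the logic described above, not the author’s actual calculation: the time step, the perfect charging efficiency and the empty starting state are all assumptions.

```python
import numpy as np

def simulate_battery(supply_gw, demand_gw, capacity_gwh=25.0, max_rate_gw=5.0,
                     dt_hours=5 / 60):
    """Track battery state of charge given supply and demand time series (in GW).

    A simple reconstruction of the bookkeeping described above: surplus power
    charges the battery, shortfalls discharge it, both limited by the 5 GW rate
    and by the capacity. Charging losses are ignored for simplicity.
    """
    stored_gwh = np.zeros(len(supply_gw) + 1)      # start empty (an assumption)
    for i, (s, d) in enumerate(zip(supply_gw, demand_gw)):
        flow_gw = np.clip(s - d, -max_rate_gw, max_rate_gw)   # positive = charging
        stored = stored_gwh[i] + flow_gw * dt_hours
        stored_gwh[i + 1] = min(max(stored, 0.0), capacity_gwh)
    return stored_gwh[1:]

# Example with invented data: a constant 6 GW supply against the 3-9 GW demand curve.
hours = np.arange(0, 24 * 30, 5 / 60)
demand = 3 + 6 * np.clip(np.sin(2 * np.pi * hours / 24 - np.pi / 2), 0, None)
supply = np.full_like(demand, 6.0)
soc = simulate_battery(supply, demand)
print(f"battery full {np.mean(soc >= 25.0):.0%} of the time, "
      f"empty {np.mean(soc <= 0.0):.0%} of the time")
```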

[Chart: energy stored in the battery over the 30 days after 21 April 2018]

Over 30 days (above) the battery spends most of its time empty, but over a full year (below), the battery is put to extensive use.

[Chart: energy stored in the battery over the whole year]

How to measure performance?

To assess the performance of the battery I looked at how the renewable energy available last year would have met levels of constant demand from 1 GW up to 10 GW with different sizes of battery. I considered battery sizes from zero (no storage) up to our 25 GWh battery, in 5 GWh steps. The results are shown below:

[Chart: results for constant demand levels with battery sizes from 0 to 25 GWh]

It is clear that the first 5 GWh of storage makes the biggest difference.

Then I tried modelling several levels of variable demand: a combination of 3 GW of continuous demand with an increasingly large daily variation – up to a peak of 9 GW. This is a much more realistic demand curve.

[Chart: results for the variable demand curves with battery sizes from 0 to 25 GWh]

Once again the first 5 GWh of storage makes a big difference for all the demand curves, and the incremental benefit of bigger batteries is progressively smaller.
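
For anyone wanting to reproduce this kind of sweep, here is a self-contained sketch. The ‘fraction of demand met’ score is my assumption about what ‘performance’ means here – the article does not spell out the metric – and the supply and demand series are invented.

```python
import numpy as np

def fraction_of_demand_met(supply_gw, demand_gw, capacity_gwh, max_rate_gw=5.0,
                           dt_hours=5 / 60):
    """Score one battery size: what fraction of demanded energy is actually served.

    Same bookkeeping as the state-of-charge sketch above; 'performance' here is
    an assumed metric, not necessarily the one used in the article.
    """
    stored, served, wanted = 0.0, 0.0, 0.0
    for s, d in zip(supply_gw, demand_gw):
        surplus = s - d
        if surplus >= 0:
            stored = min(stored + min(surplus, max_rate_gw) * dt_hours, capacity_gwh)
            delivered = d
        else:
            from_battery = min(-surplus, max_rate_gw, stored / dt_hours)
            stored -= from_battery * dt_hours
            delivered = s + from_battery
        served += delivered * dt_hours
        wanted += d * dt_hours
    return served / wanted

# Invented data, as in the earlier sketches: constant 6 GW supply, 3-9 GW daily demand.
hours = np.arange(0, 24 * 365, 5 / 60)
demand = 3 + 6 * np.clip(np.sin(2 * np.pi * hours / 24 - np.pi / 2), 0, None)
supply = np.full_like(demand, 6.0)

for capacity in [0, 5, 10, 15, 20, 25]:            # GWh, as in the article
    score = fraction_of_demand_met(supply, demand, capacity)
    print(f"{capacity:>2} GWh battery: {score:.1%} of demand met")
```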

So based on the above analysis, I am going to consider a battery with 5 GWh of storage – but able to charge or discharge at a rate of 5 GW. But here is the big question:

Is such a battery even feasible?

Hornsdale Power Reserve


The Hornsdale Power Reserve Facility occupies an area about the size of a football pitch. Picture from the ABC site

The biggest battery grid storage facility on Earth was built a couple of years ago in Hornsdale, Australia (Wiki Link, Company Site). It seems to have been a success (link).

Here are its key parameters:

  • It can store or supply power at a rate of 100 MW or 0.1 GW
    • This is one-fiftieth of the 5 GW charge/discharge rate of our planned battery
  • It can store 129 MWh of energy.
    • This is roughly one-fortieth of the 5 GWh capacity of our planned battery
  • Tesla were reportedly paid 50 million US dollars
  • It was supplied in 100 days.
  • It occupies an area about the size of a football pitch.

So why don’t we just build lots of similar things in the UK?

UK Requirements

So if we built 50 Hornsdale-sized facilities, the cost would be roughly 2.5 billion US dollars: i.e. about £2 billion.

If we could build 5 a year, our 5 GWh battery would be complete in 10 years at a cost of around £200 million per year. This is a lot of money. But it is not a ridiculous amount of money when set against the scale of the National Grid infrastructure.
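
The arithmetic behind these figures is easy to check. The dollar-to-pound exchange rate below is my assumption, roughly right for 2019.

```python
# Back-of-envelope version of the cost figures quoted above.
hornsdale_cost_usd = 50e6          # reported Tesla price for one Hornsdale-sized unit
facilities = 50
usd_per_gbp = 1.3                  # assumed exchange rate

total_usd = facilities * hornsdale_cost_usd                 # 2.5 billion dollars
total_gbp = total_usd / usd_per_gbp                         # about £1.9 billion
build_years = facilities / 5                                # 5 facilities per year
per_year_gbp = total_gbp / build_years                      # about £190 million per year
per_household_per_year = total_gbp / 30e6 / build_years     # about £6.40 per year

print(f"total: £{total_gbp/1e9:.1f} bn, per year: £{per_year_gbp/1e6:.0f} m, "
      f"per household per year: £{per_household_per_year:.2f}")
```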

Why this might actually make sense

The key benefits of this kind of investment are:

  • It makes the most of all the renewable energy we generate.
    • By time-shifting the energy from when it is generated to when we need it, it allows renewable energy to be sold at a higher price and improves the economics of all renewable generation
  • The capital costs are predictable and, though large, are not extreme.
  • The capital generates an income within a year of commitment.
    • In contrast, a 3.2 GW nuclear power station like Hinkley Point C is currently estimated to cost about £20 billion, but will not generate any return on investment for perhaps 10 years and carries very high technical and political risk.
  • The plant lifetime appears to be reasonable and many elements of the plant would be recyclable.
  • If distributed into 50 separate Hornsdale-size facilities, the battery would be resilient against a single catastrophic failure.
  • Battery costs still appear to be falling year on year.
  • Spread across 30 million UK households, the cost is about £6 per household per year.

Conclusion

I performed these calculations for my own satisfaction. I am aware that I may have missed things, and that electrical grids are complicated, and that contracts to supply electricity are of labyrinthine complexity. But broadly speaking – more storage makes the grid more stable.

I can also think of some better modelling techniques. But I don’t think that they will affect my conclusion that a grid scale battery is feasible.

  • It would occupy about 50 football pitches worth of land spread around the country.
  • It would cost about £2 billion, about £6 per household per year for 10 years.
    • This is one tenth of the current projected cost of the Hinkley Point C nuclear power station.
  • It would deliver benefits as soon as construction began, and the benefits would improve as the facility grew.

But I cannot comment on whether this makes economic sense. My guess is that when it does, it will be done!

Resources

Data came from Templar Gridwatch

 

Here and there. Now and then

April 21, 2019

Note: Reflecting on what matters to me most, I feel increasingly conscious that the only issue I care about deeply is Climate Change. In my mind, all other issues pale in comparison to the devastation to which we – you, reader and me – are condemning future generations because of our indifference and wilful ignorance.

But even so, I find it hard to know how to act…

On the one hand… 

It has been a beautiful April day.

On the other hand… 

Today, Sea Ice Extent in the Arctic is lower than it has ever been on this date since satellite measurements began in 1979. (Link)


Arctic Sea Ice Extent for March to May from every year since 1979.

On the one hand… 

I strongly support the aims of Climate protesters in London. I share their profound frustration.

On the other hand… 

I feel the protesters are not being honest about the impact of the actions they advocate.

For example, I think if their wishes were granted, we would all be obliged to use much less energy and I only know two ways to do that.

  • The first method is to increase the price of energy – famously not a route to popularity.
  • The second method is to ration energy which has not been attempted in the UK (that I can recall) since the 1974 Oil Crisis.

One could use some combination of these two methods, but I don’t know of any fundamentally different ways.

We are all in favour of ‘Saving the Planet’, but higher energy costs or rationing would be wildly unpopular, because they would increase the cost of almost all products and services.

I would vote for climate action and an impoverishment of my life and my future in a heartbeat. But I am well off.

Unless other people are convinced, and until we find a way to address this problem which is acceptable to those who will be most hurt in the short term – poorer people –  it will never actually happen. And all I care about is that it actually happens.

On the one hand… 

I strongly support the goal of a zero-carbon economy.

On the other hand… 

If the existing carbon-intensive economy reduces in scope too fast, then we will lack the resources to create the new economy.

On the one hand… 

David Attenborough spoke movingly on television this week about ‘Climate Change: The Facts’.


David Attenborough

I watched his programme and, while it’s not the story I would have told, it seemed to me to be a pretty straightforward and fair presentation.

On the other hand… 

Not everyone thought it fair. Here are two specific comments (1, 2), or follow these links for torrents more of the same (Link#1, Link#2, Link#3). I disagree with these people, and their specific points are broadly irrelevant. But their votes are worth just as much as mine.

On the one hand… 

I am trying hard to lower the amount of energy I personally use.

I am measuring the energy use of appliances, reading my meters once a week and switching things off.

My aim is to reduce the electrical power being used by an average of more than 200 watts.

Over one year this will reduce my carbon dioxide emissions by around 0.35 tonnes. (Link).
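
As a rough check of that figure: the grid carbon intensity used below is an assumed round number, not necessarily the one in the linked calculation.

```python
# Rough check of the saving quoted above: a 200 W average reduction, sustained
# for a year, converted to CO2 using an assumed grid carbon intensity of about
# 0.2 kg CO2 per kWh (roughly the UK grid average at the time; the exact figure
# used in the linked calculation may differ).
average_saving_w = 200
hours_per_year = 24 * 365                                   # 8760
kwh_saved = average_saving_w / 1000 * hours_per_year        # about 1750 kWh
carbon_intensity_kg_per_kwh = 0.2
co2_saved_tonnes = kwh_saved * carbon_intensity_kg_per_kwh / 1000
print(f"{kwh_saved:.0f} kWh saved, about {co2_saved_tonnes:.2f} tonnes of CO2")
```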

On the other hand… 

Last year I was invited to give a keynote talk at a conference in New Zealand. I was honoured and said ‘Yes’.

This will cause an additional 7.4 tonnes of carbon dioxide to be emitted. (Link)

[Image: CO2 emissions from a flight to New Zealand]

Andrea Sella has written about this issue and perhaps we are at the end of the era of hypermobility.

On the one hand… 

I felt sad when I saw Notre Dame in flames.

On the other hand… 

I feel sad about droughts and floods and wildfires and destroyed livelihoods and brothers and sisters in poverty around the world.

If billions of euros can be found ‘in an instant’ for Notre Dame, why can’t we address these much more serious and urgent problems as dynamically?

And on this Easter day, I think:

What would Jesus do? 

 

Air Temperature

April 1, 2018

Recently, two disparate strands of my work produced publications within a week of each other.

Curiously they both concerned one of the commonest measurements made on Earth today – the measurement of air temperature.

  • One of the papers was the result of a humbling discovery I made last year concerning a common source of error in air temperature measurements. (Link to open access paper)
  • In the other paper I was just one amongst 17 authors calling for the establishment of a global reference network to monitor the climate. My guess is that most people imagine such a network already exists – but it doesn’t! (Link to open access paper)

I am writing this article because I was struck by the contrasting styles of these papers: one describing an arcane experimental detail; and the other proposing a global inter-governmental initiative.

And yet the aim of both papers was identical: to improve measurement so that we can more clearly see what is happening in the world.

Paper 1

In the middle of 2018 I was experimenting with a new device for measuring air temperature by measuring the speed of sound in air.

It’s an ingenious device, but it obviously needed to be checked. We had previously carried out tests inside environmental chambers, but the temperature stability and uniformity inside the chambers was not as good as we had hoped for.

So we decided to test the device in one of NPL’s dimensional laboratories. In these laboratories, there is a gentle, uniform flow of air from ceiling to floor, and the temperature is stable to within a hundredth of a degree Celsius (0.01 °C) indefinitely.

However, when I tried to measure the temperature of the air using conventional temperature sensors I got widely differing answers – varying by a quarter of a degree depending on where I placed the thermometer. I felt utterly depressed and humiliated.

Eventually I realised what the problem was. This involved stopping. Thinking carefully. And talking with colleagues. It was a classic case of eliminating the impossible leaving only the improbable.

After believing I understood the effect, I devised a simple experiment to test my understanding – a photograph of the apparatus is shown below.

[Photograph: the stainless steel tubes held in a clamp stand in the laboratory]

The apparatus consisted of a set of stainless steel tubes held in a clamp stand. It was almost certainly the cheapest experiment I have ever conducted.

I placed the tubes in the laboratory, exposed to the downward air flow, and left them for several hours to equilibrate with the air.

Prior to this experience, I would have bet serious amounts of money on the ‘fact’ that all these tubes would be at the same temperature. My insight had led me to question this assumption.

And my insight was correct. Every one of the tubes was at a different temperature and none of them were at the temperature of the air! The temperature of the tubes depended on:

  • the brightness of the lights in the room – which was understandable but a larger effect than I expected, and
  • the diameter of the tubes – which was the truly surprising result.

[Chart: measured tube temperatures for different diameters and lighting levels]

I was shocked. But although the reason for this is not obvious, it is also not complicated to understand.

When air flows around a cylindrical (or spherical) sensor, only a very small amount of air actually makes contact with the sensor.

Air reaching the sensor first is stopped (it ‘stagnates’ to use the jargon). At this point heat exchange is very effective. But this same air is then forced to flow around the sensor in a ‘boundary layer’ which effectively insulates the sensor from the rest of the air.

[Diagram: air flow and the boundary layer around a cylindrical sensor]

For small sensors, the sensor acquires a temperature close to that of the air. But the air is surprisingly ineffective at changing the temperature of larger sensors.
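
A toy energy balance shows why diameter matters. This is only a sketch, not the model used in the paper: the absorbed radiative flux, the air speed and the convection correlation are all assumed round numbers.

```python
import math

def radiative_error_kelvin(diameter_m, air_speed_ms=0.1, absorbed_flux_wm2=5.0):
    """Toy estimate of how far a cylindrical sensor sits above air temperature.

    Steady state: absorbed radiation = convective loss, so dT = flux / h, where
    the convective coefficient h comes from a rough cylinder-in-crossflow
    correlation (Nu ~ 0.6 Re^0.5). All numbers are illustrative assumptions.
    """
    k_air = 0.026        # W/(m K), thermal conductivity of air
    nu_air = 1.5e-5      # m^2/s, kinematic viscosity of air
    reynolds = air_speed_ms * diameter_m / nu_air
    nusselt = 0.6 * math.sqrt(reynolds)
    h = nusselt * k_air / diameter_m          # W/(m^2 K); falls as diameter grows
    return absorbed_flux_wm2 / h

for d_mm in (2, 6, 12, 25):
    dt = radiative_error_kelvin(d_mm / 1000)
    print(f"{d_mm:>2} mm sensor: about {dt:.2f} K above air temperature")
```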

The effect matters in two quite distinct realms.

Metrology

In metrology – the science of measurement – it transpires that knowledge of the temperature of the air is important for the most accurate length measurements.

This is because we measure the dimensions of objects in terms of the wavelength of light, and this wavelength is slightly affected by the temperature of the air through which the light passes.
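
To get a feel for the size of the effect, here is a rough calculation. The sensitivity used is the commonly quoted rule of thumb of roughly one part in a million per kelvin for the refractive index of air, not a figure taken from the paper.

```python
# Rough size of the length error caused by mis-measuring the air temperature.
# The refractive index of air changes by roughly 1 part in 10^6 per kelvin
# (a commonly quoted rule of thumb; the exact value depends on conditions).
sensitivity_per_kelvin = 1e-6
air_temperature_error_k = 0.25        # the quarter-degree spread seen in the lab
length_m = 1.0                        # a 1 metre artefact

length_error_m = length_m * sensitivity_per_kelvin * air_temperature_error_k
print(f"about {length_error_m * 1e9:.0f} nm error on a 1 m measurement")   # ~250 nm
```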

In a dimensional laboratory such as the one illustrated below, the thermometer will indicate a temperature which is:

  • different from the temperature of artefacts placed in the room, and
  • different from the temperature of the air.

[Photograph: a dimensional laboratory]

Unless the effect is accounted for – which it generally isn’t – then length measurements will be slightly incorrect.

Climatology

The effect is also important in climatology. If a sensor is changed in a meteorological station, people check that the new sensor is calibrated, but they rarely record its diameter.

If a calibrated sensor is replaced by another calibrated sensor with a different diameter, then there will be a systematic effect on the temperatures recorded by the station. Such effects won’t matter for weather forecasting, but they will matter for people using the stations for a climate record.

And that brings me to Paper 2

Paper 2

[Chart: HadCRUT4 global temperature record]

When we see graphs of ‘global temperatures’ over time, many people assume that the data is derived from satellites or some ‘high-tech’ network of sensors. Not so.

The ‘surface’ temperature of the Earth is generally estimated in two quite distinct parts – sea surface temperature and land surface temperature. But both these terms are slight misnomers.

Considering just the land measurements, the actual temperature measured is the air temperature above the land surface. In the jargon, the measurement is called LSAT – the Land Surface Air Temperature.

LSAT is the temperature which human beings experience, and satellites cannot measure it.

LSAT data is extracted from temperature measurements made in thousands of meteorological stations around the world. We have data records from some stations extending back for 150 years.

However, it is well known that this data is less than ideal: it is biased and unrepresentative in many ways.

The effect described in Paper 1 is just one of many such biases which have been extensively studied. And scientists have devised many ways to check that the overall trend they have extracted – what we now call global warming – is real.

Nonetheless, it is slightly shocking that a global network of stations designed specifically for climate monitoring does not exist.

And that is what we were calling for in Paper 2. Such a climate network would consist of fewer than 200 stations worldwide and cost less than a modest satellite launch. But it would add confidence to the measurements extracted from meteorological stations.

Perhaps the most important reason for creating such a network is that we don’t know how meteorological technology will evolve over the coming century.

Over the last century, the technology has remained reasonably stable. But it is quite possible that the nature of data acquisition for meteorological applications will change  in ways we cannot anticipate.

It seems prudent to me that we establish a global climate reference network as soon as possible.

References

Paper 1

Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology
Michael de Podesta, Stephanie Bell and Robin Underwood

Published 28 February 2018
Metrologia, Volume 55, Number 2 https://doi.org/10.1088/1681-7575/aaaa52

Paper 2

Towards a global land surface climate fiducial reference measurements network
P. W. Thorne, H. J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N. B. Ingleby, P. D. Jones, J. H. Lawrimore, D. H. Lister, A. Merlone, T. Oakley, M. Palecki, T. C. Peterson, M. de Podesta, C. Tassone, V. Venema and K. M. Willett

Published: 1 March 2018
Int. J. Climatol. 2018; 1–15. https://doi.org/10.1002/joc.5458

How do we know anything?

November 18, 2017

[Video: How do we know anything? – talk to the NPL Postgraduate Institute]

This is an edited video of a talk I gave recently to the NPL Postgraduate Institute about the forthcoming changes to the International System of Units, the SI.

It’s 36 minutes long and you can download the PowerPoint slides here.

It features the first ever public display of the Standard Michael – the artefact defining length in the SM, le systeme de moi, or My System of Units. (It’s shown at about 6 minutes 40 seconds into the video).

The central thesis of the talk is summarised in the slide below:

[Slide: the central thesis of the talk]

In the talk I explain how the forthcoming changes to the SI will improve future measurements.

I hope you enjoy it.

 

The Past, Present and Future of Measurement

October 22, 2017

Measurement, the simple process of comparing an unknown quantity with a standard quantity, is the essential component of all scientific endeavours. We are currently about to enter a new epoch of metrology, one which will permit the breath-taking progress of the last hundred years to continue unimpeded into the next century and beyond.

The dawn of this new age has been heralded this week by the publication of an apparently innocuous paper in the journal Metrologia. The paper is entitled:

Data and Analysis for the CODATA 2017 Special Fundamental Constants Adjustment

and its authors, Peter Mohr, David Newell, Barry Taylor and Eite Tiesinga, carried out the work on behalf of the Committee on Data for Science and Technology, commonly referred to as CODATA. In this article I will try to explain the relevance of CODATA’s paper to developments in the science of metrology.

The Past

The way human beings began to make sense of their environment was by measuring it. We can imagine that our agrarian ancestors might have wondered whether crops were taller or heavier this year than last. Or whether plants grew better in one field rather than another. And they would have answered these questions by creating standard weights and measuring rods.

But to effectively communicate their findings, the standard units of measurement would need to be shared. First between villages, and then towns, and then counties and kingdoms. Eventually entire empires would share a system of measurement.

First, units of weight and length were shared. Then, as time became more critical for scientific and technical endeavours, units of time were added to systems of measurement. And these three quantities – mass, length and time – are shared by all systems of units.

These quantities formed the so-called ‘base units’ of a system of measurement. Many other quantities could be described in terms of these ‘base units’. For example, speeds would be described in multiples of [the base unit of length] divided by [the base unit of time]. They might be [feet] per [second] in one system, or [metres] per [second] in another.

Over the last few hundred years, the consistent improvement in measurement techniques has enabled measurements with reduced uncertainty. And since no measurement can ever have a lower uncertainty than the standard quantity in that system of units, there has been a persistent drive to have the most stable, most accurately-known standards, so that they do not form a barrier to improved measurements.

The Present

Presently, all scientific and technical measurements on Earth are made using the International System of Units, the SI. The naming of this system – as an explicitly international system – represented a profound change in conception. It is not an ‘imperial’ system or an ‘English’ system, but a shared enterprise administered by the International Bureau of Weights and Measures (BIPM), a laboratory located in diplomatically-protected land in Sèvres, near Paris, France. Its operation is internationally funded by the dozens of nations who have signed the international treaty known as the Convention of the Metre.

In essence, the SI is humanity’s standard way of giving quantitative descriptions of the world around us. It is really an annex to all human languages, allowing all nationalities and cultures to communicate unambiguously in the realms of science and engineering.

Founded in 1960, the SI was based upon the system of measurement using the metre as the unit of length, the kilogram as the unit of mass, and the second as the unit of time. It also included three more base units.

The kelvin and degree Celsius were adopted as units of temperature, and the ampere was adopted as the unit of electric current. The candela was defined as the unit of luminous intensity – a measure of how bright lights of different colours appear to human beings. And then in 1971 the often qualitative science of chemistry was brought into the fold with the introduction of the mole as the unit of amount of substance, a recognition of the increasing importance of analytical measurements.

[Diagram: the seven SI base units]

The SI is administered by committees of international experts who seek to make sure that the system evolves to meet humanity’s changing needs. Typically these changes are minor and technical, but in 1983 an important conceptual change was made.

Since the foundation of the SI, the ability to measure time had improved more rapidly than the ability to measure length. It was realised that if the metre was defined differently, then length measurements could be improved.

The change proposed was to define what we mean by ‘one metre’ in terms of the distance travelled by light, in a vacuum, in a fixed time. Based on Einstein’s insights, the speed of light in a vacuum, c, is thought to be a universal constant, but at the time it had to be measured in terms of metres and seconds i.e. human-scale measurement standards. The proposal defined the metre in terms of a natural constant – something we believe is truly constant.

The re-definition went well, and set metrologists thinking about whether the change could be adopted more widely.

The Future

Typically every four years, CODATA examines the latest measurements of the natural constants and proposes new best estimates of their values.


This is strange. We believe that the natural constants are really constant, not having changed measurably since the first few seconds of our universe’s existence, whereas our human standards are at most a few decades old and (as with all human standards) are subject to slow changes. Surely it would make more sense to base our measurement standards on these fundamental constants of the natural world? This insight is at the heart of the changes which are about to take place. The CODATA publication this week is the latest in a series of planned steps that will bring about this change on 20th May 2019.


After years of work by hundreds of scientists, the values of the natural constants recommended by the CODATA committee will be fixed – and will form the basis for the new definitions of the seven SI base units.

What will happen on 20th May 2019?

On the 20th May 2019, revised definitions of four of the base units of the SI will come into force. More than 10 years of careful measurements by scientists world-wide will ensure that the new definitions are, as closely as possible, equivalent to the old definitions.

The change is equivalent to removing the foundations underneath a structure and then inserting new foundations which should leave the structure supported in exactly the same way. However the new foundations – being based on natural constants rather than human artefacts – should be much more stable than the old foundations.

If the past is any guide to the future, then in the coming decades and centuries, we can anticipate that measurement technology will improve dramatically. However we cannot anticipate exactly how and where these improvements will take place. By building the SI on foundations based on the natural constants, we are ensuring that the definitions of the unit quantities of the SI will place no restriction whatever on these future possible improvements.

The kilogram

The kilogram is the SI unit of mass. It is currently defined as the mass of the International Prototype of the Kilogram (IPK), a cylinder of platinum-iridium alloy held in a safe at the BIPM. Almost every weighing in the world is, indirectly, a comparison against the mass of this artefact.

On 20th May 2019, this will change. Instead, the kilogram will be defined in terms of a combination of fundamental constants including the Planck constant, h, and the speed of light, c. Although more abstract than the current definition, the new definition is thought to be at least one million times more stable.

The new definition will enable a new kind of weighing technology called a Kibble balance. Instead of balancing the weight of a mass against another object whose mass is known by comparison with the IPK, the weight will be balanced by a force which is calculable in terms of electrical power, and which can be expressed as a multiple of the fundamental constants e, h and c.
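
The principle can be stated compactly: in weighing mode the electromagnetic force balances the weight, m·g = B·L·I, and in moving mode the same coil generates a voltage U = B·L·v. Eliminating the hard-to-measure B·L factor gives m = U·I/(g·v). The numbers below are purely illustrative and do not describe any real balance.

```python
# Kibble balance principle: m * g = (U / v) * I, so the mass is obtained from
# electrical quantities (which can be traced to h and e via the Josephson and
# quantum Hall effects) plus the local gravity g and the coil velocity v.
# All values below are illustrative assumptions.
U = 1.0          # volts, induced in moving mode
I = 0.01         # amperes, coil current in weighing mode
v = 2e-3         # metres per second, coil velocity in moving mode
g = 9.80665      # m/s^2, standard gravity (a real balance uses the local value)

mass_kg = U * I / (g * v)
print(f"inferred mass: {mass_kg:.4f} kg")    # about 0.51 kg for these toy numbers
```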

The ampere

The ampere is the SI unit of electric current. It is presently defined in terms of the current which, if it flowed in two infinitely thin, infinitely long, parallel wires, would (in a vacuum) produce a specified force between the wires. This definition, arcane even by metrologists’ standards, was intended to allow the measurement of the ampere in terms of the force between carefully constructed coils of wire. Sadly, it was out of date shortly after it was implemented.

On 20th May 2019, this will change. Instead, the ampere will be defined in terms of a particular number of electrons per second, each with an exactly specified electrical charge e, flowing past a point on a wire. This definition finally corresponds to the way electric current is described in textbooks.

The new definition will give impetus to techniques which create known electrical currents by using electrical devices which can output an exactly countable number of electrons per second. At the moment these devices are limited to approximately 1 billion (a thousand million) electrons per second, but in future this is likely to increase substantially.
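
A quick calculation shows what these numbers mean, using the exact value of the elementary charge that is fixed by the redefinition:

```python
e = 1.602176634e-19        # coulombs; the exact value fixed in the revised SI

# One ampere is one coulomb per second, i.e. this many elementary charges per second:
electrons_per_second_in_one_amp = 1 / e
print(f"{electrons_per_second_in_one_amp:.3e} electrons per second")   # ~6.24e18

# Conversely, single-electron pumps running at ~1e9 electrons per second deliver:
current_amps = 1e9 * e
print(f"about {current_amps * 1e9:.2f} nA")                            # ~0.16 nA
```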

The kelvin

The kelvin is the SI unit of temperature. It is currently defined as the temperature of the ‘triple point of water’. This temperature – at which liquid water, solid water (ice) and water vapour (but no air) co-exist in equilibrium – is defined to be 273.16 kelvin exactly. Glass cells re-creating this conjunction are located in every temperature calibration lab in the world, and every temperature measurement is a comparison of how much hotter a temperature is than the temperature at one position within a ‘triple point of water cell’.

On 20th May 2019, this will change. Instead, the kelvin will be defined in terms of a particular amount of energy per molecule as specified by the Boltzmann constant, kB. This definition finally corresponds to the way thermal energy is described in textbooks.

The requirement to compare every measurement of temperature with the temperature of the triple point of water adds uncertainty to measurements at extremely low temperatures (below about 20 K) and at high temperatures (above about 1300 K). The new definition will immediately allow small improvements in these measurement ranges, and further improvements are expected to follow.

The definition of the degree Celsius (°C) in terms of the kelvin will remain unchanged.
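
For a sense of scale, the fixed value of the Boltzmann constant links the kelvin to an energy per particle:

```python
k_B = 1.380649e-23      # joules per kelvin; the exact value fixed in the revised SI

# The characteristic thermal energy per particle is of order k_B * T:
for label, T in [("triple point of water", 273.16), ("room temperature", 293.0)]:
    print(f"{label}: k_B * T = {k_B * T:.3e} J")
```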

The mole

The mole is the SI unit of ‘amount of substance’. It is currently defined as the amount of substance which contains the same number of ‘elementary entities’ as there are atoms in 12 grams of carbon-12. The change in the definition of the kilogram required a re-think of this definition.

On 20th May 2019, it will change. The mole will be defined as the amount of substance which contains a particular, exactly specified, number of elementary entities. This number – known as the Avogadro number, NA – is currently estimated experimentally, but in future it will have a fixed value.

The specification of an exact number of entities effectively links the masses of microscopic entities such as atoms and molecules to the new definition of the kilogram.
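
Similarly, the fixed Avogadro number makes the link back to the kilogram concrete. (One subtlety, not needed for this sketch: after the redefinition the molar mass of carbon-12 is no longer exactly 12 g/mol, although it remains so to within a tiny uncertainty.)

```python
N_A = 6.02214076e23     # per mole; the exact value fixed in the revised SI

# Approximate mass of a single carbon-12 atom, taking the molar mass of
# carbon-12 as 12 g/mol (exact before the redefinition, very nearly exact after):
molar_mass_c12_kg = 0.012
mass_of_one_atom_kg = molar_mass_c12_kg / N_A
print(f"one carbon-12 atom: about {mass_of_one_atom_kg:.3e} kg")   # ~1.99e-26 kg
```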

The ‘New’ SI

On 20th May 2019 four of the seven base units of the SI will be re-defined. But what of the other three?

The second is already defined in terms of the natural frequency of microwaves emitted by atoms of a particular caesium isotope. The metre is defined in terms of the second and the speed of light in vacuum – a natural constant. And the candela is defined in terms of Kcd, the only natural constant in the SI that relates to human beings. So from 20th May 2019 the entire SI will be defined in terms of natural constants.

[Diagram: the seven SI base units and their defining constants]

The SI is not perfect. And it will not be perfect even after the redefinitions come into force. This is because it is a system devised by human beings, for human beings. But by incorporating natural constants into the definitions of all its base units, the SI has taken a profound step towards being a system of measurement which will enable ever greater precision in metrology.

And who knows what features of the Universe these improved measurements will reveal.

Talking about the ‘New’ SI

July 3, 2017

I was asked to give a talk about the SI to some visitors tomorrow morning, and so I have prepared some PowerPoint slides.

If you are interested, you can download them using this link (.pptx 13 Mb!): please credit me and NPL if you use them.

But I also experimentally narrated my way through the talk and recorded the result as a movie.

The result is… well, a bit dull. But if you’re interested you can view the results below.

I have split the talk into three parts, which I have called Part 1, Part 2 and Part 3.

Part 1: My System of Units

This 14 minute section is the fun part. It describes a hypothetical system of units which is a bit like the SI, but in which all the units are named after my family and friends.

The idea is to show the structure of any system of units and to highlight some potential shortcomings.

It also emphasises the fact that systems of units are not ‘natural’. They have been created by people to meet our needs.

Part 2: The International System of Units

This 22 minute section – the dullest and most rambling part of the talk – explains the subtle rationale for the changes in the SI upon which we have embarked.

There are two key ideas in this part of the talk:

  • Firstly, there is a description of the separation of the concept of the definition of a unit from the way in which copies of the unit are ‘realised‘.
  • And secondly, there is a description of the role of natural constants in the new definitions of the units of the SI.

Part 3: The Kilogram Problem

This 11 minute section is a description of one of the two ways of solving the kilogram problem: the Kibble balance. It has three highlights!

  • It features a description of the balance by none other than Bryan Kibble himself.
  • There is an animation of a Kibble balance which takes just seconds to play but which took hours to create!
  • And there are also some nice pictures of the Mark II Kibble Balance installed in its new home in Canada, including a short movie of the coil going up and down.

Overall

This is all a bit dull, and I apologise. It’s an experiment and please don’t feel obliged to listen to all or any of it.

When I talk to a live audience I hope it will all be a little punchier – and that the 2800 seconds it took to record this will be reduced to something nearer to its target 2100 seconds.

 

 

 

Not everything is getting worse!

April 19, 2017

[Chart: carbon intensity of UK electricity generation, April 2017]

Friends, I find it hard to believe, but I think I have found something happening in the world which is not bad. Who knew such things still happened?

The news comes from the fantastic web site MyGridGB which charts the development of electricity generation in the UK.

On the site I read that:

  • At lunchtime on Sunday 9th April 2017,  8 GW of solar power was generated.
  • On Friday all coal power stations in the UK were off.
  • On Saturday, strong winds and solar combined with low demand to briefly provide 73% of power.

All three of these facts fill me with hope. Just think:

  • 8 gigawatts of solar power. In the UK! IN APRIL!!!
  • And no coal generation at all!
  • And renewable energy providing 73% of our power!

Even a few years ago each of these facts would have been unthinkable!

And even more wonderfully: nobody noticed!

Of course, these were just transients, but they show we have the potential to generate electricity with a very low carbon intensity.

Carbon Intensity is a measure of the amount of carbon dioxide emitted into the atmosphere for each unit (kWh) of electricity generated.

Wikipedia tells me that electricity generated from:

  • Coal has a carbon intensity of about 1.0 kg of CO2 per kWh
  • Gas has a carbon intensity of about 0.47 kg of CO2 per kWh
  • Biomass has a carbon intensity of about 0.23 kg of CO2 per kWh
  • Solar PV has a carbon intensity of about 0.05 kg of CO2 per kWh
  • Nuclear has a carbon intensity of about 0.02 kg of CO2 per kWh
  • Wind has a carbon intensity of about 0.01 kg of CO2 per kWh

The graph at the head of the page shows that in April 2017 the generating mix in the UK has a carbon intensity of about 0.25 kg of CO2 per kWh.
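
For anyone wanting to repeat the calculation, the carbon intensity of a generation mix is just a weighted average of the figures above. The example mix below is entirely invented, chosen only to land near the April 2017 value.

```python
# Carbon intensity of a generation mix = sum over sources of
# (share of generation) * (intensity of that source), using the figures above.
intensity_kg_per_kwh = {
    "coal": 1.0, "gas": 0.47, "biomass": 0.23,
    "solar": 0.05, "nuclear": 0.02, "wind": 0.01,
}
# Hypothetical mix (fractions of generation) - purely illustrative, not real data.
mix = {"coal": 0.05, "gas": 0.40, "biomass": 0.05,
       "solar": 0.10, "nuclear": 0.20, "wind": 0.20}

grid_intensity = sum(mix[src] * intensity_kg_per_kwh[src] for src in mix)
print(f"grid carbon intensity: {grid_intensity:.2f} kg CO2 per kWh")   # about 0.26
```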

MyGridGB’s mastermind is Andrew Crossland. On the site he has published a manifesto outlining a plan which would actually reduce our carbon intensity to less than 0.1 kg of CO2 per kWh.

What I like about the manifesto is that it is eminently doable.

And who knows? Perhaps we might actually do it?

Ahhhh. Thank you Andrew.

Even thinking that a good thing might still be possible makes me feel better.

 

Remarkably Unremarkable

February 24, 2017

[Photograph: the hydrogen fuel-cell car]

The ‘Now’

‘The future’ is a mysterious place.

And our first encounter with ‘the future’ is ‘the now’.

Today I felt like I encountered the future when I drove a car powered by a hydrogen fuel cell. And far from being mysterious it was remarkably unremarkable.

The raw driving experience was similar to using a conventional car with automatic transmission.

But instead of filling the car with liquid fuel derived from fossil plant matter,  I filled it with hydrogen gas at a pressure 700 times greater than atmospheric pressure.

[Photograph: the hydrogen filling pump]

This was achieved using a pump similar in appearance to a conventional petrol pump.

[Photograph: the hydrogen generation plant at the filling station]

This was the interface to some industrial plant which generated 80 kg of hydrogen each day from nothing more than electricity and water. This is enough to fill roughly 20 cars.

This is small scale in comparison with a conventional petrol station, but these are early days. We are still at the interface with the future. Or one possible future.

The past

Some years ago, I remember making measurements of the temperature and humidity inside a fuel cell during operation.

The measurements were difficult, and the results surprising – to me at least.

And at the end of the project I remember thinking “Well, that was interesting, but it will never work in practice”.

Allow me please to eat my words: it works fine.

Today I was enormously impressed by the engineering prowess that made the fuel cell technology transparent to the driver.

The future

What I learned today was that the technology to make cars which emit no pollution at their point of use exists, now.

The range of this car is 300 miles and it takes only 5 minutes to re-fill. When there are more re-filling stations than the dozen or so currently around the UK, this will become a very attractive proposition.

I have no idea if fuel cell cars will become ubiquitous. Or whether they will become novelties like steam-powered cars from the end of the nineteenth century.

Perhaps this will represent the high-water mark of this technology. Or perhaps this will represent the first swallow in a summer of fuel cell cars.

None of us can know the future. But for the present, I was impressed.

It felt like the future was knocking on the door and asking us to hurry up.

When will the North Pole become the North Pool?

December 16, 2016

[Animation: Arctic sea ice extent by month of the year, 1979–2016]

It is a sad fact, but it is likely that within my lifetime it will become possible to sail to the North Pole. I am 56.

Tragically it is also true that there is absolutely nothing that you or I can do about it.

In fact, even in the unlikely event that humanity en masse decided it wanted to prevent this liquefaction, there would be literally nothing we could do to stop it.

The carbon dioxide we have already put in the atmosphere will warm the Earth’s surface for a few decades yet even if we stopped all emissions right now.

Causation

The particular line of causation between carbon dioxide emissions and warming of the arctic is long, and difficult to pin down.

Similarly it is difficult to determine if a bull in a china shop broke a particular vase, or whether it was a shop helper trying to escape.

Nonetheless, in both cases the ultimate cause is undeniable.

What does the figure show?

The animation at the head of the page, stolen from NASA’s Earth Observatory, is particularly striking and clear.

The animation shows data from 1979 up to November 2016, plotting the extent of sea ice against the month of the year.

Initially the data is stable: each year is the same. But since the year 2000, we have seen reductions in the amount of sea ice which remains frozen over the summer.

In 2012, an additional one million square kilometres – four times the area of England, Scotland and Wales combined – melted.

The summer of 2016 saw the second-largest melt on record.

The animation highlights the fact that the Arctic has been so warm this autumn that sea ice is forming at an unprecedentedly slow rate.

The Arctic sea ice extent for November 2016 is about one million square kilometres less than we would expect at this time of year.

My Concern 

Downloading the data from the US National Snow and Ice Data Center, I produced my own graph of exactly the same data used in the animation.

The graph below lacks the drama of the animated version at the head of the article. But it shows some things more clearly.

[Chart: Arctic sea ice extent by month of the year, 1979–2016]

This static graph shows that the minimum ice extent used to be stable at around 7 ± 1 million square kilometres. The minimum value in 2012 was around half that.

The animated graph at the head of the article highlights the fact that the autumn freeze (dotted blue circle) is slower than usual – something which is not clear in the static graph.

My concern is that if this winter’s freeze is ‘weak’, then the ice formed will be thin, and then next summer’s melt is likely to be especially strong.

And that raises a big question at the very heart of our culture.

When the North Pole becomes the North Pool, where will Santa live?

 

Science in the face of complexity

February 4, 2016

Jeff Dahn: Battery Expert at Dalhousie University

My mother-in-law bought me a great book for Christmas: Black Box Thinking by Matthew Syed: Thanks Kathleen 🙂

The gist of the book is easy to state: our cultural attitude towards “failure” – essentially one of blame and shame – is counterproductive.

Most of the book is spent discussing this theme in relation to the practice of medicine and the law, contrasting attitudes in these areas to those in modern aviation. The stories of unnecessary deaths and of lives wasted are horrific and shocking.

Engineering

But when he moves on to engineering, the theme plays out more subtly. He discusses the cases of James Dyson, the Mercedes Formula 1 team, and David Brailsford from Sky Cycling. All of them have sought success in the face of complexity.

In the case of Dyson, his initial design of a ‘cyclone-based’ dust extractor wasn’t good enough, and the theory was too complex to guide improvements. So he started changing the design and seeing what happened. As recounted, he investigated 5,127 prototypes before he was satisfied with the results. The relevant point here is that his successful design created 5,126 failures.

One of his many insights was to devise a simple measurement technique that detected tiny changes in the effectiveness of his dust extraction: he sucked up fine white dust and blew the exhaust over black velvet.

Jeff Dahn

This approach put me in mind of Jeff Dahn, a battery expert I met at Dalhousie University.

Batteries are really complicated and improving them is hard because there are so many design features that could be changed. What one wants is a way to test as many variants as quickly and as sensitively as possible in order to identify what works and what doesn’t.

However when it comes to battery lifetime – the rate at which the capacity of a battery falls over time – it might seem inevitable that this would take years.

Not so. By charging and discharging batteries in a special manner and at elevated temperatures, it is possible to accelerate the degradation. Jeff then detects this with precision measurements of the ‘coulombic efficiency’ of the cell.

‘Coulombic efficiency’ sounds complicated but is simple. One first measures the electric current as the cell is charged. If the electric current is constant during charging, then the electric current multiplied by the charging time gives the total amount of electric charge stored in the cell. One then measures the same thing as the cell discharges. The coulombic efficiency is the ratio of the charge recovered on discharge to the charge stored during charging.

For the lithium batteries used in electric cars and smart phones, the coulombic efficiency is around 99.9%. But it is that tiny amount (less than 0.1%) of the electric charge which doesn’t come back that progressively damages the cell and limits its life.

One of Jeff’s innovations is the application of precision measurement to this problem. By measuring electric currents with uncertainties of around one part in a million, Jeff can measure that 0.1% of non-returned charge with an uncertainty of around 0.1%. So he can distinguish between cells that are 99.95% efficient and cells that are 99.96% efficient. That may not sound like much, but the second cell loses 20% less charge per cycle!
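
To make the numbers concrete, here is a toy version of the calculation. The charge figures are invented, and compounding the per-cycle loss into a ‘retained charge’ figure is only a crude proxy, not how real capacity fade works.

```python
# Coulombic efficiency = charge recovered on discharge / charge stored on charging.
charge_in_mah = 1000.0            # hypothetical charge stored during charging
charge_out_mah = 999.5            # hypothetical charge recovered on discharge
ce = charge_out_mah / charge_in_mah
print(f"coulombic efficiency: {ce:.2%}")        # 99.95%

# Why a 0.01% difference matters: treat the lost fraction as a rough proxy for
# per-cycle degradation and compound it (a toy model, not a real fade law).
for eff in (0.9995, 0.9996):
    remaining = eff ** 1000
    print(f"CE {eff:.2%}: ~{remaining:.0%} of charge 'retained' after 1000 cycles")
```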

By looking in detail at the coulombic efficiency, Jeff can tell in a few weeks whether a new design of electrode will improve or degrade battery life.

The sensitivity of this test is akin to the ‘white dust on black velvet’ test used by Dyson: it doesn’t tell him why something got better or worse – he has to figure that out for himself. But it does tell him quickly which things were bad ideas.

I couldn’t count the ammeters in Jeff’s lab – each one attached to a test cell – but he was measuring hundreds of cells simultaneously. Inevitably, most of these tests will make the cells perform worse and be categorised as ‘failures’.

But this system allows him to fail fast and fail often: and it is this capability that allows him to succeed at all. I found this application of precision measurement really inspiring.

Thanks Jeff.

 

 

 

 

 

