Exactitude and Inexactitude

July 19, 2017


After being a professional physicist for more than 30 years, I realised the other day that I write for a living.

Yes, I am a physicist, and I still carry out experiments, do calculations and write computer programs.

But at the end of all these activities, I usually end up writing something: a scientific paper; a report; some notes for myself; or a blog article like this.

But although the final ‘output’ of most of what I do is a written communication of some description, nobody ever taught me to write.

I learned to write by reading what I had written. And being appalled.

Appalled by missed words and typographic errors, and by mangled ideas and inappropriate assumptions of familiarity with the subject matter.

Learning to write is a difficult, painful and never-ending process.

And over and over again I am torn between exactitude – which I seek – and inexactitude, which I have learned to tolerate for two reasons.

  • Firstly, a perfect article which is never completed communicates nothing. Lesson one for writing is that finishing is essential.
  • Secondly, an article which has all the appropriate details will be too long and may never be read by the people with whom I seek to communicate.

So in order to communicate optimally, I need to find the appropriate tension between the competing forces of exactitude and inexactitude.

This blog 

When I write for this blog, I try to write articles that are about 500 words long. I rarely succeed.

Typically, I write something. Read it. And then add explanatory text either at the start or at the end.

But with each extra word I type, I realise that fewer and fewer people will read the article and appreciate the clarity of my writing.

And I have to acknowledge that if I had written fewer words I might have communicated something to more people.

Or even communicated more by omitting detail people might find obfuscatory.

Indeed I have to acknowledge – and this is hard – that I could have even written something erroneous and communicated something to more people.

For example

For example, in the previous article on the GEO600 Gravity Wave detector, I said that “moving a mirror by half a wavelength of light caused the interferometer to change from constructive to destructive interference.”

Now I know what you are thinking: and yes, it only has to move by a quarter of a wavelength of light.

I realised this before I finished the article but it had already taken hours, and I had already recorded the narrative to the movie.

Similarly, my animation showed one of the reflections coming from the wrong side of a piece of glass (!), and it omitted the normal ‘compensator’ plate in the interferometer.

And how many people noticed or complained? None so far.

So the article was published and presumably communicated something, inexactly and slightly incorrectly. And it was not wholly erroneous.

Exactitude and Inexactitude

Exactitude and Inexactitude are like two mis-matched protagonists in a ‘buddy movie’.

At the start they hate each other, but over the course of ‘a journey’ in which they are compelled to accompany one another, they learn to love each other for what they are, and to accept each other for what they are not.

Inexactitude: You drive me crazy, but I love you.

Gravity Wave Detector#2

July 15, 2017

GEO600 One arm


After presenting a paper at the European Society of Precision Engineering and Nanotechnology (EUSPEN) in Hannover back in May, I was offered the chance to visit a Gravity Wave Detector. Wow! I jumped at the opportunity!

The visiting delegation were driven in a three-minibus convoy for about 30 minutes, ending up in the middle of a field of cabbages.

After artfully turning around and re-tracing our steps, we found a long, straight, gated track running off the cabbage-field track.

Near the gate was a shed, and alongside the road ran some corrugated sheet covering what looked like a drainage ditch.

These were the only clues that we were approaching one of the most sensitive devices that human beings have ever built: the GEO600 gravity-wave detector (Wikipedia or GEO600 home page).

Even as we drove down the road, the device in ‘the ditch’ was looking for length changes in the 600 metre road of less than one thousandth the diameter of a single proton.

Nothing about how to achieve such sensitivity is obvious. And as my previous article made clear, there have been many false steps along the way.

But even the phenomenal sensitivity of this detector turns out to be not quite good enough to detect the gravity waves from colliding black holes.

In order to detect recent events GEO600 would have to have been between 3 and 10 times more sensitive.

The measuring principle

The GEO600 device as it appears above ground is illustrated in the drone movie above.

It consists of a series of huts and an underground laboratory at the intersection of two 600 metre long ‘arms’.

In the central laboratory, a powerful (30 watt) laser shines light of a single wavelength onto a beam-splitter: a piece of glass with a thin metal coating.

The beam-splitter reflects half the light and transmits the other half, creating two beams which travel at 90° to each other along the two arms of the device.

At the end of the arms, a mirror reflects the light back to the beam-splitter and onto a light detector where the beams re-combine.

Aside from the laser, all the optical components are suspended from anti-vibration mountings inside vacuum tubes about 50 cm in diameter.

When set up optimally, the light traversing the two arms interferes destructively, giving almost zero light signal at the detector.

But a motion of one mirror by half of a wavelength of light (~0.0005 millimetres), will result in a signal going from nearly zero watts (when there is destructive interference) to roughly 30 watts (when there is constructive interference).

So this device – which is called a Michelson Interferometer – senses tiny differences in the path of light in the two arms. These differences might be due to the motion of one of the mirrors, or due to light in one arm being delayed with respect to light in the other arm.
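The dependence of the detector signal on mirror position can be sketched numerically. The sketch below assumes a 1064 nm laser wavelength (a typical choice for such instruments, not stated in the article) and starts from perfect destructive interference. Note that, as the "Exactitude and Inexactitude" article above concedes, a mirror motion of a quarter of a wavelength is strictly sufficient, because the light traverses the arm twice.

```python
import math

WAVELENGTH = 1.064e-6   # metres; an assumed, typical laser wavelength
P_IN = 30.0             # watts of laser power

def output_power(mirror_shift):
    """Power at the detector, starting from perfect destructive interference.
    A mirror shift of x changes the round-trip path by 2x, so the phase
    difference between the two beams changes by 2*pi*(2x)/wavelength."""
    phase = 2 * math.pi * (2 * mirror_shift) / WAVELENGTH
    return P_IN * math.sin(phase / 2) ** 2

print(output_power(0.0))             # ~0 W: destructive interference
print(output_power(WAVELENGTH / 4))  # ~30 W: constructive interference
```

A quarter-wavelength mirror shift produces a half-wavelength change in the round-trip path, which is what switches the interference from destructive to constructive.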


The basic sensitivity to motion can be calculated (roughly) as follows.

Shifting one mirror by half a wavelength (roughly 0.0005 millimetres) results in an optical signal increasing from near zero to roughly 30 watts: a sensitivity of around 60,000 watts per millimetre.

Modern silicon detectors can detect perhaps a pico-watt (10⁻¹² watt) of light.

So the device can detect a motion of just

10⁻¹² watts ÷ 60,000 watts per millimetre

or roughly 2 × 10⁻¹⁷ mm, which is 2 × 10⁻²⁰ metres. Or one hundred thousandth the diameter of a proton!

If the beam paths are each 600 metres long, then the ability to detect displacements is equivalent to a fractional strain of roughly 10⁻²³ in one beam path over the other.

So GEO600 could, in principle, detect a change in length of one arm compared to the other by a fraction:

0.000 000 000 000 000 000 000 01

There are lots of reasons why this sensitivity is not fully realised, but that is the basic operating principle of the interferometer.
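The arithmetic above can be reproduced in a few lines, using the article's own round numbers:

```python
half_wavelength_mm = 5e-4   # ~0.0005 mm, half a wavelength of the laser light
full_scale_watts = 30.0     # power at the detector at constructive interference

# Sensitivity of optical power to mirror motion: ~60,000 watts per millimetre
sensitivity_w_per_mm = full_scale_watts / half_wavelength_mm

min_detectable_power_w = 1e-12  # ~1 pW, a modern silicon detector's limit
min_shift_mm = min_detectable_power_w / sensitivity_w_per_mm  # ~1.7e-17 mm
min_shift_m = min_shift_mm * 1e-3                             # ~1.7e-20 m

# Fractional length change over a 600 m arm: ~3e-23
strain = min_shift_m / 600.0
print(f"smallest detectable shift: {min_shift_m:.1e} m, strain ~ {strain:.0e}")
```

This is only the idealised limit; as the article says, various noise sources prevent the full sensitivity being realised.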

The ‘trick’ is isolation

The scientists running the experiment think that a gravity wave passing through the detector will cause tiny, fluctuating changes in the length of one arm of GEO600 compared with the other arm.

The changes they expect are tiny which is why they made GEO600 so sensitive.

But in the same way that a super-sensitive microphone in a noisy room would just make the noise seem louder, GEO600 is useless unless it can be isolated from noise and vibrations.

So the ‘trick’ is to place this extraordinarily sensitive ‘microphone’ into an extraordinarily ‘quiet’ environment. This is very difficult.

If one sits in a quiet room, one can slowly become aware of all kinds of noises which were previously present, but of which one was unaware:

  • the sound of the flow of blood in our ears;
  • the sound of the house ‘creaking’;
  • other ‘hums’ of indeterminate origin.

Similarly, GEO600 can ‘hear’ previously unimaginably ‘quiet’ sounds:

  • the ground vibrations of Atlantic waves crashing on the shores of Europe;
  • the atom-by-atom ‘creeping’ of the suspension holding the mirrors.


So during an experiment, the components of GEO600 sit in a vacuum and the mirrors and optical components are suspended from silica (glass) fibres, which are themselves suspended from the end of a spring-on-a-spring-on-a-spring!

In the photograph below, the stainless steel vacuum vessels containing the key components can be seen in the underground ‘hub’ at the intersection of the two arms.

GEO600 Beam Splitter

They are as isolated from the ‘local’ environment as possible.

The output of the detector – the brightness of the light on the detector – is shown live on one of the many screens in the control ‘hut’.

GEO 600 Control Centre

But instead of a graph of ‘brightness versus time’, the signal is shown as a graph of the frequencies of vibration detected by the silicon detector.


The picture below shows a graph of the strain – the difference in length of the two arms – detected at different frequencies.

[Please note the graph is what scientists call ‘logarithmic’. This means that a given distance on either axis corresponds to a constant multiplier. So each group of horizontal lines corresponds to a change in strain by a factor of 10, and the maximum strain shown on the vertical axis is 10,000 times larger than the smallest strain shown.]

Sensitivity Curve

The picture above shows two traces, which have four key features:

  • The blue curve showed the signal being detected as we watched; the red curve was the best recorded performance of the detector. So the detector was performing close to its optimum.
  • Both curves are large at low frequencies, have a minimum close to 600 Hz, and then rise slowly. This is the background noise of the detector. Ideally they would like this to be about 10 times lower, particularly at low frequencies.
  • Close to the minimum is a large cluster of spikes: these are the natural frequencies of vibration of the mirror suspensions and the other optical components.
  • There are lots of spikes caused by specific noise sources in the environment.

If a gravity wave passed by…

…it would appear as a sudden spike at a particular frequency, and this frequency would then increase, and finally the spike would disappear.

It would be over in less than a second.

And how could they tell it was a gravity wave and not just random noise? Well that’s the second trick: gravity wave detectors hunt in pairs.

The signal from this detector is analysed alongside signals from other gravity wave detectors located thousands of kilometres away.

If the signal came from a gravity wave, then they would expect to see a similar signal in the second detector either just before or just afterwards – within a ‘time window’ consistent with a wave travelling at the speed of light.
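That time-window test can be sketched as follows. The 8,000 km baseline is an illustrative figure, not one from the article; actual detector separations vary.

```python
C = 299_792_458.0  # speed of light in m/s

def could_be_same_wave(t1_s, t2_s, baseline_m):
    """Could two signals, seen at times t1_s and t2_s (in seconds) by
    detectors baseline_m apart, have come from one wave travelling at
    the speed of light? True if the arrival gap fits the light travel time."""
    return abs(t1_s - t2_s) <= baseline_m / C

print(could_be_same_wave(0.000, 0.020, 8.0e6))  # 20 ms gap, ~26.7 ms window: True
print(could_be_same_wave(0.000, 0.050, 8.0e6))  # 50 ms gap: too slow, False
```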



Because powerful lasers were in use, visitors were obliged to wear laser goggles!

This was the second gravity wave detector I have seen that has never detected a gravity wave.

But I saw this one in the new era in which we know these waves exist.

People have been actively searching for these waves for roughly 50 years and I am filled with admiration for the nobility of the researchers who spent their careers fruitlessly searching and failing to find gravity waves.

But the collective effect of these decades of ‘failure’ is a collective success: we now know how to ‘listen’ to the Universe in a new way which will probably revolutionise how we look at the Universe in the coming centuries.

A 12-minute Documentary

Gravity Wave Detector#1

July 6, 2017
Me and Albert Einstein

Not Charlie Chaplin: That’s me and Albert Einstein. A special moment for me. Not so much for him.

I belong to an exclusive club! I have visited two gravity wave detectors in my life.

Neither of the detectors has ever detected gravity waves, but nonetheless, both of them filled me with admiration for their inventors.

Bristol, 1987 

In 1987, the buzz of the discovery of high-temperature superconductors was still intense.

I was in my first post-doctoral appointment at the University of Bristol and I spent many late late nights ‘cooking’ up compounds and carrying out experiments.

As I wandered around the H. H. Wills Physics department late at night I opened a door and discovered a secret corridor underneath the main corridor.

Stretching for perhaps 50 metres along the subterranean hideout was a high-tech arrangement of vacuum tubing, separated every 10 metres or so by a ‘castle’ of vacuum apparatus.

It lay dormant and dusty and silent in the stillness of the night.

The next day I asked about the apparatus at morning tea – a ritual amongst the low-temperature physicists.

It was Peter Aplin who smiled wryly and claimed ownership. Peter was a kindly antipodean physicist, a generalist – and an expert in electronics.

New Scientist article from 1975


He explained that it was his new idea for a gravity wave detector.

In each of the ‘castles’ was a mass suspended in vacuum from a spring made of quartz.

He had calculated that by detecting ‘ringing’ in multiple masses, rather than in a single mass, he could make a detector whose sensitivity scaled as its length² rather than as its length.

He had devised the theory; built the apparatus; done the experiment; and written the paper announcing that gravity waves had not been detected with a new limit of sensitivity.

He then submitted the paper to Physical Review. It was at this point that a referee had reminded him that:

When a term in L² is taken from the left-hand side of the equation to the right-hand side, it changes sign. You will thus find that in your Equation 13, the term in L² will cancel.

And so his detector was not any more sensitive than anyone else’s.

And so…

If it had been me, I think I might have cried.

But as Peter recounted this tale, he did not cry. He smiled and put it down to experience.

Peter was – and perhaps still is – a brilliant physicist. And amongst the kindest and most helpful people I have ever met.

And I felt inspired by his screw up. Or rather I was inspired by his ability to openly acknowledge his mistake. Smile. And move on.

30 years later…

…I visited GEO600. And I will describe this dramatically scaled-up experiment in my next article.

P.S. (Aplin)

Peter S Aplin wrote a review of gravitational wave experiments in 1972 and presented a paper at a conference called “A novel gravitational wave antenna”. Sadly, I don’t have easy access to either of these sources.


Talking about the ‘New’ SI

July 3, 2017

I was asked to give a talk about the SI to some visitors tomorrow morning, and so I have prepared some PowerPoint slides.

If you are interested, you can download them using this link (.pptx 13 Mb!): please credit me and NPL if you use them.

But I also experimentally narrated my way through the talk and recorded the result as a movie.

The result is… well, a bit dull. But if you’re interested you can view the results below.

I have split the talk into three parts, which I have called Part 1, Part 2 and Part 3.

Part 1: My System of Units

This 14 minute section is the fun part. It describes a hypothetical system of units which is a bit like the SI, but in which all the units are named after my family and friends.

The idea is to show the structure of any system of units and to highlight some potential shortcomings.

It also emphasises the fact that systems of units are not ‘natural’. They have been created by people to meet our needs.

Part 2: The International System of Units

This 22 minute section – the dullest and most rambling part of the talk – explains the subtle rationale for the changes in the SI upon which we have embarked.

There are two key ideas in this part of the talk:

  • Firstly, there is a description of how the definition of a unit is separated from the way in which copies of the unit are ‘realised’.
  • And secondly, there is a description of the role of natural constants in the new definitions of the units of the SI.

Part 3: The Kilogram Problem

This 11 minute section is a description of one of the two ways of solving the kilogram problem: the Kibble balance. It has three highlights!

  • It features a description of the balance by none other than Bryan Kibble himself.
  • There is an animation of a Kibble balance which takes just seconds to play but which took hours to create!
  • And there are also some nice pictures of the Mark II Kibble Balance installed in its new home in Canada, including a short movie of the coil going up and down.


This is all a bit dull, and I apologise. It’s an experiment and please don’t feel obliged to listen to all or any of it.

When I talk to a live audience I hope it will all be a little punchier – and that the 2800 seconds it took to record this will be reduced to something nearer to its target 2100 seconds.





July 2, 2017

SI Units

Welcome to the Interregnum.

At midnight on the 30th June 2017 the world stepped over the threshold into a new domain of metrology.

It is now too late to ever measure the Boltzmann constant or the Planck constant 😦

What do you mean?

Measuring is the process of comparing one thing – the thing you are trying to measure – with a standard, or combination of standards.

So when we measure a speed, we are comparing the speed of an object with the speed of “one metre per one second”.

  • The Boltzmann constant tells us (amongst other things) the amount of energy that a gas molecule possesses at a particular temperature.
  • The Planck constant tells us (amongst other things) the quantum mechanical wavelength of a particle travelling with a steady speed.

To measure these constants we need to make comparisons against our measurement standards of metres, seconds, kilograms and kelvins.
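As a rough illustration of what the two constants tell us, the numbers below use the CODATA values of the constants (they are not figures from the article):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s

def mean_kinetic_energy(T_kelvin):
    """Average translational kinetic energy of a gas molecule at temperature T."""
    return 1.5 * K_B * T_kelvin

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    """Quantum-mechanical wavelength of a particle moving at a steady speed."""
    return H / (mass_kg * speed_m_per_s)

print(mean_kinetic_energy(300.0))               # ~6.2e-21 J at room temperature
print(de_broglie_wavelength(9.109e-31, 1.0e6))  # electron at 1000 km/s: ~7.3e-10 m
```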


But actually we think that quantities such as the Planck constant are really more constant than any human-conceived standard. That’s why we call them ‘constants’!

And so it seems a bit ‘cart-before-horse’ to compare these ‘truly-constant’ quantities to our inevitably-imperfect ‘human standards’.

Over the last few decades it has become apparent that it would make much more sense if we reversed the direction of comparison.

In this new conception of measurement standards, we would base the length of a metre, the mass of a kilogram etc. on these truly constant quantities.

And that is what we are doing.

Over the last decade or so, metrologists world-wide have made intense efforts to make the most accurate measurements of these constants in terms of the current definitions of units embodied in the International System of Units, the SI.

On July 1st 2017, we entered a transition period – an interregnum – in which scientists will analyse these results.

The analysis is complicated and so for practical reasons, even if new and improved measurements were made, they would not be considered.

If the results are satisfactory the General Conference on Weights and Measures, a high-powered diplomatic meeting, will approve them. And on May 20th 2019 the world will switch to a new system of measurement.

This will be a system of measurement which is scaled to constants of nature that we see around us.

And afterwards?

The values of seven ‘natural constants’, including the Boltzmann constant and the Planck constant, will be fixed.

So previously people placed known masses onto special ‘Kibble balances’ and made an estimate of the Planck constant.

By ‘known masses’ we mean masses that had been compared (directly or indirectly) with the mass of the International Prototype of the Kilogram.

After 20th May 2019, people carrying out the same experiment will already know the value of the Planck constant: we will build our system of measurement on that value.

And so the same experiment will instead yield an estimate of the mass of the object on the Kibble balance.
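The reversal can be sketched with the watt-balance relation, in which the electrical power U×I balances the mechanical power m×g×v. Before the change, m was known and the Planck constant was inferred; afterwards, with the Planck constant fixed, the same measurements yield m. The numbers below are purely illustrative:

```python
def kibble_mass(U_volts, I_amps, g_m_per_s2, v_m_per_s):
    """Watt-balance relation: U*I = m*g*v, so m = U*I/(g*v).
    With U and I measured against quantum electrical standards,
    the result is traceable to the fixed Planck constant."""
    return (U_volts * I_amps) / (g_m_per_s2 * v_m_per_s)

# Invented numbers chosen to give a round answer
m = kibble_mass(U_volts=1.0, I_amps=0.0098, g_m_per_s2=9.8, v_m_per_s=0.001)
print(m)  # ≈ 1.0 kg
```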

What difference will it make?

At the point of the switch-over it will make no difference whatsoever.

Which raises the question: “Why are you doing this?”

The reason is that these unit definitions form the foundations for measurements in every branch of every science.

And the foundations of every complex structure – be it a building or a system of units – need occasional maintenance.

Such work is often expensive and afterwards there is nothing to show except confidence that the structure will not subside or crack. And that is the aim of this change.

The advances in measurement science over the last century have been staggering. And key developments would have been inconceivable even a few decades before they were made.

Similarly, we anticipate that over future centuries measurement science will continue to improve, presumably in ways that we cannot yet conceive.

By building the most stable foundations of which we can conceive, we are making sure that – to the very best of our ability – scientific advances will not be hindered by drifts or inconsistency in the system of units used to report the results of experiments.


What is Life?

June 28, 2017
Royal Trinity Hospice

A pond in the garden of the Royal Trinity Hospice.

On Monday, my good friend Paula Chandler died.

It seems shocking to me that I can even type those words.

She had cancer, and was in a hospice, and her passing was no surprise to her or those who loved her. But it was, and still is, a terrible shock.

It is unthinkable to me that we will never converse again.

How can someone be alive and completely self-aware and witty on Saturday; exchanging texts on Sunday evening; and then simply gone on Monday morning?

Her body was still there, but the essential spark that anyone would recognise as being ‘Paula’, was gone.

As I sat in the garden of the Royal Trinity Hospice, I reflected on a number of things.

And surrounded by teeming beautiful life, the question of “What is Life?” came to my mind. Paula would have been interested in this question.

What is life?

In particular I tried to recall the details of the eponymous book by Addy Pross.

In honesty I can’t recommend the book because it singularly fails to answer the question it sets itself.

In the same way that a book called “How to become rich” might provide an answer for the author but not the reader, so Addy Pross’s book was probably valuable for Addy Pross as he tried to clarify his thoughts. And to that extent the book is worth reading.

Life is ubiquitous on Earth, and after surveying previous authors’ reflections, Addy Pross focuses the question of “What is Life?” at one specific place: the interface between chemistry and biology:

  • In chemistry, reactions run their course blindly and become exhausted.
  • In biology, chemistry seeks out energy sources to maintain what Addy Pross calls a dynamic, kinetic stability.

So how does chemistry ‘become’ biology?

In the same way that a spinning top is stable as long as it spins, or a vortex persists in a flowing fluid, life seems to be a set of chemical reactions which exhibit an ability to ‘keep themselves going’.

What is life?

Re-naming ‘life’ as ‘dynamic kinetic stability’ does not seem to me to be particularly satisfactory.

It doesn’t explain how or why things spontaneously acquire dynamic kinetic stability any more than saying something is alive explains its aliveness.

I do expect that one day someone will answer the question of “What is Life?” in a meaningful technical way.

But for now, as I think about Paula, and the shocking disappearance of her unique dynamic kinetic stability, I am simply lost for words.

Measuring the Boltzmann constant for the last time

June 27, 2017
BIPM gardens

The gardens of the International Bureau of Weights and Measures (BIPM) in Paris

If you were thinking of measuring the Boltzmann constant, you had better hurry up.

If your research paper reporting your result is not accepted for publication by the end of this Friday 30th June 2017 then you are out of time.

As I write this on the morning of Tuesday 27th June 2017, there are four days to go and one very significant measurement has yet to be published.

UPDATE: It’s arrived! See the end of the article for details

What’s going on?

The Boltzmann constant is the conversion factor between mechanical energy and temperature.

Setting to one side my compulsion to scientific exactitude, the Boltzmann constant tells us how many joules of energy we must give to a molecule in order to increase its temperature by one kelvin (or one degree Celsius).

At the moment we measure temperatures in terms of other temperatures: we measure how much hotter or colder something is than a special temperature called the Triple Point of Water.

And energy is measured quite separately in joules.

From May 2019 the world’s metrologists plan to change this. We plan to use our best estimate of the Boltzmann constant to define temperature in terms of the energy of molecules.

This represents a fundamental change in our conception of the unit of temperature and of what we mean by ‘one degree’.

In my view, it is a change which is long overdue.

How will this changeover be made?

For the last decade or so, research teams from different countries have been making measurements of the Boltzmann constant.

The aim has been to make measurements with low measurement uncertainty.

Establishing a robust estimate of the measurement uncertainty is difficult and time-consuming.

It involves considering every part of an experiment and then asking two questions. Firstly:

  • “How wrong could this part of the experiment be?”

and secondly:

  • “What effect could this have on the final estimate of the Boltzmann constant?”

Typically, working out the effect of one part of an experiment on the overall estimate of the Boltzmann constant might involve auxiliary experiments that may themselves take years.

Finally one constructs a big table (or spreadsheet) in which one adds up all the possible sources of uncertainty to produce an overall uncertainty estimate.
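For independent sources of uncertainty, that "adding up" is a root-sum-of-squares. A minimal sketch with a hypothetical budget (the component names and sizes are invented for illustration):

```python
import math

# Hypothetical uncertainty budget, fractional uncertainties in parts per million
budget_ppm = {
    "acoustic resonance fits": 0.4,
    "temperature measurement": 0.3,
    "molar mass of the gas": 0.4,
    "resonator dimensions": 0.2,
}

# Independent components combine in quadrature
total_ppm = math.sqrt(sum(u * u for u in budget_ppm.values()))
print(f"combined uncertainty: {total_ppm:.2f} ppm")  # 0.67 ppm
```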

Every four years, a committee of experts called CODATA critically reviews all the published estimates of fundamental constants made in the last four years and comes up with a set of recommended values.

The CODATA recommendations are a ‘weighted’ average of the published data giving more weight to estimates which have a low measurement uncertainty.
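A minimal sketch of such a weighted average, with each estimate weighted by the inverse square of its uncertainty (the numbers are illustrative, not real Boltzmann-constant data):

```python
def weighted_mean(estimates):
    """estimates: list of (value, uncertainty) pairs.
    Each value is weighted by 1/uncertainty**2; the combined
    uncertainty is the square root of 1/(sum of weights)."""
    weights = [1.0 / (u * u) for _, u in estimates]
    mean = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    u_mean = (1.0 / sum(weights)) ** 0.5
    return mean, u_mean

# Two illustrative estimates: the more precise one dominates the average
mean, u = weighted_mean([(10.0, 1.0), (12.0, 2.0)])
print(mean, u)  # 10.4, ~0.89
```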

In order to make their consensus estimate of the value of the Boltzmann constant in good time for the redefinition of the kelvin in 2019, CODATA set a deadline of 1st July 2017 – this coming Saturday.

Only papers which have been accepted for publication – i.e. submitted and refereed – by that date will be considered.

After this date, a new measurement of the link between temperature and molecular energy will be reflected as a change in our temperature scale, not a change in the Boltzmann constant, which will be fixed forever.

The NPL Boltzmann constant estimate.

Professionally and personally, I have spent a decent fraction of the last 10 years working on an estimate of the Boltzmann constant – the official NPL estimate.

To do this we worked out the energy of molecules in a two-step process.

  • We inferred the average speed of argon molecules held at the temperature of the triple point of water using precision measurements of the speed of sound in argon gas.
  • We then worked out the average mass of an argon atom from measurements of the isotopic composition of argon.

Bringing these results together, we were able to work out the kinetic energy of argon molecules at the temperature of the triple point of water.
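The core relation behind this two-step process can be sketched as follows. For a near-ideal monatomic gas, the speed of sound u₀ satisfies u₀² = γ·k·T/m with γ = 5/3, so measuring u₀ at a known temperature, with a known average molecular mass, yields the Boltzmann constant. The speed-of-sound figure below is approximate and illustrative, not NPL's measured value:

```python
GAMMA = 5.0 / 3.0                  # heat-capacity ratio for a monatomic gas
T_TPW = 273.16                     # K, temperature of the triple point of water
M_ARGON = 39.948 * 1.66053907e-27  # kg, average mass of an argon atom

def boltzmann_from_speed_of_sound(u0_m_per_s):
    """Ideal-gas limit: u0**2 = GAMMA * k * T / m, rearranged for k."""
    return M_ARGON * u0_m_per_s**2 / (GAMMA * T_TPW)

k_estimate = boltzmann_from_speed_of_sound(307.8)  # ~speed of sound in argon at T_TPW
print(k_estimate)  # ~1.38e-23 J/K
```

The real experiment extrapolates the speed of sound to zero pressure and corrects for many effects; this sketch shows only the ideal-gas limit.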

When we published our Boltzmann constant estimate in 2013 we estimated that it had a fractional uncertainty of 0.7 parts per million.

Unfortunately it transpired that our estimate was just wrong. Colleagues from around the world helpfully highlighted my mistake. That led to a revised estimate in 2015 with a fractional uncertainty of 0.9 parts per million.

At the time I found this cripplingly humiliating, but as I look at it now, it seems like just a normal part of the scientific process.

The source of my error was in the estimate of the isotopic content of the argon gas we used in our experiment.

Since then I have worked with many colleagues inside and outside NPL to improve this part of the experiment. And earlier this month we published our final NPL estimate of the Boltzmann constant with a fractional uncertainty of… 0.7 parts per million: back to where we were four years ago!

Our estimate is just one among many from laboratories in the USA, China, Japan, Spain, Italy, France, and Germany.

But at the moment (7:30 a.m. BST on 27th June 2017) the NPL-2017 estimate has the lowest uncertainty of any published value of the Boltzmann constant.


The history of NPL’s recent estimates of the Boltzmann constant. The NPL 2017 estimate of the Boltzmann constant is close to CODATA’s 2014 consensus estimate

The LNE-CNAM Boltzmann constant estimate.

However my Frieval – i.e. friendly rival – Dr. Laurent Pitre from LNE-CNAM in France reported at a meeting at BIPM last month that he had made an estimate of the Boltzmann constant with a fractional uncertainty of just 0.6 parts per million.

WOW! That’s right. 0.1 parts per million more accurate than the NPL estimate.

Dr. Pitre is a brilliant experimenter and if he has achieved this, I take my hat off to him.

I have been looking daily at this page on the website of the journal Metrologia to see if his paper is there. But as I write, the paper has not yet been accepted for publication!

So after working on this project for 10 years I still don’t know if I will have made the most accurate measurement of the Boltzmann constant ever. Or only the second most accurate.

But I will know for sure in just 4 days time.


The article arrived this lunchtime

New Measurement of the Boltzmann Constant by acoustic thermometry in helium-4 gas

The paper reports a measurement of the Boltzmann Constant with a fractional uncertainty of just 0.6 parts per million.

The measurements are similar in overall quality to those we published four years ago, but the French team made a crucial advance: they used helium for the measurements rather than argon.

Overall, measurements are technically more difficult in helium gas than in argon. These difficulties arise because helium is not a very dense gas, so microphones don’t work so well. Additionally, the speed of sound is high – around three times higher than in argon.

But they have put in a lot of work to overcome these difficulties. And there are two rewards.

Their first reward is that by using a liquid helium ‘trap’ they can ensure exceptional gas purity. Their ‘trap’ is a device cooled to 4.2 degrees above absolute zero at which temperature every other gas solidifies. This has allowed them to obtain an exceptionally low uncertainty in the determination of the molar mass of the gas.

Their second reward is the most astounding. Critical uncertainties in the experiment originate with measurements of properties of helium gas, such as its compressibility or thermal conductivity.

For helium gas, these properties can be calculated from first principles more accurately than they can be measured. Let me explain.

These calculations assume the known properties of a helium nucleus and that a helium atom has two electrons. Then everything is calculated assuming that the Schrödinger Equation describes the dynamics of the electrons, and that the electrons and the nucleus interact with each other through Coulomb’s law. That’s it!

  • First, the basic properties of the helium atom are calculated.
  • Then the way electric fields affect the atom is calculated.
  • Then the way two helium atoms interact is calculated.
  • And then the effect of a nearby third atom on that interaction is calculated.
  • And so on.

Finally, the numbers in the calculation are jiggled about a bit to see how wrong the calculation might be, so that the uncertainty of the calculation can be estimated.
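This ‘jiggling’ is essentially a Monte Carlo sensitivity analysis. Here is a purely illustrative sketch: the model function, the input values and the uncertainties below are all invented stand-ins, not the real ab initio helium calculation. The idea is simply to perturb each input within its assumed uncertainty and look at the spread of the results.

```python
import random

def property_model(polarizability, pair_potential_depth):
    # Invented stand-in for an ab initio property calculation:
    # the real calculations solve the Schrodinger equation for helium.
    return 1007.0 * (1 + 0.1 * (polarizability - 1)
                       - 0.05 * (pair_potential_depth - 1))

random.seed(42)

# 'Jiggle' each input within its assumed uncertainty, many times over,
# and watch how much the calculated property varies
results = [
    property_model(random.gauss(1.0, 0.001), random.gauss(1.0, 0.002))
    for _ in range(10_000)
]

mean = sum(results) / len(results)
spread = (sum((r - mean) ** 2 for r in results) / len(results)) ** 0.5
print(f"mean = {mean:.2f}, estimated uncertainty = {spread:.3f}")
```

The spread of the jiggled results serves as an estimate of the uncertainty of the calculation itself.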

In this way, the physical properties of helium gas can be calculated more accurately than they can be measured, and that is the reward that the French team could use to overcome some of their experimental difficulties.

Is it hotter than normal?

June 21, 2017

This map shows how the average of the maximum daily temperature in June varies across the UK.

It was hot last night. And hot today. But is this hotter than normal? Is this global warming?

Human beings have a remarkably poor perspective on such questions for two reasons.

  • Firstly we only experience the weather in a single place which may not be representative of a country or region. And certainly not the entire Earth!
  • And secondly, our memory of previous weather is poor. Can you remember whether last winter was warmer or colder than average?

Personally I thought last winter was cold. But it was not.

Another reason to love the Met Office.

The Met Office have created carefully written digests of past weather, with month-by-month summaries.

You can see their summaries here and use links from that page to chase historical month-by-month data for the UK as a whole, or for regions of the country.

Below I have extracted the last 12 months of temperature summaries. Was this what you remembered?

  • May 2017: UK mean temperature was 12.1 °C, which is 1.7 °C above the 1981-2010 long-term average, making it the second warmest May in a series from 1910 (behind 2008).
  • April 2017: UK mean temperature was 8.0 °C, which is 0.6 °C above the 1981-2010 long-term average.
  • March 2017: UK mean temperature was 7.3 °C, which is 1.8 °C above the 1981-2010 long-term average, making it the joint fifth warmest March in a series since 1910.
  • February 2017: UK mean temperature was 5.3 °C, which is 1.6 °C above the 1981-2010 long-term average, making it the ninth warmest February in a series since 1910.
  • January 2017: UK mean temperature was 3.9 °C, which is 0.2 °C above the 1981-2010 long-term average. It was a cold month in the south-east but generally milder than average elsewhere.
  • December 2016: UK mean temperature was 5.9 °C, which is 2.0 °C above the 1981-2010 long-term average, and the eighth warmest December in a series from 1910.
  • November 2016: The UK mean temperature was 4.9 °C, which is 1.3 °C below the 1981-2010 long-term average.
  • October 2016: The UK mean temperature was 9.8 °C, which is 0.3 °C above the 1981-2010 long-term average.
  • September 2016: The UK mean temperature was 14.6 °C, which is 2.0 °C above the 1981-2010 long-term average, making it the equal second warmest September in a series from 1910.
  • August 2016: The UK mean temperature was 15.5 °C, which is 0.6 °C above the 1981-2010 long-term average.
  • July 2016: The UK mean temperature was 15.3 °C, which is 0.2 °C above the 1981-2010 long-term average.
  • June 2016: The UK mean temperature was 13.9 °C, which is 0.9 °C above the 1981-2010 long-term average.
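The pattern in those summaries can be seen at a glance by tabulating just the anomalies. This is a quick sketch using the figures quoted above:

```python
# Monthly UK mean temperature anomalies (degrees C relative to the
# 1981-2010 average), taken from the Met Office summaries listed above
anomalies = {
    "Jun 2016": 0.9, "Jul 2016": 0.2, "Aug 2016": 0.6, "Sep 2016": 2.0,
    "Oct 2016": 0.3, "Nov 2016": -1.3, "Dec 2016": 2.0, "Jan 2017": 0.2,
    "Feb 2017": 1.6, "Mar 2017": 1.8, "Apr 2017": 0.6, "May 2017": 1.7,
}

warmer = [month for month, a in anomalies.items() if a > 0]
mean_anomaly = sum(anomalies.values()) / len(anomalies)

print(f"{len(warmer)} of {len(anomalies)} months above average")
print(f"mean anomaly: {mean_anomaly:+.2f} degrees C")
```

Eleven of the twelve months were above average, and on average the year ran nearly 0.9 °C warm.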

So all but one month in the last year has been warmer than the 1981-2010 long-term average. It is almost as if the whole country were warming up.

But UK mean temperature is not what we feel. Often we remember single hot or cold days.

So I looked up the maximum June temperature recorded in England or Wales for every year of my life.

Each point on the graph below may have occurred for just a day, or for several days, and may have occurred in a different place. But it is broadly indicative of whether there were some ‘very hot days’ in June.

June Maximum Temperatures

The exceptional year of 1976 stands out in the data and in my memory: I was 16. And 2017 is the first June to come close to that year.

But something else stands out too.

  • From 1960 to 1993 – the years up until I was 34 – the maximum June temperature in England and Wales exceeded 30 °C just 6 times i.e. 18% of the years had a ‘very hot day in June’.
  • Since 2001 – the years from age 41 to my present 57 – there were 10 years in which the maximum June temperature in England and Wales exceeded 30 °C i.e. 63% of the years had a ‘very hot day in June’.


  • From 1960 to 1993 there were 6 years when the maximum June temperature fell below 26 °C  i.e. 18% of the years didn’t have any very hot days.
  • Since 2001 the maximum June temperature in England and Wales has always exceeded 26 °C.
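The percentages above are simple fractions of the year counts. A minimal sketch of the arithmetic, assuming the year ranges are inclusive (1960-1993 is 34 years; the ‘since 2001’ period is taken as 2001-2016, 16 years):

```python
# Fraction of years with a 'very hot day' in June (maximum above 30 degrees C),
# using the counts quoted above; year ranges assumed inclusive
early_hot, early_years = 6, 1993 - 1960 + 1    # 1960-1993: 34 years
late_hot, late_years = 10, 2016 - 2001 + 1     # 2001-2016: 16 years

print(f"1960-1993: {100 * early_hot / early_years:.1f}% of Junes")  # ~18%
print(f"2001-2016: {100 * late_hot / late_years:.1f}% of Junes")    # ~63%
```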

Together these data tell us something about our climate – our average weather.

They tell us that weather such as we are experiencing now is normal. But it didn’t used to be: our climate – our average weather – has changed.

Is this global warming?

Broadly speaking, yes. In our new warming world, weather like we are experiencing now is likely to become more common.

More technically, global warming is – obviously – global and requires the measurement of temperatures all around the world. It also refers to climate – the average weather – and not individual weather events. So…

  • The fact that we have had exceptionally hot days this June is not global warming: indeed 1976 was hotter!
  • But the fact that exceptionally hot days in June have become more common is a manifestation of global warming.

P.S. This Met Office page shows all the weather ‘records’ so you can check for when new ‘records’ are likely to be set.

Be kind

May 16, 2017


Dear Reader,

Last week I attended a lunchtime seminar on ‘Imposter Syndrome’.

The specifications of the syndrome seem to be rather broadly drawn, but roughly speaking, it involves ‘successful people’ who are ‘unable to internalise, or feel deserving of, their success’.

The seminar leader had many good quotes, but somehow missed out the genre-defining Groucho Marx quote:


I should have felt more surprised that anyone turned up! But feeling persistent and unquenchable self-doubt is the ideal mental disposition for a person interested in precision metrology.

So it might not surprise you that the session was attended by some of the best scientists at NPL. At least, I think they are some of our best scientists. They might not feel the same way.


I came across the following tale from Neil Gaiman on Twitter.

“… Some years ago I was lucky enough to be invited to a gathering of great and good people: artists and scientists, writers and discoverers of things. And I felt that at any moment they would realise that I didn’t qualify to be there, among these people who had really done things.

On my second or third night there, I was standing at the back of the hall, while a musical entertainment happened, and I started talking to a very nice polite elderly gentleman about several things, including our shared first name. And then he pointed to the hall of people and said words to the effect of, “I just look at all these people and I think, what the heck am I doing here? They’ve made amazing things. I just went where I was sent.”

And I said, “Yes. But you were the first man on the moon. I think that counts for something.”

And so…

It is clear that “Imposter Syndrome” is a common cognitive bias in which people whom most of us would consider “successful” are unable to feel the positivity we imagine would accompany such a designation.

Like most cognitive biases, this is something we can become aware of and transcend.

I am aware that I am ‘successful’. Indeed I am more “successful” than I ever aspired to be.

It would be invidious to list my own ‘successes’. Indeed, I put ‘successes’ in quotation marks, because what I think other people might imagine to be ‘my successes’ feel to me either like ‘good fortune‘ or ‘a narrow escape from failure‘.

I know I ought to feel successful. But that is not what I actually feel.

What I learn from this.

At the height of his Nobel-prize winning powers, Bob Dylan wrote:

There’s no success like failure, and failure’s no success at all

(Taken from the Book of Bob, Subterraneans, 19:65)

What I think his Bob’ness means by this is that the very idea of a person being “successful” is nonsense.

As we each travel the path from our birth to our death, to call people following one path ‘successful’ and people on other paths ‘failures’ would be bizarre.

Compassion for one’s fellow travellers should outweigh any illusion of success or failure.

And thinking of: my colleagues; acquaintances; the more senior and the more junior; the faster and the slower; women and men; even managers. And thinking of all their situations in life, and of how quickly our lives pass, I am reminded of another quotation:

Each person you meet is carrying a heavy load. Be kind. 

Perhaps this should read:

Each person you meet, even apparently successful people, may be carrying a heavy load. Be kind.

Wishing you every success.

With kind thoughts.


Reasons to be cheerful

May 6, 2017

My Grid GB 28 days

When everything feels rubbish, it is sometimes calming to remind oneself that progress in human affairs is possible.

Solar Energy in the UK!

Looking at the MyGrid GB site I notice that the longer brighter days are leading to significant solar power generation every day – the yellow in the figure above and below.

My Grid GB 48 hours

Over the last month, solar power has contributed more than 5% of the UK’s electricity supply. I find this truly astonishing.

The electricity comes from solar panels on people’s rooftops, and from large solar ‘farms’ – which can still be used to graze sheep!

It is clear that solar energy generation is well matched to the demand for electricity – peaking every day at around 1:00 p.m. BST.

Storing the energy

We could certainly generate two or three times as much solar energy as this with relatively low impact. But imagine how cool it would be if we could store some of that energy as it was generated, and then release it exactly when we most needed it.

Over the last year I have noticed that ‘energy storage’ has gone from being ‘a great thing if it existed‘ to ‘a reality on a small but ever-growing scale‘.

Here are five ideas I have seen recently. They don’t have much in common, but I am collecting them together simply to hearten myself.

One of the ideas uses pumped water as an energy storage medium, one compresses air, one uses ice, and another is just a big battery! And one is just a cool idea whose point I don’t quite understand!

Pumping Water

At the Dinorwig power station (earlier blog) water is pumped uphill at night and released at times of peak demand to generate electricity. Dinorwig stores approximately 10 GWh of energy with approximately 75% efficiency – enough to generate 1.8 GW of electricity for approximately 6 hours.

But sites such as Dinorwig are rare. What if the same trick could be done in a more mundane way?

Ars Technica describes a sweet idea in which 30 metre diameter concrete spheres – each containing a pump-generator set – would be placed deep underwater.

Positioned near a wind farm, wind-generated electricity could be used to pump water out of the sphere against the enormous head of a few hundred metres of water. When electricity was required, water could be let back in, generating electricity.

They report that each sphere could generate 5 MW for 4 hours. So a Dinorwig-scale installation would require 500 spheres – which would probably occupy about 1 square kilometre of sea bed.
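The figures quoted above can be checked with some back-of-envelope arithmetic: energy is power multiplied by time, so 10 GWh at 1.8 GW lasts roughly 6 hours, and each 20 MWh sphere is one five-hundredth of a Dinorwig. A quick sketch:

```python
# Back-of-envelope check on the storage figures quoted above
dinorwig_energy_GWh = 10        # stored energy at Dinorwig
dinorwig_power_GW = 1.8         # generating power at Dinorwig

hours = dinorwig_energy_GWh / dinorwig_power_GW
print(f"Dinorwig: {hours:.1f} hours at full power")        # 'approximately 6'

sphere_energy_MWh = 5 * 4       # each sphere: 5 MW for 4 hours = 20 MWh
spheres_needed = dinorwig_energy_GWh * 1000 / sphere_energy_MWh
print(f"Spheres for a Dinorwig-scale store: {spheres_needed:.0f}")  # 500
```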

German Undersea Spheres

Less practically, The Independent have a story describing an artificial island in the North Sea that could form a hub of a renewable energy facility.

North Sea Island

I don’t quite know what the point of the island would be, but I love the sheer chutzpah of the proposal.

Compressing Air

Much more practically, the Hydrostor Terra company have a plan to store energy as compressed air in scalable plants built by a lakeside.

Interestingly – and showing a reassuring contact with reality – they separately store and recover the heat generated when the air is compressed. This is the key to getting a reasonable efficiency.

It would take perhaps a thousand of these systems to create a Dinorwig-scale storage facility. However, because the system is scalable, small systems could be built and put into operation quickly, with the revenue being used to fund the creation of expanded storage over the coming decades.

This ‘scalability’ avoids the need for billions of pounds to be invested up front and is important for demonstrating new technologies.

Ice Batteries

In the here and now, I love this idea of power companies subsidising the purchase of equipment which will lower demand for electricity rather than simply building more capacity.

In this scheme, air conditioning plant is run off-peak to create a store of around 2 cubic metres of ice. The ice is then used to chill air at times of peak demand.
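How much cooling can 2 cubic metres of ice actually hold? A rough estimate using standard values for the density of ice (about 917 kg/m³) and the latent heat of fusion of water (about 334 kJ/kg) – these physical constants are well-established values, not figures from the scheme itself:

```python
# Rough estimate of the cooling energy stored in ~2 cubic metres of ice
volume_m3 = 2.0
density_kg_per_m3 = 917          # density of ice
latent_heat_kJ_per_kg = 334      # latent heat of fusion of water

energy_MJ = volume_m3 * density_kg_per_m3 * latent_heat_kJ_per_kg / 1000
energy_kWh = energy_MJ / 3.6     # 1 kWh = 3.6 MJ

print(f"~{energy_MJ:.0f} MJ, i.e. about {energy_kWh:.0f} kWh of cooling")
```

That is of order 170 kWh of stored cooling per unit – a useful chunk of off-peak electricity shifted away from the afternoon peak.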

In a way this is really ‘demand management’ rather than ‘energy storage’, but it achieves the same effect.

Ice Storage

Tesla Batteries

And finally, the obvious idea of storing electricity in batteries! This story reports on an actual, functioning 80 MWh storage facility in California that can deliver 20 MW of electricity for 4 hours.

It would take around 120 of these installations to create a Dinorwig-scale facility, but because each unit can be built independently, it does not require investment at the same scale and risk as that required to build a Dinorwig.

Energy Storage has arrived

The problem of grid scale energy storage has many solutions, and they are available now using current engineering practices.

My hope is that the growth of energy storage will surprise me in the same way that the growth of solar energy has surprised me.

I hope that one day soon I will look at the chart on MyGrid GB and see that the wind supply is smooth not spiky – and that solar power is supplying electricity after the sun has gone down!
