It’s a shame…

August 2, 2017


Pictured above is the humble grave of James Clerk Maxwell.

By all accounts, he was a kind and humble man, and so in many ways it is an entirely appropriate memorial.

But simple as it is, surely we could show our respect and admiration by as simple an act as mowing the grass? It seems not.

My attention was drawn to the unkempt state of his grave by this article in the Scottish Daily Record.

In death we are all equal.

And I have no doubt that Maxwell himself would have wanted no fuss.

But some people – very few – have led such exceptional lives that it is appropriate for us to collectively mark their mortal remains in a way which shows how much we honour their achievements in life.

This is not an indicator of our belief in any kind of saintliness on their part.

It is rather a statement about us.

It is a statement about what we currently admire and treasure and celebrate.

I have been told that Ren Zhengfei, the founder and President of Huawei Technologies, visited the grave and was embarrassed and shocked.

To neglect the grave of such a monumental figure says something about us.

It is actually a matter of national shame. And while acknowledging that Maxwell was decidedly Scottish, I draw the boundaries of ‘nation-hood’ more widely.

So how great was James Clerk Maxwell?

Maxwell’s many contributions to our modern view of the world are difficult to summarise without being trite, and they span an enormous range. But here are two of his achievements concerning light.


The first colour photograph taken using Maxwell’s prescription. (Credit: Wikipedia)

Having made a breakthrough understanding of the nature of human colour vision, he used that understanding to describe how to take the first colour photograph.


A picture from Wikipedia showing a young James Clerk Maxwell at Trinity College, Cambridge. He is holding one of the colour wheels he used to study colour vision.

Later he became the first person to appreciate that light was an electrical phenomenon.

And the equations he wrote down to describe the nature of light are still those we use today to describe just about all electrical and magnetic phenomena*.

Richard Feynman, the person who made the next step in our understanding of light, said:

“From a long view of the history of mankind — seen from, say, ten thousand years from now — there can be little doubt that the most significant event of the 19th century will be judged as Maxwell’s discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade.”

And Michael de Podesta, the person writing this blog said:

“I named my son after him”

That a true hero should not be honoured in his own land, is a shame on us all.

Surely we could collectively manage to keep the grass on his grave tidy?

———————————————————-

*Note for pedants: In fact the equations we use are a simplified form of Maxwell’s Equations devised by Oliver Heaviside after Maxwell’s tragic early death.

Work Experience

August 2, 2017

Film Crew

 

I had a work experience student with me last week. Let’s call him ‘William’.

On reflection, I am rather concerned about the impression that the “work” he witnessed might have on him.

Firstly

Firstly, everything was very ‘bitty’: it was hard to concentrate on a single task for any period as long as a half day.

And in between explicit tasks, I spent a fair amount of time composing e-mails. That’s right, I said composing, not writing. Because e-mails are generally not simply ‘written’.

For despite the immediacy of the transmission, words in e-mails have to be chosen as carefully as words in a missive that might travel more slowly.

So even though I may appear to be sitting in front of a computer for an hour, I am in fact ‘composing’: plucking words from the vacuum of possibility, and then distilling the raw words to create clear and unambiguous text.

Anyway, I think that bit may have been a bit boring for him.

Secondly

Secondly, although primarily temperature-related, it was extremely diverse.

One activity involved measuring the temperature of the air using our non-contact thermometer and hygrometer (NCTAH).

NCTAH in lab with notes

We set up the experiment in one of NPL’s ultra-stable temperature labs which we normally use for dimensional measurements.

The idea was to compare the temperature indicated by NCTAH with four conventional thermometers. However, while NCTAH operated beautifully, it was the readings of the conventional sensors that I couldn’t understand.

They indicated that objects in the room were hotter than the air in the room by as much as 0.3 °C. Unfortunately I was in a bit of a rush and I was bamboozled by this result. And I am still working on an answer. However I would have liked him to see something simple ‘just work’. Hey, ho.

And finally…

A film crew visited to interview me about the re-definition of the kelvin. They were charming and professional and genuinely interested in the subject.

They shot a long interview one afternoon, and then the next day they must have spent a good two hours filming me walking.

It wasn’t just walking. We spent a fair amount of time opening doors and then walking. Also walking and then opening doors.

Then it was time for a solid 30 minutes of emerging from corridors, and turning into corridors.

I am not sure what I made of the experience, and I am curious to see what the director Ed Watkins will make of the footage. But he and his colleagues seemed happy as they headed off to film at the PTB in Braunschweig, Germany.

And as for what ‘William’ made of it all, I haven’t a clue. It involved quite a lot of just ‘sitting’ and ‘keeping out of shot’.

But I guess he got to see how documentaries are constructed, which might have been the most valuable experience of all.

Would you like to work with me?

July 29, 2017

The Acoustic Thermometry Lab at NPL (Photo by Sam Gibbs: thanks 🙂 )

Friends and colleagues,

  • Do you know anyone who would like to work with me?

In the next few months I expect to be starting some new projects at NPL. And this means that I will not be able to work on my existing projects 😦

So NPL have created the opportunity for someone to work with me to help complete those projects.

  • You can read about the job here.
  • It’s also on the NPL web site here, where it’s described as “Research Or Higher Research Scientist – Temperature & Humidity”, reference 65552.

What’s involved?

Good question. And it is one that is still being decided.

But it would involve working mainly in the acoustic thermometry lab.

Lab Panorama with notes

In acoustic thermometry, the temperature of a gas is inferred from measurements of the speed of sound.
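
For an ideal monatomic gas such as argon, the connection is simple: the speed of sound u obeys u² = γRT/M, so a measured speed of sound implies a temperature. The sketch below is only a rough illustration of the principle, with my own numbers and the simple ideal-gas formula, not the full analysis used in the lab.

```python
# Rough illustration only, not the analysis used in the lab: infer the
# temperature of an ideal monatomic gas (e.g. argon) from the speed of
# sound, using u**2 = gamma * R * T / M  =>  T = M * u**2 / (gamma * R).

R = 8.314462618      # molar gas constant, J/(mol K)
gamma = 5.0 / 3.0    # heat-capacity ratio of a monatomic ideal gas
M_ARGON = 0.039948   # molar mass of argon, kg/mol

def temperature_from_speed_of_sound(u, molar_mass=M_ARGON):
    """Return the ideal-gas temperature (K) implied by a speed of sound u (m/s)."""
    return molar_mass * u**2 / (gamma * R)

# A speed of sound of ~308 m/s in argon corresponds to roughly 0 °C.
print(temperature_from_speed_of_sound(308.0))   # ~273 K
```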

On the left-hand side of the picture is an apparatus that uses a spherical resonator to measure the speed of sound. It is the most accurate thermometer on Earth.

On the right-hand side of the picture is a new apparatus that uses a cylindrical resonator to measure the speed of sound and has been designed to operate up to 700 °C.

The job would involve learning about these techniques but that wouldn’t be the main activity.

Running around the lab is 50 metres of bright yellow tubing that we refer to as ‘an acoustic waveguide’.

By measuring the transmission of sound along the tube it is possible to turn it into a useful thermometer. I hope.

Finding out whether this can be made to work practically would be one part of the job. And testing the same idea in smaller tubes would be another.

Finally, by measuring the speed of sound in air it is possible to measure the temperature of the air and we would like to investigate applications of this technology.

What does the job involve?

Well it will involve learning a lot of new stuff. Typically projects involve:

  • Programming in LabVIEW to control instruments and to acquire and analyse data.
  • Writing spreadsheets, reports and PowerPoint presentations.
  • Keeping track of stuff in a lab book.
  • Using acoustic and optical transducers.
  • Signal processing.
  • Electronics.
  • Mechanical design and construction.
  • Vacuum and gas handling systems – ‘plumbing’.

And lots more. And the chance that someone with those skills will walk through the door is pretty low.

So prior knowledge is great but the key requirement is the mindset to face all those unknown things without letting the bewilderment become overwhelming.

So we are looking for someone with enthusiasm.

Enthusiasm?

Learning new stuff is painful. Especially when it seems endless.

So I couldn’t imagine working with someone who wasn’t enthusiastic about the miracle of physics.

And there is one benefit which isn’t mentioned in the advert.

To cope with the inevitable disappointments and to reward ourselves for our minor successes, our research group has freely available Tunnock’s Caramel Wafers.

Anyway, if this person isn’t you, please do pass on the opportunity to anyone you think might be interested.

The closing date for applications is 28th August 2017.

 

Exactitude and Inexactitude

July 19, 2017

Exactitude and Inexactitude

After being a professional physicist for more than 30 years, I realised the other day that I write for a living.

Yes, I am a physicist, and I still carry out experiments, do calculations and write computer programs.

But at the end of all these activities, I usually end up writing something: a scientific paper; a report; some notes for myself; or a blog article like this.

But although the final ‘output’ of most of what I do is a written communication of some description, nobody ever taught me to write.

I learned to write by reading what I had written. And being appalled.

Appalled by missed words and typographic errors, and by mangled ideas and inappropriate assumptions of familiarity with the subject matter.

Learning to write is a difficult, painful and never-ending process.

And over and over again I am torn between exactitude – which I seek – and inexactitude, which I have learned to tolerate for two reasons.

  • Firstly, a perfect article which is never completed communicates nothing. Lesson one for writing is that finishing is essential.
  • Secondly, an article which has all the appropriate details will be too long and may never be read by the people with whom I seek to communicate.

So in order to communicate optimally, I need to find the appropriate tension between the competing forces of exactitude and inexactitude.

This blog 

When I write for this blog, I try to write articles that are about 500 words long. I rarely succeed.

Typically, I write something. Read it. And then add explanatory text either at the start or at the end.

But with each extra word I type, I realise that fewer and fewer people will read the article and appreciate the clarity of my writing.

And I have to acknowledge that if I had written fewer words I might have communicated something to more people.

Or even communicated more by omitting detail that people might find obfuscatory.

Indeed I have to acknowledge – and this is hard – that I could have even written something erroneous and communicated something to more people.

For example

For example, in the previous article on the GEO600 Gravity Wave detector, I said that “moving a mirror by half a wavelength of light caused the interferometer to change from constructive to destructive interference.”

Now I know what you are thinking: and yes, it only has to move by a quarter of a wavelength of light.

I realised this before I finished the article but it had already taken hours, and I had already recorded the narrative to the movie.

Similarly, my animation showed one of the reflections coming from the wrong side of a piece of glass (!), and it omitted the normal ‘compensator’ plate in the interferometer.

And how many people noticed or complained? None so far.

So the article was published and presumably communicated something, inexactly and slightly incorrectly. And it was not wholly erroneous.

Exactitude and Inexactitude.

Exactitude and Inexactitude are like two mis-matched protagonists in a ‘buddy movie’.

At the start they hate each other, but over the course of ‘a journey’ in which they are compelled to accompany one another, they learn to love each other for what they are, and to accept each other for what they are not.

Inexactitude: You drive me crazy, but I love you.

Gravity Wave Detector#2

July 15, 2017

One arm of the GEO600 detector

After presenting a paper at the European Society of Precision Engineering and Nanotechnology (EUSPEN) in Hannover back in May, I was offered the chance to visit a Gravity Wave Detector. Wow! I jumped at the opportunity!

The visiting delegation were driven in a three-minibus convoy for about 30 minutes, ending up in the middle of a field of cabbages.

After artfully turning around and re-tracing our steps, we found a long, straight, gated track running off the cabbage-field track.

Near the gate was a shed, and alongside the road ran some corrugated sheet covering what looked like a drainage ditch.

These were the only clues that we were approaching one of the most sensitive devices that human beings have ever built: the GEO600 gravity-wave detector (Wikipedia or GEO600 home page).

Even as we drove down the road, the device in ‘the ditch’ was looking for length changes in the 600 metre road of less than one thousandth the diameter of a single proton.

Nothing about how to achieve such sensitivity is obvious. And as my previous article made clear, there have been many false steps along the way.

But even the phenomenal sensitivity of this detector turns out to be not quite good enough to detect the gravity waves from colliding black holes.

In order to detect recent events GEO600 would have to have been between 3 and 10 times more sensitive.

The measuring principle

The GEO600 device as it appears above ground is illustrated in the drone movie above.

It consists of a series of huts and an underground laboratory at the intersection of two 600 metre long ‘arms’.

In the central laboratory, a powerful (30 watt) laser shines light of a single wavelength onto a beam-splitter: a piece of glass with a thin metal coating.

The beam-splitter reflects half the light and transmits the other half, creating two beams which travel at 90° to each other along the two arms of the device.

At the end of the arms, a mirror reflects the light back to the beam-splitter and onto a light detector where the beams re-combine.

Aside from the laser, all the optical components are suspended from anti-vibration mountings inside vacuum tubes about 50 cm in diameter.

When set up optimally, the light traversing the two arms interferes destructively, giving almost zero light signal at the detector.

But a motion of one mirror by half of a wavelength of light (~0.0005 millimetres), will result in a signal going from nearly zero watts (when there is destructive interference) to roughly 30 watts (when there is constructive interference).

So this device – which is called a Michelson Interferometer – senses tiny differences in the path of light in the two arms. These differences might be due to the motion of one of the mirrors, or due to light in one arm being delayed with respect to light in the other arm.

Sensitivity

The basic sensitivity to motion can be calculated (roughly) as follows.

Shifting one mirror by one half a wavelength (roughly 0.0005 millimetres) results in an optical signal increasing from near zero to roughly 30 watts, a sensitivity of around 60,000 watts per millimetre.

Modern silicon detectors can detect perhaps a pico-watt (10⁻¹² watt) of light.

So the device can detect a motion of just

10⁻¹² watts ÷ 60,000 watts per millimetre

or roughly 2 × 10⁻¹⁷ mm, which is about 10⁻²⁰ metres. Or one hundred thousandth the diameter of a proton!

If the beam paths are each 600 metres long then the ability to detect displacements is equivalent to a fractional strain of roughly 10⁻²³ in one beam path over the other.

So GEO600 could, in principle, detect a change in length of one arm compared to the other by a fraction:

0.000 000 000 000 000 000 000 01

There are lots of reasons why this sensitivity is not fully realised, but that is the basic operating principle of the interferometer.
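
For anyone who wants to check the arithmetic, here is the same back-of-the-envelope estimate written out as a short calculation. The numbers are the rough figures quoted above, not GEO600’s actual specification.

```python
# Back-of-the-envelope GEO600 sensitivity, using the rough figures quoted
# above. Illustrative only, not the instrument's actual specification.

full_signal_W = 30.0          # laser power at full constructive interference, watts
half_wavelength_mm = 5e-4     # mirror motion taking the signal from zero to full, ~0.0005 mm
detectable_power_W = 1e-12    # smallest optical power a silicon detector can sense, ~1 picowatt
arm_length_m = 600.0          # length of each interferometer arm, metres

sensitivity_W_per_mm = full_signal_W / half_wavelength_mm       # ~60,000 W/mm
smallest_motion_mm = detectable_power_W / sensitivity_W_per_mm  # ~1.7e-17 mm
smallest_motion_m = smallest_motion_mm * 1e-3                   # ~1.7e-20 m
strain = smallest_motion_m / arm_length_m                       # ~3e-23

print(sensitivity_W_per_mm, smallest_motion_m, strain)
```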

The ‘trick’ is isolation

The scientists running the experiment think that a gravity wave passing through the detector will cause tiny, fluctuating changes in the length of one arm of GEO600 compared with the other arm.

The changes they expect are tiny which is why they made GEO600 so sensitive.

But in the same way that a super-sensitive microphone in a noisy room would just make the noise appear louder, so GEO600 is useless unless it can be isolated from noise and vibrations.

So the ‘trick’ is to place this extraordinarily sensitive ‘microphone’ into an extraordinarily ‘quiet’ environment. This is very difficult.

If one sits in a quiet room, one can slowly become aware of all kinds of noises which were previously present, but of which one was unaware:

  • the sound of the flow of blood in our ears;
  • the sound of the house ‘creaking’;
  • other ‘hums’ of indeterminate origin.

Similarly, GEO600 can ‘hear’ previously unimaginably ‘quiet’ sounds:

  • the ground vibrations of Atlantic waves crashing on the shores of Europe;
  • the atom-by-atom ‘creeping’ of the suspension holding the mirrors.


So during an experiment, the components of GEO600 sit in a vacuum and the mirrors and optical components are suspended from silica (glass) fibres, which are themselves suspended from the end of a spring-on-a-spring-on-a-spring!

In the photograph below, the stainless steel vacuum vessels containing the key components can be seen in the underground ‘hub’ at the intersection of the two arms.

GEO600 Beam Splitter

They are as isolated from the ‘local’ environment as possible.

The output of the detector – the brightness of the light on the detector – is shown live on one of the many screens in the control ‘hut’.

GEO 600 Control Centre

But instead of a graph of ‘brightness versus time’, the signal is shown as a graph of the frequencies of vibration detected by the silicon detector.

Results

The picture below shows a graph of the strain – the difference in length of the two arms – detected at different frequencies.

[Please note the graph is what scientists call ‘logarithmic’. This means that a given distance on either axis corresponds to a constant multiplier. So each group of horizontal lines corresponds to a change in strain by a factor of 10, and the maximum strain shown on the vertical axis is 10,000 times larger than the smallest strain shown.]

Sensitivity Curve

The picture above shows two traces, which share the following key features:

  • The blue curve showed the signal being detected as we watched. The red curve was the best performance of the detector. So the detector was operating close to its optimum.
  • Both curves are large at low frequencies, have a minimum close to 600 Hz, and then rise slowly. This is the background noise of the detector. Ideally they would like this to be about 10 times lower, particularly at low frequencies.
  • Close to the minimum is a large cluster of spikes: these are the natural frequencies of vibration of the mirror suspensions and the other optical components.
  • There are lots of spikes caused by specific noise sources in the environment.

If a gravity wave passed by…

…it would appear as a sudden spike at a particular frequency, and this frequency would then increase, and finally the spike would disappear.

It would be over in less than a second.

And how could they tell it was a gravity wave and not just random noise? Well that’s the second trick: gravity wave detectors hunt in pairs.

The signal from this detector is analysed alongside signals from other gravity wave detectors located thousands of kilometres away.

If the signal came from a gravity wave, then they would expect to see a similar signal in the second detector either just before or just afterwards – within a ‘time window’ consistent with a wave travelling at the speed of light.
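
As a rough worked example (my own illustrative numbers, not the collaboration’s actual analysis window), detectors a few thousand kilometres apart can differ in arrival time by at most their separation divided by the speed of light – around ten milliseconds.

```python
# Rough worked example: the largest possible difference in arrival time at
# two detectors is their separation divided by the speed of light.
# The 3000 km separation below is illustrative, not an actual baseline.

SPEED_OF_LIGHT = 299_792_458.0   # m/s
separation_m = 3_000_000.0       # ~3000 km between detectors

max_delay_s = separation_m / SPEED_OF_LIGHT
print(f"Maximum arrival-time difference: {max_delay_s * 1e3:.1f} ms")   # ~10 ms
```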

Reflections


Because powerful lasers were in use, visitors were obliged to wear laser goggles!

This was the second gravity wave detector I have seen that has never detected a gravity wave.

But I saw this one in the new era in which we now know these waves exist.

People have been actively searching for these waves for roughly 50 years, and I am filled with admiration for the nobility of the researchers who spent their careers searching for gravity waves without ever finding them.

But the collective effect of these decades of ‘failure’ is a collective success: we now know how to ‘listen’ to the Universe in a new way which will probably revolutionise how we look at the Universe in the coming centuries.

A 12-minute Documentary

Gravity Wave Detector#1

July 6, 2017

Not Charlie Chaplin: That’s me and Albert Einstein. A special moment for me. Not so much for him.

I belong to an exclusive club! I have visited two gravity wave detectors in my life.

Neither of the detectors has ever detected gravity waves, but nonetheless, both of them filled me with admiration for their inventors.

Bristol, 1987 

In 1987, the buzz of the discovery of high-temperature superconductors was still intense.

I was in my first post-doctoral appointment at the University of Bristol and I spent many late, late nights ‘cooking’ up compounds and carrying out experiments.

As I wandered around the H. H. Wills Physics department late at night I opened a door and discovered a secret corridor underneath the main corridor.

Stretching for perhaps 50 metres along the subterranean hideout was a high-tech arrangement of vacuum tubing, separated every 10 metres or so by a ‘castle’ of vacuum apparatus.

It lay dormant and dusty and silent in the stillness of the night.

The next day I asked about the apparatus at morning tea – a ritual amongst the low-temperature physicists.

It was Peter Aplin who smiled wryly and claimed ownership. Peter was a kindly antipodean physicist, a generalist – and an expert in electronics.


New Scientist article from 1975

He explained that it was his new idea for a gravity wave detector.

In each of the ‘castles’ was a mass suspended in vacuum from a spring made of quartz.

He had calculated that by detecting ‘ringing’ in multiple masses, rather than in a single mass, he could make a detector whose sensitivity scaled as its Length² rather than as its Length.

He had devised the theory; built the apparatus; done the experiment; and written the paper announcing that gravity waves had not been detected with a new limit of sensitivity.

He then submitted the paper to Physical Review. It was at this point that a referee had reminded him that:

When a term in L² is taken from the left-hand side of the equation to the right-hand side, it changes sign. You will thus find that in your Equation 13, the term in L² will cancel.

And so his detector was not any more sensitive than anyone else’s.

And so…

If it had been me, I think I might have cried.

But as Peter recounted this tale, he did not cry. He smiled and put it down to experience.

Peter was – and perhaps still is – a brilliant physicist. And amongst the kindest and most helpful people I have ever met.

And I felt inspired by his screw-up. Or rather I was inspired by his ability to openly acknowledge his mistake. Smile. And move on.

30 years later…

…I visited GEO600. And I will describe this dramatically scaled-up experiment in my next article.

P.S. (Aplin)

Peter S Aplin wrote a review of gravitational wave experiments in 1972 and had a paper at a conference called “A novel gravitational wave antenna”. Sadly, I don’t have easy access to either of these sources.

 

Talking about the ‘New’ SI

July 3, 2017

I was asked to give a talk about the SI to some visitors tomorrow morning, and so I have prepared some PowerPoint slides.

If you are interested, you can download them using this link (.pptx 13 Mb!): please credit me and NPL if you use them.

But I also experimentally narrated my way through the talk and recorded the result as a movie.

The result is… well, a bit dull. But if you’re interested you can view the results below.

I have split the talk into three parts, which I have called Part 1, Part 2 and Part 3.

Part 1: My System of Units

This 14 minute section is the fun part. It describes a hypothetical system of units which is a bit like the SI, but in which all the units are named after my family and friends.

The idea is to show the structure of any system of units and to highlight some potential shortcomings.

It also emphasises the fact that systems of units are not ‘natural’. They have been created by people to meet our needs.

Part 2: The International System of Units

This 22 minute section – the dullest and most rambling part of the talk – explains the subtle rationale for the changes in the SI upon which we have embarked.

There are two key ideas in this part of the talk:

  • Firstly, there is a description of how the concept of the definition of a unit is separate from the way in which copies of the unit are ‘realised’.
  • And secondly, there is a description of the role of natural constants in the new definitions of the units of the SI.

Part 3: The Kilogram Problem

This 11 minute section is a description of one of the two ways of solving the kilogram problem: the Kibble balance. It has three highlights!

  • It features a description of the balance by none other than Bryan Kibble himself.
  • There is an animation of a Kibble balance which takes just seconds to play but which took hours to create!
  • And there are also some nice pictures of the Mark II Kibble Balance installed in its new home in Canada, including a short movie of the coil going up and down.

Overall

This is all a bit dull, and I apologise. It’s an experiment and please don’t feel obliged to listen to all or any of it.

When I talk to a live audience I hope it will all be a little punchier – and that the 2800 seconds it took to record this will be reduced to something nearer to its target 2100 seconds.

 

 

 

Interregnum

July 2, 2017

SI Units

Welcome to the Interregnum.

At midnight on the 30th June 2017 the world stepped over the threshold into a new domain of metrology.

It is now too late to ever measure the Boltzmann constant or the Planck constant 😦

What do you mean?

Measuring is the process of comparing one thing – the thing you are trying to measure – with a standard, or combination of standards.

So when we measure a speed, we are comparing the speed of an object with the speed of “one metre per one second”.

  • The Boltzmann constant tells us (amongst other things) the amount of energy that a gas molecule possesses at a particular temperature.
  • The Planck constant tells us (amongst other things) the quantum mechanical wavelength of a particle travelling with a steady speed.

To measure these constants we need to make comparisons against our measurement standards of metres, seconds, kilograms and kelvins.

So…

But actually we think that quantities such as the Planck constant are really more constant than any human-conceived standard. That’s why we call them ‘constants’!

And so it seems a bit ‘cart-before-horse’ to compare these ‘truly-constant’ quantities to our inevitably-imperfect ‘human standards’.

Over the last few decades it has become apparent that it would make much more sense if we reversed the direction of comparison.

In this new conception of measurement standards, we would base the length of a metre, the mass of a kilogram and so on, on these truly constant quantities.

And that is what we are doing.

Over the last decade or so, metrologists world-wide have made intense efforts to make the most accurate measurements of these constants in terms of the current definitions of units embodied in the International System of Measurement, the SI.

On July 1st 2017, we entered a transition period – an interregnum – in which scientists will analyse these results.

The analysis is complicated and so for practical reasons, even if new and improved measurements were made, they would not be considered.

If the results are satisfactory the General Conference on Weights and Measures, a high-powered diplomatic meeting, will approve them. And on May 20th 2019 the world will switch to a new system of measurement.

This will be a system of measurement which is scaled to constants of nature that we see around us.

And afterwards?

The values of seven ‘natural constants’, including the Boltzmann constant and the Planck constant, will be fixed.

So previously people placed known masses onto special ‘Kibble balances’ and made an estimate of the Planck constant.

By ‘known masses’ we mean masses that had been compared (directly or indirectly) with the mass of the International Prototype of the Kilogram.

After 20th May 2019, people carrying out the same experiment will already know the value of the Planck constant: we will build our system of measurement on that value.

And so the results of the same experiment will yield an estimate of the mass of the object on the Kibble balance.
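
The heart of that reversal can be sketched very simply. In a Kibble balance the weighing and moving phases combine to give m·g·v = U·I; in the real experiment the voltage U and current I are measured against quantum electrical standards whose values are expressed in terms of the Planck constant. The sketch below is a simplified illustration with made-up numbers, not the actual procedure.

```python
# Simplified sketch of the Kibble balance relation m * g * v = U * I.
# In the real experiment U and I are measured against quantum electrical
# standards expressed in terms of the Planck constant; all numbers below
# are made up for illustration.

g = 9.80665        # local acceleration due to gravity, m/s^2 (illustrative)
v = 0.002          # coil velocity during the 'moving' phase, m/s (illustrative)
U = 1.0            # voltage induced across the coil while moving, volts (illustrative)
I = 0.0196133      # current needed to balance the weight, amps (illustrative)

mass_kg = U * I / (g * v)
print(mass_kg)     # ~1 kg with these made-up numbers
```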

What difference will it make?

At the point of the switch-over it will make no difference whatsoever.

Which begs the question: “Why are you doing this?”

The reason is that these unit definitions form the foundations for measurements in every branch of every science.

And the foundations of every complex structure – be it a building or the system of units – need occasional maintenance.

Such work is often expensive and afterwards there is nothing to show except confidence that the structure will not subside or crack. And that is the aim of this change.

The advances in measurement science over the last century have been staggering. And key developments would have been inconceivable even a few decades before they were made.

Similarly, we anticipate that over future centuries measurement science will continue to improve, presumably in ways that we cannot yet conceive.

By building the most stable foundations of which we can conceive, we are making sure that – to the very best of our ability – scientific advances will not be hindered by drifts or inconsistency in the system of units used to report the results of experiments.

 

What is Life?

June 28, 2017

A pond in the garden of the Royal Trinity Hospice.

On Monday, my good friend Paula Chandler died.

It seems shocking to me that I can even type those words.

She had cancer, and was in a hospice, and her passing was no surprise to her or those who loved her. But it was, and still is, a terrible shock.

It is unthinkable to me that we will never converse again.

How can someone be alive and completely self-aware and witty on Saturday; exchanging texts on Sunday evening; and then simply gone on Monday morning?

Her body was still there, but the essential spark that anyone would recognise as being ‘Paula’, was gone.

As I sat in the garden of the Royal Trinity Hospice, I reflected on a number of things.

And surrounded by teeming beautiful life, the question of “What is Life?” came to my mind. Paula would have been interested in this question.

What is life?

In particular I tried to recall the details of the eponymous book by Addy Pross.

In honesty I can’t recommend the book because it singularly fails to answer the question it sets itself.

In the same way that a book called “How to become rich” might provide an answer for the author but not the reader, so Addy Pross’s book was probably valuable for Addy Pross as he tried to clarify his thoughts. And to that extent the book is worth reading.

Life is ubiquitous on Earth, and after surveying previous authors’ reflections, Addy Pross focuses the question of “What is Life?” at one specific place: the interface between chemistry and biology:

  • In chemistry, reactions run their course blindly and become exhausted.
  • In biology, chemistry seeks out energy sources to maintain what Addy Pross calls a dynamic, kinetic stability.

So how does chemistry ‘become’ biology?

In the same way that a spinning top is stable as long as it spins, or a vortex persists in a flowing fluid, life seems to be a set of chemical reactions which exhibit an ability to ‘keep themselves going’.

What is life?

Re-naming ‘life’ as ‘dynamic kinetic stability’ does not seem to me to be particularly satisfactory.

It doesn’t explain how or why things spontaneously acquire dynamic kinetic stability any more than saying something is alive explains its aliveness.

I do expect that one day someone will answer the question of “What is Life?” in a meaningful technical way.

But for now, as I think about Paula, and the shocking disappearance of her unique dynamic kinetic stability, I am simply lost for words.

Measuring the Boltzmann constant for the last time

June 27, 2017

The gardens of the International Bureau of Weights and Measures (BIPM) in Paris

If you were thinking of measuring the Boltzmann constant, you had better hurry up.

If your research paper reporting your result is not accepted for publication by the end of this Friday 30th June 2017 then you are out of time.

As I write this on the morning of Tuesday 27th June 2017, there are four days to go and one very significant measurement has yet to be published.

====================================
UPDATE: It’s arrived! See the end of the article for details
====================================

What’s going on?

The Boltzmann constant is the conversion factor between mechanical energy and temperature.

Setting to one side my compulsion to scientific exactitude, the Boltzmann constant tells us how many joules of energy we must give to a molecule in order to increase its temperature by one kelvin (or one degree Celsius).
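
To put a number on that (a rough illustrative calculation using the textbook result that the average translational kinetic energy of a gas molecule is (3/2)kT, which is exactly the kind of detail I am setting aside):

```python
# Rough illustrative calculation: the average translational kinetic energy
# of a gas molecule is E = (3/2) * k * T, so each extra kelvin corresponds
# to about 2e-23 joules per molecule.

k_B = 1.380649e-23              # Boltzmann constant, J/K

energy_per_kelvin = 1.5 * k_B   # joules per molecule per kelvin of warming
print(energy_per_kelvin)        # ~2.1e-23 J
```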

At the moment we measure temperatures in terms of other temperatures: we measure how much hotter or colder something is than a special temperature called the Triple Point of Water.

And energy is measured quite separately in joules.

From May 2019 the world’s metrologists plan to change this. We plan to use our best estimate of the Boltzmann constant to define temperature in terms of the energy of molecules.

This represents a fundamental change in our conception of the unit of temperature and of what we mean by ‘one degree’.

In my view, it is a change which is long overdue.

How will this changeover be made?

For the last decade or so, research teams from different countries have been making measurements of the Boltzmann constant.

The aim has been to make measurements with low measurement uncertainty.

Establishing a robust estimate of the measurement uncertainty is difficult and time-consuming.

It involves considering every part of an experiment and then asking two questions. Firstly:

  • “How wrong could this part of the experiment be?”

and secondly:

  • “What effect could this have on the final estimate of the Boltzmann constant?”

Typically working out the effect of one part of an experiment on the overall estimate of the Boltzmann constant might involve auxiliary experiments that may themselves take years.

Finally one constructs a big table (or spreadsheet) in which one adds up all the possible sources of uncertainty to produce an overall uncertainty estimate.
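
The ‘adding up’ is usually done in quadrature (root-sum-of-squares) for contributions that are independent. Here is a toy sketch of such an uncertainty budget; the entries are entirely made up.

```python
# Toy sketch of an uncertainty budget: independent fractional uncertainties
# (in parts per million) combined in quadrature. The entries are made up.

import math

budget_ppm = {
    "speed-of-sound measurement": 0.45,
    "molar mass / isotopic composition": 0.35,
    "temperature of the gas": 0.25,
    "resonator dimensions": 0.30,
}

combined_ppm = math.sqrt(sum(u**2 for u in budget_ppm.values()))
print(f"Combined uncertainty: {combined_ppm:.2f} parts per million")
```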

Every four years, a committee of experts called CODATA critically reviews all the published estimates of fundamental constants made in the last four years and comes up with a set of recommended values.

The CODATA recommendations are a ‘weighted’ average of the published data giving more weight to estimates which have a low measurement uncertainty.
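
A weighted average of this kind is typically an inverse-variance weighted mean, in which each estimate is weighted by one over the square of its uncertainty. Here is a minimal sketch with invented numbers, not the actual CODATA inputs.

```python
# Minimal sketch of an inverse-variance weighted average: estimates with
# lower uncertainty get more weight. The values are invented and are not
# the actual CODATA inputs.

estimates = [            # (value, uncertainty), both in units of 1e-23 J/K
    (1.380651, 0.000001),
    (1.380648, 0.000002),
    (1.380650, 0.000003),
]

weights = [1.0 / u**2 for _, u in estimates]
weighted_mean = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
combined_uncertainty = (1.0 / sum(weights)) ** 0.5

print(weighted_mean, combined_uncertainty)
```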

In order to make their consensus estimate of the value of the Boltzmann constant in good time for the redefinition of the kelvin in 2019, CODATA set a deadline of 1st July 2017 – this coming Saturday.

Only papers which have been accepted for publication – i.e. submitted and refereed – by that date will be considered.

After this date, a new measurement of the link between temperature and molecular energy will be reflected as a change in our temperature scale, not a change in the Boltzmann constant, which will be fixed forever.

The NPL Boltzmann constant estimate.

Professionally and personally, I have spent a decent fraction of the last 10 years working on an estimate of the Boltzmann constant – the official NPL estimate.

To do this we worked out the energy of molecules in a two-step process.

  • We inferred the average speed of argon molecules held at the temperature of the triple point of water using precision measurements of the speed of sound in argon gas.
  • We then worked out the average mass of an argon atom from measurements of the isotopic composition of argon.

Bringing these results together we were able to work out the kinetic energy of argon molecules at the temperature of the triple point of water.
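
In rough symbolic form (a much-simplified sketch, not the actual NPL analysis): for an ideal monatomic gas the speed of sound u satisfies u² = (5/3)kT/m, so measuring u at the exactly-defined triple-point temperature, together with the average mass m of an argon atom, yields an estimate of k.

```python
# Much-simplified sketch (not the actual NPL analysis): estimate the
# Boltzmann constant from the speed of sound in argon at the triple point
# of water, using u**2 = (5/3) * k * T / m for an ideal monatomic gas.

T_TPW = 273.16           # triple point of water, kelvin (exactly defined in the old SI)
N_A = 6.02214076e23      # Avogadro constant, 1/mol
M_ARGON = 0.039948       # molar mass of argon, kg/mol (depends on the isotopic mix)
u = 307.8                # speed of sound in argon at T_TPW, m/s (illustrative value)

m = M_ARGON / N_A                           # average mass of one argon atom, kg
k_estimate = 3.0 * m * u**2 / (5.0 * T_TPW)
print(k_estimate)                           # ~1.38e-23 J/K
```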

When we published our Boltzmann constant estimate in 2013 we estimated that it had a fractional uncertainty of 0.7 parts per million.

Unfortunately it transpired that our estimate was just wrong. Colleagues from around the world helpfully highlighted my mistake. That led to a revised estimate in 2015 with a fractional uncertainty of 0.9 parts per million.

At the time I found this cripplingly humiliating, but as I look at it now, it seems like just a normal part of the scientific process.

The source of my error was in the estimate of the isotopic content of the argon gas we used in our experiment.

Since then I have worked with many colleagues inside and outside NPL to improve this part of the experiment.  And earlier this month we published our final NPL estimate of the Boltzmann constant with a fractional uncertainty of… 0.7 parts per million: back to where we were four years ago!

Our estimate is just one among many from laboratories in the USA, China, Japan, Spain, Italy, France, and Germany.

But at the moment (7:30 a.m. BST on 27th June 2017) the NPL-2017 estimate has the lowest uncertainty of any published value of the Boltzmann constant.


The history of NPL’s recent estimates of the Boltzmann constant. The NPL 2017 estimate of the Boltzmann constant is close to CODATA’s 2014 consensus estimate

The LNE-CNAM Boltzmann constant estimate.

However, my Frieval – i.e. friendly rival – Dr. Laurent Pitre from LNE-CNAM in France reported at a meeting at BIPM last month that he had made an estimate of the Boltzmann constant with a fractional uncertainty of just 0.6 parts per million.

WOW! That’s right. 0.1 parts per million more accurate than the NPL estimate.

Dr. Pitre is a brilliant experimenter and if he has achieved this, I take my hat off to him.

I have been looking daily at this page on the website of the journal Metrologia to see if his paper is there. But as I write, the paper has not yet been accepted for publication!

So after working on this project for 10 years I still don’t know if I will have made the most accurate measurement of the Boltzmann constant ever. Or only the second most accurate.

But I will know for sure in just 4 days’ time.

=========
UPDATE
=========

The article arrived this lunchtime.

New Measurement of the Boltzmann Constant by acoustic thermometry in helium-4 gas

The paper reports a measurement of the Boltzmann Constant with a fractional uncertainty of just 0.6 parts per million.

The  measurements are similar in overall quality to those we published four years ago, but the French team made a crucial advance: they used helium for the measurements rather than argon.

Overall, measurements are technically more difficult in helium gas than in argon. These difficulties arise from the fact that helium isn’t a very dense gas, and so microphones don’t work so well. Additionally, the speed of sound is high – around three times higher than in argon.

But they have put in a lot of work to overcome these difficulties. And there are two rewards.

Their first reward is that by using a liquid helium ‘trap’ they can ensure exceptional gas purity. Their ‘trap’ is a device cooled to 4.2 degrees above absolute zero, a temperature at which every other gas solidifies. This has allowed them to obtain an exceptionally low uncertainty in the determination of the molar mass of the gas.

Their second reward is the most astounding. Critical uncertainties in the experiment originate with measurements of properties of helium gas, such as its compressibility or thermal conductivity.

For helium gas, these properties can be calculated from first principles more accurately than they can be measured. Let me explain.

These calculations assume the known properties of a helium nucleus and that a helium atom has two electrons. Then everything is calculated assuming that the Schrödinger Equation describes the dynamics of the electrons and that electrons and the nucleus interact with each other using Coulomb’s law. That’s it!

  • First the basic properties of the helium atom are calculated.
  • Then the way electric fields affect the atom is calculated.
  • Then the way two helium atoms interact is calculated.
  • And then the way the interaction of two helium atoms is affected if a third atom is nearby.
  • And so on.

Finally, the numbers in the calculation are jiggled about a bit to see how wrong the calculation might be so that the uncertainty of the calculation can be estimated.

In this way, the physical properties of helium gas can be calculated more accurately than they can be measured, and that is the reward that the French team could use to overcome some of their experimental difficulties.

