Posts Tagged ‘Boltzmann Constant’

Everyone makes mistakes

June 16, 2013
A train crash

Somebody has made a terrible mistake. I live in fear of having made even a tiny mistake.

I do love reading about mistakes. At least I love reading about other people’s mistakes. As I read them I comfort myself with the thought that I haven’t yet messed up as badly as that.

I am particularly sensitive on the subject of mistakes at the moment because of my paper on the Boltzmann constant, which claims the most accurate measurement ever made. This is a bold claim, and if I have made a mistake I will look very stupid in front of my colleagues.

So as the date of publication approaches, I don’t feel proud, or relieved. I just feel sick with anxiety. I worry that I have forgotten something obvious. Or something not-so-obvious. I worry that some step in the logic that leads to the estimated value is weaker than I thought. In short, I worry that I have made a mistake.

To be sure, the work has been checked and re-checked, and my co-authors are pretty smart. But there are always errors. This type of work, however, involves a different approach to measurement, one in which the actual value of the thing being measured barely matters. What counts is our estimate of how wrong our answer could be.

Our main result is an estimate of the speed of sound in argon gas in the limit of low pressure. And to get this we need to measure (amongst other things) pressure, the size of the container, and the frequencies of some acoustic resonances at different pressures.
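To give a feel for why the speed of sound matters: for a monatomic gas like argon, the squared speed of sound in the zero-pressure limit is (5/3)·kT/m, where m is the mass of one atom, so measuring it pins down the Boltzmann constant k. Here is a rough sketch of that final step (the sound-speed value below is illustrative, not our result):

```python
import math

N_A = 6.02214076e23   # Avogadro constant, 1/mol
M_AR = 0.039948       # molar mass of argon, kg/mol
GAMMA = 5.0 / 3.0     # heat-capacity ratio for a monatomic gas
T = 273.16            # temperature of the triple point of water, K

def boltzmann_from_sound_speed(c0):
    """Boltzmann constant from the zero-pressure speed of sound in argon."""
    m = M_AR / N_A                     # mass of one argon atom, kg
    return m * c0**2 / (GAMMA * T)

# With an illustrative low-pressure sound speed of about 308 m/s:
k = boltzmann_from_sound_speed(307.8)  # close to 1.38e-23 J/K
```

Of course, the hard part is not this one-line formula but knowing the sound speed, the temperature, and the isotopic composition well enough.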

And how do we know we are right? We don’t. But by measuring the pressure in two different ways we can estimate how wrong we could be. By measuring the size of the container in three different ways we can estimate how wrong we could be. And by estimating the speed of sound from six separate resonances we can estimate how wrong we could be.

The fact that different estimates of a quantity are self-consistent does not mean they are necessarily correct. But it does make it harder for them to be wrong. And if the data are not self-consistent, then we know that something is definitely incorrect.

So the whole experiment has been designed and performed in a way that allows us to estimate, as a precise numerical value, how wrong we could possibly be. And our description of the experiment is written – as far as is possible – in a way which exposes our mistakes.

But in a primitive and superstitious manner I still feel the need to worry about it, even though there is now nothing I can do. So if I have made a mistake, then it is already too late and I will look silly in front of my colleagues. But at least I didn’t drive a train through a wall!

The Boltzmann constant and the age of the Earth

July 23, 2011

Fin Stuart (left) and Darren Mark (right) together with ARGUS (foreground). These three are the 'go-to' team for argon isotope ratio measurements.

I am currently just at the end of a research programme to determine a value of the Boltzmann constant, the constant of nature that determines how much molecular energy (measured in joules) corresponds to what we call ‘one degree’ of temperature. We are close to getting the answer and I will update you shortly. But the experiment has involved learning about things that seem a long way from the original aim, and by chance the measurements we made and our quest for the lowest uncertainty of measurement have ended up helping to improve estimates of the age of the Earth!

This happy coincidence came about because we worked out a value of the Boltzmann constant by measuring the molecular speeds in argon gas. And in order to work out our answer we needed to know how much of each of the three stable isotopes of argon (36Ar, 38Ar, and 40Ar) there was in the particular sample of gas we used. By chance this is exactly what geologists need to know in order to work out how old rocks are. And our quest for low uncertainty led us to the world’s best laboratory: the Argon Isotope Facility at the Scottish Universities Environmental Research Centre.

Their work is amazing and exploits the fact that potassium is commonly present in many rocks on Earth. One isotope of potassium (40K) is radioactively unstable and decays very slowly to yield 40Ar. Half of the 40K decays every 1.25 billion years, so even in the oldest rocks on Earth – around 4 billion years old – there is plenty of 40K left. But over time, as long as the argon cannot escape from the rock, the amount of radiogenic 40Ar increases and the amount of radioactive 40K decreases. Measurements of the relative concentrations of 40Ar and 40K allow clever folk such as Darren Mark and Fin Stuart to work out the age of the rocks!
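The arithmetic of this potassium–argon clock is pleasingly simple once the isotope ratio is known. A minimal sketch (the 1.25-billion-year half-life is from the text; the roughly 10.7% branching fraction – the share of 40K decays that yield 40Ar rather than 40Ca – is a standard value I am adding here):

```python
import math

T_HALF = 1.25e9     # half-life of 40K, in years
BRANCH_AR = 0.107   # approximate fraction of 40K decays that produce 40Ar

def k_ar_age(ar40_per_k40):
    """Age in years from the measured ratio of radiogenic 40Ar
    to the 40K still remaining in the rock."""
    lam = math.log(2) / T_HALF                      # decay constant, 1/years
    return math.log(1 + ar40_per_k40 / BRANCH_AR) / lam

# A rock in which the 40Ar/40K ratio equals the branching fraction
# is exactly one half-life old:
age = k_ar_age(0.107)   # 1.25 billion years
```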

Darren and Fin calibrate the sensitivity of their mass spectrometer (affectionately known as ARGUS) using argon from atmospheric air, and they also need to correct for the amount of atmospheric argon that might have diffused into their samples. And this is where I came in! While I was harassing them, they noticed that if they analysed their results in a slightly different way, they could reduce their uncertainty of measurement (and mine!).

And the moral is… at the start of this research I didn’t have a clue about the measurement of argon isotopes. But now I know that it is primarily through argon isotope measurements that we work out the age of rocks on Earth, and this is really important for our understanding of the Earth’s history. And since fossils are dated from measurements of the rocks in which they are found, this is how we know that the dinosaurs became extinct 65 million years ago and not (say) 23 million years ago. At the end of the project I find that my eyes have been opened to this area of science: although my knowledge has grown, my awareness of what I don’t know has grown even faster! But mostly I am happy and proud that this interaction between metrologists and geologists has helped in a minor way to improve the accuracy of the timescale of events on Earth.

Nearly Over

July 5, 2011

Robin Underwood, myself, and Gavin Sutton just after 'switch off'. The laboratory is genuinely not usually that untidy - we are half-way through a laboratory move.

Roughly four years ago I began planning an experiment to measure the Boltzmann constant, and today, together with my colleagues Robin Underwood and Gavin Sutton, we switched off the experiment. I would love to tell you more about the experiment, and indeed one aim of this blog had been to talk about some of the ups and downs of the experiment as they happened. But in the end, after eating, breathing and sleeping the experiment, I just couldn’t bear to blog it too! So most of this blog has been about other things. But I thought that at this point I would like to record answers to the three things about which I am asked most.

1. What is the Boltzmann Constant? The temperature of an object is a measure of the speed with which the atoms within a substance are moving. So in principle, instead of measuring temperature in degrees Celsius, we could measure temperature in terms of the speed of molecules.

It feels cold today darling, I think the average speed of the air molecules must be only 423 metres per second!

The Boltzmann constant is a measure of how much energy of motion – kinetic energy – of molecules corresponds to one degree Celsius. So measuring the Boltzmann constant allows us to link our normal temperature scale to the fundamental definition of temperature.
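Incidentally, the 423 metres per second quip can be checked. The mean molecular speed of a gas is sqrt(8kT/πm), and taking a rough molar mass for air (my assumption, about 29 g/mol), 423 m/s does indeed correspond to a distinctly chilly day:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol
M_AIR = 0.029        # approximate molar mass of air, kg/mol (assumed)

def mean_speed(t_kelvin):
    """Mean molecular speed, sqrt(8kT/(pi*m)), in m/s."""
    m = M_AIR / N_A                   # mass of an average air molecule, kg
    return math.sqrt(8 * K_B * t_kelvin / (math.pi * m))

v = mean_speed(245)   # about 423 m/s at roughly -28 degrees Celsius
```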

2. So what’s the Answer? I don’t know yet! I’ll tell you in a month or so.

3. How do we know that we’re right? Knowing that our answer is correct is the hardest part of the experiment, and is why the whole thing has taken so long. Along with our estimate of the true value of the Boltzmann constant, we need to produce another number: an estimate of the uncertainty in our estimate. To produce this uncertainty estimate we look at every assumption we make in the experiment and evaluate the extent to which that assumption is true. We do lots of things to check that our answer is correct, but the simplest is that we do the experiment seven different ways! Each way tests our assumptions slightly differently, and by looking at the tiny disagreements between the results from the different ways of doing the experiment, we can estimate how reliable our assumptions were. At this moment we are closing in on the uncertainty value and, without saying exactly what it is, we expect our answer to have an uncertainty estimate of close to (and hopefully less than) one part in a million.

  • One part in a million? If we were estimating a length, we would be able to measure 1 kilometre with an uncertainty of less than 1 millimetre.
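The "tiny disagreements" check can be made concrete: if each of the seven ways gives a value and a claimed uncertainty, the standard test is whether the scatter between the values matches those claimed uncertainties. A minimal sketch of that bookkeeping (the function names are mine, and any numbers fed in would be placeholders, not our data):

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of several results,
    and the standard uncertainty of that mean."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

def reduced_chi_squared(values, sigmas):
    """Near 1: the scatter is consistent with the claimed uncertainties.
    Much larger than 1: some assumption is wrong somewhere."""
    mean, _ = weighted_mean(values, sigmas)
    chi2 = sum(((v - mean) / s)**2 for v, s in zip(values, sigmas))
    return chi2 / (len(values) - 1)
```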

One last thing. This experiment has been the hardest thing I have ever done. I had to learn about entire fields in which I had no prior experience, and in each field I had to work with world-leading experts, typically going from nothing to understanding the limits of the technology in a few months. For example, I had to learn about:

  • Humidity measurements in ultra-dry gases,
  • The technology of mass flow controllers,
  • Precision pressure-measuring technology,
  • Co-ordinate measurement technology for determining the shape of objects with amazingly low uncertainty,
  • Precision grinding technology,
  • Diamond-turning to create objects with ultra-precision surfaces,
  • The calculation of oscillating magnetic and electric fields in unusually-shaped containers,
  • The calculation of microwave fields in unusually-shaped containers,
  • Ultra-precision thermometer calibrations,
  • Measurement of the ratio of argon isotopes in our gas,
  • Measurement of the amount of neon in ultra-pure argon,
  • Pyknometry,
  • Anti-corrosion coatings for copper,
  • Ultra-precision weighing.
And in each field I worked with the many fantastic colleagues who make NPL a national treasure. So I didn’t do all the work in all these areas. Far, far from it. But I had to understand what was happening, understand the consequences for the experiment, read scientific papers, and occasionally, driven by overwhelming anxiety, make a good suggestion or two. In this time I made lots of mistakes, but I also got lots of things right. And now I am really looking forward to finishing the analysis and finding out the uncertainty value. Oh yes, and working out the final estimate of the Boltzmann constant. I will keep you informed.

Cryosat 2: Isn’t precise measurement cool? :-)

April 8, 2010
How Cryosat Works


Precision Measurement 🙂

The BBC had a heartwarming story today about an ‘Ice Explorer’ satellite. The story was meant to be heartwarming because poor Professor Duncan Wingham had already built the satellite once and seen it blown up at launch. Now he watched the rebuilt version lifted to orbit in a mere 16 minutes. However, my heart was warmed not by his triumph over adversity – a typical media take on any endeavour – but by the mission’s reliance on precise electromagnetic measurement. Something my own team is getting quite good at.

As far as I can tell, Cryosat 2 measures sea-ice thickness by measuring the time delay between radar pulses reflected from the top surface of sea ice, and pulses reflected from the sea surface detected in between patches of floating ice. All this while travelling at thousands of kilometres per hour, 700 kilometres above the Earth’s surface. The heart of this measurement is the ability to detect a time delay corresponding to radar waves travelling an additional 50 centimetres or so. In order to resolve this extra distance with a resolution of a centimetre or so, the satellite must be able to achieve a timing discrimination of just 30 picoseconds. WOW! At a height of 700 km, a centimetre represents just one part in 70 million of the travel time. Very clever.
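The timing arithmetic is easy to verify. A quick sketch, assuming (as I have here) that the quoted figure refers to the one-way travel time of the extra centimetre:

```python
C = 299_792_458.0   # speed of light, m/s

def travel_time(distance_m):
    """One-way travel time for a radar pulse, in seconds."""
    return distance_m / C

dt_1cm = travel_time(0.01)     # about 33 picoseconds per centimetre
t_orbit = travel_time(700e3)   # about 2.3 milliseconds from orbit height
ratio = t_orbit / dt_1cm       # 70 million: one part in 70 million
```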

The NPL Boltzmann Team with the 'Cranberry 2' cavity resonator.


Very clever, but actually we have done something not too dissimilar. As part of our efforts to determine an accurate value for the Boltzmann constant, we have recently worked out the diameter of a 120 mm diameter spherical resonator with an uncertainty of just ±10 nanometres. This is a measurement uncertainty of roughly 1 part in 10 million. Not quite as good as Cryosat, but this is an absolute measurement – something Cryosat doesn’t need to do. And we can detect changes at a level around 10 times better than this!

Our latest triumph has been to detect the effect of the antennae we use to make our measurements! This is quite a trick. To measure the diameter we insert two small antennae (just tiny straight pieces of wire) into our sphere, which send and receive microwave signals. We send in different frequencies of microwaves and work out the diameter from the frequencies at which the microwaves bounce backwards and forwards most strongly within the sphere. However, all kinds of tiny defects have a small effect on the result. We can measure these, but how do we detect the effect of the probes we are using to make the measurement? If we remove the probes, then we can’t measure anything! The trick for achieving this is subtle and depends on a profound understanding of what is happening inside the sphere. We have been able to determine that our probes change our estimate of the radius by around 2 parts in a million – way more than anyone else had ever assumed. In other words, the probes we use to measure the diameter change our estimate of the diameter by around 240 nanometres. We are able to detect this and correct for it with an uncertainty of only around 10 nanometres.
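The reason a frequency shift translates so directly into nanometres is that for an ideal perfectly conducting sphere, each microwave resonance frequency scales inversely with the radius: f = c·z/(2πa), where z is a dimensionless constant for that mode. So a fractional perturbation of a resonance frequency maps one-for-one onto a fractional error in the size, which is how 2 parts in a million of a 120 mm diameter becomes 240 nanometres. A sketch (the mode constant in the test is illustrative):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def radius_from_resonance(f_hz, mode_constant):
    """Radius of an ideal spherical microwave cavity from one resonance,
    a = c * z / (2 * pi * f), where z is the dimensionless mode constant."""
    return C * mode_constant / (2 * math.pi * f_hz)

# Fractional shift maps directly to length: a 2-parts-per-million
# perturbation of a 120 mm diameter is 240 nanometres.
probe_shift = 2e-6 * 0.120   # in metres
```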

So What?

I would bet money that some of NPL’s advanced metrology is somewhere inside Cryosat 2. I don’t know how or where. But I would also bet money that some of the metrology my team and I are developing in our project will be inside something equally cool and clever in 10 years’ time. But at the moment I just don’t know what!
