Archive for the ‘Cosmology’ Category

The James Webb Space Telescope

May 10, 2018

Last week I was on holiday in Southern California. Lucky me.

Lucky me indeed. During my visit I had – by extreme good fortune – the opportunity to meet with Jon Arenberg – former engineering director of the James Webb Space Telescope (JWST).

And by even more extreme good fortune I had the opportunity to speak with him while overlooking the JWST itself – held upright in a clean room at the Northrop Grumman campus in Redondo Beach, California.

[Sadly, photography was not allowed, so I will have to paint you a picture in words and use some stock images.]


In case you don’t know, the JWST will be the successor to the Hubble Space Telescope (HST), and has been designed to exceed the operational performance of the HST in two key areas.

  • Firstly, it is designed to gather more light than the HST. This will allow the JWST to see very faint objects.
  • Secondly, it is designed to work better with infrared light than the HST. This will allow the JWST to see objects whose light has been extremely red-shifted from the visible.

A full-size model of the JWST is shown below and it is clear that the design is extraordinary, and at first sight, rather odd-looking. But the structure – and much else besides – is driven by these two requirements.

JWST and people

Requirement#1: Gather more light.

To gather more light, the main light-gathering mirror in the JWST is 6.5 metres across rather than just 2.4 metres in the HST. That means it gathers around 7 times more light than the HST and so can see fainter objects and produce sharper images.
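As a back-of-envelope check, light-gathering power scales with mirror area. Here is a minimal Python sketch, taking the HST primary as 2.4 metres and ignoring central obstructions and segment gaps:

```python
import math

def collecting_area(diameter_m: float) -> float:
    """Area of a filled circular mirror of the given diameter, in m^2."""
    return math.pi * (diameter_m / 2) ** 2

jwst = collecting_area(6.5)  # JWST primary mirror diameter
hst = collecting_area(2.4)   # HST primary mirror diameter

print(f"JWST gathers ~{jwst / hst:.1f} times more light than HST")  # ~7.3
```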


Image courtesy of Wikipedia

But to launch a mirror this size from Earth on a rocket, the mirror must be able to fold up for launch. This is why the mirror is made in hexagonal segments.

To cope with the alignment requirements of a folding mirror, the mirror segments have actuators to enable fine-tuning of the shape of the mirror.

To reduce the weight of such a large mirror it had to be made of beryllium – a highly toxic metal which is difficult to machine. It is however 30% less dense than aluminium and also has a much lower coefficient of thermal expansion.

The ‘deployment’ or ‘unfolding’ sequence of the JWST is shown below.

Requirement#2: Improved imaging of infrared light.

The wavelength of visible light varies from roughly 0.000 4 mm for light which elicits the sensation we call violet, to 0.000 7 mm for light which elicits the sensation we call red.

Light with a wavelength longer than 0.000 7 mm does not elicit any visible sensation in humans and is called ‘infrared’ light.

Imaging so-called ‘near’ infrared light (with wavelengths from 0.000 7 mm to 0.005 mm) is relatively easy.

Hubble can ‘see’ at wavelengths as long as 0.002 5 mm. To achieve this, the detector in HST was cooled. But to work at longer wavelengths the entire telescope needs to be cold.

This is because every object emits infrared light and the amount of infrared light it emits is related to its temperature. So a warm telescope ‘glows’ and offers no chance to image dim infrared light from the edge of the universe!

The JWST is designed to ‘see’ at wavelengths as long as 0.029 mm – 10 times longer wavelengths than the HST – and that means that typically the telescope needs to be on the order of 10 times colder.
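That rough scaling follows from Wien's displacement law: the wavelength at which a body glows most strongly is inversely proportional to its temperature. A sketch using the standard value of Wien's constant; to image at 0.029 mm, the telescope must sit well below the temperature at which it would itself glow at that wavelength:

```python
WIEN_B_MM_K = 2.898  # Wien's displacement constant, in mm·kelvin

def peak_temperature(wavelength_mm: float) -> float:
    """Temperature (K) at which blackbody emission peaks at this wavelength."""
    return WIEN_B_MM_K / wavelength_mm

print(peak_temperature(0.0025))  # HST's longest wavelength: ~1160 K
print(peak_temperature(0.029))   # JWST's longest wavelength: ~100 K
```

Ten times the wavelength means roughly one tenth the temperature, just as the text says.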

To cool the entire telescope requires a breathtaking – but logical – design. There were two parts to the solution.

  • The first part involved the design of the satellite itself.
  • The second part involved the positioning of the satellite.

Cooling the telescope part#1: design

The telescope and detectors were separated from the rest of the satellite that contains elements such as the thrusters, cryo-coolers, data transmission equipment and solar cells. These parts need to be warm to operate correctly.

The telescope is separated from the ‘operational’ part of the satellite with a sun-shield roughly the size of a tennis court. When shielded from the Sun, the telescope is exposed to the chilly universe, and cooled gas from the cryo-coolers cools some of the detectors to just a few degrees above absolute zero.

Cooling the telescope part#2: location

The HST is only 300 miles or so from Earth, and orbits every 97 minutes. It travels into and out of full sunshine on each orbit. This type of orbit is not compatible with keeping a gigantic telescope cold.

So the second part of the cooling strategy is to position the JWST approximately 1 million miles from Earth at a location known as the second Lagrange point L2.

At L2 the gravitational attraction of the Sun is approximately 30 times greater than the gravitational attraction of the Earth and Moon.

At L2 the satellite orbits the Sun in a period of one year – and so stays in the same position relative to the Earth.

  • The advantage of orbiting at L2 is that the satellite can maintain the same orientation with respect to the Sun for long periods. And so the sun-shade can shield the telescope very effectively, allowing it to stay cool.
  • The disadvantage of orbiting at L2 is that it is beyond the orbit of the moon and no manned space-craft has ever travelled so far from Earth. So once launched, there is absolutely no possibility of a rescue mission.
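For the curious, both the distance to L2 and the ‘30 times’ figure can be estimated from textbook values, using the standard restricted three-body approximation r ≈ R(M⊕/3M☉)^(1/3). This is a rough sketch which ignores the Moon:

```python
AU_M = 1.496e11          # mean Earth-Sun distance in metres
MASS_RATIO = 3.003e-6    # Earth's mass divided by the Sun's mass

# Distance from Earth to L2 (restricted three-body approximation)
r_l2 = AU_M * (MASS_RATIO / 3) ** (1 / 3)
print(f"L2 is ~{r_l2 / 1.609e9:.2f} million miles from Earth")  # ~0.93

# How much stronger the Sun's pull is than the Earth's at L2
ratio = (1 / MASS_RATIO) * (r_l2 / (AU_M + r_l2)) ** 2
print(f"The Sun pulls ~{ratio:.0f} times harder than the Earth at L2")
```

The Earth alone gives a factor of about 33; including the Moon nudges the answer towards the ‘approximately 30’ quoted above.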

The most expensive object on Earth?

I love the concept of the JWST. At an estimated cost of $8 billion, if this is not the most expensive single object on Earth, then I would be interested to know what is.

But it has not been created to make money or as an act of aggression.

Instead, it has been created to answer the simple question:

“I wonder what we would see if we looked into deep space at infrared wavelengths?”

Ultimately, we just don’t know until we look.

In a year or two, engineers will place the JWST on top of an Ariane rocket and fire it into space. And the most expensive object on Earth will then – hopefully – become the most expensive object in space.

Personally I find the mere existence of such an enterprise a bastion of hope in a world full of worry.


Many thanks to Jon Arenberg and Stephanie Sandor-Leahy for the opportunity to see this apogee of science and engineering.


Breathtaking photographs are available in galleries linked from this page.


Gravity Wave Detector#2

July 15, 2017

GEO600 One arm


After presenting a paper at the European Society of Precision Engineering and Nanotechnology (EUSPEN) in Hannover back in May, I was offered the chance to visit a Gravity Wave Detector. Wow! I jumped at the opportunity!

The visiting delegation were driven in a three-minibus convoy for about 30 minutes, ending up in the middle of a field of cabbages.

After artfully turning around and re-tracing our steps, we found a long, straight, gated track running off the cabbage-field track.

Near the gate was a shed, and alongside the road ran some corrugated sheet covering what looked like a drainage ditch.

These were the only clues that we were approaching one of the most sensitive devices that human beings have ever built: the GEO600 gravity-wave detector (Wikipedia, or the GEO600 home page).

Even as we drove down the road, the device in ‘the ditch’ was looking for changes in the length of its 600 metre arms of less than one thousandth the diameter of a single proton.

Nothing about how to achieve such sensitivity is obvious. And as my previous article made clear, there have been many false steps along the way.

But even the phenomenal sensitivity of this detector turns out to be not quite good enough to detect the gravity waves from colliding black holes.

To have detected the recent events, GEO600 would have needed to be between 3 and 10 times more sensitive.

The measuring principle

The GEO600 device as it appears above ground is illustrated in the drone movie above.

It consists of a series of huts and an underground laboratory at the intersection of two 600 metre long ‘arms’.

In the central laboratory, a powerful (30 watt) laser shines light of a single wavelength onto a beam-splitter: a piece of glass with a thin metal coating.

The beam-splitter reflects half the light and transmits the other half, creating two beams which travel at 90° to each other along the two arms of the device.

At the end of the arms, a mirror reflects the light back to the beam-splitter and onto a light detector where the beams re-combine.

Aside from the laser, all the optical components are suspended from anti-vibration mountings inside vacuum tubes about 50 cm in diameter.

When set up optimally, the light traversing the two arms interferes destructively, giving almost zero light signal at the detector.

But a motion of one mirror by half of a wavelength of light (~0.0005 millimetres), will result in a signal going from nearly zero watts (when there is destructive interference) to roughly 30 watts (when there is constructive interference).

So this device – which is called a Michelson Interferometer – senses tiny differences in the path of light in the two arms. These differences might be due to the motion of one of the mirrors, or due to light in one arm being delayed with respect to light in the other arm.


The basic sensitivity to motion can be calculated (roughly) as follows.

Shifting one mirror by half a wavelength (roughly 0.0005 millimetres) results in an optical signal increasing from near zero to roughly 30 watts: a sensitivity of around 60,000 watts per millimetre.

Modern silicon detectors can detect perhaps a pico-watt (10⁻¹² watt) of light.

So the device can detect a motion of just

10⁻¹² watts ÷ 60,000 watts per millimetre

or roughly 2 × 10⁻¹⁷ mm, which is 2 × 10⁻²⁰ metres. Or one hundred thousandth the diameter of a proton!

If the beam paths are each 600 metres long then the ability to detect displacements is equivalent to a fractional strain of roughly 10⁻²³ in one beam path over the other.

So GEO600 could, in principle, detect a change in length of one arm compared to the other by a fraction:

0.000 000 000 000 000 000 000 01

There are lots of reasons why this sensitivity is not fully realised, but that is the basic operating principle of the interferometer.
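The arithmetic above can be laid out step by step, with all the values taken from the text:

```python
laser_power_w = 30.0          # full signal swing: from ~0 W to ~30 W
half_wavelength_mm = 0.0005   # mirror motion giving that full swing
min_detectable_w = 1e-12      # smallest light change a silicon detector can see
arm_length_m = 600.0

# Sensitivity in watts of optical signal per millimetre of mirror motion
sensitivity_w_per_mm = laser_power_w / half_wavelength_mm  # 60,000 W/mm

# Smallest detectable mirror motion, in mm and then metres
min_motion_mm = min_detectable_w / sensitivity_w_per_mm    # ~1.7e-17 mm
min_motion_m = min_motion_mm / 1000.0                      # ~1.7e-20 m

# Expressed as a strain: fractional length change of one 600 m arm
strain = min_motion_m / arm_length_m                       # ~2.8e-23
print(f"motion ~{min_motion_m:.1e} m, strain ~{strain:.1e}")
```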

The ‘trick’ is isolation

The scientists running the experiment think that a gravity wave passing through the detector will cause tiny, fluctuating changes in the length of one arm of GEO600 compared with the other arm.

The changes they expect are tiny which is why they made GEO600 so sensitive.

But in the same way that a super-sensitive microphone in a noisy room would just make the noise appear louder, so GEO600 is useless unless it can be isolated from noise and vibrations.

So the ‘trick’ is to place this extraordinarily sensitive ‘microphone’ into an extraordinarily ‘quiet’ environment. This is very difficult.

If one sits in a quiet room, one can slowly become aware of all kinds of noises which were previously present, but of which one was unaware:

  • the sound of the flow of blood in our ears;
  • the sound of the house ‘creaking’;
  • other ‘hums’ of indeterminate origin.

Similarly, GEO600 can ‘hear’ previously unimaginably ‘quiet’ sounds:

  • the ground vibrations of Atlantic waves crashing on the shores of Europe;
  • the atom-by-atom ‘creeping’ of the suspension holding the mirrors.


So during an experiment, the components of GEO600 sit in a vacuum and the mirrors and optical components are suspended from silica (glass) fibres, which are themselves suspended from the end of a spring-on-a-spring-on-a-spring!

In the photograph below, the stainless steel vacuum vessels containing the key components can be seen in the underground ‘hub’ at the intersection of the two arms.

GEO600 Beam Splitter

They are as isolated from the ‘local’ environment as possible.

The output of the detector – the brightness of the light falling on it – is shown live on one of the many screens in the control ‘hut’.

GEO 600 Control Centre

But instead of a graph of ‘brightness versus time’, the signal is shown as a graph of the frequencies of vibration detected by the silicon detector.


The picture below shows a graph of the strain – the difference in length of the two arms – detected at different frequencies.

[Please note the graph is what scientists call ‘logarithmic’. This means that a given distance on either axis corresponds to a constant multiplier. So each group of horizontal lines corresponds to a change in strain by a factor of 10, and the maximum strain shown on the vertical axis is 10,000 times larger than the smallest strain shown.]

Sensitivity Curve

The picture above shows two traces, with several key features:

  • The blue curve showed the signal being detected as we watched; the red curve showed the best recorded performance of the detector. So the detector was performing close to its best.
  • Both curves are large at low frequencies, have a minimum close to 600 Hz, and then rise slowly. This is the background noise of the detector. Ideally they would like this to be about 10 times lower, particularly at low frequencies.
  • Close to the minimum is a large cluster of spikes: these are the natural frequencies of vibration of the mirror suspensions and the other optical components.
  • There are lots of spikes caused by specific noise sources in the environment.

If a gravity wave passed by…

…it would appear as a sudden spike at a particular frequency, and this frequency would then increase, and finally the spike would disappear.

It would be over in less than a second.

And how could they tell it was a gravity wave and not just random noise? Well that’s the second trick: gravity wave detectors hunt in pairs.

The signal from this detector is analysed alongside signals from other gravity wave detectors located thousands of kilometres away.

If the signal came from a gravity wave, then they would expect to see a similar signal in the second detector either just before or just afterwards – within a ‘time window’ consistent with a wave travelling at the speed of light.
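That time window is simply the light-travel time between the detectors. A sketch, using an assumed separation of 8,000 km (roughly the distance from Hannover to a detector in the western United States) purely for illustration:

```python
SPEED_OF_LIGHT_M_S = 2.998e8  # speed of light in metres per second

def max_delay_ms(separation_km: float) -> float:
    """Longest possible arrival-time difference between two detectors, in ms."""
    return separation_km * 1000.0 / SPEED_OF_LIGHT_M_S * 1000.0

# A gravity wave can arrive at the two sites at most ~27 ms apart
print(f"{max_delay_ms(8000):.0f} ms")
```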


Because powerful lasers were in use, visitors were obliged to wear laser goggles!

This was the second gravity wave detector I have seen that has never detected a gravity wave.

But this visit took place in the new era in which we know these waves exist.

People have been actively searching for these waves for roughly 50 years, and I am filled with admiration for the nobility of the researchers who spent their careers searching without ever finding gravity waves.

But the collective effect of these decades of ‘failure’ is a collective success: we now know how to ‘listen’ to the Universe in a new way which will probably revolutionise how we look at it in the coming centuries.

A 12-minute Documentary

Gravity Wave Detector#1

July 6, 2017
Me and Albert Einstein

Not Charlie Chaplin: That’s me and Albert Einstein. A special moment for me. Not so much for him.

I belong to an exclusive club! I have visited two gravity wave detectors in my life.

Neither of the detectors has ever detected gravity waves, but nonetheless, both of them filled me with admiration for their inventors.

Bristol, 1987 

In 1987, the buzz of the discovery of high-temperature superconductors was still intense.

I was in my first post-doctoral appointment at the University of Bristol and I spent many late, late nights ‘cooking’ up compounds and carrying out experiments.

As I wandered around the H. H. Wills Physics department late at night I opened a door and discovered a secret corridor underneath the main corridor.

Stretching for perhaps 50 metres along the subterranean hideout was a high-tech arrangement of vacuum tubing, separated every 10 metres or so by a ‘castle’ of vacuum apparatus.

It lay dormant and dusty and silent in the stillness of the night.

The next day I asked about the apparatus at morning tea – a ritual amongst the low-temperature physicists.

It was Peter Aplin who smiled wryly and claimed ownership. Peter was a kindly antipodean physicist, a generalist – and an expert in electronics.

New Scientist article from 1975


He explained that it was his new idea for a gravity wave detector.

In each of the ‘castles’ was a mass suspended in vacuum from a spring made of quartz.

He had calculated that by detecting ‘ringing’ in multiple masses, rather than in a single mass, he could make a detector whose sensitivity scaled as its Length² rather than as its Length.

He had devised the theory; built the apparatus; done the experiment; and written the paper announcing that gravity waves had not been detected with a new limit of sensitivity.

He then submitted the paper to Physical Review. It was at this point that a referee reminded him that:

When a term in L² is taken from the left-hand side of the equation to the right-hand side, it changes sign. You will thus find that in your Equation 13, the term in L² will cancel.

And so his detector was not any more sensitive than anyone else’s.

And so…

If it had been me, I think I might have cried.

But as Peter recounted this tale, he did not cry. He smiled and put it down to experience.

Peter was – and perhaps still is – a brilliant physicist. And amongst the kindest and most helpful people I have ever met.

And I felt inspired by his screw up. Or rather I was inspired by his ability to openly acknowledge his mistake. Smile. And move on.

30 years later…

…I visited GEO600. And I will describe this dramatically scaled-up experiment in my next article.

P.S. (Aplin)

Peter S Aplin wrote a review of gravitational wave experiments in 1972 and presented a paper at a conference called “A novel gravitational wave antenna”. Sadly, I don’t have easy access to either of these sources.


How would you take a dinosaur’s temperature?

March 15, 2017
A tooth from a Tyrannosaurus rex.


Were dinosaurs warm-blooded or cold-blooded?

That is an interesting question. And one might imagine that we could infer an answer by looking at fossil skeletons and drawing inferences from analogies with modern animals.

But with dinosaurs all being dead these last 66 million years or so, a direct temperature measurement is obviously impossible.

Or so I thought until earlier today when I visited the isotope facilities at the Scottish Universities Environmental Research Centre in East Kilbride.

There they have a plan to make direct physical measurements on dinosaur remains, and from these measurements work out the temperature of the dinosaur during its life.

Their cunning three-step plan goes like this:

  1. Find some dinosaur remains: They have chosen to study the teeth from tyrannosaurs because it transpires that there are plenty of these available and so museums will let them carry out experiments on samples.
  2. Analyse the isotopic composition of carbonate compounds in the teeth. It turns out that the detailed isotopic composition of carbonates changes systematically with the temperature at which the carbonate was formed. Studying the isotopic composition of the carbon dioxide gas given off when the teeth are dissolved reveals that subtle change in carbonate composition, and hence the temperature at which the carbonate was formed.
  3. Study the ‘formation temperature’ of the carbonate in dinosaur teeth discovered in a range of different climates. If dinosaurs were cold-blooded, (i.e. unable to control their own body temperature) then the temperature ought to vary systematically with climate. But if dinosaurs were warm-blooded, then the formation temperature should be the same no matter where they lived (in the same way that human body temperature doesn’t vary with latitude).
A ‘paleo-thermometer’

I have written out the three step plan above, and I hope it sort of made sense.
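The logic of step 3 can be sketched as a toy analysis in Python. Every number below is invented purely for illustration – real formation temperatures, localities and decision thresholds would come from the isotope measurements themselves:

```python
# Hypothetical (tooth formation temperature °C, local climate °C) pairs for
# teeth found at different latitudes -- every number invented for illustration.
samples = [(36.5, 10.0), (37.1, 18.0), (36.8, 25.0), (37.0, 30.0)]

formation_temps = [t for t, _ in samples]
spread = max(formation_temps) - min(formation_temps)

# A warm-blooded animal regulates its body temperature, so formation
# temperature should barely vary with climate; a cold-blooded animal's
# should track it. The 2 °C threshold is arbitrary, purely for illustration.
verdict = "warm-blooded" if spread < 2.0 else "cold-blooded"
print(verdict)  # prints "warm-blooded" for this invented data
```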

So contrary to what I said at the start of this article, it is possible – at least in principle – to measure the temperature of a dinosaur that died at least 66 million years ago.

But in fact work like this is right on the edge of ‘the possible’. It ought to work. And the people doing the work think it will work.

But the complexities of the measurement in Step 2 appeared to me to be so many that it must be possible that it won’t work. Or not as well as hoped.

However I don’t say that as a criticism: I say it with admiration.

To be able to even imagine making such a measurement seems to me to be on a par with measuring the cosmic microwave background, or gravitational waves.

It involves stretching everything we can do to its limits and then studying the faint structures and patterns that we detect. Ghosts from the past, whispering to us through time.

I was inspired.


Thanks to Adrian Boyce and Darren Mark for their time today, and apologies to them both if I have mangled this story!

Cosmic rays surprise us again

April 7, 2013
The Alpha Magnetic Spectrometer (AMS-02) being tested at CERN by being exposed to a beam of positrons. (Picture from Wikipedia)

[Text and figures updated on April 9th 2013 due to insight from Ryan Nichol: Thanks]

The team running the Alpha Magnetic Spectrometer (AMS-02) have produced their first set of results. And as expected, they are full of surprises.

AMS-02 is an awesomely complex device – too power-hungry, heavy and complex to be placed on its own space platform, it was attached to the International Space Station 18 months ago on the last space shuttle mission. I wrote about this here.

It has 650 separate microprocessors, 1,118 temperature sensors and 298 active thermostatically-controlled heaters. It is basically a general-purpose particle detector like those found at CERN, and represents the culmination of nearly one hundred years of ‘fishing for particles’ in the high atmosphere.

  • First we flew balloons and found that ‘radiation levels’ increased as we went higher.
  • Then we discovered a ‘zoo’ of particles not yet observed on Earth – positrons, muons, pions, and anti-protons.
  • Then we discovered that ‘cosmic rays’ were not ‘rays’ but particles. And we realised that at the Earth’s surface we only observed the debris of collisions of ‘cosmic ray’ particles with the atoms in the upper atmosphere.

Where did these primary cosmic ray particles come from? What physical process accelerated them? Why did they have the range of energies that we observed? What were they? Protons? Electrons? Positrons? We just didn’t know. The AMS-02 was sent up to answer these questions.

I have found much of the comment on the results incomprehensible (BBC Example) with the discussion being exclusively focussed on ‘dark matter’.  So I thought I would try to summarise the results as I see them based on reading the original paper.

Over the last 18 months (roughly 50 million seconds) AMS-02 has observed 25 billion ‘events’ (roughly 500 per second). However, the results they report concern only a tiny fraction of these events – around 6.8 million observations of positrons or electrons believed to be ‘primary’ – coming straight from outer space.

  • They found that – as is usual for cosmic rays – there were fewer and fewer particles with high energies (Figure 1 below)
  • Looking at just the electrons and positrons (i.e. ignoring the protons and other particles they observed), there were only about 10% as many positrons as electrons, but the exact fraction changed with energy (see Figure 2 below).
  • They found that there were no ‘special’ energies – the spectrum was smooth.
  • They observed that the particles came uniformly from all directions – variations of greater than 4% were judged very unlikely.
  • The electron and positron fluxes followed nearly the same ‘power law’ i.e. the number of particles observed with a given energy changes in nearly the same way – indicating that they probably have the same source.
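To unpack ‘power law’: the number of particles at energy E falls roughly as E raised to some negative power, so the spectrum appears as a straight line on a log-log plot. A minimal sketch, with the spectral index of 3 chosen arbitrarily for illustration:

```python
import math

GAMMA = 3.0  # spectral index: chosen arbitrarily for this illustration

def flux(energy_gev: float) -> float:
    """Relative number of particles at a given energy, for a power-law spectrum."""
    return energy_gev ** -GAMMA

# The slope between any two points on a log-log plot recovers -GAMMA:
e1, e2 = 10.0, 100.0
slope = (math.log(flux(e2)) - math.log(flux(e1))) / (math.log(e2) - math.log(e1))
print(f"log-log slope = {slope:.1f}")  # a smooth line, with no 'special' energies
```

Two sources with the same slope, as found for the electrons and positrons here, plausibly share a common origin.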

They conclude very modestly that the detailed observation of this positron ‘spectrum’ demonstrates…

“…the existence of new physical phenomena, whether from a particle physics or astrophysical origin.”

I like this experiment because it represents a new way to observe the Universe – and our observations of the Universe have always surprised us. Observations have the power to puncture the vast bubbles of speculation and fantasy that constitute much of cosmology. I am sure that over the 20 year lifetime of this experiment, AMS-02 will surprise us again and again.


Figure 1: Graph of the number of positron events observed as a function of energy in billions of electron volts (GeV). Notice that there are only roughly 100 events in the highest energy category.

AMS Figure 6

Figure 2: Graph of the fraction of positrons compared with electrons as a function of energy in billions of electron volts (GeV). The ‘error’ bars show the uncertainty in the fraction due to the small number of events detected.

Critical Opalescence

January 14, 2013
Critical opalescence observed in a binary liquid system. What is happening? And what is the relevance to the early Universe?

Phase changes – the changes from solid to liquid or liquid to vapour – are amongst the most astonishing phenomena in nature. However, because they are familiar to us, we sometimes fail to notice just how amazing they are.

But consider this: water molecules with one level of molecular jiggling form a solid so hard that it can rip apart the metal hull of a ship. But if their average energy is increased by just a small fraction of one percent, the same molecules will form a liquid and the ship will be quite safe.

Critical opalescence is a phenomenon associated with a phase change which you will almost certainly not have witnessed. It occurs when a liquid at room temperature is sealed in a strong container and heated. Some liquid evaporates, filling the space in the container. As the container is heated further, more liquid evaporates, increasing the vapour pressure. The density of the liquid falls as it expands, the density of the vapour increases, and eventually a point is reached where the densities of the liquid and the gas are equal. At this point there is no way to tell which phase is which and the container is filled with ‘fluid’.

The point at which this happens – known as the critical point – is usually at high temperature and pressure. For water the critical condition occurs at 220 atmospheres and a temperature of 374.2 °C, at which point the density of water is roughly one third of its value at room temperature. The extreme conditions required to achieve the critical condition make it hard to observe, but despite the difficulties people have studied the behaviour of liquids around this point.

Even though both liquid and vapour are transparent, as one approaches the critical point, the mixture becomes cloudy – a phenomenon scientists ‘big up’ and call critical opalescence. It is a consequence of the fluctuations that are always taking place at the molecular scale. A few molecules at a time jiggle around and ‘try out’ new ways of sticking together. For example, small numbers of molecules in the liquid phase ‘try out’ being in the gas phase. If this turns out to be lower in energy, then more and more molecules join them and a large-scale phase transition takes place.

As the energy required to change from liquid to gas (latent heat) becomes smaller and smaller – reaching zero at the critical point – the number of molecules able to take part in these fluctuations grows. The opalescence occurs when the size of the fluctuations involves around a billion atoms – in droplets with a typical size of roughly 0.000 3 millimetres – roughly the wavelength of light. Then droplets of liquid constantly form and disperse, scattering light from their surface even though liquid and vapour are both transparent. This is similar to the way in which fog is opaque even though water and air are both transparent.

Anyway, you will almost certainly never get to see this phenomenon in a liquid-gas system, but critical opalescence may be observed in a much simpler way: in a mixture of methanol and hexane. Below 39 °C these organic solvents separate into immiscible phases, but above that they form a single phase – analogous to the phase separation in the liquid-gas system around its critical point. On cooling the mixture after heating it above its ‘critical point’, the opalescence is easily observed.

Below is a compilation of still photographs of the phenomenon in the form of a movie, and technical details of how to do the experiment. And you might think that would be it – a cute trick. But in fact this demonstration shows two astonishing things.

Firstly, it is direct visible evidence of the dynamical nature of fluids and the importance of microscopic fluctuations in phase changes. This underlies the detailed dynamics of every change of phase, explaining why things do – or don’t – supercool or superheat.

Secondly, the universe itself is thought to have undergone an analogous phase transition in the first instants after the Big Bang. Our search of the heavens for the fluctuations in the intensity of microwave radiation is a search for the remnants of similar fluctuations to those which you can see in the methanol/hexane mixture. Wow! The Universe in a glass of solvent!

Detailed Notes

Phase transition is the technical term used to describe phenomena such as the melting/freezing of ice/water, or the evaporation of a liquid to make a vapour. Just below the temperature at which ice melts, it is a solid, and in most senses of the word, there is no liquid present at all. Just above the temperature at which ice melts, it is a liquid, and there is no solid present at all. Raising the temperature by just a few thousandths of a degree is sufficient to transform completely the properties of water-substance. Transitions such as this are called first-order.

Not all phase transitions proceed in this way. For some transitions there is a critical temperature below which some kind of structure or order begins to appear. That is, just below the critical temperature TC the structure or order is not fully developed, but in some sense ‘grows stronger’ as one proceeds below TC. Thus both phases co-exist just below TC with the low-temperature phase growing stronger the further the temperature is lowered below TC. These phase transitions used to be called second-order, but are now known as continuous.

The most common example of a continuous phase transition is the liquid/vapour system. Consider a volume filled with vapour at constant pressure. This might correspond to vapour trapped in a very loose balloon. For the sake of definiteness let’s assume that we have water vapour and that there is no liquid water present in the container. As one lowers the temperature, the density of the vapour increases. Eventually one reaches a point where some liquid condenses. Both liquid and vapour co-exist, but the amount of liquid increases as the temperature is lowered below this condensation temperature. But even well below TC, there is still vapour present above the surface of the liquid.

Above TC for a continuous phase transition, the substance is constantly fluctuating into a state which is close to the new state which will eventually take over at lower temperature. These fluctuations are generally confined to just a few molecules at a time and take place on the length scale of a few nanometres. However, as one gets closer to the transition temperature, the ‘stability’ of the two phases (technically: their specific Gibbs Free Energy) becomes closer and the random fluctuations grow larger both in their physical extent and the length of time over which they last. Eventually the length scale and the time scale become so large that the system simply fluctuates into the new phase. TC marks the temperature at which the length scale and timescale of the fluctuations diverge.

Critical Temperature in a Liquid/Vapour System

To appreciate what the critical temperature indicates in a liquid/vapour system, consider now the case of liquid water heated in a closed container. As the temperature of the container increases, the density of the water falls (slowly) and the density of the vapour rises (rapidly). If the volume of the container is chosen so that the liquid does not all evaporate, at some point the densities of the liquid and vapour become equal. For most fluids this occurs when the liquid density is around one third of its “normal” value. For water this corresponds to a pressure of around 220 atmospheres and a temperature of 647.3 K (374.2 °C): the critical temperature of water substance.
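The ‘one third’ figure is easy to check for water. A back-of-envelope sketch using standard literature values; the critical density of 322 kg m-3 is not quoted in the text above:

```python
# Check: at the critical point, the common density of liquid and vapour
# is roughly one third of the liquid's "normal" density.
# 322 kg/m^3 is the standard literature figure for water's critical
# density; it is not taken from the text above.

RHO_NORMAL = 998.0    # density of liquid water at room temperature, kg/m^3
RHO_CRITICAL = 322.0  # critical density of water, kg/m^3

fraction = RHO_CRITICAL / RHO_NORMAL  # ~0.32, i.e. about one third
```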

At this point there is no distinction between the dense vapour and the low-density “liquid”, and so there is no latent heat associated with the transition: both are just matter in a fluid state. Near this point there is only a small distinction between the dense vapour and the low-density “liquid”, and the latent heat becomes smaller and smaller as one approaches TC.

Opalescence in a Liquid/Vapour System

On a microscopic scale, molecular randomness causes a liquid near TC to constantly fluctuate into nearly vapour-like volumes and back again. As one approaches TC, the length scale of the fluctuations grows. This is because the energy required for a fluctuation into the vapour state becomes smaller as one approaches TC. Eventually the fluctuations occur on the scale of a fraction of a micron: i.e. of the same order as the wavelength of light. If there is a difference in refractive index between the vapour and liquid phases (and there is generally a small difference) then light will be strongly scattered and the mixture of the phases appears cloudy. This phenomenon is known as critical opalescence.

However, the critical points of all practical substances occur at pressures above 10 atmospheres and so require special safety precautions to observe. As mentioned above, the pressure required for water is greater than 200 atmospheres. The combination of high pressure and glass windows is generally a troubling one.

Opalescence in a Binary Liquid System

Some binary fluid mixtures also show a critical temperature. Above TC the fluids are miscible, but below TC they separate into two separate phases. As one cools a binary fluid towards TC from above, the fully-mixed phase is constantly fluctuating into phase-separated volumes and back again. As one approaches TC, the length scale of the fluctuations grows and eventually reaches the scale of a fraction of a micron: i.e. of the same order as the wavelength of light. As with the liquid/vapour systems, light will be strongly scattered and the mixture of the phases appears cloudy. This critical opalescence is exactly analogous to that seen in liquid/vapour systems.

One can imagine that one of the fluids (in our case, the hexane) plays the role of a vacuum. The methanol ‘evaporates’ into this ‘vacuum’. Since the volume of the hexane is fixed, as the temperature increases the density of the methanol ‘evaporated’ into the hexane eventually equals the density of the methanol ‘liquid’: this marks the critical point of the system. The analogy is complicated by the fact that the hexane is simultaneously ‘evaporating’ into the ‘vacuum’ of the methanol, but the basic analogy between the binary fluid system and the liquid/vapour system is sound. Thus binary fluids allow one to observe critical opalescence with negligible safety risks. And of course, observing it on the web, your safety risk is reduced still further :-).

Practical Details

The demonstration is described in the book The Theory of Critical Phenomena by JJ Binney, NJ Dowrick, AJ Fisher and MEJ Newman, published by Oxford University Press in 1992, ISBN 0-19-851393-3. It involves mixing hexane (C6H14) with methanol (CH3OH) such that the ratio of the numbers of molecules (hexane to methanol) is 665:435. Using the data for the densities and molecular weights, it can be shown that this corresponds to a volume ratio of methanol to hexane of 1 to 4.93.



             Density        Boiling Point   Refractive Index
Methanol     791.3 kg m-3   64.7 °C
Hexane       659 kg m-3     68.7 °C

Data from Kaye and Laby.
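The conversion from the molecular ratio to the volume ratio is simple to sketch. A minimal check using the densities from the table above; the molar masses are standard values, not quoted in the text:

```python
# Convert the 665:435 hexane:methanol molecular ratio into a volume ratio.
# Densities are from the data table; the molar masses are standard values
# (not quoted in the text above).

N_HEXANE, N_METHANOL = 665, 435           # relative numbers of molecules
M_HEXANE, M_METHANOL = 86.18, 32.04       # molar masses, g/mol
RHO_HEXANE, RHO_METHANOL = 0.659, 0.7913  # densities, g/cm^3

# The volume occupied by each species is proportional to n * M / rho.
v_hexane = N_HEXANE * M_HEXANE / RHO_HEXANE
v_methanol = N_METHANOL * M_METHANOL / RHO_METHANOL

volume_ratio = v_hexane / v_methanol  # ~4.93 parts hexane to 1 part methanol
```

This matches the mixture described below: 20 cc methanol to 98.6 cc hexane gives 98.6/20 = 4.93.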

The book specifies the use of n-hexane rather than just hexane, but I found that using regular hexane made no difference. I mixed the liquids in the ratio 20 cc methanol to 98.6 cc hexane. The precision required did not seem to be too great. The photographs on this site represent my second attempt at the experiment, and my only tip is that it is important to keep the container still as it cools.


1. Picture taken after mixing hexane and methanol at 18 °C. The liquid-vapour boundary is at the top of the picture, and the phase boundary between the lower methanol-rich phase and the upper hexane-rich phase is visible about 1/4 of the way up the picture. The silver “blob” in the centre is a thermometer.

2. Now the liquid has been put onto a hot-plate and heated to around 30 °C. Notice the presence of so-called capillary waves at the interface between the two fluids. The waves are very persistent and move slowly around the container giving a kind of “slow motion” effect.

3. Still on the hot-plate, the liquid has now reached 45 °C. The liquid appears to be “boiling” but it is not: 45 °C is well below the boiling temperature of both liquids (64.7 °C and 68.7 °C for methanol and hexane respectively). What is happening is that one phase is “boiling” into the other. Indeed, just before this picture was taken, small jets of the lower (hotter) methanol-rich phase could be seen exploding into the other!


4. Now the flask has been removed from the hot-plate and is being held in a clamp stand and allowed to cool. It is at about 46 °C. Notice that the liquid is clear (compared with Picture 3) and that the phase boundary visible in Picture 1 has disappeared. This is the high-temperature mixed phase.

5. Now the flask is continuing to cool and the nominal temperature is about 46 °C. Notice that the liquid is becoming cloudy at the bottom of the container. Due to convection, the bottom of the container is probably colder than the top by a few tenths of a degree Celsius.

6. The flask is continuing to cool and the nominal temperature is about 41 °C. The cloudiness is now much more noticeable and reaches further up the container. With “the eye of faith” one can just begin to see the methanol-rich phase collecting at the bottom of the container. 

7. The flask is continuing to cool and the nominal temperature is about 39°C. The cloudiness is now much more noticeable and reaches further up the container. The methanol-rich phase may now be more clearly seen at the bottom of the container.

8. The flask is continuing to cool and the nominal temperature is about 37.5°C. The cloudiness now reaches further up the container. The methanol-rich phase may now be clearly seen at the bottom of the container and a reasonably clear phase boundary is visible.

9. The flask is continuing to cool and the nominal temperature is about 37°C. The cloudiness now reaches its maximum extent. The methanol-rich phase may now be clearly seen at the bottom of the container and that too appears to be cloudy, but it is difficult to be sure.

10. The flask is continuing to cool and the nominal temperature is about 36.5°C. The cloudiness is now beginning to clear.

11. The flask is continuing to cool and the nominal temperature is about 35.5°C. The cloudiness is now definitely clearing.

12. The flask is continuing to cool and the nominal temperature is about 33°C. The cloudiness continues to clear and one can now (just) see droplets forming on the glass. These droplets are on the inside of the glassware and form in both phases. Those in the upper hexane-rich phase drip down to interface, but the drops in the lower methanol-rich phase drip upwards to the interface!

13. The flask is continuing to cool and the nominal temperature is about 28°C. The cloudiness is nearly gone and one can now clearly see the droplets described in Picture 12.

14. The flask is continuing to cool and the nominal temperature is about 26°C. The cloudiness is nearly gone and the droplets described in Picture 12 are clearly visible.

15. The flask has now been left for one hour since Picture 14 was taken and its nominal temperature is about 18 °C. The cloudiness is nearly gone and the droplets described in Picture 12 are still clearly visible. Phase separation is essentially complete and the system has returned to an equilibrium state similar to that visible in Picture 1.


Not all mass derives from the Higgs!

July 15, 2012

An unspectacular representation of ‘the Higgs result’. Slightly more events of a particular kind have been seen when particles collide with a particular energy. Picture from DOE Tevatron.

My last course in particle physics at university represented the high-water mark of my understanding of the subject. But I was already struggling. So in the matter of the Higgs discovery, my scientific qualifications offer me only the slightest of advantages.

Fortunately the man who was responsible for what understanding I once possessed, David Bailin, responded to my plea for help:

As you say, the Higgs mechanism gives masses to all (truly elementary) particles, … but that certainly does not account for most of the mass of ordinary matter like us. Most of the mass of neutrons and protons derives from non-perturbative strong interaction effects in the gluons (and quark-anti-quark pairs) that bind the valence quarks in the nucleon.

The key words here are ‘truly elementary’. As I understand this, the Higgs mechanism is responsible for all the mass of electrons, muons, and quarks. But it is not responsible for all – or even most – of the mass of composite particles such as mesons, protons and neutrons (glossary here). I think this is an interesting subtlety to the simpler “Higgs gives mass” story that has been widely reported. The simple story is true in the sense that the quark-quark interactions that give rise to most of the mass of ordinary matter would not have got started if the Higgs field had not given the quarks some initial mass.

The idea is that as the Universe cooled, the Higgs field first gave truly elementary particles their initial mass, and then later on these particles interacted and acquired the bulk of their additional mass as a consequence.

Is that clearer now?


Full Quote from David Bailin

As you say, the Higgs mechanism gives masses to all (truly elementary) particles, except those like the photon, gluons (and graviton) that are protected by an unbroken gauge symmetry. But that certainly does not account for most of the mass of ordinary matter like us. Most of the mass of neutrons and protons derives from non-perturbative strong interaction effects in the gluons (and quark-anti-quark pairs) that bind the valence quarks in the nucleon. That’s what the lattice QCD people have spent so long and so much money trying to compute. The nucleon masses would hardly change even if there was no Higgs effect. Gravity is coupled to anything with a non-zero energy momentum tensor, and that certainly includes the “condensate” in the nucleons, as well as the Higgs field.

Another thought on Higgs

July 9, 2012
Higgs boson: Proton-proton collisions as measured by Cern

Another incomprehensible image typically used to illustrate stories about the Higgs Boson. It shows ‘things’ shooting out from the point where two protons have been smashed together. Picture stolen from The Guardian

I had one more thought about the recent discovery of the Higgs particle: if the Higgs is the particle which gives ‘mass’ to all the other particles, then surely the nature of the Higgs must be linked to the nature of gravity?

As you may be aware, the concept of ‘mass’ enters our lives in two quite distinct ways: as inertial mass and as gravitational mass.

  • Inertial mass is the property of an object which makes it harder to speed up or slow down. This is encapsulated in Newton’s Second Law of Motion: that the amount of force required to achieve a given acceleration is related to the inertial mass of the object.
  • Gravitational mass is the property of an object which makes it attract other objects at a distance through space. This is encapsulated in Newton’s Law of Universal Gravitation: that all the matter in the universe attracts all the other matter with a force which is proportional to the product of the masses and inversely proportional to the square of the distance between the objects.
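The two roles of mass can be made concrete with a quick calculation: the same m appears in F = ma (inertial) and F = GMm/r² (gravitational), so m cancels and every object falls with the same acceleration g = GM/r². A sketch using standard values for the Earth, none of which appear in the text above:

```python
# The same object mass m appears in Newton's Second Law (F = m*a, inertial)
# and in the Law of Universal Gravitation (F = G*M*m/r^2, gravitational).
# Because the two masses are equal, m cancels: every object falls with
# a = G*M/r^2. G, M_EARTH and R_EARTH are standard values (not from the text).

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

g = G * M_EARTH / R_EARTH**2  # acceleration of any falling object, m/s^2
```

The result, about 9.8 m/s², is the same whatever the mass of the falling object; that this works at all is exactly the equality of inertial and gravitational mass that fascinated Einstein.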

Einstein was fascinated by the simple observation that inertial and gravitational mass were – as well as can be measured – always exactly equal. From this insight he was inspired to derive his General Theory of Relativity which is based on the central tenet – the principle of equivalence – that these two types of mass are not in fact two distinct properties, but one single property.

When scientists say the Higgs particle is responsible for giving ‘mass’ to all the other particles, they mean inertial mass. But this will also have a gravitational effect. I have not seen any discussion of this feature in the news, but surely, if the Higgs particle gives rise to both types of mass, it must provide some kind of link between the two different manifestations of mass. Discovering any kind of connection at all between the Electroweak force, the Strong force and Gravity would really be a major step forward in our understanding of the Universe.

Or maybe I have completely missed the point?

Noticing the obvious

June 11, 2012
Electrical Neutrality

An artistic visualisation of the concept of equality of positive and negative electrical charges.

I tidied my office last week, and flicking through my pile of ‘papers to read later’ I came upon:

The electrical neutrality of atoms and of bulk matter
C. S. Unnikrishnan and G. T. Gillies
Metrologia 41 (2004) S125-S135

The basic question asked in the paper was:

  • Is the electrical charge on the electron equal in magnitude to the electrical charge on the proton?

Or alternatively:

  • Is bulk matter electrically neutral?

What struck me about the paper was ‘Why had I never asked either of these questions myself?’. I think it is because I had somehow thought the answers were ‘obvious’ or ‘self-evident’, but in fact they are neither. They are – IMHO – bloody good questions!

Ultimately the questions are experimental, and a number of clever techniques have been used to provide the answers. So far we know that the charge on the electron and the charge on the proton are equal in magnitude to within 1 part in 10²¹. Wow! I don’t know of any other two experimental numbers that are known to be equal with that degree of precision.
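To get a feel for what 1 part in 10²¹ means, here is a back-of-envelope sketch; Avogadro’s number and the elementary charge are standard values not quoted in the text above:

```python
# Back-of-envelope: what net charge would a gram of hydrogen carry if the
# proton and electron charges differed fractionally by 1 part in 10^21?
# Avogadro's number and the elementary charge are standard values,
# not taken from the text above.

N_A = 6.022e23        # protons (and electrons) in one gram of hydrogen
E_CHARGE = 1.602e-19  # elementary charge, coulombs
IMBALANCE = 1e-21     # fractional charge difference (the experimental bound)

net_charge = N_A * E_CHARGE * IMBALANCE  # net charge in coulombs, ~1e-16 C
```

Even at the experimental bound, a gram of matter would carry under 10⁻¹⁶ coulombs: a minute charge, which is why such exquisitely sensitive techniques are needed to test the question at all.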

Electrons have a completely different internal structure from protons and neutrons. Electrons are part of a class of particles called leptons (‘light’* ones) which, as far as we know, have no internal structure. Protons and neutrons are part of a class of particles called baryons (‘heavy’** ones), and we know they are in some sense composed of quarks. Each proton or neutron is composed of three quarks.

Given the completely different structure of each type of particle, the equality of the magnitude of the electric charge on each goes from being ‘remarkable’ to ‘astounding’! And even though we have no fundamental explanation of what electric charge is, it is shocking to find that two types of particle with completely different structures should have exactly the same amount of it – whatever ‘it‘ is. In fact, I refuse to believe it. It seems inevitable that we will eventually discover some tiny difference between the magnitude of the electric charge on the proton and the electron.

What would the consequences of such a discovery be? Unnikrishnan and Gillies cover these in some detail, but two points caught my attention. The first was a comment by Einstein noting that if normal matter were not exactly neutral, then when it flowed it could generate a magnetic field.

The Earth and Sun have magnetic fields, the orientation of which stand in approximate relationship with the axes of rotation… But it is hard to imagine that electrical conduction or convection currents of sufficient magnitudes are really present… It rather looks as if the cyclic motions of neutral matter are producing magnetic fields.

And the second was an empirical law proposed by Schuster who noticed that the strength of the magnetic field around a range of planets and galaxies was linked to their respective angular momenta. Could that be evidence that bulk matter is not perfectly neutral?

Now these suggestions are highly speculative, and I am sure that people cleverer than I have worked out all kinds of reasons why they do, or do not, make sense. But whether or not they turn out to be true, I feel like I have had my sense of what is ‘normal’ re-adjusted. I am left with the sense that something I experience everyday – the neutrality of bulk matter – is not in any sense obvious.


* Light in the sense of low mass rather than in the sense of electromagnetic radiation

** Heavy in the sense of high mass rather than in the sense of really serious

The Gravity Gnome

April 27, 2012
Weighing a Gnome

Weighing a gnome is actually a way of probing the gravity field around us.

Gravity is the most mysterious of the forces we experience in our lives. Impossible to screen against, it extends throughout space to the farthest corners (corners?) of the cosmos, causing every piece of matter in the Universe to affect every other. Wow!

More prosaically, gravity gives rise to the phenomenon of ‘weight’ – the force which pulls us ‘down’ to the Earth. GCSE students are tutored on the difference between mass and weight, and are told that the weight of an object, say a Gnome, varies from one planet to another, but its mass is the same on any planet. However, the Kern instrument company are keen to point out that if you use a sensitive force balance, its weight changes from place to place around the Earth.

The balance doesn’t even need to be that sensitive. I was surprised to find out by how much the weight of an object measured at a fixed height above sea level changes with latitude: it varies by around 0.5% between the equator and the poles. So for a Gnome weighing around 300 g, changes of 1.5 g should be seen, and this is easily detectable. The rationale for the publicity stunt is explained here, and you can follow the Kern Gnome on his journey here. I like this experiment because the measurement is so simple – and yet the physics it uncovers is so profound.
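The quoted numbers are easy to check. A sketch using the standard surface-gravity values at the equator and the poles, neither of which is given in the text above:

```python
# Check: how much does a 300 g Gnome's measured weight change between
# the equator and the poles?
# The g values are standard figures, not taken from the text above.

G_EQUATOR = 9.780  # surface gravity at the equator, m/s^2
G_POLE = 9.832     # surface gravity at the poles, m/s^2
MASS = 0.300       # mass of the Gnome, kg

fractional_change = (G_POLE - G_EQUATOR) / G_EQUATOR  # ~0.5%
apparent_mass_change = MASS * fractional_change * 1000  # in grams, ~1.6 g
```

A change of around 1.5 g on a 300 g Gnome is well within the reach of an ordinary laboratory balance.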

The light-hearted video below shows my children’s reflections on the mystery of Gravity.

P.S. After a period of steady decline, my own weight has been mysteriously increasing. I think this may be due to a fluctuation in the gravitational constant G. More about this in future articles.
