Archive for the ‘Cosmology’ Category

How would you take a dinosaur’s temperature?

March 15, 2017

A tooth from a tyrannosaurus rex.

Were dinosaurs warm-blooded or cold-blooded?

That is an interesting question. And one might imagine that we could infer an answer by looking at fossil skeletons and drawing analogies with modern animals.

But with dinosaurs all being dead these last 66 million years or so, a direct temperature measurement is obviously impossible.

Or so I thought until earlier today when I visited the isotope facilities at the Scottish Universities Environmental Research Centre in East Kilbride.

There they have a plan to make direct physical measurements on dinosaur remains, and from these measurements work out the temperature of the dinosaur during its life.

Their cunning three-step plan goes like this:

  1. Find some dinosaur remains: They have chosen to study the teeth from tyrannosaurs because it transpires that there are plenty of these available and so museums will let them carry out experiments on samples.
  2. Analyse the isotopic composition of carbonate compounds in the teeth. It turns out that the detailed isotopic composition of carbonates changes systematically with the temperature at which the carbonate was formed. Studying the isotopic composition of the carbon dioxide gas given off when the teeth are dissolved reveals that subtle change in carbonate composition, and hence the temperature at which the carbonate was formed.
  3. Study the ‘formation temperature’ of the carbonate in dinosaur teeth discovered in a range of different climates. If dinosaurs were cold-blooded, (i.e. unable to control their own body temperature) then the temperature ought to vary systematically with climate. But if dinosaurs were warm-blooded, then the formation temperature should be the same no matter where they lived (in the same way that human body temperature doesn’t vary with latitude).
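
To make Step 2 a little more concrete, here is a minimal sketch (in Python) of the final arithmetic: turning a measured isotope ‘clumping’ signal from the CO2 released by the dissolved tooth into a formation temperature. The function name, the signal values and the calibration coefficients below are all invented placeholders for illustration – real clumped-isotope thermometry relies on carefully measured calibration curves, not these numbers.

```python
# Illustrative sketch only: convert a hypothetical isotope 'clumping' signal
# (per mil) into a carbonate formation temperature, assuming a made-up
# calibration of the form signal = a * 1e6 / T**2 + b, with T in kelvin.

def formation_temperature_celsius(signal, a=0.04, b=0.23):
    """Temperature (°C) implied by `signal`, for placeholder coefficients a, b."""
    if signal <= b:
        raise ValueError("Signal at or below calibration intercept: no solution.")
    temperature_kelvin = (a * 1e6 / (signal - b)) ** 0.5
    return temperature_kelvin - 273.15

# Step 3 in action: teeth from very different climates giving similar
# formation temperatures would point towards warm-bloodedness.
for signal in (0.68, 0.69):
    print(f"signal = {signal:.2f} ‰  ->  T ≈ {formation_temperature_celsius(signal):.1f} °C")
```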

A ‘paleo-thermometer’

I have written out the three-step plan above, and I hope it sort of made sense.

So contrary to what I said at the start of this article, it is possible – at least in principle – to measure the temperature of a dinosaur that died at least 66 million years ago.

But in fact work like this is right on the edge of ‘the possible’. It ought to work. And the people doing the work think it will work.

But the complexities of the measurement in Step 2 seemed to me so numerous that it is quite possible it won’t work. Or not as well as hoped.

However I don’t say that as a criticism: I say it with admiration.

To be able to even imagine making such a measurement seems to me to be on a par with measuring the cosmic microwave background, or gravitational waves.

It involves stretching everything we can do to its limits and then studying the faint structures and patterns that we detect. Ghosts from the past, whispering to us through time.

I was inspired.

=============================

Thanks to Adrian Boyce and Darren Mark for their time today, and apologies to them both if I have mangled this story!

Cosmic rays surprise us again

April 7, 2013

The Alpha Magnetic Spectrometer (AMS-02) being tested at CERN by being exposed to a beam of positrons. (Picture from Wikipedia)

[Text and figures updated on April 9th 2013 due to insight from Ryan Nichol: Thanks]

The team running the Alpha Magnetic Spectrometer (AMS-02) have produced their first set of results. And as expected, they are full of surprises.

AMS-02 is an awesomely complex device – too power-hungry, heavy and complex to be placed on its own space platform, it was attached to the International Space Station 18 months ago on the last space shuttle mission. I wrote about this here.

It has 650 separate microprocessors, 1118 temperature sensors and 298 active thermostatically-controlled heaters. It is basically a general-purpose particle detector like those found at CERN, and represents the culmination of nearly one hundred years of ‘fishing for particles’ in the high atmosphere.

  • First we flew balloons and found that ‘radiation levels’ increased as we went higher.
  • Then we discovered a ‘zoo’ of particles not yet observed on Earth – positrons, muons, pions, and anti-protons.
  • Then we discovered that ‘cosmic rays’ were not ‘rays’ but particles. And we realised that at the Earth’s surface we only observed the debris of collisions of ‘cosmic ray’ particles with the atoms in the upper atmosphere.

Where did these primary cosmic ray particles come from? What physical process accelerated them? Why did they have the range of energies that we observed? What were they? Protons? Electrons? Positrons? We just didn’t know. The AMS-02 was sent up to answer these questions.

I have found much of the comment on the results incomprehensible (BBC Example) with the discussion being exclusively focussed on ‘dark matter’.  So I thought I would try to summarise the results as I see them based on reading the original paper.

Over the last 18 months (roughly 50 million seconds) AMS-02 has observed 25 billion ‘events’  (roughly 600 per second). However, the results they report concern only a tiny fraction of these events – around 6.8 million observations of positrons or electrons believed to be ‘primary’ – coming straight from outer space.

  • They found that – as is usual for cosmic rays – there were fewer and fewer particles with high energies (Figure 1 below)
  • Looking at just the electrons and positrons (i.e. ignoring the protons and other particles they observed), there were only about 10% as many positrons as electrons, but the exact fraction changed with energy (See Figure 2 below)
  • They found that there were no ‘special’ energies – the spectrum was smooth.
  • They observed that the particles came uniformly from all directions – the distribution was uniform, with variations of greater than 4% very unlikely.
  • The electron and positron fluxes followed nearly the same ‘power law’ i.e. the number of particles observed with a given energy changes in nearly the same way – indicating that they probably have the same source.
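
To illustrate what following a ‘power law’ means in practice, here is a small Python sketch that fits flux ≈ A × E^(−γ) to a toy spectrum by fitting a straight line in log-log space. The numbers are invented for illustration and are not the AMS-02 data.

```python
# Minimal sketch: fit a power law, flux ≈ A * E**(-gamma), to a toy spectrum.
# The 'data' below are invented for illustration – they are NOT AMS-02 results.
import numpy as np

energy_gev = np.array([1, 2, 5, 10, 20, 50, 100, 200])                      # GeV
flux = np.array([1.0e3, 1.2e2, 8.0, 1.0, 1.2e-1, 7.5e-3, 9.0e-4, 1.1e-4])   # arbitrary units

# A power law is a straight line in log-log space: log(flux) = log(A) - gamma*log(E)
slope, intercept = np.polyfit(np.log10(energy_gev), np.log10(flux), 1)
gamma, amplitude = -slope, 10**intercept

print(f"Fitted spectral index gamma ≈ {gamma:.2f}")
print(f"Fitted amplitude A ≈ {amplitude:.3g} (arbitrary units)")
```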

They conclude very modestly that the detailed observation of this positron ‘spectrum’ demonstrates…

“…the existence of new physical phenomena, whether from a particle physics or astrophysical origin.”

I like this experiment because it represents a new way to observe the Universe – and our observations of the Universe have always surprised us. Observations have the power to puncture the vast bubbles of speculation and fantasy that constitute much of cosmology. I am sure that over the 20 year lifetime of this experiment, AMS-02 will surprise us again and again.

Figures

Figure 1: Graph of the number of positron events observed as a function of energy in billions of electron volts (GeV). Notice that there are only roughly 100 events in the highest energy category.

Figure 2: Graph of the fraction of positrons compared with electrons as a function of energy in billions of electron volts (GeV). The ‘error’ bars show the uncertainty in the fraction due to the small number of events detected.

Critical Opalescence

January 14, 2013

Critical Opalescence observed in a binary liquid system. What is happening? And what is the relevance to the early Universe?

Phase changes – the changes from solid to liquid or liquid to vapour – are amongst the most astonishing phenomena in nature. However, because they are familiar to us, we sometimes fail to notice just how amazing they are.

But consider this: water molecules with one level of molecular jiggling form a solid so hard that it can rip apart the metal hull of a ship. But if their average energy is increased by just a small fraction of one percent, the same molecules will form a liquid and the ship will be quite safe.

Critical opalescence is a phenomenon associated with a phase change which you will almost certainly not have witnessed. It occurs when a liquid at room temperature is sealed in a strong container and heated. Some liquid evaporates, filling the space in the container. As the container is heated further, more liquid evaporates, increasing the vapour pressure. The density of the liquid falls as it expands and the density of the vapour increases, and eventually a point is reached where the densities of the liquid and the gas are equal. At this point there is no way to tell which phase is which and the container is filled with ‘fluid’.

The point at which this happens – known as the critical point – is usually at high temperature and pressure. For water the critical condition occurs at 220 atmospheres and a temperature of 374.2 °C, at which point the density of water is roughly one third of its value at room temperature. The extreme conditions required to achieve the critical condition make it hard to observe, but despite the difficulties people have studied the behaviour of liquids around this point.

Even though both liquid and vapour are transparent, as one approaches the critical point, the mixture becomes cloudy – a phenomenon scientists ‘big up’ and call critical opalescence. It is a consequence of the fluctuations that are always taking place at the molecular scale. A few molecules at a time jiggle around and ‘try out’ new ways of sticking together. For example small numbers of  molecules in the liquid phase ‘try out’ being in the gas phase. If it turns out to be lower in energy then more and more molecules join them and a large-scale phase transition takes place.

As the energy required to change from liquid to gas (latent heat) becomes smaller and smaller – reaching zero at the critical point – the number of molecules able to take part in these fluctuations grows. The opalescence occurs when the size of the fluctuations involves around a billion atoms – in droplets with a typical size of roughly 0.0003 millimetres – roughly the wavelength of light. Then droplets of liquid constantly form and disperse, scattering light from their surface even though liquid and vapour are both transparent. This is similar to the way in which fog is opaque even though water and air are both transparent.
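
As a quick sanity check on the ‘around a billion’ figure, here is a minimal Python sketch that counts the molecules in a droplet 0.0003 mm across, using the ordinary density of liquid water and Avogadro’s number. Treating the droplet as a tiny cube, and using the room-temperature density rather than the (lower) near-critical density, are simplifying assumptions for illustration only.

```python
# Rough check: how many molecules are in a droplet ~0.0003 mm across?
# (Water is used as the example fluid; a cube-shaped droplet is assumed.)
AVOGADRO = 6.022e23          # molecules per mole
DENSITY_WATER = 1000.0       # kg/m^3, ordinary liquid water
MOLAR_MASS_WATER = 0.018     # kg/mol

droplet_size = 0.0003e-3     # 0.0003 mm expressed in metres
volume = droplet_size**3     # treat the droplet as a tiny cube (simplification)

molecules = DENSITY_WATER * volume / MOLAR_MASS_WATER * AVOGADRO
print(f"Molecules in a {droplet_size*1e9:.0f} nm droplet: about {molecules:.1e}")
# Prints roughly 9e8 – of order a billion molecules (and about three times
# as many atoms), consistent with the figure quoted above.
```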

Anyway you will almost certainly never get to see this phenomenon in a liquid-gas system, but critical opalescence may be observed in a much simpler way: in a mixture of methanol and hexane. Below 39 °C these organic solvents separate into immiscible phases, but above that they form a single phase – analogous to the phase separation in the liquid-gas system around its critical point. Cooling the mixture after heating above its ‘critical point’, the opalescence is easily observed.

Below is a compilation of still photographs of the phenomenon in the form of a movie, and technical details of how to do the experiment. And you might think that would be it – a cute trick. But in fact this demonstration shows two astonishing things.

Firstly, it is direct visible evidence of the dynamical nature of fluids and the importance of microscopic fluctuations in phase changes. This underlies the detailed dynamics of every change of phase, explaining why things do – or don’t – supercool or superheat.

Secondly, the universe itself is thought to have undergone an analogous phase transition in the first instants after the Big Bang. Our search of the heavens for the fluctuations in the intensity of microwave radiation is a search for the remnants of similar fluctuations to those which you can see in the methanol/hexane mixture. Wow! The Universe in a glass of solvent!

Detailed Notes

Phase transition is the technical term used to describe phenomena such as the melting/freezing of ice/water, or the evaporation of a liquid to make a vapour. Just below the temperature at which ice melts, it is a solid, and in most senses of the word, there is no liquid present at all. Just above the temperature at which ice melts, it is a liquid, and there is no solid present at all. Raising the temperature by just a few thousandths of a degree is sufficient to transform completely the properties of water-substance. Transitions such as this are called first-order.

Not all phase transitions proceed in this way. For some transitions there is a critical temperature below which some kind of structure or order begins to appear. That is, just below the critical temperature TC the structure or order is not fully developed, but in some sense ‘grows stronger’ as one proceeds below TC. Thus both phases co-exist just below TC with the low-temperature phase growing stronger the further the temperature is lowered below TC. These phase transitions used to be called second-order, but are now known as continuous.

The most common example of a continuous phase transition is the liquid/vapour system. Consider a volume filled with vapour at constant pressure. This might correspond to vapour trapped in a very loose balloon. For the sake of definiteness let’s assume that we have water vapour and that there is no liquid water present in the container. As one lowers the temperature, the density of the vapour increases. Eventually one reaches a point where some liquid condenses. Both liquid and vapour co-exist, but the amount of liquid increases as the temperature is lowered below this condensation temperature. But even well below TC, there is still vapour present above the surface of the liquid.

Above TC for a continuous phase transition, the substance is constantly fluctuating into a state which is close to the new state which will eventually take over at lower temperature. These fluctuations are generally confined to just a few molecules at a time and take place on the length scale of a few nanometres. However as one gets closer to the transition temperature, the ‘stability’ of the two phases (technically: their specific Gibbs Free Energy) becomes closer and the random fluctuations grow larger both in their physical extent and the length of time over which they last. Eventually the length scale and the time scale become so large that the system simply fluctuates into the new phase. TC marks the temperature at which the length scale and timescale of the fluctuations diverge.
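
A standard way to think about why these fluctuations grow is the Boltzmann factor: the relative probability of a local fluctuation into the ‘other’ phase falls off roughly as exp(−ΔG/kT), where ΔG is the free-energy cost of the fluctuation. The little sketch below just evaluates that factor for a few illustrative (made-up) values of ΔG, to show that as the cost approaches zero – which is what happens at TC – such fluctuations stop being rare.

```python
# Illustrative only: relative probability of a fluctuation into the other
# phase, using a simple Boltzmann-factor estimate exp(-ΔG/kT).
import math

for delta_G_in_kT in (10.0, 3.0, 1.0, 0.3, 0.0):   # free-energy cost, in units of kT
    probability = math.exp(-delta_G_in_kT)
    print(f"ΔG = {delta_G_in_kT:4.1f} kT  ->  relative probability ≈ {probability:.3f}")
# As ΔG -> 0 (i.e. as T -> TC) the 'other' phase costs nothing to try out,
# so the fluctuations become large and long-lived.
```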

Critical Temperature in a Liquid/Vapour System

To appreciate what the critical temperature indicates in a liquid-vapour system, consider the case now of liquid water heated in a closed container. As the temperature of the container increases, the density of the water falls (slowly) and the density of the vapour rises (rapidly). If the volume of the container is chosen so that all the liquid does not evaporate, at some point the densities of the liquid and vapour become equal. For most fluids this occurs when the liquid density is around one third its “normal” value. For water this corresponds to a pressure of around 220 atmospheres and a temperature of 647.3 K (374.2 °C): the critical temperature of water substance.

At this point there is no distinction between the dense vapour and the low density “liquid” and so there is no latent heat associated with the transition. Both are just matter in a fluid state. Near this point there is only a small distinction between the dense vapour and the low density “liquid”, and the latent heat becomes smaller and smaller as one approaches TC.

Opalescence in a Liquid/Vapour System

On a microscopic scale, molecular randomness causes a liquid near TC to constantly fluctuate into nearly vapour-like volumes and back again. As one approaches TC the length scale of the fluctuations grows. This is because the energy required for a fluctuation into the vapour state becomes smaller as one approaches TC. Eventually the fluctuations occur on the scale of a fraction of a micron: i.e. of the same order as the wavelength of light. If there is a difference in refractive index of the vapour and liquid phases (and there is generally a small difference) then light will be strongly scattered and the mixture of the phases appears cloudy. This phenomenon is known as critical opalescence.

However the critical points of all practical substances occur at pressures above 10 atmospheres and so require special safety precautions to observe. As mentioned above, the pressure required for water is greater than 200 atmospheres. The combination of high pressure and glass windows is generally a troubling one.

Opalescence in a Binary Liquid System

Some binary fluid mixtures also show a critical temperature. Above TC the fluids are miscible, but below TC they separate into two separate phases. As one cools a binary fluid towards TC from above, the fully-mixed phase is constantly fluctuating into phase-separated volumes and back again. As one approaches TC the length scale of the fluctuations grows and eventually reaches the scale of a fraction of a micron: i.e. of the same order as the wavelength of light. As with the liquid vapour systems, light will be strongly scattered and the mixture of the phases appears cloudy. This critical opalescence is exactly analogous to that seen in liquid/vapour systems.

One can imagine that one of the fluids (e.g. in our case the hexane) is a vacuum. The methanol “evaporates” into the vacuum. Since the volume of the hexane is fixed, as the temperature increases, the density of the methanol “evaporated” into the hexane eventually equals the density of the methanol “liquid”. This marks the critical point of this system. The analogy is complicated because simultaneously the hexane is “evaporating” into the “vacuum” of the methanol, but the basic analogy between the binary fluid system and the liquid/vapour system is sound. Thus binary fluids allow one to observe critical opalescence, with negligible safety risks. And of course, observing it on the web your safety risk is reduced still further :-).

Practical Details

The demonstration is described in the book The Theory of Critical Phenomena by JJ Binney, NJ Dowrick, AJ Fisher and MEJ Newman, published by Oxford University Press in 1992, ISBN 0-19-851393-3. It involves mixing hexane (C6H14) with methanol (CH3OH) in a ratio such that the ratio of the number of molecules is 665:435. Using the data for the density and molecular weight in the table below, it can be shown that this corresponds to a volume ratio of 1 part methanol to 4.93 parts hexane (see the short calculation after the table).

        MW   Density        Boiling Point   Refractive Index
CH3OH   32   791.3 kg m⁻³   64.7 °C         1.3284
C6H14   86   659 kg m⁻³     68.7 °C         1.3749

Data from Kaye and Laby.
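
The quoted mixing ratio can be checked directly from the table above. The short Python sketch below converts the 665:435 ratio of molecule numbers into a volume ratio using the molar masses and densities; assigning 665 to hexane and 435 to methanol is my reading of the text, inferred because it reproduces the 1 to 4.93 figure and the 20 cc : 98.6 cc recipe quoted here.

```python
# Check of the mixing ratio: convert the 665:435 ratio of molecule numbers
# (taken here as hexane : methanol) into a volume ratio, using the molar
# masses and densities from the table above (SI units).
moles_hexane, moles_methanol = 665, 435

molar_volume_hexane   = 0.086 / 659.0     # m^3 per mole of C6H14
molar_volume_methanol = 0.032 / 791.3     # m^3 per mole of CH3OH

volume_ratio = (moles_hexane * molar_volume_hexane) / (moles_methanol * molar_volume_methanol)
print(f"Hexane : methanol volume ratio ≈ {volume_ratio:.2f} : 1")          # ≈ 4.93 : 1
print(f"Hexane needed for 20 cc of methanol ≈ {20 * volume_ratio:.1f} cc")  # close to the 98.6 cc used below
```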

The book specifies the use of n-hexane rather than just hexane but I found using regular hexane made no difference. I mixed the liquids in the ratio 20 cc methanol to 98.6 cc hexane. The precision required did not seem to be too great. The photographs on this site represent my second attempt at the experiment and my only tip is that it is important to keep the container still as it cools.

Heating

1. Picture taken after mixing hexane and methanol at about 18 °C. The liquid-vapour boundary is at the top of the picture and the phase boundary between the lower methanol-rich phase and the upper hexane-rich phase is visible about 1/4 of the way up the picture. The silver “blob” in the centre is a thermometer.

2. Now the liquid has been put onto a hot-plate and heated to around 30 °C. Notice the presence of so-called capillary waves at the interface between the two fluids. The waves are very persistent and move slowly around the container giving a kind of “slow motion” effect.

3. Still on the hot-plate the liquid has now reached 45 °C. The liquid appears to be “boiling” but it is not: 45 °C is well below the boiling temperature of both liquids (64.7 °C and 68.7 °C for methanol and hexane respectively). What is happening is that one phase is “boiling” into the other. Indeed, just before this picture was taken, small jets of the lower (hotter) methanol-rich phase could be seen exploding into the other!

Cooling

4. Now the flask has been removed from the hot-plate and is being held in a clamp stand and allowed to cool. It is at about 46 °C. Notice that the liquid is clear (compared with Picture 3) and that the phase boundary visible in Picture 1 has disappeared. This is the high-temperature mixed phase.

5. Now the flask is continuing to cool and the nominal temperature is about 46 °C. Notice that the liquid is becoming cloudy at the bottom of the container. Due to convection, the bottom of the container is probably colder than the top of the container by a few tenths of a degree Celsius.

6. The flask is continuing to cool and the nominal temperature is about 41 °C. The cloudiness is now much more noticeable and reaches further up the container. With “the eye of faith” one can just begin to see the methanol-rich phase collecting at the bottom of the container. 

7. The flask is continuing to cool and the nominal temperature is about 39°C. The cloudiness is now much more noticeable and reaches further up the container. The methanol-rich phase may now be more clearly seen at the bottom of the container.

8. The flask is continuing to cool and the nominal temperature is about 37.5°C. The cloudiness now reaches further up the container. The methanol-rich phase may now be clearly seen at the bottom of the container and a reasonably clear phase boundary is visible.

9. The flask is continuing to cool and the nominal temperature is about 37°C. The cloudiness now reaches its maximum extent. The methanol-rich phase may now be clearly seen at the bottom of the container and that too appears to be cloudy, but it is difficult to be sure.

10. The flask is continuing to cool and the nominal temperature is about 36.5°C. The cloudiness is now beginning to clear.

11. The flask is continuing to cool and the nominal temperature is about 35.5°C. The cloudiness is now definitely clearing.

12. The flask is continuing to cool and the nominal temperature is about 33°C. The cloudiness continues to clear and one can now (just) see droplets forming on the glass. These droplets are on the inside of the glassware and form in both phases. Those in the upper hexane-rich phase drip down to interface, but the drops in the lower methanol-rich phase drip upwards to the interface!

13. The flask is continuing to cool and the nominal temperature is about 28°C. The cloudiness is nearly gone and one can now clearly see the droplets described in Picture 12.

14. The flask is continuing to cool and the nominal temperature is about 26°C. The cloudiness is nearly gone and the droplets described in Picture 12 are clearly visible.

15. The flask has now been left for one hour since Picture 14 was taken and its nominal temperature is about 18 °C. The cloudiness is nearly gone and the droplets described in Picture 12 are still clearly visible. Phase separation is essentially complete and the system has returned to its equilibrium state similar to that visible in Picture 1.

 

Not all mass derives from the Higgs!

July 15, 2012

An unspectacular representation of ‘the Higgs result’. Slightly more events of a particular kind have been seen when particles collide with a particular energy. Picture from DOE Tevatron.

My last course in particle physics at University represented the high water-mark of my understanding of the subject. But I was already struggling. So in the matter of the Higgs discovery my scientific qualifications offer me only the slightest of advantages.

Fortunately the man who was responsible for what understanding I once possessed, David Bailin, responded to my plea for help:

As you say, the Higgs mechanism gives masses to all (truly elementary) particles, … but that certainly does not account for most of the mass of ordinary matter like us. Most of the mass of neutrons and protons derives from non-perturbative strong interaction effects in the gluons (and quark-anti-quark pairs) that bind the valence quarks in the nucleon.

The key words here are ‘truly elementary’. As I understand this, the Higgs mechanism is responsible for all the mass of electrons, muons, and quarks. But it is not responsible for all – or even most – of the mass of composite particles such as mesons, protons and neutrons (glossary here). I think this is an interesting subtlety to the more general ‘Higgs gives mass’ story that has been widely reported. This simple story is true in that the quark-quark interactions that give rise to most of the mass of ordinary matter would not have got started if the Higgs field had not given some initial mass.

The idea is that as the Universe cooled, the Higgs field first gave truly elementary particles their initial mass, and then later on these particles interacted and acquired the bulk of their additional mass as a consequence.

Is that clearer now?

===========

Full Quote from David Bailin

As you say, the Higgs mechanism gives masses to all (truly elementary) particles, except those like the photon, gluons (and graviton) that are protected by an unbroken gauge symmetry. But that certainly does not account for most of the mass of ordinary matter like us. Most of the mass of neutrons and protons derives from non-perturbative strong interaction effects in the gluons (and quark-anti-quark pairs) that bind the valence quarks in the nucleon. That’s what the lattice QCD people have spent so long and so much money trying to compute. The nucleon masses would hardly change even if there was no Higgs effect. Gravity is coupled to anything with a non-zero energy momentum tensor, and that certainly includes the “condensate” in the nucleons, as well as the Higgs field.

Another thought on Higgs

July 9, 2012

Another incomprehensible image typically used to illustrate stories about the Higgs Boson. It shows ‘things’ shooting out from the point where two protons have been smashed together. Picture stolen from The Guardian

I had one more thought about the recent discovery of the Higgs particle: if the Higgs is the particle which gives ‘mass’ to all the other particles, then surely the nature of the Higgs must be linked to the nature of gravity?

As you may be aware, the concept of ‘mass’ enters our lives in two quite distinct ways: as inertial mass and as gravitational mass.

  • Inertial mass is the property of an object which makes it harder to speed up or slow down. This is encapsulated in Newton’s Second Law of Motion: that the amount of force required to achieve a given acceleration is related to the inertial mass of the object.
  • Gravitational mass is the property of an object which makes it attract other objects at a distance through space. This is encapsulated in Newton’s Law of Universal Gravitation: that all the matter in the universe attracts all the other matter with a force which is inversely proportional to the square of the distance between the objects.
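
For reference, the two laws referred to above have their familiar textbook forms; the notation below ($F$ force, $m$ mass, $a$ acceleration, $G$ the gravitational constant, $r$ the separation) is the standard one rather than anything specific to the Higgs story.

```latex
% Minimal standalone snippet stating the two laws in standard notation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  F &= m\,a                   && \text{(Newton's Second Law: $m$ is the inertial mass)}\\
  F &= \frac{G\,m_1 m_2}{r^2} && \text{(Universal Gravitation: $m_1$, $m_2$ are gravitational masses, $r$ their separation)}
\end{align*}
\end{document}
```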

Einstein was fascinated by the simple observation that inertial and gravitational mass were – as well as can be measured – always exactly equal. From this insight he was inspired to derive his General Theory of Relativity which is based on the central tenet – the principle of equivalence – that these two types of mass are not in fact two distinct properties, but one single property.

When scientists say the Higgs particle is responsible for giving ‘mass’ to all the other particles, they mean inertial mass. But this will also have a gravitational effect. I have not seen any discussion of this feature in the news, but surely, if the Higgs particle gives rise to both types of mass, it must provide some kind of link between the two different manifestations of mass. Discovering any kind of connection at all between the Electroweak force, the Strong force and Gravity would really be a major step forward in our understanding of the Universe.

Or maybe I have completely missed the point?

Noticing the obvious

June 11, 2012

An artistic visualisation of the concept of equality of positive and negative electrical charges.

I tidied my office last week, and flicking through my pile of ‘papers to read later’ I came upon:

The electrical neutrality of atoms and of bulk matter
C. S. Unnikrishnan and G. T. Gillies
Metrologia 41 (2004) S125-S135

The basic question asked in the paper was:

  • Is the electrical charge on the electron equal in magnitude to the electrical charge on the proton?

Or alternatively:

  • Is bulk matter electrically neutral?

What struck me about the paper was: ‘Why had I never asked either of these questions myself?’ I think it is because I had somehow thought the answers were ‘obvious’ or ‘self-evident’, but in fact they are neither. They are – IMHO – bloody good questions!

Ultimately the questions are experimental, and a number of clever techniques have been used to provide the answers. So far we know that the charge on the electron and the charge on the proton are equal in magnitude to within 1 part in 10²¹. Wow! I don’t know of any other two experimental numbers that are known to be equal with that degree of precision.

Electrons have a completely different internal structure to protons and neutrons. Electrons are part of a class of particles called leptons (‘light’* ones) which as far as we know have no internal structure. Protons and neutrons are part of a class of particles called baryons (‘heavy’** ones) and we know they are in some sense composed of quarks. Each proton or neutron is composed of three quarks.

Given the completely different structure of each type of particle, the equality of the magnitude of the electric charge on each goes from being ‘remarkable’ to ‘astounding’! And even though we have no fundamental explanation of what electric charge is, it is shocking to find that two types of particle with completely different structures should have exactly the same amount of it – whatever ‘it’ is. In fact, I refuse to believe it. It seems inevitable that we will eventually discover some tiny difference between the magnitude of the electric charge on the proton and the electron.

What would the consequences of such a discovery be? Unnikrishnan and Gillies cover these in some detail but two points caught my attention. The first was a comment by Einstein, noting that if normal matter were not exactly neutral, then when it flowed, it could generate a magnetic field.

The Earth and Sun have magnetic fields, the orientation of which stand in approximate relationship with the axes of rotation… But it is hard to imagine that electrical conduction or convection currents of sufficient magnitudes are really present… It rather looks as if the cyclic motions of neutral matter are producing magnetic fields.

And the second was an empirical law proposed by Schuster who noticed that the strength of the magnetic field around a range of planets and galaxies was linked to their respective angular momenta. Could that be evidence that bulk matter is not perfectly neutral?

Now these suggestions are highly speculative, and I am sure that people cleverer than I have worked out all kinds of reasons why they do, or do not, make sense. But whether or not they turn out to be true, I feel like I have had my sense of what is ‘normal’ re-adjusted. I am left with the sense that something I experience everyday – the neutrality of bulk matter – is not in any sense obvious.

======================================

* Light in the sense of low mass rather than in the sense of electromagnetic radiation

** Heavy in the sense of high mass rather than in the sense of really serious

The Gravity Gnome

April 27, 2012

Weighing a gnome is actually a way of probing the gravity field around us.

Gravity is the most mysterious of the forces we experience in our lives. Impossible to screen against, it extends throughout space to the farthest corners (corners?) of the cosmos, causing every piece of matter in the Universe to affect every other. Wow!

More prosaically, gravity gives rise to the phenomenon of ‘weight’ – the force which pulls us ‘down’ to the Earth. GCSE students are tutored on the difference between mass and weight, and are told that the weight of an object, say a Gnome, varies from one planet to another, but its mass is the same on any planet. However, the Kern instrument company are keen to point out that if you use a sensitive force balance, its weight changes from place to place around the Earth.

The balance doesn’t even need to be that sensitive. I was surprised to find out by how much the weight of an object measured at a fixed height above sea level changes with latitude and longitude: it varies by around 0.5%. So for a Gnome weighing around 300 g, changes of 1.5 g should be seen and this is easily detectable. The rationale for the publicity stunt is explained here, and you can follow the Kern Gnome on his journey here. I like this experiment because the measurement is so simple – and yet the physics it uncovers is so profound.
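
To see roughly where the 0.5% figure comes from, here is a small Python sketch using a commonly quoted approximation for sea-level gravity as a function of latitude (the 1980 international gravity formula); the 300 g gnome is taken from the text, and height, local geology and longitude effects are ignored.

```python
# Sketch: how much a 300 g gnome's weight varies between equator and pole,
# using an approximate formula for sea-level gravity versus latitude.
import math

def gravity_at_sea_level(latitude_deg):
    """Approximate sea-level gravity (m/s^2) at a given latitude."""
    s1 = math.sin(math.radians(latitude_deg))
    s2 = math.sin(2.0 * math.radians(latitude_deg))
    return 9.780327 * (1 + 0.0053024 * s1**2 - 0.0000058 * s2**2)

mass_kg = 0.300                      # the gnome
g_equator = gravity_at_sea_level(0.0)
g_pole = gravity_at_sea_level(90.0)

print(f"g at equator ≈ {g_equator:.4f} m/s^2, at the pole ≈ {g_pole:.4f} m/s^2")
print(f"Fractional change ≈ {(g_pole - g_equator) / g_equator * 100:.2f} %")
# A balance calibrated at the equator would read this much 'extra' at the pole:
print(f"Apparent change for the gnome ≈ {mass_kg * 1000 * (g_pole / g_equator - 1):.1f} g")
```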

The light-hearted video below shows my children’s reflections on the mystery of Gravity.

P.S. After a period of steady decline, my own weight has been mysteriously increasing. I think this may be due to a fluctuation in the gravitational constant G. More about this in future articles.

My big problem with astronomy

February 22, 2012

A Pretty Galaxy - the subject of erudite speculation by astronomers and mindless reporting by hacks. The circle marks the apparent location of a black hole called HLX-1. Don't believe the colours - the picture is 'data' and not a photograph. The galaxy - which is inferred to be spiral in shape even though we see it edge on - is called ESO 243-49

Recent stories in Wired and The Register illustrate perfectly everything I hate about popular astronomy. First of all, you can see these are both routine hacks by comparing them with the press release.

Don’t get me wrong: I am filled with admiration for astronomers: their instruments are astounding; the maths and physics of observing is inspiring; and of course the Universe is just breathtakingly beautiful. What irritates the pants off me is the ridiculous desire to ‘explain’ what they observe. What we end up with is a pretty picture and a fantastical, unverifiable ‘sciency’ tale. Frankly we would be better off with just the pretty picture and a good old-fashioned ‘fairy’ tale.

To explain what I mean I have reproduced extracts from the ‘Wired’ article below in blue with what the article should (IMHO) have said.

Wired: The Hubble space telescope has spotted a supermassive black hole floating on the outskirts of a large galaxy.
Actual: Scientists looking at data from the Hubble Space Telescope have inferred the existence of a black hole near a large galaxy.(How?)

Wired: The location is odd because black holes of this size generally form in the centers of galaxies, not at their edges. This suggests the black hole is the lone survivor of a now-disintegrated dwarf galaxy.
Actual: The location is odd because evidence indicates black holes of this size are generally  found near the centers of galaxies, not at their edges. Scientists don’t understand this.

Wired:The black hole — named HLX-1 — is 20,000 times more massive than the sun, and is situated 290 million light-years away at the edge of the spiral galaxy ESO 243-49.
Actual: The black hole — named HLX-1 — is estimated to be 20,000 times more massive than the Sun (how?), and is estimated to be 290 million light-years away at the edge of the spiral galaxy ESO 243-49

Wired: Hubble detected a great deal of energetic blue light coming from the black hole’s accretion disk — a massive collection of gas and dust that spirals into the black hole’s maw, generating x-rays. But scientists studying Hubble’s data also noticed the presence of cooler, red light, which shouldn’t have been there.
Actual: Hubble detected blue light  and red light. They inferred that the blue light came from an accretion disk. But they couldn’t understand the red light.

Wired: Astronomers suspect the red light indicates the existence of a cluster of young stars, roughly 200 million years old, orbiting around the black hole. These stars, in turn, are the key to explaining the chaotic history of the supermassive black hole.
Actual: Astronomers could explain the red light if there were young stars orbiting around the black hole. They even thought up a story about these unobserved stars that might actually exist.

Wired: HLX-1 was likely formed at the center of a dwarf galaxy that once orbited ESO 243-49. But in this dog-eat-dog universe of ours, large galaxies often swallow up their smaller brethren. When the dwarf galaxy came too close to ESO 243-49, the larger galaxy plucked away most of its stars, leaving behind the exposed central black hole. The force of the galaxies’ collision would have also triggered the formation of new stars, explaining the presence of a young stellar cluster around the black hole. The cluster’s age, 200 million years, gives a good estimate of when the merger occurred. HLX-1 may now be following the same fate as its parent galaxy, slowly getting sucked into ESO 243-49. But researchers don’t know the details of the black hole’s orbit, so it could also possibly form a stable orbit around the larger galaxy, circling as the isolated reminder of a vanished dwarf.
Actual: HLX-1 was likely formed when a space dragon called PTMD-X1 laid an egg, which grew into a blue headed X-ray dragon. Astronomers speculate that the dragon’s mother died when it was just 200 million years old  causing the youngster to cry tears which then turned into stars through a process astronomers call tear-star -formification. The blue colour of the stars shows the dragon was sad and astronomers hope that it is happier now and has made friends.

Infinities in nature 2: SUSY, Squarks and Sleptons are the answer!

December 23, 2011

David Bailin, my former tutor at Sussex University, left some detailed comments on my article on the idea that there are infinities in nature. His comments deserve a wider audience, particularly Part 2 on supersymmetry and the Higgs. So here they are:

The infinity in QED, and other quantum field theories, derives from a short-distance cut-off which should be zero, IF there is no new physics at all. We know that QED does not include quantum gravitational effects, and the electron surely interacts with gravity. So at the very least there should be a cut-off at the Planck length, corresponding to an energy scale of 10^{19} GeV. The observed (finite) mass is then the sum of the bare mass and the finite quantum radiative correction, so evidently the bare mass is finite too. I share your (and Dirac’s) view that infinity in a physical theory is a sign that something is wrong, i.e. that there is other relevant, generally new, physics.

Further, the radiative correction to the electron’s mass, and in general any fermion’s, depends only logarithmically on the cut-off. Even if the cut-off is as large as the Planck scale, the radiative correction is of the same order of magnitude as the observed mass, and so, therefore, is the bare mass.

Now suppose that the Higgs boson is indeed discovered at the LHC, with a mass of order 125 GeV/c^2. Unlike a fermion, the radiative correction to the mass(-squared) of a SCALAR particle is proportional to the cut-off (squared). In this case then, if the cut-off is of order the Planck scale, the bare mass(-squared) must also be of this order; in fact the bare mass-squared and the radiative correction have to cancel to one part in 10^{34}! Many people, including me, think that this is implausible, although there is no theoretical reason why it should not be thus. It is an aesthetic objection called the “fine-tuning problem”. The only known way to evade it is SUPERSYMMETRY (SUSY), which requires all of the known particles to have partners with the opposite statistics: fermions have (scalar) boson partners (selectrons, squarks, sleptons), and bosons have fermionic partners (photinos, gluinos, Winos, Zinos, Higgsinos). None have yet been observed, so the symmetry cannot be exact. However, the breaking cannot be at too high a scale. Otherwise it will not solve the fine tuning problem. This is why we expect SUSY to be discovered soon. It is another reason why the discovery of the Higgs would be of such importance. It would be the first known fundamental scalar particle, and would show that there is no obstruction in principle to the existence of the susy scalar particles.
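
A crude way to appreciate the size of the cancellation described above is simply to compare the two scales numerically; the sketch below does this for the 125 GeV/c^2 Higgs mass and the Planck-scale cut-off quoted in the comment, and is only an order-of-magnitude illustration.

```python
# Order-of-magnitude illustration of the 'fine-tuning problem' described above:
# if the radiative correction to the Higgs mass-squared is of order the
# cut-off squared, how precisely must the bare term cancel it to leave an
# observed mass of about 125 GeV/c^2?
observed_mass_gev = 125.0    # Higgs mass scale quoted above
cutoff_gev = 1e19            # Planck-scale cut-off quoted above

required_cancellation = observed_mass_gev**2 / cutoff_gev**2
print(f"Required cancellation ≈ 1 part in {1 / required_cancellation:.1e}")
# Gives roughly 1 part in 10^34, matching the figure quoted in the comment.
```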

Well, I think those are the best new particle names I have come across in years. I can’t wait until they discover a Wino in the LHC :-).

Happy Christmas

Are there infinities in nature?

December 20, 2011

Infinity is a mathematical quantity - no physical quantity is ever infinite.

The concept of infinity is endlessly fascinating – for philosophers, for mathematicians and, of course, for children. We have all wondered how any number can be infinite, because we can always imagine making a larger number: infinity plus one. The concept of infinity is also discussed in the context of the physical sciences. However, people frequently neglect to mention that there are no infinite quantities in physics. None at all? Well, I don’t think there are. I think that infinity is fundamentally a mathematical concept, and that in reality something else always happens that interferes with the extreme physics that would occur near an infinite quantity of anything.

Are you sure? Well ‘No’.

I agree that in the physical sciences, the word infinity is often used. I remember being taught at school that when one’s eyes were relaxed they were ‘focussed on infinity’. I wondered about that phrase for a long time, but in the end I concluded it was just poetry. It means one’s eyes were set so that parallel light would be focussed onto one’s retina. If the light had indeed come from infinity,  it would – of course – not have reached us yet.

What about the singularity around an electron?  At Sussex University, Dr David Bailin taught me that a ‘bare’ electron would have infinite mass and infinite electrical charge. I was told that the particle that we observe and call ‘an electron’  is actually a hypothetical ‘bare electron’ plus its ‘re-normalised cloud of virtual electrons and positrons’. This cloud is created by the intense electric field around the infinite electrical charge. What that means is that in reality we never observe an electron to have infinite mass, but theoretical physicists imagine a real electron as being composed of a hypothetical singular particle plus another phenomenon that hides the singularity. The upshot of this is that a ‘bare electron’ is a mathematical concept – not a physical one. When we look at electrons we never observe an infinite property.

Similarly, it is popularly stated that a black hole is a ‘gravitational singularity’ – an infinitely strong peak of gravitational intensity. But of course we have no observations of this and, based on everything we know, we would expect that as the field intensity increased – ‘something’ would happen. Currently, we have no idea what that ‘something’ is. But there is certainly no experimental evidence that a gravitational singularity actually exists. The intensity can reach any amazingly large number. It can reach an intensity which is so alarming that it takes my breath away. But as long as a number can be associated with it – it is not infinite. I am prepared to be ‘boggled’ by large numbers but not ‘baffled’ by an unphysical concept.

What about the Big Bang? At the time of the Big Bang, something is supposed to have exploded for reasons we have not yet figured out. Amazingly it seems that the vast universe that we observe, and all the energy in it, was once packed closely together in the space occupied by a proton. So the universe was once unimaginably hot and dense. Do I mean infinitely hot and dense? No, not infinitely so, just orders of magnitude beyond any regular conception of temperature or density.

What about the Universe? Well there are many different conceptions of the Universe – and we have data from the cosmic microwave background indicating that the furthest structures that we can see are at most around 40 billion light years distant. Beyond that there is certainly something else, but we just don’t know what. But surely, I hear you ask, one can always keep going? Well actually, we just don’t know! In some conceptions one can, and in others one can’t. But there is no reason to suppose that the Universe is infinite – only that it is even more uncomfortably large than we previously conceived.

So is infinity unphysical? Well I think so. I would love to hear of an example of an infinite quantity, but actually I just don’t think that an infinite ‘anything’ makes any sense at all. And as a measurement scientist I would be very interested to know of the uncertainty of measurement of an infinite quantity: infinity plus or minus what?



