Archive for the ‘The Future’ Category

How do we know anything?

November 18, 2017


This is an edited video of a talk I gave recently to the NPL Postgraduate Institute about the forthcoming changes to the International System of Units, the SI.

It’s 36 minutes long and you can download the PowerPoint slides here.

It features the first ever public display of the Standard Michael – the artefact defining length in the SM, le systeme de moi, or My System of Units. (It’s shown at about 6 minutes 40 seconds into the video).

The central thesis of the talk is summarised in the slide below:

[Slide: Measurement]

In the talk I explain how the forthcoming changes to the SI will improve future measurements.

I hope you enjoy it.

 

The Past, Present and Future of Measurement

October 22, 2017

Measurement, the simple process of comparing an unknown quantity with a standard quantity, is the essential component of all scientific endeavours. We are about to enter a new epoch of metrology, one which will permit the breathtaking progress of the last hundred years to continue unimpeded into the next century and beyond.

The dawn of this new age has been heralded this week by the publication of an apparently innocuous paper in the journal Metrologia. The paper is entitled:

Data and Analysis for the CODATA 2017 Special Fundamental Constants Adjustment

and its authors, Peter Mohr, David Newell, Barry Taylor and Eite Tiesinga, constitute the Committee on Data for Science and Technology, commonly referred to as CODATA. In this article I will try to explain the relevance of CODATA’s paper to developments in the science of metrology.

The Past

The way human beings began to make sense of their environment was by measuring it. We can imagine that our agrarian ancestors might have wondered whether crops were taller or heavier this year than last. Or whether plants grew better in one field rather than another. And they would have answered these questions by creating standard weights and measuring rods.

But to effectively communicate their findings, the standard units of measurement would need to be shared. First between villages, and then towns, and then counties and kingdoms. Eventually entire empires would share a system of measurement.

First, units of weight and length were shared. Then, as time became more critical for scientific and technical endeavours, units of time were added to systems of measurement. And these three quantities – mass, length and time – are shared by all systems of units.

These quantities formed the so-called ‘base units’ of a system of measurement. Many other quantities could be described in terms of these ‘base units’. For example, speeds would be described in multiples of [the base unit of length] divided by [the base unit of time]. They might be [feet] per [second] in one system, or [metres] per [second] in another.

Over the last few hundred years, the consistent improvement in measurement techniques has enabled measurements with reduced uncertainty. And since no measurement can ever have a lower uncertainty than the standard quantity in that system of units, there has been a persistent drive to have the most stable, most accurately-known standards, so that they do not form a barrier to improved measurements.

The Present

Presently, all scientific and technical measurements on Earth are made using the International System of Units, the SI. The naming of this system – as an explicitly international system – represented a profound change in conception. It is not an ‘imperial’ system or an ‘English’ system, but a shared enterprise administered by the International Bureau of Weights and Measures (BIPM), a laboratory located in diplomatically-protected land in Sèvres, near Paris, France. Its operation is internationally funded by the dozens of nations who have signed the international treaty known as the Convention of the Metre.

In essence, the SI is humanity’s standard way of giving quantitative descriptions of the world around us. It is really an annex to all human languages, allowing all nationalities and cultures to communicate unambiguously in the realms of science and engineering.

Founded in 1960, the SI was based upon the system of measurement using the metre as the unit of length, the kilogram as the unit of mass, and the second as the unit of time. It also included three more base units.

The kelvin and degree Celsius were adopted as units of temperature, and the ampere was adopted as the unit of electric current. The candela was defined as the unit of luminous intensity – or how bright lights of different colours appear to human beings. And then in 1971 the often qualitative science of chemistry was included in the fold with the introduction of the mole as a unit of amount of substance, a recognition of the increasing importance of analytical measurements.

[Figure: the SI circle of base units, without the defining constants]

The SI is administered by committees of international experts that seek to make sure that the system evolves to meet humanity’s changing needs. Typically these changes are minor and technical, but in 1983 an important conceptual change was made.

Since the foundation of the SI, the ability to measure time had improved more rapidly than the ability to measure length. It was realised that if the metre was defined differently, then length measurements could be improved.

The change proposed was to define what we mean by ‘one metre’ in terms of the distance travelled by light, in a vacuum, in a fixed time. Based on Einstein’s insights, the speed of light in a vacuum, c, is thought to be a universal constant, but at the time it had to be measured in terms of metres and seconds, i.e. human-scale measurement standards. This proposal defined the metre in terms of a natural constant – something we believe is truly constant.
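The arithmetic behind that definition is simple enough to check on the back of an envelope. Here is a trivial sketch of my own (an illustration, not part of the formal definition documents):

```python
c = 299_792_458          # speed of light in vacuum, m/s (fixed exactly)

# One metre is the distance light travels in 1/c seconds
t = 1 / c
print(f"Light covers one metre in {t * 1e9:.3f} nanoseconds")   # ~3.336 ns
```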

The re-definition went well, and set metrologists thinking about whether the change could be adopted more widely.

The Future

Typically every four years, CODATA examines the latest measurements of the natural constants and proposes the best estimates of their values.


This is strange. We believe that the natural constants are really constant, not having changed measurably since the first few seconds of our universe’s existence, whereas our human standards are at most a few decades old and (as with all human standards) are subject to slow changes. Surely it would make more sense to base our measurement standards on these fundamental constants of the natural world? This insight is at the heart of the changes which are about to take place. The CODATA publication this week is the latest in a series of planned steps that will bring about this change on 20th May 2019.


After years of work by hundreds of scientists, the values of the natural constants recommended by the CODATA committee will be fixed – and will form the basis for the new definitions of the seven SI base units.

What will happen on 20th May 2019?

On the 20th May 2019, revised definitions of four of the base units of the SI will come into force. More than 10 years of careful measurements by scientists world-wide will ensure that the new definitions are, as closely as possible, equivalent to the old definitions.

The change is equivalent to removing the foundations underneath a structure and then inserting new foundations which should leave the structure supported in exactly the same way. However the new foundations – being based on natural constants rather than human artefacts – should be much more stable than the old foundations.

If the past is any guide to the future, then in the coming decades and centuries, we can anticipate that measurement technology will improve dramatically. However we cannot anticipate exactly how and where these improvements will take place. By building the SI on foundations based on the natural constants, we are ensuring that the definitions of the unit quantities of the SI will place no restriction whatever on these future possible improvements.

The kilogram

The kilogram is the SI unit of mass. It is currently defined as the mass of the International Prototype of the Kilogram (IPK), a cylinder of platinum-iridium alloy held in a safe at the BIPM. Almost every weighing in the world is, indirectly, a comparison against the mass of this artefact.

On 20th May 2019, this will change. Instead, the kilogram will be defined in terms of a combination of fundamental constants including the Planck constant, h, and the speed of light, c. Although more abstract than the current definition, the new definition is thought to be at least one million times more stable.
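To give a flavour of what ‘defined in terms of fundamental constants’ means in practice, here is a rough numerical sketch of my own, using the fixed values of h, c and the caesium frequency that define the second and the metre:

```python
# Fixed values used in the revised SI (illustrative check only)
h      = 6.626_070_15e-34   # Planck constant, J s = kg m^2 / s
c      = 299_792_458        # speed of light in vacuum, m/s
dnu_Cs = 9_192_631_770      # caesium hyperfine frequency, Hz (defines the second)

# The combination h * dnu_Cs / c^2 has units of kilograms,
# so one kilogram is a (very large) multiple of it
kg_as_multiple = 1 / (h * dnu_Cs / c**2)
print(f"1 kg ≈ {kg_as_multiple:.4e} × (h · ΔνCs / c²)")   # ≈ 1.4755e+40
```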

The new definition will enable a new kind of weighing technology called a Kibble balance. Instead of balancing the weight of a mass against another object whose mass is known by comparison with the IPK, the weight will be balanced by a force which is calculable in terms of electrical power, and which can be expressed as a multiple of the fundamental constants e, h and c.
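The principle can be summarised in two equations (a textbook-level sketch, not a description of any particular instrument). The balance is operated in two modes, and combining them eliminates the magnetic field strength B and the coil length L entirely:

```latex
% Two-mode operation of a Kibble balance (schematic)
\begin{aligned}
\text{weighing mode:} \quad m g   &= B L I \\
\text{moving mode:}   \quad U     &= B L v \\
\text{combined:}      \quad m g v &= U I
\end{aligned}
```

Because the voltage U and current I can be measured using the Josephson and quantum Hall effects, the electrical power UI – and hence the mass m – can be expressed in terms of the Planck constant h.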

The ampere

The ampere is the SI unit of electrical current. It is presently defined in terms of the current which, if it flowed in two infinitely thin, infinitely long, parallel wires would (in vacuum) produce a specified force between the wires. This definition, arcane even by metrologists’ standards, was intended to allow the measurement of the ampere in terms of the force between carefully constructed coils of wire. Sadly, it was out of date shortly after it was implemented.

On 20th May 2019, this will change. Instead, the ampere will be defined in terms of a particular number of electrons per second, each with an exactly specified electrical charge e, flowing past a point on a wire. This definition finally corresponds to the way electric current is described in textbooks.

The new definition will give impetus to techniques which create known electrical currents by using electrical devices which can output an exactly countable number of electrons per second. At the moment these devices are limited to approximately 1 billion (a thousand million) electrons per second, but in future this is likely to increase substantially.
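To put numbers on that, here is my own illustrative arithmetic using the fixed value of the elementary charge:

```python
e = 1.602_176_634e-19    # elementary charge, C (fixed exactly in the revised SI)

# One ampere is one coulomb per second, i.e. 1/e electrons per second
print(f"1 A ≈ {1 / e:.3e} electrons per second")          # ≈ 6.24e18

# A single-electron pump delivering ~1e9 electrons per second
# (roughly the state of the art mentioned above) gives a tiny current
print(f"Pump current ≈ {1e9 * e * 1e9:.2f} nA")           # ≈ 0.16 nA
```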

The kelvin

The kelvin is the SI unit of temperature. It is currently defined as the temperature of the ‘triple point of water’. This temperature – at which liquid water, solid water (ice) and water vapour (but no air) co-exist in equilibrium – is defined to be 273.16 kelvin exactly. Glass cells re-creating this conjunction are located in every temperature calibration lab in the world, and every temperature measurement is a comparison of how much hotter a temperature is than the temperature at one position within a ‘triple point of water cell’.

On 20th May 2019, this will change. Instead, the kelvin will be defined in terms of a particular amount of energy per molecule as specified by the Boltzmann constant, kB. This definition finally corresponds to the way thermal energy is described in textbooks.
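Again, a small illustrative calculation of my own shows what ‘energy per molecule’ means at a familiar temperature:

```python
k_B = 1.380_649e-23      # Boltzmann constant, J/K (fixed exactly in the revised SI)

T = 273.16               # the triple point of water, in kelvin
print(f"k_B × {T} K ≈ {k_B * T:.3e} J per molecule")   # ≈ 3.77e-21 J
```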

The requirement to compare every measurement of temperature with the temperature of the triple point of water adds uncertainty to measurements at extremely low temperatures (below about 20 K) and at high temperatures (above about 1300 K). The new definition will immediately allow small improvements in these measurement ranges, and further improvements are expected to follow.

The definition of the degree Celsius (°C) in terms of the kelvin will remain unchanged.

The mole

The mole is the SI unit of ‘amount of substance’. It is currently defined as the amount of substance which contains the same number of ‘elementary entities’ as there are atoms in 12 grams of carbon-12. The change in the definition of the kilogram required a re-think of this definition.

On 20th May 2019, it will change. The mole will be defined as the amount of substance which contains a particular, exactly specified, number of elementary entities. This number – known as the Avogadro number, NA – is currently estimated experimentally, but in future it will have a fixed value.

The specification of an exact number of entities effectively links the masses of microscopic entities such as atoms and molecules to the new definition of the kilogram.
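One consequence, sketched below with approximate numbers of my own, is that a mole of carbon-12 will still have a mass extremely close to 12 g, but that value becomes an experimentally determined quantity rather than an exact one:

```python
N_A = 6.022_140_76e23        # Avogadro constant, 1/mol (fixed exactly)

# Approximate (experimentally determined) mass of one carbon-12 atom
m_atom = 1.992_646_88e-26    # kg

molar_mass = N_A * m_atom * 1000   # g/mol
print(f"Molar mass of carbon-12 ≈ {molar_mass:.7f} g/mol")   # ≈ 12.0000000, but no longer exact by definition
```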

The ‘New’ SI

On 20th May 2019 four of the seven base units of the SI will be re-defined. But what of the other three?

The second is already defined in terms of the natural frequency of microwaves emitted by atoms of a particular caesium isotope. The metre is defined in terms of the second and the speed of light in vacuum – a natural constant. And the candela is defined in terms of Kcd, the only natural constant in the SI that relates to human beings. So from 20th May 2019 the entire SI will be defined in terms of natural constants.

[Figure: the SI circle of base units, with their defining constants]

The SI is not perfect. And it will not be perfect even after the redefinitions come into force. This is because it is a system devised by human beings, for human beings. But by incorporating natural constants into the definitions of all its base units, the SI has taken a profound step towards being a system of measurement which will enable ever greater precision in metrology.

And who knows what features of the Universe these improved measurements will reveal.

Talking about the ‘New’ SI

July 3, 2017

I was asked to give a talk about the SI to some visitors tomorrow morning, and so I have prepared some PowerPoint slides.

If you are interested, you can download them using this link (.pptx 13 Mb!): please credit me and NPL if you use them.

But I also experimentally narrated my way through the talk and recorded the result as a movie.

The result is… well, a bit dull. But if you’re interested you can view the results below.

I have split the talk into three parts, which I have called Part 1, Part 2 and Part 3.

Part 1: My System of Units

This 14 minute section is the fun part. It describes a hypothetical system of units which is a bit like the SI, but in which all the units are named after my family and friends.

The idea is to show the structure of any system of units and to highlight some potential shortcomings.

It also emphasises the fact that systems of units are not ‘natural’. They have been created by people to meet our needs.

Part 2: The International System of Units

This 22 minute section – the dullest and most rambling part of the talk – explains the subtle rationale for the changes in the SI upon which we have embarked.

There are two key ideas in this part of the talk:

  • Firstly, there is a description of the separation of the definition of a unit from the way in which copies of the unit are ‘realised’.
  • And secondly, there is a description of the role of natural constants in the new definitions of the units of the SI.

Part 3: The Kilogram Problem

This 11 minute section is a description of one of the two ways of solving the kilogram problem: the Kibble balance. It has three highlights!

  • It features a description of the balance by none other than Bryan Kibble himself.
  • There is an animation of a Kibble balance which takes just seconds to play but which took hours to create!
  • And there are also some nice pictures of the Mark II Kibble Balance installed in its new home in Canada, including a short movie of the coil going up and down.

Overall

This is all a bit dull, and I apologise. It’s an experiment and please don’t feel obliged to listen to all or any of it.

When I talk to a live audience I hope it will all be a little punchier – and that the 2800 seconds it took to record this will be reduced to something nearer to its target 2100 seconds.

 

 

 

Not everything is getting worse!

April 19, 2017

[Chart: carbon intensity of UK electricity generation, April 2017]

Friends, I find it hard to believe, but I think I have found something happening in the world which is not bad. Who knew such things still happened?

The news comes from the fantastic web site MyGridGB which charts the development of electricity generation in the UK.

On the site I read that:

  • At lunchtime on Sunday 9th April 2017, 8 GW of solar power was generated.
  • On Friday all coal power stations in the UK were off.
  • On Saturday, strong winds and solar combined with low demand to briefly provide 73% of power.

All three of these facts fill me with hope. Just think:

  • 8 gigawatts of solar power. In the UK! IN APRIL!!!
  • And no coal generation at all!
  • And renewable energy providing 73% of our power!

Even a few years ago each of these facts would have been unthinkable!

And even more wonderfully: nobody noticed!

Of course, these were just transients, but they show we have the potential to generate electricity with a significantly lower carbon intensity.

Carbon Intensity is a measure of the amount of carbon dioxide emitted into the atmosphere for each unit (kWh) of electricity generated.

Wikipedia tells me that electricity generated from:

  • Coal has a carbon intensity of about 1.0 kg of CO2 per kWh
  • Gas has a carbon intensity of about 0.47 kg of CO2 per kWh
  • Biomass has a carbon intensity of about 0.23 kg of CO2 per kWh
  • Solar PV has a carbon intensity of about 0.05 kg of CO2 per kWh
  • Nuclear has a carbon intensity of about 0.02 kg of CO2 per kWh
  • Wind has a carbon intensity of about 0.01 kg of CO2 per kWh

The graph at the head of the page shows that in April 2017 the generating mix in the UK had a carbon intensity of about 0.25 kg of CO2 per kWh.
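The arithmetic behind a figure like this is simple: the carbon intensity of the grid is the generation-weighted average of the intensities listed above. The sketch below uses an invented mix purely to illustrate the calculation – it is not the actual April 2017 data:

```python
# kg of CO2 per kWh, from the list above
intensity = {"coal": 1.0, "gas": 0.47, "biomass": 0.23,
             "solar": 0.05, "nuclear": 0.02, "wind": 0.01}

# Hypothetical shares of generation (summing to 1) – invented for illustration
share = {"coal": 0.05, "gas": 0.45, "biomass": 0.05,
         "solar": 0.10, "nuclear": 0.20, "wind": 0.15}

grid_intensity = sum(share[k] * intensity[k] for k in intensity)
print(f"Carbon intensity of this mix ≈ {grid_intensity:.2f} kg CO2 per kWh")  # ≈ 0.28
```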

MyGridGB’s mastermind is Andrew Crossland. On the site he has published a manifesto outlining a plan which would actually reduce our carbon intensity to less than 0.1 kg of CO2 per kWh.

What I like about the manifesto is that it is eminently doable.

And who knows? Perhaps we might actually do it?

Ahhhh. Thank you Andrew.

Even thinking that a good thing might still be possible makes me feel better.

 

Remarkably Unremarkable

February 24, 2017


The ‘Now’

‘The future’ is a mysterious place.

And our first encounter with ‘the future’ is ‘the now’.

Today I felt like I encountered the future when I drove a car powered by a hydrogen fuel cell. And far from being mysterious it was remarkably unremarkable.

The raw driving experience was similar to using a conventional car with automatic transmission.

But instead of filling the car with liquid fuel derived from fossil plant matter, I filled it with hydrogen gas at a pressure 700 times greater than atmospheric pressure.


This was achieved using a pump similar in appearance to a conventional petrol pump.


This was the interface to some industrial plant which generated 80 kg of hydrogen each day from nothing more than electricity and water. This is enough to fill roughly 20 cars.

This is small scale in comparison with a conventional petrol station, but these are early days. We are still at the interface with the future. Or one possible future.

The past

Some years ago, I remember making measurements of the temperature and humidity inside a fuel cell during operation.

The measurements were difficult, and the results surprising – to me at least.

And at the end of the project I remember thinking “Well, that was interesting, but it will never work in practice”.

Allow me please to eat my words: it works fine.

Today I was enormously impressed by the engineering prowess that made the fuel cell technology transparent to the driver.

The future

What I learned today was that the technology to make cars which emit no pollution at their point of use exists, now.

The range of this car is 300 miles and it takes only 5 minutes to re-fill. When there are more re-filling stations than the dozen or so currently around the UK, this will become a very attractive proposition.

I have no idea if fuel cell cars will become ubiquitous. Or whether they will become novelties like steam-powered cars from the end of the nineteenth century.

Perhaps this will represent the high-water mark of this technology. Or perhaps this will represent the first swallow in a summer of fuel cell cars.

None of us can know the future. But for the present, I was impressed.

It felt like the future was knocking on the door and asking us to hurry up.

When will the North Pole become the North Pool?

December 16, 2016

[Animation: Arctic sea ice extent versus month, 1979–2016 (NASA Earth Observatory)]

It is a sad fact, but it is likely that within my lifetime it will become possible to sail to the North Pole. I am 56.

Tragically it is also true that there is absolutely nothing that you or I can do about it.

In fact, even in the unlikely event that humanity en masse decided it wanted to prevent this liquefaction, there would be literally nothing we could do to stop it.

The carbon dioxide we have already put in the atmosphere will warm the Earth’s surface for a few decades yet even if we stopped all emissions right now.

Causation

The particular line of causation between carbon dioxide emissions and warming of the Arctic is long, and difficult to pin down.

Similarly it is difficult to determine if a bull in a china shop broke a particular vase, or whether it was a shop helper trying to escape.

Nonetheless, in both cases the ultimate cause is undeniable.

What does the figure show?

The animation at the head of the page, stolen from NASA’s Earth Observatory, is particularly striking and clear.

The animation shows data from 1979 up to November 2016, plotting the extent of sea ice against the month of the year.

Initially the data is stable: each year is the same. But since the year 2000, we have seen reductions in the amount of sea ice which remains frozen over the summer.

In 2012, an additional one million square kilometres – four times the area of England, Scotland and Wales combined – melted.

The summer of 2016 showed the second-largest melt on record.

The animation highlights the fact that the Arctic has been so warm this autumn that sea ice is forming at an unprecedentedly slow rate.

The Arctic Sea Ice extent for November 2016 is about one million square kilometres less than what we might expect it to be at this time of year.

My Concern 

Downloading the data from the US National Snow and Ice Data Center, I produced my own graph of exactly the same data used in the animation.

The graph below lacks the drama of the animated version at the head of the article. But it shows some things more clearly.

[Graph: Arctic sea ice extent versus month, 1979–2016]

This static graph shows that the minimum ice extent used to be stable at around 7 ± 1 million square kilometres. The minimum value in 2012 was around half that.
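For anyone who would like to reproduce this kind of plot, the processing is minimal. The sketch below assumes you have downloaded the monthly extent data as a CSV with ‘year’, ‘month’ and ‘extent’ columns – the file name and column names here are placeholders, not the exact NSIDC format:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names – adjust to match the file you download
df = pd.read_csv("arctic_sea_ice_extent_monthly.csv")

# The annual minimum extent (typically the September value)
annual_min = df.groupby("year")["extent"].min()

annual_min.plot(marker="o")
plt.xlabel("Year")
plt.ylabel("Minimum sea ice extent (million km²)")
plt.show()
```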

The animated graph at the head of the article highlights the fact that the autumn freeze (dotted blue circle) is slower than usual – something which is not clear in the static graph.

My concern is that if this winter’s freeze is ‘weak’, then the ice formed will be thin, and then next summer’s melt is likely to be especially strong.

And that raises a big question at the very heart of our culture.

When the North Pole becomes the North Pool, where will Santa live?

 

Science in the face of complexity

February 4, 2016
Jeff Dahn: Battery Expert at Dalhousie University

My mother-in-law bought me a great book for Christmas: Black Box Thinking by Matthew Syed: Thanks Kathleen 🙂

The gist of the book is easy to state: our cultural attitude towards “failure” – essentially one of blame and shame – is counterproductive.

Most of the book is spent discussing this theme in relation to the practice of medicine and the law, contrasting attitudes in these areas to those in modern aviation. The stories of unnecessary deaths and of lives wasted are horrific and shocking.

Engineering

But when he moves on to engineering, the theme plays out more subtly. He discusses the cases of James Dyson, the Mercedes Formula 1 team, and David Brailsford from Sky Cycling. All of them have sought success in the face of complexity.

In the case of Dyson, his initial design of a ‘cyclone-based’ dust extractor wasn’t good enough, and the theory was too complex to guide improvements. So he started changing the design and seeing what happened. As recounted, he investigated 5,127 prototypes before he was satisfied with the results. The relevant point here is that his successful design created 5,126 failures.

One of his many insights was to devise a simple measurement technique that detected tiny changes in the effectiveness of his dust extraction: he sucked up fine white dust and blew the exhaust over black velvet.

Jeff Dahn

This approach put me in mind of Jeff Dahn, a battery expert I met at Dalhousie University.

Batteries are really complicated and improving them is hard because there are so many design features that could be changed. What one wants is a way to test as many variants as quickly and as sensitively as possible in order to identify what works and what doesn’t.

However when it comes to battery lifetime – the rate at which the capacity of a battery falls over time – it might seem inevitable that this would take years.

Not so. By charging and discharging batteries in a special manner and at elevated temperatures, it is possible to accelerate the degradation. Jeff then detects this with precision measurements of the ‘coulombic efficiency’ of the cell.

‘Coulombic efficiency’ sounds complicated but is simple. One first measures the electric current as the cell is charged: if the current is constant during charging, then the current multiplied by the charging time gives the total amount of electric charge stored in the cell. One then measures the same thing as the cell discharges. The coulombic efficiency is simply the ratio of the charge recovered on discharge to the charge stored during charging.

For the lithium batteries used in electric cars and smart phones, the coulombic efficiency is around 99.9%. But it is the tiny amount (less than 0.1%) of the electric charge which doesn’t come back that progressively damages the cell and limits its life.

One of Jeff’s innovations is the application of precision measurement to this problem. By measuring electric currents with uncertainties of around one part in a million, Jeff can measure that 0.1% of non-returned charge with an uncertainty of around 0.1%. So he can distinguish between cells that are 99.95% efficient and 99.96% efficient. That may not sound like much, but the second cell loses 20% less charge on each cycle!
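To see why that last comparison matters, look at the charge that fails to return on each cycle (a simple illustrative calculation):

```python
# Coulombic efficiency = charge recovered on discharge / charge stored on charging.
# What damages the cell is the fraction that does NOT come back: (1 - efficiency).
ce_a, ce_b = 0.9995, 0.9996                 # 99.95% and 99.96% efficient cells

loss_a, loss_b = 1 - ce_a, 1 - ce_b         # fractional charge lost per cycle
print(f"Cell A loses {loss_a:.2%} per cycle, cell B loses {loss_b:.2%}")
print(f"Cell B loses {(loss_a - loss_b) / loss_a:.0%} less charge per cycle")   # 20%
```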

By looking in detail at the coulombic efficiency, Jeff can tell in a few weeks whether a new design of electrode will improve or degrade battery life.

The sensitivity of this test is akin to the ‘white dust on black velvet’ test used by Dyson: it doesn’t tell him why something got better or worse – he has to figure that out for himself. But it does tell him quickly which things were bad ideas.

I couldn’t count the ammeters in Jeff’s lab – each one attached to a test cell – but he was measuring hundreds of cells simultaneously. Inevitably, most of these tests will make the cells perform worse and be categorised as ‘failures’.

But this system allows him to fail fast and fail often: and it is this capability that allows him to succeed at all. I found this application of precision measurement really inspiring.

Thanks Jeff.

 

 

 

 

 

Restoring my faith in Quantum Computing

February 1, 2016
Jordan Kyriakidis from Dalhousie University Physics Department

I am a Quantum Computing Sceptic.

But last week at Dalhousie I met Jordan Kyriakidis, who explained one feature of Quantum Computing that I had not appreciated: that even if a quantum computer only gave the right answer one time in a million operations, it might still be useful.

His insight made me believe that Quantum Computing just might be possible.

[Please be aware that I am not an expert in this field. And I am aware that experts are less sceptical than I am. Indeed many consider that the power of quantum computing has already been demonstrated. Additionally Scott Aaronson argues persuasively (in his point 7) that my insight is wrong.]

Background

Conventional digital computers solve problems using mathematics. They have been engineered to perform electronic operations on representations of numbers which closely mimic equivalent mathematical operations.

Quantum computers are different. They work by creating a physical analogue of the problem which requires solving.

An initial state is created and then the computational ‘engine’ is allowed to evolve using basic physical laws and hopefully arrive at a state which represents a solution to the problem at hand.

My problem

There are many conceivable implementations of a quantum computer and I am sceptical about them all!

My scepticism arises from the analogue nature of the computation. At some point the essential elements of the quantum computer (called ‘Qubits‘ and pronounced Q-bits) can be considered as some kind of oscillator.

The output of the computer – the answer – depends on interference between the Qubits being orchestrated in a precise manner. And this interference between the Qubits is completely analogue.

Analogue versus digital

Physics is fundamentally analogue. So, for example, the voltages present throughout a digital computer vary between 0 volts and 5 volts. However the engineering genius of digital electronics is that it produces voltages that are either relatively close to 0 volts, or relatively close to 5 volts. This allows the voltages to be interpreted unambiguously as representing either a binary ‘1’ or ‘0’. This is why digital computers produce exactly the same output every time they run.

Quantum Computing has outputs that can be interpreted unambiguously as representing either a binary ‘1’ or ‘0’. However the operation of the machine is intrinsically analogue. So tiny perturbations that accumulate over the thousands of operations on the Qubits in a useful machine will result in different outputs each time the machine is run.

Jordan’s Insight

To my surprise Jordan acknowledged my analysis was kind-of-not-wrong. But he said it didn’t matter for the kinds of problems quantum computers might be good at solving. The classic problem is factoring of large numbers.

For example, working out that the number 1379 is the result of multiplying 7 × 197 will take you a little work. But if I gave you the numbers 7 and 197 and asked you to multiply them, you could do that straightforwardly.

Finding the factors of large numbers (100 digits or more) is hard and slow – potentially taking the most powerful computers on Earth hundreds of years to determine. But multiplying two numbers – even very large numbers – together is easy and quick on even a small computer.

So even if a quantum computer attempting to find factors of a large number were only right one time in a million operations – that would be really useful! Since the answers are easy to check, we can sort through them and get rid of the wrong answers easily.

So a quantum computer could reduce the time to factor large numbers even though it was wrong 99.9999% of the time!
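The checking step is the cheap part, which is the whole point. Here is a sketch of the idea (the candidate answers are invented for illustration and have nothing to do with any real quantum hardware):

```python
def is_valid_factorisation(n, p, q):
    # Checking a proposed answer is just one multiplication
    return 1 < p < n and 1 < q < n and p * q == n

n = 1379
candidates = [(3, 460), (11, 125), (7, 197), (13, 106)]   # mostly wrong 'answers'

good = [pair for pair in candidates if is_valid_factorisation(n, *pair)]
print(good)   # [(7, 197)] – the one correct answer survives the cheap filter
```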

I can easily imagine quantum computers being mostly wrong, and I had thought that would be fatal. But Jordan made me realise that it might still be very useful.

Thanks Jordan 🙂

========================================================

By the way, you might like to check out this web site which will factor large numbers for you. I learned that the number derived from my birth date (29/12/1959>>>29121959) is prime!

Climate Hopes and Fears.

December 14, 2015

FT Calculator for Greenhouse Gas emissions required to achieve various degrees of global warming. If we continue on our current path, we are headed towards – in our best estimation – 6 degrees Celsius of global warming. The calculator allows you to see the anticipated effects of the pledged emission reductions.

The Paris agreement on Climate Change is cause for hope. In honesty, I cried at the news.

But the task that the countries of the world have set for themselves is breathtakingly difficult.

And in the euphoria surrounding the Paris Accord, I am not sure the level of difficulty has been properly conveyed.

The process will involve an entire century of ever stronger commitment to meet even the most minimal of targets.

Imagine going on a long car journey with 200 ‘children’ who will bicker and fight – some of whom are not too bright but are armed with nuclear weapons. How long will it be until we hear the first ‘Are we there yet?’ or ‘I wanna go home now!’ or ‘Can I have some extra oil now?’ or ‘It’s all Johnny’s fault!’ or ‘It’s not fair!’

Perhaps unsurprisingly, it is the Financial Times that has cut to the chase with an Interactive Calculator that shows the level of emission reductions required to meet various warming targets.

The calculator indicates that if we continue on our current path, we are headed (in our best estimation) towards 6 °C of global warming.

The calculator then allows you to see the anticipated effects of the pledged emission reductions.

What is shocking is that even the drastic (and barely believable) reductions pledged in Paris are not sufficient to achieve the 2 °C limit.

As quoted by the Guardian, James Hansen (whom I greatly admire) is certainly sceptical:

“It’s a fraud really, a fake. It’s just bullshit for them to say: ‘We’ll have a 2C warming target and then try to do a little better every five years.’ It’s just worthless words. There is no action, just promises. As long as fossil fuels appear to be the cheapest fuels out there, they will be continued to be burned.”

Hansen suggests all governments should institute a $15/tonne carbon tax (rising each year by $10/tonne). He sees the price of oil (and coal and gas) as the single essential lever we need to pull to achieve our collective goals.

Personally I am with Hansen on the need for urgent action right now, but I feel more charitable towards our leaders.

I don’t know whether it is more rational to feel hopeful or fearful.

But despite myself, I do feel hopeful. I hope that maybe in my lifetime (I expect to die aged 80 in 2040) I will have seen global emissions peak and the rate of increase of atmospheric carbon dioxide begin to flatten.

 

Volcanic Clouds

September 28, 2015

The estimated average air temperature above the land surface of the Earth. The squiggly lines are data and the grey lines give an indication of uncertainty in the estimate. The bold black line shows the results of a model based on the effect of carbon dioxide and the effects of the named volcanoes. The figure is from the Berkeley Earth Temperature Project.

The explosion of Mount Tambora in 1815 was the largest explosion in recorded history. Its catastrophic local effects – earthquakes, tsunami, and poisonous crop-killing clouds – were witnessed by many people including Sir Stamford Raffles, then governor of Java.

Curiously, one year later, while touring through France, Raffles also witnessed deserted villages and impoverished peasantry – the result of the ‘year without a summer’ that caused famine throughout Europe.

But at the time no-one connected the two events! The connection was not made until the late 20th Century when scientists were investigating the possibility of a ‘nuclear winter’ that might arise from multiple nuclear explosions.

Looking at our reconstructed record of the air temperature above the land surface of the Earth at the head of this article, we can see that Tambora lowered the average surface temperature of the Earth by more than 1 °C and its effects lasted for around three years.

Tambora erupted just 6 years after a volcanic explosion in 1809 whose location is still unknown. We now know that together they caused the decade 1810-1820 to be exceptionally cold. However, at the time the exceptional weather was just experienced as an ‘act of god’.

In Tambora: The Eruption that changed the world, Gillen D’Arcy Wood describes both the local nightmare near Tambora, and more significantly the way in which the climate impacts of Tambora affected literary, scientific, and political history around the globe.

In particular he discusses:

  • The effect of a dystopian ‘summer’ experienced by the Shelleys and Lord Byron in their Alpine retreat.
  • The emergence of cholera in the wake of a disastrous monsoon season in Bengal. Cholera went on to form a global pandemic that eventually reached the UK through trade routes.
  • The period of famine in the rice-growing region of Yunnan that led to a shift towards opium production.
  • The bizarre warming – yes, warming – in the Arctic that led to reports of ice free northern oceans, and triggered decades of futile attempts to discover the fabled North West Passage.
  • The dramatic and devastating advance of glaciers in the Swiss alps that led to advances in our understanding of ice ages.
  • The ‘other’ Irish Famine – a tale of great shame and woe – prefacing the great hunger caused by the potato-blight famines in later decades.
  • The extraordinary ‘snow in June’ summer in the eastern United States.

Other Volcanic Clouds

Many Europeans will recall the chaos caused by the volcanic clouds from the 2010 eruptions of the Icelandic volcano Eyjafjallajökull (pronounced like this, or phonetically ‘[ˈeɪjaˌfjatlaˌjœːkʏtl̥]’).

The 2010 eruptions were tiny in historical terms with effects which were local to Iceland and nearby air routes. This is because although a lot of dust was ejected, most of it stayed within the troposphere – the lower weather-filled part of the atmosphere. Such dust clouds are normally rained out over a period of a few days or weeks.

Near the equator the boundary between the troposphere and stratosphere – known as the tropopause – is about 16 km high, but this boundary falls to around 9 km nearer the poles.

For a volcanic cloud to have wider effects, the volcanic explosion must push it above the tropopause into the stratosphere. Tiny particles can be suspended here for years, and have a dramatic effect on global climate.

Laki

Tambora may have been ‘the big one’ but it was not alone. Looking at our reconstructed air temperature record at the head of this article, we can see that large volcanic eruptions are not rare. And the 19th Century had many more than the 20th Century.

Near the start of the recorded temperature history is the eruption of Laki in Iceland (1783-84). Local details of this explosion were recorded in the diary of Jon Steingrimsson, and in their short book Island on Fire, Alexandra Witze and Jeff Kanipe describe the progression of the eruption and its effects further afield – mainly in Europe.

In the UK and Europe the summer consisted of prolonged ‘dry fogs’ that caused plants to wither and people to fall ill. On the whole people were mystified by the origin of these clouds, even though one or two people – including the prolific Benjamin Franklin, then US Ambassador to France – did in fact make the connection with Icelandic volcanoes.

Purple Clouds

Before reading these two books on real volcanic clouds, I had already read a fictional account of such an event: The Purple Cloud by M P Shiel, published in 1901, and set in the early decades of that century.

This is a fictional, almost stream-of-consciousness, account of how an Arctic explorer discovers a world of beauty at the North Pole – including un-frozen regions. But by violating Nature’s most hidden secrets, he somehow triggers a series of volcanic eruptions at the Equator which over the course of a couple of weeks kill everyone on Earth – save for himself.

I enjoyed this book, but don’t particularly recommend it. However, what is striking to me now, having since read accounts of these genuine historical events, is that the concept of a globally significant volcanic cloud already existed at the end of the nineteenth century.

Final Words

The lingering flavour of these books – factual and fictional – is that there have been many poorly-appreciated tele-connections between historical events.

Now, we live in a world in which the extent and importance of these global tele-connections has never been greater.

And in this world we are vulnerable to events such as volcanic clouds which – as the chart at the top of the page shows – affect the entire world and are not that rare.

