Archive for the ‘The Future’ Category

Remarkably Unremarkable

February 24, 2017


The ‘Now’

‘The future’ is a mysterious place.

And our first encounter with ‘the future’ is ‘the now’.

Today I felt like I encountered the future when I drove a car powered by a hydrogen fuel cell. And far from being mysterious it was remarkably unremarkable.

The raw driving experience was similar to using a conventional car with automatic transmission.

But instead of filling the car with liquid fuel derived from fossil plant matter, I filled it with hydrogen gas at a pressure 700 times greater than atmospheric pressure.


This was achieved using a pump similar in appearance to a conventional petrol pump.


This was the interface to some industrial plant which generated 80 kg of hydrogen each day from nothing more than electricity and water. This is enough to fill roughly 20 cars.
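For a sense of scale, here is the back-of-envelope arithmetic behind that 'roughly 20 cars' figure, written as a minimal Python sketch. The tank capacity is my assumption (a typical fuel-cell car carries a few kilograms of hydrogen), not a figure quoted by the station.

```python
# Back-of-envelope check on the "roughly 20 cars" figure.
daily_production_kg = 80      # from the post: the plant generates ~80 kg of hydrogen per day
tank_capacity_kg = 4.0        # assumption: a typical fuel-cell car carries roughly 4-6 kg

fills_per_day = daily_production_kg / tank_capacity_kg
print(f"Roughly {fills_per_day:.0f} full tanks per day")   # ~20 with a 4 kg tank
```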

This is small scale in comparison with a conventional petrol station, but these are early days. We are still at the interface with the future. Or one possible future.

The past

Some years ago, I remember making measurements of the temperature and humidity inside a fuel cell during operation.

The measurements were difficult, and the results surprising – to me at least.

And at the end of the project I remember thinking “Well, that was interesting, but it will never work in practice”.

Allow me please to eat my words: it works fine.

Today I was enormously impressed by the engineering prowess that made the fuel cell technology transparent to the driver.

The future

What I learned today was that the technology to make cars which emit no pollution at their point of use exists, now.

The range of this car is 300 miles and it takes only 5 minutes to re-fill. When there are more re-filling stations than the dozen or so currently around the UK, this will become a very attractive proposition.

I have no idea if fuel cell cars will become ubiquitous. Or whether they will become novelties like steam-powered cars from the end of the nineteenth century.

Perhaps this will represent the high-water mark of this technology. Or perhaps this will represent the first swallow in a summer of fuel cell cars.

None of us can know the future. But for the present, I was impressed.

It felt like the future was knocking on the door and asking us to hurry up.

When will the North Pole become the North Pool?

December 16, 2016

[Animation: Arctic sea ice extent versus month of year, 1979–2016. Source: NASA Earth Observatory]

It is a sad fact, but it is likely that within my lifetime it will become possible to sail to the North Pole. I am 56.

Tragically it is also true that there is absolutely nothing that you or I can do about it.

In fact, even in the unlikely event that humanity en masse decided it wanted to prevent this liquefaction, there would be literally nothing we could do to stop it.

The carbon dioxide we have already put in the atmosphere will warm the Earth’s surface for a few decades yet even if we stopped all emissions right now.

Causation

The particular line of causation between carbon dioxide emissions and warming of the Arctic is long and difficult to pin down.

Similarly it is difficult to determine if a bull in a china shop broke a particular vase, or whether it was a shop helper trying to escape.

Nonetheless, in both cases the ultimate cause is undeniable.

What does the figure show?

The animation at the head of the page, stolen from NASA’s Earth Observatory, is particularly striking and clear.

The animation shows data from 1979 to November 2016: the extent of Arctic sea ice plotted against the month of the year.

Initially the data is stable: each year looks much like the last. But since around the year 2000, we have seen reductions in the amount of sea ice that remains frozen over the summer.

In 2012, an additional one million square kilometres – four times the area of England, Scotland and Wales combined – melted.

The summer of 2016 saw the second-largest melt on record.

The animation highlights the fact that the Arctic has been so warm this autumn that sea ice is forming at an unprecedentedly slow rate.

The Arctic sea ice extent for November 2016 is about one million square kilometres less than we would expect at this time of year.

My Concern 

Downloading the data from the US National Snow and Ice Data Centre, I produced my own graph of exactly the same data used in the animation.

The graph below lacks the drama of the animated version at the head of the article. But it shows some things more clearly.

[Graph: Arctic sea ice extent versus month of year, plotted from NSIDC data]

This static graph shows that the minimum ice extent used to be stable at around 7 ± 1 million square kilometres. The minimum value in 2012 was around half that.
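If you want to reproduce this kind of graph yourself, the sketch below shows one way to plot extent against month for each year using pandas and matplotlib. The file name and column names are placeholders: adjust them to match whatever form the downloaded NSIDC data takes.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names: adjust to match the NSIDC download you use.
df = pd.read_csv("arctic_sea_ice_extent.csv")   # columns: year, month, extent (million km^2)

fig, ax = plt.subplots()
for year, group in df.groupby("year"):
    ax.plot(group["month"], group["extent"],
            color="red" if year >= 2012 else "grey",
            linewidth=0.8, alpha=0.7)

ax.set_xlabel("Month of year")
ax.set_ylabel("Sea ice extent (million square kilometres)")
ax.set_title("Arctic sea ice extent by month, 1979 to 2016")
plt.show()
```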

The animated graph at the head of the article highlights the fact that the autumn freeze (dotted blue circle) is slower than usual – something which is not clear in the static graph.

My concern is that if this winter’s freeze is ‘weak’, then the ice formed will be thin, and then next summer’s melt is likely to be especially strong.

And that raises a big question at the very heart of our culture.

When the North Pole becomes the North Pool, where will Santa live?

 

Science in the face of complexity

February 4, 2016
Jeff Dahn: Battery Expert at Dalhousie University

My mother-in-law bought me a great book for Christmas: Black Box Thinking by Matthew Syed: Thanks Kathleen 🙂

The gist of the book is easy to state: our cultural attitude towards ‘failure’ – essentially one of blame and shame – is counterproductive.

Most of the book is spent discussing this theme in relation to the practice of medicine and the law, contrasting attitudes in these areas to those in modern aviation. The stories of unnecessary deaths and of lives wasted are horrific and shocking.

Engineering

But when he moves on to engineering, the theme plays out more subtly. He discusses the cases of James Dyson, the Mercedes Formula 1 team, and David Brailsford from Sky Cycling. All of them have sought success in the face of complexity.

In the case of Dyson, his initial design of a ‘cyclone-based’ dust extractor wasn’t good enough, and the theory was too complex to guide improvements. So he started changing the design and seeing what happened. As recounted, he investigated 5,127 prototypes before he was satisfied with the results. The relevant point here is that his one successful design was preceded by 5,126 failures.

One of his many insights was to devise a simple measurement technique that detected tiny changes in the effectiveness of his dust extraction: he sucked up fine white dust and blew the exhaust over black velvet.

Jeff Dahn

This approach put me in mind of Jeff Dahn, a battery expert I met at Dalhousie University.

Batteries are really complicated and improving them is hard because there are so many design features that could be changed. What one wants is a way to test as many variants as quickly and as sensitively as possible in order to identify what works and what doesn’t.

However, when it comes to battery lifetime – the rate at which the capacity of a battery falls over time – it might seem inevitable that testing would take years.

Not so. By charging and discharging batteries in a special manner and at elevated temperatures, it is possible to accelerate the degradation. Jeff then detects this with precision measurements of the ‘coulombic efficiency’ of the cell.

‘Coulombic efficiency’ sounds complicated but is simple. One first measures the electric current as the cell is charged. If the electric current is constant during charging, then the electric current multiplied by the charging time gives the total amount of electric charge stored in the cell. One then measures the same thing as the cell discharges. The coulombic efficiency is simply the ratio of the charge recovered on discharge to the charge stored during charging.

For the lithium batteries used in electric cars and smart phones, the coulombic efficiency is around 99.9%. But it is that tiny amount (less than 0.1%) of the electric charge which doesn’t come back that progressively damages the cell and limits its life.

One of Jeff’s innovations is the application of precision measurement to this problem. By measuring electric currents with uncertainties of around one part in a million, Jeff can measure that 0.1% of non-returned charge with an uncertainty of around 0.1%. So he can distinguish between cells that are 99.95% efficient and 99.96% efficient. That may not sound like much, but the second cell loses 0.04% of its charge per cycle instead of 0.05% – a 20% improvement.
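To make the arithmetic concrete, here is a minimal sketch of the charge-counting described above, using made-up currents and times. The point to notice is that a shift from 99.95% to 99.96% efficiency means the charge lost each cycle falls from 0.05% to 0.04% – a 20% reduction.

```python
def coulombic_efficiency(charge_current_A, charge_time_s,
                         discharge_current_A, discharge_time_s):
    """Ratio of charge recovered on discharge to charge stored on charging."""
    charge_in = charge_current_A * charge_time_s           # coulombs stored
    charge_out = discharge_current_A * discharge_time_s    # coulombs recovered
    return charge_out / charge_in

# Hypothetical cell: charged at 1 A for 3600 s, discharged at 1 A for 3598.2 s.
eff = coulombic_efficiency(1.0, 3600.0, 1.0, 3598.2)
print(f"coulombic efficiency = {eff:.4%}")                 # about 99.95%

# Comparing two cells at 99.95% and 99.96% efficiency:
loss_a, loss_b = 1 - 0.9995, 1 - 0.9996
print(f"charge lost per cycle: {loss_a:.2%} vs {loss_b:.2%}")
print(f"relative improvement:  {(loss_a - loss_b) / loss_a:.0%}")   # 20%
```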

By looking in detail at the coulombic efficiency, Jeff can tell in a few weeks whether a new design of electrode will improve or degrade battery life.

The sensitivity of this test is akin to the ‘white dust on black velvet’ test used by Dyson: it doesn’t tell him why something got better or worse – he has to figure that out for himself. But it does tell him quickly which things were bad ideas.

I couldn’t count the ammeters in Jeff’s lab – each one attached to a test cell – but he was measuring hundreds of cells simultaneously. Inevitably, most of these tests will make the cells perform worse and be categorised as ‘failures’.

But this system allows him to fail fast and fail often: and it is this capability that allows him to succeed at all. I found this application of precision measurement really inspiring.

Thanks Jeff.


Restoring my faith in Quantum Computing

February 1, 2016
Jordan Kyriakidis from Dalhousie University Physics Department

I am a Quantum Computing Sceptic.

But last week at Dalhousie I met Jordan Kyriakidis, who explained one feature of Quantum Computing that I had not appreciated: that even if a quantum computer only gave the right answer one time in a million operations, it might still be useful.

His insight made me believe that Quantum Computing just might be possible.

[Please be aware that I am not an expert in this field. And I am aware that experts are less sceptical than I am. Indeed many consider that the power of quantum computing has already been demonstrated. Additionally, Scott Aaronson argues persuasively (in his point 7) that my insight is wrong.]

Background

Conventional digital computers solve problems using mathematics. They have been engineered to perform electronic operations on representations of numbers which closely mimic equivalent mathematical operations.

Quantum computers are different. They work by creating a physical analogue of the problem to be solved.

An initial state is created and then the computational ‘engine’ is allowed to evolve using basic physical laws and hopefully arrive at a state which represents a solution to the problem at hand.

My problem

There are many conceivable implementations of a quantum computer and I am sceptical about them all!

My scepticism arises from the analogue nature of the computation. At some point the essential elements of the quantum computer (called ‘Qubits‘ and pronounced Q-bits) can be considered as some kind of oscillator.

The output of the computer – the answer – depends on interference between the Qubits being orchestrated in a precise manner. And this interference between the Qubits is completely analogue.

Analogue versus digital

Physics is fundamentally analogue. So, for example, the voltages present throughout a digital computer vary between 0 volts and 5 volts. However the engineering genius of digital electronics is that it produces voltages that are either relatively close to 0 volts, or relatively close to 5 volts. This allows the voltages to be interpreted unambiguously as representing either a binary ‘1’ or ‘0’. This is why digital computers produce exactly the same output every time they run.

Quantum Computing has outputs that can be interpreted unambiguously as representing either a binary ‘1’ or ‘0’. However, the operation of the machine is intrinsically analogue. So tiny perturbations that accumulate over the thousands of operations on the Qubits in a useful machine will result in different outputs each time the machine is run.
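Here is a toy numerical illustration of that point – not a simulation of a real qubit, just an assumption that each operation adds a tiny random phase error. The accumulated error grows roughly as the square root of the number of operations.

```python
import math
import random

def accumulated_phase_error(n_operations, error_std=1e-3):
    """Total phase error (radians) after many operations, each adding a small random error."""
    return sum(random.gauss(0.0, error_std) for _ in range(n_operations))

random.seed(0)
n_ops = 10_000
trials = [abs(accumulated_phase_error(n_ops)) for _ in range(200)]
print(f"typical accumulated error after {n_ops} operations: {sum(trials) / len(trials):.3f} rad")
print(f"random-walk scale (sigma x sqrt(N)): {1e-3 * math.sqrt(n_ops):.3f} rad")
```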

Jordan’s Insight

To my surprise Jordan acknowledged my analysis was kind-of-not-wrong. But he said it didn’t matter for the kinds of problems quantum computers might be good at solving. The classic problem is factoring of large numbers.

For example, working out that the number 1379 is the result of multiplying 7 × 197 will take you a little work. But if I gave you the numbers 7 and 197 and asked you to multiply them, you could do that straightforwardly.

Finding the factors of large numbers (100 digits or more) is hard and slow – potentially taking the most powerful computers on Earth hundreds of years to determine. But multiplying two numbers – even very large numbers – together is easy and quick on even a small computer.

So even if a quantum computer attempting to find factors of a large number were only right one time in a million operations – that would be really useful! Since the answers are easy to check, we can sort through them and get rid of the wrong answers easily.

So a quantum computer could reduce the time to factor large numbers even though it was wrong 99.9999% of the time!
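The ‘check the answers cheaply’ argument can be illustrated in a few lines of code. The ‘noisy factoriser’ below is purely hypothetical – it simply guesses – which is exactly why the one-multiplication check matters.

```python
import random

def noisy_factoriser(n):
    """Stand-in for a mostly-wrong device: just returns a random candidate factor pair."""
    a = random.randint(2, n - 1)
    return a, n // a

def check(n, a, b):
    """Verification is a single multiplication -- cheap even for enormous numbers."""
    return a > 1 and b > 1 and a * b == n

n = 1379                       # = 7 x 197, the example from the post
random.seed(1)
for attempt in range(1, 1_000_000):
    a, b = noisy_factoriser(n)
    if check(n, a, b):
        print(f"found {n} = {a} x {b} after {attempt} mostly-wrong attempts")
        break
```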

I can easily imagine quantum computers being mostly wrong and I had thought that would be fatal. But Jordan made me realise that might still be very useful.

Thanks Jordan 🙂

========================================================

By the way, you might like to check out this web site which will factor large numbers for you. I learned that the number derived from my birth date (29/12/1959>>>29121959) is prime!

Climate Hopes and Fears.

December 14, 2015
FT Calculator for Greenhouse Gas emissions required to achieve various degrees of global warming. If we continue on our current path, we are headed towards – in our best estimation – 6 degrees Celsius of global warming. The calculator allows you to see the anticipated effects of the pledged emission reductions.

The Paris agreement on Climate Change is cause for hope. In honesty, I cried at the news.

But the task that the countries of the world have set for themselves is breathtakingly difficult.

And in the euphoria surrounding the Paris Accord, I am not sure the level of difficulty has been properly conveyed.

The process will involve an entire century of ever stronger commitment to meet even the most minimal of targets.

Imagine going on a long car journey with 200 ‘children’ who will bicker and fight – some of whom are not too bright but are armed with nuclear weapons. How long will it be until we hear the first ‘Are we there yet?’ or ‘I wanna go home now!’ or ‘Can I have some extra oil now?’ or ‘It’s all Johnny’s fault!’ or ‘It’s not fair!’?

Perhaps unsurprisingly, it is the Financial Times that has cut to the chase with an Interactive Calculator that shows the level of emission reductions required to meet various warming targets.

The calculator indicates that if we continue on our current path, we are headed (in our best estimation) towards 6 °C of global warming.

The calculator then allows you to see the anticipated effects of the pledged emission reductions.

What is shocking is that even the drastic (and barely believable) reductions pledged in Paris are not sufficient to achieve the 2 °C limit.

As quoted by the Guardian, James Hansen (whom I greatly admire) is certainly sceptical:

“It’s a fraud really, a fake. It’s just bullshit for them to say: ‘We’ll have a 2C warming target and then try to do a little better every five years.’ It’s just worthless words. There is no action, just promises. As long as fossil fuels appear to be the cheapest fuels out there, they will be continued to be burned.”

Hansen suggests all governments should institute a $15/tonne carbon tax (rising each year by $10/tonne). He sees the price of oil (and coal and gas) as the single essential lever we need to pull to achieve our collective goals.

Personally I am with Hansen on the need for urgent action right now, but I feel more charitable towards our leaders.

I don’t know whether it is more rational to feel hopeful or fearful.

But despite myself, I do feel hopeful. I hope that maybe in my lifetime (I expect to die aged 80 in 2040) I will have seen global emissions peak and the rate of increase of atmospheric carbon dioxide begin to flatten.

 

Volcanic Clouds

September 28, 2015
The estimated average air temperature above the land surface of the Earth. The squiggly lines are data and the grey lines give an indication of uncertainty in the estimate. The bold black line shows the results of a model based on the effects of carbon dioxide and the effect of named volcanoes. Figure is from the Berkeley Earth Temperature Project.

The explosion of Mount Tambora in 1815 was the largest explosion in recorded history. Its catastrophic local effects – earthquakes, tsunami, and poisonous crop-killing clouds – were witnessed by many people including Sir Stamford Raffles, then governor of Java.

Curiously, one year later, while touring through France, Raffles also witnessed deserted villages and impoverished peasantry caused by the ‘year without a summer’ that caused famine throughout Europe.

But at the time no-one connected the two events! The connection was not made until the late 20th Century when scientists were investigating the possibility of a ‘nuclear winter’ that might arise from multiple nuclear explosions.

Looking at our reconstructed record of the air temperature above the land surface of the Earth at the head of this article, we can see that Tambora lowered the average surface temperature of the Earth by more than 1 °C and its effects lasted for around three years.

Tambora erupted just 6 years after a volcanic explosion in 1809 whose location is still unknown. We now know that together they caused the decade 1810-1820 to be exceptionally cold. However, at the time the exceptional weather was just experienced as an ‘act of god’.

In Tambora: The Eruption that changed the world, Gillen D’Arcy Wood describes both the local nightmare near Tambora, and more significantly the way in which the climate impacts of Tambora affected literary, scientific, and political history around the globe.

In particular he discusses:

  • The effect of a dystopian ‘summer’ experienced by the Shelleys and Lord Byron in their Alpine retreat.
  • The emergence of cholera in the wake of a disastrous monsoon season in Bengal. Cholera went on to cause a global pandemic that eventually reached the UK through trade routes.
  • The period of famine in the rice-growing region of Yunnan that led to a shift towards opium production.
  • The bizarre warming – yes, warming – in the Arctic that led to reports of ice free northern oceans, and triggered decades of futile attempts to discover the fabled North West Passage.
  • The dramatic and devastating advance of glaciers in the Swiss Alps that led to advances in our understanding of ice ages.
  • The ‘other’ Irish Famine – a tale of great shame and woe – prefacing the great hunger caused by the potato-blight famines in later decades.
  • The extraordinary ‘snow in June’ summer in the eastern United States

Other Volcanic Clouds

Many Europeans will recall the chaos caused by the volcanic clouds from the 2010 eruptions of the Icelandic volcano Eyjafjallajökull (pronounced like this, or phonetically [ˈeɪjaˌfjatlaˌjœːkʏtl̥]).

The 2010 eruptions were tiny in historical terms with effects which were local to Iceland and nearby air routes. This is because although a lot of dust was ejected, most of it stayed within the troposphere – the lower weather-filled part of the atmosphere. Such dust clouds are normally rained out over a period of a few days or weeks.

Near the equator the boundary between the troposphere and stratosphere – known as the tropopause – is about 16 km high, but this boundary falls to around 9 km nearer the poles.

For a volcanic cloud to have wider effects, the volcanic explosion must push it above the tropopause into the stratosphere. Tiny particles can be suspended here for years, and have a dramatic effect on global climate.

Laki

Tambora may have been ‘the big one’ but it was not alone. Looking at our reconstructed air temperature record at the head of this article, we can see that large volcanic eruptions are not rare. And the 19th Century had many more than the 20th Century.

Near the start of the recorded temperature history is the eruption of Laki in Iceland (1783-84). Local details of this explosion were recorded in the diary of Jon Steingrimsson, and in their short book Island on Fire, Alexandra Witze and Jeff Kanipe describe the progression of the eruption and its effects further afield – mainly in Europe.

In the UK and Europe the summer consisted of prolonged ‘dry fogs’ that caused plants to wither and people to fall ill. On the whole people were mystified by the origin of these clouds, even though one or two people – including the prolific Benjamin Franklin, then US Ambassador to France – did in fact make the connection with Icelandic volcanoes.

Purple Clouds

Before these two books on real volcanic clouds, I had read a fictional account of such an event: The Purple Cloud by M P Shiel, published in 1901 and set in the early decades of that century.

This is a fictional, almost stream-of-consciousness, account of how an Arctic explorer discovers a world of beauty at the North Pole – including un-frozen regions. But by violating Nature’s most hidden secrets, he somehow triggers a series of volcanic eruptions at the Equator which over the course of a couple of weeks kill everyone on Earth – save for himself.

I enjoyed this book, but don’t particularly recommend it. What is striking to me now, having since read accounts of these genuine historical events, is that the concept of a globally significant volcanic cloud already existed at the end of the nineteenth century.

Final Words

The lingering flavour of these books – factual and fictional – is that there have long been poorly-appreciated tele-connections between events around the world.

Now, we live in a world in which the extent and importance of these global tele-connections has never been greater.

And in this world we are vulnerable to events such as volcanic clouds which – as the chart at the top of the page shows – affect the entire world and are not that rare.

The strange truth about El Niño

May 13, 2015
Satellite data showing changes in the height of the sea surface. The large straight red band just to the left (West) of South America represents a region of warmer water which has expanded and caused an increase in the height of the sea surface. This picture was stolen from the BBC web site.

You will probably be hearing a lot about El Niño this year because the Australian Bureau of Meteorology have predicted that El Niño conditions will build through the coming year.

The news stories will all look like something like this:

  • Yada Yada Yada
  • Drought/Flood somewhere because of El Niño and Climate Change.
  • Isn’t it terrible

There will be nothing you can do except to experience a sense of vulnerability. And if you are of a similar disposition to me, you may also experience an increased sense of general anxiety.

However, the amazing fact – one which I have never seen mentioned in all my years of reading about this stuff – is that, collectively, we have no idea what causes El Niño events.

And we certainly can’t predict the events: the current ‘prediction’ is only being made because the El Niño has already begun!

So what do we know?

An Australian Bureau of Meteorology graphic describing the weather patterns in the Pacific Ocean as being either neutral, La Nina or El Nino

The term El Niño describes a set of linked atmospheric and oceanic conditions. And we understand that the weather patterns in the Pacific Ocean oscillate between three states:

  • Neutral (about 50% of the time)
  • La Niña
  • El Niño (Every 4 to 7 years)

These patterns are so large that they affect the weather right around the globe, and the oscillation between these ‘phases’ is called the El Niño Southern Oscillation (ENSO).

The Australian Bureau of Meteorology have an excellent video description of the phenomenon and its consequences for Australia.

And what don’t we know?

We don’t know what causes the oscillation from one phase of ENSO to another. And so we can’t predict changes from one phase to another.

And importantly, although it is quite conceivable that future Climate Change could affect the transitions from one phase of ENSO to another, we have no idea whether there will be any effect.

And why don’t we know it?

Well obviously, I don’t know the answer to this question. But I think it is this.

ENSO is a linked oceanic and atmospheric phenomenon.

Each of the three phases is self-sustaining, i.e. changes in the wind patterns reinforce changes in the location of warm water. And changes in the location of the warmer water reinforce the changes in the wind patterns.

But the variability of the weather is such that it can move the weather patterns from one self-reinforcing phase to another.

And so these planetary scale weather events are triggered by some as yet unknown ‘local’ or ‘short-term’ variability in weather.

Things may improve. As I wrote in my review of Climate Models in the IPCC’s 5th Assessment Report, some models now spontaneously predict ENSO-like behaviour that was not programmed into the model.

And as models of the atmosphere and ocean improve, they will become better able to simulate weather on both the small scale and on the largest scales.

So as our understanding develops it seems likely that changes of ENSO phase will eventually become predictable.

Is anything truly impossible?

October 27, 2014

A recent Scientific American article highlighted the work of two Canadian engineers, Todd Reichert and Cameron Robertson, who built the world’s first (and only) human-powered helicopter.

After they had completed their brilliant and imaginative work, they learned of a recent paper which showed that what they had just done was impossible.

Their achievement put me in mind of Lord Kelvin’s misguided pronouncement:

Heavier-than-air flying machines are impossible.

This is a popular meme: illustrious expert says something is impossible: ingenue shows it is not.

But nonetheless, there are (presumably?) things which, even though they may be imagined, are still either truly or practically impossible.

But how can you distinguish between ideas which are truly or practically impossible, and those which are just hard to imagine?

This is not merely an academic question.

The UK is currently committed to spending hundreds of millions of pounds on a nuclear fusion experiment called ITER, which I am confident will never result in the construction of even a single power station.

Wikipedia tells me the build cost of the project is an astonishing $50 billion – ten times its original projected cost. Impossible projects have a way of going over budget.

I explained my reasons for considering the project to be impossible here.

And on reading this, Jonathan Butterworth, Head of Physics at UCL, tweeted that he:

could write a similar post on why the LHC is impossible. IMHO

But I don’t think he could. Let me explain with some examples:

1. The Large Hadron Collider (LHC) where Jonathan works is a machine called a synchrotron, which is itself a development of a cyclotron.

The first cyclotron was built in a single University physics department in 1932 (History). If, back then, you had told someone the specification of the LHC, would they have said it was impossible?

I don’t think so. Because although each parameter (size, energy etc.) has been stretched – through astonishing ingenuity and technical virtuosity – the LHC is an extrapolation from something that they knew definitely worked.

2. A modern nuclear power station is an engineering realisation of ‘a pile of graphite bricks‘ that was first constructed beneath the stands of a playing field at the University of Chicago in 1942.

Within this ‘pile’, the first controlled nuclear reaction took place and worked exactly as had been anticipated. Would the people who witnessed the reaction have said a nuclear power station was impossible?

Definitely not. Everyone in the room was aware of the significance (good and bad) of what had been achieved.

Controlled nuclear fusion is in an entirely different category from either of these stories of engineering success.

  • It has never worked.

We have never created sustained nuclear fusion, and the explanations for why it has not yet worked have changed as we have understood the problem better.

The rationale for ITER is – cutting through a great deal of technical detail – that it is bigger than previous versions. This increases the volume of the plasma (where energy is released by fusion) in relation to the surface area (where it is lost).
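The scaling argument can be made concrete: for a fixed shape, volume grows as the cube of the linear size while surface area grows only as the square, so the volume-to-surface ratio grows in proportion to the size. Here is a sketch for a simple torus-shaped plasma; the dimensions are illustrative, not ITER’s.

```python
import math

def torus_volume(R, r):
    """Volume of a torus with major radius R and minor radius r."""
    return 2 * math.pi**2 * R * r**2

def torus_surface(R, r):
    """Surface area of the same torus."""
    return 4 * math.pi**2 * R * r

# Illustrative sizes only (not real machine dimensions): doubling every linear
# dimension doubles the volume-to-surface ratio (which works out as r/2 for a torus).
for scale in (1, 2, 4):
    R, r = 3.0 * scale, 1.0 * scale            # metres
    ratio = torus_volume(R, r) / torus_surface(R, r)
    print(f"scale x{scale}: volume / surface = {ratio:.2f} m")
```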

I expect that ITER will meet its technical goals (or most of them). But even on this assumption, they would then have to solve the technical problems associated with confining a plasma at a temperature of 150 million ºC for 30 years rather than 10 seconds.

As I explained previously, I just don’t think solutions to these problems exist that would allow reliable operation for 30 years with the 90% availability required for power generation.

So I think controlled nuclear fusion as a means of generating power is – while perfectly conceivable – actually impossible.

What if – in 50 years time – we make it work? 

Then I will be proved wrong. If I am alive, I will apologise.

However, even in this optimistic scenario, it will be 50 years too late to affect climate change, which is a problem which needs solving now.

And we will have spent money and energy that we could have spent on solving the problems that face us now using solutions which we know will definitely work.

Between weather and climate: the predictability gap

May 9, 2014
Illustration of the region of the Pacific Ocean in which the El Nino climate event is focussed. The event affects global climate and measurably affects the rotation of the Earth. However we cannot predict this vast event even 6 months in advance. Picture is from Ars Technica

[UPDATE: Monday 12th May 2014: Richard Gilham wrote to highlight the Met Office’s work in trying to fill the ‘predictability gap’ – which they refer to as ‘seasonal to decadal forecasting‘. ]

The ease with which the future may (or may not) be predicted is a subject of continuing personal fascination.

When it comes to weather and climate, I recently noticed there is a curious blind spot in our forecasts, between the ranges covered by weather forecasts and climate forecasts.

  • A weather forecast involves saying (for instance) whether it will rain at a particular time and in a particular place. Advanced measurements and advanced computer models allow weather forecasts over the UK to be accurate over a range from a few days (when things are complicated) to a week or two (when things are simple).
  • However even though climate forecasts are more complex than weather forecasts and reach out much farther into the future, they are in some ways easier than weather forecasts because there is no need to predict exactly when or exactly where it will rain. So, for example, we are confident that in 10 years time, the average daily temperature for July in the United Kingdom will be close to 15 ºC.

[Aside: Do take a look at this excellent Met Office page which summarises ‘normal’ UK climate and has data for the last 100 years or so!]

This ‘predictability gap’ is at the heart of the questions about the ‘hiatus’ in global mean temperature. Of course the reason for the gap can be easily stated:

  • Weather forecasts begin with a very detailed picture of the existing weather and use the laws of physics to run this detailed picture forward in time. Eventually, the lack of detailed knowledge of the initial state makes the forecast inaccurate.
  • Climate forecasts are based on models of the entire Earth including many layers in the oceans and the land and of course many layers in the atmosphere. Also included are the changing distance from the Sun and known solar cycles. We expect Climate models to predict the average of quantities such as the average July temperature – not the temperature in any particular year.

So there is a gap which I sometimes find shocking given the advanced state of development of both weather forecasts and climate models. And this can be confusing when some specific weather events take on special significance. This year I am keeping my eye on two weather/climate events:

  • The extent of Arctic Sea Ice at its minimum value in September
  • The possibility of an El Nino event in the Southern Hemisphere summer.

Arctic Sea Ice Extent after the summer melt

Chart showing the average amount of sea ice (black line) versus the month of the year. The grey region shows the typical variability and the green line shows this year’s data: what will be the value in September 2014? It’s impossible to say. Click the image for a larger version or click here for a live version of the graph.

We can confidently predict that the extent of Arctic Sea Ice will reach its minimum extent in late September. We predict this not based upon climate models or weather forecasts but on statistical analysis of previous behaviour.

However, we cannot predict whether it will reach a new record minimum, even though we are sure that the trend will mean that in less than 50 years, the summer minimum of Arctic Sea Ice extent will be zero.

El Nino

The El Nino event is the largest ‘regular’ climate event on the Earth, strongly affecting rainfall and temperatures across the Pacific Ocean and having detectable effects across the entire planet.

But even though we now monitor the globe hour-by-hour, and day-by-day, we really don’t know even six months in advance if this year will be an ‘El Nino’ year.

Climate models do predict El Nino-like behaviour – a prediction which emerges from the models rather than being programmed into them. But they predict that it will occur typically every two to seven years – not in which year it will happen.

Currently NOAA estimate that the chance of an El Nino event at the end of this year is just greater than 50% – but really, we just don’t know.

 

Who is going to die in 2048?

April 9, 2014
Graph showing Age-Standardised UK Mortality per 100,000 of population per year. In 2010 mortality was around 1100 per 100,000, so for the UK population of 60 million we would expect around 660,000 deaths per year. However if the trend continues, no one will die in 2048!

While investigating causes of death in the United Kingdom, I came across the data above. The graph shows that the age-standardised mortality in the UK has been falling since at least 1980 – and shows no signs of stopping.

Indeed, if the trend continues, then sometime around the 14th March 2048, mortality will reach zero and no one will die in the UK!
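Here is a minimal sketch of that extrapolation, using only the two approximate figures quoted in this article (about 1100 per 100,000 in 2010 and about 1000 per 100,000 in 2014). A straight-line fit to the full ONS series is what gives the 2048 date, so these toy numbers land in the same ballpark rather than on the same day.

```python
import numpy as np

# Only the two approximate figures quoted in this post -- not the full ONS series.
years = np.array([2010.0, 2014.0])
rate_per_100k = np.array([1100.0, 1000.0])    # age-standardised mortality

slope, intercept = np.polyfit(years, rate_per_100k, 1)
zero_year = -intercept / slope
print(f"trend: {slope:.0f} per 100,000 per year")
print(f"extrapolated 'zero mortality' around {zero_year:.0f}")
# A fit to the full data series crosses zero around 2048 -- which is, of course,
# exactly where the extrapolation stops meaning anything.
```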

Now of course, although this data is real and correct, the trend can’t possibly continue indefinitely. But the data is nonetheless fascinating for at least three reasons.

Firstly, in the face of seemingly endless stories telling us all how unhealthy we are, it seems that the trend to lower mortality is continuing unabated, despite the obesity ‘crisis’.

Secondly, although the linear trend in the data is striking, we have no justification for extrapolating the trend into the future. Why? Because it’s the future! And we don’t know what is going to happen in the future.

And finally, these numbers give us a scale for considering the relative seriousness of different causes of death: that was the reason I looked up the data in the first place.

I read that air pollution causes 30,000 deaths a year in the UK and that seemed a surprisingly large number. From the graph we can estimate that mortality in 2014 is approximately 1000 deaths per 100,000 of population per annum. For the UK population of 60 million that corresponds to roughly 600,000 deaths per year, so 30,000 is about 5% of deaths – which still seems shockingly high, but is a smidgeon closer to believability.

So good news all round: especially if you, like me, are a man. The mortality of men and women is shown separately below.

If the trend continues, then after millennia of ‘excess male mortality’, the mortality of men should fall below that of women in approximately 2027 and reach zero in 2042 – before the women – who will not attain immortality until 2060!

Graph showing Age-Standardised UK Mortality per 100,000 of population per year for men and women. If trends continue, male mortality will fall below female mortality in 2027 and no men will die at all after 2042!

 UPDATE

Dave asked: Are you sure age standardised mortality means what you think it does? Age standardised mortality might drop to zero. But that is not mortality. If the plot showed mortality that would suggest life expectancy has doubled since 1980, from 50 to nearly 100.

And I replied: The calculation is this:

  • How many people died in a particular year aged (say) 69.
  • This number is then expressed as a fraction of the actual UK population who were aged 69.
  • This is then expressed as an actual number who would have died in a ‘standard population’ called the European Standard Population.

This procedure allows the relative mortality in different countries to be compared.

So if, for example, the UK has a high absolute mortality for 69-year-olds but not many 69-year-olds, then this will produce a larger number when ‘age standardised’. The sketch below illustrates the weighting.
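To show how the weighting works, here is a minimal sketch of an age-standardised rate. The age bands, weights, death counts and populations are all invented for illustration – they are not the actual European Standard Population weights or real UK data.

```python
# All numbers below are invented for illustration: they are NOT the actual
# European Standard Population weights, nor real UK death or population data.
age_bands = ["60-64", "65-69", "70-74"]
deaths     = {"60-64": 9_000,     "65-69": 14_000,    "70-74": 22_000}
population = {"60-64": 1_800_000, "65-69": 1_500_000, "70-74": 1_200_000}
std_weight = {"60-64": 0.05,      "65-69": 0.04,      "70-74": 0.03}

# Age-specific rate per 100,000 in each band, weighted by the 'standard' age structure.
standardised_rate = sum(
    (deaths[b] / population[b]) * 100_000 * std_weight[b] for b in age_bands
) / sum(std_weight[b] for b in age_bands)

print(f"age-standardised mortality (these bands only): {standardised_rate:.0f} per 100,000")
```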

I have obtained one or two sets of actual death data – but I don’t know the equivalent population to divide by to get the absolute mortality per 100,000. However this data shows a similar trend with roughly the same intercept.

What does it mean? I don’t know! I think it means that we are living longer (Is that news?). I was just struck by how straight the line was and how it begged to be extrapolated!

