Archive for the ‘The Future’ Category

The World Set Free

June 26, 2021

I recently re-read “The World Set Free” by H.G. Wells, a book which has a decent claim to being the most influential work of fiction of the 20th Century.

Written in 1913, the book takes as a central theme the idea that access to energy drives the advance of global civilisation.

In the prologue, he imagines early humans wandering over the Earth and not realising that, first coal, and then later nuclear fuel, was literally under their feet.

Rendering of the gigantic planned SunCable solar farm. Copyright SunCable.

I had revisited the text because I realised that Wells had ignored the energy in the sunlight falling on the Earth, of which we require just 0.01% to power our advanced civilisation.

And so now, we can simply collect the largesse of energy that falls on the Earth every day.
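As a rough check on that figure (the input numbers here are my own round estimates): the Earth intercepts roughly 1.7 × 10^17 W of sunlight, while humanity’s primary power consumption is of order 18 TW, so

$$\frac{1.8\times10^{13}\ \text{W}}{1.7\times10^{17}\ \text{W}}\approx 1\times10^{-4}=0.01\%$$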

But it is unfair to criticise a futurist for what they omitted – getting anything right at all about the future is hard.

But re-reading the book I realised that Wells’ imagined vision of the future has been – I think – profoundly influential. Let me explain.

The Most Influential Book of the Twentieth Century?

The book initially follows a scientist (Holsten) who uncovers the secret of what he calls “induced radio-activity” – allowing the controlled release of nuclear energy.

And eventually a world of atomic-powered planes and automobiles follows.

But the political institutions of the world remained archaic and unsuited to the possibilities of this new world.

And in a stand-off broadly following the divisions of the actual World Wars, he foresees a global war fought with atomic weapons – a phrase which I think he must have invented.

Fictional Atomic Bombs

Of course in 1913, atomic bombs did not exist. H G Wells envisaged them as follows.

“…the bomb-thrower lifted the big atomic bomb from the box and steadied it against the side [of the plane]. It was a black sphere roughly two feet in diameter. Between its handles was a little celluloid stud, and to this he bent his head until his lips touched it. Then he had to bite in order to let air in upon the inducive. Sure of its accessibility, he craned his neck over the side of the aeroplane and judged his pace and distance. Then very quickly he bent forward, bit the stud, and hoisted the bomb over the side.

“Never before in the history of warfare had there been a continuing explosive… Those used by the allies were lumps of pure Carolinum, painted on the outside with un-oxidised cydonator inducive enclosed hermetically in a case of membranium. A little celluloid stud between the handles by which the bomb was lifted was arranged so as to be easily torn off and admit air to the inducive which at once became active and set up the radioactivity in the outer layer of the Carolinum sphere. This liberated fresh inducive and so in a few minutes the whole bomb was a blazing continual explosion.

“Carolinum belonged to the beta group of Hyslop’s so-called ‘suspended degenerator’ elements, [and] once its degenerative process had been induced, continued a furious radiation of energy and nothing could arrest it. Of all of Hyslop’s artificial elements, Carolinum was the most heavily stored with energy and the most dangerous to make and handle. To this day it remains the most potent degenerator known. What earlier 20th Century chemists called its half-period was seventeen days; that is to say, it poured out half the huge store of energy in its great molecules in the space of seventeen days, the next seventeen days’ emission was half of that first period’s outpouring and so on…. to this day, the battle-fields and bomb fields of that frantic time in human history are sprinkled with radiant matter, and so centres of inconvenient rays.

“A moment or so after its explosion began, [the bomb] was still mainly an inert sphere exploding superficially, a big inanimate nucleus wrapped in flame and thunder. Those that were thrown from aeroplanes fell in this state, they reached the ground mainly solid, and, melting soil and rock in their progress bored into the earth. There, as more and more of the Carolinum became active, the bomb spread itself out into a monstrous cavern of fiery energy at the base of what became very speedily a miniature active volcano. The Carolinum, unable to disperse, freely drove into and mixed up with the boiling confusion of molten soil and superheated steam, and so remained spinning furiously and maintaining an eruption that lasted for years or months or weeks according to the size of the bomb…

“Once launched the bomb was absolutely unapproachable and uncontrollable until its forces were nearly exhausted, and from the crater that burst open above it, puffs of heavy incandescent vapour and fragments of viciously punitive rock and mud, saturated with Carolinum, and each a centre of scorching and blistering energy, were flung high and far.

“Such was the crowning triumph of military science, the ultimate explosive that was to give the ‘decisive touch’ to war…”

Actual Atomic Bombs

Of course almost every detail of the account above is wrong.

But qualitatively, it is spot on: a single weapon that could utterly destroy a city, not just at the moment of its detonation, but with effects that would persist for decades afterwards: the “ultimate explosive”.

And critically, the book was read by Leo Szilard, a man with a truly packed Wikipedia page!

On September 12, 1933, having only recently fled Germany for England, Szilard was irritated by an article in The Times reporting a speech by Rutherford, who dismissed the possibility of releasing useful amounts of nuclear energy.

And later that day, while crossing Southampton Row in London, it came to him how one could practically release nuclear energy by making a nuclear chain reaction. He patented his idea and assigned the patent to the UK Admiralty to maintain its secrecy.

In the following years he was influential in urging the US to create a programme to develop nuclear weapons before the Germans, and so he came to be present in Chicago when Fermi first realised Szilard’s chain reaction on December 2nd 1942.

On seeing his invention work, he did not rejoice. He recalls…

“There was a crowd there and when it dispersed, Fermi and I stayed there alone. Enrico Fermi and I remained. I shook hands with Fermi and I said that I thought this day would go down as a black day in the history of mankind.

I was quite aware of the dangers. Not because I am so wise but because I have read a book written by H. G. Wells called The World Set Free. He wrote this before the First World War and described in it the development of atomic bombs, and the war fought by atomic bombs. So I was aware of these things.

But I was also aware of the fact that something had to be done if the Germans get the bomb before we have it. They had knowledge. They had the people to do it and would have forced us to surrender if we didn’t have bombs also.

We had no choice, or we thought we had no choice.”

Was the book really influential?

Of course I don’t know.

But it is striking to me that by merely imagining that such terrible weapons might one day exist, and feasibly imagining the circumstances and results of their use, H.G. Wells placed this idea firmly into Szilard’s mind.

And Szilard was a man who – with good reason – feared what the German regime of the time would do with such weapons.

And so when describing the first sustained and controlled release of atomic energy in Chicago, he immediately recalled H.G. Wells’ vision of a war fought with atomic bombs.

Also…

“The World Set Free” is fascinating to read, but it is not – in my totally unqualified opinion – a great work of literature.

The characters are mainly implausible, and the peaceful and rational world government that Wells envisages emerging from nuclear devastation might have been more convincingly characterised by George Orwell. (Scientific American contrasts Orwell’s and Wells’ ideas about science and society in an interesting essay here.)

By contrast, some of the plot twists are strikingly plausible. I was struck in particular when – after the declaration of World Government from a conference in Brissago in Switzerland – one single monarch held out.

In what might now be called “a rogue state”, a conniving ruler – “a Fox” – sought to conceal some “weapons of mass destruction”. After an attempted pre-emptive strike on the World Government was foiled, an international force searched the rogue state, grounding its aeroplanes, and a search eventually unearthed a stash of atomic bombs hidden under a haybarn.

Perhaps George Bush had been reading “The World Set Free” too!

 

 

A Bright Future?

May 3, 2021

Image: the book covers.

Friends, a few weeks ago I reviewed two books about our collective energy future: Decarbonising Electricity Made Simple and Taming the Sun.

Summarising heavily:

  • Decarbonising Electricity Made Simple
    • A detailed look at how the UK can attain very low carbon intensity electricity – perhaps less than 50 gCO2/kWh in 2030 – by just doing more of what we are already doing
  • Taming the Sun
    • A look at the role of solar power globally, addressing the fact that every ‘market’ will reach the point where super-cheap solar electricity is so abundant that nothing else will compete – during the day. But because of the lack of storage, solar will never be a sufficient answer.

This weekend I read A Bright Future by Joshua Goldstein and Staffan Qvist. The strapline is “How some countries have solved climate change and the rest can follow”.

Their answer is simple:

Start building nuclear power stations now and don’t stop for the next 50 years.

The authors point out

  • The astonishing safety record of nuclear power, in direct contrast to the coal industry, in which thousands of people die annually and which releases more radioactivity and toxic compounds than nuclear power stations ever have.
  • The enormity of the climate peril into which we are collectively entering.
  • The scale of output which is achievable with nuclear power stations – generating capacity can potentially be added much faster than renewable generation.
  • The reliability of nuclear power, which they contrast with the variability of wind and solar generation.

And while I could disagree with the authors on several details, the basic correctness of their assertion is undeniable.

  • In the UK if we had one or two more nuclear power stations our climate goals would be dramatically easier to meet.
  • Globally, there are currently 450 nuclear power stations undramatically providing emission-free electricity. If there were 10 times this number our collective climate emergency would be easier to address.

And while it would be an understatement to say that nuclear power is controversial, it seems to me that a massive investment in nuclear power plants worldwide would be a good move.

But it is not going to happen.

I am sure the authors know that their arguments are futile.

Although I personally would welcome a nuclear power plant in Teddington, most people would not.

Similarly most people in Anytown, Anywhere would not welcome a nuclear power station.

But as the authors point out – correctly – there is no renewable energy technology that matches the characteristics of nuclear power.

And we need every possible low-carbon generating source to address humanity’s needs.

Despite the authors’ positivity, I have never felt more depressed after reading a book than I did after reading this one.

 

The Last Artifact

October 5, 2020

A long long time ago (May 2018) in a universe far far away (NPL), I was asked to take part in a film about the re-definition of four of the seven SI base units: The Last Artifact

The team – consisting of director Ed Watkins, videographer Rick Smith and sound recordist Parker Brown – visited my lab and I spent a happy afternoon and morning chatting.

I later heard that a version of the film was shown to VIPs in May 2019 when the kilogram, kelvin, candela and ampere were re-defined, and I was told that some of my words had made the cut!

But then the film disappeared!

I wrote to director Ed Watkins earlier this year and he shared a version with me privately, and I was impressed. But there was still no version to share.

I am writing this because a set of clips has now emerged which will hopefully be suitable for classroom use.

I don’t think the clips individually make as much sense as the film as a whole because they lack the continuing narrative that the film provides. But they are still beautiful and provide views of otherwise unseen laboratories and artifacts and people.

Personally, I was shocked to see myself on film – so shocked I found it difficult to see the rest of the film clearly. But as my shock subsided, I grew to like the film.

This film is not made for people like me, and it is not the film I would have made. Rather it is a film for non-scientists and schoolchildren. It is by turns gorgeous, colourful, engaging, and humorous. And when I showed it to a UK Science TV producer they said it worked for them!

The colour and music and sound quality are outstanding and make watching it a simple pleasure. I recall at the time noting that they were shooting at ‘8K’ resolution when I had only ever heard of ‘4k’: Now I know why!

This page contains links to all 12 films, or you can pick from the list below – the links are in each film’s title. Films with a red asterisk (*) feature me (sometimes just my voice).

Film 1: (Re)Defining The Universe  (1 minute 55 seconds)

An introduction to what is meant by an ‘artifact’ ending with a beautiful shot of the international prototype of the kilogram itself.

Film 2 *: What is Metrology? (3 minutes 33 seconds)

An introduction to the International Bureau of Weights and Measures and its role in the international measurement system: the SI.

Film 3 *: Metrologists (1 minute 18 seconds)

Metrologists speak!

Film 4 *: Measurement: System International (3 minutes 14 seconds)

An introduction to the SI.

Film 5 *: The Hunk of Metal (3 minutes 33 seconds)

An introduction to the international prototype of the kilogram.

Film 6 *: The History of Measurement (7 minutes 10 seconds)

A nice summary of the history of measurements.

Film 7 *: Redefinition and Fundamental Constants (2 minutes 48 seconds)

The idea of moving away from artifacts.

Film 8 : Avogadro’s Constant  (1 minute 24 seconds)

One option for re-defining the kilogram.

Film 9 : The Avogadro Sphere (2 minutes 37 seconds)

How to make an Avogadro Sphere.

Film 10 : Planck’s Constant (2 minutes 16 seconds)

Some chit chat about the Planck Constant.

Film 11 : Watt Balance (4 minutes 29 seconds)

The concept of weighing with electricity using a Kibble Balance. A chance to hear Ian Robinson speak.

Film 12 *: The Next Frontier (3 minutes 31 seconds)

What changes after re-definition of the SI units?

I hate it when it’s too hot

August 7, 2020

 

I find days when the temperature exceeds 30 °C very unpleasant.

And if the night-time temperature doesn’t fall then I feel doubly troubled.

I have had the feeling that such days have become more common over my lifetime. But have they?

The short summary is “Yes”. In West London, the frequency of days on which the temperature exceeds 30 °C has increased from typically 2 days per year in the 1950’s and 1960’s to typically 4 days per year in the 2000’s and 2010’s. This was not as big an increase as I expected.

On reflection, I think my sense that these days have become more common probably arises from the fact that up until the 1980’s, there were many years when such hot days did not occur at all. As the graph at the head of the article shows, in the 2010’s they occurred every year.

Super-hot days have now become normal.

You can stop reading at this point – but if you want to know how I worked this out – read on. It was much harder than I expected it would be!

Finding the data

First, please notice that this is not the same question as “has the average summer temperature increased?”

A single very hot day can be memorable but it may only affect the monthly or seasonal average temperatures by a small amount.

So one cannot merely find data from a nearby meteorological station….

…and plot it versus time. These datasets contain just the so-called ‘monthly mean’ data, i.e. the maximum or minimum daily temperature is measured for a month and then its average value is recorded. So individual hot days are not flagged in the data. You can see my analysis of such data here.

Instead one needs to find the daily data – the daily records of individual maximum and minimum temperatures.

Happily this data is available from the Centre for Environmental Data Analysis (CEDA). They host the Met Office Integrated Data Archive System (MIDAS) for land surface station data (1853 – present). It is available under an Open Government Licence i.e. it’s free for amateurs like me to play with.

I registered and found the data for the nearby Met Office station at Heathrow. There was data for 69 years between 1948 and 2017, with a single comma-separated value (CSV) spreadsheet of maximum and minimum temperatures (and other quantities) for each year.

Analysing the data

Looking at the spreadsheets I noticed that the 1948 data contained daily maxima and minima. But all the other 68 spreadsheets contained two entries for each day – recording the maximum and minimum temperatures from two 12-hour recording periods:

  • the first ended at 9:00 a.m. in the morning: I decided to call that ‘night-time’ data.
  • and the second ended at 9:00 p.m. in the evening: I decided to call that ‘day-time’ data.

Because the ‘day-time’ and ‘night-time’ data were on alternate rows, I found it difficult to write a spreadsheet formula that would check only the appropriate cells.

After a day of trying to ignore this problem, I resolved to write a program in Visual Basic that could open each yearly file, read just the relevant single temperature reading from each alternate line, and save the counted data to a separate file.

It took a solid day – more than 8 hours – to get it working. As I worked, I recalled performing similar tasks during my PhD studies in the 1980’s. I reflected that this was an arcane and tedious skill, but I was glad I could still pay enough attention to the details to get it to work.

For each yearly file I counted two quantities (a minimal sketch of this counting step, in Python, appears after the list below):

  • The number of days when the day-time maximum exceeded a given threshold.
    • I used thresholds in 1 degree intervals from 0 °C to 35 °C
  • The number of days when the night-time minimum fell below a given threshold
    • I used thresholds in 1 degree intervals from -10 °C to +25 °C
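For anyone who wants to repeat the counting step without Visual Basic, here is a minimal sketch in Python. The MIDAS column names used here (‘ob_end_hour’ and ‘max_air_temp’) are assumptions for illustration – check them against the actual files before relying on this.

```python
# Sketch: count days per year whose day-time maximum exceeds each threshold.
# Assumes one row per 12-hour observation period, alternating night-time
# (ending 09:00) and day-time (ending 21:00) periods.
import csv

def count_hot_days(filename, thresholds=range(0, 36)):
    counts = {t: 0 for t in thresholds}
    with open(filename, newline="") as f:
        for row in csv.DictReader(f):
            if row["ob_end_hour"] != "21":      # keep only the day-time period
                continue
            try:
                tmax = float(row["max_air_temp"])
            except ValueError:                  # skip missing or flagged values
                continue
            for t in thresholds:
                if tmax > t:
                    counts[t] += 1
    return counts

# e.g. count_hot_days("midas_heathrow_1949.csv")[30] -> days exceeding 30 °C
```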

So for example, for 1949 the analysis tells me that there were:

  • 365 days when the day-time maximum exceeded 0 °C
  • 365 days when the day-time maximum exceeded 1 °C
  • 363 days when the day-time maximum exceeded 2 °C
  • 362 days when the day-time maximum exceeded 3 °C
  • 358 days when the day-time maximum exceeded 4 °C
  • 354 days when the day-time maximum exceeded 5 °C

etc…

  • 6 days when the day-time maximum exceeded 30 °C
  • 3 days when the day-time maximum exceeded 31 °C
  • 0 days when the day-time maximum exceeded 32 °C
  • 0 days when the day-time maximum exceeded 33 °C
  • 0 days when the day-time maximum exceeded 34 °C

From this data I could then work out that in 1949 there were…

  • 0 days when the day-time maximum was between 0 °C and 1 °C
  • 2 days when the day-time maximum was between 1 °C and 2 °C
  • 4 days when the day-time maximum was between 2 °C and 3 °C
  • 4 days when the day-time maximum was between 3 °C and 4 °C

etc..

  • 3 days when the day-time maximum was between 30 °C and 31 °C
  • 3 days when the day-time maximum was between 31 °C and 32 °C
  • 0 days when the day-time maximum was between 32 °C and 33 °C
  • 0 days when the day-time maximum was between 33 °C and 34 °C

Variable Variability

As I analysed the data I found it was very variable (Doh!) and it was difficult to spot trends amongst this variability. This is a central problem in meteorology and climate studies.

I decided to reduce the variability in two ways.

  • First I grouped the years into decades and found the average numbers of days in which the maximum temperatures lay in a particular range.
  • Then I increased the width of the temperature ranges from 1 °C to 5 °C (a short sketch of this step follows this list).
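Here is the promised sketch of that variability-reduction step, assuming the per-year counts for each 1 °C bin are already in a pandas DataFrame indexed by year; the exact bin boundaries are illustrative only (as I admit below, I muddled some of mine).

```python
import numpy as np
import pandas as pd

def decade_summary(per_bin_counts):
    """per_bin_counts: DataFrame indexed by year (int), one column per 1 °C bin
    (labelled 0..34) giving days whose day-time maximum fell in that bin."""
    # Merge the 1 °C bins into 5 °C bins, e.g. columns 25..29 -> '25-29'
    wide = pd.DataFrame({
        f"{lo}-{lo + 4}": per_bin_counts.loc[:, lo:lo + 4].sum(axis=1)
        for lo in range(0, 35, 5)
    })
    decade = (wide.index // 10) * 10          # e.g. 1953 -> 1950
    mean_days = wide.groupby(decade).mean()   # average days per year per decade
    err_days = np.sqrt(mean_days)             # rough ± sqrt(N) error bar
    return mean_days, err_days
```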

These two changes meant that most groups analysed had a reasonable number of counts. Looking at the data I felt able to draw four conclusions, none of which were particularly surprising.

Results: Part#1: Frequency of very hot days

The graph below shows that at Heathrow, the frequency of very hot days – days on which the maximum temperature was 31 °C or above – has indeed increased over the decades, from typically 1 to 2 days per year in the 1950’s and 1960’s to typically 3 to 4 days per year in the 2000’s and 2010’s.

I was surprised by this result. I had thought the effect would be more dramatic.

But I may have an explanation for the discrepancy between my perception and the statistics. And the answer lies in the error bars shown on the graph.

The error bars shown are ± the square root of the number of days – a typical first guess for the likely variability of any counted quantity.

So in the 1950’s and 1960’s it was quite common to have years in which the maximum temperature (at Heathrow) never exceeded 30 °C. Between 2010 and 2017 (the last year in the archive) there was not a single year in which temperatures did not reach 30 °C.

I think this is closer to my perception – it has become the new normal that temperatures in excess of 30 °C occur every year.

Results: Part#2: Frequency of days with maximum temperatures in other ranges

The graph above shows that at Heathrow, the frequency of days with maxima above 30 °C has increased.

The graphs below show how, at Heathrow, the frequency of days with maxima in each of the ranges shown has changed:

  • The frequency of ‘hot’ days with maxima in the range 26 °C to 30 °C has increased from typically 10 to 20 days per year in the 1950s to typically 20 to 25 days per year in the 2000’s.

  • The frequency of ‘warm’ days with maxima in the range 21 °C to 25 °C has increased from typically 65 days per year in the 1950s to typically 75 days per year in the 2000’s.

  • The frequency of days with maxima in the range 16 °C to 20 °C has stayed roughly unchanged at around 90 days per year.

  • The frequency of days with maxima in the range 11 °C to 15 °C appears to have increased slightly.

  • The frequency of ‘chilly’ days with maxima in the range 6 °C to 10 °C has decreased from typically 70 days per year in the 1950’s to typically 60 days per year in the 2000’s.

  • The frequency of ‘cold’ days with maxima in the range 0 °C to 5 °C has decreased from typically 30 days per year in the 1950’s to typically 15 days per year in the 2000’s.

Taken together this analysis shows that:

  • The frequency of very hot days has increased since the 1950’s and 1960’s, and in this part of London we are unlikely to ever again have a year in which there will not be at least one day where the temperature exceeds 30 °C.
  • Similarly, cold days in which the temperature never rises above 5 °C have become significantly less common.

Results: Part#3: Frequency of days with very low minimum temperatures

While I was doing this analysis I realised that with a little extra work I could also analyse the frequency of nights with extremely low minima.

The graph below shows the frequency of night-time minima below -5 °C across the decades. Typically there were 5 such cold nights per year in the 1950’s and 1960’s but now there are more typically just one or two such nights each year.

Analogous to the absence of years without day-time maxima above 30 °C, years with at least a single occurrence of night-time minima below -5 °C are becoming less common.

For example, in the 1950’s and 1960’s, every year had at least one night with a minimum below -5 °C at the Heathrow station. In the 2000’s only 5 years out of 10 had such low minima.

Results: Part#4: Frequency of days with other minimum temperatures

For the Heathrow Station, the graphs below show the frequency of days with minima in the range shown:

  • The frequency of ‘cold’ nights with minima in the range -5 °C to -1 °C has decreased from typically 45 days per year in the 1950’s to typically 25 days per year in the 2000’s.

  • The frequency of ‘cold’ nights with minima in the range 0 °C to 4 °C has decreased from typically 95 days per year in the 1950’s to typically 80 days per year in the 2000’s.

  • The frequency of nights with minima in the range 5 °C to 9 °C has remained roughly unchanged.

  • The frequency of nights with minima in the range 10 °C to 14 °C has increased from typically 90 days per year in the 1950’s to typically 115 days per year in the 2000’s.

  • The frequency of ‘warm’ nights with minima in the range 15 °C to 19 °C has increased very markedly from typically 12 days per year in the 1950’s to typically 30 days per year in the 2000’s.

  • ‘Hot’ nights with minima above 20 °C are still thankfully very rare.

 

Acknowledgements

Thanks to Met Office stars

  • John Kennedy for pointing to the MIDAS resource
  • Mark McCarthy for helpful tweets
  • Unknown data scientists for quality control of the Met Office Data

Apologies

Some eagle-eyed readers may notice that I have confused the boundaries of some of my temperature range categories. I am a bit tired of this now but I will sort it out when the manuscript comes back from the referees.

COVID-19: Day 212 Update: Population Prevalence

July 31, 2020

Summary: This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) suggest even more clearly than last week that there has been a small increase in prevalence.

The current overall prevalence is estimated to be 1 in 1500, but some areas are estimated to have a much higher incidence.

Overall the ONS estimate that 6.2 ± 1.3 % of the UK population have been ill with COVID-19 so far.

Population Prevalence

On 31st July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for six non-overlapping fortnightly periods covering the period from 4th May up until 26th July.

Start of survey period | End of survey period | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
04/05/2020 | 17/05/2020 | 132 | 0.35 | 0.23 | 0.52
18/05/2020 | 31/05/2020 | 144 | 0.15 | 0.08 | 0.25
01/06/2020 | 14/06/2020 | 160 | 0.07 | 0.03 | 0.13
15/06/2020 | 28/06/2020 | 174 | 0.09 | 0.05 | 0.16
29/06/2020 | 12/07/2020 | 188 | 0.05 | 0.03 | 0.09
13/07/2020 | 26/07/2020 | 202 | 0.09 | 0.06 | 0.14

Data from ONS on 26th July

Plotting these data, I see no evidence of a continued decline. ONS modelling suggests the prevalence is actually increasing.


Because of this it no longer makes sense to fit a curve to the data and to anticipate likely dates when the population incidence might fall to key values.


In particular, things look grim for an untroubled return to schools. Previously – during full lock down – we achieved a decline of the prevalence of COVID-19 by a factor 10 in roughly 45 days.

The start of the school term is just 35 days away and – given the much greater activity now compared with April – it is unrealistic to expect the prevalence to fall by a factor 66 to the 1 in 100,000 level in time for the start of the school term.
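To spell out the arithmetic behind that judgement: even at the full-lockdown rate of a factor of 10 every 45 days, 35 days only buys a fall of

$$10^{35/45}\approx 6$$

which is a long way short of the factor of 66 required.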

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the 31 July data set above corresponds to:

  • 51 positive cases detected from a sample of 16,236
  • 32 positive cases detected from a sample of 20,390
  • 13 positive cases detected from a sample of 25,519
  • 18 positive cases detected from a sample of 23,767
  • 19 positive cases detected from a sample of 31,542
  • 24 positive cases detected from a sample of 28,325

I feel obliged to state that I do not understand how ONS process the data, because historical data points seem to change from one analysis to the next. But I suspect they are just doing something sophisticated that I don’t understand.
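As a rough illustration of why the survey is near its detection limit – and of why my naive arithmetic differs slightly from the ONS figures, which are weighted – here are the raw positive fractions with a simple ± √N counting uncertainty attached.

```python
# My arithmetic on the raw counts listed above, not the ONS weighted estimates.
from math import sqrt

samples = [(51, 16236), (32, 20390), (13, 25519),
           (18, 23767), (19, 31542), (24, 28325)]
for positives, tested in samples:
    p = 100 * positives / tested           # raw percentage testing positive
    err = 100 * sqrt(positives) / tested   # rough ± sqrt(N) counting uncertainty
    print(f"{positives:3d} / {tested}  ->  {p:.3f} % ± {err:.3f} %")
```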

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from the start of June.


A close-up graph shows the death rate is not convincingly falling at all, and so unless there is some change in behaviour, coronavirus death rates of tens of people per day are likely to continue for several months yet.


The trend value of deaths (~65 per day) is consistently higher than the roughly 12 deaths per day that we might have expected based on trend behaviour at the start of June.

In future updates I will no longer plot the World-o-meter projection because it is clearly no longer relevant to what is happening in the UK.

Are fusion scientists crazy?

July 8, 2020

Preamble

I was just about to write another article (1, 2, 3) about the irrelevance of nuclear fusion to the challenges of climate change.

But before I sharpened my pen, I thought I would look again to see if I could understand why a new breed of fusion scientists, engineers and entrepreneurs seem to think so differently. 

Having now listened to two and a half hours of lectures – links at the bottom of the page – I have to say, I am no longer so sure of myself.

I still think that the mainstream routes to fusion should be shut down immediately.

But the scientists and engineers advocating the new “smaller faster” technology make a fair case that they could conceivably have a relevant contribution to make. 

I am still sceptical. The operating conditions are so extreme that it is likely that there will be unanticipated engineering difficulties that could easily prove fatal.

But I now think their proposals should be considered seriously, because they might just work.

Let me explain…

JET and ITER

Deriving usable energy from nuclear fusion has been a goal for nuclear researchers for the past 60 years.

After a decade or two, scientists and engineers concluded (correctly) that deriving energy from nuclear fusion was going to be extraordinarily difficult.

But using a series of experiments culminating in JET – the Joint European Torus, fusion scientists identified a pathway to create a device that could release fusion energy and proceeded to build ITER, the International Thermonuclear Experimental Reactor.

ITER is a massive project with lots of smart people, but I am unable to see it as anything other than a $20 billion dead end – a colossal and historic error. 

Image of ITER from Wikipedia, modified to show the cost and a human being for scale.

In addition to its cost, the ITER behemoth is slow. Construction was approved in 2007 but first tests are only expected to begin in 2025; first fusion is expected in 2035; and the study would be complete in 2045.

I don’t think anyone really doubts that ITER will “work”: the physics is well understood.

But even if everything proceeds according to plan, and even if the follow-up DEMO reactor were built in 2050 – and even if it also worked perfectly – it would be a clear 40 years or so from now before fusion began to contribute low carbon electricity. This is just too late to be relevant to the problem of tackling climate change. I think the analysis in my previous three articles still applies to ITER.

I would recommend we stop spending money on ITER right now and leave its rusting carcass as a testament to our folly. The problem is not that it won’t ‘work’. The problem is that it just doesn’t matter whether it works or not.

But it turns out that ITER is no longer the only credible route to fusion energy generation.

High Temperature Superconductors

While ITER was lumbering onwards, science and technology advanced around it.

Back in 1986 people discovered high-temperature superconductors (HTS). The excitement around this discovery was intense. I remember making a sample of YBCO at Bristol University that summer and calling up the inestimable Balázs Győrffy near to midnight to ask him to come in to the lab and witness the Meissner effect – an effect which hitherto had been understood, but rarely seen.

But dreams of new superconducting technologies never materialised. And YBCO and related compounds became scientific curiosities with just a few niche applications.

But after 30 years of development, engineers have found practical ways to exploit them to make stronger electromagnets. 

The key property of HTS that makes them relevant to fusion engineering is not specifically the high temperature at which they became superconducting. Instead it is their ability – when cooled to well below their transition temperature – to remain superconducting in extremely high magnetic fields.

Magnets and fusion

As Zach Hartwig explains at length (video below) the only practical route to fusion energy generation involves heating a mixture of deuterium and tritium gases to immensely high temperatures and confining the resulting plasma with magnetic fields.

Stronger electromagnets allow the ‘burning’ plasma to be more strongly confined, and the fusion power density in the burning plasma varies as the fourth power of the magnetic field strength. 

In the implementation imagined by Hartwig, the HTS technology enables magnetic fields 1.74 times stronger, which allows an increase in power density by a factor 1.74 x 1.74 x 1.74 x 1.74 ≈ 9. 

Or alternatively, the apparatus could be made roughly 9 times smaller. So using no new physics, it has become feasible to make a fusion reactor which is much smaller than ITER. 

A smaller reactor can be built quicker and cheaper. The cost is expected to scale roughly as the size cubed – so the cost would be around 9 x 9 x 9 ~ 700 times lower – still expensive but no longer in the billions.

[Note added on 8/2/2021: I think this large factor is justified: see my response to the comment from Dr Brian VonHerzen for an explanation]
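For concreteness, here is that scaling arithmetic written out, using the field values quoted in the talk timelines below (9.2 T for ARC versus 5.3 T for ITER). This is a sketch of the argument as I read it, not a design calculation.

```python
field_ratio = 9.2 / 5.3                  # ~1.74: ARC field versus ITER field
power_density_gain = field_ratio ** 4    # fusion power density scales as B^4 -> ~9
size_reduction = power_density_gain      # the same fusion power in ~9x less volume
cost_reduction = size_reduction ** 3     # cost taken to scale as size cubed -> ~700x
print(f"power density x{power_density_gain:.1f}, cost roughly /{cost_reduction:.0f}")
```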

And crucially it would take just a few years to build rather than a few decades. 

And that gives engineers a chance to try out a few designs and optimise them. All of fusion’s eggs would no longer be in one basket.

The engineering vision

Dennis Whyte’s talk (link below) outlines the engineering vision driving the modern fusion ‘industry’.

A fusion power station would consist of small modular reactors, each one generating perhaps only 200 MW of electrical power. The reactors could be produced on a production line, which could lower their production costs substantially.

This would allow a power station to begin generating electricity and revenue after the first small reactor was built. This would shorten the time to payback after the initial investment and make the build out of the putative new technology more feasible from both a financial and an engineering perspective.

The reactors would be linked in clusters so that a single reactor could come on-line for extra generation and be taken off-line for maintenance. Each reactor would be built so that the key components could be replaced every year or so. This reduces the demands on the materials used in the construction. 

Each reactor would sit in a cooling flow of molten salt containing lithium which, when irradiated, would ‘breed’ the tritium required for operation and simultaneously remove the heat to drive a conventional steam turbine.

You can listen to Dennis Whyte’s lecture below for more details.

But…

Dennis Whyte and Zach Hartwig seem to me to be highly credible. But while I appreciate their ingenuity and engineering insight, I am still sceptical.

  • Perhaps operating a reactor with 500 MW of thermal power in a volume of just 10 cubic metres or so at 100 million kelvin might prove possible for seconds, minutes or hours or even days. But it might still prove impossible to operate 90% of the time for extended periods.
  • Perhaps the unproven energy harvesting and tritium production system might not work.
  • Perhaps the superconductor so critical to the new technology would be damaged by years of neutron irradiation.

Or perhaps any one of a large number of complexities inconceivable in advance might prove fatal.

But on the other hand it might just work.

So I now understand why fusion scientists are doing what they are doing. And if their ideas did come to fruition on the 10-year timescale they envision, then fusion might yet still have a contribution to make towards solving the defining challenge of our age.

I wish them luck!

===========================================

Videos

===========================================

Video#1: Pathway to fusion

Zach Hartwig goes clearly through the MIT plan to make a fusion reactor.

Timeline of Zach Hartwig’s talk

  • 2:20: Start
  • 2:52: The societal importance of energy
  • 3:30: Societal progress has been at the expense of CO2 emissions
  • 3:51: Fusion is an attractive alternative in principle. – but how to compare techniques?
  • 8:00: 3 Questions
  • 8:10: Question 1: What are viable fusion fuels
  • 18:00 Answer to Q1: Deuterium-Tritium is optimal fuel.
  • 18:40: Question 2: Physical Conditions
    • Density, Temperature, Energy confinement
  • 20:00 Plots of Lawson Criterion versus Temperature.
    • Shows contours of the energy ratio Q
    • Regions of the plot divided into Pointless, possible, and achieved
  • 22:35: Question 3: Confinement Methods compared on Lawson Criterion/Temperature plots
    1. Cold Fusion 
    2. Gravity
    3. Hydrogen Bombs
    4. Inertial Confinement by Laser
    5. Particle accelerator
    6. Electrostatic well
    7. Magnetic field: Mirrors
    8. Magnetic field: Magnetized Targets or Pinches
    9. Magnetic field: Torus of Mirrors
    10. Magnetic field: Spheromaks
    11. Magnetic field: Stellarator
    12. Magnetic field: Tokamak
  • 39:35 Summary
  • 40:00 ITER
  • 42:00 Answer to Question 3: Tokamak is better than all other approaches.
  • 43:21 Combining previous answers: 
    • Tokamak is better than all other approaches.
  • 43:21 The existing pathway JET to ITER is logical, but too big, too slow, too complex: 
  • 46:46 The importance of magnetic field: Power density proportional to B^4. 
  • 48:00 Use of higher magnetic fields reduces size of reactor
  • 50:10 High Temperature Superconductors enable larger fields
  • 52:10 Concept ARC reactor
    • 3.2 m versus 6.2 m for ITER
    • B = 9.2 T versus 5.3 T for ITER: (9.2/5.3)^4 = 9.1
    • Could actually power an electrical generator
  • 52:40 SPARC = Smallest Possible ARC
  • 54:40 End: A viable pathway to fusion.

Video#2: The Affordable, Robust, Compact (ARC) Reactor: an engineering approach to fusion.

Dennis Whyte explains how improved magnets have made fusion energy feasible on a more rapid timescale.

Timeline of Dennis Whyte’s talk

  • 4:40: Start and Summary
    • New Magnets
    • Smaller Sizes
    • Entrepreneurially accessible
  • 7:30: Fusion Principles
  • 8:30: Fuel Cycle
  • 10:00: Fusion Advantages
  • 11:20: Lessons from the scalability and growth of nuclear fission
  • 12:10 Climate change is happening now. No time to waste.
  • 12:40 Science of Fusion:
    • Gain
    • Power Density
    • Temperature
  • 13:45 Toroidal Magnet Field Confinement:
  • 15:20: Key formulae
    • Gain 10 bar-s
    • Power Density ∝ pressure squared = 10 MW/m^3
  • 17:20 JET – 10 MW but no energy gain
  • 18:20 Progress in fusion beat Moore’s Law in the 1990’s but the science stalled as the devices needed to be too big.
  • 19:30 ITER Energy gain Q = 10, P = 3 Bar, no tritium breeding, no electricity generation.
  • 20:30 ITER is too big and slow
  • 22:10 Magnetic Field Breakthrough
    • Energy gain ∝ B^3 and ∝ R^1.3 
    • Power Density ∝ B^4 and ∝ R 
    • Cost ∝ R^3 
  • 24:30 Why ITER is so large
  • 26:26 Superconducting Tape
  • 28:19 Affordable, Robust, Compact (ARC) Reactor. 
    • 500 MW thermal
    • 200 MW electrical
    • R = 3.2 m – the same as JET but with B^4 scaling 
  • 30:30 HTS Tape and Coils.
  • 37:00 High fields stabilise plasma which leads to low science risks
  • 40:00 ARC Modularity and Repairability
    • De-mountable coils 
    • Liquid Blanket Concept
    • FLiBe 
    • Tritium Breeding with gain = 1.14
    • 3-D Printed components
  • 50:00 Electrical cost versus manufacturing cost.
  • 53:37 Accessibility to ‘Start-up” entrepreneurial attitude.
  • 54:40 SP ARC – Soonest Possible / Smallest Practical ARC to demonstrate fusion
  • 59:00 Summary & Questions

COVID-19: Day 177: Population Prevalence Projections

June 26, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.

========================================

This post is a 26th June update on the likely prevalence of COVID-19 in the UK population. (Previous update)

Population Prevalence

The Office for National Statistics (ONS) have updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link).

Start of survey period | End of survey period | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
27/04/2020 | 10/05/2020 | 125 | 0.26 | 0.17 | 0.40
11/05/2020 | 24/05/2020 | 139 | 0.22 | 0.10 | 0.44
25/05/2020 | 07/06/2020 | 153 | 0.05 | 0.02 | 0.10
08/06/2020 | 21/06/2020 | 167 | 0.09 | 0.04 | 0.19
Data from ONS

Plotting this data on a graph we see a decreasing trend, but note that the most recent data point worryingly shows an increase in prevalence.


The new data indicate a slightly more concerning story than the previous data, with the (unweighted) exponential fit indicating a factor 10 reduction in prevalence every 70 days, significantly slower than the previous estimate of 43 days. We can see the data in more detail if we plot them on a log-linear graph.


The new data significantly shifts the dates at which key low prevalence values might be expected.

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Early August | About 6,000
1 in 100,000 | Early October | About 600
Projected dates for a given level of COVID-19 prevalence

It is a matter of some concern that levels may not have fallen to the 1 in 100,000 level by the start of the new school term in September.

Limits of the survey power

I mentioned last week that we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

The last two fortnightly data points were based on testing of 22,523 people with 11 positives and 24,256 people with 14 positives.

The statistical rule-of-thumb is that the expected variability of a count is roughly the square root of the number counted. So the true population incidence amongst these samples could easily have been (say) 12 ± 3. So the difference between 11 and 14 is not strong evidence of an increase in prevalence.

However, based on the previous trend, the expected number of positives would have been (roughly) 5 ± 2. So 14 ± 3 is reasonably strong evidence that the previous trend is not continuing.
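A rough way to quantify ‘reasonably strong evidence’ (my illustration, not part of the ONS analysis) is to treat the count as Poisson-distributed and ask how often a mean of 5 would throw up 14 or more positives by chance – the answer comes out well below 1 %.

```python
from scipy.stats import poisson

expected = 5    # positives implied (roughly) by the previous trend
observed = 14   # positives actually seen in the latest fortnight
p_value = poisson.sf(observed - 1, expected)   # P(count >= 14 | mean 5)
print(f"Chance of {observed} or more when expecting {expected}: {p_value:.1e}")
```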

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from late May.


The data again are close to the predicted rate of decline but lie consistently above the predicted curve for the last 10 days. In short, the rate of decline appears to be slowing.

A slowing in the rate of reduction of deaths now corresponds to additional infections acquired in late May or early June.

Personal View

Personally I am not surprised by the failure of either the prevalence data or the death rate to continue falling at their previous rates.

In order for that to have happened, societal restrictions would have to have remained as they were in May.

My perception is that restrictions, as observed in practice on the mean streets of Teddington, have become somewhat more relaxed.

If this is the new normal, then we may need to get used to living with corona virus in circulation at a population prevalence of ill people of around 1 in 1000.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

===========================

Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

COVID-19: Day 164: Population Prevalence Projections

June 14, 2020


Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.
========================================

This post is an update on the likely prevalence of COVID-19 in the UK population in the coming weeks.

Population Prevalence

The Office for National Statistics (ONS) have now updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link).

Previously they published the data week-by-week, but now (irritatingly) they have grouped the data fortnight-by-fortnight.

Start of survey period | End of survey period | % testing positive for COVID-19 | Lower confidence limit (%) | Upper confidence limit (%)
26/04/2020 | 10/05/2020 | 0.27 | 0.17 | 0.41
11/05/2020 | 24/05/2020 | 0.22 | 0.10 | 0.43
25/05/2020 | 07/06/2020 | 0.06 | 0.02 | 0.12
Data from the ONS

Plotting this on a graph we see a pleasingly decreasing trend.

In the graph, the GREY points are the old data and the RED points are the new analysis.

The old data is shown in grey on the figure above. The new data (shown in red) indicate a broadly similar story. The population prevalence in mid-June (now) is below 0.1% and extrapolating using an exponential function implies – if the trend continued – a factor 10 reduction in prevalence every 45 days.
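For anyone wanting to reproduce that extrapolation, here is a minimal sketch of this kind of fit (in Python rather than my spreadsheet): an unweighted straight line through log10(prevalence) versus the mid-point day of each survey fortnight, using the three values in the table above.

```python
from datetime import date
import numpy as np

rows = [("26/04/2020", "10/05/2020", 0.27),
        ("11/05/2020", "24/05/2020", 0.22),
        ("25/05/2020", "07/06/2020", 0.06)]

def day_of_year(d):
    day, month, year = (int(x) for x in d.split("/"))
    return date(year, month, day).timetuple().tm_yday

x = np.array([(day_of_year(a) + day_of_year(b)) / 2 for a, b, _ in rows])
y = np.log10([p for _, _, p in rows])
slope, _ = np.polyfit(x, y, 1)
print(f"Prevalence falls by a factor of 10 roughly every {-1 / slope:.0f} days")
```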

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Mid-July | About 6,000
1 in 100,000 | Start of September | About 600
1 in 1,000,000 | Mid-October | About 60
Projected dates for a given level of COVID-19 prevalence

We can see the data in more detail if we plot them on a log-linear graph.

In the graph, the GREY points are the old data and the RED points are the new analysis.

The analysis suggests that at the start of September the population prevalence of COVID-19 cases might be close to 10 in a million.

Re-stating what I said last week, I personally think that level would be low enough for life to proceed reasonably normally – albeit with some of our ‘new normal’ behaviours – and it is probably low enough for schools to operate safely with minimal fuss.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projections.


The data lie close to the predicted rate of decline. This is consistent with the population prevalence projection, falling by a factor 10 in about 50 days.

In my future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

===========================
Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

Fusion Research is STILL a waste of money.

June 5, 2020

Friends: A confession: I am @Protons4B, and I am an occasional user of ‘Twitter’ – purely on a recreational basis you understand…

Recently I commented on a ‘pro-Fusion’ post on Twitter pointing out that in fact Fusion Research was really quite a bad idea.

I didn’t realise that the tweeter, Melanie Windridge, was actually a communication consultant for Tokamak Energy, a private-enterprise fusion research company! Her website can be found here.

Melanie responded with a 5-tweet salvo saying that my sources were out of date and recommending the following links.

  • Link 1: Low Carbon Grids
  • Link 2: System IQ
  • Link 3: Applications of Machine Learning
  • Link 4: Transformative Capabilities
  • Link 5: Physics paper on small TOKAMAKS#1
  • Link 6: Physics paper on small TOKAMAKS#2
  • Link 7: Physics paper on small TOKAMAKS#3

I have now been through these links and managed to compose a response. It’s too long to tweet so I have written it below.

None of the links above addressed the points I made, which I found disappointing. So the response appears to be rhetorical rather than genuinely engaging.

Of course I wish Melanie Windridge and her colleagues at Tokamak Energy good fortune in their work, and I would be delighted if they eventually show that my scepticism was misplaced. But if these links are the best that the privately-funded fusion industry can come up with to justify their research, then I feel confirmed in my previous opinions.

=============================================

Melanie Windridge,

Good Afternoon. Regarding the links and papers you sent me, I have listed my general and detailed comments below.

General Comments

None of the articles or papers you sent addressed the fundamental and specific criticisms that I mentioned in my articles (1 & 2). I have summarised these as points A to C below.

A. It is inconceivable that fusion energy will be cheap. In fact, it is likely to be so risky and capital intensive as to be unachievable. Those billions in research could be spent on things that could help with the climate emergency now.

B. All currently-conceived fusion technologies are not sustainable – they will be tritium burners coupled with a separate not-yet-demonstrated plan for breeding tritium.

C. Fusion is not necessary to create a low- or no-carbon grid. We are already well on the way to a grid generating less than 100 g CO2/kWh. Even on the most optimistic assumptions, fusion energy is at least 30 years away from beginning to contribute. Practical alternatives exist now and those billions of pounds of research funds could be spent in ways that will definitely make a difference now.

The links you sent don’t address these points. Based on the links and papers you sent me, your arguments in favour of ongoing Fusion Energy Research seem to group into four categories:

  1. Transformative Enabling Capabilities
  2. ‘New’ Physics
  3. The power of private enterprise.
  4. Grid Analysis

I have included detailed comments below, but these all seem to me to be based on – basically – optimism.

  1. Identifying seven Transformative Enabling Capabilities is really a detailed wish-list: it would indeed be great if these technologies existed. But they don’t.
  2. Similarly, it would be great if fusion scientists had been mistaken for the last half century and fusion turned out to be much easier and cheaper than ITER. I had read two of those physics papers previously, and I did not find them convincing then or now.
  3. The parallels with SpaceX are limited. SpaceX have done a great job in moving on a technology which had already worked successfully for 65 years or so. In contrast, 70 or so years of Fusion Research has produced nothing that works!
  4. Finally, the analysis of future grid requirements is fine – indicating that dispatchable carbon-free electricity can be expensive and still make economic sense. This shows that a market opportunity exists. However, the papers you sent do not speak to the question of whether Fusion will ever work! And even if it did, how it would compete when we already have lower risk, cheaper alternatives!

None of these points 1 to 4 address the points A to C above. They all simply assume that energy generation using Fusion can be made practical.

But the biggest problem is that even if it did work – which it doesn’t – it would still be a bad idea. We are in a crisis now, and we need solutions that will transform the energy landscape now. All the technologies to build a grid with close to zero carbon emissions exist now. Fusion is – at best – a distraction.

Detailed Comments

1      Transformative Enabling Capabilities

Perspectives On The FESAC Transformative Enabling Capabilities: Priorities, Plans, And Status by Arnold Lumsdaine et al

I only have the abstract of this paper. This tells me that progress in several areas would be transformational.

Advanced algorithms.

  • I also looked at your other reference on this topic. The gist seems to be that the plasma is so unstable that conventional engineering solutions cannot control it. The hope seems to be that a computer will ‘magically’ be able to control it. I use the word ‘magically’ here because the aim is to use computers to design and control reactors using physics which no human understands. I appreciate the need for this new technology – but please allow me to be sceptical that it will work.
  • Why am I sceptical? We are now 70 years into the ‘Fusion Project’ and the basic physics of how to establish and control a ‘fusing’ plasma has not yet been identified in principle, let alone implemented in practice.

High-critical-temperature superconductors.

  • I don’t know the details of your requirements in terms of critical fields, but I am profoundly sceptical about putting these extremely damage-sensitive materials next to a neutron source and expecting the materials to last for 30 years.

Advanced materials and manufacturing.

  • Why am I not surprised to find this here? The materials requirements for a fusion reactor are spectacular in two ways: (a) their extreme technical requirements, and (b) the degree of optimism that these requirements will somehow be met sometime soon.

Fast-flowing liquid metal plasma-facing components.

  • See above.

Novel technologies for tritium fuel cycle control.

  • Fusion Reactors as conceived presently do not use a ‘fuel-cycle’. They burn tritium and then hope to use an adjunct process to breed more, generating radioactive nuclear waste in the process.

In my opinion, these ‘Transformative Technologies’ might indeed be transformative. But then so would an anti-gravity device. These appear to be ‘Wish List Technologies’ rather than engineering reality.

2   Physics Research

  • On the power and size of tokamak fusion pilot plants and reactors by A.E. Costley, J. Hugill and P.F. Buxton
  • On the fusion triple product and fusion power gain of tokamak pilot plants and reactors by A.E. Costley
  • On the energy confinement time in spherical tokamaks: implications for the design of pilot plants and fusion reactors by P F Buxton, J W Connor, A E Costley, M P Gryaznevich and S McNamara

I had read the two Open Access papers a while ago and I have perused the abstract of the third paper. My précis is this: 70 years into the Fusion Project there is a hope that there exists a previously unidentified niche in the space of operating parameters in which fusion might conceivably make more sense.

3   SpaceX is not relevant to Fusion Research

The achievements of SpaceX are impressive but I don’t see the analogy with the fusion project.

People had been building working rockets for about 65 years before SpaceX started up. Their way of working was unusual in that field. Previously, only state actors had been involved in rocket launches, and because of that, a great deal of conservatism was built into projects.

SpaceX engineered a new way of operating. They tried new ideas – such as landing and use of multiple redundant engines etc. But how relevant is that experience to the ‘Fusion Project’?

It is true that until recently only state actors had been operational in the field of fusion research. But rather than discovering a route to fusion analogous to the rocketry that has already been working for decades, they discovered only a dead end – ITER et al. This is in stark contrast to the situation in rocket science.

Even at the start, SpaceX had solid engineering ideas, and they founded themselves in Southern California, which has a population of the world’s best aeronautical engineers who could implement them. The small fusion companies don’t have this engineering pathway – they are explorers rather than navigators. The transformational technologies identified in Section 1 above are things beyond their control which would – if they worked – help.

So I think the analogy with SpaceX is not appropriate.

4   Grid Analysis

The gist of these papers is that if fusion energy worked, and the colossal capital and risk requirements could be met, then there might be a market.

Personally I disagree. But none of these papers speak to the question of whether it will work, or address the question of whether the colossal capital and risk requirements could be met.

Paper 1: The Role of Firm Low-Carbon Electricity Resources in Deep Decarbonization of Power Generation Nestor A. Sepulveda, Jesse D. Jenkins, Fernando J. de Sisternes, Richard K. Lester

These authors reasonably conclude that, as we approach a zero-carbon grid, the average cost of dispatchable electricity generation rises as CO2 emissions approach zero. From a fusion perspective this might seem to give a market opening through which the impossibly high costs of fusion energy might be justified. However, I have three objections.

  • Fusion energy will not be ready even in 2050 when this would be required at scale.
  • Conventional Fission could probably perform this role adequately. It is certainly demonstrably possible.
  • Even burning gas with carbon capture would be achievable at lower cost than fusion and is MUCH closer to being feasible.

Web Page: Electrification and Decarbonisation: the Role of Fusion in Achieving a Zero-carbon Power Grid By SYSTEMIQ 12th July 2019

This is really just a piece of self publicity. Their conclusions are

Concluding, we make the case that, in the next two decades, there will be a large global market for baseload clean power to complement variable renewables if we are to mitigate the impact of climate change, by large segments of the economy to clean electricity. Fusion has the potential to be significantly cheaper than other clean baseload options [My emphasis] and should therefore be considered by policy-makers and investors as a climate change mitigation accelerator, to be pursued together with the continued deployment of all sources of renewable power generation that can bring down emissions in line with achieving a fully decarbonised power system by mid-century.

The assertion that I have highlighted is frankly laughable.

COVID-19: Day 127: I feel less optimistic

May 7, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.
========================================

In my last post (on day 121 of 2020) I indulged in a moment of optimism. I am already regretting it.

What caused my optimism?

My optimism arose because I had been focusing on data from hospitals: the so-called ‘Pillar 1’ data on cases diagnosed as people entered hospital, and the subsequent deaths of those people in hospital.

These were the data sets available at the outset, and they tell a story of a problem in the process of being solved.

My last post pointed out that each new ‘Pillar 1 case’ arose from an infection roughly 18 days previously. Applying a trend analysis to that data indicated that the actual rate of ongoing infection that gave rise to the Pillar 1 cases must currently be close to zero.

I think this conclusion is still correct. But elsewhere – particularly in care homes and peripheral settings – things are not looking so good.

Pillar 1 versus Pillar 2 Testing

Although each Pillar 1 or Pillar 2 ‘confirmed case’ designates a single individual with the corona-virus in their body, the two counts are not directly comparable.

  • Cases diagnosed by Pillar 1 testing correspond to individuals who have suffered in the community but their symptoms have become so bad, they have been admitted to hospital.
  • Cases diagnosed by Pillar 2 testing correspond to a diverse range of people who have become concerned enough about their health to ask for a test. This refers mainly to people working in ‘care’ settings.

Diagnosing Pillar 2 cases is important because they help to prevent the spread of the disease.

But whereas a person diagnosed as a Pillar 1 case is generally very ill – with roughly a 19% chance of dying within a few days – Pillar 2 cases are generally not so ill and are much less likely to lead to an imminent death.

Summarising:

  • Around 19% of Pillar 1 ‘Cases’ will die from COVID-19.
  • In Pillar 2 ‘Cases’ the link is not so strong, but these cases give an indication of the general prevalence of the virus.

We should also note that as the number of tests increases, the indication of prevalence given by Pillar 2 diagnoses will slowly become more realistic.

What does the data say: 3 Graphs

Graph#1 shows the number of cases diagnosed by Pillar 1 and Pillar 2 testing.


Pillar 1 diagnosed cases are falling relatively consistently: this is what led to my aberrant optimism. However Pillar 2 cases are rising.

This rise partly reflects the higher number of tests. But it also reveals more of the true breadth of the virus’s spread. This rise is – to me – alarming.

Graph#2 below shows Pillar 1 and Pillar 2 cases lumped together. This shows no significant decline.


However, because deaths are more closely associated with Pillar 1 diagnoses, the number of daily deaths (Graph#3) is declining in a way more closely linked to the fall in Pillar 1 cases.


Overall 

The NHS is coping – but the situation outside of hospitals looks like it is still not under control.

This reality is probably a consequence of the long-standing denial of the true importance of the care of elderly people, and the attempt to ‘relegate’ it from the ‘premier league’ of NHS care.

Considering the forthcoming lightening of regulations: it seems likely that viral spread in the community as a whole is currently very low, and so a wide range of activities seem to me likely to be very safe.

But the interface between high-risk groups – care workers in particular – and the rest of us is likely to be the area where the virus may spread into the general population.

===========================

Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

