COVID-19: Day 220 Update: Population Prevalence

August 9, 2020

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) suggest that the prevalence is broadly stable, but that there has been a small increase in prevalence over the last month or so.

The current overall prevalence is estimated to be around 1 in 1500, but some areas are estimated to have a much higher incidence.

Based on antibody studies, the ONS estimate  that 6.2 ± 1.3 % of the UK population have been ill with COVID-19 so far.

Population Prevalence

On 7th August the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for seven non-overlapping fortnightly periods covering the period from 27th April up until 2nd August.

Start of period of survey End of period of survey   Middle Day of Survey (day of year 2020) % testing positive for COVID-19 Lower confidence limit Upper confidence limit
27/4/2020 10/05/2020 125 0.34 0.24 0.48
11/05/2020 24/05/2020 139 0.30 0.22 0.42
25/05/2020 7/06/2020 153 0.07 0.04 0.11
8/06/2020 21/06/2020 167 0.10 0.05 0.18
22/06/2020 5/07/2020 181 0.04 0.02 0.08
6/07/2020 19/07/2020 195 0.06 0.03 0.10
20/07/2020 2/08/2020 209 0.08 0.05 0.13

Data from ONS on 7th August 2020

Plotting these data, I see no evidence of a continued decline. ONS modelling suggests the prevalence is increasing, but please note that this rate of increase is right at the limit of what can be concluded from these statistics.

Click for a larger version

It no longer makes sense to fit a curve to the data and to anticipate likely dates when the population incidence might fall to key values.

Below I have plotted the data with a logarithmic vertical axis to highlight how far we are from what might be considered ‘landmark’ achievements: passing the 1 in 10,000 and 1 in 100,000 barriers.

Click for a larger version

As I mentioned last week, given the increase in general mobility it is unrealistic to expect the prevalence to fall significantly in time for the start of the school term.

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point in the 7th August data set above corresponds to the counts below (a sketch of the statistical uncertainty they imply follows the list):

  • 41 positive cases detected from a sample of 11,390
  • 51 positive cases detected from a sample of 19,393
  • 17 positive cases detected from a sample of 22,647
  • 18 positive cases detected from a sample of 25,268
  • 12 positive cases detected from a sample of 26,419
  • 19 positive cases detected from a sample of 31,917
  • 24 positive cases detected from a sample of 28,501
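The ONS themselves use a weighted statistical model, so the numbers below will not exactly reproduce their published estimates. But purely to illustrate the statistical limit, here is a minimal sketch using a simple binomial estimate and a normal-approximation confidence interval for the counts listed above:

```python
from math import sqrt

# Positive counts and sample sizes for the seven fortnightly periods listed above
counts  = [41, 51, 17, 18, 12, 19, 24]
samples = [11390, 19393, 22647, 25268, 26419, 31917, 28501]

for k, n in zip(counts, samples):
    p = k / n                              # naive estimate of prevalence
    se = sqrt(p * (1 - p) / n)             # binomial standard error (normal approximation)
    lo, hi = p - 1.96 * se, p + 1.96 * se  # approximate 95% confidence interval
    print(f"{k:3d} / {n:6d}: {100*p:.2f}%  (95% CI {100*lo:.2f}% to {100*hi:.2f}%)")
```

With only 12 to 51 positives in each period, the relative uncertainty on each point is roughly 1/√N, i.e. around 15% to 30%, which is why week-to-week changes of this size sit right at the limit of what the survey can resolve.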

I feel obliged to state that I do not understand how ONS process the data.

Daily Deaths

Below I have also plotted recent data on the 7-day retrospective rolling average of the daily death toll. The red dotted lines highlight the two-week plateau in the data that was apparent last week. Pleasingly, the death rate has begun to fall again.

Click for larger version.

What is going on?

Friends, I have struggled in recent weeks to grasp the bigger picture of “what is going on” with the virus, but, like most people I guess, I can’t quite get my head around it.

As a consequence, I am ignoring intriguing articles such as this one in the Washington Post, which raises many more questions than it answers.

I feel the best thing I can do in the face of this tidal wave of uncertainty is to try to focus on the simple statistics that require only minimal theoretical interpretation.

I hate it when it’s too hot

August 7, 2020

 

I find days when the temperature exceeds 30 °C very unpleasant.

And if the night-time temperature doesn’t fall then I feel doubly troubled.

I have had the feeling that such days have become more common over my lifetime. But have they?

The short  summary is “Yes”. In West London, the frequency of days on which the temperature exceeds 30 °C has increased from typically 2 days per year in the 1950’s and 1960’s to typically 4 days per year in the 2000’s and 2010’s. This was not as big an increase as I expected.

On reflection, I think my sense that these days have become more common probably arises from the fact that up until the 1980’s, there were many years when such hot days did not occur at all. As the graph at the head of the article shows, in the 2010’s they occurred every year.

Super-hot days have now become normal.

You can stop reading at this point – but if you want to know how I worked this out – read on. It was much harder than I expected it would be!

Finding the data

First, please notice that this is not the same question as “has the average summer temperature increased?”

A single very hot day can be memorable but it may only affect the monthly or seasonal average temperatures by a small amount.

So one cannot merely find data from a nearby meteorological station….

…and plot it versus time. These datasets contain just the so-called ‘monthly mean’ data, i.e. the maximum or minimum daily temperature is measured for a month and then its average value is recorded. So individual hot days are not flagged in the data. You can see my analysis of such data here.

Instead one needs to find the daily data – the daily records of individual maximum and minimum temperatures.

Happily this data is available from the Centre for Environmental Data Analysis (CEDA). They host the Met Office Integrated Data Archive System (MIDAS) for land surface station data (1853 – present). It is available under an Open Government Licence i.e. it’s free for amateurs like me to play with.

I registered and found the data for the nearby Met Office station at Heathrow. There was data for 69 years from 1948 to 2017, with a single (comma-separated value) spreadsheet for maximum and minimum temperatures (and other quantities) for each year.

Analysing the data

Looking at the spreadsheets I noticed that the 1948 data contained daily maxima and minima. But all the other 68 spreadsheets contained two entries for each day – recording the maximum and minimum temperatures from two 12-hour recording periods:

  • the first ended at 9:00 a.m. in the morning: I decided to call that ‘night-time’ data.
  • and the second ended at 9:00 p.m. in the evening: I decided to call that ‘day-time’ data.

Because the ‘day-time’ and ‘night-time’ data were on alternate rows, I found it difficult to write a spreadsheet formula that would check only the appropriate cells.

After a day of trying to ignore this problem, I resolved to write a program in Visual Basic that could open each yearly file, read just the relevant single temperature reading from each alternate line, and save the counted data in a separate file. (A minimal sketch of this counting step appears after the worked example below.)

It took a solid day – more than 8 hours – to get it working. As I worked, I recalled performing similar tasks during my PhD studies in the 1980’s. I reflected that this was an arcane and tedious skill, but I was glad I could still pay enough attention to the details to get it to work.

For each yearly file I counted two quantities:

  • The number of days when the day-time maximum exceeded a given threshold.
    • I used thresholds in 1 degree intervals from 0 °C to 35 °C
  • The number of days when the night-time minimum fell below a given threshold
    • I used thresholds in 1 degree intervals from -10 °C to +25 °C

So for example, for 1949 the analysis tells me that there were:

  • 365 days when the day-time maximum exceeded 0 °C
  • 365 days when the day-time maximum exceeded 1 °C
  • 363 days when the day-time maximum exceeded 2 °C
  • 362 days when the day-time maximum exceeded 3 °C
  • 358 days when the day-time maximum exceeded 4 °C
  • 354 days when the day-time maximum exceeded 5 °C

etc…

  • 6 days when the day-time maximum exceeded 30 °C
  • 3 days when the day-time maximum exceeded 31 °C
  • 0 days when the day-time maximum exceeded 32 °C
  • 0 days when the day-time maximum exceeded 33 °C
  • 0 days when the day-time maximum exceeded 34 °C

From this data I could then work out that in 1949 there were…

  • 0 days when the day-time maximum was between 0 °C and 1 °C
  • 2 days when the day-time maximum was between 1 °C and 2 °C
  • 1 day when the day-time maximum was between 2 °C and 3 °C
  • 4 days when the day-time maximum was between 3 °C and 4 °C

etc..

  • 3 days when the day-time maximum was between 30 °C and 31 °C
  • 3 days when the day-time maximum was between 31 °C and 32 °C
  • 0 days when the day-time maximum was between 32 °C and 33 °C
  • 0 days when the day-time maximum was between 33 °C and 34 °C
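My actual analysis was written in Visual Basic and worked directly on the MIDAS spreadsheets. Purely as an illustration of the counting-and-differencing step just described, here is a minimal Python sketch; the file name and column name are hypothetical, and it assumes a simplified file in which the day-time (period ending 21:00) records sit on alternate rows:

```python
import csv

def count_days_exceeding(filename, thresholds):
    """Count the day-time maxima exceeding each threshold in one yearly file.

    This assumes a simplified CSV in which every second row is the 'day-time'
    record (the 12-hour period ending at 21:00) and the maximum temperature is
    in a column called 'max_air_temp'. The real MIDAS layout is more complicated.
    """
    exceed = {t: 0 for t in thresholds}
    with open(filename, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows[1::2]:                       # keep alternate rows: the day-time records
        tmax = float(row["max_air_temp"])
        for t in thresholds:
            if tmax > t:
                exceed[t] += 1
    return exceed

thresholds = list(range(0, 36))                  # 0 °C to 35 °C in 1 °C steps
exceed = count_days_exceeding("heathrow_1949.csv", thresholds)

# The number of days with a maximum *between* t and t+1 °C follows by differencing
# the cumulative counts, exactly as in the worked example above.
in_band = {t: exceed[t] - exceed[t + 1] for t in thresholds[:-1]}
```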

Variable Variability

As I analysed the data I found it was very variable (Doh!) and it was difficult to spot trends amongst this variability. This is a central problem in meteorology and climate studies.

I decided to reduce the variability in two ways.

  • First I grouped the years into decades and found the average numbers of days in which the maximum temperatures lay in a particular range.
  • Then I increased the temperature ranges from 1 °C to 5 °C.

These two changes meant that most groups analysed had a reasonable number of counts. Looking at the data I felt able to draw four conclusions, none of which were particularly surprising.

Results: Part#1: Frequency of very hot days

The graph below shows that at Heathrow, the frequency of very hot days – days on which the maximum temperature was 31 °C or above – has indeed increased over the decades, from typically 1 to 2 days per year in the 1950’s and 1960’s to typically 3 to 4 days per year in the 2000’s and 2010’s.

I was surprised by this result. I had thought the effect would be more dramatic.

But I may have an explanation for the discrepancy between my perception and the statistics. And the answer lies in the error bars shown on the graph.

The error bars shown are ± the square root of the number of days – a typical first guess for the likely variability of any counted quantity.

So in the 1950’s and 1960’s it was quite common to have years in which the maximum temperature (at Heathrow) never exceeded 30 °C. Between 2010 and 2017 (the last year in the archive) there was not a single year in which temperatures did not reach 30 °C.

I think this is closer to my perception – it has become the new normal that temperatures in excess of 30 °C occur every year.

Results: Part#2: Frequency of days with maximum temperatures in other ranges

The graph above shows that at Heathrow, the frequency of days with maxima above 30 °C has increased.

The graphs below show how, at Heathrow, the frequency of days with maxima in the ranges shown has changed over the decades:

  • The frequency of ‘hot’ days with maxima in the range 26 °C to 30 °C has increased from typically 10 to 20 days per year in the 1950s to typically 20 to 25 days per year in the 2000’s.

  • The frequency of ‘warm’ days with maxima in the range 21 °C to 25 °C has increased from typically 65 days per year in the 1950s to typically 75 days per year in the 2000’s.

  • The frequency of days with maxima in the range 16 °C to 20 °C has stayed roughly unchanged at around 90 days per year.

  • The frequency of days with maxima in the range 11 °C to 15 °C appears to have increased slightly.

  • The frequency of ‘chilly’ days with maxima in the range 6 °C to 10 °C has decreased from typically 70 days per year in the 1950’s to typically 60 days per year in the 2000’s.

  • The frequency of ‘cold’ days with maxima in the range 0 °C to 5 °C has decreased from typically 30 days per year in the 1950’s to typically 15 days per year in the 2000’s.

Taken together this analysis shows that:

  • The frequency of very hot days has increased since the 1950’s and 1960’s, and in this part of London we are unlikely to ever again have a year in which there will not be at least one day where the temperature exceeds 30 °C.
  • Similarly, cold days in which the temperature never rises above 5 °C have become significantly less common.

Results: Part#3: Frequency of days with very low minimum temperatures

While I was doing this analysis I realised that with a little extra work I could also analyse the frequency of nights with extremely low minima.

The graph below shows the frequency of night-time minima below -5 °C across the decades. Typically there were 5 such cold nights per year in the 1950’s and 1960’s, but now there are typically just one or two such nights each year.

Analogous to the absence of years without day-time maxima above 30 °C, years with at least a single occurrence of night-time minima below -5 °C are becoming less common.

For example, in the 1950’s and 1960’s, every year had at least one night with a minimum below -5 °C at the Heathrow station. In the 2000’s only 5 years out of 10 had such low minima.

Results: Part#4: Frequency of days with other minimum temperatures

For the Heathrow Station, the graphs below show the frequency of days with minima in the range shown:

  • The frequency of ‘cold’ nights with minima in the range -5 °C to -1 °C has decreased from typically 45 days per year in the 1950’s to typically 25 days per year in the 2000’s.

  • The frequency of ‘cold’ nights with minima in the range 0 °C to 4 °C has decreased from typically 95 days per year in the 1950’s to typically 80 days per year in the 2000’s.

  • The frequency of nights with minima in the range 5 °C to 9 °C has remained roughly unchanged.

  • The frequency of nights with minima in the range 10 °C to 14 °C has increased from typically 90 days per year in the 1950’s to typically 115 days per year in the 2000’s.

  • The frequency of ‘warm’ nights with minima in the range 15 °C to 19 °C has increased very markedly from typically 12 days per year in the 1950’s to typically 30 days per year in the 2000’s.

  • ‘Hot’ nights with minima above 20 °C are still thankfully very rare.

 

Acknowledgements

Thanks to Met Office stars

  • John Kennedy for pointing to the MIDAS resource
  • Mark McCarthy for helpful tweets
  • Unknown data scientists for quality control of the Met Office Data

Apologies

Some eagle-eyed readers may notice that I have confused the boundaries of some of my temperature range categories. I am a bit tired of this now but I will sort it out when the manuscript comes back from the referees.

COVID-19: Day 212 Update: Population Prevalence

July 31, 2020

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) suggest even more clearly than last week that there has been a small increase in prevalence.

The current overall prevalence is estimated to be 1 in 1500, but some areas are estimated to have a much higher incidence.

Overall, the ONS estimate that 6.2 ± 1.3 % of the UK population have been ill with COVID-19 so far.

Population Prevalence

On 31st July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for six non-overlapping fortnightly periods covering the period from 4th May up until 26th July.

Start of period of survey End of period of survey   Middle Day of Survey (day of year 2020) % testing positive for COVID-19 Lower confidence limit Upper confidence limit
04/05/2020 17/05/2020 132 0.35 0.23 0.52
18/05/2020 31/05/2020 144 0.15 0.08 0.25
1/06/2020 14/06/2020 160 0.07 0.03 0.13
15/06/2020 28/06/2020 174 0.09 0.05 0.16
29/06/2020 12/07/2020 188 0.05 0.03 0.09
13/07/2020 26/07/2020 202 0.09 0.06 0.14

Data from ONS on 31st July

Plotting these data, I see no evidence of a continued decline. ONS modelling suggests the prevalence is actually increasing.

Click for a larger version.

Because of this it no longer makes sense to fit a curve to the data and to anticipate likely dates when the population incidence might fall to key values.

Click for a larger version.

In particular, things look grim for an untroubled return to schools. Previously – during full lockdown – we achieved a decline in the prevalence of COVID-19 by a factor of 10 in roughly 45 days.

The start of the school term is just 35 days away and – given the much greater activity now compared with April – it is unrealistic to expect the prevalence to fall by the required factor of 66 to the 1 in 100,000 level in time for the start of the school term: even at the lockdown rate of a factor of 10 every 45 days, 35 days would give a fall of only about 10^(35/45) ≈ 6.

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the 31 July data set above corresponds to:

  • 51 positive cases detected from a sample of 16,236
  • 32 positive cases detected from a sample of 20,390
  • 13 positive cases detected from a sample of 25,519
  • 18 positive cases detected from a sample of 23,767
  • 19 positive cases detected from a sample of 31,542
  • 24 positive cases detected from a sample of 28,325

I feel obliged to state that I do not understand how ONS process the data, because historical data points seem to change from one analysis to the next. But I suspect they are just doing something sophisticated that I don’t understand.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from the start of June.

Click for a larger version.

A close-up graph shows the death rate is not convincingly falling at all, and so, unless there is some change in behaviour, death rates from coronavirus of tens of people per day are likely to continue for several months yet.

Click for a larger version.

The trend value of deaths (~65 per day) is consistently higher than the roughly 12 deaths per day that we might have expected based on trend behaviour at the start of June.

In future updates I will no longer plot the World-o-meter projection because it is clearly no longer relevant to what is happening in the UK.

My House: comparing models and measurements

July 28, 2020

I began my last article about my house by explaining that I have used both measurements and modelling to plan thermal improvements.

However, I did not answer the question:

  • Does the thermal model agree with the measurements?

In this article I will compare them and show that the agreement is good enough to use the model as a basis for planning further work.

The Measurements

There are two key measurements:

  • I read my gas meter roughly once a week.
    • I subtract the reading from the previous week’s reading to find out how many hundreds of cubic feet of gas were consumed that week.
    • I then work out how much energy this corresponds to. You can use this calculator for your own meter.
    • I then work out the average rate at which the energy was used by dividing the amount of energy by the time since the last reading.
    • This gives the average power used in watts (W). (See the sketch after this list.)
  • I read my weather station.
    • I record the average weekly temperature.
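As an illustration of the arithmetic, here is a minimal sketch of the weekly calculation. The conversion factors are the standard ones used on UK gas bills for an imperial meter, but the calorific value drifts slightly and should be taken from your own bill:

```python
# Convert a weekly gas-meter difference (in hundreds of cubic feet) into an average power.
# The conversion factors below are the standard ones printed on UK gas bills for an
# imperial meter; the calorific value varies slightly and should be taken from your bill.

HUNDRED_CUBIC_FEET_IN_M3   = 2.8317    # 100 cubic feet expressed in cubic metres
VOLUME_CORRECTION          = 1.02264   # standard temperature/pressure correction
CALORIFIC_VALUE_MJ_PER_M3  = 39.5      # assumed calorific value of the gas

def weekly_average_power_watts(meter_units_used, days=7.0):
    """Average gas power (W) from the number of meter units (hundreds of cubic feet) used."""
    volume_m3  = meter_units_used * HUNDRED_CUBIC_FEET_IN_M3
    energy_kwh = volume_m3 * VOLUME_CORRECTION * CALORIFIC_VALUE_MJ_PER_M3 / 3.6
    energy_j   = energy_kwh * 3.6e6
    return energy_j / (days * 24 * 3600)

# Example: 3 meter units in one week corresponds to roughly 570 W of average gas power
print(round(weekly_average_power_watts(3)))
```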

The Model

The model is an attempt to explain the gas consumption in terms of a single number that characterises the thermal transmission from the inside to the outside of the house.

The thermal transmission is measured in watts per degree Celsius of temperature difference (W/°C).

Comparing the model and the measurements.

Previously (link) I explained how I calculated the thermal transmission through the walls of the house. And I then used this to estimate how the thermal transmission would be affected by various planned changes.

  • But how do I know if those calculations are reliable?

To check this I begin with the gas consumption data for the 80 weeks or so for which I have measurements. I have smoothed this data, with each point being a 5-week symmetrical running average, i.e. the average consumption from 2 weeks before to 2 weeks after the time at which it is plotted.

Click for a larger version.

This shows that in the summer, the average rate of gas consumption is around 200 watts.

Since the space-heating is not used in the summer, I assume this 200 W is due to the use of gas for cooking and heating water for showers. I assume that this gas consumption continues unchanged through the year.

I then assume that the excess winter use is solely caused by the lower average weekly external temperature.

Mathematically I expect the gas consumption to be given by the formula:

Click for a larger version.
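The formula in the image is not reproduced here, but from the description above (a roughly 200 W baseline plus a term proportional to the temperature difference) it is presumably of the form:

```latex
P_{\mathrm{gas}} \;\approx\; 200~\mathrm{W} \;+\; H \times \left( T_{\mathrm{internal}} - T_{\mathrm{external}} \right)
```

where P_gas is the average rate of gas consumption in watts, H is the thermal transmission of the house in W/°C, and the internal temperature is taken to be a constant 18 °C (see ‘Other considerations’ below).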

Next, alongside the measured gas consumption, we can plot the gas consumption that the equation above predicts, based on:

  • The calculated properties of the house looked up from data sheets about building materials and windows, and dimensional measurements of the house.
  • The difference between the internal and external temperatures as worked out from weather station readings.

The graph below shows the model with a transmission of 298 W/°C – the value I calculated as appropriate to the winter of 2018/2019.

Click for a larger version.

You can see that the dotted-red curve matches the experimental gas consumption data reasonably well in the cold winter months – except during the coldest winter weather (around day 25).

You can also see that during the following winter of 2019/2020 the model predicts that there should have been substantially more gas consumption than there actually was.

  • Was this due to the £7000 worth of triple-glazing I installed?

My calculations suggested that after the triple-glazing was installed the transmission should have been reduced to 260 W/°C. This curve is plotted below:

Click for a larger version.

You can see that with a transmission of 260 W/°C the model curve describes the data for the winter of 2019/2020 reasonably well.

I was pleased to see this: this is the first data I have ever seen which quantitatively verifies the effect of triple-glazing.

This gives me confidence that this crude model is describing heat transmission through my house reasonably well.

That is why I feel confident that spending a further £3,000 on finishing the triple-glazing, and £20,000 on external wall insulation, will hopefully reduce the transmission to 152 W/°C. That curve is shown on the figure below.

Click for a larger version.

How good is this level of insulation?

My expectation is that after this summer’s modifications, this house – with a floor area of almost 180 square metres – will require barely more than 2 kW of winter heating.

Over a year it would require typically 8000 kWh of heating, or 44 kWh per square metre per year.

If this performance level is verified then (according to OVO Energy) the house will require less than the average in every European country except Portugal: the UK average is 133 kWh per square metre per year.

This is still not good enough to achieve ‘passivhaus’ status (Links 1, 2) – which requires less than 15 kWh per square metre per year – or even the ‘Passivhaus Retrofit’ standard EnerPHit (Link), which requires less than 25 kWh per square metre per year. But it would still be exceptional for an old UK house.

Other considerations

Despite the fact that the graphs above have worked out nicely, there is still considerable uncertainty about the way the house performs.

For example, I don’t really know the significance of several factors such as heat loss through air flow, and heat loss through the floors, both of which are little more than guesses. I am concerned I may have underestimated these processes in which case the effect of the external wall insulation will not be as large as I anticipate.

And I have assumed that the internal temperature was a constant 18 °C. It’s not clear whether this is the best estimate – perhaps it should be 19 °C or 20 °C?

So the fact that these modelled results look good indicates either that these assumptions are about right, or that a combination of factors has by chance made the agreement look good.

One interesting feature of the data is that while the single parameter for heat transmission describes the winter and summer data well – it does not describe the spring and autumn data well.

The model always predicts higher gas usage than actually occurs in the spring and autumn. Look for example at the data from days 250 to 320 and from 450 to 550 on the second model.

Click for a larger version.

I do not know what causes this, but it may be that in the transitional seasons the pattern of gas usage differs from being almost always on (in winter) or always off (in summer). I tried adding an extra parameter to describe this effect, but it didn’t add a lot to the explanatory power of the model.

In short, the model is simple and the reality is complex, but answering the question I asked at the start of this article:

  • Does the thermal model agree with my measurements?

I think the answer is “Yes” – it’s good enough to guide my choices.

Previous articles about my house.

Masks revisited

July 25, 2020

Michael in a mask

Back in April…

I wrote an article Life beyond lock-down: Masks for all? where I asked the question:

  • Will we all be wearing masks in public for the next year or two?

The question was prompted by a good friend who had sent me a link to a video which advocated the wearing of masks in public as a successful strategy for combating the transmission of coronavirus.

One of the key pieces of evidence offered in the video was the effectiveness of even primitive masks in inhibiting virus transmission in Czechia.

Apparently, mask-wearing in public became de rigueur in Czechia right from the start, and this corresponded – apparently – with a low incidence of COVID-19.

I decided to look at the data for the number of deaths per million of the population in the countries of Europe as recorded on Worldometer. The results from 2nd April 2020 are shown below.

Number of deaths per million of population of countries in Europe on 2nd April 2020. See text for details. Czechia is highlighted in yellow. Click for larger version

I concluded that:

  • Czechia did not stand out from its neighbours as having an especially low death rate, at that time.
  • So even though the idea of wearing a mask in public was not unreasonable, the data themselves did not seem to speak to the effectiveness of the habit.

This tied in with the conclusions of an extensive Ars Technica article on the subject.

…but what about now?

Now, at the end of July, after an eventful 104 days, masks are compulsory on public transport and in shops throughout England.

So I thought this might be a good time to look back at Czechia and see how it had fared through what I believe we are calling ‘the First Wave’.

  • Did mask-wearing work out well in Czechia?

Today’s data from World-o-meter on the number of deaths per million of population are captured below, but this time colour-coded.

Number of deaths per million of population of countries in Europe on 25 July 2020. See text for details. Czechia is highlighted in yellow with a red border. Click for larger version

Well, Czechia has done well, being one of a small group of countries with fewer than 50 deaths per million of population. But other countries in which mask-wearing was not touted from the start as a feature of pro-social behaviour have also done well.

The countries that have not done well all have large populations with massive inflows of visitors. Of the large, heavily visited countries, only Germany seems to stand out as having done well.

Of course there are many factors at play, and it may be that Czechia’s mask-wearing habit has indeed been effective. But it has certainly not been a panacea.

And yet here we are, and it does look as though mask-wearing has already become accepted by the vast majority of people as a reasonable precaution, even though the data continue to be equivocal.

Curiously

Looking back at the 3rd April article, I cited the United States of America as being an exemplar of the mask-wearing habit. How things change.

 

NYT Tracker for Czechia

Headlines from papers on 2nd April 2020

COVID-19: Day 205 Update: Population Prevalence

July 25, 2020

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) covers only two weeks after the ‘opening up’ on July 4th (Day 185 of 2020) and suggest that there has been a small increase in prevalence.

However, the ONS data do not have the resolution to rapidly detect slow increases or decreases in prevalence, so we need to keep measuring in order to detect any sustained slow increase at the earliest opportunity.

Population Prevalence

On 24th July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for six non-overlapping fortnightly periods covering the period from 27th April up until 19th July.

Start of period of survey End of period of survey   Middle Day of Survey (day of year 2020) % testing positive for COVID-19 Lower confidence limit Upper confidence limit
27/4/2020 10/05/2020 125 0.33 0.22 0.48
11/05/2020 24/05/2020 139 0.30 0.18 0.46
25/05/2020 7/06/2020 153 0.07 0.03 0.12
8/06/2020 21/06/2020 167 0.10 0.05 0.18
22/06/2020 5/07/2020 181 0.04 0.02 0.08
6/07/2020 19/07/2020 195 0.05 0.04 0.11

Data from ONS on 24th July

Plotting these data on a logarithmic graph we see a decreasing trend.

An (unweighted) exponential fit indicates a factor 10 reduction in prevalence every 74 days, longer than the previous estimate of 61 days, which was itself longer than the previous estimate of 51 days.
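As a cross-check on that figure, here is a minimal sketch of the same kind of unweighted fit, using numpy and the mid-survey days and percentages from the table above; it reproduces a factor-of-10 time of roughly 74 days:

```python
import numpy as np

# Mid-survey day of year and % testing positive, taken from the 24th July ONS table above
day = np.array([125, 139, 153, 167, 181, 195])
pct = np.array([0.33, 0.30, 0.07, 0.10, 0.04, 0.05])

# Unweighted straight-line fit to log10(prevalence) versus day of year
slope, intercept = np.polyfit(day, np.log10(pct), 1)

days_per_factor_of_10 = -1.0 / slope
print(f"Factor-of-10 reduction every {days_per_factor_of_10:.0f} days")   # roughly 74 days
```

The dates in the table below follow from extending this fitted line until the prevalence reaches the 1 in 10,000 and 1 in 100,000 levels.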

It is not clear that the exponential fit correctly describes the data (as it should for a declining epidemic), but if we extrapolate the trend of the data we can find likely dates when the population incidence might fall to key values.

Prevalence Date Cases in the UK
1 in 1,000 End of May About 60,000
1 in 10,000 End of August About 6,000
1 in 100,000 End of October About 600

These dates are later than previously anticipated, and worryingly the data confirm that it is unlikely that the prevalence will reach the 1 in 100,000 level in time for the start of the school term.

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the July 24th data set above corresponds to:

  • 40 positive cases detected from a sample of 11,346
  • 50 positive cases detected from a sample of 19,354
  • 17 positive cases detected from a sample of 22,570
  • 18 positive cases detected from a sample of 25,200
  • 12 positive cases detected from a sample of 26,332
  • 19 positive cases detected from a sample of 30,260

Incidentally, the ONS also include data on the number of households sampled, and, in cases where someone in a household tests positive, there are typically two positive cases in that household.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from the start of June.

Click for a larger version

This data shows the death rate is still falling, but only slowly. The trend value of deaths (~75 per day) is consistently higher than the roughly 19 deaths per day that we might have expected based on trend behaviour at the start of June.

This indicates that death rates of tens of people per day are likely to continue for several months yet.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

Meteorological Thermometers are Sloooooooow!

July 23, 2020

Abstract: Laboratory measurements of the response time of thermometers used in typical meteorological applications reveal that they respond more slowly than generally thought. None of the tested thermometers met the WMO guideline response time of 20 seconds in a wind speed of 1 m/s.

Friends – Together with the meteorologist’s meteorologist Stephen Burt, I have published an academic paper – possibly my last. It’s a simple paper following the dictum of the metrologist’s metrologist, Michael Moldover: One paper: one thing.

You can read it for free here: Response times of meteorological air temperature sensors.

Land Surface Air Temperature (LSAT)

Air temperature measurements taken over the land surface of the Earth (LSAT) are the primary measurand in humanity’s assessment of the extent of global warming.

And air temperature measurements are also critical for assessing the accuracy of weather forecasts.

However air temperature measurements are difficult. They are subject to a number of systematic errors that arise because of the low thermal conductivity and low heat capacity of air.

These effects make it tricky to ensure good thermal contact between the air and thermometers and make the readings of thermometers sensitive to even very low levels of radiative heating.

I have written about this in a previous academic paper which you can read for free here: Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology.

The WMO CIMO guidelines

So it’s important that air temperature measurements are made in a standardised manner worldwide. This makes it possible for scientists to assess the data for possible systematic effects.

For this reason the Commission for Instruments and Methods of Observation (CIMO) of the World Meteorological Organisation (WMO) publish a guide (the so-called CIMO Guide, which you can read for free here: CIMO guide) to which manufacturers refer when producing equipment.

I was honoured to represent the International Bureau of Weights and Measures on the committee that last reviewed the CIMO guide on temperature measurement and I ‘stuck my oar in’ on one or two issues!

But one issue on which I was silent was the response time of thermometers. I was silent because I didn’t have any idea what the response time was or indeed what response time was desirable.

Response Time

How rapidly should a meteorological air-thermometer respond? There is no definitively correct answer.

  • If it responds too rapidly:
    • …then the reading will fluctuate with local temperature variations and it will wastefully require many readings in order to estimate the average air temperature.
  • If it responds too slowly:
    • …then the reading will fail to track local maximum and minimum temperatures.

This is why world-wide standardisation of meteorological equipment is important.

In the paper we report:

  • Measurements of the response to a step change in temperature of a range of thermometers taken in a simple wind tunnel at a range of air speeds.
  • Analysis of this data to extract a time constant that characterises the thermometers (a sketch of this idea is shown after this list).
  • Further analysis to explain the measurements in terms of the mechanisms of heat transfer between the air and thermometer.
  • A suggested rule-of-thumb for estimating the time constant of any thermometer.
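The full analysis is in the paper. Purely to illustrate what ‘extracting a time constant’ means (as mentioned in the list above), here is a minimal sketch assuming an idealised first-order sensor and a simulated step change; the 45 s value is illustrative only, and the real response depends on air speed, as the paper shows:

```python
import numpy as np

# Idealised first-order sensor: after a step change in air temperature the reading relaxes as
#     T(t) = T_air + (T_start - T_air) * exp(-t / tau)
# Here we simulate a noisy step response and recover tau from a straight-line fit to the
# logarithm of the remaining temperature difference. The 45 s value is illustrative only.

rng = np.random.default_rng(0)
tau_true = 45.0                                    # seconds (illustrative)
t = np.arange(0.0, 300.0, 5.0)                     # 5 s sampling for 5 minutes
T_start, T_air = 20.0, 25.0
reading = T_air + (T_start - T_air) * np.exp(-t / tau_true)
reading += rng.normal(0.0, 0.02, size=t.size)      # ~20 mK of measurement noise

excess = np.abs(T_air - reading)                   # remaining temperature difference
mask = excess > 0.1                                # ignore points lost in the noise
slope, intercept = np.polyfit(t[mask], np.log(excess[mask]), 1)
tau_fitted = -1.0 / slope
print(f"fitted time constant: {tau_fitted:.1f} s") # close to the 45 s used in the simulation
```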

We then suggest how people should respond to the fact that none of the thermometers met the CIMO Guidelines.

So all that awaits you in our paper (which you can read for free) over at the Quarterly Journal of the Royal Meteorological Society.

Other books by Stephen Burt 

If you like our paper then you may also be interested in other books by Stephen Burt. Sadly these books are not free. 😦

Estimating the expected thermal performance of my house

July 22, 2020

//trigger warning// This article is long and dull. It’s about estimating the thermal performance of my home. //trigger warning//

Friends, I have used two kinds of analysis to enable me to plan the thermal improvement of my house.

  • The first analysis involves measuring the thermal performance of the house:
    • I have explained how to do this previously (link) using weekly gas-meter readings and local weather data.
    • This allows me to see whether any changes I make have affected anything.
  • The second analysis involves modelling the thermal performance I should expect from my house.
    • This allows me to anticipate the likely costs and benefits of a range of possible changes.
    • That’s what this article is about.

Both these steps are important.

Thermal Model

My basic thermal model of my house assumes that heat flows from the inside of the house to the outside through the ‘building envelope’. This term describes all the building elements that separate the inside of the ‘envelope’ (where I live!) from the outside.

In this article I will consider these elements under four categories:

  1. Windows & doors
  2. Walls & Roof
  3. Floors
  4. Air flow

The basic assumption in the model is that the amount of heat flowing through each ‘building element’ is proportional to:

  • Its area (measured in metres squared, m^2)
  • The temperature difference between the inside and outside (measured in °C)

This ignores other important factors such as whether it is windy or rainy, or the action of the Sun in heating the house. These are limitations of the model.

The thermal performance of building elements is most commonly specified by a U-value which states how many watts of heat will flow through the one square metre of the element when there is a temperature difference of 1 °C between its internal and external surfaces.
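In symbols, the assumption for each element is just:

```latex
Q \;=\; U \times A \times \Delta T
```

where Q is the heat flow in watts, A is the area of the element in m^2, and ΔT is the inside–outside temperature difference in °C.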

So to model the house:

  • I made a list of all the ways in which heat can leave the interior of the house:
    • i.e. all the building elements involved in the envelope of the house: windows, doors, walls, floors, etc.
  • I measured the physical size of each building element.
  • I used educated guesswork (link) to estimate the thermal performance (U-value) of each building element
    • The values I used are in the table below.

  • I then multiplied the area (m^2) of each building element by the U-value (W/m^2/°C) to get the amount of heat transmitted through that element per degree of temperature difference (W/°C)
  • I then added up all the transmission values (W/°C). (A short sketch of this sum is shown below.)
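The spreadsheet linked below does exactly this, element by element. As a minimal sketch of the same sum (the element names, areas and U-values here are made up for illustration and are not my actual figures):

```python
# Illustrative only: a few made-up building elements with area (m^2) and U-value (W/m^2/°C).
# These are not my actual figures - the real list is in the spreadsheet linked below.
elements = [
    ("single-glazed window",   2.0, 4.8),
    ("modern double glazing",  6.0, 1.4),
    ("solid brick wall",      35.0, 2.0),
    ("insulated cavity wall", 40.0, 0.6),
    ("loft",                  60.0, 0.11),
]

# The transmission of each element (W/°C) is U x A; the whole-house figure is the sum.
transmissions = {name: u * area for name, area, u in elements}
total = sum(transmissions.values())

for name, h in sorted(transmissions.items(), key=lambda item: -item[1]):
    print(f"{name:24s} {h:6.1f} W/°C")
print(f"{'TOTAL':24s} {total:6.1f} W/°C")

# Heat loss when the outside is 10 °C colder than the inside:
print(f"Heat flow at a 10 °C temperature difference: {10 * total:.0f} W")
```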

You can download the spreadsheet that I used here (Thermal Model of House [Excel format]) in case it helps you with your own calculations.

Let’s start with the windows and doors.

1. Windows & doors

The table below (click it to see an enlarged version) shows:

  • A label for each window (or door) so I don’t get confused
  • Its basic dimensions, from which I calculate the area of each item
  • A categorisation into one of 4 types of window: Single-glazed, 25 year-old double glazing, modern double glazing, and triple glazing.
  • The U-value associated with that type of window
  • The transmission through the window is the product of its U-value and its area.
    • I have colour-coded the transmission column to highlight the worst performing windows.
  • Finally, I added up the transmissions to give a total transmission through all the windows and doors of 79.1 W/°C.

Please don’t be fooled by my use of a single decimal place – the uncertainty in this estimate is around 10%.

Thus my guess is that when the temperature outside falls 10 °C below the internal temperature, heat will flow out through the windows and doors at a rate of 10 °C x 79.1 W/°C = 791 watts.

The table above refers to the situation in 2018. Last year (2019) I replaced several windows and this year (2020) I will replace one more door and the remaining poor quality windows.

When a building element is replaced, its area remains the same, so I can estimate the new transmission from the area and the new U-value, and hence estimate the impact of the changes I have made.

The table below shows my estimate of the effect of these changes in 2019 and 2020:


When the changes are made this year, the transmission through the windows will be around 30 W/°C down from the roughly 80 W/°C back in 2018.

We’ll see how this compares with the overall heat loss from the house at the end of the article.

I have retained the back door for sentimental reasons even though it does not perform well thermally. That is because the house is for the people I love, and sentimental attachment to particular architectural features is a common problem when upgrading buildings.

  • Thermal perfection is worth nothing without domestic harmony.

2. Walls & Roof

My house is a 1930’s end-of-terrace house which has been extended several times over the last 50 years. Construction techniques have changed a good deal over that time and so there is considerable uncertainty about exactly how some walls are constructed.

My estimates are summarised in the table below (click it to see an enlarged version). It shows:

  • A label for each roof or wall element.
  • Its basic dimensions, from which I calculate the area of each item.
    • I then subtract the area of any windows or doors to get the net area.
  • A categorisation into one of 4 types of wall: Solid Brick, Cavity Wall, Insulated Cavity wall, Externally-Insulated Wall.
  • The roof (labelled ‘loft’) is extraordinarily well-insulated with approximately 200 mm of Celotex insulation.
  • The U-value associated with that type of wall
  • The transmission through the element is the product of its U-value and its area.
    • I have colour-coded the transmission column to highlight the worst performing walls.
  • Finally, I added up the transmissions to give a total transmission through all the walls and roof of 148.7 W/°C.

The uncertainty in this estimate is probably around 10%.

Thus my guess is that when the temperature outside falls 10 °C below the internal temperature, heat will flow out through the walls and roof at a rate of 10 °C x 148.7 W/°C = 1487 watts.

Note that I haven’t included the wall between my house and my neighbour’s house. This is because I think the temperature difference between our two houses is likely to be small and so I have assumed there will be negligible heat flow.

The table above refers to the situation in 2018. This year (2020) I will clad most of the external walls with External Wall Insulation (EWI).

To calculate the new value of the “wall + cladding”, one has to add the U values using an odd formula.

So for example, if an existing solid brick wall has a U-value of 2 W/m^2/°C and it is clad with EWI with a U-value of 0.17 W/m^2/°C, then the combined U-value is given by:
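The formula image is not reproduced here, but from the numbers quoted it is evidently the usual series addition of thermal resistances (the resistance of each layer being the reciprocal of its U-value):

```latex
\frac{1}{U_{\mathrm{combined}}} \;=\; \frac{1}{U_{\mathrm{wall}}} + \frac{1}{U_{\mathrm{EWI}}}
\;=\; \frac{1}{2} + \frac{1}{0.17} \;\approx\; 6.4~\mathrm{m^2\,^\circ C/W}
\qquad\Rightarrow\qquad
U_{\mathrm{combined}} \;\approx\; 0.16~\mathrm{W/m^2/^\circ C}
```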

The combined U-value of 0.16 W/m^2/°C is a little better than either building element by itself.

The table below shows my estimate of the effect of the changes I have planned for this year:

When the cladding is finished, I estimate the transmission through the walls will be around 28 W/°C down from the current value of roughly 150 W/°C.

We’ll see how this compares with the overall heat loss from the house at the end of the article.

3. Floor

I have found it very difficult to estimate the heat flow through the floor of the house.

For this reason I have used a U-value that is little more than a guess: U = 0.7 W/m^2/°C.

My estimates are summarised in the table below (click it to see an enlarged version). It shows the area of each room on the ground floor of the house multiplied by this guess at a U-value.

The uncertainty on this figure is difficult to assess – but is probably around 20%.

Thus my guess is that when the temperature outside falls 10 °C below the internal temperature, heat will flow out through the floor at a rate of 10 °C x 50.5 W/°C = 505 watts

Unfortunately, it isn’t easy to do anything about the heat loss through the floor without taking up the floor and insulating underneath.

If we do any work on the house in coming years, we may try to do this, but at the moment, I can’t see any easy way to improve this.

4. Air flow

Heat is also carried from the interior of the building envelope to the exterior by air flows. But air flows are difficult to measure and hence difficult to manage.

As the house is now, there are two obviously ‘draughty’ elements – both doors. I will replace one door and improve the draught-proofing on the other.

But otherwise the house feels fine and is not stuffy. So for the moment I have decided to leave the air flow as it is, and I have simply guessed that air flow transmittance is 20 W/°C – but this is really just a guess.

However I am investigating the use of a carbon dioxide monitor as a tracer for air flow. The idea is to model the rate at which the concentration of carbon dioxide in the air increases due to breathing and cooking. If the house was perfectly sealed, the carbon dioxide levels would rise indefinitely. So the limiting value of carbon dioxide depends on the rate at which air leaks from the house. I’ll write about this some other time.
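For the record, one simple way to write down that idea (my own sketch, assuming a single well-mixed volume of air) is:

```latex
V \frac{dC}{dt} \;=\; G \;-\; q\,\left(C - C_{\mathrm{out}}\right)
\qquad\Rightarrow\qquad
C_{\infty} \;=\; C_{\mathrm{out}} + \frac{G}{q}
```

where V is the volume of air in the house, C the indoor carbon dioxide concentration, C_out the outdoor concentration, G the rate at which breathing and cooking generate carbon dioxide, and q the volume flow rate of air leaking out of the house. Fitting the measured rise and plateau of C would give an estimate of q.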

Summary

Bringing all this together:

  • I can estimate the thermal performance of the whole house and see how it compares with my measurements.
  • I can estimate the effect of the changes I intend to make to see what is worthwhile.
  • I can see how far I need to go to make the house carbon-neutral.

The model indicates:

  • That the total transmittance is estimated to be ~298 W/°C in 2018, which is – within the uncertainties of my estimate – roughly what I measured (~280 W/°C).
  • That the triple glazing I installed last year should have made roughly a 10% difference, reducing this figure to roughly 260 W/°C. This is also in line with my measurements.
  • The effect of the external wall insulation and the additional glazing that I am installing this year should be very significant, reducing the losses to roughly half their 2018 value.

I can also assess the monetary value of the various changes:

  • The triple-glazing I installed last year:
    • Cost £7200 and reduced the transmission by ~ 39 W/°C, approximately £186 for each W/°C.
    • Based on my gas bill this is a return on investment of ~1.3%.
  • The triple-glazing I will install this year:
    • Will cost £3080 and reduce transmission by ~10 W/°C, approximately £288 for each W/°C.
    • Based on my gas bill this is a return on investment of ~0.8%.
  • The external wall insulation I will install this year:
    • Will cost £20,000 (!) and reduce transmission by ~98 W/°C, approximately £165 for each W/°C.
    • This cost includes roughly £5000 for cosmetic features and the use of super-insulation to limit its thickness.
    • Based on my gas bill this is a return on investment of ~1.2%.

Some people would argue that these are paltry returns. Actually – the returns are not bad from a purely financial perspective, and the external wall insulation would have benefited from the new government subsidy if I had got my timing right!

Additionally, replacing windows and repairing the exterior of a house are things which need doing every 25 years or so. So I would have to spend a significant fraction of this anyway just to maintain the house.

But my motive is not financial. By undertaking these works I am preparing for the replacement of the gas boiler with an air source heat pump in 2021. This should reduce the carbon emissions required to heat the house.

  • The emissions will be reduced by a factor 2 because of these improvement in the thermal performance of the house.
  • The emissions will be reduced by a factor 3 because of the coefficient of performance of the air source heat pump – it provides three units of heat for each unit of electricity used.
  • Currently grid electricity used for heating emits around 20% more carbon than burning gas directly for heating. In 2019 the figures were ~ 240 g/kWh for electricity versus ~ 200 g/kWh for gas.
  • So the emissions will be reduced by an overall factor of 2 x 3 x 0.8 ≈ 4.8.
  • In the coming decade, the carbon emissions associated with grid electricity are expected to fall to around 100 g/kWh, further reducing the carbon emissions associated with heating the house.

But even in 2030, the carbon emissions associated with heating the house will still be roughly 0.2 tonnes per year.

The final step will be to reduce these emissions on average, by using solar panels to generate low carbon-intensity electricity in the summer to offset the electricity I use in the winter to heat the house.

A personal note

I have no idea whether this project makes sense.

I just feel personally ashamed that my house emits 2.5 tonnes of carbon dioxide each year – just keeping me and my family warm.

COVID-19: Day 198 Update: Population Prevalence

July 18, 2020

Opening Up

It is now almost two weeks on from the ‘opening up’ of society on 4th July. And I have had a lovely holiday in Kent.

In case you have forgotten what a holiday is, or indeed, how beautiful England is, I have included some pictures below…

It was interesting to emerge from my Teddington bubble and see how life was proceeding elsewhere.

Our holiday destinations – Canterbury Cathedral, the Isle of Sheppey, Bodiam Castle and Battle Abbey – were partly open, but quiet. However, tragically, Teapot Island Museum was closed 😦

I saw visitors behaving well and considerately, and shopkeepers doing their best to implement new regulations.

We carried hand-sanitiser, and used it regularly, and wore masks when in shops.

Knowing that the population incidence was below 1 in 1000, I felt that precautions being taken were proportionate, and I did not feel at any time like I was taking a significant risk. And 3 days after returning everything seems fine. Though we may still be called – having left contact details at a couple of cafes.

But there were very few people out and about – nowhere was crowded. And it seemed as though very few shops or holiday venues could possibly be making money.

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update). The latest data from the Office for National Statistics (ONS) covers only one week after the ‘opening up’ on July 4th. This is too soon to detect any resulting increase in prevalence.

My summary of the data is that the population prevalence of COVID-19 is declining, but not as fast as anticipated based on data in May.

In my reading of the data there is no evidence of an increase in incidence – something which was hinted at in previous results.

However, the ONS data do not have the resolution to detect slow increases or decreases in prevalence, so we need to keep measuring in order to detect any sustained slow increase at the earliest opportunity.

Population Prevalence

On 17th July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for five non-overlapping fortnightly periods covering the period from 4th May up until 12th July.

Start of period of survey End of period of survey   Middle Day of Survey (day of year 2020) % testing positive for COVID-19 Lower confidence limit Upper confidence limit
04/05/2020 17/05/2020 132 0.25 0.14 0.40
18/05/2020 31/05/2020 144 0.11 0.05 0.20
1/06/2020 14/06/2020 160 0.06 0.02 0.11
15/06/2020 28/06/2020 174 0.04 0.02 0.08
29/06/2020 12/07/2020 188 0.03 0.01 0.06

Data from ONS on 16th July

In the graph below I have plotted this data in blue alongside the 9th July data from ONS, which is shown in red. The 9th July data also consists of five non-overlapping fortnightly periods, but covering the period from 27th April up until 5th July.

Start of period of survey End of period of survey   Middle Day of Survey (day of year 2020) % testing positive for COVID-19 Lower confidence limit Upper confidence limit
27/4/2020 10/05/2020 125 0.27 0.17 0.40
11/05/2020 24/05/2020 139 0.22 0.10 0.43
25/05/2020 7/06/2020 153 0.05 0.02 0.10
8/06/2020 21/06/2020 167 0.10 0.04 0.20
22/06/2020 5/07/2020 181 0.03 0.01 0.06

Data from ONS on 9th July

The source data for the two datasets is the same over the period 4th May to 5th July, but the 9th July data includes one earlier week and 16th July data includes one later week.

Plotting these two data sets on a graph we see a decreasing trend.

Click for a larger version

An (unweighted) exponential fit to either data set indicates a factor 10 reduction in prevalence every 61 days, similar to (but longer than) the previous estimate of 51 days.

If we plot them on a log-linear graph we can see the trend more clearly, and extrapolate the trend of the data to find likely dates when the population incidence might fall to key values.

Click for a larger version

Prevalence Date Cases in the UK
1 in 1,000 End of May About 60,000
1 in 10,000 Start of August About 6,000
1 in 100,000 Start of October About 600

These dates are later than previously anticipated, and worryingly the data indicate that the prevalence will not reach the 1 in 100,000 level in time for the start of the school term.

Limits

We are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the July 17th data set above corresponds to:

  • 32 positive cases detected from a sample of 16,243
  • 23 positive cases detected from a sample of 20,384
  • 11 positive cases detected from a sample of 25,472
  • 11 positive cases detected from a sample of 23,728
  • 9 positive cases detected from a sample of 30,047

Incidentally, the cost of a test is reported in the Washington Post to be in the range $10 to $20 – so the roughly 115,000 tests used for this survey have probably cost between £1M and £2M.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from the start of June.

Click for a larger version.

This data shows the death rate is still falling, but only slowly. The trend value of deaths (~82 per day) is consistently higher than the roughly 29 deaths per day that we might have expected based on trend behaviour at the start of June.

This indicates that death rates of tens of people per day are likely to continue for several months yet.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

Are fusion scientists crazy?

July 8, 2020

Preamble

I was just about to write another article (1, 2, 3) about the irrelevance of nuclear fusion to the challenges of climate change.

But before I sharpened my pen, I thought I would look again to see if I could understand why a new breed of fusion scientists, engineers and entrepreneurs seem to think so differently. 

Having now listened to two and a half hours of lectures – links at the bottom of the page – I have to say, I am no longer so sure of myself.

I still think that the mainstream routes to fusion should be shut down immediately.

But the scientists and engineers advocating the new “smaller faster” technology make a fair case that they could conceivably have a relevant contribution to make. 

I am still sceptical. The operating conditions are so extreme that it is likely that there will be unanticipated engineering difficulties that could easily prove fatal.

But I now think their proposals should be considered seriously, because they might just work.

Let me explain…

JET and ITER

Deriving usable energy from nuclear fusion has been a goal for nuclear researchers for the past 60 years.

After a decade or two, scientists and engineers concluded (correctly) that deriving energy from nuclear fusion was going to be extraordinarily difficult.

But using a series of experiments culminating in JET – the Joint European Torus – fusion scientists identified a pathway to create a device that could release fusion energy, and proceeded to build ITER, the International Thermonuclear Experimental Reactor.

ITER is a massive project with lots of smart people, but I am unable to see it as anything other than a $20 billion dead end – a colossal and historic error. 

Image of ITER from Wikipedia modified to show cost and human being. Click for larger view.

In addition to its cost, the ITER behemoth is slow. Construction was approved in 2007 but first tests are only expected to begin in 2025; first fusion is expected in 2035; and the study would be complete in 2045.

I don’t think anyone really doubts that ITER will “work”: the physics is well understood.

But even if everything proceeds according to plan, and even if the follow-up DEMO reactor were built in 2050 – and even if it also worked perfectly – it would be a clear 40 years or so from now before fusion began to contribute low carbon electricity. This is just too late to be relevant to the problem of tackling climate change. I think the analysis in my previous three articles still applies to ITER.

I would recommend we stop spending money on ITER right now and leave its rusting carcass as a testament to our folly. The problem is not that it won’t ‘work’. The problem is that it just doesn’t matter whether it works or not.

But it turns out that ITER is no longer the only credible route to fusion energy generation.

High Temperature Superconductors

While ITER was lumbering onwards, science and technology advanced around it.

Back in 1986 people discovered high-temperature superconductors (HTS). The excitement around this discovery was intense. I remember making a sample of YBCO at Bristol University that summer and calling up the inestimable Balázs Győrffy near to midnight to ask him to come in to the lab and witness the Meissner effect – an effect which hitherto had been understood, but rarely seen.

But dreams of new superconducting technologies never materialised. And YBCO and related compounds became scientific curiosities with just a few niche applications.

After 30 years of development, however, engineers have found practical ways to exploit these materials to make much stronger electromagnets.

The key property of HTS that makes them relevant to fusion engineering is not specifically the high temperature at which they became superconducting. Instead it is their ability – when cooled to well below their transition temperature – to remain superconducting in extremely high magnetic fields.

Magnets and fusion

As Zach Hartwig explains at length (video below) the only practical route to fusion energy generation involves heating a mixture of deuterium and tritium gases to immensely high temperatures and confining the resulting plasma with magnetic fields.

Stronger electromagnets allow the ‘burning’ plasma to be more strongly confined, and the fusion power density in the burning plasma varies as the fourth power of the magnetic field strength. 

In the implementation imagined by Hartwig, the HTS technology enables magnetic fields 1.74 times stronger, which allows an increase in power density by a factor 1.74 x 1.74 x 1.74 x 1.74 ≈ 9. 

Or alternatively, for the same fusion power, the apparatus could be made roughly 9 times smaller in volume. So, using no new physics, it has become feasible to make a fusion reactor which is much smaller than ITER.

A smaller reactor can be built quicker and cheaper. The cost is expected to scale roughly as the linear size cubed – that is, roughly with the volume – so a reactor with one ninth the volume of ITER should cost roughly an order of magnitude less: still expensive, but a few billion dollars rather than twenty.
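To make the arithmetic explicit, here is a short numerical sketch of these scaling arguments using the ARC and ITER field values quoted in the lectures below; treating the cost as scaling with the cube of the linear size (i.e. roughly with volume) is my reading of the rule of thumb mentioned in the talks, not a costing from them.

# Sketch of the scaling argument, using the on-axis field values quoted
# in the lectures (ARC: 9.2 T, ITER: 5.3 T). Treating cost as scaling
# with (linear size)^3, i.e. roughly with volume, is an assumption.

B_ARC, B_ITER = 9.2, 5.3                      # toroidal field on axis, tesla

power_density_gain = (B_ARC / B_ITER) ** 4    # fusion power density ∝ B^4
volume_reduction = power_density_gain         # same fusion power in ~1/9 the volume
linear_size_reduction = volume_reduction ** (1 / 3)
cost_reduction = linear_size_reduction ** 3   # cost ∝ (linear size)^3 ≈ volume

print(f"Power density gain:    {power_density_gain:.1f}x")    # ~9.1
print(f"Volume reduction:      {volume_reduction:.1f}x")      # ~9
print(f"Linear size reduction: {linear_size_reduction:.1f}x") # ~2.1
print(f"Cost reduction:        {cost_reduction:.0f}x")        # ~9: $20 bn -> a few $ bn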

And crucially it would take just a few years to build rather than a few decades. 

And that gives engineers a chance to try out a few designs and optimise them. All of fusion’s eggs would no longer be in one basket.

The engineering vision

Dennis Whyte’s talk (link below) outlines the engineering vision driving the modern fusion ‘industry’.

A fusion power station would consist of small modular reactors, each one generating perhaps only 200 MW of electrical power. The reactors could be made on a production line, which could lower their cost substantially.

This would allow a power station to begin generating electricity and revenue as soon as the first small reactor was built, shortening the time to payback after the initial investment and making the build-out of the putative new technology more feasible from both a financial and an engineering perspective.
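As a toy illustration of why a staged build-out helps, the sketch below compares the cumulative output of a plant built from four 200 MW modules, commissioned one per year, with a single 800 MW unit that generates nothing until it is complete. Apart from the 200 MW module rating quoted below, all of the numbers are invented for illustration.

# Toy comparison of build-out strategies. All numbers are invented for
# illustration: four 200 MW modules commissioned one per year, versus a
# single 800 MW plant that only starts generating in year 5.

module_power_MW = 200
years = range(1, 9)

modular_capacity = []   # MW online during each year
single_capacity = []

for year in years:
    # Modular plant: one module comes online at the start of each of the
    # first four years.
    modular_capacity.append(min(year, 4) * module_power_MW)
    # Single large plant: nothing until the whole 800 MW unit is finished.
    single_capacity.append(800 if year >= 5 else 0)

# Cumulative energy generated (MW-years), assuming continuous operation.
print("Capacity by year (modular):", modular_capacity)
print("Capacity by year (single): ", single_capacity)
print(f"Total over 8 years: modular {sum(modular_capacity)} MW-yr, "
      f"single {sum(single_capacity)} MW-yr")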

The reactors would be linked in clusters so that a single reactor could come on-line for extra generation and be taken off-line for maintenance. Each reactor would be built so that the key components could be replaced every year or so. This reduces the demands on the materials used in the construction. 

Each reactor would sit in a cooling flow of molten salt containing lithium which, when irradiated, would ‘breed’ the tritium required for operation and simultaneously carry away the heat to drive a conventional steam turbine.
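As a rough check on what ‘breeding the tritium required for operation’ entails, the sketch below estimates the tritium burn rate of a 500 MW (thermal) deuterium-tritium reactor. The 17.6 MeV released per D-T reaction is a standard figure; the rest is simple arithmetic.

# Rough estimate of tritium consumption for a 500 MW (thermal) D-T reactor.
# Each D-T fusion releases about 17.6 MeV and consumes one tritium nucleus.

MEV_TO_JOULE = 1.602e-13        # joules per MeV
AVOGADRO = 6.022e23             # nuclei per mole
TRITIUM_MOLAR_MASS_G = 3.016    # grams per mole

fusion_power_W = 500e6
energy_per_reaction_J = 17.6 * MEV_TO_JOULE

reactions_per_second = fusion_power_W / energy_per_reaction_J
tritium_grams_per_day = (reactions_per_second * 86400 / AVOGADRO) * TRITIUM_MOLAR_MASS_G

print(f"Reactions per second:   {reactions_per_second:.2e}")
print(f"Tritium burned per day: {tritium_grams_per_day:.0f} g")
# Roughly 75-80 g of tritium per day - which is why the lithium-bearing
# blanket must breed tritium with a gain a little greater than 1.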

You can listen to Dennis Whyte’s lecture below for more details.

But…

Dennis Whyte and Zach Hartwig seem to me to be highly credible. But while I appreciate their ingenuity and engineering insight, I am still sceptical.

  • Perhaps operating a reactor with 500 MW of thermal power in a volume of just 10 cubic metres or so at 100 million kelvin might prove possible for seconds, minutes, hours or even days. But it might still prove impossible to operate it 90% of the time for extended periods. 
  • Perhaps the unproven energy harvesting and tritium production system might not work.
  • Perhaps the superconductor so critical to the new technology would be damaged by years of neutron irradiation.

Or perhaps any one of a large number of complexities inconceivable in advance might prove fatal.

But on the other hand it might just work.

So I now understand why fusion scientists are doing what they are doing. And if their ideas did come to fruition on the 10-year timescale they envision, then fusion might yet still have a contribution to make towards solving the defining challenge of our age.

I wish them luck!

===========================================

Videos

===========================================

Video#1: Pathway to fusion

Zach Hartwig goes clearly through the MIT plan to make a fusion reactor.

Timeline of Zach Hartwig’s talk

  • 2:20: Start
  • 2:52: The societal importance of energy
  • 3:30: Societal progress has been at the expense of CO2 emissions
  • 3:51: Fusion is an attractive alternative in principle – but how to compare techniques?
  • 8:00: 3 Questions
  • 8:10: Question 1: What are viable fusion fuels
  • 18:00 Answer to Q1: Deuterium-Tritium is optimal fuel.
  • 18:40: Question 2: Physical Conditions
    • Density, Temperature, Energy confinement
  • 20:00 Plots of Lawson Criterion versus Temperature.
    • Shows contours of the energy gain ratio Q
    • Regions of the plot divided into ‘pointless’, ‘possible’ and ‘achieved’
  • 22:35: Question 3: Confinement Methods compared on Lawson Criterion/Temperature plots
    1. Cold Fusion 
    2. Gravity
    3. Hydrogen Bombs
    4. Inertial Confinement by Laser
    5. Particle accelerator
    6. Electrostatic well
    7. Magnetic field: Mirrors
    8. Magnetic field: Magnetized Targets or Pinches
    9. Magnetic field: Torus of Mirrors
    10. Magnetic field: Spheromaks
    11. Magnetic field: Stellarator
    12. Magnetic field: Tokamak
  • 39:35 Summary
  • 40:00 ITER
  • 42:00 Answer to Question 3: Tokamak is better than all other approaches.
  • 43:21 Combining previous answers: a deuterium-tritium tokamak is the optimal approach
  • 43:21 The existing pathway from JET to ITER is logical, but too big, too slow and too complex
  • 46:46 The importance of magnetic field: Power density proportional to B^4. 
  • 48:00 Use of higher magnetic fields reduces size of reactor
  • 50:10 High Temperature Superconductors enable larger fields
  • 52:10 Concept ARC reactor
    • 3.2 m versus 6.2 m for ITER
    • B = 9.2 T versus 5.3 T for ITER: (9.2/5.3)^4 = 9.1
    • Could actually power an electrical generator
  • 52:40 SPARC = Smallest Possible ARC
  • 54:40 End: A viable pathway to fusion.

Video#2: The Affordable, Robust, Compact (ARC) Reactor: an engineering approach to fusion.

Dennis Whyte explains how improved magnets have made fusion energy feasible on a more rapid timescale.

Timeline of Dennis Whyte’s talk

  • 4:40: Start and Summary
    • New Magnets
    • Smaller Sizes
    • Entrepreneurially accessible
  • 7:30: Fusion Principles
  • 8:30: Fuel Cycle
  • 10:00: Fusion Advantages
  • 11:20: Lessons from the scalability and growth of nuclear fission
  • 12:10 Climate change is happening now. No time to waste.
  • 12:40 Science of Fusion:
    • Gain
    • Power Density
    • Temperature
  • 13:45 Toroidal Magnet Field Confinement:
  • 15:20: Key formulae
    • Gain: requires pressure × confinement time ≈ 10 bar-s
    • Power density ∝ pressure squared ≈ 10 MW/m^3
  • 17:20 JET – 10 MW but no energy gain
  • 18:20 Progress in fusion beat Moore’s Law in the 1990s, but the science stalled because the devices needed to be too big.
  • 19:30 ITER: energy gain Q = 10, p = 3 bar, no tritium breeding, no electricity generation.
  • 20:30 ITER is too big and slow
  • 22:10 Magnetic Field Breakthrough
    • Energy gain ∝ B^3 and ∝ R^1.3 
    • Power Density ∝ B^4 and ∝ R 
    • Cost ∝ R^3 
  • 24:30 Why ITER is so large
  • 26:26 Superconducting Tape
  • 28:19 Affordable, Robust, Compact (ARC) Reactor. 
    • 500 MW thermal
    • 200 MW electrical
    • R = 3.2 m – the same as JET but with B^4 scaling 
  • 30:30 HTS Tape and Coils.
  • 37:00 High fields stabilise plasma which leads to low science risks
  • 40:00 ARC Modularity and Repairability
    • De-mountable coils 
    • Liquid Blanket Concept
    • FLiBe 
    • Tritium Breeding with gain = 1.14
    • 3-D Printed components
  • 50:00 Electrical cost versus manufacturing cost.
  • 53:37 Accessibility to a ‘start-up’ entrepreneurial attitude.
  • 54:40 SPARC – Soonest Possible / Smallest Practical ARC – to demonstrate fusion
  • 59:00 Summary & Questions
