Archive for August, 2020

COVID-19: Day 238 Update: What’s going on?

August 28, 2020

Summary

I am having difficulty grasping ‘the big picture’ about what is going on with the pandemic.

It seems the prevalence of people ill with the virus is low enough (below 1 in 1,000) that most people can get on with many parts of their life while maintaining social distance.

But the prevalence is not declining significantly. The ONS estimate that around 2600 people became ill each day around 20th August, and that this rate is increasing by an additional 100 new cases each day.

Of the roughly 2600 people infected each day, probably around 10 will eventually die. This relatively low death rate (0.4%) is probably because mainly younger people are becoming ill and resources are not overstretched. But at this infection rate, many aspects of life will not be able to return to normal.

Although this status quo is a big improvement on where we have been, I am concerned that the forthcoming return to school will give rise to repeated persistent outbreaks.

It seems that there is no strategy to lower the virus’s prevalence significantly.

Instead, the government seem to be trying to…

  • …open the economy ‘as much as possible‘ while keeping…
  • … the infection rate ‘as low as possible‘…
  • …until next summer when they hope a vaccine will be available.

Anyway: here is my take on the data

1. Prevalence

Since late April the ONS prevalence survey has been randomly testing people each week to look for the virus. They then collate their data into fortnightly periods to increase the sensitivity of their tests.

The number of people tested and the number of positive tests are given in their table (reproduced below) along with their estimate that the population prevalence of actively ill people around 14th August 2020 was roughly 1 in 2000.

Click for a larger image.

Their data – graphed below – suggest that prevalence has been below the 1 in 1000 level for several months, but that there is no systematic trend towards lower prevalence.

Click for a larger image.

I have replotted the data on a logarithmic scale (below) to emphasise how far we are from achieving levels around the 1 in 100,000 mark which would enable many more social activities to take place.

Click for a larger image.

2. Other ONS conclusions

ONS also analyse antibody data and conclude on the basis of just over 5000 tests that – as in previous weeks – roughly 6.2% ± 1.3% of the UK population have already been exposed to the virus.

On the basis of a statistical model, they also conclude that there were roughly 2600 infections each day during the week including August 20th, with a daily incidence increasing at roughly 100 infections (4%) per day.

Since there were roughly 1000 positive tests each day during that week, we can estimate that less than half the infections are being found as they occur.

3. Tests and Deaths

The graph below shows the number of deaths and positive tests on the same logarithmic scale.

The data were downloaded from the government’s ‘dashboard’ site. The deaths refer to deaths within 28 days of a test and the positive tests refer to Pillar 1 (hospital) and Pillar 2 (community) tests combined. Both curves are 7-day retrospective rolling averages.

Click for a larger version.

Remember that the number of tests has increased dramatically over the period of the graph. However the data do appear to reflect a rising incidence which started just after the official ‘re-opening’ of the economy on July 4th.

Around the start of August – presumably as a result of the re-opening four weeks earlier – there appears to be a change in the rate of reduction in the number of daily deaths.

In the last week or so there is evidence of an ‘upturn’ in the number of deaths per day.

What does this tell us?

Does the data tell us that we have a low-enough incidence of COVID-19 such that it can be managed by ad hoc local closures until a vaccine arrives?

Or does the data tell us that the virus is continuing to infiltrate its way throughout our society, ready to spread rapidly as soon as an opportunity arises?

Such opportunities for spreading may arise from…

  • the forthcoming return to school,
  • increased air travel,
  • a more widespread return to offices, or
  • an increase in our susceptibility to the virus in winter.

But I am afraid I just don’t know which of these is true.

========================================

23:21 on 29/8/2020

Corrected to show the proper death rate 10/2600 ~0.38%

Passivhaus? Or Michaelhaus?

August 26, 2020

Passivhaus 

The modern ‘Passivhaus‘ philosophy of building design involves building houses with exceptional levels of insulation – so that very little heating or cooling is required. Click here for videos explaining the concept

Making houses airtight is essential to achieving this low heating requirement. If the air in my house (volume 400 cubic metres) were exchanged once each hour with outside air then the heat leak would be equivalent to 148 watts for each degree of temperature difference between the inside and outside of the house. This would amount to more than half the current heat loss and make insulating the walls almost pointless.
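
To illustrate the arithmetic behind that figure, here is a minimal sketch of the calculation. The air density and specific heat are my own assumed values (for cold outdoor air), so the exact number depends slightly on the properties chosen.

```python
# Rough estimate of the ventilation heat loss per degree of indoor-outdoor
# temperature difference, for a given air-change rate.
# Air properties are assumed values (cold air, ~0 degC); this is a ballpark figure.

AIR_DENSITY = 1.29        # kg/m^3 (assumed)
AIR_SPECIFIC_HEAT = 1005  # J/(kg K) (assumed)

def ventilation_loss_per_kelvin(volume_m3: float, ach: float) -> float:
    """Watts carried away by ventilation per kelvin of temperature difference."""
    mass_flow = volume_m3 * ach * AIR_DENSITY / 3600   # kg of air leaving per second
    return mass_flow * AIR_SPECIFIC_HEAT               # W/K

# House volume 400 m^3, one air change per hour:
print(ventilation_loss_per_kelvin(400, 1))   # ~144 W/K, close to the ~148 W quoted above
```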

So to achieve Passivhaus certification a low level of air leakage is required: the number of Air Changes per Hour (ACH) must be less than 0.6 ACH when the external pressure is changed by 50 pascal. The Passivhaus Institute have an excellent guide on all aspects of achieving airtightness in practice (link).

But with this low background ventilation, the general day-to-day activities of a family would likely lead to the build up of unpleasant odours or excess moisture.

So the air flow through the house is then engineered to achieve a specified number of air changes per hour (ACH) through mechanical ventilators that capture the heat from air leaving the building and use it to warm the air coming in. This use of Heat Recovery Ventilation provides fresh air without noticeable draughts or heat loss.
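
Just to illustrate why heat recovery makes such a difference: the ventilation heat loss sketched above is reduced to the fraction of heat the unit fails to recover. The 90% heat-recovery effectiveness below is an assumed figure for a good MVHR unit, not a measured one.

```python
# Illustrative only: residual ventilation heat loss with heat recovery.
ventilation_loss = 144              # W/K for 1 ACH in a 400 m^3 house (sketch above)
heat_recovery_effectiveness = 0.90  # assumed figure for a good MVHR unit

residual_loss = ventilation_loss * (1 - heat_recovery_effectiveness)
print(residual_loss)                # ~14 W/K: fresh air with only a small heat penalty
```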

Michaelhaus

Achieving the Passivhaus standard for newly built houses is not easy, but it is readily achievable and there are now many exemplars of good practice in the UK.

But achieving that standard in my house would require extensive retrofit work, lifting floorboards and sealing hundreds of tiny leaks. So what should I do?

I don’t know what to do! So I am adopting a “measurement first” approach.

  1. As I have outlined previously, I am monitoring the energy use so after the external wall insulation has been applied next month, I should be able to assess how significant the heat loss associated with air leakage is over the winter.
  2. And more recently I have been estimating the number of air changes per hour (ACH) in the house in normal use.

The second of these measurements – estimating the number of air changes per hour – is normally extremely difficult to do. But I have been using a carbon dioxide meter and a simple spreadsheet model to give me some insight into the number of air changes per hour – without having to figure out where the air leaks are.

Carbon dioxide meter

I have been using two CO2 meters routinely around the house.

Each meter cost around £140, which is quite a lot for a niche device. But since it might guide me to save hundreds or thousands of pounds I think it is worthwhile.

Calibrating the two CO2 meters used in this study by exposing them to outside air. Both meters have a specified uncertainty of ±50 ppm but they agree with each other and with the expected outdoor CO2 level (~400 ppm) more closely than this (407 ppm and 399 ppm).

To estimate the number of ACH one needs to appreciate that there are two common domestic sources of CO2.

  • Human respiration: people produce the equivalent of around 20 litres of pure CO2 each hour – more if they undertake vigorous exercise.
  • Cooking on gas: a gas burner produces hundreds or thousands of litres of CO2 per hour.

So if there were no air changes, the concentration of CO2 would build up indefinitely. From knowledge of:

  • the volume of the room or house under consideration,
  • the number of people present and the amount of cooking, and
  • a measurement of CO2 concentration,

it is possible to estimate the number of air changes per hour.
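
As a rough sketch of how this works (assuming the roughly 20 litres of CO2 per person per hour mentioned above, and treating the room as well mixed), a steady CO2 reading gives the air-change rate directly from a simple balance. I have used the bedroom figures from Result #2 below as a check.

```python
# Steady-state CO2 balance for a well-mixed room:
#   CO2 generation (m^3/h) = ACH * volume * (C_indoor - C_outdoor)
# so a stable CO2 reading lets us estimate the air-change rate.
# Treating the room as perfectly mixed is an assumption.

CO2_PER_PERSON = 0.020   # m^3 of CO2 per person per hour (~20 litres, at rest)

def ach_from_steady_co2(volume_m3, people, c_indoor_ppm, c_outdoor_ppm=400):
    generation = people * CO2_PER_PERSON            # m^3 of CO2 added per hour
    excess = (c_indoor_ppm - c_outdoor_ppm) * 1e-6  # excess concentration as a fraction
    return generation / (volume_m3 * excess)        # air changes per hour

# Bedroom from Result #2: 51 m^3, two people, CO2 stabilising near 1900 ppm
print(ach_from_steady_co2(51, 2, 1900))   # ~0.5 ACH
```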

I have been studying all these variables, and I will write more as I get more data, but I was intrigued by two early results.

Result#1

The figure below shows the CO2 concentration in the middle room of the house measured over several days using the data-logging CO2 meter.

This room is a ‘hallway’ room and its two doors are open all day, so I think there is a fair degree of air mixing with the entire ground floor.

The data is plotted versus the time of day to emphasise daily similarities.

Click for larger version

I have annotated the graph above in the figure below:

Click for larger version

There are several key features:

  • The first is that the lowest level of CO2 concentration observed is around 400 parts per million (ppm) – which is the approximate concentration in external air. This probably corresponds to a time in which both front and back doors were open.
  • The second is that overnight, the concentration falls to a steady level of between 400 and 500 ppm. The rate of fall corresponds to between 0.5 and 1 ACH (see the sketch after this list).
  • The third is the rapid rise in concentration and high levels of CO2 (up to 1500 ppm) associated with cooking with gas.
  • The fourth is that, excluding the cooking ‘events’, the CO2 concentration typically lies in the range 400 to 600 ppm. With typically 3 or 4 adults in the house, this is consistent with between 3 and 4 ACH. During this time the weather was warm and doors were often left open, which plausibly explains why the air change rate might be higher during the day than at night.
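
The ‘rate of fall’ figure in the second bullet above comes from the exponential decay of the excess CO2 once people stop adding to it. Here is a minimal sketch; the concentrations and timing are purely illustrative, not values taken from the logged data.

```python
# With no CO2 sources, the excess concentration decays exponentially, so
#   ACH = ln(excess_at_start / excess_at_end) / elapsed_hours
import math

def ach_from_decay(c_start_ppm, c_end_ppm, hours, c_outdoor_ppm=400):
    return math.log((c_start_ppm - c_outdoor_ppm) /
                    (c_end_ppm - c_outdoor_ppm)) / hours

# e.g. CO2 falling from 600 ppm to 450 ppm over two hours:
print(ach_from_decay(600, 450, 2))   # ~0.7 ACH, within the 0.5 to 1 ACH range above
```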

Result#2

The figure below shows the readings from the TEMTOP CO2 meter in my bedroom (volume 51 cubic metres) with the door and windows closed on two consecutive nights.

It can be seen that the CO2 concentration has risen steadily and then stabilised at around 1900 ppm. With two people sleeping this corresponds to an air change rate of around 0.5 ACH.

What next for Michaelhaus?

The data indicate that:

  • For our bedroom, probably more airflow would be beneficial.
  • For the bulk of the house, more airflow might be required in winter when doors and windows will likely remain closed.

So it seems that some degree of mechanical ventilation with heat recovery will likely be required. I will study the matter further over the winter.

What is empowering about the CO2 monitoring technique is that I now have a simple tool that allows me to estimate – rather than merely guess – the number of air changes per hour.

Measuring the thermal conductivity of insulation

August 24, 2020

As I mentioned previously, I am currently obsessed with the thermal insulation I will be applying to the outside of my house.

I have checked that it should be safe from the point of view of flammability (link), but the question of how it will perform thermally still remains.

The insulation product I have chosen (Kingspan K5) has truly exceptional specifications. This allows me to clad the house with 100 mm thickness of K5 and achieve the same insulation level as 160 mm of expanded polystyrene (EPS).
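
As a quick check of that equivalence, here is a sketch using the conductivity figures that appear later in this article: the thermal resistance of a slab is just its thickness divided by its conductivity.

```python
# Thermal resistance of a slab: R = thickness / conductivity, in m^2 K / W.
k_k5, k_eps = 0.021, 0.035   # W/(m K), the values discussed later in this article

r_k5 = 0.100 / k_k5          # 100 mm of K5  -> ~4.8 m^2 K / W
r_eps = 0.160 / k_eps        # 160 mm of EPS -> ~4.6 m^2 K / W
print(r_k5, r_eps)           # roughly the same insulation level
```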

In this article I describe the tests I have performed to show that the K5 insulation does in fact match the specified level of insulation in practice.

Conduction through Closed-Cell Foams

Heat travels through materials using three mechanisms: conduction, convection and radiation.

Closed-Cell Foams – in which sealed ‘cells’ of gas are surrounded by solid ‘walls’ – inhibit all three methods of heat transfer.

  • Conduction through the solid is reduced because the cross-sectional area of solid through which heat can travel is reduced.
    • Conduction is through the thin walls of the cells.
  • Conduction through the gas within the cells is very low
    • The thermal conductivity of gases is much less than that of solids.
  • Convection in the gas within the cells is inhibited because each cell has just a tiny temperature gradient across it
    • Smaller ‘cells’ inhibit convection more strongly.
  • Radiation across each cell is inhibited because each radiating surface sees a surface at almost the same temperature.
    • Smaller ‘cells’ inhibit radiation transfer more strongly

So a foam optimised for low heat transfer would have very little solid present and consist mainly of gas cells. But such  a foam would be very fragile.

So practical building materials balance cell size and wall thickness to produce materials that are sufficiently strong and not too expensive to manufacture.

This article (link) from the 10th International Conference on District Heating and Cooling summarises the properties of polyurethane (PU) foam that affect its thermal performance. I have summarised the calculations on the figure below.

  • The graph shows thermal conductivity on the vertical axis and foam density on the horizontal axis.
  • The red square shows the specified thermal conductivity of Kingspan K5 and the Blue Diamond shows the specified thermal conductivity of EPS.
  • Notice the low density of the foams compared to say bricks (~2000 kg/m^3)
  • The three solid lines show calculated contributions to the thermal conductivity of PU Foam as a function of density.
  • Notice that K5 has a specified thermal conductivity which is lower than that of still air.

All the data on the graph correspond to low thermal conductivities, but the differences are significant. The thermal conductivity of the K5 is around two thirds that of the EPS and so the same insulating effect can be achieved with just two thirds the thickness. Or alternatively, the same thickness of K5 can achieve one third less heat transfer than EPS.

The lowest thermal conductivity that can be achieved is limited by thermal conduction through the gas in the cells. And so the K5 achieves its low conductivity by having cells filled with non-air gases – probably mainly carbon dioxide.

However I was sceptical…

These were just specifications. My erstwhile colleagues at NPL spoke often of the ‘optimism’ of many thermal transfer specifications. Could this material really have a thermal conductivity which is lower than that of still, non-convecting air?

…So I decided to do some tests…

I built two boxes out of 50 mm thick sheets of EPS and K5, sealing the joins with industrial glue.

I then took a cylinder of concrete that I happened to have (100 mm diameter x 300 mm long, weighing 5.14 kg) and heated it in the oven to around 50 °C – roughly 1 hour at the lowest gas setting.

I then placed the concrete in the box along with two data-logging thermometers – one at either end of the cylinder – and sealed the box with another piece of insulation.

I recorded the temperatures every minute for somewhere between 10 and 24 hours and measured the rate at which the concrete cooled.

The cooling curves for EPS and Kingspan K5 are shown in the figures below.

  • The two thin lines correspond to the readings from the two thermometers and the bold line corresponds to their average.
  • The (dotted red curve – – – –) shows a theoretical model of the data with the parameters optimised using the Excel solver.
  • The (dotted red line – – – –) shows an estimated time constant of the exponential temperature decay.
  • The (dotted blue line – – – –) shows an estimated background (room) temperature.

This data allowed me to establish two things.

  • Firstly, by simply comparing the time constants of the cooling curves (494 minutes and 801 minutes), it was clear that the K5 really does have a thermal conductivity which is about 40% lower than EPS.
  • Secondly, by assuming a value for the heat capacity of the concrete and that the heat flowed perpendicularly through the walls of the box, I could estimate the thermal conductivity of the two materials (a rough sketch of this calculation follows below). I found:
    • K5 thermal conductivity = 0.021 ± 0.001 W / m K
    • EPS thermal conductivity = 0.035 ± 0.001 W / m K
    • The uncertainties were estimated by analysing the data from each thermometer individually and then their average.
    • To my surprise, these figures agree closely with the specified properties of both EPS and K5
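
Here is a rough sketch of that second estimate, treating the warm concrete as a lumped thermal mass cooling through the box walls. The specific heat of the concrete and the effective wall area are values I have assumed purely to illustrate the arithmetic.

```python
# Lumped model: heat capacity C = m * c cools through walls of conductance
# G = k * A / d, giving a time constant tau = C / G. Rearranging:
#   k = m * c * d / (tau * A)
m_concrete = 5.14     # kg (measured)
c_concrete = 880.0    # J/(kg K) -- assumed typical value for concrete
d_wall     = 0.050    # m, thickness of the insulation sheets
area       = 0.22     # m^2 -- assumed effective area of the box walls

def conductivity_from_time_constant(tau_minutes):
    """Thermal conductivity in W/(m K) implied by a cooling time constant."""
    tau_seconds = tau_minutes * 60.0
    heat_capacity = m_concrete * c_concrete            # J/K
    return heat_capacity * d_wall / (tau_seconds * area)

print(conductivity_from_time_constant(801))   # K5 box:  ~0.021 W/(m K)
print(conductivity_from_time_constant(494))   # EPS box: ~0.035 W/(m K)
```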

So my scepticism was – it seems – misplaced.

Summary

I am relieved. In my previous article I showed that the K5 has good flammability resistance and in this article I have shown that it really does have excellent thermal performance.

Being confident of these properties I am looking forward even more keenly to getting the material onto my house and snuggling in for a long cold winter.

By the way..

The Blue Maestro dataloggers that I used (link) are fantastically easy to use and come strongly recommended.

I am becoming an insulation bore.

August 21, 2020

Friends, I am obsessed with the insulation I am about to apply to the outside of my house (link).

The installation is still 4 weeks away but I am thinking about it all the time. And if there is a lull in the conversation I may well introduce the topic apropos of anything at all:

Person A: “So I said to Doreen this relationship just isn’t working…”

…Pause…

Me: “That’s very difficult. But have you thought about External Wall Insulation?”

However,  aside from the risk of boring everyone I know, I have had two major concerns.

  • The first and more basic concern is about the flammability of the insulation.
  • And the second and more technical concern is whether or not the insulation will work as well as it claims.

I have now looked at both these issues experimentally. I’ll cover the measurement of the thermal conductivity in the next article, but here I take a look at the flammability of external wall insulation.

Flammability 

When I tell people about the external wall insulation (EWI) project I can see them thinking “Oh. You mean like Grenfell?” and then saying nothing.

That appalling tale of misunderstood specifications that ended up with people putting flammable insulation on the outside of high-rise flats led me to believe that I need to personally reassure myself before going ahead. It would be unwise to take anyone’s word for it.

The insulation that will be applied to the outside of the house is called Kingspan K5.

  • It is a thermoset foam, which means that it is manufactured and hardened by heating and so should not melt when heated.
  • This is in contrast with expanded polystyrene (EPS) foam which is a thermoplastic which will soften or melt on heating.

The K5 datasheet (link) contains detailed specifications of the performance in flame tests. For example:

“…achieves European Classification (Euroclass) C-s1,d0 when classified to EN 13501-1: 2018 (Fire classification of construction products and building elements. Classification using data from reaction to fire tests).”

Extract from Kingspan K5 data sheet. Click for larger version

But what does this mean? I found this explanatory page and table.

Click for larger image

  • The C is a categorisation from A (Non-combustible) to F (highly flammable) and means “Combustible – with limited contribution to fire”
  • The s1 means “a little or no smoke” on a scale of s1 to s3 (“substantial smoke”).
  • The d0 means “no flaming droplets or particles” on a scale of d0 to d2 (“Quite a lot”)

This was quite reassuring, but the terms are rather inexact and I didn’t really know what it all meant in practice.

So I went down to the EWI Store, bought some K5 and did my own flammability tests.

Flammability test

My flammability test consisted of propping up a sheet of K5 and directing a blow torch onto its surface from a few centimetres away and then leaving it for 10 minutes.

I think this is a pretty tough test and I was pleasantly surprised by how the insulation performed.

The results are captured in the exceedingly dull video at the end of the page and there are post-mortem photographs of the insulation below.

The insulation remained broadly intact and damage was limited to a few centimetres around the region where the flame reached the insulation. The rear side of the insulation did not appear to have been damaged at all.

After having performed this test I realised that I had forgotten to measure the temperature on the rear face of the K5. Doh!

So a few days later I repeated the test and measured the temperature on the back of the 50 mm thick insulation panel as the temperature in the interior of the insulation reached approximately 1000 °C.

Remarkably, after 10 minutes the rear had only reached 57 °C.

Overall these results are  better than I expected, and from a safety perspective, I feel happy having Kingspan K5 on the outside of my house.

Expanded Polystyrene Foam (EPS)

I also did flammability tests on EPS. But these tests did not take long – EPS lasts just a few seconds before burning and melting.

However, even for a material as flammable as EPS, in this external application the risk would be very low. The foam would be sandwiched between non-flammable external render and a non-flammable brick wall.

You can read about the factors which mitigate the risk in this application at the following links.

But I am still happy to be paying extra for the superior fire resistance of Kingspan K5.

But will the K5 really be as good an insulator as its manufacturers claim? I’ll cover this in the next exciting episode…

Video

Here is a 15 minute video of my flammability tests of Kingspan K5 and Expanded Polystyrene.

It’s really boring but the ‘highlights’ are:

  • 3′ 30″: K5: Move blowtorch closer
  • 8′ 00″ : K5: Close up
  • 10′ 40″ : K5: Post Mortem
  • 11′ 25″ White EPS: Start
  • 11′ 57″ White EPS: Move blowtorch closer
  • 13′ 06″ White EPS: Post Mortem
  • 13′ 20″ Black EPS: Start
  • 13′ 57″ Black EPS: Post Mortem
  • 14′ 16″ Black EPS#2: Start with burner further away
  • 15′ 30″ Black EPS#2: Post Mortem

COVID-19 Re-categorisation of deaths

August 20, 2020

Summary

Forgive me omitting the usual ‘population prevalence’ update, but roughly speaking, nothing has changed.

However the government have introduced new ways to count the dead, and that is really important.

Surprisingly – to me at least – I have concluded that it is not a self-serving manipulation of the data to reduce headline rates of death. It actually helps us to understand what is going on with the pandemic.

New ways to count the dead

Last week I wrote:

I feel the best thing I can do in the face of this tidal wave of uncertainty is to try to focus on the simple statistics that require only minimal theoretical interpretation.

I was away from home last week and so missed the announcement about new ways to ‘count the dead’. New ways to count the dead?! What?

The government announced it would divide daily deaths into three categories:

  1. Deaths of people who have died within 28 days of a positive COVID-19 test
    • Irrespective of any ‘underlying conditions’ we can reasonably say these people ‘died from COVID-19’.
  2. Deaths of people who have died within 60 days of a positive COVID-19 test
    • Similarly, despite any ‘underlying conditions’ we can reasonably say these people also ‘died from COVID-19’.
  3. Deaths of people who have died anytime after a positive COVID-19 test
    • Depending on the length of time to death, the COVID-19 infection might be less relevant to these deaths than other pre-existing conditions.

At first I found it hard not to think that the government was doing this in order to generate lower numbers.

But after writing this article, I have concluded that this categorisation is actually helpful.

Let’s look at the data.

The government now produce three curves as shown in the two figures below. The first graph shows the daily death statistic throughout the pandemic. The three curves only differ in the ‘tail’ of the curve.

Click for larger figure. The Red Curve shows the total number of deaths per day. The Cyan Curve shows the number of deaths within 28 days of a test, and the Blue Curve shows the number of deaths within 60 days of a test. All curves are 7-day retrospective rolling averages.

Let’s look at the recent data in more detail.

Click for larger figure. The Red Curve shows the total number of deaths per day. The Cyan Curve shows the number of deaths within 28 days of a test, and the Blue Curve shows the number of deaths within 60 days of a test. All curves are 7-day retrospective rolling averages.

Notice that the ‘All deaths’ curve includes all the deaths counted in the ’60-day’ curve and the ’60-day’ curve includes all deaths on the ’28-day’ curve.

In order to understand these data  we need to re-categorise them by subtracting the datasets from each other to yield:

  • Deaths of people who have died within 28 days of a positive COVID-19 test
  • Deaths of people who have died between 28 and 60 days of a positive COVID-19 test
  • Deaths of people who have died more than 60 days after a positive COVID-19 test

These data are summarised in the figures below. Now the data in each of three categories are independent of each other and add up to give the total deaths.
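
In practice the re-categorisation is just a pair of subtractions on the published columns. Here is a minimal sketch (the column names are illustrative, not the real dashboard field names; the 21.3 figure is implied by the day-229 averages quoted below):

```python
# Re-categorising cumulative-window death counts into independent categories.
import pandas as pd

df = pd.DataFrame({
    "deaths_within_28_days": [10.9],
    "deaths_within_60_days": [21.3],   # = 10.9 + 10.4, implied by the figures below
    "deaths_all":            [61.7],
})
df["deaths_28_to_60_days"] = df["deaths_within_60_days"] - df["deaths_within_28_days"]
df["deaths_after_60_days"] = df["deaths_all"] - df["deaths_within_60_days"]
print(df[["deaths_within_28_days", "deaths_28_to_60_days", "deaths_after_60_days"]])
# -> 10.9, 10.4 and 40.4 deaths per day, matching the day-229 example below.
```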

Click for larger figure. The Red Curve shows the total number of deaths per day. The Cyan Curve shows the number of deaths within 28 days of a test, and the Black Curve shows the number of deaths between 28 and 60 days after a test. The Blue Curve shows the number of deaths occurring at least 60 days after a test. All curves are 7-day retrospective rolling averages.

So for example on day 229 (16th August 2020), the average number of people dying per day in the previous seven days was 61.7 deaths per day:

  • On average, 10.9 of those people were diagnosed less than 28 days previously
  • A further 10.4 were diagnosed between 28 and 60 days previously.
  • But 40.4 of those people were diagnosed more than 60 days previously.

It is this last datum which is most significant: most people now dying after a positive COVID-19 test acquired the infection more than 60 days earlier. Furthermore, deaths in this category are rising! This is the real insight arising from this re-categorisation.

We can also plot these categories as fractions of the total deaths: we see that roughly two thirds of daily deaths occur more than 60 days after a positive COVID test – and that fraction is rising!

Click for larger figure. The Cyan Curve shows the percentage of deaths within 28 days of a test, and the Black Curve shows the percentage of deaths between 28 and 60 days after a test. The Blue Curve shows the percentage of deaths occurring at least 60 days after a test. All curves are 7-day retrospective rolling averages.

What does this tell us?

Here is my current understanding. And it is broadly good news!

  • The fraction of people dying from COVID-19 who die within 28 days has been falling since the peak of the pandemic. Currently, one sixth of people who eventually die survive for less than 28 days from their diagnosis.
    • The most likely reason for this is that our doctors have got better at treating people. Only a few people die quickly.
  • The fraction of people dying from COVID-19 who died between 28 and 60 days rose as doctors kept people alive beyond 28 days. But this too has now started to fall and only a further one sixth of people who will die survive between 28 and 60 days from their diagnosis.
    • The most likely reason for this is once again that people are being kept alive longer, but doctors are unable to cure them.
  • The fraction of people dying from COVID-19 who die and were diagnosed more than 60 days previously is still rising and now constitutes two thirds of all ongoing deaths. I find this surprising. And now we need to consider two possible causes:
    • Firstly, doctors are keeping people alive longer but are unable to cure them. If this were so then people would be dying after what must be an appalling 60 days in hospital. I was not aware that there were many patients in this condition.
    • Secondly, people might have fully or partially recovered from COVID-19, but then die of another cause.

But how large is this second category? We can estimate it thus:

  • About 1% of the UK population die each year (roughly 600,000 people). So on average we would expect around 1%/365 ≈ 0.003% of the population to die each day, or roughly 1640 deaths per day, irrespective of COVID-19.
  • Thus current daily deaths from COVID-19 constitute only a small percentage of normally-expected deaths. If the disease did not have the capability to infect the entire population and kill literally millions then we would not be so worried about deaths at this rate.
  • So far around 320,000 people have tested positive for COVID-19 and roughly 260,000 have survived. What is the chance that someone in this cohort of 260,000 might have recovered from COVID infection and then died of something else? A first guess would be roughly a 1% chance per year, or about 0.003% per day – the same chance as applies to the general population. On that basis we might expect around 7 of the 260,000 recovered people to die each day of unrelated causes (a quick sketch of this arithmetic follows below).
  • But the cohort who tested positive is far from typical: it is heavily weighted towards elderly, frail and hospitalised people, whose background death rate is many times the population average. Allowing for these biasing factors, deaths from unrelated causes could plausibly account for a significant fraction of the daily deaths data – a larger contribution than I would have estimated.
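
Here is that back-of-envelope as a sketch. It assumes the population-average death rate of roughly 1% per year; as noted above, the real cohort is far from average, so this illustrates the arithmetic rather than making a prediction.

```python
# Expected background deaths (from causes unrelated to COVID-19) among people
# who have recovered, if they died at the population-average rate.
uk_annual_deaths = 600_000
print(uk_annual_deaths / 365)                 # ~1640 deaths per day across the whole UK

recovered = 260_000
annual_death_rate = 0.01                      # ~1% per year, population average (assumed)
print(recovered * annual_death_rate / 365)    # ~7 deaths per day from other causes
```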

So my understanding of the data is this.

  • Deaths within 28 days of a positive test can be understood as being deaths arising from COVID-19. There are currently around 10 deaths per day in this category.
  • Deaths beyond 60 days of a positive test are probably dominated by deaths from other causes. We should expect deaths in this category to rise to a roughly steady background level – plausibly some tens of deaths per day for a cohort as elderly and frail as this one – and then stabilise.
  • Deaths after 28 but before 60 days can probably not be categorised as being clearly in one category or the other. The fact that deaths in this category first rose and then fell probably indicates that they initially arose directly from COVID-19 infection.

Overall, this is good news. It means that there are fewer deaths arising from COVID-19 than we previously thought.

And by looking at the ‘prompt’ deaths, policy makers can get better feedback on how well their policies are working on the ground.

COVID-19: Day 220 Update: Population Prevalence

August 9, 2020

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) suggest that the prevalence is broadly stable, but that there has been a small increase in prevalence over the last month or so.

The current overall prevalence is estimated to be around 1 in 1500  but some areas are estimated to have a much higher incidence.

Based on antibody studies, the ONS estimate  that 6.2 ± 1.3 % of the UK population have been ill with COVID-19 so far.

Population Prevalence

On 7th August the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for seven non-overlapping fortnightly periods covering the period from 27th April up until 2nd August.

Start of period of survey | End of period of survey | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
27/04/2020 | 10/05/2020 | 125 | 0.34 | 0.24 | 0.48
11/05/2020 | 24/05/2020 | 139 | 0.30 | 0.22 | 0.42
25/05/2020 | 07/06/2020 | 153 | 0.07 | 0.04 | 0.11
08/06/2020 | 21/06/2020 | 167 | 0.10 | 0.05 | 0.18
22/06/2020 | 05/07/2020 | 181 | 0.04 | 0.02 | 0.08
05/07/2020 | 19/07/2020 | 195 | 0.06 | 0.03 | 0.10
20/07/2020 | 02/08/2020 | 209 | 0.08 | 0.05 | 0.13

Data from ONS on 7th August 2020

Plotting these data, I see no evidence of a continued decline. ONS modelling suggests the prevalence is increasing, but please note that this rate of increase is right at the limit of what can be concluded from these statistics.

Click for a larger version

It no longer makes sense to fit a curve to the data and to anticipate likely dates when the population incidence might fall to key values.

Below I have plotted the data with a logarithmic vertical axis to highlight how far we are from what might be considered as ‘landmark’ achievements: passing the 1 in 10,000 and 1 in 100,000 barrier.

Click for a larger version

As I mentioned last week, given the increase in general mobility it is unrealistic to expect the prevalence to fall significantly in time for the start of the school term.

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the 31 July data set above corresponds to:

  • 41 positive cases detected from a sample of 11,390
  • 51 positive cases detected from a sample of 19,393
  • 17 positive cases detected from a sample of 22,647
  • 18 positive cases detected from a sample of 25,268
  • 12 positive cases detected from a sample of 26,419
  • 19 positive cases detected from a sample of 31,917
  • 24 positive cases detected from a sample of 28,501

I feel obliged to state that I do not understand how ONS process the data.
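
For a rough sanity check, though, a naive binomial estimate from the raw counts gives numbers quite close to the published figures. This is emphatically not the ONS method – just a back-of-envelope using a normal-approximation confidence interval.

```python
# Crude prevalence estimate from the last row above: 24 positives in 28,501 swabs.
import math

def naive_prevalence(positives, sample_size, z=1.96):
    p = positives / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return 100 * p, 100 * (p - half_width), 100 * (p + half_width)   # percentages

print(naive_prevalence(24, 28501))
# ~0.08% with an interval of roughly 0.05% to 0.12%,
# close to the ONS figure of 0.08 (0.05 to 0.13).
```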

Daily Deaths

Below I have also plotted recent data on the 7-day retrospective rolling average of the daily death toll. The red dotted lines highlight the two-week plateau in the data that was apparent last week. Pleasingly, the death rate has begun to fall again.

Click for larger version.

What is going on?

Friends, I have struggled in recent weeks to grasp the bigger picture of “what is going on” with the virus, but, like most people I guess, I can’t quite get my head around it.

As a consequence, I am ignoring intriguing articles such as this one in the Washington Post, which raises many more questions than it answers.

I feel the best thing I can do in the face of this tidal wave of uncertainty is to try to focus on the simple statistics that require only minimal theoretical interpretation.

I hate it when it’s too hot

August 7, 2020

 

I find days when the temperature exceeds 30 °C very unpleasant.

And if the night-time temperature doesn’t fall then I feel doubly troubled.

I have had the feeling that such days have become more common over my lifetime. But have they?

The short  summary is “Yes”. In West London, the frequency of days on which the temperature exceeds 30 °C has increased from typically 2 days per year in the 1950’s and 1960’s to typically 4 days per year in the 2000’s and 2010’s. This was not as big an increase as I expected.

On reflection, I think my sense that these days have become more common probably arises from the fact that up until the 1980’s, there were many years when such hot days did not occur at all. As the graph at the head of the article shows, in the 2010’s they occurred every year.

Super-hot days have now become normal.

You can stop reading at this point – but if you want to know how I worked this out – read on. It was much harder than I expected it would be!

Finding the data

First, please notice that this is not the same question as “has the average summer temperature increased?”

A single very hot day can be memorable but it may only affect the monthly or seasonal average temperatures by a small amount.

So one cannot merely find data from a nearby meteorological station….

…and plot it versus time. These datasets contain just the so-called ‘monthly mean’ data, i.e. the maximum or minimum daily temperature is measured throughout a month and then only its average value is recorded. So individual hot days are not flagged in the data. You can see my analysis of such data here.

Instead one needs to find the daily data – the daily records of individual maximum and minimum temperatures.

Happily this data is available from the Centre for Environmental Data Analysis (CEDA). They host the Met Office Integrated Data Archive System (MIDAS) for land surface station data (1853 – present). It is available under an Open Government Licence i.e. it’s free for amateurs like me to play with.

I registered and found the data for the nearby Met Office station at Heathrow. There was data for 69 years from 1948 to 2017, with a single comma-separated value (CSV) spreadsheet for maximum and minimum temperatures (and other quantities) for each year.

Analysing the data

Looking at the spreadsheets I noticed that the 1948 data contained daily maxima and minima. But all the other 68 spreadsheets contained two entries for each day – recording the maximum and minimum temperatures from two 12-hour recording periods:

  • the first ended at 9:00 a.m. in the morning: I decided to call that ‘night-time’ data.
  • and the second ended at 9:00 p.m. in the evening: I decided to call that ‘day-time’ data.

Because the ‘day-time’ and ‘night-time’ data were on alternate rows, I found it difficult to write a spreadsheet formula that would check only the appropriate cells.

After a day of trying to ignore this problem, I resolved to write a program in Visual Basic that could open each yearly file, read just the relevant single temperature reading from each alternate line, and save the counted data in a separate file.

It took a solid day – more than 8 hours – to get it working. As I worked, I recalled performing similar tasks during my PhD studies in the 1980’s. I reflected that this was an arcane and tedious skill, but I was glad I could still pay enough attention to the details to get it to work.

For each yearly file I counted two quantities:

  • The number of days when the day-time maximum exceeded a given threshold.
    • I used thresholds in 1 degree intervals from 0 °C to 35 °C
  • The number of days when the night-time minimum fell below a given threshold
    • I used thresholds in 1 degree intervals from -10 °C to +25 °C

So for example, for 1949 the analysis tells me that there were:

  • 365 days when the day-time maximum exceeded 0 °C
  • 365 days when the day-time maximum exceeded 1 °C
  • 363 days when the day-time maximum exceeded 2 °C
  • 362 days when the day-time maximum exceeded 3 °C
  • 358 days when the day-time maximum exceeded 4 °C
  • 354 days when the day-time maximum exceeded 5 °C

etc…

  • 6 days when the day-time maximum exceeded 30 °C
  • 3 days when the day-time maximum exceeded 31 °C
  • 0 days when the day-time maximum exceeded 32 °C
  • 0 days when the day-time maximum exceeded 33 °C
  • 0 days when the day-time maximum exceeded 34 °C

From this data I could then work out that in 1949 there were…

  • 0 days when the day-time maximum was between 0 °C and 1 °C
  • 2 days when the day-time maximum was between 1 °C and 2 °C
  • 1 day when the day-time maximum was between 2 °C and 3 °C
  • 4 days when the day-time maximum was between 3 °C and 4 °C

etc..

  • 3 days when the day-time maximum was between 30 °C and 31 °C
  • 3 days when the day-time maximum was between 31 °C and 32 °C
  • 0 days when the day-time maximum was between 32 °C and 33 °C
  • 0 days when the day-time maximum was between 33 °C and 34 °C
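
For the record, here is a rough sketch in Python of the counting and differencing just described. I actually did this in Visual Basic; the column names below are assumptions and would need checking against the real MIDAS file headers.

```python
# Count, for one yearly MIDAS file, how many days exceed each temperature
# threshold, then difference the counts to get days in each 1 degC band.
import pandas as pd

def daily_max_counts(csv_path, thresholds=range(0, 36)):
    df = pd.read_csv(csv_path)
    # Keep only the 12-hour periods ending at 21:00 (the 'day-time' records);
    # the alternate rows ending at 09:00 are the 'night-time' records.
    hour = pd.to_datetime(df["ob_end_time"]).dt.hour    # assumed column name
    daily_max = df.loc[hour == 21, "max_air_temp"]      # assumed column name
    exceed = {t: int((daily_max > t).sum()) for t in thresholds}
    in_band = {t: exceed[t] - exceed.get(t + 1, 0) for t in thresholds}
    return exceed, in_band
```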

Variable Variability

As I analysed the data I found it was very variable (Doh!) and it was difficult to spot trends amongst this variability. This is a central problem in meteorology and climate studies.

I decided to reduce the variability in two ways.

  • First I grouped the years into decades and found the average numbers of days in which the maximum temperatures lay in a particular range.
  • Then I increased the temperature ranges from 1 °C to 5 °C.

These two changes meant that most groups analysed had a reasonable number of counts. Looking at the data I felt able to draw four conclusions, none of which were particularly surprising.

Results: Part#1: Frequency of very hot days

The graph below shows that at Heathrow, the frequency of very hot days – days in which the maximum temperature was 31 °C or above – has indeed increased over the decades, from typically 1 to 2 days per year in the 1950’s and 1960’s to typically 3 to 4 days per year in the 2000’s and 2010’s.

I was surprised by this result. I had thought the effect would be more dramatic.

But I may have an explanation for the discrepancy between my perception and the statistics. And the answer lies in the error bars shown on the graph.

The error bars shown are ± the square root of the number of days – a typical first guess for the likely variability of any counted quantity.

So in the 1950’s and 1960’s it was quite common to have years in which the maximum temperature (at Heathrow) never exceeded 30 °C. Between 2010 and 2017 (the last year in the archive) there was not a single year in which temperatures have not reached 30 °C.

I think this is closer to my perception – it has become the new normal that temperatures in excess of 30 °C occur every year.

Results: Part#2: Frequency of days with maximum temperatures in other ranges

The graph above shows that at Heathrow, the frequency of days with maxima above 30 °C has increased.

The graphs below show, for Heathrow, how the frequency of days with maxima in the ranges shown has changed:

  • The frequency of ‘hot’ days with maxima in the range 26 °C to 30 °C has increased from typically 10 to 20 days per year in the 1950s to typically 20 to 25 days per year in the 2000’s.

  • The frequency of ‘warm’ days with maxima in the range 21 °C to 25 °C has increased from typically 65 days per year in the 1950s to typically 75 days per year in the 2000’s.

  • The frequency of days with maxima in the range 16 °C to 20 °C has stayed roughly unchanged at around 90 days per year.

  • The frequency of days with maxima in the range 11 °C to 15 °C appears to have increased slightly.

  • The frequency of ‘chilly’ days with maxima in the range 6 °C to 10 °C has decreased from typically 70 days per year in the 1950’s to typically 60 days per year in the 2000’s.

  • The frequency of ‘cold’ days with maxima in the range 0 °C to 5 °C has decreased from typically 30 days per year in the 1950’s to typically 15 days per year in the 2000’s.

Taken together this analysis shows that:

  • The frequency of very hot days has increased since the 1950’s and 1960’s, and in this part of London we are unlikely to ever again have a year in which there will not be at least one day where the temperature exceeds 30 °C.
  • Similarly, cold days in which the temperature never rises above 5 °C have become significantly less common.

Results: Part#3: Frequency of days with very low minimum temperatures

While I was doing this analysis I realised that with a little extra work I could also analyse the frequency of nights with extremely low minima.

The graph below shows the frequency of night-time minima below -5 °C across the decades. Typically there were 5 such cold nights per year in the 1950’s and 1960’s but now there are typically just one or two such nights each year.

Analogous to the absence of years without day-time maxima above 30 °C, years with at least a single occurrence of night-time minima below -5 °C are becoming less common.

For example, in the 1950’s and 1960’s, every year had at least one night with a minimum below -5 °C at the Heathrow station. In the 2000’s only 5 years out of 10 had such low minima.

Results: Part#4: Frequency of days with other minimum temperatures

For the Heathrow Station, the graphs below show the frequency of nights with minima in the ranges shown:

  • The frequency of ‘cold’ nights with minima in the range -5 °C to -1 °C has decreased from typically 45 days per year in the 1950’s to typically 25 days per year in the 2000’s.

  • The frequency of ‘cold’ nights with minima in the range 0 °C to 4 °C has decreased from typically 95 days per year in the 1950’s to typically 80 days per year in the 2000’s.

  • The frequency of nights with minima in the range 5 °C to 9 °C has remained roughly unchanged.

  • The frequency of nights with minima in the range 10 °C to 14 °C has increased from typically 90 days per year in the 1950’s to typically 115 days per year in the 2000’s.

  • The frequency of ‘warm’ nights with minima in the range 15 °C to 19 °C has increased very markedly from typically 12 days per year in the 1950’s to typically 30 days per year in the 2000’s.

  • ‘Hot’ nights with minima above 20 °C are still thankfully very rare.

 

Acknowledgements

Thanks to Met Office stars

  • John Kennedy for pointing to the MIDAS resource
  • Mark McCarthy for helpful tweets
  • Unknown data scientists for quality control of the Met Office Data

Apologies

Some eagle-eyed readers may notice that I have confused the boundaries of some of my temperature range categories. I am a bit tired of this now but I will sort it out when the manuscript comes back from the referees.

