COVID-19: Day 264: Worrying about the effect of ‘false-positive’ tests

September 21, 2020

Click for a larger version. The upper graph shows the number of Pillar 1 (Hospital) and Pillar 2 (Community) tests per day versus the day of year. The middle graph shows the number of those tests yielding positive results for COVID-19. The lower graph shows the ratio of the upper two graphs to show the fraction of tests which are positive. The minimum value of that ratio is shown as a dotted line and represents a reasonable best guess at the MAXIMUM rate of ‘false positive’ tests. All data are 7-day retrospective averages.

I wrote yesterday about the difficulty of interpreting the recent rise in positive COVID-19 tests.

One point that I did not cover was the issue of false positive results.

Now that we are testing 200,000 people each day, even a small rate of false positives (say 1%) would give rise to 2000 positive results each day. This can be compared with roughly 4000 positive tests per day at the moment.

Prof. Carl Heneghan argues that the tests may include many false positives caused by detection of ‘dead’ viral fragments.

But unfortunately nobody knows what the rate of false positive results actually is!

However we can estimate an upper limit.

Estimating the upper limit of false-positive tests #1

To estimate the upper limit of false positive tests we assume that:

  • The false positive rate has not changed significantly during the course of the crisis.

Then we look for the minimum registered fraction of positive results – and assume that they were all false positives. This is likely to be an overestimate of the false positive rate.

I have downloaded data from the Government’s Coronavirus ‘Dashboard’ to evaluate this. The data – all shown as retrospective 7-day averages – are shown in the figure at the start of the article and plotted versus the day of the year.

  • The upper graph shows the number of Pillar 1 (Hospital) and Pillar 2 (Community) tests per day.
  • The middle graph shows the number of those tests yielding positive results for COVID-19.
  • The lower graph shows the ratio of the upper two graphs to show the fraction of tests which are positive.

The minimum value of that ratio (0.0052 or 0.52%) is shown as a dotted line and represents a reasonable best guess at the MAXIMUM rate of ‘false positive’ tests. The true rate is probably lower than this.
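For anyone who wants to reproduce this estimate, here is a minimal sketch in Python. The file name and column names are assumptions for illustration – the dashboard’s actual CSV export uses different headings.

```python
# A minimal sketch of the estimate above, assuming a hypothetical CSV
# with daily 'tests' and 'positives' columns downloaded from the dashboard.
import pandas as pd

df = pd.read_csv("pillar1_and_2.csv")          # hypothetical file name

# 7-day retrospective rolling averages, as used in the graphs
tests_7d = df["tests"].rolling(7).mean()
positives_7d = df["positives"].rolling(7).mean()

positivity = positives_7d / tests_7d           # fraction of tests positive

# The minimum positivity ever observed bounds the false-positive rate:
# even if every one of those positives were false, the rate could be no higher.
max_false_positive_rate = positivity.min()
print(f"Upper limit on false-positive rate: {max_false_positive_rate:.2%}")
```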

Estimating the upper limit of false-positive tests #2

We can also perform a similar analysis for the Pillar 4 tests – those used for the ONS Survey – that I mentioned in a recent post.

Click for a larger image.

Looking at the data from July 3rd to 16th, the positivity rate was just 0.05% – 10 times lower than the maximum false positive rate estimated from the Pillar 1 & 2 data. For these tests, the false positive rate cannot be higher than this.

Conclusion

There is no evidence that the false positive rate is materially distorting our view of coronavirus spread.

Using either estimate does not materially change any of the conclusions I drew yesterday.

The demonstrably low rate of false positives also speaks against the concerns of Professor Heneghan that the government may be ‘chasing shadows’.

Living with the virus…

The BBC asks today: Is it time we learned to live with the virus?

This article draws on the views of Prof. Heneghan, who argues – reasonably – that we should focus on the harm caused by the virus (disease and death) rather than the mere presence of the virus or of an infected person.

He and others argue that while COVID-19 may be worse than flu, it belongs to the same category of respiratory diseases.

And in the same way that we accept deaths from flu of between 20,000 and 50,000 per winter, we should accept a similar rate of loss to COVID-19.

This is a good point. But the difficulty is that if the wrong choices are made, the death toll could be up to 10 times higher.

Edit at 16:30 on 21 September

Altered text to add a second way of estimating false positives.

COVID-19: Day 262: Update

September 20, 2020

Summary

My puzzlement at what is happening continues.

  • On the one hand the viral prevalence appears to be increasing and a prudent approach prioritising public-health over the economy would indicate strong action – lock-downs and similar – is required urgently.
  • On the other hand, an increasingly vocal group is arguing that the government are chasing shadows, and that an epidemic explosion similar to that we experienced in the spring is not about to occur.

Worryingly, the views about what is happening have a political dimension. So it is easy to find oneself inclined to one camp or the other based on feelings of general sympathy rather than particular facts. Many people are totally fed up with the restrictions and the COVID-related palaver, and would happily just ignore the whole thing. But would that be wise? And would it cause thousands of deaths each day as it did in the spring?

So what is actually happening?

Back in March the virus had spread uncontrolled through the population for a couple of months and was on an exponential rise. It is my understanding that no reasonable person would dispute that, if left uncontrolled, it could have killed on the order of half a million people. At the peak of the deaths a few weeks after lock-down, about 1000 people a day were dying after an agonising illness.

The consensus is that this was stopped by the ‘lock-down’ and that subsequent measures have contained the virus. The current rate of COVID-related deaths (about 10 a day) is probably acceptable for as long as it takes a vaccine to arrive.

Since July 4th there has been a slow rise in COVID-19 positive tests per day, and two weeks ago there was a sudden sharp rise. But the interpretation of the rise is open to question:

  • On the one hand the Government take the rise as an indication that the viral prevalence is increasing. They warn that the virus may explode just as it did in March.
  • And on the other hand, critics point out that hospitalisations are not rising and that the protocols for testing have changed – concentrating on areas where the virus is known – creating an ‘echo chamber’ of alarm.

Both these views are probably true, but the cost of locking-down is immense, as is the potential cost of not locking-down soon enough if a lock-down is required.

Before trying to figure out what we should do, we should look at the data…

Data#1. Prevalence

Since late April the ONS prevalence survey has been randomly testing people each week to look for the virus. They then collate their data into fortnightly periods to increase the sensitivity of their tests. Details of their full results are described methodically in this ‘bulletin’.

Click for a larger image.

The number of people tested and the number of positive tests are given in their table (reproduced above) along with their estimate that on the 5th September 2020 roughly 1 in 770 of the population were actively ill.

Their data – graphed below – suggest that the prevalence has been below the 1 in 1000 level for several months but is now almost certainly above that level: the raw count of positive tests was 87 from 66,717 in the two weeks to 10th September, up from 36 from 51,992 in the preceding two weeks and 22 from 39,998 in the two weeks preceding that.
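As a rough cross-check, here is a minimal sketch recomputing the prevalence from the raw counts quoted above, using a simple binomial standard error rather than the ONS’s own model-based method:

```python
# Quick check of the quoted prevalence figures from the raw ONS counts.
# The counts come from the text above; the interval is a standard binomial
# approximation, not the ONS's own (model-based) method.
from math import sqrt

for positives, tested in [(22, 39_998), (36, 51_992), (87, 66_717)]:
    p = positives / tested
    se = sqrt(p * (1 - p) / tested)            # simple binomial standard error
    print(f"{positives:>3} / {tested}: prevalence ≈ 1 in {1/p:,.0f} "
          f"(±{1.96 * se:.5f} at 95%)")
# The final fortnight gives roughly 1 in 767, matching the quoted 1 in 770.
```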

Click for a larger image.

My conclusion is that viral prevalence in the general population around the 10th September was two to three times what it had been at its minimum in July and August.

It has probably risen further in the 17 days since the last data point.

Data#2. Tests and Deaths

The graph below shows:

  • the number of deaths per day.
  • the number of positive tests per day on the same logarithmic scale. 

The data were downloaded from the government’s ‘dashboard’ site. The deaths refer to deaths within 28 days of a test and the positive tests refer to Pillar 1 (hospital) and Pillar 2 (community) tests combined. All curves are 7-day retrospective rolling averages.

Click for a larger image

The rapid rise in the number of positive tests is probably the result of a genuine increase in prevalence, coupled with a change in the testing protocol i.e. more testing in suspected ‘hot spots’.

In the last couple of weeks the generally downward trend in deaths per day has shown fluctuations and appears to be starting to increase. It is certainly not falling.

Are we on the verge of a viral resurgence? Or not?

The ONS survey and the testing data indicate an increase in viral prevalence.

But if we plot the number of people hospitalised alongside the test and death data in the graph above, we see that the increase in tests has not resulted in a concomitant increase in hospitalisations or use of ventilators.

Click for larger version

The data show that up until the start of July, tests, hospitalisations, ventilations and eventually deaths all followed the same pattern.

But since then the roughly 10-fold increase in positive tests per day has not been matched by similar increases in hospitalisations.

In order to understand the above graph, one needs to appreciate that although the meaning of three of the data streams has remained unchanged across the graph, the significance of a positive test has changed markedly.

To see this it is best to split the above graph down the middle and consider the left and right hand sides separately.

Click for larger version

Let’s consider the left-hand side of the graph first: 

Click for larger version

Here the positive tests arose as the COVID-status of a seriously ill person entering hospital (Pillar 1 testing) was confirmed.

  • Typically patients were already in a vulnerable group.
  • Typically they had already been ill for 2 to 3 weeks before entering hospital, and around 20% of these people would die within about 3 weeks (link).
  • Thus the links between positive tests, hospitalisation, ventilator use and death were striking and easy to see.

Now consider the right-hand side of the graph: 

Now most tests are carried out in the community (Pillar 2) and only around 1% are positive.

  • The vast majority of positive tests are amongst people who are not in a vulnerable group and who will never need to go to hospital.
  • Even for people who will eventually become hospitalised, the test will come much earlier in the course of the disease.

What next?

The data are not – in my opinion – decisively clear. This is how I read the graphs.

  • The rising number of positive tests is consistent with the ONS data on rising prevalence. The significance of the kink at the start of September is not yet clear but is probably a result of a real increase and increased testing in hot-spots.
  • The fall in the number of COVID hospitalisations flattened out at the start of September.
    • This statistic is the difference between admissions and discharges, and so there must have been a rise in the rate of daily admissions beginning around late August/start of September.
  • Curiously, the previously falling number of COVID-patients on ventilators flattened out before the hospitalisation curve. I can’t think why that might be.

What has probably happened is this:

  • The virus has been spreading and increasing in prevalence since July.
  • Summer holidays, and increased activities of all kinds have allowed the virus to spread, mainly amongst younger people.
    • Younger people are less seriously affected and thus have not caused an immediate increase in hospitalisations.
  • With increased activity and the re-starting of schools the spread is now reaching vulnerable groups leading to increased deaths and hospitalisations.

The key question is:

  • With the viral prevalence we now have, and the mode of conducting our lives that we have now adopted – to what rate of hospitalisation and death have we committed ourselves?

I don’t think anyone knows the answer to that question. But we will all find out fairly soon.

COVID-19: Day 261: Do you remember Day 75? 16th March 2020

September 19, 2020

Previously I said:  

I am having difficulty grasping ‘the big picture’ about what is going on with the pandemic.

And I am still struggling. I will re-visit this week’s data in a future article, but here I just wanted to remind you – and myself – of what was predicted about this pandemic back in March 2020 – a week before lockdown.

Remember March 2020?

One of the news stories then was the change in Government policy as a result of a briefing by Neil Ferguson’s group at Imperial College which predicted that:

  • if no action was taken the coronavirus would cause around half a million deaths in the UK over the course of a few months.

The Government deemed this unacceptable and as a result of actions taken:

  • Based on recent antibody tests, only around 6% of the UK population have been exposed to the virus.
  • Deaths have been restricted to less than a tenth of the ‘no action’ alternative.

Despite many failures for which the government deserves to be criticised, the saving of around 450,000 lives is a real achievement.

I remember reading Ferguson’s report dated 16th March 2020 which The Guardian published in full as a special supplement.

  • The report is available here

As I read the report I was shocked by the prediction that even if we ‘locked down’ and prevented our health services being overwhelmed, the virus would still be present and would simply rise again.

The report predicted that – in the absence of a vaccine – we would need to have repeated periods of opening and closing of the economy which would be determined by the extent to which the critical care infrastructure was being overwhelmed.

Click for Larger Version

Figure 4 from that report is reproduced above.

Given all the uncertainties involved – I think our current situation is pretty well described by this graph.

Why do I mention this?

I can’t think of any reason why predictions of viral transmission through a population should depend on one’s political viewpoint.

But the right-wing press (paid link, paid link) frequently seek to portray the predictions of Neil Ferguson’s group at Imperial College as flawed – without ever being specific.

They state that the group predicted half a million deaths and that this did not happen. This is true.

What the right-wing press do not state is that the reason we have not had half a million deaths is that the government acted on Neil Ferguson’s predictions of what would happen if they didn’t do anything.

In fact – given the uncertainties involved in the prediction from way back in March – uncertainties of policy and in knowledge of the viral properties – I would say that this foretelling of our future – now our present – looks to have been spectacularly prescient.

I mention this because – as I see it now – a “Winter of Discontent” is looming. And in these difficult times we need to be careful about whose views we trust.

Personally I think the people who predicted our current situation with such prescience deserve more credit than they are currently being given.

COVID-19: Day 256 Update: I am feeling uneasy

September 14, 2020

Michael: How are you feeling this week?

Thank you for asking. I am well.

Last week I said: I am having difficulty grasping ‘the big picture’ about what is going on with the pandemic.

This week I still feel the same. But things are becoming clearer. And I think I am beginning to understand the fundamental reason for my unease. It is the unreliability of every single measure of the prevalence of the virus.

Considering this as a measurement problem I realised that…

  • …all the measures we have of the prevalence of the virus are imperfect.

We’ll look at the latest data below, but here I will just summarise how we get to know anything at all about the viral prevalence.

How we measure the viral prevalence:

  • The ONS prevalence survey takes samples from people randomly-selected from around the UK.
    • It tests people whether or not they have symptoms.
    • It samples the adult population reasonably fairly with regard to age, ethnicity, location and social class.
    • But even sampling 25,000 people each week it is not very sensitive.
    • Also, if the geographical distribution of the viral infections is not random (which it isn’t) then the survey can easily miss (i.e. under-sample) ‘hot-spots’.
    • The lag between measurements and analysis is several days to weeks.
  • The death count.
    • Probably the most reliable indicator of the spread of the virus, but even counting bodies is not straightforward.
    • The death count is blind to the amount of infection amongst younger people – in general they don’t die.
    • And it lags the time of active infection by around three to four weeks. So by the time the older infected people start dying – the virus may have already passed through three to four generations of infection and be widespread in the younger community.
  • Hospitalisation.
    • Like deaths, this statistic lags the infection, but by less time than the death count.
    • Like deaths, this statistic is blind to the amount of infection amongst younger people – in general they don’t need to be hospitalised.
  • PCR Tests
    • The PCR swab tests from the tonsils (or nasal passages) test for COVID-19 genetic material. But the tests are themselves imperfect indicators of the amount of virus present and whether it is alive or dead. The tests sometimes detect remnant dead virus fragments, and sometimes fail to detect live virus.
  • Pillar 2 (Community testing)
    • The meaning of the pillar 2 tests changes depending on the testing strategy and protocol.
    • This makes it hard to associate any increases or decreases in the Pillar 2 tests with a sign of increased or decreased prevalence.
    • Mass (Pillar 2) testing in suspected ‘hotspots’ is certainly good for making rapid assessments of areas of known high infection – but the exact significance of the measured prevalence from one hotspot to another is not clear because of differences in community behaviour and testing protocols.
    • The UK testing infrastructure appears to have bottlenecks and its actual performance may be obfuscated for political reasons (exemplar). One curious example is that the government still do not clearly distinguish between tests processed and the number of people tested.
  • Symptom surveys
    • These surveys ask people to fill in an app-based questionnaire daily, reporting any symptoms.
    • Through mass participation, this can detect the onset of infection amongst participating social groups with only a few days delay.
    • But these surveys cannot detect the virus pre-symptomatically, and are only weakly sensitive to the 80% of people who are infected asymptomatically or with only mild symptoms (yes, 80% – link).
  • ONS Antibody tests
    • The ONS antibody tests provide an insight into how many people have been infected in the past – the answer is about 6% of the UK population.
    • But there are still unanswered questions about whether all infected people produce an antibody response.

Why I feel uneasy.

So in order to get a picture of what is happening we need to look at all these measures – and each one needs to be interpreted with nuance. And we need to seek a coherent picture that is consistent with all the data.

Additionally the government endlessly changes data formats and presentations to make coherent and consistent analysis difficult. This is possibly deliberate but could also be a result of chaotic incompetence.

Summarising, my unease arises from being unable to establish a coherent narrative about “what is going on”. This is not a narrative that seeks to blame any particular group, but one which just states the facts as well as we know them without any spin.

Indeed. I am writing this to try to clarify my own thoughts.

This week’s nuanced analysis.

Based on the data below I note that:

  • The number of positive cases has risen sharply. The sharpness of the rise is almost certainly an anomaly caused by a change in testing strategy or protocol – it just doesn’t look right! – but there is a consensus that this probably reflects a real rise.
  • There is an increase in the daily rate of deaths – but there has not been any obvious precursor of this in the positive tests.

The government’s ‘policy roulette’ has come up with a “rule of six” and a ‘Moon Shot’ testing programme.

  • The “rule of six” (RO6) represents a continued arbitrary breathtaking assault on our freedom.
    • It is not clear that it will be widely adhered to – especially if it is expected to be adhered to nearer to Christmas (103 days away).
    • The government’s endless U-turns and their disregard for their own commitments give them little moral authority.
    • But it is clear that the ‘new normal’ we have been experiencing in the last couple of months is not working well enough to suppress the virus.
    • So the RO6 is probably the sort of extreme measure that might significantly affect virus transmission: some disagree.
  • The Moonshot testing programme is nonsense from top to bottom.
    • The existing testing programme is being chaotically mismanaged (link). Making it bigger will likely make a bigger mess.
    • It is likely that a vaccine from one source or another will become available in the early part of 2021 – and it would be much cheaper and more effective than a testing programme.

Data#1. Prevalence

Since late April the ONS prevalence survey has been randomly testing people each week to look for the virus. They then collate their data into fortnightly periods to increase the sensitivity of their tests. Details of their full results are described methodically in this ‘bulletin’.

Click for a larger image.

The number of people tested and the number of positive tests are given in their table (reproduced above) along with their estimate that on the 5th September 2020 roughly 1 in 1300 of the population were actively ill.

Their data – graphed below – suggest that the prevalence has been below the 1 in 1000 level for several months and has increased recently: the raw count of positive tests was 55 from 59,222 in the two weeks to 5th September, up from 26 in 45,959 in the preceding two weeks. But this survey lacks the sensitivity to track rapid local increases in prevalence.

Click for larger image.


Data #2. Other ONS conclusions

ONS also analyse antibody data and conclude on the basis of just over 7000 tests that – as in previous weeks – roughly 6.02% ± 1.1% of the UK population have already been exposed to the virus.

Data#3. Tests and Deaths

The graph below shows:

  • the number of deaths per day.
  • the number of positive tests on the same logarithmic scale. 

The data were downloaded from the government’s ‘dashboard’ site. The deaths refer to deaths within 28 days of a test and the positive tests refer to Pillar 1 (hospital) and Pillar 2 (community) tests combined. All curves are 7-day retrospective rolling averages.

Click for a larger image

The rapid rise in the number of positive tests looks unfeasibly sharp and is probably the result of a genuine increase in prevalence, coupled with a change in the testing protocol.

In the last couple of weeks the generally downward trend in deaths per day has shown fluctuations and increases whose significance is not yet clear.

I am puzzled because the rise did not seem to be associated with any corresponding change in positive tests in the preceding weeks.

What to make of all of this?

I don’t know the true story that links all these facts. Worryingly, I don’t think anybody knows what is going on.

But it is difficult not to expect several more weeks of increased cases and, eventually, deaths.


COVID-19: Day 252 Update: Autumn

September 6, 2020

As I said last week, I am having difficulty grasping ‘the big picture’ about what is going on with the pandemic.

Even with the benefit of a couple of hundred days of thinking about it, I still find myself confused by the basic facts of this virus:

  • That it is mostly harmless for most people.
  • That it has the potential to kill hundreds of thousands within a few months if left uncontrolled.

But my view of what is happening in the UK is becoming clearer and I will outline that below. First let’s look at the latest data.

Data#1. Prevalence

Since late April the ONS prevalence survey has been randomly testing people each week to look for the virus. They then collate their data into fortnightly periods to increase the sensitivity of their tests.

The number of people tested and the number of positive tests are given in their table (reproduced below) along with their estimate that roughly 1 in 1700 of the population were actively ill in the two weeks around 19th August 2020.

Click for Larger Image

Their data – graphed below – suggest that the prevalence has been below the 1 in 1000 level for several months, but that there is no systematic trend towards lower prevalence.

Click for larger image.

Data #2. Other ONS conclusions

ONS also analyse antibody data and conclude on the basis of just over 7000 tests that – as in previous weeks – roughly 6.02% ± 1.1% of the UK population have already been exposed to the virus.

On the basis of a statistical model, they also conclude that there were roughly 2800 infections each day during the week including August 25th, with a daily incidence increasing at roughly 100 infections (3.5%) per day.

Since there were roughly 1200 positive tests each day during that week, we can estimate that less than half the infections are being found as they occur.

Data#3. Tests and Deaths

The graph below shows three curves:

  • the number of deaths per day.
  • the number of positive tests on the same logarithmic scale. 
  • the fraction of tests conducted which are positive shown on a separate logarithmic axis on the right hand scale.

The data were downloaded from the government’s ‘dashboard’ site. The deaths refer to deaths within 28 days of a test and the positive tests refer to Pillar 1 (hospital) and Pillar 2 (community) tests combined. All curves are 7-day retrospective rolling averages.

Click for larger image.

The data suggest a rising incidence which started just after the official ‘re-opening’ of the economy on July 4th.

I draw this conclusion not just from the rising number of positive tests, but also the small rise in the positivity rate of the tests.

  • The ‘number of positive tests’ statistic is difficult to interpret by itself because its value depends on the number of tests and the testing protocols.
  • The number of tests has increased dramatically over the period of the graph: there are now over 175,000 tests each day. Alongside this increase in tests per day, the positivity rate for tests has declined from 50% around the start of April, to less than 1% since the start of July. It is now rising slowly, suggesting that the virus is ‘easier to find’.

In the last couple of weeks the generally downward trend in deaths per day has shown fluctuations whose significance is not yet clear.

What to make of all of this?

The prevalence of people ill with the virus is low enough (below 1 in 1000) that most people can get on with many parts of their life while maintaining social distance.

But the prevalence is increasing systematically. This growth means – by definition –  that R is bigger than 1.
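To make the connection between growth and R concrete, here is a small illustrative calculation. It assumes a simple exponential-growth model and a generation interval of around 5 days – both are assumptions for illustration, not measured UK values:

```python
# A rough sketch of why sustained growth implies R > 1, assuming
# exponential growth and an illustrative ~5-day generation interval.
from math import exp, log

doubling_time_days = 14        # e.g. prevalence doubling every two weeks
generation_interval = 5        # assumed days between successive infections

r = log(2) / doubling_time_days          # continuous growth rate per day
R = exp(r * generation_interval)         # infections caused per infection
print(f"R ≈ {R:.2f}")                    # > 1 whenever the growth rate is positive
```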

The cases are mainly amongst younger people who are at little risk themselves (hence the low death rate), but their continued infection serves to spread the infection around the country.

As we enter autumn, there are many uncertainties, but it seems that several factors will likely increase R further. I say this because I can’t see any way these factors could act to reduce R.

  • The return to school will result in more interactions between otherwise separate bubbles, not just within schools, but also at peripheral activities.
  • The return to universities will likewise result in more interactions between otherwise separate bubbles.
  • The colder weather will move gatherings of all kinds indoors where viral spread is harder to prevent. And colder weather will probably allow easier infection.

With all these steps, it seems inevitable that there will be continued outbreaks around the country this autumn and winter. The local ‘lockdowns’ in Leicester and Manchester are likely to be repeated elsewhere.

If the infection spreads amongst the young, then there should be very little associated mortality, and one might argue that this would be just fine if this allowed the economy to fully re-start.

But with high infection rates it would seem likely that eventually many vulnerable people would be affected. Especially if we factor in another winter event:

  • Christmas: Just 110 days away, Christmas will form the perfect, trans-generational super-spreader event.

Recall that at the end of January 2020, Chinese New Year was all but cancelled in the People’s Republic of China (PRC). I recall this being reported as the equivalent of “cancelling Christmas” in the West.

But the PRC has a dictatorial government and the virus threat was still new in January. I think that the UK government would not stand much chance of stopping Christmas gatherings, especially after the year we have had.

What does this tell us?

Last week I asked

  • Does the data tell us that we have a low-enough incidence of COVID-19 such that it can be managed by ad hoc local closures until a vaccine arrives?
  • Or does the data tell us that the virus is continuing to infiltrate its way throughout our society, ready to spread rapidly as soon as an opportunity arises?

I think that both these statements are true. The situation we are in is manageable – as it is now.

But with schools and universities opening, colder weather, and Christmas on the horizon, it is hard to see how we will manage to keep the death rate this low over the coming months.

A safe and effective vaccine cannot arrive soon enough.

====================

Edited on 6/9/2020 at 21:30 to remove incorrect information about common colds.

NPL: Last words

September 2, 2020

Follow up on Previous Comments

Previously, I wrote about Serco’s role in facilitating NPL’s decline to its current state, which – as I experienced it – featured a poisonous working environment, abysmal staff morale, and a management detached from reality.

Having been conditioned for several years that ‘the truth’ can never be spoken, it felt frightening to simply say out loud what had happened at NPL. And indeed what is still happening there.

Several people contacted me ‘off-blog’ about that article and I would like to thank everyone who did.

Only one person offered a significantly differing narrative, arguing that in fact NPL’s problems were aggravated by – paraphrasing – a preponderance of old white men – alpha males – amongst the scientific staff. It was their maleness rather than their whiteness which the correspondent saw as being of primary significance. I didn’t really think that this was a key issue – but then I am an old white man. While there are many issues around gender and race to be addressed in engineering and science organisations, I had thought NPL seemed to deal with them reasonably well. I posted their response anyway, but they later requested I delete it.

Another correspondent who had been closer to management during the 2000s reminded me of many of the difficulties Serco were having during this time. Particular events included competing with QinetiQ for the NPL ‘franchise’ and being forced by the government to lower their margins. As I reflected on this, I thought that these were areas of strength and competence for the Serco managers. And their focus on these issues probably allowed them to feel they were ‘doing something’, and distracted them even further from the simple fact that they did not have a clue how to run a scientific organisation.

Time to wrap up

Other correspondents have asked me privately:

“Michael, but what do you really think about NPL’s management?”

Four months on from my departure, I am happy to report that I think about them very little.

Initially I had meant to write more about my time at NPL – describing the hilarious antics of the nincompoops now in charge.

But in honesty – I just don’t want to. It’s time to move on.

Flashbacks

I had been very devoted to the work I did, and many people asked me if – despite my relief at leaving – I would miss work. And I wondered that too. But so far, I have not missed it one iota.

Thinking back – so much of my time there has simply faded into nothingness and my memories of the place feel dreamy.

What I do remember clearly are the camaraderie and kindness of colleagues and friends. These memories are golden.

But I do still have occasional panicky flashbacks where I remember the poisonous bullying and re-experience the sense of helplessness it was designed to induce.

I am confident these flashbacks will diminish as I replace them with positive memories – such as having my house insulated.

Indeed I wonder sometimes if any of my memories really happened?

  • Did NPL Managers really try to sack me three times? Each time for a matter related to my ‘improper’ response to management incompetence?
  • Did NPL Managers really throw away the gas with which we measured the Boltzmann constant? The gas whose bottle-specific isotopic composition was critical for NPL’s only contribution to the 2019 redefinition of the SI units?
    • Did they find the bottle of precious gas which was marked as “NPL Critical” and had my personal phone number on and just bin it without asking me?
    • And did they try to sack me for “raising my voice” attempting to stop them?
    • And did my alleged bad behaviour include “crying aggressively” when I found out they had already thrown it out?
    • And did the people responsible really never apologise?
  • Was I really told they would try to sack me for a fourth time unless I apologised to a senior manager, but not told why I was apologising? Or what for?
  • Was I really told by the Head of Department not to tell them “any bad news”?
    • And did this person later try to sack me because I pointed out their Knowledge Transfer team had “no Knowledge”?
    • Did that attempt fail after I was awarded an MBE?
    • Did I really meet the Queen?
  • Did a senior advisor to NPL’s board (Cyril Hilsum CBE FRS FREng HonFInstP ) really write to me personally to tell me why anthropogenic global warming wasn’t real?
    • And when I told him I was shocked at his ignorance did he complain about me immediately to management?
    • And did a director stomp into my office and tell me to “shut up”?
    • Did no one from management support me in challenging his poisonous delusions?
  • Did NPL managers try to sack me for suggesting positive ways to waste less of scientists’ time?
  • Did NPL managers really decide to ‘transform’ NPL and then immediately start sacking people before deciding what they were transforming it to?
  • Did NPL managers spend hundreds of thousands of taxpayer pounds subsidising multiple contracts to foreign companies and governments?
  • Did NPL managers really get rid of the UK’s leading facility for measuring ultra-low heat transfer in structures, literally cutting it up and putting it in a skip?
  • Did NPL managers really have a top-level digital strategy that advocated we “Create new units that underpin the cyber-physical world.”?

Thinking back, it is hard to distinguish what was really real from my personal nightmare.

But if even a small fraction of my hazy memories are correct, then NPL was (and still is) a showcase for management chaos and incompetence.

Farewell

As I mentioned above, I had initially meant to write more about the surreal and ridiculous specifics of NPL’s ‘transformation’. I researched some details…

…but it’s just too late. I no longer want to devote any of my time or energy to thinking about NPL.

So farewell to my old colleagues and good luck.

And as Forrest Gump might have said, “…that’s all I have to say about that!”

COVID-19: Day 238 Update: What’s going on?

August 28, 2020

Summary

I am having difficulty grasping ‘the big picture’ about what is going on with the pandemic.

It seems the prevalence of people ill with the virus is low enough (below 1 in 1000) that most people can get on with many parts of their life while maintaining social distance.

But the prevalence is not declining significantly. The ONS estimate around 2600 people became ill each day around 20th August and that this rate is increasing by an additional 100 new cases each day.

Of the 2600 people infected each day, probably around 10 will eventually die. This relatively low death rate (0.4%) is probably because mainly younger people are becoming ill and resources are not overstretched. But at this infection rate, many aspects of life will not be able to return to normal.

Although this status quo is a big improvement on where we have been, I am concerned that the forthcoming return to school will give rise to repeated persistent outbreaks.

It seems that there is no strategy to lower the virus’s prevalence significantly.

Instead, the government seem to be trying to…

  • …open the economy ‘as much as possible‘ while keeping…
  • … the infection rate ‘as low as possible‘…
  • …until next summer when they hope a vaccine will be available.

Anyway: here is my take on the data

1. Prevalence

Since late April the ONS prevalence survey has been randomly testing people each week to look for the virus. They then collate their data into fortnightly periods to increase the sensitivity of their tests.

The number of people tested and the number of positive tests are given in their table (reproduced below) along with their estimate that the population prevalence of actively ill people around 14th August 2020 was roughly 1 in 2000.

Click for a larger image.

Their data – graphed below – suggest that prevalence has been below the 1 in 1000 level for several months, but that there is no systematic trend towards lower prevalence.

Click for a larger image.

I have replotted the data on a logarithmic scale (below) to emphasise how far we are from achieving levels around the 1 in 100,000 mark which would enable many more social activities to take place.

Click for a larger image.

2. Other ONS conclusions

ONS also analyse antibody data and conclude on the basis of just over 5000 tests that – as in previous weeks – roughly 6.2% ± 1.3% of the UK population have already been exposed to the virus.

On the basis of a statistical model, they also conclude that there were roughly 2600 infections each day during the week including August 20th, with a daily incidence increasing at roughly 100 infections (4%) per day.

Since there were roughly 1000 positive tests each day during that week, we can estimate that less than half the infections are being found as they occur.
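As a quick sketch of that arithmetic, using the round numbers quoted above:

```python
# The detection-fraction estimate in the paragraph above, as arithmetic.
daily_infections = 2600        # ONS model estimate for the week of 20 August
daily_positive_tests = 1000    # positive Pillar 1 & 2 tests per day that week

fraction_found = daily_positive_tests / daily_infections
print(f"Fraction of infections detected: {fraction_found:.0%}")   # ≈ 38%
```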

3. Tests and Deaths

The graph below shows the number of deaths and positive tests on the same logarithmic scale.

The data were downloaded from the government’s ‘dashboard’ site. The deaths refer to deaths within 28 days of a test and the positive tests refer to Pillar 1 (hospital) and Pillar 2 (community) tests combined. Both curves are 7-day retrospective rolling averages.

Click for a larger version.

Remember that the number of tests has increased dramatically over the period of the graph. However the data do appear to reflect a rising incidence which started just after the official ‘re-opening’ of the economy on July 4th.

Around the start of August, presumably as a result of the re-opening 4 weeks earlier, there appears to be a change in the rate of reduction of the number of daily deaths.

In the last week or so there is evidence of an ‘upturn’ in the number of deaths per day.

What does this tell us?

Does the data tell us that we have a low-enough incidence of COVID-19 such that it can be managed by ad hoc local closures until a vaccine arrives?

Or does the data tell us that the virus is continuing to infiltrate its way throughout our society, ready to spread rapidly as soon as an opportunity arises?

Such opportunities for spreading may arise from…

  • the forthcoming return to school,
  • increased air travel,
  • a more widespread return to offices, or
  • an increase in our susceptibility to the virus in winter.

But I am afraid I just don’t know which of these is true.

========================================

23:21 on 29/8/2020

Corrected to show the proper death rate 10/2600 ~0.38%

Passivhaus? Or Michaelhaus?

August 26, 2020

Passivhaus 

The modern ‘Passivhaus’ philosophy of building design involves building houses with exceptional levels of insulation – so that very little heating or cooling is required. Click here for videos explaining the concept.

Making houses airtight is essential to achieving this low heating requirement. If the air in my house (volume 400 cubic metres) were exchanged once each hour with outside air then the heat leak would be equivalent to 148 watts for each degree of temperature difference between the inside and outside of the house. This would amount to more than half the current heat loss and make insulating the walls almost pointless.
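As a back-of-envelope check of that figure, the sketch below computes the ventilation heat loss from first principles. It assumes round-number air properties, so it lands slightly below the 148 W quoted above, which will have used slightly different values:

```python
# Back-of-envelope ventilation heat loss, assuming standard air properties.
volume = 400          # house volume, cubic metres
ach = 1.0             # air changes per hour
rho_air = 1.2         # air density, kg per cubic metre, near room temperature
cp_air = 1005         # specific heat of air, J per kg per kelvin

# Heat carried away per kelvin of indoor-outdoor temperature difference
watts_per_kelvin = volume * ach * rho_air * cp_air / 3600
print(f"{watts_per_kelvin:.0f} W/K")   # ≈ 134 W/K with these assumptions
```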

So to achieve Passivhaus certification a low level of air leakage is required: the number of Air Changes per Hour (ACH) must be less than 0.6 ACH when the external pressure is changed by 50 pascal. The Passivhaus Institute have an excellent guide on all aspects of achieving airtightness in practice (link).

But with this low background ventilation, the general day-to-day activities of a family would likely lead to the build up of unpleasant odours or excess moisture.

So the air flow through the house is then engineered to achieve a specified number of air changes per hour (ACH) through mechanical ventilators that capture the heat from air leaving the building and use it to heat air coming into the building. This use of Heat Recovery Ventilation leads to fresh air without noticeable draughts or heat loss.

Michaelhaus

Achieving the Passivhaus standard for newly built houses is not easy, but it is readily achievable and there are now many exemplars of good practice in the UK.

But achieving that standard in my house would require extensive retrofit work, lifting floorboards and sealing hundreds of tiny leaks. So what should I do?

I don’t know what to do! So I am adopting a “measurement first” approach.

  1. As I have outlined previously, I am monitoring the energy use so after the external wall insulation has been applied next month, I should be able to assess how significant the heat loss associated with air leakage is over the winter.
  2. And more recently I have been estimating the number of air changes per hour (ACH) in the house in normal use.

The second of these measurements – estimating the number of air changes per hour – is normally extremely difficult to do. But I have been using a carbon dioxide meter and a simple spreadsheet model to give me some insight into the number of air changes per hour – without having to figure out where the air leaks are.

Carbon dioxide meter

I have been using two CO2 meters routinely around the house.

Each meter cost around £140, which is quite a lot for a niche device. But since it might guide me to save hundreds or thousands of pounds I think it is worthwhile.

Calibrating the two CO2 meters used in this study by exposing them to outside air. Both meters have a specified uncertainty of ±50 ppm but they agree with each other and with the expected outdoor CO2 level (~400 ppm) more closely than this (407 ppm and 399 ppm).

To estimate the number of ACH one needs to appreciate that there are two common domestic sources of CO2.

  • Human respiration: people produce the equivalent of around 20 litres of pure CO2 each hour – more if they undertake vigorous exercise.
  • Cooking on gas: a gas burner produces hundreds or thousands of litres of CO2 per hour.

So if there were no air changes, the concentration of CO2 would build up indefinitely. From knowledge of:

  • the volume of the room or house under consideration,
  • the number of people present and the amount of cooking, and
  • a measurement of CO2 concentration,

it is possible to estimate the number of air changes per hour.
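A minimal sketch of such a model – the ‘simple spreadsheet model’ idea expressed in Python, with all the numbers illustrative assumptions – looks like this:

```python
# A one-box mass balance for indoor CO2. All numbers are illustrative.
volume = 400.0            # m^3, whole-house volume
outdoor_ppm = 400.0       # outdoor CO2 concentration
generation = 4 * 0.020    # m^3 of CO2 per hour: 4 people at ~20 litres/hour

def next_ppm(ppm, ach, dt_hours=1/60):
    """Step the indoor CO2 concentration forward by dt_hours."""
    source = generation / volume * 1e6              # ppm added per hour
    leak = ach * (ppm - outdoor_ppm)                # ppm removed per hour
    return ppm + (source - leak) * dt_hours

# Example: with 3.5 ACH the steady state is outdoor + source/ACH ≈ 457 ppm,
# consistent with the daytime range reported in the results below.
ppm = outdoor_ppm
for _ in range(24 * 60):                            # simulate 24 hours
    ppm = next_ppm(ppm, ach=3.5)
print(f"Steady-state CO2: {ppm:.0f} ppm")
```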

I have been studying all these variables, and I will write more as I get more data, but I was intrigued by two early results.

Result#1

The figure below shows the CO2 concentration in the middle room of the house measured over several days using the data-logging CO2 meter.

This room is a ‘hallway’ room and its two doors are open all day, so I think there is a fair degree of air mixing with the entire ground floor.

The data is plotted versus the time of day to emphasise daily similarities.

Click for larger version

I have annotated the graph above in the figure below:

Click for larger version

There are several key features:

  • The first is that the lowest level of CO2 concentration observed is around 400 parts per million (ppm) – which is the approximate concentration in external air. This probably corresponds to a time in which both front and back doors were open.
  • The second is that overnight, the concentration falls to a steady level of between 400 and 500 ppm. The rate of fall corresponds to between 0.5 and 1 ACH.
  • The third is the rapid rise in concentration and high levels of CO2 (up to 1500 ppm) associated with cooking with gas.
  • The fourth is that excluding the cooking ‘events’, the CO2 concentration typically lies in the range 400 to 600 ppm. With typically 3 or 4 adults in the house, this is consistent with between 3 and 4 ACH. During this time the weather was warm and doors were often left open, so this plausibly explains why the air change rate might be higher during the day than at night.

Result#2

The figure below shows readings from the TEMTOP CO2 meter in my bedroom (volume 51 cubic metres) with the door and windows closed on two consecutive nights.

It can be seen that the CO2 concentration has risen steadily and then stabilised at around 1900 ppm. With two people sleeping this corresponds to an air change rate of around 0.5 ACH.
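As a worked version of that estimate, using the 20 litres per hour per person figure quoted earlier:

```python
# The bedroom estimate above as a worked calculation. The 20 litres/hour
# per person figure is the one quoted earlier; the rest comes from the
# measurements described in the text.
volume = 51.0                   # bedroom volume, m^3
generation = 2 * 0.020          # two sleepers, m^3 of CO2 per hour
equilibrium_ppm = 1900.0
outdoor_ppm = 400.0

# At equilibrium, CO2 added = CO2 removed: G = ACH * V * (C_eq - C_out)
delta = (equilibrium_ppm - outdoor_ppm) * 1e-6      # as a volume fraction
ach = generation / (volume * delta)
print(f"Estimated air change rate: {ach:.2f} ACH")  # ≈ 0.52 ACH
```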

What next for Michaelhaus?

The data indicate that:

  • For our bedroom, probably more airflow would be beneficial.
  • For the bulk of the house, more airflow might be required in winter when doors and windows will likely remain closed.

So it seems that some degree of mechanical ventilation with heat recovery will likely be required. I will study the matter further over the winter.

What is empowering about the CO2 monitoring technique is that I now have a simple tool that allows me to estimate – rather than merely guess – the number of air changes per hour.

Measuring the thermal conductivity of insulation

August 24, 2020

As I mentioned previously, I am currently obsessed with the thermal insulation I will be applying to the outside of my house.

I have checked that it should be safe from the point of view of flammability (link), but the question of how it will perform thermally still remains.

The insulation product I have chosen (Kingspan K5) has truly exceptional specifications. This allows me to clad the house with 100 mm thickness of K5 and achieve the same insulation level as 160 mm of expanded polystyrene (EPS).

In this article I describe the tests I have performed to show that the K5 insulation does in fact match the specified level of insulation in practice.

Conduction through Closed-Cell Foams

Heat travels through materials using three mechanisms: conduction, convection and radiation.

Closed-Cell Foams – in which sealed ‘cells’ of gas are surrounded by solid ‘walls’ – inhibit all three methods of heat transfer.

  • Conduction through the solid is reduced because the cross-sectional area of solid through which heat can travel is reduced.
    • Conduction is through the thin walls of the cells.
  • Conduction through the gas within the cells is very low.
    • The thermal conductivity of gases is much less than that of solids.
  • Convection in the gas within the cells is inhibited because each cell has just a tiny temperature gradient across it.
    • Smaller ‘cells’ inhibit convection more strongly.
  • Radiation across each cell is inhibited because each radiating surface sees a surface at almost the same temperature.
    • Smaller ‘cells’ inhibit radiation transfer more strongly.

So a foam optimised for low heat transfer would have very little solid present and consist mainly of gas cells. But such a foam would be very fragile.

So practical building materials balance cell size and wall thickness to produce materials that are sufficiently strong and not too expensive to manufacture.

This article (link) from the 10th International Conference on District Heating and Cooling summarises the properties of polyurethane (PU) foam that affect its thermal performance. I have summarised the calculations on the figure below.

  • The graph shows thermal conductivity on the vertical axis and foam density on the horizontal axis.
  • The red square shows the specified thermal conductivity of Kingspan K5 and the Blue Diamond shows the specified thermal conductivity of EPS.
  • Notice the low density of the foams compared to, say, bricks (~2000 kg/m³).
  • The three solid lines show calculated contributions to the thermal conductivity of PU Foam as a function of density.
  • Notice that K5 has a specified thermal conductivity which is lower than that of still air.

All the data on the graph correspond to low thermal conductivities, but the differences are significant. The thermal conductivity of the K5 is around two thirds that of the EPS and so the same insulating effect can be achieved with just two thirds the thickness. Or alternatively, the same thickness of K5 can achieve one third less heat transfer than EPS.

The lowest achievable thermal conductivity is limited by thermal conduction through the gas in the cells. And so the K5 achieves its low conductivity by having cells filled with non-air gases – probably mainly carbon dioxide.

However I was sceptical…

These were just specifications. My erstwhile colleagues at NPL spoke often of the ‘optimism’ of many thermal transfer specifications. Could this material really have a thermal conductivity lower than that of still, non-convecting air?

…So I decided to do some tests…

I built two boxes out of 50 mm thick sheets of EPS and K5, sealing the joins with industrial glue.

I then took a cylinder of concrete that I happened to have (100 mm diameter × 300 mm long, weighing 5.14 kg) and heated it in the oven to around 50 °C – roughly 1 hour at the lowest gas setting.

I then placed the concrete in the box along with two data-logging thermometers – one at either end of the cylinder – and sealed the box with another piece of insulation.

I recorded the temperatures every minute for somewhere between 10 and 24 hours and measured the rate at which the concrete cooled.

The cooling curves for EPS and Kingspan K5 are shown in the figures below.

  • The two thin lines correspond to the readings from the two thermometers and the bold line corresponds to their average.
  • The (dotted red curve – – – –) shows a theoretical model of the data with the parameters optimised using the Excel solver.
  • The (dotted red line – – – –) shows an estimated time constant of the exponential temperature decay.
  • The (dotted blue line – – – –) shows an estimated background (room) temperature.

These data allowed me to establish two things.

  • Firstly, by simply comparing the time constants of the cooling curves (494 minutes and 801 minutes), it was clear that the K5 really does have a thermal conductivity which is about 40% lower than EPS.
  • Secondly, by assuming a value for the heat capacity of the concrete and that the heat flowed perpendicularly through the walls of the box I could estimate the thermal conductivity of the two materials. I found:
    • K5 thermal conductivity = 0.021 ± 0.001 W / m K
    • EPS thermal conductivity = 0.035 ± 0.001 W / m K
    • The uncertainties were estimated by analysing the data from each thermometer individually and then their average.
    • To my surprise, these figures agree closely with the specified properties of both EPS and K5.
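For anyone who would rather script the fit than use the Excel solver, here is a minimal sketch of the same analysis. The file name, the box’s mean wall area, and the specific heat of concrete are assumptions for illustration:

```python
# Sketch of the cooling-curve analysis. File name, wall area and the
# specific heat of concrete are assumed values, not measured ones.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

df = pd.read_csv("k5_cooling.csv")   # hypothetical log: 'minutes', 'temperature'

def cooling(t, T_room, dT0, tau):
    """Exponential approach of the concrete to room temperature."""
    return T_room + dT0 * np.exp(-t / tau)

(T_room, dT0, tau), _ = curve_fit(
    cooling, df["minutes"], df["temperature"], p0=(20, 30, 500))

# Convert the time constant to a conductivity: tau = m*c / (k*A/d),
# so k = m*c*d / (tau*A).
m, c = 5.14, 880      # concrete mass (kg); assumed specific heat (J/kg·K)
d, A = 0.050, 0.22    # wall thickness (m); assumed mean wall area (m^2)
k = m * c * d / (tau * 60 * A)       # tau fitted in minutes, hence the 60
print(f"tau = {tau:.0f} min, k ≈ {k:.3f} W/m·K")
```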

So my scepticism was – it seems – misplaced.

Summary

I am relieved. In my previous article I showed that the K5 has good flammability resistance and in this article I have shown that it really does have excellent thermal performance.

Being confident of these properties I am looking forward even more keenly to getting the material onto my house and snuggling in for a long cold winter.

By the way..

The Blue Maestro dataloggers that I used (link) are fantastically easy to use and come strongly recommended.

I am becoming an insulation bore.

August 21, 2020

Friends, I am obsessed with the insulation I am about to apply to the outside of my house (link).

The installation is still 4 weeks away but I am thinking about it all the time. And if there is a lull in the conversation I may well introduce the topic apropos of anything at all:

Person A: “So I said to Doreen this relationship just isn’t working…”

…Pause…

Me: “That’s very difficult. But have you thought about External Wall Insulation?”

However, aside from the risk of boring everyone I know, I have had two major concerns.

  • The first and more basic concern is about the flammability of the insulation.
  • And the second and more technical concern is whether or not the insulation will work as well as it claims.

I have now looked at both these issues experimentally. I’ll cover the measurement of the thermal conductivity in the next article, but here I take a look at the flammability of external wall insulation.

Flammability 

When I tell people about the external wall insulation (EWI) project I can see them internally saying “Oh. You mean like Grenfell?” and then saying nothing.

That appalling tale of misunderstood specifications – which ended up with flammable insulation being put on the outside of high-rise flats – led me to believe that I needed to reassure myself personally before going ahead. It would be unwise to take anyone’s word for it.

The insulation that will be applied to the outside of the house is called Kingspan K5.

  • It is a thermoset foam, which means that it is manufactured and hardened by heating and so should not melt when heated.
  • This is in contrast with expanded polystyrene (EPS) foam which is a thermoplastic which will soften or melt on heating.

The K5 datasheet (link) contains detailed specifications of the performance in flame tests. For example:

“…achieves European Classification (Euroclass) C-s1,d0 when classified to EN 13501-1: 2018 (Fire classification of construction products and building elements. Classification using data from reaction to fire tests).”

Extract from Kingspan K5 data sheet. Click for larger version

But what does this mean? I found this explanatory page and table.

Click for larger image

  • The C is a categorisation from A (Non-combustible) to F (highly flammable) and means “Combustible – with limited contribution to fire”
  • The s1 means “a little or no smoke” on a scale of s1 to s3 (“substantial smoke”).
  • The d0 means “no flaming droplets or particles” on a scale of d0 to d2 (“Quite a lot”)

This was quite reassuring, but the terms are rather inexact and I didn’t really know what it all meant in practice.

So I went down to the EWI Store, bought some K5 and did my own flammability tests.

Flammability test

My flammability test consisted of propping up a sheet of K5 and directing a blow torch onto its surface from a few centimetres away and then leaving it for 10 minutes.

I think this is a pretty tough test and I was pleasantly surprised by how the insulation performed.

The results are captured in the exceedingly dull video at the end of the page and there are post-mortem photographs of the insulation below.

The insulation remained broadly intact and damage was limited to a few centimetres around the region where the flame reached the insulation. The rear side of the insulation did not appear to have been damaged at all.

After having performed this test I realised that I had forgotten to measure the temperature on the rear face of the K5. Doh!

So a few days later I repeated the test and measured the temperature on the back of the 50 mm thick insulation panel as the temperature in the interior of the insulation reached approximately 1000 °C.

Remarkably, after 10 minutes the rear had only reached 57 °C.

Overall these results are better than I expected, and from a safety perspective, I feel happy having Kingspan K5 on the outside of my house.

Expanded Polystyrene Foam (EPS)

I also did flammability tests on EPS. But these tests did not take long – EPS lasts just a few seconds before burning and melting.

However, even for a material as flammable as EPS, in this external application the risk would be very low. The foam would be sandwiched between non-flammable external render and a non-flammable brick wall.

You can read about the factors which mitigate the risk in this application at the following links.

But I am still happy to be paying extra for the superior fire resistance of Kingspan K5.

But will the K5 really be as good an insulator as its manufacturers claim? I’ll cover this in the next exciting episode…

Video

Here is a 15 minute video of my flammability tests of Kingspan K5 and Expanded Polystyrene.

It’s really boring but ‘highlights’ are:

  • 3′ 30″: K5: Move blowtorch closer
  • 8′ 00″: K5: Close up
  • 10′ 40″: K5: Post Mortem
  • 11′ 25″: White EPS: Start
  • 11′ 57″: White EPS: Move blowtorch closer
  • 13′ 06″: White EPS: Post Mortem
  • 13′ 20″: Black EPS: Start
  • 13′ 57″: Black EPS: Post Mortem
  • 14′ 16″: Black EPS#2: Start with burner further away
  • 15′ 30″: Black EPS#2: Post Mortem
