Archive for the ‘Personal’ Category

NPL: Last words

September 2, 2020

Follow up on Previous Comments

Previously, I wrote about Serco’s role in facilitating NPL’s decline to its current state, which – as I experienced it – featured a poisonous working environment, abysmal staff morale, and a management detached from reality.

Having been conditioned over several years to believe that ‘the truth’ could never be spoken, it felt frightening simply to say out loud what had happened at NPL – and indeed what is still happening there.

Several people contacted me ‘off-blog’ about that article and I would like to thank everyone who did.

Only one person offered a significantly differing narrative, arguing that in fact NPL’s problems were aggravated by – paraphrasing – a preponderance of old white men – alpha males – amongst the scientific staff. It was their maleness rather than their whiteness which the correspondent saw as being of primary significance. I didn’t really think that this was a key issue – but then I am an old white man. While there are many issues around gender and race to be addressed in engineering and science organisations, I had thought NPL seemed to deal with them reasonably well. I posted their response anyway, but they later requested I delete it.

Another correspondent who had been closer to management during the 2000’s reminded me of many of the difficulties Serco were having during this time: in particular, competing with QinetiQ for the NPL ‘franchise’ and being forced by the government to lower their margins. As I reflected on this, I thought that these were areas of strength and competence for the Serco managers. And their focus on these issues probably allowed them to feel they were ‘doing something’, and distracted them even further from the simple fact that they did not have a clue how to run a scientific organisation.

Time to wrap up

Other correspondents have asked me privately:

“Michael, but what do you really think about NPL’s management?”

Four months on from my departure, I am happy to report that I think about them very little.

Initially I had meant to write more about my time at NPL – describing the hilarious antics of the nincompoops now in charge.

But in honesty – I just don’t want to. It’s time to move on.

Flashbacks

I had been very devoted to the work I did, and many people asked me if – despite my relief at leaving – I would miss work. And I wondered that too. But so far, I have not missed it one iota.

Thinking back – so much of my time there has simply faded into nothingness and my memories of the place feel dreamy.

What I do remember clearly are the camaraderie and kindness of colleagues and friends. These memories are golden.

But I do still have occasional panicky flashbacks where I remember the poisonous bullying and re-experience the sense of helplessness it was designed to induce.

I am confident these flashbacks will diminish as I replace them with positive memories – such as having my house insulated.

Indeed I wonder sometimes if any of my memories really happened?

  • Did NPL Managers really try to sack me three times? Each time for a matter related to my ‘improper’ response to management incompetence?
  • Did NPL Managers really throw away the gas with which we measured the Boltzmann constant? The gas whose bottle-specific isotopic composition was critical for NPL’s only contribution to the 2019 redefinition of the SI units?
    • Did they find the bottle of precious gas which was marked as “NPL Critical” and had my personal phone number on it – and just bin it without asking me?
    • And did they try to sack me for “raising my voice” attempting to stop them?
    • And did my alleged bad behaviour include “crying aggressively” when I found out they had already thrown it out?
    • And did the people responsible really never apologise?
  • Was I really told they would try to sack me for a fourth time unless I apologised to a senior manager, but not told why I was apologising? Or what for?
  • Was I really told by the Head of Department not to tell them “any bad news”?
    • And did this person later try to sack me because I pointed out their Knowledge Transfer team had “no Knowledge”?
    • Did that attempt fail after I was awarded an MBE?
    • Did I really meet the Queen?
  • Did a senior advisor to NPL’s board (Cyril Hilsum CBE FRS FREng HonFInstP ) really write to me personally to tell me why anthropogenic global warming wasn’t real?
    • And when I told him I was shocked at his ignorance did he complain about me immediately to management?
    • And did a director stomp into my office and tell me to “shut up”?
    • Did no one from management support me in challenging his poisonous delusions?
  • Did NPL managers try to sack me for suggesting positive ways to waste less of scientists’ time?
  • Did NPL managers really decide to ‘transform’ NPL and then immediately start sacking people before deciding what they were transforming it to?
  • Did NPL managers spend hundreds of thousands of taxpayer pounds subsidising multiple contracts to foreign companies and governments?
  • Did NPL managers really get rid of the UK’s leading facility for measuring ultra-low heat transfer in structures, literally cutting it up and putting it in a skip?
  • Did NPL managers really have a top-level digital strategy that advocated we “Create new units that underpin the cyber-physical world.”?

Thinking back, it is hard to distinguish what was real from my personal nightmare.

But if even a small fraction of my hazy memories are correct, then NPL was (and still is) a showcase for management chaos and incompetence.

Farewell

As I mentioned above, I had initially meant to write more about the surreal and ridiculous specifics of NPL’s ‘transformation’. I researched some details but…

..but it’s just too late. I no longer want to devote any of my time or energy to thinking about NPL.

So farewell to my old colleagues and good luck.

And as Forrest Gump might have said, “…that’s all I have to say about that!”

Passivhaus? Or Michaelhaus?

August 26, 2020

Passivhaus 

The modern ‘Passivhaus’ philosophy of building design involves building houses with exceptional levels of insulation – so that very little heating or cooling is required. Click here for videos explaining the concept.

Making houses airtight is essential to achieving this low heating requirement. If the air in my house (volume 400 cubic metres) were exchanged once each hour with outside air then the heat leak would be equivalent to 148 watts for each degree of temperature difference between the inside and outside of the house. This would amount to more than half the current heat loss and make insulating the walls almost pointless.
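For anyone who wants to sanity-check that 148 W figure, here is a back-of-envelope sketch in Python. The air properties are round-number assumptions of mine (cold air near 0 °C), so the exact answer depends on the density and humidity assumed:

```python
# Back-of-envelope: heat needed to warm incoming air, per degC of
# inside-outside temperature difference.
# Assumed round numbers: cold air density ~1.29 kg/m^3 (near 0 degC),
# specific heat capacity of air ~1005 J/(kg K).
RHO_AIR = 1.29     # kg/m^3
CP_AIR = 1005.0    # J/(kg K)

def ventilation_loss(volume_m3: float, ach: float) -> float:
    """Heat loss (W per degC) from exchanging the house air 'ach' times per hour."""
    mass_flow_kg_per_s = volume_m3 * ach * RHO_AIR / 3600.0
    return mass_flow_kg_per_s * CP_AIR

print(ventilation_loss(400, 1.0))   # ~144 W/degC - close to the figure above
```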

So to achieve Passivhaus certification a low level of air leakage is required: the number of Air Changes per Hour (ACH) must be less than 0.6 ACH when the external pressure is changed by 50 pascal. The Passivhaus Institute have an excellent guide on all aspects of achieving airtightness in practice (link).

But with this low background ventilation, the general day-to-day activities of a family would likely lead to the build up of unpleasant odours or excess moisture.

So the air flow through the house is then engineered to achieve a specified number of air changes per hour (ACH) through mechanical ventilators that capture the heat from air leaving the building and use it to heat air coming into the building. This use of Heat Recovery Ventilation provides fresh air without noticeable draughts or heat loss.

Michaelhaus

Achieving the Passivhaus standard for newly built houses is not easy, but it is readily achievable and there are now many exemplars of good practice in the UK.

But achieving that standard in my house would require extensive retrofit work, lifting floorboards and sealing hundreds of tiny leaks. So what should I do?

I don’t know what to do! So I am adopting a “measurement first” approach.

  1. As I have outlined previously, I am monitoring the energy use, so after the external wall insulation has been applied next month I should be able to assess over the winter how significant the heat loss associated with air leakage is.
  2. And more recently I have been estimating the number of air changes per hour (ACH) in the house in normal use.

The second of these measurements – estimating the number of air changes per hour – is normally extremely difficult to do. But I have been using a carbon dioxide meter and a simple spreadsheet model to give me some insight into the number of air changes per hour – without having to figure out where the air leaks are.

Carbon dioxide meter

I have been using two CO2 meters routinely around the house.

Each meter cost around £140, which is quite a lot for a niche device. But since it might guide me to save hundreds or thousands of pounds I think it is worthwhile.

Calibrating the two CO2 meters used in this study by exposing them to outside air. Both meters have a specified uncertainty of ±50 ppm but they agree with each other and with the expected outdoor CO2 level (~400 ppm) more closely than this (407 ppm and 399 ppm).

To estimate the number of ACH one needs to appreciate that there are two common domestic sources of CO2.

  • Human respiration: people produce the equivalent of around 20 litres of pure CO2 each hour – more if they undertake vigorous exercise.
  • Cooking on gas: a gas burner produces hundreds or thousands of litres of CO2 per hour.

So if there were no air changes, the concentration of CO2 would build up indefinitely. From knowledge of:

  • the volume of the room or house under consideration,
  • the number of people present and the amount of cooking,
  • a measurement of CO2 concentration,

it is possible to estimate the number of air changes per hour.
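Here is a minimal sketch in Python of the spreadsheet model – a single well-mixed zone, with outside air fixed at 400 ppm and the 20 litres of CO2 per person per hour quoted above (both assumptions):

```python
# Single-zone CO2 'box model': in steady state the excess concentration
# equals the emission rate divided by the ventilation rate.
C_OUT_PPM = 400.0                 # assumed outdoor CO2 concentration
CO2_PER_PERSON_M3H = 0.020        # ~20 litres of CO2 per person per hour

def steady_state_ppm(volume_m3, people, ach):
    """CO2 concentration (ppm) once generation and ventilation balance."""
    emission_m3h = people * CO2_PER_PERSON_M3H
    return C_OUT_PPM + 1e6 * emission_m3h / (ach * volume_m3)

def ach_from_ppm(volume_m3, people, measured_ppm):
    """Invert the balance: infer air changes per hour from a steady reading."""
    emission_m3h = people * CO2_PER_PERSON_M3H
    return 1e6 * emission_m3h / ((measured_ppm - C_OUT_PPM) * volume_m3)
```

So a single stable reading, plus a count of the people present, is enough to infer the number of air changes per hour.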

I have been studying all these variables, and I will write more as I get more data, but I was intrigued by two early results.

Result#1

The figure below shows the CO2 concentration in the middle room of the house measured over several days using the data-logging CO2 meter.

This room is a ‘hallway’ room and its two doors are open all day, so I think there is a fair degree of air mixing with the entire ground floor.

The data is plotted versus the time of day to emphasise daily similarities.

Click for larger version

I have annotated the graph above in the figure below:

Click for larger version

There are several key features:

  • The first is that the lowest level of CO2 concentration observed is around 400 parts per million (ppm) – which is the approximate concentration in external air. This probably corresponds to a time in which both front and back doors were open.
  • The second is that overnight, the concentration falls to a steady level of between 400 and 500 ppm. The rate of fall corresponds to between 0.5 and 1 ACH.
  • The third is the rapid rise in concentration and high levels of CO2 (up to 1500 ppm) associated with cooking with gas.
  • The fourth is that excluding the cooking ‘events’, the CO2 concentration typically lies in the range 400 to 600 ppm. With typically 3 or 4 adults in the house, this is consistent with between 3 and 4 ACH. During this time the weather was warm and doors were often left open, which plausibly explains why the air change rate might be higher during the day than during the night.

Result#2

The figure below shows the readings of the TEMTOP CO2 meter in my bedroom (volume 51 cubic metres) with the door and windows closed on two consecutive nights.

It can be seen that the CO2 concentration rose steadily and then stabilised at around 1900 ppm. With two people sleeping this corresponds to an air change rate of around 0.5 ACH.
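Plugging the bedroom numbers into the ach_from_ppm helper sketched earlier gives the same answer:

```python
print(ach_from_ppm(volume_m3=51, people=2, measured_ppm=1900))   # ~0.5 ACH
```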

What next for Michaelhaus?

The data indicate that:

  • For our bedroom, more airflow would probably be beneficial.
  • For the bulk of the house, more airflow might be required in winter when doors and windows will likely remain closed.

So it seems that some degree of mechanical ventilation with heat recovery will likely be required. I will study the matter further over the winter.

What is empowering about the CO2 monitoring technique is that I now have a simple tool that allows me to estimate – rather than merely guess – the number of air changes per hour.

I am becoming an insulation bore.

August 21, 2020

Friends, I am obsessed with the insulation I am about to apply to the outside of my house (link).

The installation is still 4 weeks away but I am thinking about it all the time. And if there is a lull in the conversation I may well introduce the topic apropos of anything at all:

Person A: “So I said to Doreen, this relationship just isn’t working…”

…Pause…

Me: “That’s very difficult. But have you thought about External Wall Insulation?”

However, aside from the risk of boring everyone I know, I have had two major concerns.

  • The first and more basic concern is about the flammability of the insulation.
  • And the second and more technical concern is whether or not the insulation will work as well as it claims.

I have now looked at both these issues experimentally. I’ll cover the measurement of the thermal conductivity in the next article, but here I take a look at the flammability of external wall insulation.

Flammability 

When I tell people about the external wall insulation (EWI) project I can see them thinking “Oh. You mean like Grenfell?” – and then saying nothing.

That appalling tale of misunderstood specifications, which ended up with people putting flammable insulation on the outside of high-rise flats, led me to believe that I needed to reassure myself personally before going ahead. It would be unwise to take anyone’s word for it.

The insulation that will be applied to the outside of the house is called Kingspan K5.

  • It is a thermoset foam, which means that it is manufactured and hardened by heating, and so it should not melt when heated again.
  • This is in contrast with expanded polystyrene (EPS) foam, a thermoplastic, which will soften or melt on heating.

The K5 datasheet (link) contains detailed specifications of the performance in flame tests. For example:

“…achieves European Classification (Euroclass) C-s1,d0 when classified to EN 13501-1: 2018 (Fire classification of construction products and building elements. Classification using data from reaction to fire tests).

Extract from Kingspan K5 data sheet. Click for larger version

But what does this mean? I found this explanatory page and table.

Click for larger image

  • The C is a categorisation from A (Non-combustible) to F (highly flammable) and means “Combustible – with limited contribution to fire”
  • The s1 means “a little or no smoke” on a scale of s1 to s3 (“substantial smoke”).
  • The d0 means “no flaming droplets or particles” on a scale of d0 to d2 (“Quite a lot”)

This was quite reassuring, but the terms are rather inexact and I didn’t really know what it all meant in practice.

So I went down to the EWI Store, bought some K5 and did my own flammability tests.

Flammability test

My flammability test consisted of propping up a sheet of K5 and directing a blow torch onto its surface from a few centimetres away and then leaving it for 10 minutes.

I think this is a pretty tough test and I was pleasantly surprised by how the insulation performed.

The results are captured in the exceedingly dull video at the end of the page and there are post-mortem photographs of the insulation below.

The insulation remained broadly intact and damage was limited to a few centimetres around the region where the flame reached the insulation. The rear side of the insulation did not appear to have been damaged at all.

After having performed this test I realised that I had forgotten to measure the temperature on the rear face of the K5. Doh!

So a few days later I repeated the test, this time measuring the temperature on the back of the 50 mm thick insulation panel as the temperature in the interior of the insulation reached approximately 1000 °C.

Remarkably, after 10 minutes the rear had only reached 57 °C.

Overall these results are better than I expected, and from a safety perspective, I feel happy having Kingspan K5 on the outside of my house.

Expanded Polystyrene Foam (EPS)

I also did flammability tests on EPS. But these tests did not take long – EPS lasts just a few seconds before burning and melting.

However, even for a material as flammable as EPS, in this external application the risk would be very low. The foam would be sandwiched between the non-flammable external render and a non-flammable brick wall.

You can read about the factors which mitigate the risk in this application at the following links

But I am still happy to be paying extra for the superior fire resistance of Kingspan K5.

But will the K5 really be as good an insulator as its manufacturers claim? I’ll cover this in the next exciting episode…

Video

Here is a 15 minute video of my flammability tests of Kingspan K5 and Expanded Polystyrene.

It’s really boring but the ‘highlights’ are:

  • 3′ 30″: K5: Move blowtorch closer
  • 8′ 00″: K5: Close up
  • 10′ 40″: K5: Post mortem
  • 11′ 25″: White EPS: Start
  • 11′ 57″: White EPS: Move blowtorch closer
  • 13′ 06″: White EPS: Post mortem
  • 13′ 20″: Black EPS: Start
  • 13′ 57″: Black EPS: Post mortem
  • 14′ 16″: Black EPS#2: Start with burner further away
  • 15′ 30″: Black EPS#2: Post mortem

COVID-19 Re-categorisation of deaths

August 20, 2020

Summary

Forgive me for omitting the usual ‘population prevalence’ update but, roughly speaking, nothing has changed.

However the government have introduced new ways to count the dead, and that is really important.

Surprisingly – to me at least – I have concluded that this is not a self-serving manipulation of the data to reduce headline rates of death. It actually helps us to understand what is going on with the pandemic.

New ways to count the dead

Last week I wrote:

I feel the best thing I can do in the face of this tidal wave of uncertainty is to try to focus on the simple statistics that require only minimal theoretical interpretation.

I was away from home last week and so missed the announcement about new ways to ‘count the dead’. New ways to count the dead?! What?

The government announced it would divide daily deaths into three categories:

  1. Deaths of people who have died within 28 days of a positive COVID-19 test
    • Irrespective of any ‘underlying conditions’ we can reasonably say these people ‘died from COVID-19’.
  2. Deaths of people who have died within 60 days of a positive COVID-19 test
    • Similarly, despite any ‘underlying conditions’ we can reasonably say these people also ‘died from COVID-19’.
  3. Deaths of people who have died at any time after a positive COVID-19 test
    • Depending on the length of time to death, the COVID-19 infection might be less relevant to these deaths than other pre-existing conditions.

At first I found it hard not to think that the government was doing this in order to generate lower numbers.

But after writing this article, I have concluded that this categorisation is actually helpful.

Let’s look at the data.

The government now produce three curves as shown in the two figures below. The first graph shows the daily death statistic throughout the pandemic. The three curves only differ in the ‘tail’ of the curve.

Click for larger figure. The Red Curve shows the total number of deaths per day. The Cyan Curve shows the number of deaths within 28 days of a test, and the Blue Curve shows the number of deaths within 60 days of a test. All curves are 7-day retrospective rolling averages.

Let’s look at the recent data in more detail.

Click for larger figure. The Red Curve shows the total number of deaths per day. The Cyan Curve shows the number of deaths within 28 days of a test, and the Blue Curve shows the number of deaths within 60 days of a test. All curves are 7-day retrospective rolling averages.

Notice that the ‘All deaths’ curve includes all the deaths counted in the ’60-day’ curve and the ’60-day’ curve includes all deaths on the ’28-day’ curve.

In order to understand these data we need to re-categorise them by subtracting the datasets from each other to yield:

  • Deaths of people who have died within 28 days of a positive COVID-19 test
  • Deaths of people who have died between 28 and 60 days of a positive COVID-19 test
  • Deaths of people who have died more than 60 days after a positive COVID-19 test
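In code, this re-categorisation is just two subtractions. A sketch in Python – the series names here are hypothetical:

```python
# Convert the three overlapping curves into three independent categories.
import pandas as pd

df = pd.DataFrame({
    "all_deaths": all_deaths,    # died any time after a positive test
    "within_60": within_60,      # died within 60 days of a positive test
    "within_28": within_28,      # died within 28 days of a positive test
})

df["under_28"] = df["within_28"]                        # < 28 days
df["from_28_to_60"] = df["within_60"] - df["within_28"] # 28-60 days
df["over_60"] = df["all_deaths"] - df["within_60"]      # > 60 days
# The three new columns now add up to df["all_deaths"].
```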

These data are summarised in the figures below. Now the data in each of three categories are independent of each other and add up to give the total deaths.

Click for larger figure. The Red Curve shows the total number of deaths per day. The Cyan Curve shows the number of deaths within 28 days of a test, and the Black Curve shows the number of deaths between 28 and 60 days after a test. The Blue Curve shows the number of deaths occurring at least 60 days after a test. All curves are 7-day retrospective rolling averages.

So for example on day 229 (16th August 2020), the average number of people dying per day in the previous seven days was 61.7 deaths per day:

  • On average, 10.9 of those people were diagnosed less than 28 days previously
  • A further 10.4 were diagnosed between 28 and 60 days previously.
  • But 40.4 of those people were diagnosed more than 60 days previously.

It is this last datum which is most significant: most people now dying after a positive COVID-19 test acquired the infection more than 60 days earlier. Furthermore, deaths in this category are rising! This is the real insight arising from this re-categorisation.

We can also plot these categories as fractions of the total deaths: we see that roughly two thirds of daily deaths occur more than 60 days after a positive COVID test – and that fraction is rising!

Click for larger figure. The Cyan Curve shows the percentage of deaths within 28 days of a test, and the Black Curve shows the percentage of deaths between 28 and 60 days after a test. The Blue Curve shows the percentage of deaths occurring at least 60 days after a test. All curves are 7-day retrospective rolling averages.

What does this tell us?

Here is my current understanding. And it is broadly good news!

  • The fraction of people dying from COVID-19 who die within 28 days has been falling since the peak of the pandemic. Currently, one sixth of people who eventually die survive for less than 28 days from their diagnosis.
    • The most likely reason for this is that our doctors have got better at treating people. Only a few people die quickly.
  • The fraction of people dying from COVID-19 who died between 28 and 60 days rose as doctors kept people alive beyond 28 days. But this too has now started to fall and only a further one sixth of people who will die survive between 28 and 60 days from their diagnosis.
    • The most likely reason for this is once again that people are being kept alive longer, but doctors are unable to cure them.
  • The fraction of people dying from COVID-19 who die and were diagnosed more than 60 days previously is still rising and now constitutes two thirds of all ongoing deaths. I find this surprising. And now we need to consider two possible causes:
    • Firstly, doctors are keeping people alive longer but are unable to cure them. If this were so then people would be dying after what must be an appalling 60 days in hospital. I was not aware that there were many patients in this condition.
    • Secondly, people might have fully or partially recovered from COVID-19, but then die of another cause.

But how large is this second category? We can estimate it thus:

  • About 1% of the UK population die each year (roughly 600,000 people). So on average we would expect around 1%/365 ≈ 0.0027% of the population to die each day, or roughly 1640 deaths per day, irrespective of COVID-19.
  • Thus current daily deaths from COVID-19 constitute only a small percentage of normally-expected deaths. If the disease did not have the capability to infect the entire population and kill literally millions then we would not be so worried about deaths at this rate.
  • So far around 320,000 people have tested positive for COVID-19 and roughly 260,000 have survived. What is the chance that someone in this cohort of 260,000 recovers from COVID infection and then dies of something else? A first guess is the population-average rate – roughly 1% per year, or 0.0027% per day – which would give around 7 deaths per day. But the people who tested positive skew much older and sicker than the general population, so their underlying death rate is plausibly several times higher – perhaps of the order of tens of deaths per day (see the sketch after this list).
  • Even with the large uncertainty in these biasing factors, this could be a significant fraction of the daily deaths data – much larger than I would have estimated.
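Here is that arithmetic as a few lines of Python – round numbers throughout, and note that any scaling-up for the age and frailty of the cohort is my assumption, not a measured quantity:

```python
# Rough check of the 'died of something else' estimate (round numbers).
population = 66_000_000
deaths_per_year = 600_000

p_daily = deaths_per_year / population / 365   # ~0.0025% per day
print(p_daily * population)                    # ~1640 deaths/day from all causes

recovered = 260_000
print(p_daily * recovered)   # ~7 deaths/day if the cohort were average;
                             # plausibly several times more, since those who
                             # tested positive skew older and frailer
```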

So my understanding of the data is this.

  • Deaths within 28 days of a positive test can be understood as being deaths arising from COVID-19. There are currently around 10 deaths per day in this category.
  • Deaths beyond 60 days of a positive test are primarily due to deaths from other causes. We should expect deaths in this category to rise towards some steady level – plausibly some tens of deaths per day – and then stabilise.
  • Deaths after 28 but before 60 days can probably not be categorised as being clearly in one category or the other. The fact that deaths in this category first rose and then fell probably indicates that they initially arose directly from COVID infection.

Overall, this is good news. It means that there are fewer deaths arising from COVID-19 than we previously thought.

And by looking at the ‘prompt’ deaths, policy makers can get better feedback on how well their policies are working on the ground.

I hate it when it’s too hot

August 7, 2020

 

I find days when the temperature exceeds 30 °C very unpleasant.

And if the night-time temperature doesn’t fall then I feel doubly troubled.

I have had the feeling that such days have become more common over my lifetime. But have they?

The short summary is “Yes”. In West London, the frequency of days on which the temperature exceeds 30 °C has increased from typically 2 days per year in the 1950’s and 1960’s to typically 4 days per year in the 2000’s and 2010’s. This was not as big an increase as I expected.

On reflection, I think my sense that these days have become more common probably arises from the fact that up until the 1980’s, there were many years when such hot days did not occur at all. As the graph at the head of the article shows, in the 2010’s they occurred every year.

Super-hot days have now become normal.

You can stop reading at this point – but if you want to know how I worked this out – read on. It was much harder than I expected it would be!

Finding the data

First, please notice that this is not the same question as “has the average summer temperature increased?”

A single very hot day can be memorable but it may only affect the monthly or seasonal average temperatures by a small amount.

So one cannot merely find data from a nearby meteorological station…

…and plot it versus time. These datasets contain just the so-called ‘monthly mean’ data, i.e. the maximum or minimum daily temperature is measured for each month and then its average value is recorded. So individual hot days are not flagged in the data. You can see my analysis of such data here.

Instead one needs to find the daily data – the daily records of individual maximum and minimum temperatures.

Happily this data is available from the Centre for Environmental Data Analysis (CEDA). They host the Met Office Integrated Data Archive System (MIDAS) for land surface station data (1853 – present). It is available under an Open Government Licence i.e. it’s free for amateurs like me to play with.

I registered and found the data for the nearby Met Office station at Heathrow. There was data for 69 years from 1948 to 2017, with a single (comma separated variable) spreadsheet for maximum and minimum temperatures (and other quantities) for each year.

Analysing the data

Looking at the spreadsheets I noticed that the 1948 data contained daily maxima and minima. But all the other 68 spreadsheets contained two entries for each day – recording the maximum and minimum temperatures from two 12-hour recording periods:

  • the first ended at 9:00 a.m. in the morning: I decided to call that ‘night-time’ data.
  • and the second ended at 9:00 p.m. in the evening: I decided to call that ‘day-time’ data.

Because the ‘day-time’ and ‘night-time’ data were on alternate rows, I found it difficult to write a spreadsheet formula that would check only the appropriate cells.

After a day of trying to ignore this problem, I resolved to write a program in Visual Basic that could open each yearly file, read just the relevant single temperature reading from each alternate line, and save the counted data in a separate file.

It took a solid day – more than 8 hours – to get it working. As I worked, I recalled performing similar tasks during my PhD studies in the 1980’s. I reflected that this was an arcane and tedious skill, but I was glad I could still pay enough attention to the details to get it to work.

For each yearly file I counted two quantities (a code sketch follows the list below):

  • The number of days when the day-time maximum exceeded a given threshold.
    • I used thresholds in 1 degree intervals from 0 °C to 35 °C
  • The number of days when the night-time minimum fell below a given threshold
    • I used thresholds in 1 degree intervals from -10 °C to +25 °C
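For anyone repeating this without resurrecting Visual Basic, here is a minimal Python sketch of the counting job. Be warned: the alternate-row layout and the column index are assumptions based on my description above, not the documented MIDAS file format.

```python
# Count, for each threshold, the days whose 'day-time' maximum exceeded it.
# CAUTION: the row layout and column index are illustrative - check them
# against the real MIDAS csv files before trusting the output.
import csv

def count_exceedances(filename, temp_col=3, thresholds=range(0, 36)):
    """Read every second row (the 9 p.m. 'day-time' record) and count,
    for each threshold, the days whose maximum exceeded it."""
    maxima = []
    with open(filename, newline="") as f:
        for i, row in enumerate(csv.reader(f)):
            if i % 2 == 1:               # alternate rows: day-time record
                maxima.append(float(row[temp_col]))
    return {t: sum(tmax > t for tmax in maxima) for t in thresholds}
```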

So for example, for 1949 the analysis tells me that there were:

  • 365 days when the day-time maximum exceeded 0 °C
  • 365 days when the day-time maximum exceeded 1 °C
  • 363 days when the day-time maximum exceeded 2 °C
  • 362 days when the day-time maximum exceeded 3 °C
  • 358 days when the day-time maximum exceeded 4 °C
  • 354 days when the day-time maximum exceeded 5 °C

etc…

  • 6 days when the day-time maximum exceeded 30 °C
  • 3 days when the day-time maximum exceeded 31 °C
  • 0 days when the day-time maximum exceeded 32 °C
  • 0 days when the day-time maximum exceeded 33 °C
  • 0 days when the day-time maximum exceeded 34 °C

From this data I could then work out that in 1949 there were…

  • 0 days when the day-time maximum was between 0 °C and 1 °C
  • 2 days when the day-time maximum was between 1 °C and 2 °C
  • 1 day when the day-time maximum was between 2 °C and 3 °C
  • 4 days when the day-time maximum was between 3 °C and 4 °C

etc..

  • 3 days when the day-time maximum was between 30 °C and 31 °C
  • 3 days when the day-time maximum was between 31 °C and 32 °C
  • 0 days when the day-time maximum was between 32 °C and 33 °C
  • 0 days when the day-time maximum was between 33 °C and 34 °C
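Each of these ‘in-between’ counts is just the difference between two adjacent cumulative counts – one line in Python (the filename is hypothetical):

```python
exceed = count_exceedances("midas_heathrow_1949.csv")    # hypothetical filename
in_band = {t: exceed[t] - exceed[t + 1] for t in range(0, 35)}
```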

Variable Variability

As I analysed the data I found it was very variable (Doh!) and it was difficult to spot trends amongst this variability. This is a central problem in meteorology and climate studies.

I decided to reduce the variability in two ways.

  • First I grouped the years into decades and found the average numbers of days in which the maximum temperatures lay in a particular range.
  • Then I widened the temperature ranges from 1 °C to 5 °C.

These two changes meant that most groups analysed had a reasonable number of counts. Looking at the data I felt able to draw four conclusions, none of which were particularly surprising.

Results: Part#1: Frequency of very hot days

The graph below shows that at Heathrow, the frequency of very hot days – days on which the maximum temperature was 31 °C or above – has indeed increased over the decades, from typically 1 to 2 days per year in the 1950’s and 1960’s to typically 3 to 4 days per year in the 2000’s and 2010’s.

I was surprised by this result. I had thought the effect would be more dramatic.

But I may have an explanation for the discrepancy between my perception and the statistics. And the answer lies in the error bars shown on the graph.

The error bars shown are ± the square root of the number of days – a typical first guess for the likely variability of any counted quantity.

So in the 1950’s and 1960’s it was quite common to have years in which the maximum temperature (at Heathrow) never exceeded 30 °C. Between 2010 and 2017 (the last year in the archive) there was not a single year in which temperatures did not reach 30 °C.

I think this is closer to my perception – it has become the new normal that temperatures in excess of 30 °C occur every year.

Results: Part#2: Frequency of days with maximum temperatures in other ranges

The graph above shows that at Heathrow, the frequency of days with maxima above 30 °C has increased.

The graphs below show, for Heathrow, how the frequency of days with maxima in each range has changed:

  • The frequency of ‘hot’ days with maxima in the range 26 °C to 30 °C has increased from typically 10 to 20 days per year in the 1950s to typically 20 to 25 days per year in the 2000’s.

  • The frequency of ‘warm’ days with maxima in the range 21 °C to 25 °C has increased from typically 65 days per year in the 1950s to typically 75 days per year in the 2000’s.

  • The frequency of days with maxima in the range 16 °C to 20 °C has stayed roughly unchanged at around 90 days per year.

  • The frequency of days with maxima in the range 11 °C to 15 °C appears to have increased slightly.

  • The frequency of ‘chilly’ days with maxima in the range 6 °C to 10 °C has decreased from typically 70 days per year in the 1950’s to typically 60 days per year in the 2000’s.

  • The frequency of ‘cold’ days with maxima in the range 0 °C to 5 °C has decreased from typically 30 days per year in the 1950’s to typically 15 days per year in the 2000’s.

Taken together this analysis shows that:

  • The frequency of very hot days has increased since the 1950’s and 1960’s, and in this part of London we are unlikely to ever again have a year in which there will not be at least one day where the temperature exceeds 30 °C.
  • Similarly, cold days in which the temperature never rises above 5 °C have become significantly less common.

Results: Part#3: Frequency of days with very low minimum temperatures

While I was doing this analysis I realised that with a little extra work I could also analyse the frequency of nights with extremely low minima.

The graph below shows the frequency of night-time minima below -5 °C across the decades. Typically there were 5 such cold nights per year in the 1950’s and 1960’s, but now there are typically just one or two such nights each year.

Analogous to the absence of years without day-time maxima above 30 °C, years with at least a single occurrence of night-time minima below -5 °C are becoming less common.

For example, in the 1950’s and 1960’s, every year had at least one night with a minimum below -5 °C at the Heathrow station. In the 2000’s only 5 years out of 10 had such low minima.

Results: Part#4: Frequency of days with other minimum temperatures

For the Heathrow station, the graphs below show the frequency of nights with minima in the ranges shown:

  • The frequency of ‘cold’ nights with minima in the range -5 °C to -1 °C has decreased from typically 45 days per year in the 1950’s to typically 25 days per year in the 2000’s.

  • The frequency of ‘cold’ nights with minima in the range 0 °C to 4 °C has decreased from typically 95 days per year in the 1950’s to typically 80 days per year in the 2000’s.

  • The frequency of nights with minima in the range 5 °C to 9 °C has remained roughly unchanged.

  • The frequency of nights with minima in the range 10 °C to 14 °C has increased from typically 90 days per year in the 1950’s to typically 115 days per year in the 2000’s.

  • The frequency of ‘warm’ nights with minima in the range 15 °C to 19 °C has increased very markedly from typically 12 days per year in the 1950’s to typically 30 days per year in the 2000’s.

  • ‘Hot’ nights with minima above 20 °C are thankfully still very rare.

 

Acknowledgements

Thanks to Met Office stars

  • John Kennedy for pointing to the MIDAS resource
  • Mark McCarthy for helpful tweets
  • Unknown data scientists for quality control of the Met Office Data

Apologies

Some eagle-eyed readers may notice that I have confused the boundaries of some of my temperature range categories. I am a bit tired of this now but I will sort it out when the manuscript comes back from the referees.

COVID-19: Day 212 Update: Population Prevalence

July 31, 2020

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) suggest even more clearly than last week that there has been a small increase in prevalence.

The current overall prevalence is estimated to be 1 in 1500  but some areas are estimated to have a much higher incidence.

Overall the ONS estimate that 6.2 ± 1.3 % of the UK population have been ill with COVID-19 so far.

Population Prevalence

On 31st July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for six non-overlapping fortnightly periods covering the period from 4th May up until 26th July.

Start of survey period | End of survey period | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
04/05/2020 | 17/05/2020 | 132 | 0.35 | 0.23 | 0.52
18/05/2020 | 31/05/2020 | 144 | 0.15 | 0.08 | 0.25
01/06/2020 | 14/06/2020 | 160 | 0.07 | 0.03 | 0.13
15/06/2020 | 28/06/2020 | 174 | 0.09 | 0.05 | 0.16
29/06/2020 | 12/07/2020 | 188 | 0.05 | 0.03 | 0.09
13/07/2020 | 26/07/2020 | 202 | 0.09 | 0.06 | 0.14

Data from ONS on 31st July

Plotting these data, I see no evidence of a continued decline. ONS modelling suggests the prevalence is actually increasing.

Click for a larger version.

Because of this it no longer makes sense to fit a curve to the data and to anticipate likely dates when the population incidence might fall to key values.

Click for a larger version.

In particular, things look grim for an untroubled return to schools. Previously – during full lock down – we achieved a decline of the prevalence of COVID-19 by a factor 10 in roughly 45 days.

The start of the school term is just 35 days away and – given the much greater activity now compared with April – it is unrealistic to expect the prevalence to fall by a factor 66 to the 1 in 100,000 level in time for the start of the school term.

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the 31 July data set above corresponds to:

  • 51 positive cases detected from a sample of 16,236
  • 32 positive cases detected from a sample of 20,390
  • 13 positive cases detected from a sample of 25,519
  • 18 positive cases detected from a sample of 23,767
  • 19 positive cases detected from a sample of 31,542
  • 24 positive cases detected from a sample of 28,325

I feel obliged to state that I do not understand how ONS process the data, because historical data points seem to change from one analysis to the next. But I suspect they are just doing something sophisticated that I don’t understand.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from the start of June.

Click for a larger version.

A close-up graph shows the death rate is not convincingly falling at all, and so unless there is some change in behaviour, coronavirus death rates of tens of people per day are likely to continue for several months yet.

Click for a larger version.

The trend value of deaths (~65 per day) is consistently higher than the roughly 12 deaths per day that we might have expected based on trend behaviour at the start of June.

In future updates I will no longer plot the World-o-meter projection because it is clearly no longer relevant to what is happening in the UK.

My House: comparing models and measurements

July 28, 2020

I began my last article about my house by explaining that I have used both measurements and modelling to plan thermal improvements.

However, I did not answer the question:

  • Does the thermal model agree with the measurements?

In this article I will compare them and show that the agreement is good enough to use the model as a basis for planning further work.

The Measurements

There are two key measurements:

  • I read my gas meter roughly once a week.
    • I subtract the reading from the previous week’s reading to find out how many hundreds of cubic feet of gas were consumed that week.
    • I then work out how much energy this corresponds to. You can use this calculator for your own meter.
    • I then work out the average rate at which the energy was used by dividing the amount of energy by the time since the last reading.
    • This gives the average power used in watts (W).
  • I read my weather station.
    • I record the average weekly temperature.

The Model

The model is an attempt to explain the gas consumption in terms of a single number that characterises the thermal transmission from the inside to the outside of the house.

The thermal transmission is measured in watts per degree Celsius of temperature difference (W/°C).

Comparing the model and the measurements.

Previously (link) I explained how I calculated the thermal transmission through the walls of the house. And I then used this to estimate how the thermal transmission would be affected by various planned changes.

  • But how do I know if those calculations are reliable?

To check this I begin with the gas consumption data for the 80 weeks or so for which I have measurements. I have smoothed these data, with each point being a 5-week symmetrical running average, i.e. the average consumption from 2 weeks before to 2 weeks after the time at which it is plotted.
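(In pandas that smoothing is a one-liner – assuming the weekly average powers live in a Series I will call weekly_power, a name invented here:)

```python
import pandas as pd

# weekly_power: hypothetical pandas Series of average power (W), one per week
smoothed = weekly_power.rolling(window=5, center=True).mean()
```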

Click for a larger version.

This shows that in the summer, the average rate of gas consumption is around 200 watts.

Since the space-heating is not used in the summer, I assume this 200 W is due to the use of gas for cooking and heating water for showers. I assume that this gas consumption continues unchanged through the year.

I then assume that the excess winter use is solely caused by the lower average weekly external temperature.

Mathematically I expect the gas consumption to be given by the formula:

average gas power (W) ≈ 200 W + transmission (W/°C) × [ 18 °C − average weekly external temperature (°C) ]

Click for a larger version.

Next, alongside the measured gas consumption, we can plot what the equation above predicts the gas consumption would have been, based on:

  • The calculated properties of the house looked up from data sheets about building materials and windows, and dimensional measurements of the house.
  • The difference between the internal and external temperatures as worked out from weather station readings.
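As a sketch, here is that prediction in Python; the 200 W background and 18 °C internal temperature are the assumptions described in the text, and the 298 W/°C default is the transmission I calculated for the winter of 2018/2019 (see below):

```python
BACKGROUND_W = 200.0     # assumed year-round gas use: cooking and hot water
T_INTERNAL_C = 18.0      # assumed constant internal temperature

def predicted_gas_power(t_external_c, transmission_w_per_degc=298.0):
    """Predicted average gas power (W) for a week at this external temperature."""
    heating_w = transmission_w_per_degc * max(0.0, T_INTERNAL_C - t_external_c)
    return BACKGROUND_W + heating_w

print(predicted_gas_power(5.0))    # a cold winter week: ~4100 W
print(predicted_gas_power(20.0))   # a summer week: just the 200 W background
```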

The graph below shows the model with a transmission of 298 W/°C – the value I calculated was appropriate to the winter of 2018/2019.

Click for a larger version.

You can see that the dotted-red curve matches the experimental gas consumption data reasonably well in the cold winter months – except during the coldest winter weather (around day 25).

You can also see that during the following winter of 2019/2020 the model predicts that there should have been substantially more gas consumption than there actually was.

  • Was this due to the £7000 worth of triple-glazing I installed?

My calculations suggested that after the triple-glazing was installed the transmission should have been reduced to 260 W/°C. This curve is plotted below:

Click for a larger version.

You can see that with a transmission of 260 W/°C the model curve describes the data for the winter of 2019/2020 reasonably well.

I was pleased to see this: it is the first data I have ever seen which quantitatively verifies the effect of triple-glazing.

This gives me confidence that this crude model is describing heat transmission through my house reasonably well.

That is why I feel confident that, after spending a further £3,000 on finishing the triple-glazing and £20,000 on external wall insulation, the transmission will hopefully be reduced to 152 W/°C. That curve is shown in the figure below.

Click for a larger version.

How good is this level of insulation?

My expectation is that after this summer’s modifications, this house – with a floor area of almost 180 square metres – will require barely more than 2 kW of winter heating.

Over a year it would require typically 8000 kWh of heating, or 44 kWh per square metre per year.

If this performance level is verified then (according to OVO Energy) the house will require less energy per square metre than the average house in every European country except Portugal: the UK average is 133 kWh per square metre per year.

This is still not good enough to achieve ‘Passivhaus’ status (Links 1, 2) – which requires less than 15 kWh per square metre per year – or even the ‘Passivhaus Retrofit’ standard EnerPHit (Link), which requires less than 25 kWh per square metre per year. But it would still be exceptional for an old UK house.

Other considerations

Despite the fact that the graphs above have worked out nicely, there is still considerable uncertainty about the way the house performs.

For example, I don’t really know the significance of several factors such as heat loss through air flow, and heat loss through the floors, both of which are little more than guesses. I am concerned I may have underestimated these processes in which case the effect of the external wall insulation will not be as large as I anticipate.

And I have assumed that the internal temperature was a constant 18 °C. It’s not clear whether this is the best estimate – perhaps it should be 19 °C or 20 °C?

So the fact that these modelled results look good indicates that these assumptions may be about right, or that a combination of factors have by chance made the agreement look good.

One interesting feature of the data is that while the single parameter for heat transmission describes the winter and summer data well – it does not describe the spring and autumn data well.

The model always predicts higher gas usage than actually occurs in the spring and autumn. Look for example at the data from days 250 to 320 and from 450 to 550 on the second model.

Click for a larger version.

I do not know what causes this, but it may be that in the transitional seasons, the pattern of gas usage may differ from being almost always on (in winter) or always off (in summer). I tried adding an extra parameter to describe this effect, but it didn’t add a lot to the explanatory power of the model.

In short, the model is simple and the reality is complex, but answering the question I asked at the start of this article:

  • Does the thermal model agree with my measurements?

I think the answer is “Yes” – it’s good enough to guide my choices.

Previous articles about my house.

Masks revisited

July 25, 2020

Michael in a mask

Back in April…

I wrote an article Life beyond lock-down: Masks for all? where I asked the question:

  • Will we all be wearing masks in public for the next year or two?

The question was prompted by a good friend who had sent me a link to a video which advocated the wearing of masks in public as a successful strategy for combating the transmission of coronavirus.

One of the key pieces of evidence offered in the video was the effectiveness of even primitive masks in inhibiting virus transmission in Czechia.

Apparently, mask-wearing in public became de rigueur in Czechia right from the start, and this corresponded – apparently – with a low incidence of COVID-19.

I decided to look at the data for the number of deaths per million of the population in the countries of Europe as recorded on Worldometer. The results from 2nd April 2020 are shown below.

Number of deaths per million of population of countries in Europe on 2nd April 2020. See text for details. Czechia is highlighted in yellow. Click for larger version

I concluded that:

  • Czechia did not stand out from its neighbours as having an especially low death rate, at that time.
  • So even though the idea of wearing a mask in public was not unreasonable, the data themselves did not seem to speak to the effectiveness of the habit.

This tied in with the conclusions of an extensive Ars Technica article on the subject.

…but what about now?

Now, at the end of July, after an eventful 104 days, masks are compulsory on public transport and in shops throughout England.

So I thought this might be a good time to look back at Czechia and see how it had fared through what I believe we are calling ‘the First Wave’.

  • Did mask-wearing work out well in Czechia?

Today’s data from World-o-meter on the number of deaths per million of population are captured below, but this time colour-coded.

Number of deaths per million of population of countries in Europe on 25th July 2020. See text for details. Czechia is highlighted in yellow with a red border. Click for larger version

Well, Czechia has done well, being one of a small group of countries with fewer than 50 deaths per million of population. But other countries where mask-wearing was not touted from the start as a feature of pro-social behaviour have also done well.

The countries that have not done well all have large populations with massive inflows of visitors. Of the countries like this, only Germany seems to stand out as having done well.

Of course there are many factors at play, and it may be that Czechia’s mask-wearing habit has indeed been effective. But it has certainly not been a panacea.

And yet here we are, and it does look as though mask-wearing has already become accepted by the vast majority of people as a reasonable precaution, even though the data continue to be equivocal.

Curiously

Looking back at the 3rd April article I cited the United States of America as being an exemplar of the mask-wearing habit. How things change.

 

NYT Tracker for Czechia

Headlines from papers on 2nd April 2020

COVID-19: Day 205 Update: Population Prevalence

July 25, 2020

Summary

This post is an update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The latest data from the Office for National Statistics (ONS) covers only two weeks after the ‘opening up’ on July 4th (Day 185 of 2020) and suggest that there has been a small increase in prevalence.

However, the ONS data do not have the resolution to rapidly detect slow increases or decreases in prevalence, so we need to keep measuring in order to detect any sustained slow increase at the earliest opportunity.

Population Prevalence

On 24th July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for six non-overlapping fortnightly periods covering the period from 27th April up until 19th July.

Start of survey period | End of survey period | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
27/04/2020 | 10/05/2020 | 125 | 0.33 | 0.22 | 0.48
11/05/2020 | 24/05/2020 | 139 | 0.30 | 0.18 | 0.46
25/05/2020 | 07/06/2020 | 153 | 0.07 | 0.03 | 0.12
08/06/2020 | 21/06/2020 | 167 | 0.10 | 0.05 | 0.18
22/06/2020 | 05/07/2020 | 181 | 0.04 | 0.02 | 0.08
06/07/2020 | 19/07/2020 | 195 | 0.05 | 0.04 | 0.11

Data from ONS on 24th July

Plotting these data on a logarithmic graph we see a decreasing trend.

An (unweighted) exponential fit indicates a factor 10 reduction in prevalence every 74 days, longer than the previous estimate of 61 days, which was itself longer than the previous estimate of 51 days.
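The fit is easy to reproduce from the table above: a straight line fitted to the logarithm of the prevalence. A minimal sketch:

```python
import numpy as np

days = np.array([125, 139, 153, 167, 181, 195])         # middle day of survey
prev = np.array([0.33, 0.30, 0.07, 0.10, 0.04, 0.05])   # % testing positive

slope, intercept = np.polyfit(days, np.log10(prev), 1)
print(-1.0 / slope)   # days for a factor-10 fall: ~74 for these data
```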

It is not clear that the exponential fit correctly describes the data (as it should for a declining epidemic) but if we extrapolate the trend of the data we can find likely dates when the population incidence might fall to key values.

Prevalence | Date | Cases in the UK
1 in 1,000 | End of May | About 60,000
1 in 10,000 | End of August | About 6,000
1 in 100,000 | End of October | About 600

These dates are later than previously anticipated, and worryingly the data confirm that it is unlikely that the prevalence will reach the 1 in 100,000 level in time for the start of the school term.

Limits

As I have mentioned previously, we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

Each fortnightly data point on the July 24th data set above corresponds to:

  • 40 positive cases detected from a sample of 11,346
  • 50 positive cases detected from a sample of 19,354
  • 17 positive cases detected from a sample of 22,570
  • 18 positive cases detected from a sample of 25,200
  • 12 positive cases detected from a sample of 26,332
  • 19 positive cases detected from a sample of 30,260

Incidentally, the ONS also include data on the number of households sampled, and, in cases where someone in a household tests positive, there are typically two positive cases in that household.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projection from the start of June.

Click for a larger version

This data shows the death rate is still falling, but only slowly. The trend value of deaths (~75 per day) is consistently higher than the roughly 19 deaths per day that we might have expected based on trend behaviour at the start of June.

This indicates that death rates of tens of people per day are likely to continue for several months yet.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

Meteorological Thermometers are Sloooooooow!

July 23, 2020

Abstract: Laboratory measurements of the response time of thermometers used in typical meteorological applications reveal that they respond more slowly than generally thought. None of the tested thermometers met the WMO guideline response time of 20 seconds in a wind speed of 1 m/s.

Friends – together with the meteorologist’s meteorologist Stephen Burt, I have published an academic paper – possibly my last. It’s a simple paper, following the maxim of the metrologist’s metrologist, Michael Moldover: “One paper: one thing”.

You can read it for free here: Response times of meteorological air temperature sensors.

Land Surface Air Temperature (LSAT)

Air temperature measurements taken over the land surface of the Earth (LSAT) are the primary measurand in humanity’s assessment of the extent of global warming.

And air temperature measurements are also critical for assessing the accuracy of weather forecasts.

However air temperature measurements are difficult. They are subject to a number of systematic errors that arise because of the low thermal conductivity and low heat capacity of air.

These effects make it tricky to ensure good thermal contact between the air and thermometers and make the readings of thermometers sensitive to even very low levels of radiative heating.

I have written about this in a previous academic paper which you can read for free here: Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology.

The WMO CIMO guidelines

So it’s important that air temperature measurements are made in a standardised manner worldwide. This makes it possible for scientists to assess the data for possible systematic effects.

For this reason the Commission on Instruments and Methods of Observation (CIMO) of the World Meteorological Organisation (WMO) publish a guide (the so-called CIMO Guide, which you can read for free here: CIMO guide) to which manufacturers refer when producing equipment.

I was honoured to represent the International Bureau of Weights and Measures on the committee that last reviewed the CIMO guide on temperature measurement and I ‘stuck my oar in’ on one or two issues!

But one issue on which I was silent was the response time of thermometers. I was silent because I didn’t have any idea what the response time was, or indeed what response time was desirable.

Response Time

How rapidly should a meteorological air-thermometer respond? There is no definitively correct answer.

  • If it responds too rapidly:
    • …then the reading will fluctuate with local temperature variations and it will wastefully require many readings in order to estimate the average air temperature.
  • If it responds too slowly:
    • …then the reading will fail to track local maximum and minimum temperatures.

This is why world-wide standardisation of meteorological equipment is important.

In the paper we report:

  • Measurements of the response to a step change in temperature of a range of thermometers taken in a simple wind tunnel at a range of air speeds.
  • Analysis of this data to extract a time constant that characterises the thermometers.
  • Further analysis to explain the measurements in terms of the mechanisms of heat transfer between the air and thermometer.
  • A suggested rule-of-thumb for estimating the time constant of any thermometer.
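The core of that analysis is the standard first-order model: after a step change, the thermometer reading relaxes exponentially towards the air temperature with a time constant τ. As a sketch (this is not the paper’s actual analysis code, and the arrays t and temps are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, t_air, t0, tau):
    """First-order relaxation of a thermometer towards the air temperature."""
    return t_air + (t0 - t_air) * np.exp(-t / tau)

# With measured times t (s) and temperatures temps (degC):
# popt, _ = curve_fit(step_response, t, temps, p0=(20.0, 40.0, 30.0))
# tau_seconds = popt[2]
```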

We then suggest how people should respond to the fact that none of the thermometers met the CIMO Guidelines.

So all that awaits you in our paper (which you can read for free) over at the Quarterly Journal of the Royal Meteorological Society.

Other books by Stephen Burt 

If you like our paper then you may also be interested in other books by Stephen Burt. Sadly these books are not free. 😦


%d bloggers like this: