Archive for the ‘Simple Science’ Category

COVID-19: Day 111: Getting better, but too slowly.

April 21, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.

========================================

This post looks at today’s data (Day 111) and clarifies the meaning of the data classification “New Cases”.

Allowing for a change in how ‘New Cases’ are counted gives a small downward trend to the predicted number of daily deaths. The slowness of this trend – if continued – would result in our national ordeal lasting through to mid-June, with a final death toll in excess of 40,000.

‘New Cases’

In my previous posts (1, 2, 3), I have been predicting the number of hospital deaths one week ahead of time by reasoning that mortality from COVID-19 hospital admissions is around 20%, and so 20% of new ‘Cases’ become ‘Deaths’ after 6 days on average.
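As a sketch of that rule – the case numbers below are made up for illustration, not real data:

```python
# Sketch of the prediction rule: deaths(day) = 20% of new cases 6 days earlier.
MORTALITY = 0.20   # assumed fraction of hospital cases that die
DELAY = 6          # assumed average days from admission to death

new_cases = [4000, 4500, 4200, 4400, 4300, 4100, 4000, 3900, 3800]

def predicted_deaths(cases, day):
    """Predicted deaths on `day` from cases admitted DELAY days earlier."""
    if day < DELAY:
        return None  # no case data far enough back
    return MORTALITY * cases[day - DELAY]

print(predicted_deaths(new_cases, 6))  # 20% of day-0 cases -> 800.0
```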

One important qualification to this prediction is that ‘New Cases’ are evaluated in the same way across the period. In fact the way the statistic for ‘New Cases’ is derived was changed on April 11th (Day 101 of 2020).

Pillar 1 & Pillar 2 Testing

I had been alert to this possibility, but I only became aware of this change yesterday during the government briefing, when they showed this slide.

Slide2

I searched for the data online but could not find it.

[Update: I found the slides from the daily government briefings here.]

So I captured this by freezing a replay of the presentation and then pressing ‘Print screen’ on my computer. I then typed the number of cases from the blue and orange categories into my spreadsheet.

  • Initially, ‘New Cases’ cases were all deduced by so-called Pillar 1 testing (blue). This is mainly the hospital tests of new admissions.
  • From March 29th, a small number of cases deduced from Pillar 2 testing (orange) of health care staff were being recorded, but these were not included with the Pillar 1 data.
  • From April 11th, the increasing number of cases deduced from Pillar 2 testing (orange) of health care staff were included with the Pillar 1 data.

The effect of this change made it seem as if the number of cases from Pillar 1 testing – the statistic we would expect to correlate with later deaths – was staying high, when in fact it was slightly declining.

In itself, this is good news. But it is not very good news, because the reduction in cases diagnosed by Pillar 1 testing is not very great.

Revised Predictions

Below I have re-plotted my usual graph, but now the prediction for future deaths is based just on Pillar 1 testing.

Slide3

The above graph shows various statistics plotted versus the day of the year.

  • the blue curve shows the daily published number of new ‘Pillar 1 tested’ COVID-19 cases.
  • the red curve shows the daily number of COVID-19 deaths in hospital.
  • the black dotted line shows the predicted number of deaths based on 20% case mortality after 6 days.
  • The blue dotted line shows my previous prediction based on ‘New Cases’ diagnosed by Pillar 1 and Pillar 2 testing.
  • The vertical green lines show the start and end of the first phase of the ‘lock down’.

For the part of the curve relating to the last two weeks, the data are not changing rapidly, so we can re-plot the data on a linear vertical scale to see that region in more detail.

Slide4

The above graph shows some of the same data as the previous graph.

  • the red curve shows the daily number of COVID-19 deaths in hospital.
  • the black dotted line shows the predicted number of deaths based on 20% case mortality of Pillar 1 cases after 6 days.
  • the blue dotted line shows my previous prediction based on ‘New Cases’ diagnosed by Pillar 1 and Pillar 2 testing.

What I conclude from this data is that:

  • The number of new cases diagnosed by Pillar 1 testing is falling, but only slowly.
  • Fitting a linear trend to the data (see the graph below), the number of new cases would not be expected to reach zero for another 54 days – Day 165 (14th June).
  • I do not know why this statistic is falling so slowly, and that worries me.
  • If that trend were followed, the death toll would likely exceed 40,000 – a truly appalling outcome.

Slide5

===========================

Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

COVID-19 Numerology

April 11, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.

========================================

Life is very pleasant for me and my wife in this ‘stay at home’ world, but I find myself permanently anxious and neurotically focused on ‘the numbers’: trying to understand them and use them to foresee what’s coming next.

I had thought naively that the ‘lock down’, which started on Day 81 of the year, would be completely effective, and that new cases of COVID-19 would begin to decline. But as the data below shows, that doesn’t seem to have happened.

Slide1

The number of new cases has stopped rising – but new cases are still occurring at around 4500 ± 500 cases per day.

As I understand the data, and the way in which testing is done, these are mainly people entering hospital. These are people who have probably been ill at home for some time, but whose symptoms have now become serious enough for them to come to hospital.

But even so, some of those people will have been infected after Day 81.

Relating New Cases to Deaths

Some fraction of the people entering hospital will die a few days later.

I have looked at the UK data to try to understand how many people would die – the fractional mortality – and the delay.

To do this I took the ‘new cases‘ data and:

  • Applied a delay to the data that moves it to the right on the graph
  • Adjusted the fractional mortality to try to match the statistic for daily deaths. This moves it downwards on the graph.

Slide2

I found a reasonable match to the data for a delay of 7 days and a fractional mortality of 25%. i.e. the data seem to imply that 1 in 4 people being admitted to hospital as a new case will die, on average just 7 days later.

Slide3

Is this right?

Well obviously I don’t know if this is right or not.

I had expected a much lower mortality for people entering hospital – perhaps 1 in 10. On the graph above this would push the dotted black curve downwards.

But if that were so, then in order to match the ‘daily deaths’ data, the time to death would have to be very short, and in fact the curve doesn’t match the data well.

I found that reasonable matches could be obtained with:

  • mortality of 30% and a time until death of around 9 days,
  • mortality of 20% and a time until death of around 5 days.

But the best match (by eye) seemed to be with a mortality of 25% and a time until death of around 7 days.
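The matching procedure can also be automated as a simple grid search. Here is a sketch using synthetic data – the real matching was done by eye in a spreadsheet:

```python
# Shift the new-cases series by a trial delay, scale it by a trial
# mortality, and score the match against the daily-deaths series.
def fit(cases, deaths, delays, mortalities):
    """Return (rms_error, delay, mortality) for the best grid point."""
    best = None
    for delay in delays:
        for m in mortalities:
            # Compare only on days where a shifted prediction exists
            errs = [(m * cases[d - delay] - deaths[d]) ** 2
                    for d in range(delay, len(deaths))]
            score = (sum(errs) / len(errs)) ** 0.5
            if best is None or score < best[0]:
                best = (score, delay, m)
    return best

# Synthetic data in which deaths really are 25% of cases 7 days earlier
cases = [1000 + 100 * d for d in range(30)]
deaths = [0.25 * cases[d - 7] if d >= 7 else 0 for d in range(30)]

print(fit(cases, deaths, range(4, 10), [0.20, 0.25, 0.30]))
```

With real, noisy data the minimum is shallower, which is why several (delay, mortality) pairs give similar-looking matches.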

Discussion

I was shocked and saddened by this result. I hope I have missed something out or misinterpreted the data. Perhaps the mortality or time until death have improved throughout the last few weeks.

A mortality rate of 25% has been reported in the ‘worst hit’ hospitals, but I assumed this was exceptional. Also, the time until death seemed much faster than I had expected.

One additional feature of this analysis is that – if correct – it predicts the number of daily deaths for the next 7 days. And the prediction is disappointing.

The analysis indicates that the number of daily deaths in the next 7 days is unlikely to fall because these deaths correspond to people who have already been admitted to hospital.

Link to Excel Spreadsheet: Modelling Death Delay and Mortality

=============================

As I mentioned, discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

 

Life beyond lock-down: Masks for all?

April 3, 2020

Michael in a mask

Will we all be wearing masks in public for the next year or two?

A good friend sent me a link to a video which advocated the wearing of masks in public as a successful strategy for combating the transmission of coronavirus.

I have no idea if this is true or not.

One thing of which I have been reminded by the current pandemic is that my intuition gained by experience as ‘an expert’ in one area, is not transferable. This pandemic has left me in a permanent state of bewilderment.

One of the key pieces of evidence offered in the video is the effectiveness of even primitive masks in inhibiting virus transmission in Czechia. Apparently, mask-wearing in public has become de rigueur there, and there is – apparently – a low incidence of COVID-19.

I decided to look at the data.

The table at the end of this article is compiled from data in Wikipedia’s list of the countries of Europe and their populations, and the number of deaths recorded on Worldometer on the evening of 2nd April 2020.

The map below shows the results with the numbers expressing the numbers of deaths per million of the population.

[Note: Many European countries have small populations – less than the size of London – and many may not have good reporting of the deaths, which are in any case small in number. But the data is what it is.]

Number of deaths per million of population of countries in Europe on 2nd April 2020. See text for details. Data table at the end of the article. Czechia is highlighted in yellow.

Does this data provide evidence that Czechia is a special case?

No.

To me it looks like Eastern Europe is generally less affected than Western Europe, and Czechia is in the middle. On the west it is bordered by Germany and Austria, both of which have a low incidence (for Western Europe) per million of their population.

Czechia’s 4 deaths per million (population 10.7 million) does not stand out as being anomalously low compared with, say, Poland (2 deaths per million of its 38 million people) or Greece (5 deaths per million of its 10.4 million people).
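The deaths-per-million figures are easy to reproduce from the table at the end of the article:

```python
# Deaths per million for a few countries, using rows from the data table
# (populations and deaths as recorded on 2 April 2020).
data = {
    "Czechia": (10_708_981, 44),
    "Poland": (37_846_611, 57),
    "Greece": (10_423_054, 53),
}

per_million = {c: round(d / p * 1_000_000) for c, (p, d) in data.items()}
print(per_million)  # {'Czechia': 4, 'Poland': 2, 'Greece': 5}
```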

One further piece of evidence to look for would be the rate of growth of the virus within Czechia.

NYT Tracker for Czechia

The New York Times death tracker shows that the doubling-time for deaths in Czechia is similar to other countries in Western Europe – around 3 days.

The number of deaths is small and so the trend is uncertain, but Czechia does not look like it is in the same group as Japan or South Korea, which have only slow growth of virus-related deaths – a doubling time of more than 7 days.

In short, even though the idea of wearing a mask in public is not unreasonable, the data themselves do not seem to speak to the effectiveness of the habit.

But..

After the lock-down has ended, we all will need to be able to get out and about again and earn the money to pay for this hiatus. But the virus will still be out there and will still be exactly as lethal as it has been for these last few months.

So it might easily be that wearing a mask in public – proven in effectiveness or not – may become a sign of respect for one’s fellow citizens.

One of the attractive features of the policy in Czechia is that the masks are not considered as being defensive i.e. protecting the wearer. Instead they are considered as a sign of pro-social behaviour i.e. a sign of one’s consideration of others.

Masks are unlikely to do any harm, and they may even do some good. But whichever is the case, it seems that in the US – the leader for many trends for both good and ill – their adoption may become mandatory.

NYT Tracker for Czechia

Headlines from papers on 2nd April 2020

So perhaps we will all be wearing masks in public for the next year or two. I certainly didn’t see that coming!

UPDATE on 04/04/2020: Article on Ars Technica referencing new US CDC recommendations.

Data

Country Population Deaths @2/4/2020 Deaths/million
Germany 83,783,942 1,107 13
United Kingdom 67,886,011 2,921 43
France 65,273,511 5,387 83
Italy 60,461,826 13,915 230
Spain 46,754,778 10,348 221
Ukraine 43,733,762 22 1
Poland 37,846,611 57 2
Romania 19,237,691 115 6
Netherlands 17,134,872 1,339 78
Belgium 11,589,623 1,011 87
Czech Republic (Czechia) 10,708,981 44 4
Greece 10,423,054 53 5
Portugal 10,196,709 209 20
Sweden 10,099,265 308 30
Hungary 9,660,351 21 2
Belarus 9,449,323 4 0
Austria 9,006,398 158 18
Serbia 8,737,371 31 4
Switzerland 8,654,622 536 62
Bulgaria 6,948,445 10 1
Denmark 5,792,202 123 21
Finland 5,540,720 19 3
Slovakia 5,459,642 1 0
Norway 5,421,241 50 9
Ireland 4,937,786 98 20
Croatia 4,105,267 7 2
Moldova 4,033,963 6 1
Bosnia and Herzegovina 3,280,819 16 5
Albania 2,877,797 16 6
Lithuania 2,722,289 9 3
North Macedonia 2,083,374 11 5
Slovenia 2,078,938 17 8
Latvia 1,886,198 0 0
Estonia 1,326,535 11 8
Montenegro 628,066 2 3
Luxembourg 625,978 30 48

 

Hazards of Flying

November 17, 2019

Radiation Dose

Radeye in Cabin

RadEye Geiger Counter on my lap in the plane.

It is well-known that by flying in commercial airliners, one exposes oneself to increased intensity of ionising radiation.

But it is one thing to know something in the abstract, and another to watch it in front of you.

Thus on a recent flight from Zurich I was fascinated to use a Radeye B20-ER survey meter to watch the intensity of radiation rise with altitude as I flew home.

Slide1

Graph showing the dose rate in microsieverts per hour as a function of time before and after take off. The dose rate at cruising altitude was around 25 times that on the ground.

Slide2

During the flight from Zurich, the accumulated radiation dose was almost equal to my entire daily dose in the UK.

The absolute doses are not very great (Some typical doses). The dose on the flight from Zurich (about 2.2 microsieverts) was roughly equivalent to the dose from a dental X-ray, or one whole day’s dose in the UK.
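As a rough consistency check – using assumed round numbers (a typical UK background rate of about 0.1 microsieverts per hour, not my meter’s actual reading) rather than the measured data:

```python
# Rough check: cruise dose rate ~25x the ground rate, for ~50 minutes
# at cruising altitude on a short European flight.
GROUND_RATE_USV_PER_H = 0.1               # assumed typical UK background
CRUISE_RATE_USV_PER_H = 25 * GROUND_RATE_USV_PER_H
cruise_hours = 50 / 60

flight_dose = CRUISE_RATE_USV_PER_H * cruise_hours
daily_ground_dose = GROUND_RATE_USV_PER_H * 24

print(round(flight_dose, 1))        # ~2.1 microsieverts for the flight
print(round(daily_ground_dose, 1))  # 2.4 microsieverts in a day on the ground
```

The two numbers come out comparable, which is consistent with the ‘one flight, one day’s dose’ observation.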

But for people who fly regularly the effects mount up.

Given how skittish people are about exposing themselves to any hazard I am surprised that more is not made of this – it is certainly one more reason to travel by train!

CO2 Exposure

Although I knew that by flying I was exposing myself to higher levels of radiation – I was not aware of how high the levels of carbon dioxide can become in the cabin.

I have been using a portable detector for several months. I was sceptical that it really worked well, and needed to re-assure myself that it reads correctly. I am now more or less convinced and the insights it has given have been very helpful.

In fresh air the meter reads around 400 parts per million (ppm) – but in the house, levels can exceed this by a factor of two – especially if I have been cooking using gas.

One colleague plotted levels of CO2 in the office as a function of the number of people using the office. We were then able to make a simple airflow model based on standard breathing rates and the specified number of air changes per hour.
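A minimal version of that kind of model can be sketched as follows. The parameter values here are typical textbook figures, not my colleague’s actual numbers:

```python
# Steady-state CO2 in a ventilated room: each person exhales CO2 at a
# roughly constant rate, and ventilation replaces room air with outdoor
# air. At steady state: room ppm = outdoor ppm + emission rate / vent rate.
OUTDOOR_PPM = 400
CO2_L_PER_S_PER_PERSON = 0.005  # litres of CO2 exhaled per second (~18 L/h)

def steady_state_ppm(n_people, room_volume_m3, air_changes_per_hour):
    # Ventilation flow in litres per second
    vent_l_per_s = room_volume_m3 * 1000 * air_changes_per_hour / 3600
    added_ppm = n_people * CO2_L_PER_S_PER_PERSON / vent_l_per_s * 1_000_000
    return OUTDOOR_PPM + added_ppm

# e.g. 4 people in a 100 cubic metre office with 1 air change per hour
print(round(steady_state_ppm(4, 100, 1.0)))  # ~1120 ppm
```

Doubling the air changes halves the added concentration, which is why poorly ventilated spaces (and taxiing aircraft) climb so quickly.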

Slide5

However I was surprised at just how high the levels became in the cabin of an airliner.

The picture below shows CO2 levels in the bridge leading to the plane in Zurich Airport. Levels around 1500 ppm are indicative of very poor air quality.

Slide3

Carbon dioxide concentration on the bridge leading to the plane – notice the rapid rise.

The picture below shows that things were even worse in the aeroplane cabin as we taxied on the tarmac.

Slide4

Carbon dioxide concentration measured in the cabin while we taxied on the ground in Zurich.

Once airborne, levels quickly fell to around 1000 ppm – still a high level – but much more comfortable.

I have often felt preternaturally sleepy on aircraft and now I think I know why – the spike in carbon dioxide concentrations at this level can easily induce drowsiness.

One more reason not to fly!

 

 

 

Getting there…

November 14, 2019

Life is a journey to a well-known destination. It’s the ‘getting there’ that is interesting.

The journey has been difficult these last few weeks. But I feel like I am ‘getting there’.

Work and non-work

At the start of 2019 I moved to a 3-day working week, and at first I managed to actually work around 3 days a week, and felt much better for it.

But as the year has worn on, I have found it more difficult to limit my time at work. This has been particularly intense these last few weeks.

My lack of free time has been making me miserable. It has limited my ability to focus on things I want to do for personal, non-work reasons.

Any attention I pay to a personal project – such as writing this blog – feels like a luxurious indulgence. In contrast, work activities acquire a sense of all-pervading numinous importance.

But despite this difficulty – I feel like I am better off than last year – and making progress towards the mythical goal of work-life balance on the way to a meaningful retirement.

I am getting there!

Travelling 

Mainly as a result of working too much, I am still travelling too much by air. But on some recent trips to Europe I was able to travel in part by train, and it was surprisingly easy and enjoyable.

I am getting there! By train.

My House

The last of the triple-glazing has been installed in the house. Nine windows and a door (around £7200 since you asked) have been replaced.

Many people have knowingly asked: ‘What’s the payback time?’

  • Using financial analysis the answer is many years.
  • Using moral and emotional analysis, the payback has been instantaneous.

It would be shameful to have a house which spilt raw sewage onto the street. I feel the same way about the 2.5 tonnes of carbon dioxide my house currently emits every winter.

This triple-glazing represents the first steps in bringing my home up to 21st Century Standards and it is such a relief to have begun this journey.

I will monitor the performance over the winter to see if it coincides with my expectations, and then proceed to take the next steps in the spring of 2020.

I am getting there! And emitting less carbon dioxide in the process

Talking… and listening

Physics in Action 3

Yesterday I spoke about the SI to more than 800 A level students at the Emmanuel Centre in London. I found the occasion deeply moving.

  • Firstly, the positivity and curiosity of this group of young people was palpable.
  • Secondly, their interest in the basics of metrology was heartwarming.
  • Thirdly, I heard Andrea Sella talk about ‘ice’.

Andrea’s talk linked the extraordinary physical properties of water ice to the properties of ice on Earth: the dwindling glaciers and the retreat of sea-ice.

He made the connection between our surprise that water ice is in any way unusual and the journalism of climate-change denial perpetrated by ‘newspapers’ such as the Daily Mail.

This link between the academic and the political was shocking to hear in this educational context – but essential as we all begin our journey to a new world in which we acknowledge what we have done to Earth’s climate.

We have a long way to go. But hearing Andrea clearly and truthfully denounce the lies to which we are being exposed was personally inspiring.

We really really are getting there. 

What it takes to heat my house: 280 watts per degree Celsius above ambient

August 16, 2019

Slide1

The climate emergency calls on us to “Think globally and act locally“. So moving on from distressing news about the Climate, I have been looking to reduce energy losses – and hence carbon dioxide emissions – from my home.

One of the problems with doing this is that one is often working ‘blind’ – one makes choices – often expensive choices – but afterwards it can be hard to know precisely what difference that choice has made.

So the first step is to find out the thermal performance of the house as it is now. This is as tedious as it sounds – but the result is really insightful and will help me make rational decisions about how to improve the house.

Using the result from the end of the article I found out that to keep my house comfortable in the winter, for each degree Celsius that the average temperature falls below 20 °C, I currently need to use around 280 W of heating. So when the temperature is 5 °C outside, I need to use 280 × (20 – 5) = 4200 watts of heating.

Is this a lot? Well that depends on the size of my house. By measuring the wall area and window area of the house, this figure allows me to work out the thermal performance of the walls and windows. And then I can estimate how much I could reasonably hope to improve the performance by using extra insulation or replacing windows. These details will be the topic of my next article.

In the rest of this article I describe how I made the estimate for my home which uses gas for heating, hot water, and cooking. My hope is it will help you make similar estimates for your own home.

Overall Thermal Performance

The first step to assessing the thermal performance of the house was to read the gas meter – weekly: I did say it was tedious. I began doing that last November.

One needs to do this in the winter and the summer. Gas consumption in winter is dominated by heating, and the summer reading reveals the background rate of consumption for the other uses.

My meter reads gas consumption in units of ‘hundreds of cubic feet’. This archaic unit can be converted to energy units – kilowatt-hours – using the formula below.

Energy used in kilowatt-hours = Gas Consumption in 100’s of cubic feet × 31.4

So if you consume 3 gas units per day, i.e. 300 cubic feet of gas, then that corresponds to 3 × 31.4 = 94.2 kilowatt-hours of energy per day, and an average power of 94.2 / 24 = 3.925 kilowatts, i.e. about 3,925 watts.
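For anyone repeating this at home, the conversion can be wrapped in a few lines (the 31.4 factor is the one in the formula above):

```python
# Convert a gas-meter difference (in 'hundreds of cubic feet') to energy
# and average power over the reading period.
KWH_PER_GAS_UNIT = 31.4  # kWh per 100 cubic feet of gas

def average_power_watts(gas_units, days):
    """Average heating power in watts over `days` of gas consumption."""
    kwh = gas_units * KWH_PER_GAS_UNIT
    return kwh * 1000 / (days * 24)

# 3 gas units per day, i.e. 21 units over a week:
print(round(average_power_watts(21, 7)))  # ~3925 W
```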

The second step is to measure the average external temperature each week. This sounds hard but is surprisingly easy thanks to Weather Underground.

Look up their ‘Wundermap‘ for your location – you can search by UK postcode. They have data from thousands of weather stations available.

To get historical data I clicked on a nearby weather station (it was actually the one in my garden [ITEDDING4], but any of the neighbouring ones would have done just as well). I then selected ‘weekly’ mode and noted down the average weekly temperature for each week in the period from November 2018 to August 2019.

Slide3

Weather history for my weather station. Any nearby station would have done just as well. Select ‘Weekly Mode’ and then just look at the ‘Average temperature’. You can navigate to any week using the ‘Next’ and ‘Previous’ buttons, or by selecting a date from the drop-down menus.

Once I had the average weekly temperature, I then worked out the difference between the internal temperature in the house – around 20 °C and the external temperature.

I expected the gas consumption to be correlated with the difference from 20 °C, but I was surprised by how close the correlation was.

Slide2

Averaging the winter data in the above graph I estimate that it takes approximately 280 watts to keep my house at 20 °C for each 1 °C that the temperature falls below 20 °C.
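The estimate amounts to fitting a straight line through the origin: heating power versus the temperature deficit below 20 °C. A sketch with illustrative numbers (my real data is weekly and noisier):

```python
# Least-squares slope through the origin: best-fit watts per degree C,
# assuming heating power -> 0 as the temperature deficit -> 0.
def slope_through_origin(x, y):
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

temp_deficit = [15, 12, 10, 8, 5]               # degrees C below 20 C
heating_power = [4200, 3360, 2800, 2240, 1400]  # watts (illustrative)

print(round(slope_through_origin(temp_deficit, heating_power)))  # 280 W/degree C
```

In practice one should use winter readings (when heating dominates) and subtract the summer background consumption first, as described above.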

Discussion

I have ignored many complications in arriving at this estimate.

  • I ignored the variability in the energy content of gas
  • I ignored the fact that less than 100% of the energy of the gas is used in heating

But nonetheless, I think it fairly represents the thermal performance of my house with an uncertainty of around 10%.

In the next article I will show how I used this figure to estimate the thermal performance – the so-called ‘U-values’ – of the walls and windows.

Why this matters

As I end, please let me explain why this arcane and tedious stuff matters.

Assuming that the emissions of CO2 were around 0.2 kg of CO2 per kWh of thermal energy, my meter readings enable me to calculate the carbon dioxide emissions from heating my house last winter.

The graph below shows the cumulative CO2 emissions…

Slide4

Through the winter I emitted 17 kg of CO2 every day – amounting to around 2.5 tonnes of CO2 emissions in total.

2.5 tonnes????!!!!

This is around a factor of 10 more than the waste we dispose of or recycle. I am barely conscious that 2.5 tonnes of ANYTHING have passed through my house!

I am stunned and appalled by this figure.
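For anyone who wants to check, the arithmetic behind this figure is simple. The daily gas consumption here is illustrative, chosen to match the 17 kg per day quoted above:

```python
# Daily gas use converted to kWh and then to kg of CO2, summed over a
# ~150-day heating season.
KWH_PER_GAS_UNIT = 31.4   # kWh per 100 cubic feet of gas
KG_CO2_PER_KWH = 0.2      # assumed emission factor from the text

daily_gas_units = 2.7     # illustrative winter average (hundreds of cubic feet)
daily_kwh = daily_gas_units * KWH_PER_GAS_UNIT
daily_kg_co2 = daily_kwh * KG_CO2_PER_KWH

print(round(daily_kg_co2))        # ~17 kg of CO2 per day
print(round(daily_kg_co2 * 150))  # ~2500 kg, i.e. ~2.5 tonnes over the winter
```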

Without stealing the thunder from the next article, I think I can see a way to reduce this by a factor of three at least – and maybe even six.

Is a UK grid-scale battery feasible?

April 26, 2019

This is quite a technical article, so here is the TL;DR: It would make excellent sense for the UK to build a distributed battery facility to enable renewable power to be used more effectively.

=========================================

Energy generated from renewable sources – primarily solar and wind – varies from moment-to-moment and day-to-day.

The charts below are compiled from data available at Templar Gridwatch. They show the hourly, daily and seasonal fluctuations in solar and wind generation, plotted every 5 minutes, for (a) 30 days and (b) a whole year from April 21st 2018. Yes, that is more than 100,000 data points!

Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for 30 days following April 21st 2018. The annual average (~6 GW) is shown as a black dotted line.

Slide7

Wind (Green), Solar (Yellow) and Total (Red) renewable energy generation for the 365 days since April 21st 2018. The annual average (~6 GW) is shown as a black dotted line.

An average of 6 GW is a lot of power. But suppose we could store some of this energy and use it when we wanted to rather than when nature supplied it. In other words:

Why don’t we just build a big battery?

It turns out we need quite a big battery!

How big a battery would we need?

The graphs below show a nominal ‘demand’ for electrical energy (blue) and the electrical energy made available by the vagaries of nature (red) over periods of 30 days and 100 days respectively. I didn’t draw the whole-year graph because one cannot see anything clearly on it!

The demand curve is a continuous demand for 3 GW of electrical power with a daily peak demand of 9 GW. This choice of demand curve is arbitrary, but it represents the kind of contribution we would like to be able to get from any energy source – its availability would ideally follow typical demand.

Slide8

Slide9

We can see that the renewable supply already has daily peaks in spring and summer due to the solar energy contribution.

The role of a big battery would be to cope with the difference between demand and supply. The figures below show the difference between my putative demand curve and supply, over periods of 30 days and a whole year.

Slide10

Slide11

I have drawn black dotted lines showing when the difference between demand and supply exceeds 5 GW one way or another. In spring and summer this catches most of the variations. So let’s imagine a battery that could store or release energy at a rate of 5 GW.

What storage capacity would the battery need to have? As a guess, I have done calculations for a battery that could store or release 5 GW of generated power for 5 hours, i.e. a battery with a capacity of 5 GW × 5 hours = 25 GWh. We’ll look later to see if this is too much or too little.

How would such a battery perform?

So, how would such a battery affect the ability of wind and solar to deliver a specified demand?

To assess this I used the nominal ‘demand‘ I sketched at the top of this article – a demand for 3 GW continuously, but with a daily peak in demand of 9 GW – quite a severe challenge.

The two graphs below show the energy that would be stored in the battery for 30 days after 21 April 2018, and then for the whole following year.

  • When the battery is full then supply is exceeding demand and the excess is available for immediate use.
  • When the battery is empty then supply is simply whatever the elements have given us.
  • When the battery is in-between fully-charged and empty, then it is actively storing or supplying energy.
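The book-keeping described above can be sketched in a few lines of code. The numbers are illustrative, and the time step is hourly rather than the 5-minute data used in the real calculation:

```python
# Minimal battery simulation: each time step, charge with any surplus or
# discharge to cover any shortfall, limited by the power rating and by
# the capacity (the battery can be neither over-filled nor over-drained).
POWER_GW = 5.0
CAPACITY_GWH = 25.0
DT_HOURS = 1.0

def simulate(supply_gw, demand_gw):
    """Return the stored energy (GWh) after each time step."""
    stored = 0.0
    history = []
    for s, d in zip(supply_gw, demand_gw):
        flow = max(-POWER_GW, min(POWER_GW, s - d))  # + charges, - discharges
        stored = max(0.0, min(CAPACITY_GWH, stored + flow * DT_HOURS))
        history.append(stored)
    return history

# Surplus for 3 hours, then a shortfall: the battery fills, then drains.
print(simulate([8, 8, 8, 1, 1], [3, 3, 3, 6, 6]))
# [5.0, 10.0, 15.0, 10.0, 5.0]
```

Note that in the first hour the 5 GW surplus is stored in full, but an 8 GW surplus would be clipped to 5 GW by the power rating – which is exactly why the charge/discharge rate matters as much as the capacity.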

Slide12

Over 30 days (above) the battery spends most of its time empty, but over a full year (below), the battery is put to extensive use.

Slide13

How to measure performance?

To assess the performance of the battery I looked at how the renewable energy available last year would meet levels of constant demand from 1 GW up to 10 GW with different sizes of battery. I considered battery sizes from zero (no storage) in 5 GWh steps up to our 25 GWh battery. The results are shown below:

Slide15

It is clear that the first 5 GWh of storage makes the biggest difference.

Then I tried modelling several levels of variable demand: a combination of 3 GW of continuous demand with an increasingly large daily variation – up to a peak of 9 GW. This is a much more realistic demand curve.

Slide17

Once again the first 5 GWh of storage makes a big difference for all the demand curves and the incremental benefit of bigger batteries is progressively smaller.

So based on the above analysis, I am going to consider a battery with 5 GWh of storage – but able to charge or discharge at a rate of 5 GW. But here is the big question:

Is such a battery even feasible?

Hornsdale Power Reserve

The Hornsdale Power Reserve Facility occupies an area about the size of a football pitch. Picture from the ABC site

The biggest battery grid storage facility on Earth was built a couple of years ago in Hornsdale, Australia (Wiki Link, Company Site). It seems to have been a success (link).

Here are its key parameters:

  • It can store or supply power at a rate of 100 MW or 0.1 GW
    • This is 50 times smaller than our planned battery
  • It can store 129 MWh of energy.
    • This is just under 40 times smaller than our planned battery
  • Tesla were reportedly paid 50 million US dollars
  • It was supplied in 100 days.
  • It occupies the size of a football pitch.

So why don’t we just build lots of similar things in the UK?

UK Requirements

So if we built 50 Hornsdale-sized facilities, the cost would be roughly 2.5 billion US dollars, i.e. about £2 billion.

If we could build 5 a year, our 5 GWh battery would be built in 10 years at a cost of around £200 million per year. This is a lot of money. But it is not a ridiculous amount of money when considering the National Grid infrastructure.
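The arithmetic behind these figures, with an assumed rough exchange rate of £0.80 to the dollar:

```python
# Back-of-envelope cost check for 50 Hornsdale-sized facilities.
USD_PER_FACILITY = 50e6   # reported Tesla price for Hornsdale
N_FACILITIES = 50
USD_TO_GBP = 0.8          # assumed rough exchange rate

total_gbp = N_FACILITIES * USD_PER_FACILITY * USD_TO_GBP

print(total_gbp / 1e9)        # ~2 billion pounds in total
print(total_gbp / 10 / 1e6)   # ~200 million pounds per year over 10 years
print(total_gbp / 10 / 30e6)  # ~6-7 pounds per household per year (30M households)
```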

Why this might actually make sense

The key benefits of this kind of investment are:

  • It makes the most of all the renewable energy we generate.
    • By time-shifting the energy from when it is generated to when we need it, it allows renewable energy to be sold at a higher price and improves the economics of all renewable generation
  • The capital costs are predictable and, though large, are not extreme.
  • The capital generates an income within a year of commitment.
    • In contrast, the 3.2 GW nuclear power station like Hinkley Point C is currently estimated to cost about £20 billion but does not generate any return on investment for perhaps 10 years and carries a very high technical and political risk.
  • The plant lifetime appears to be reasonable and many elements of the plant would be recyclable.
  • If distributed into 50 separate Hornsdale-size facilities, the battery would be resilient against a single catastrophic failure.
  • Battery costs still appear to be falling year on year.
  • Spread across 30 million UK households, the cost is about £6 per year.

Conclusion

I performed these calculations for my own satisfaction. I am aware that I may have missed things, and that electrical grids are complicated, and that contracts to supply electricity are of labyrinthine complexity. But broadly speaking – more storage makes the grid more stable.

I can also think of some better modelling techniques. But I don’t think that they will affect my conclusion that a grid scale battery is feasible.

  • It would occupy about 50 football pitches' worth of land spread around the country.
  • It would cost about £2 billion, about £6 per household per year for 10 years.
    • This is one tenth of the current projected cost of the Hinkley Point C nuclear power station.
  • It would deliver benefits immediately construction began, and the benefits would improve as the facility grew.

But I cannot comment on whether this makes economic sense. My guess is that when it does, it will be done!

Resources

Data came from Templar Gridwatch.

 

Cloud in a bottle!

March 22, 2019

One of the best parts of the FREE! ‘Learn About Weather‘ course, was the chance to make a cloud in a bottle. Here’s my video!

The demonstration involves squeezing a bottle partly filled with water and then letting go. One can see a cloud form as one lets go, and then disappear again when one squeezes. Wow!

But there is a trick! You need to drop a burning match into the bottle first!

Heterogeneous versus homogeneous nucleation

How does the smoke make the trick work? It’s to do with the way droplets form – a process called nucleation.

There are two ways for droplets to nucleate. An easy way and a hard way. But those words are too short for scientists. Instead we call them heterogeneous and homogeneous nucleation!

  • ‘Heterogeneous nucleation’ means that the water droplets in a cloud form around dust or smoke particles. The ‘hetero-’ prefix means ‘different’, because there is more than one type of entity involved in forming droplets – dust and water.
  • ‘Homogeneous nucleation’ means that the water droplets in a cloud form spontaneously without any other type of particle being present. The ‘homo-’ prefix means ‘the same’, because there is just one substance present – water.

The experiment shows that hetero-gen-e-ous nucleation is dramatically easier than homo-gen-e-ous nucleation. And in reality – in real clouds – practically all droplet formation is heterogeneous – involving dust particles.

The reason is easy to appreciate.

  • To form a tiny droplet by homogeneous nucleation requires a few water molecules to meet and stick together. It’s easy to imagine three or four molecules might do this, but as new molecules collide, some will have higher than average energy and tend to break the proto-droplet apart.
  • But a dust or smoke particle, though small by human standards (about 0.001 mm in diameter), is thousands of times larger than an individual water molecule. Its surface therefore provides billions of locations for water molecules to stick. So when the average energy of the water molecules is at the appropriate level to form a liquid, the water molecules can quickly stick to the surface and cause a droplet to grow.

How big is the temperature change?

Squeezing the bottle compresses the air quickly (in much less than 1 second). Because air is a poor conductor of heat, there is no time for the heat of compression to flow from the gas into the walls and the water (that takes a few seconds), and so the air warms transiently.

I was curious about the size of the temperature change that brought about this cloud formation.

I calculated that if the air in the bottle changed volume by 5%, there should be a temperature change of around 6 °C – really quite large!
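
That estimate can be checked with the adiabatic relation T·V^(γ−1) = constant. The starting temperature of 20 °C and γ = 1.4 for air are my assumptions; the 5% volume change is the figure from the text.

```python
# Temperature rise from rapid (adiabatic) compression of the air
# in the bottle: T * V**(gamma - 1) = constant.
gamma = 1.4          # ratio of specific heats for dry air
T1 = 293.15          # assumed starting temperature: 20 °C, in kelvin
compression = 0.05   # 5% reduction in air volume, as in the text

# Final temperature after compressing V to V * (1 - compression)
T2 = T1 * (1 / (1 - compression)) ** (gamma - 1)
delta_T = T2 - T1
print(f"Temperature rise: {delta_T:.1f} °C")  # roughly 6 °C
```
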

Squeezing the bottle warms the air rapidly – and then over a few seconds the temperature slowly returns to the temperature of the walls of the bottle and the water.

If one lets go at this point, the volume increases by an equivalent amount and the temperature transiently falls below ambient. It is this fall which is expected to precipitate the water droplets.

To get the biggest temperature change one needs a large fractional change in volume. I couldn’t do the calculation of the optimum filling fraction so I did an experiment instead.

I poked a thin thermocouple through a bottle top and made it air tight using lots of epoxy resin.

Bottle

I then squeezed the bottle and measured the maximum temperature rise. The results are shown below.

Delta T versus Filling Fraction

The results indicate that for a bottle filled to around three quarters with water, the temperature change is about 6 °C.

But as you can see in the video – it takes a few seconds to reach this maximum temperature, so I suspect the instantaneous change in air temperature is much larger, and that even this small thermocouple takes a couple of seconds to warm up.

Happy Experimenting

The Met office have more cloud forming tricks here.

 

 

 

Learning about weather

March 17, 2019

I have just completed a FREE! ‘Learn About Weather‘ course, and slightly to my surprise I think I have learned some things about the weather!

Learning

Being an autodidact in the fields of Weather and Climate, I have been taught by an idiot. So ‘attending’ online courses is a genuine pleasure.

All I have to do is to listen – and re-listen – and then answer the questions. Someone else has selected the topics they feel are most important and determined the order of presentation.

Taking a course on-line allows me to expose my ignorance to no-one but myself and the course-bot. And in this low-stress environment it is possible to remember the sheer pleasure of just learning stuff.

Previously I have used the FutureLearn platform, for courses on Global Warming, Soil, and Programming in Python. These courses have been relatively non-technical and excellent introductions to subjects of which I have little knowledge. I have also used the Coursera platform for a much more thorough course on Global Warming.

So what did I learn? Well, several things about why Global Circulation Cells are the size they are, the names of the clouds, and how tornadoes start to spin. But perhaps the best bit was finally getting my head around ‘weather fronts’.

Fronts: Warm and Cold

I had never understood the terms ‘warm front’ and ‘cold front’ on weather forecasts. I had looked at the charts with the isobars and thought that somehow the presence or absence of ‘a front’ could be deduced by the shapes of the lines. I was wrong. Allow me to try to explain my new insight.

Air Mixing

Air in the atmosphere doesn’t mix like air in a room. Air in a room generally mixes quite thoroughly and quite quickly. If someone sprays perfume in one corner of the room, the perfume spreads through the air quickly.

But on a global scale, air doesn’t mix quickly. Air moves around as ‘big blobs’ and mixing takes place only where the blobs meet. These areas of mixing between air in different blobs are called ‘fronts’.

Slide1

In the ‘mixing region’ between the two blobs, the warm – generally wet – air meets the cold air and the water vapour condenses to make clouds and rain. So fronts are rain-forming regions.

Type of front

However it is unusual for two blobs of air to sit still. In general one ‘blob’ of air is ‘advancing’ and the other is ‘retreating’.

This insight was achieved just after the First World War and so the interfaces between the blobs were referred to as ‘fronts’ after the name for the interface between fighting armies. 

  • If the warm air is advancing, then the front is called a warm front, and
  • if the cold air is advancing, then the front is called a cold front.

Surprisingly cold fronts and warm fronts are quite different in character.

Warm Fronts 

When a blob of warm air advances, because it tends to be less dense than the cold air, it rises above the cold air.

Thus the mixing region extends ahead of the location on the ground where the temperature of the air will change.

The course told me the slope of the mixing region was shallow, as low as 1 in 150. So as the warm air advances, there is a region of low, rain-forming cloud that can extend for hundreds of kilometres ahead of it.
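
A rough check of that claim, assuming (my assumption) that the rain-forming cloud in the mixing region sits at around 3 km altitude:

```python
# How far ahead of the surface front does a warm front's cloud extend?
slope = 1 / 150        # rise over run of the frontal mixing region
cloud_top_km = 3.0     # assumed altitude of the rain-forming cloud layer

# Horizontal distance at which the sloping mixing region
# reaches the assumed cloud altitude
extent_km = cloud_top_km / slope
print(f"Cloud extends roughly {extent_km:.0f} km ahead of the front")
```

At a slope of 1 in 150, cloud at 3 km altitude sits about 450 km ahead of where the front meets the ground – consistent with the ‘hundreds of kilometres’ in the course.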

Slide2

So on the ground, what we experience is hours of steady rain, and then the rain stops as the temperature rises.

Cold Fronts 

When a blob of cold air advances, because it tends to be more dense than the warm air, it slides below it. But sliding under an air mass is harder than gliding above it – I think this is because of friction with the ground.

As a result there is a steep mixing region which extends a little bit ahead, and a short distance behind the location on the ground where the temperature of the air changes.

Slide3

So as the cold air advances, there is a region of intense rain just before and for a short time after.

So on the ground what we experience are stronger, but much shorter, rain events at just about the same time as the temperature falls. There generally follows some clearer air – at least for a short while.

Data

I had assumed that because of the messy nature of reality compared to theory, real weather data would look nothing like what the simple models above might lead me to expect. I was wrong!

As I was learning about warm and cold fronts last weekend (10 March 2019), by chance I looked at my weather station data and there – in a single day – was evidence for what I was learning: a warm front passing over at about 6:00 a.m. and then a cold front passing over at about 7:00 p.m.

  • You can look at the data from March 10th and zoom in using this link to Weather Underground.

This is the general overview of the air temperature, humidity, wind speed, rainfall and air pressure data. The left-hand side represents midnight on Saturday/Sunday and the right-hand side represents midnight on Sunday/Monday.

Slide4

The warm front approaches overnight and reaches Teddington at around 6:00 a.m.:

  • Notice the steady rainfall from midnight onwards, and then as the rain eases off, the temperature rises by about 3 °C within half an hour.

The cold front reaches Teddington at around 7:00 p.m.:

  • There is no rain in advance of the front, but just as the rain falls – the temperature falls by an astonishing 5 °C!

Slide5

Of course there is a lot of other stuff going on. I don’t understand how these frontal changes relate to the pressure changes and the sudden rise and fall of the winds as the fronts pass.

But I do feel I have managed to link what I learned on the course to something I have seen in the real world. And that is always a good feeling.

P.S. Here’s what the Met Office have to say about fronts…

Global Oxygen Depletion

February 4, 2019

While browsing over at the two degrees institute, I came across this figure for atmospheric oxygen concentrations measured at a station at the South Pole.

Graph 1

The graph shows the change in:

  • the ratio of oxygen to nitrogen molecules in samples of air taken at a particular date

to

  • the ratio of oxygen to nitrogen molecules in samples of air taken in the 1980’s.

The sentence above is complicated, but it can be interpreted without too many caveats as simply the change in oxygen concentration in air measured at the South Pole.

We see an annual variation – the Earth ‘breathing’ – but more worryingly we see that:

  • The amount of oxygen in the atmosphere is declining.

It’s a small effect, and will only reach a 0.1% decline – 1000 parts per million – in 2035 or so. So it won’t affect our ability to breathe. Phewww. But it is nonetheless interesting.

Averaging the data from the South pole over the years since 2010, the oxygen concentration appears to be declining at roughly 25 parts per million per year.

Why?

The reason for the decline in oxygen concentration is that we are burning carbon to make carbon dioxide…

C + O2 = CO2

…and as we burn carbon, we consume oxygen.

I wondered if I could use the measured rate of decline in oxygen concentration to estimate the rate of emission of carbon dioxide.

How much carbon is that?

First I needed to know how much oxygen there was in the atmosphere. I considered a number of ways to calculate that, but it being Sunday, I just looked it up in Wikipedia. There I learned that the atmosphere has a mass of about 5.15×10¹⁸ kg.

I also learned the molar fractional concentration of the key gases:

  • nitrogen (molecular weight 28): 78.08%
  • oxygen (molecular weight 32): 20.95%
  • argon (molecular weight 40): 0.93%

From this I estimated that the mass of 1 mole of the atmosphere was 0.02896 kg/mol. And so the mass of the atmosphere corresponded to…

5.15×10¹⁸ / 0.02896 = 1.78×10²⁰

…moles of atmosphere. This would correspond to roughly…

1.78×10²⁰ × 0.02095 = 3.73×10¹⁹

…moles of oxygen molecules. This is the number that appears to be declining by 25 parts per million per year i.e.

3.73×10¹⁹ × 0.000 025 = 9.32×10¹⁴

…moles of oxygen molecules are being consumed per year. From the chemical equation, this must correspond to exactly the same number of moles of carbon: 9.32×10¹⁴. Since 1 mole of carbon weighs 12 g, this corresponds to…

  • 1.12×10¹⁶ g of C,
  • 1.12×10¹³ kg of C
  • 1.12×10¹⁰ tonnes of C
  • 11.2 gigatonnes (Gt) of C
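
The whole chain of arithmetic above can be reproduced in a few lines of Python, using only the figures already quoted in the text:

```python
# Estimate annual carbon emissions from the observed decline
# in atmospheric oxygen (all input figures as quoted in the text).
atmosphere_mass_kg = 5.15e18    # total mass of the atmosphere
molar_mass_air = 0.02896        # mean molar mass of air, kg/mol
o2_fraction = 0.2095            # molar fraction of oxygen
decline_per_year = 25e-6        # 25 ppm per year decline in O2

moles_air = atmosphere_mass_kg / molar_mass_air   # ~1.78e20 mol
moles_o2 = moles_air * o2_fraction                # ~3.73e19 mol
moles_o2_lost = moles_o2 * decline_per_year       # ~9.3e14 mol/year

# C + O2 -> CO2: one mole of carbon burned per mole of O2 consumed
carbon_kg = moles_o2_lost * 0.012                 # carbon is 12 g/mol
carbon_gt = carbon_kg / 1e12                      # kg -> gigatonnes
print(f"Implied carbon emissions: {carbon_gt:.1f} Gt per year")  # ~11.2
```
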

Looking up the sources, I obtained the following estimate of global carbon emissions, which indicates that emissions are currently running at about 10 Gt of carbon per year.

Carbon Emissions

Analysis

So Wikipedia tells me that humanity emits roughly 10 Gt of carbon per year, but based on measurements at the South pole, we infer that 11.2 Gt of carbon per year is being emitted and consuming the concomitant amount of oxygen. Mmmmm.

First of all, we notice that these figures actually agree within roughly 10%. Which is pleasing.

  • But what is the origin of the disagreement?
  • Could it be that the data from the South Pole is not representative?

I downloaded data from the Scripps Institute for a number of sites and the graph below shows recent data from Barrow in Alaska alongside the South Pole data. These locations are roughly half a world – about 20,000 km – apart.

Graph 2

Fascinatingly, the ‘breathing’ parts of the data are out of phase! Presumably this arises from the phasing of summer and winter in the northern and southern hemispheres.

But significantly the slopes of the trend lines differ by only 1%. So global variability doesn’t seem able to explain the 10% difference between the rate of carbon burning predicted from the decline of atmospheric oxygen (11.2 Gt C per year) and the number I got off Wikipedia (10 Gt C per year).

Wikipedia’s number was obtained from the Carbon Dioxide Information and Analysis Centre (CDIAC) which bases their estimate on statistics from countries around the world based on stated oil, gas and coal consumption.

My guess is that there is considerable uncertainty – on the order of a few percent – in both the CDIAC estimate and the Scripps Institute estimates. So agreement at the level of about 10% is actually – in the context of a blog article – acceptable.

Conclusions

My conclusion is that – as they say so clearly over at the two degrees project – we are in deep trouble. Oxygen depletion is actually just an interesting diversion.

The most troubling graph they present shows

  • the change in CO2 concentration over the last 800,000  years, shown against the left-hand axis,

alongside

  • the estimated change in Earth’s temperature over the last 800,000  years, shown  along the right-hand axis.

The correlation between the two quantities is staggering, and the conclusion is terrifying.

We’re cooked…

 

