Archive for the ‘Climate Change’ Category

The MyVaillant App: a review

February 6, 2023

Friends, regular readers will know that I love my heat pump, a Vaillant Arotherm plus model with a nominal maximum heating power of 5 kW.

But regular readers will also know that I have been very disappointed with the software and controls for the heat pump. Back in October 2022 I wrote:

Vaillant Arotherm Plus Heat Pump: The good, the bad and the ugly.

In that article the “good” referred to the mechanical and electrical operation of the heat pump; the “bad” referred to the mysterious absence of a user manual; and the “ugly” referred to the Vaillant ‘sensoApp’ used to control a few of the functions.

Recently Vaillant have released a new MyVaillant app to replace the sensoApp, and I was eager to try it out. Could it be the elegant swan that grew from the ugly duckling of the sensoApp?

In case you don’t have time to read this finely-crafted article, here is a summary of my findings: the MyVaillant app is a big improvement, but the operational data it provides is – as best I can tell – still just as inaccurate as it was previously.

Overview

After logging on initially I was mildly impressed, but the next time I opened the app, I was asked to log on again: select my country, enter my e-mail and password. This has happened several times since, and I have been told that Vaillant are working on it. I won’t mention it again, but it is a sign of poor testing.

After eventually logging on, one is faced with a pleasingly simple ‘Home Screen’. A glowing green circle displays the current set point temperature with the actual temperature below it – these numbers change in increments of 0.5 °C. The glowing circle changes colour from time to time, but I have no idea why!

Plus and minus buttons allow the set point to be easily adjusted. Clicking on these brings up a dialogue box which asks how long the change should apply for. After the chosen period – the default is 3 hours – the set point returns to its previous setting or programmed value.

Click on image for a larger version. The Home Screen of the MyVaillant App.

The Home Screen contains buttons which link to four more important screens.

Click on image for a larger version. The Home Screen of the MyVaillant App and the screens to which it links directly.

These screens (see above) allow access to the basic controls. It’s nice to see that ‘Activate Hot Water Boost’ – the most common reason I need the app – is just one touch away from the Home Screen.

Perhaps the most important screens are those for planning the weekly cycles for (a) heating and (b) domestic hot water. These are – in my opinion – textbook good design.

Click on image for a larger version. The screens for adding an additional regular period of domestic hot water heating. Notice that one day’s settings can be copied and pasted onto another day.

So the app is well-structured, pleasant to look at and easy to use. A big improvement.

System Performance

Even bigger improvements have been made to the screens showing the system performance. An example screen is shown below for the week beginning 23 January 2023.

Click on image for a larger version. Example page showing energy information for the heat pump during the week beginning 23 January 2023. The screen is shown on the left; on the right it is annotated to show how the various quantities relate to one another.

The display page shows:

  • A: The electrical energy used to operate the heat pump – in this case 125.9 kWh
  • B: The thermal energy captured from the air – in this case 230.1 kWh
  • C: The thermal energy delivered to heat the house – in this case 332.1 kWh
  • D: The thermal energy delivered to heat hot water – in this case 23.9 kWh

From these quantities the app calculates the Coefficient of Performance (COP), which it calls ‘Energy Efficiency’.
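To make the calculation explicit, here is a minimal Python sketch of the arithmetic using the figures above. The variable names are mine, not Vaillant’s.

```python
# Energy quantities read from the MyVaillant app for the week beginning
# 23 January 2023 (all in kWh)
electricity_in = 125.9   # A: electrical energy used to operate the heat pump
heat_from_air = 230.1    # B: thermal energy captured from the air
heat_to_house = 332.1    # C: thermal energy delivered to heat the house
heat_to_water = 23.9     # D: thermal energy delivered to heat hot water

# Energy balance: heat delivered (C + D) should equal energy in (A + B)
assert abs((heat_to_house + heat_to_water) - (electricity_in + heat_from_air)) < 0.1

# The app's 'Energy Efficiency' is the COP: heat delivered per unit of electricity
cop = (heat_to_house + heat_to_water) / electricity_in
print(f"COP = {cop:.2f}")  # ~2.83 for this week
```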

If one touches any of the small graphs, a more detailed version is shown.

Click on image for a larger version. Clicking on the small energy graphs shows more detailed versions.

This display structure has been well thought through and is well executed. I wish that the data could be downloaded, but the presentation is basically excellent.

However, sadly, the performance data shown are not accurate.

Accuracy

The MyVaillant app warns people that it is not accurate. But I think that despite this warning, in the absence of any other information, most people will take these figures at face value.

Click on image for a larger version. This warning screen appears before one sees the energy information pages. I recommend that one does not click the box asking not to show the message again. This should remind one that the data can be significantly in error.

Please note: Energy consumptions, energy yields and efficiencies are extrapolated based on various parameters. The actual figures may differ substantially in some cases.

Fortunately I have a monitoring system which measures the electrical consumption by the heat pump and the heat output of the heat pump. This allows a direct comparison between the app’s estimates and a measurement system which is certified to be suitable for billing.

So for the week illustrated in the figures above, the actual figures are shown in the table below.

Click on image for a larger version. Table showing the MyVaillant App estimates for electricity consumed and heat produced together with the measurements of these quantities by billing-grade instruments.

The MyVaillant estimates are seriously in error.

  • The estimate of the electricity consumed is in error by 9.3%.
  • The estimate of the heat produced is in error by 22%.

Consequently, the estimate of the COP is seriously in error.

For the week in question, the average temperature was 4.0 °C and the minimum temperature was -5.1 °C, a cold week by London standards. A COP of 3.3 in such a week is quite respectable. A COP of 2.8 is not so great, and might lead someone to search for system improvements which would be illusory.

Click on image for a larger version. Temperatures in my back garden during the week in question.

The real problem with these errors is that the erroneous estimates are completely plausible.

Summary 

Assuming that Vaillant sort out the problem of logging on to their app, this app represents a really significant improvement over their previous offering. To all the engineers who have worked on this I would like to say: thank you.

But the inaccuracy of the reported quantities is significant and I feel that if Vaillant cannot improve these estimates, then they should be indelibly marked as ‘indicative’.

Carbon Dioxide Accounting: Why I hate it.

January 7, 2023

Friends, the turn of the year is the time when Carbonistas such as myself look at their carbon dioxide accounts. Like all accounting it is tedious but sort of important.

However carbon dioxide accounting depresses me more than regular accounting because I can hardly believe any of the numbers!

Allow me to explain…

The Big Picture

Click on image for a larger version. The red line on the graph shows estimated emissions from my household if I had not undertaken any refurbishment. The data are calculated month by month out to 2040. The green line shows actual estimated emissions from my household. The black dotted-line shows the additional effect of paying Climeworks to remove CO2 from the atmosphere on my behalf.

The aim of my activities and expenditure over the last three years has been to reduce ongoing carbon dioxide emissions from all aspects of my life, but targeting especially my home.

The graph above shows how I expect household emissions to accumulate based on various assumptions. Notice the scales: the horizontal scale extends out to 2040, my targeted date of death, and the vertical scale is in tonnes of carbon dioxide. Tonnes!

  • The red line shows how I would expect emissions to accumulate if I had made no alterations to the house.
  • The green line shows how I expect emissions to accumulate based on the current plan. This is based on the amount of electricity I draw from the grid.
  • The dotted black line accounts for the activities of Climeworks who have promised to permanently remove 50 kg of CO2/month in my name. This line is dotted because I don’t personally have any evidence that Climeworks are actually removing CO2 from the atmosphere.

The net effect of my efforts will hopefully amount, by 2040, to around 78 tonnes of CO2 emissions which do not take place. But in honesty, I am not very sure about these numbers.

Assumptions, Assumptions, Assumptions

Working out the data for this graph involves estimating the amount of electricity and gas that the household has consumed (not so hard) – and will consume in future (a bit harder, but still not crazily difficult).

However it also involves associating an amount of carbon dioxide with each unit of gas or electricity used – the so-called carbon intensity (CI), measured in kilograms of CO2 per kilowatt hour (kgCO2/kWh) of gas or electricity. And I genuinely do not know what numbers to use for these CIs.

Allow me to explain my difficulty.

Assumptions for gas

For gas, a hypothetical 100% efficient boiler would produce around 0.18 kgCO2/kWh.

But it also takes energy – and thus emissions – to extract and deliver the gas to my boiler, and these emissions should also be associated with my consumption.

However, allocating these ‘upstream’ emissions is not straightforward. It will differ depending on the source of the gas, e.g. from the North Sea (~+0.013 kgCO2/kWh) or liquefied natural gas shipped from (say) the US (~+0.035 kgCO2/kWh). And it will also vary with the distance the gas is pumped through the gas distribution network.

And then there is the giant smelly elephant in the room: leaks.

The gas network leaks. At every point from gas platforms to our homes, leaks are significant: probably around 1% of the gas we consume leaks, and some gas even passes through burners without combusting.

When methane leaks it enters the atmosphere, staying for around 10 years before reacting to form CO2 and H2O. And during those 10 years or so, it warms the atmosphere much more intensely than CO2: averaged over 20 years, methane is around 80 times more powerful a greenhouse gas than CO2.

So a leak of 1% anywhere from the gas well to our homes increases the carbon intensity associated with methane by approximately 1% x 80 x 0.18 = 0.144 kgCO2/kWh. Combined with upstream emissions this practically doubles the carbon intensity of burning methane compared with the value used by most web sites.
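Here is that arithmetic as a short Python sketch. The inputs are the illustrative figures quoted above – a 1% leak rate and a 20-year warming potential of 80 – not authoritative data.

```python
# Building up an effective carbon intensity for burning gas (kgCO2/kWh)
combustion = 0.18           # hypothetical 100% efficient boiler
upstream_north_sea = 0.013  # extraction and delivery, North Sea gas
upstream_lng = 0.035        # extraction and delivery, US LNG

leak_fraction = 0.01        # ~1% of gas leaks between the well and the home
gwp20_methane = 80          # methane vs CO2, averaged over 20 years

# A 1% leak adds roughly 1% x 80 x 0.18 of CO2-equivalent warming per kWh burned
leak_penalty = leak_fraction * gwp20_methane * combustion  # = 0.144 kgCO2/kWh

for source, upstream in [("North Sea", upstream_north_sea), ("US LNG", upstream_lng)]:
    without_leaks = combustion + upstream
    with_leaks = without_leaks + leak_penalty
    print(f"{source}: {without_leaks:.3f} kgCO2/kWh ignoring leaks, "
          f"{with_leaks:.3f} including them")
# North Sea gas: 0.193 ignoring leaks, 0.337 including them - nearly double
```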

The only way to really know the amount of CO2 emitted in association with gas use is to use no gas at all: anything multiplied by zero is zero.

Assumptions for electricity

As difficult as it is to truly know the appropriate carbon intensity (CI) to associate with gas consumption, it is much more difficult to know the appropriate CI to associate with electricity consumption. This is because electricity is generated from several different sources, each with its own characteristic CI.

For example, as I type this, this web site tells me that the carbon intensity of the electricity I am using is 0.065 kgCO2/kWh, but this web site tells me that the carbon intensity of the electricity I am using is 0.101 kgCO2/kWh. Which should I believe? I just don’t know.

Both figures will change depending on the composition of generating technologies, but they have (I suppose) made different assumptions about how to account for some emissions. I have previously written to the web sites to ask but received no reply.

Click on image for larger version. Data from MyGridGB and National Grid on carbon intensity. The two sites give answers which differ by 0.036 kgCO2/kWh – a discrepancy of ~30%.

But what if I want to draw some extra electricity? If I switch on a tumble dryer, this extra demand must be met by a source which can be switched on to meet that demand, and in practice, this is always gas-fired generation, which is nominally assigned a CI of 0.45 kgCO2/kWh.

So I have to choose whether to allocate an average CI (0.101 or 0.065 kgCO2/kWh) or a marginal CI (0.45 kgCO2/kWh) to my consumption. How do I decide what is my average consumption and what is marginal? I genuinely do not know.
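The dilemma is easy to state in code: the same unit of consumption gives very different answers depending on the accounting basis. The tumble-dryer load below is an invented illustrative figure.

```python
# One load of tumble-drying, three defensible answers (CI figures from above)
tumble_dryer_kwh = 2.0  # illustrative consumption, not a measurement

for basis, ci in [("average CI (National Grid)", 0.101),
                  ("average CI (MyGridGB)", 0.065),
                  ("marginal CI (gas-fired)", 0.450)]:
    print(f"{basis}: {tumble_dryer_kwh * ci:.2f} kgCO2")
# The answers span a factor of ~7 for exactly the same consumption
```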

And additionally, the same elephant (methane leaks) that was in the room for gas consumption, is still in the room for electricity derived from gas-fired power stations. Accounting for leaks, the contribution to the average CI of gas-fired generation could practically double from 0.45 kgCO2/kWh to 0.81 kgCO2/kWh which is almost as bad as coal-fired electricity generation.

And there are similar problems accounting for electricity exported from – say – solar panels. In principle, each extra kWh exported displaces a kWh that would have been generated by gas-fired generation. And so exports of solar electricity are avoiding emissions of CO2 at the marginal rate for gas-fired emissions (0.45 kgCO2/kWh). But should this also include the effect of methane leaks avoided?

And since the CI of grid electricity is changing all the time, should I do my accounting in (say) half-hour periods? Or should I use day or night averages? Or weekly, monthly or yearly averages?

Click on image for a larger version. Graph showing the variation in CI with time of day: using electricity at night is generally a bit greener because the fraction of electricity generated by wind and nuclear power is greater. Data from the Carbon Intensity web site.

And some argue that the CI of grid electricity varies from region to region! They argue that in regions where there is lots of renewable generation the ‘local’ CI is low. But this ignores the fact that it is essentially a single grid, and that if these regions were isolated, the grid would not be able to function.

Click on image for a larger version. Map showing regional sub grids together with an indication (by colour) of the ‘local’ carbon intensity. Data from the Carbon Intensity web site.

So what do I do?

Friends, this is why I hate carbon accounting: just changing the accounting basis can apparently change the emissions associated with electricity or gas consumption, depending on where the consumption takes place and how many leaks are associated with it. Part of this variation is real, and part arises from conventional practice, which ignores critical issues like methane leaks.

So one can find oneself making spreadsheets of enormous complexity in search of an accounting accuracy that is ultimately unattainable.

So in the face of all this complexity and ambiguity I assign the same carbon intensity to gas and electricity (imports and exports) of 0.230 kgCO2/kWh.

  • For gas this is a bit higher than estimates that add upstream emissions, but much lower than estimates that account for methane leaks.
  • For electricity this is roughly the average CI for the years 2019 to 2022 as specified on the MyGridGB web site. If this figure changes significantly in 2023 I will update it.

Click on image for a larger version. Map showing carbon intensity averaged over one year, illustrating the systematic reduction in CI. Data from the MyGridGB web site.

The graph at the head of the article shows progress so far and how I anticipate things unfolding over the years. In calculating that graph I disregarded…

  • Exports of solar electricity which could be considered to be avoiding emissions by displacing gas-fired generation.
  • The share of a wind farm that I bought and which should start generating from November 2023. Again, this could be considered to be avoiding emissions by displacing gas-fired generation.

If I add these into my projections (CI = 0.23 kgCO2/kWh), then the outlook looks better. However, the uncertainties in all the numbers here are so great that I just don’t know if any of it is correct. That’s why all the lines are dotted.

Overall, I know that household gas consumption is zero and therefore so are emissions, no matter what the CI. And this year I expect that we will be more or less off-grid i.e. taking no electricity from the grid – for roughly 6 months. And so I know emissions during that period will be zero. In short, just minimising grid consumption is probably the best way to ensure that associated carbon dioxide emissions are low.

Click on image for a larger version. The red line on the graph shows estimated emissions from my household if I had not undertaken any refurbishment. The data are calculated month by month out to 2040. The green dotted line shows estimated emissions from my household accounting for electricity exported in the summer as ‘negative emissions’ i.e. I have avoided someone else emitting CO2. The black dotted-line shows the additional effect of paying Climeworks to remove CO2 from the atmosphere on my behalf. The blue dotted-line shows the ‘negative emissions’ effect of shares in a wind farm due to begin generating in November 2023.

The Great Carbon Dioxide Accountant in the Sky

Friends, on the sacred slopes of Mauna Loa in Hawaii, there is a carbon dioxide accountant far greater than I.

Click on image for larger version. Mauna Loa CO2 observatory: the location of the great Carbon Dioxide Accountant in the sky.

Patiently this accountant has been monitoring the concentration of carbon dioxide in the atmosphere since 1959, the year of my birth.

This accountant:

  • Does not care about which value of carbon intensity I use in my calculations.
  • Does not care about whether I used the correct estimate for embodied carbon in my solar panels or triple-glazing.
  • Cannot be sweet-talked with promises of future emissions reductions.

They just measure the concentration of carbon dioxide in the Earth’s atmosphere.

When the volcano is not erupting, this accountant publishes their results daily. And this global accountant shows that whatever we are doing is just not enough.

Even if this curve stabilised at its current value of around 420 ppm, the Earth would not cool. But this curve is not stabilising – it is still rising – and it is our actions that are causing this – and only our actions can stop it.

Click on image for larger version. Black Curve: Monthly average atmospheric carbon dioxide concentration versus time at Mauna Loa Observatory, Hawaii (20 °N, 156°W). Red Curve: Fossil fuel trend of a fixed fraction (57%) of the cumulative industrial emissions of CO2 from fossil fuel combustion and cement production. This fraction was calculated from a least squares fit of the fossil fuel trend to the observation record. Data from Scripps CO2 Program.


Gas and Gaslighting

January 1, 2023

Click on image for a larger version. BBC News stories detailing gas explosions this autumn: See end of article for links.

Friends, welcome to 2023.

I would have liked to start the year talking about something positive, but I can’t!

Over the Christmas break it struck me just how astonishing it is that we still allow homes to be heated by burning methane gas.

And we even build new homes incorporating this deadly and disgusting technology.

In case you didn’t know:

  • Over 100 people a year in the UK die from carbon monoxide poisoning, mainly arising from poorly-maintained gas-burning equipment.

Click on image for a larger version. Graph showing data from the Office for National Statistics on the number of people killed each year from carbon monoxide poisoning (link).

  • Even when gas apparatus functions correctly, gas cookers emit toxic fumes into the homes of people who cook with gas. It is likely that the highest exposure to mixed oxides of nitrogen (NOx) that you will experience anywhere in the UK is not by a roadside, but in a kitchen.

Click on image for a larger version. While cooking with gas in this US household, NO2 levels rose to almost 300 ppb. This figure is modified from the linked article.

  • And on top of it all, every year gas causes more than 300 explosions in the UK, killing or maiming around 100 people each year.

Click on image for a larger version. There are over 300 fires involving gas and an explosion every year. About 100 of these incidents result in a casualty or a fatality (Data Source).

  • And on top of it all again, burning gas emits tonnes of carbon dioxide, a gas which is destabilising the climate on which we depend.

So how is it that we tolerate such a technology? Why are we not outraged?

Gaslighting

Friends, we are being ‘gaslighted‘ by the Gas Industry.

Gaslighting – as Wikipedia puts it – is a term that:

…”may also be used to describe a person (a “gaslighter”) who presents a false narrative to another group or person, thereby leading them to doubt their perceptions and become misled, disoriented or distressed. Often this is for the gaslighter’s own benefit.”

The gas industry – and the media it influences – suggest that the deaths and appalling climate impacts of burning gas are in some way ‘normal’ and ‘acceptable’.

Because we are familiar with gas, they propagate a false narrative that ‘burning gas’ is somehow ‘safe’, ‘natural’, ‘warming’ and ‘friendly’.

To understand how shocking and deceitful this really is, try the following exercises:

  • Imagine that Wind Turbines killed more than 100 people a year.
  • Imagine that Heat Pumps killed more than 100 people a year.
  • Imagine that Solar Panels killed more than 100 people a year.

Do you think there would be media outrage? Of course there would! But with gas – these consequences are literally just ignored.

The Reality

The reality is this: gas is a filthy, polluting technology. Burning gas damages our climate and our health, and kills over 100 people a year in the UK alone, as well as causing over 300 explosive fires per year.

I urge you not to be misled into thinking that gas is anything other than a toxic mistake. If you can, I urge you to eliminate gas appliances from your life.

BBC News Story Links


Setback? Should you lower heating overnight?

December 19, 2022

Friends, it has become a fact of my conversational life that people ask me questions about heating their homes. And one of the most common questions I am asked is whether or not people should turn down their heating overnight. This is referred to by heating engineers as overnight ‘setback’.

Up until today I have only had a generic and unsatisfactory answer:

“Well, it depends…”.

But last night I finally saw how the question could be answered, and I wrote a spreadsheet to test out my idea. And now I can answer more fully. My answer will now be:

“Well, it depends, but it’s marginal, either way”.

Before I get into the gory details, let me just outline that this article is about working out which strategy uses less energy. But in contrast to this dull – if understandable – utilitarian perspective, and because any gains are marginal, I would urge you to keep doing whatever makes your home a place of joy.

This is a long and technical article, and follows on from a previous dull post in which I estimated the heat capacity of my house. Sorry. If you are looking for something lighter, please allow me to recommend this article about candles! But if you really want to know the details, read on.

The Question

The question people are really asking is this:

  • If I set back the temperature from (say) 20 °C during the day to (say) 16 °C at night I will save energy.
  • But then if I want the house to be (say) 20 °C when I get up, I need to apply additional or boost heating for (say) a couple of hours before I get up and this requires extra energy.
  • On balance, will applying this ‘setback’ save energy? Use extra energy? Or will it make no difference?

The Answer

The answer is as follows:

  • If the heating is 100% efficient, then a setback period will always save energy. It’s generally not a big saving, but it is always a saving.
  • If the efficiency of heating varies with power, then a setback period may save energy, or may not.

Specifically

  • Direct electrical heating with fan heaters, storage heaters or infrared heaters.
    • A setback period will always save energy.
    • The longer the setback period and the lower the temperature, the greater the saving.
  • Gas heating with a gas boiler
    • The efficiency of a modern condensing gas boiler is generally in the range 85% ± 5%.
    • If the efficiency falls with increased power – which can happen – then the small saving in energy can be offset by the decreased efficiency of the boiler at high power.
  • Heat Pump
    • The efficiency of an Air Source Heat Pump (ASHP) is generally around 300%.
    • If the efficiency falls with increased power – which can happen – then the small saving in energy can be offset by the decreased efficiency of the ASHP at high power.

Spreadsheet Calculation

To calculate the energy savings I wrote a spreadsheet that simulates the way energy flows into and out of a house over a 24 hour period.

I modelled 4 separate ‘modes’ of heating the house.

  • The steady state: is where the temperature is stable and the input heating power immediately flows out of the house.
  • Off: is the condition on entering the setback period where there is initially no heating and the temperature falls as the house loses heat ‘naturally’. The rate at which the house cools depends upon the heat transfer coefficient and the heat capacity of the house.
  • Setback: is where the temperature is stable at a lower temperature than in the steady state and the input heating power immediately flows out of the house.
  • Boost: is where the heating power is increased to rapidly raise the temperature of the house. The rapidity with which the house warms depends upon the boost power, the heat transfer coefficient, and the heat capacity of the house.

Click on image for larger version. Illustration of the temperature variation during a setback period. The dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

I then divided the day into 0.1 hour periods and calculated (a) the heating energy and (b) the fuel consumed in each of these periods. I then summed the heating energy and fuel consumed, taking account of the possibility that the efficiency might be different in each phase.

Click on image for larger version. Illustration of the cumulative use of heat energy through a setback period. The dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

Unfortunately there are a large number of variables that can be – well – varied: specifically

  • Internal Steady State Temperature
  • External Temperature
  • Setback Temperature
  • Length of setback period
  • Length of boost period/Boost power
  • Heat Transfer Coefficient/Thermal Resistance
  • House Heat Capacity/Time constant
  • Heating Efficiency in each stage

If you want to play with the spreadsheet you can download the Excel™ spreadsheet here:

I must warn you it is an experimental spreadsheet and I can give no guarantees that it is error-free. In fact I can almost guarantee it is error-strewn!
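For those who prefer code to spreadsheets, here is a minimal Python sketch of the same simulation logic. It is a slight variant of the spreadsheet – the boost runs until the set point is reached rather than for a fixed period – and the default parameters are loosely based on my own house (HTC ~0.165 kW/°C, heat capacity ~3.5 kWh/°C). Like the spreadsheet, I can give no guarantee that it is error-free.

```python
DT = 0.1  # simulation time step (hours)

def simulate(boost_kw, eff_steady=0.90, eff_boost=0.90,
             htc=0.165, cap=3.5, t_set=20.0, t_back=16.0, t_out=0.0,
             setback_end=7.0):
    """Fuel used (kWh) over one day that starts at midnight in setback."""
    temp, fuel = t_set, 0.0
    for step in range(int(24 / DT)):
        hour = step * DT
        if hour < setback_end:
            # 'Off' while the house is warmer than the setback temperature,
            # then 'Setback': just enough power to hold the lower temperature
            power, eff = ((0.0, eff_steady) if temp > t_back
                          else (htc * (t_back - t_out), eff_steady))
        elif temp < t_set:
            power, eff = boost_kw, eff_boost  # 'Boost' back towards the set point
        else:
            power, eff = htc * (t_set - t_out), eff_steady  # 'Steady state'
        # Temperature change = (heat in - heat out) / heat capacity
        temp += (power - htc * (temp - t_out)) * DT / cap
        fuel += power * DT / eff
    return fuel

baseline = simulate(boost_kw=0.0, setback_end=0.0)  # held at 20 °C all day
print(f"Saving with 6 kW boost: {1 - simulate(6.0) / baseline:.1%}")
print(f"Saving if the boost is 10% less efficient: "
      f"{1 - simulate(6.0, eff_boost=0.80) / baseline:.1%}")
```

With equal efficiencies the setback shows a modest saving; degrading only the boost efficiency shrinks that saving towards zero, which is exactly the nuance discussed below.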

It’s all about the boost!

The reason that the balance of benefits in a ‘setback’ is nuanced is the use of ‘boost’ power.

Without boost power, or with only a low power boost, the internal temperature will only return to its set temperature slowly.

For example, in the illustration below, the boost power is 4 kW while the steady state power is 3 kW. Even with this extra 1 kW, the boost takes several hours to bring the temperature back to the set point.

Click on image for larger version. When the boost power is low, it takes a long time to return the house to its set temperature. The model settings are shown at the top. The upper graph shows temperature versus time and the lower graph shows the cumulative energy used during the day. On each of the graphs the dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

Obviously this isn’t satisfactory, because the internal temperature remains well below the set point for hours after it should have stabilised. However the savings (12%) compared with maintaining the steady state continuously are substantial!

If we increase the boost power to 6 kW, then the internal temperature returns relatively rapidly to the set point, and the savings are still a reasonable 9% compared with maintaining the steady state.

Click on image for larger version. Increasing the boost power reduces the time to return the house to its set temperature. The model settings are shown at the top. The upper graph shows temperature versus time and the lower graph shows the cumulative energy used during the day. On each of the graphs the dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

However, if the heating in the boost phase is less efficient than the heating in the steady state, then these energy savings can easily disappear. For example, in the situation below, the boost heating efficiency is reduced from 90% to 80%. This has exactly the same thermal behaviour as the example above, but would now use 3% more fuel.

Click on image for larger version. If the boost heating is 10% less efficient than the steady state heating, then the energy savings can be wiped out. The model settings are shown at the top. The upper graph shows temperature versus time and the lower graph shows the cumulative energy used during the day. On each of the graphs the dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

The examples above might be appropriate to the behaviour of gas boilers which sometimes condense water vapour in their exhaust fumes less effectively when operating at higher power.

But exactly the same principle could also apply with a heat pump. To pump higher thermal power it might be necessary to raise the temperature of the water flowing in the radiators.

The example below has the same power settings as the examples above, but now assumes an efficiency of 300% (i.e. COP = 3) for all heating phases. You can see that the overall energy consumed is roughly 3 times lower than in the examples above.

Click on image for larger version. If the heating efficiency is the same for all heating phases, then the energy savings are the same whether that efficiency is 90% or 300%. The model settings are shown at the top. The upper graph shows temperature versus time and the lower graph shows the cumulative energy used during the day. On each of the graphs the dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

But if the efficiency in the boost phase falls to 250% from 300%, then once again, the savings are reversed and the setback strategy actually costs more than the steady state strategy.

Click on image for larger version. If the heating efficiency in the boost phase is 250% rather than 300%, then the setback strategy costs more energy. The model settings are shown at the top. The upper graph shows temperature versus time and the lower graph shows the cumulative energy used during the day. On each of the graphs the dotted orange line shows the data for the situation where the house is maintained at the same temperature 24/7 and the red line shows data for the modelled ‘setback’.

Conclusions

Developing this simulation has helped me understand some of the basic physics behind the use of setback strategies. And so my advice has developed from “Well, it depends…” to “Well, it depends, but it’s marginal, either way”.

The critical factor is the relative efficiency of heating at higher power (boost) compared with heating at lower power (steady state). If the boost heating is even marginally less efficient than the steady state heating then any energy savings are reduced, and may even be reversed.

This raises the question of whether there is any way to know which situation applies in a particular household. And my first thought is, “No”. Without detailed measurements, there is no way to tell!


Estimating the heat capacity of my house

December 19, 2022

Friends, the spell of cold weather at the start of December 2022 has led to me breathlessly examining data on the thermal performance of the heat pump and the house.

During this period, outside temperatures fell as low as -5 °C and average daily temperatures were below 0 °C. To check how well the internal temperature was being held constant, I studied measurements of internal temperature taken every 2 minutes. The data were pretty stable, only rarely falling outside the bounds of 19.5 °C ± 0.5 °C.

But looking in detail, I noticed a curious pattern.

Click on image for a larger version. Two graphs from the period 6th to 18th December 2022. The upper graph shows the air temperature in the middle of the house. At around 01:30 each night the temperature fell sharply. The lower graph shows the rate of change of the air temperature versus time (°C/hour). From this graph it is clear that the rate at which the temperature fell was approximately -0.95 °C/hour.

The upper graph shows sharp falls in temperature at 01:30 each night. These were caused by the heat pump switching to its hot water heating cycle. Prior to this, the heat flowing into the house from the heat pump was more-or-less balanced by the heat flowing out. But when the heat pump switches to heating the domestic hot water, there was no heating from the heat pump and the internal temperature fell.

The lower graph shows the rate of change of the air temperature (°C/hour) versus time over the same period. From this graph it is clear that the rate at which the air temperature fell during the domestic hot water cycles was approximately 0.95 °C/hour.

With a little mathematical analysis (which you can read here if you care) this cooling rate can be combined with knowledge of the heat transfer coefficient (which I estimated a couple of weeks ago) to give estimates of (a) the time constant for the house to cool and (b) the effective heat capacity of the house.

Analysis: Time Constant 

The time constant for the house is the time taken for the temperature difference between the inside and outside of the house to fall to ~37% of its initial value after the heating is removed.

The time constant is estimated as (the initial temperature difference) divided by (the initial cooling rate). In this case the initial temperature difference was typically ~20 °C and the initial cooling rate was 0.95 °C/hour, so the time constant of the house is roughly 21 hours. Sometimes it’s useful to express this in seconds: i.e. 21 x 3,600 = 75,600 seconds.

This suggests that if we switched off all the heating when the house was at 20 °C and the external temperature was 0 °C, the house would cool to roughly 7.4 °C after 21 hours. Intuitively this seems right, but for obvious reasons, I don’t want to actually do this experiment!

Note that this time constant is a characteristic of the house and does not vary with internal or external temperature.

Analysis: Thermal Resistance  

A couple of weeks ago I posted an analysis of the heating power required to heat our house as the ‘temperature demand’  increased as the external temperature fell. The summary graph is shown below.

Click on graph for a larger version. Graph of average heating power (in kW) versus temperature demand (°C) for the first 10 days of December 2022.

From this I concluded that Heat Transfer Coefficient (HTC) for the house was around 165 W/°C.

The inverse of the HTC is known as the thermal resistance that connects the inside of the house to the external environment. So the thermal resistance for the house is ~ 1/165 = 0.00606 °C/W.

Analysis: Heat Capacity  

A general feature of simple thermal analyses is that the time constant, thermal resistance and heat capacity are connected by the formula:

Time constant = Thermal resistance x Heat Capacity

Since we have estimates for the time constant (75,600 s) and the thermal resistance (0.00606 °C/W), we can thus estimate the heat capacity of the house as 12,474,000 joules per °C.

This extremely large number is difficult to comprehend, but if we change to units more appropriate for building physics we can express the heat capacity as 3.5 kWh/°C. In other words, if the house were perfectly insulated, it would take 3.5 kWh of heat to raise its temperature by 1 °C.
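The whole chain of arithmetic fits in a few lines of Python, using the figures above:

```python
# Chaining together the cooling-rate, HTC and heat-capacity estimates
cooling_rate = 0.95  # °C/hour, measured during the hot water cycles
delta_t = 20.0       # °C, typical inside-outside temperature difference
htc = 165.0          # W/°C, from the earlier heating-power analysis

time_constant_s = (delta_t / cooling_rate) * 3600  # ~75,600 s (~21 hours)
thermal_resistance = 1 / htc                       # ~0.00606 °C/W

# Time constant = thermal resistance x heat capacity, rearranged:
heat_capacity_j = time_constant_s / thermal_resistance  # ~12.5 million J/°C
heat_capacity_kwh = heat_capacity_j / 3.6e6             # ~3.5 kWh/°C
print(f"Heat capacity ~ {heat_capacity_kwh:.1f} kWh/°C")
```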

We can check whether the number makes sense by noticing that the main mass of the house is the bricks from which it is built. A single brick weighs ~3 kg and has a heat capacity of ~2,400 J/°C. So thermally it looks like my house consists of 12,474,000/2,400 ~ 5,200 bricks.

However this estimate is too small. Even considering just the 133 square metres of external walls, if these have the equivalent of 120 bricks per square metre that would come to ~16,000 bricks.

So I think this heat capacity estimate applies just to the first internal parts of the house to cool – all the surfaces in contact with the air. In other words, this is the effective heat capacity for cooling just a degree or two below ambient.

Why did I bother with this? The ‘Setback’ Problem

Friends, sometimes I go upstairs and forget why I went. And sometimes I start analysing things and can’t remember why I started! Fortunately, in this case, I had a really good reason for wanting to know the effective heat capacity of my house.

When I analysed the heat flows previously, I had to assume that the temperature of the house was stable i.e. that there was a balance between the heat flowing in and the heat flowing out. As long as the temperature of the fabric of the house is stable, it is neither storing nor releasing heat.

However this isn’t enough if we want to understand some very common problems in the thermal physics of houses, such as ‘the setback problem’: the question of whether it’s smart to reduce the temperature of a dwelling (say) overnight and then to re-heat it in the morning. To answer this question we need to know the rate at which a house cools down (its time constant), which is equivalent to knowing its heat capacity.

And that is why I have done this prolonged and tedious analysis. The next article will be an analysis of ‘The Setback Problem’. And it will be much more exciting!

Fusion Energy breakthrough? Not so much.

December 15, 2022

Click on Image for a larger version. The ‘breakthrough’ was the front page of the BBC News Website. Apparently this was the most important story in the world.

Friends, I find myself lost for words. Why? Because I am apoplectic with disappointment at the breathtakingly bad reporting about the recent ‘fusion breakthrough’ in the US.

Every media organisation whose output I have read has simply regurgitated the line they have been fed by the press office of Lawrence Livermore National Laboratory (LLNL). The BBC made this their headline story with the byline:

The technology is a potential source of near-limitless clean power….

In this article I will outline what actually happened in this ‘breakthrough’, and then explain why this technology will never ever, ever, ever, ever, ever, ever, ever, ever be useful as a power source.

At the end of the article is a list of resources I consulted. Links below to the panel discussion are to timed locations within the video.

The Experiment

The experiment comprised firing a laser, split into 192 beams, at a tiny, hollow, diamond sphere suspended inside an open-ended cylindrical metal capsule. Both the sphere and the cylinder were manufactured to extraordinary specifications in terms of their dimensions and surface finish. These extreme specifications are necessary to ensure that the energy is reflected from the inner surface of the cylinder onto the sphere uniformly.

Click on Image for a larger version. Left: the cylinder with the diamond sphere at its centre. Right: illustration of the way in which ultraviolet lasers illuminate the inner surface of the cylinder, which then bathes the diamond sphere in X-rays.

At the panel discussion which followed the press conference, Mark Herman, the LLNL Director for Weapons Physics and Design refused to assign a monetary value to the target, but reasonably it must be on the order of a million dollars per target.

This sphere and cylinder were placed with nanometre precision at the centre of a chamber, and cooled to cryogenic temperatures (less than 20 K). At these low temperatures, the inside surface of the sphere was coated with roughly 60 micrograms (a layer ~0.03 mm thick) of a solid mixture of deuterium and tritium.

When the ultraviolet laser blast hit the metal cylinder, it vaporised and irradiated the sphere with X-rays. The pressure of this blast was so great and so uniform that it rapidly compressed the 4 mm diameter diamond sphere to around 0.1 mm ( from a “basketball to a pea“), accompanied by extreme heating to temperatures in excess of 100 million degrees Celsius.

This extreme temperature and pressure were sufficient to transiently cause the nuclei of deuterium and tritium to collide and fuse, with each fusion releasing 17 MeV (million electron volts) of energy. In more familiar units this amounts to 2.8 x 10^-12 joules per fusion event.

The energy in the laser pulse was estimated to be 2.05 MJ (million joules). I’m afraid I don’t know how that was measured. The energy of the resulting explosion was estimated to be 3.15 MJ. This estimate is made in several ways, but one technique involves putting a metal sphere near the fusion centre. When irradiated by neutrons from the fusion reaction, nuclear reactions cause the metal sphere to become transiently radioactive, and measurements of this induced radioactivity allow an estimate of the number of neutrons to which it was exposed. This neutron flux is directly linked to the number of fusion events.

The difference between 3.15 MJ and 2.05 MJ, i.e. 1.1 MJ, is inferred to come from deuterium-tritium fusion reactions. Dividing this yield by the energy per fusion reaction suggests that there were roughly 3.9 x 10^17 fusion reactions. At the panel discussion it was stated that this was 4% of the number of possible fusions, which allows us to estimate that there were around 10^19 molecules of D and T in the sphere, with a volume (in the solid state) of around 4 cubic millimetres.
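For anyone who wants to check the arithmetic, here is a short Python sketch. The 4% burn fraction is the figure stated at the panel discussion.

```python
# Working backwards from the press-conference figures
e_laser = 2.05e6      # J, energy delivered by the laser pulse
e_out = 3.15e6        # J, estimated energy of the explosion
mev_to_joules = 1.602e-13
e_per_fusion = 17 * mev_to_joules    # ~2.7e-12 J per D-T fusion event

fusion_yield = e_out - e_laser               # 1.1e6 J attributed to fusion
n_fusions = fusion_yield / e_per_fusion      # ~4e17 fusion reactions
burn_fraction = 0.04                         # stated at the panel discussion
n_molecules = n_fusions / burn_fraction      # ~1e19 D-T pairs in the sphere
print(f"{n_fusions:.1e} fusions from ~{n_molecules:.0e} available D-T pairs")
```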

Energy and Power

Is 1.1 MJ a lot or a little? The answer depends on what you compare it with.

A familiar unit of energy for consumers is the kilowatt hour (kWh) – the units in which we are billed for our gas and electricity. One kilowatt hour is 3.6 MJ, so 1.1 MJ is an appreciable amount of energy. Enough to boil around 3 litres of water.

1 MJ is also the typical energy content of a stick of dynamite. A stick of dynamite weighs ~ 190 grams whereas this same energy was released by ~ 60 micrograms of deuterium-tritium mixture. This gives a sense of the extraordinary power density available in nuclear reactions, and why they make such powerful explosives.

Unsurprisingly, these explosions damage the chamber in which they occur, and the optics used to focus the laser beams onto the target need to be repaired after each shot.

Breakeven: the problems emerge.

The hype surrounding this event arose because for the first time an experimental fusion reaction produced more energy than was required to initiate the reaction.

As was made clear at the panel discussion, this 1.1 MJ of excess energy was the result of the laser imparting 2.05 MJ to the experiment, but the laser itself consumed roughly 300 MJ of electrical energy, and this would itself have been derived from around 600 MJ of primary energy, mostly from burning methane in gas-fired power stations.

If we wanted to “improve” this facility so that the same laser produced enough thermal energy to run a power plant that could generate the electrical energy (at 33% efficiency) to run itself, then we would need to increase the yield by a factor of 3 x 300 MJ / 1.1 MJ ≈ 800. Where might this gain come from?

  • Only 4% of the deuterium-tritium in the experiment reacted, so we could gain a factor of 25 by arranging for all the deuterium-tritium charge to burn. Now we just need a factor of 33.
  • We might increase the efficiency of the laser from 1% to (optimistically) 20%, and then we would just need a factor of 1.6 from ‘somewhere’ to break even.

For the sake of argument, let’s assume we got that factor 1.6 somehow – perhaps by increasing the charge of deuterium-tritium. We would then have a system that could in raw energy terms sustain itself. But we would not yet be generating any extra energy at all!

At this point we would have an experiment that once every few weeks could produce an explosion yielding 900 MJ i.e. the equivalent of 900 sticks of dynamite or about 200 kg TNT.

No feasible path to a reactor.

Let’s suppose we want a fusion reactor which can produce 100 MW of electrical power to an external load. This is a small generating plant on a national scale – the UK peak requirement is around 40,000 MW (40 GW) and the planned Hinkley C reactor (if it ever operates) should produce 3,200 MW (3.2 GW).

To achieve 100 MW of electrical output we would need to generate around 300 MW of thermal power to operate a turbine and generator set with an output of 100 MW of electricity. This means that having gone to considerable trouble to generate energy via fusion we would then throw away two thirds of it as heat!

300 MW of thermal power corresponds to 300 MJ/second, so assuming that we can (somehow) produce 900 MJ explosions, we would need one explosion every 3 seconds just to generate enough electricity to operate the plant i.e. to ‘break even’. An additional 300 MW of heat would be required to make electricity for other uses: in total, this would require an explosion every 1.5 seconds.
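Here is that arithmetic as a short Python sketch; the 900 MJ yield and 33% generating efficiency are the optimistic assumptions described above.

```python
# Repetition rate implied by a 100 MW(e) plant built around 900 MJ explosions
electrical_output = 100e6   # W delivered to external consumers
thermal_efficiency = 1 / 3  # turbine and generator set
shot_yield = 900e6          # J of heat per explosion (optimistic, see above)

thermal_for_plant = 300e6                                  # W, to run the plant itself
thermal_for_grid = electrical_output / thermal_efficiency  # another 300 MW

seconds_per_shot = shot_yield / (thermal_for_plant + thermal_for_grid)
print(f"One 900 MJ explosion every {seconds_per_shot:.1f} seconds")  # ~1.5 s
print(f"~{86400 / seconds_per_shot:,.0f} targets needed per day")    # ~58,000
```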

So to summarise, to produce a power plant outputting 100 MW of electricity the designers would need to:

  • Find a way to manufacture tritium.
  • Find a way to capture the energy of the explosions and turn it into heat.
  • Improve laser efficiency by a factor 20 and improve repetition rate by a factor 80,000 from around 1 laser pulse per day to around 1 laser pulse per second.
  • Build a chamber which could withstand a small nuclear explosion (0.2 tonnes of TNT equivalent) every second for (say) 30 years. Remember that the reaction chamber itself would become intensely radioactive and no human could enter it once its service life began.
  • Within this chamber a cryogenically-cooled target must be put in place with nanometre precision once a second.
  • No debris from the previous explosion can remain because this would affect the path of the lasers.
  • To achieve electricity output at a cost of $1 per kWh – around 10 times current US prices – the cost of the target could not exceed $40. More realistically – considering the other costs involved – the target would need to cost ~$4, and around 58,000 targets would be required every day.

In short, there is no feasible path to turn this physics experiment into a reactor. And even if all the challenges above were somehow overcome, the electricity would still be extraordinarily expensive.

Why the hype?

Friends, we are being ‘gaslighted‘.

As Wikipedia puts it:

This term may also be used to describe a person (a “gaslighter”) who presents a false narrative to another group or person, thereby leading them to doubt their perceptions and become misled, disoriented or distressed. Often this is for the gaslighter’s own benefit.

Lawrence Livermore National Laboratory is a nuclear weapons research institute, and one can see how being able to create ‘mini’ nuclear explosions might be useful for them. And that is what this facility is for. As Mark Herman, the LLNL Director for Weapons Physics and Design, said in the panel discussion:

“... the ignition work we’re doing is for stockpile stewardship. Our thermonuclear weapons have Fusion ignition … and so studying Fusion ignition is something we do to support the stockpile stewardship program.”

In other words it is a technology which allows the US to design and test nuclear weapons without contravening the Comprehensive Nuclear Test Ban Treaty.

Any attempt to frame this technology as having any application whatsoever to energy generation is a deception.

The answers to our energy needs are already available to us.

And finally…

My comments in this article refer to Inertial Confinement Fusion (ICF). In contrast, Magnetic Confinement Fusion (MCF) does have an unlikely, but conceivable path to making a power plant.

In July 2020 I wrote about MCF in this article: Are fusion scientists crazy? The article includes a précis of (and a link to) an excellent talk from Zach Hartwig which I think is the best summary of all approaches to fusion that I have seen.

My other articles about fusion – dating back to 2013! – can be found here:

If you liked this article, you will likely be disappointed with the following articles that I looked at while preparing this:

Assessing Powerwall battery degradation

December 12, 2022

Click on image for a larger version. Three screenshots from my phone showing the performance of the battery on 9th and 17th January 2022, and 11th December 2022. The key data concerns the total amount of energy discharged from the Powerwall. See text for details.

Friends, the Tesla Powerwall2 battery that we installed in March 2021 has transformed the way we use electricity and allowed us to go off-grid for prolonged periods each year. I have no regrets.

But lurking at the back of my mind, is the question of battery degradation.

This phenomenon arises due to parasitic chemical reactions that occur as the battery approaches either full charge or full discharge. These reactions ‘capture’ some lithium and remove its ability to store charge. Hence one expects the capacity of a battery to decline with extended use, particularly near the extremes of battery capacity.

This particularly affects batteries used for domestic applications as they are often charged fully and then discharged fully – particularly in the winter.

The extent of the degradation depends on the specific chemistry of the battery. More modern battery chemistries labelled ‘LiFePO4: Lithium Iron Phosphate’ perform better than the previous best-in-class, so-called ‘NMC: Nickel Manganese Cobalt’ chemistry. Unfortunately, the Powerwall2 uses NMC batteries. This article has a comparison of the properties of different lithium-ion battery chemistries.

Battery degradation is a real phenomenon, but unsurprisingly, battery manufacturers do not make it straightforward to spot. I first looked at this about a year ago, but I don’t think my analysis was very sensible.

I now think I have a better method to spot degradation, and 20 months after installation, initial degradation is apparent.

Method. 

The new method looks at data from winter days during which the battery is discharged from full to empty, with little or no solar ‘top up’.

In winter our strategy is to charge up the battery with cheap electricity (currently 7.5p/kWh) between 00:30 and 04:30 and to run the house from this until the battery is empty. When it’s cold and the heat pump is working hard we can use up to 30 kWh/day and so the nominal 13.5 kWh of stored electricity is not enough to run through the day. So we run out of battery typically in the early evening and then run off full-price electricity until we can top up again.

The run-time can be extended by a top-up from the solar PV system, which can be anything from 0 kWh in overcast conditions, up to around 7 kWh in full December sun.

My idea is to measure the Powerwall’s total discharge and to compensate for any solar top up. By restricting measurements to days when the battery goes from full to empty, I don’t have to rely on estimates of battery remaining capacity. These days mainly occur in December and January.

For example, today (12 December 2022), the battery was charged to 100% at 04:30 and discharged 12.8 kWh to reach 0% just after midday. I thus estimate the battery capacity as 12.8 kWh.

But on 7 December 2022, the battery was charged to 100% at 04:30 and discharged 15.5 kWh to reach 0% at around 22:00. This was a sunny day and the battery was topped up by 3.0 kWh of solar electricity. I thus estimate the battery capacity as 15.5 − 3.0 = 12.5 kWh.

In this latter case the way to compensate for the 3.0 kWh of charging is not clear. Why? Because the 3.0 kWh of solar electricity is used to charge the battery, and this may be done with (say) 95% efficiency, in which case only 2.85 kWh of solar energy would actually be stored. So there is some ambiguity in data which are solar-compensated, but for this analysis I am ignoring this difficulty.
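For fellow battery owners, the estimation method is simple enough to express as a few lines of Python. The 95% charging efficiency is my assumption, as discussed above.

```python
def capacity_estimate(total_discharge_kwh, solar_topup_kwh=0.0,
                      charge_efficiency=1.0):
    """Usable capacity on a full-to-empty day: total discharge minus
    whatever was added to the battery during the day."""
    return total_discharge_kwh - solar_topup_kwh * charge_efficiency

print(capacity_estimate(12.8))             # 12 Dec 2022: 12.8 kWh, no solar
print(capacity_estimate(15.5, 3.0))        # 7 Dec 2022: 12.5 kWh
print(capacity_estimate(15.5, 3.0, 0.95))  # ...or 12.65 kWh at 95% efficiency
```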

The data are shown below:

Click on image for a larger version. Graph showing total Powerwall discharge after compensating for any solar top-up. See text for details.

Discussion. 

The nominal capacity of the Powerwall2 is 13.5 kWh. This is – presumably – the stored electrical energy of the batteries when they are fully charged. To be useful, this energy must be discharged and converted to AC power, and this cannot be done with 100% efficiency.

Considering the data from the winter of 2021/22, the average Full-to-Empty discharge was 13.1 kWh, and so it looks like the discharge losses were around 3%. I think this is probably a fair estimate for the performance of a new battery.

The data show a considerable amount of scatter: the standard deviation is around 0.2 kWh. I am not sure why this is. Last winter, the battery would sometimes only charge to 99% rather than 100% and I corrected for this. That is why the capacity data do not lie entirely on exact tenths of a kWh.

Considering the data from the winter of 2022/23, the average Full-to-Empty discharge is currently 12.8 kWh. This represents a reduction in capacity of 2.3% (0.3 kWh) compared with last winter. However, there is a whole winter ahead with another 50 or so full discharges before spring and that average could well fall.

If the trend continued then battery capacity would fall to 10 kWh in around 2030. That would still be a useful size battery, and by that time hopefully a newer (and cheaper!) model will be available.

I’ll be keeping an eye on this and will write an update at the end of the winter season. But I thought it was worth publishing this now in case fellow battery owners wanted to monitor their own batteries in a similar way.


Cold Weather Measurements of Heat Transfer Coefficient

December 11, 2022

Friends, it’s winter and the weather is reassuringly cold: average daily temperatures in Teddington are around 0 °C. And as I wrote the other week, that offers the opportunity to make measurements of the Heat Transfer Coefficient of a dwelling.

People with Gas Boilers

This is especially valuable for people with gas boilers who are thinking about getting a heat pump.

When the outside temperature is around 0 °C, the average heating power required to heat the majority of UK homes is typically in the range 5 kW to 10 kW.

Most gas boilers have a full power of 20 kW to 30 kW and so can heat a home easily. To keep the temperature just right, the boilers cycle on and off to reduce their average output to the required level. For most houses there is no possibility that a boiler will be undersized.

Heat pumps operate differently. They are typically less powerful than boilers and the maximum heat pump output must be chosen to match the maximum heat requirement of the house.

By measuring the daily use of gas (kWh) by a boiler on a cold day one can estimate the size of heat pump required to heat the dwelling to an equivalent temperature.

I wrote about this at great length here, but at its simplest one just takes the amount of gas used on a very cold day (say 150 kilowatt hours) and divides by 24 (hours) to give the required heat pump power in kilowatts (150/24 = 6.3 kW).
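As a sketch in Python, using the example figure above:

```python
# Rule of thumb: divide the coldest day's gas consumption (kWh) by 24 hours
coldest_day_gas_kwh = 150  # e.g. read from a smart meter on a very cold day
heat_pump_size_kw = coldest_day_gas_kwh / 24
print(f"Required heat pump power: ~{heat_pump_size_kw:.1f} kW")  # ~6.3 kW
```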

People with Heat Pumps

But the cold weather is not just for people with gas boilers: Heat pump custodians and people heating their house electrically can gain insights when it’s cold.

Starting on 1st December I looked up:

  • the average daily temperature;
  • the daily heat output from the heat pump (in kWh);
  • the daily electricity consumption of the heat pump (in kWh);

The internal temperature was a pretty stable 20 °C throughout this period. So I first worked out the so-called temperature demand: that’s the difference between the desired internal temperature and the actual external temperature.

I then plotted the daily heat output from the heat pump (in kWh) versus the average daily temperature demand (°C). The data fell on a plausible straight line as one might expect. Why? Because the colder it is outside, the faster heat flows out through the fabric of the dwelling, and the greater the rate at which one must supply heat to keep the temperature constant.

In the graph below I have re-plotted these data, but instead of using the daily heat output from the heat pump (in kWh), I have divided by 24 to give the average heat pump power in kilowatts.

Click on graph for a larger version. Graph of average heating power (in kW) versus temperature demand (°C) for the first 10 days of December 2022. Notice that the line of best fit does not go through the origin.
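For anyone who wants to repeat this analysis, here is a minimal sketch using numpy. The numbers below are illustrative rather than my actual readings: the point is simply that the slope of the least-squares fit is the Heat Transfer Coefficient, and the intercept reveals the offset discussed further down.

```python
# A minimal sketch: fit average heating power (kW) against temperature
# demand (°C). The data below are illustrative, not my actual readings.
import numpy as np

temperature_demand = np.array([14.2, 16.8, 18.1, 19.5, 20.3])  # °C
heat_output_kwh = np.array([45.0, 55.0, 60.0, 66.0, 69.0])     # kWh/day

avg_power_kw = heat_output_kwh / 24.0  # convert kWh/day to average kW
slope, intercept = np.polyfit(temperature_demand, avg_power_kw, 1)

print(f"Heat Transfer Coefficient ≈ {slope * 1000:.0f} W/°C")
print(f"Best-fit line crosses zero at a demand of {-intercept / slope:.1f} °C")
```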

The maximum heat output from the 5 kW Vaillant Arotherm plus heat pump varies with the external temperature, but for flow temperatures of up to 45 °C, it exceeds 5.6 kW.

The maximum daily average power for the first 10 days of December is just over 3 kW, so I think the heat pump will cope well in even colder weather. Indeed, I could probably have got away with the next model down. But it does seem to be a general rule of heat pumps that one ends up with a model one size larger than one actually needs.

Click on image for a larger version. Specifications for the Vaillant Arotherm plus heat pump. For the 5 kW model at an external temperature of -5 °C and heating water for radiators to between 40 °C and 45 °C, the maximum output is between 5.6 kW and 6 kW.

The slope and intercept of the graph

The slope of the graph is approximately 0.166 kW/°C or 166 W/°C. This figure is known as the Heat Transfer Coefficient for a dwelling. It is the figure which characterises the so-called fabric efficiency of a dwelling.

However, as I noted many years ago when I looked at this problem using gas boiler measurements, the straight line does not go through the origin. The best-fit line suggests zero power output when the external temperature is 2.8 °C below the internal temperature.

This would imply that the heat flow through the fabric of the building was not proportional to the difference between the inside and outside temperature.

The reason for this is that there are other sources of heating in the house, and not all the heat pump output goes into the house. Specifically:

  • People: each person heats the house with around 100 W, about 2.4 kWh/day.
  • Electrical Items: All the electricity consumed by items in the house ends up as heat. For my home this amounts to around 10 kWh/day.
  • Hot Water: Heat pump output that heats domestic hot water is mostly lost when the hot water is used. My guess for this house is that this amounts to around 3 kWh/day.

So to estimate the actual amount of heat dissipated in the house I should really take the heat pump output and:

  • Add 2.4 kWh/day for each person in the house
  • Add 10 kWh/day for all the electrical items
  • Subtract 3 kWh/day for the hot water lost.

Together this amounts to adding 9.4 kWh/day to each daily heating figure. Pleasingly, when the same graph is plotted with these corrections, the best-fit line now intercepts within 0.5 °C of the origin. To me this indicates that I am now accounting reasonably well for all the significant sources of heat within the house.

Click on graph for a larger version. Graph of average heating power (in kW) versus temperature demand (°C) for the first 10 days of December 2022, with the corrections described above applied.
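In code, the correction is a one-line adjustment applied to each day's heat pump output before re-fitting. The per-person, appliance and hot-water figures are my estimates from the list above.

```python
# Correct each day's heat pump output (kWh) for the other heat sources
# and sinks in the house, using the estimates given above.
def corrected_heat_kwh(heat_pump_kwh: float, occupants: int = 1) -> float:
    people = 2.4 * occupants   # kWh/day of body heat per person
    appliances = 10.0          # kWh/day dissipated by electrical items
    hot_water_loss = 3.0       # kWh/day leaving with the used hot water
    return heat_pump_kwh + people + appliances - hot_water_loss

print(corrected_heat_kwh(60.0))  # 60 + 9.4 = 69.4 kWh
```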

I haven’t included any solar gain in these estimates because at this time of year solar gain is generally very low unless a home has large south facing windows. Previously I have noted that solar gain seemed to be much more important in spring and autumn with longer and generally sunnier days.

COP

I also took the opportunity to evaluate how the daily averaged coefficient of performance (COP) varies with external temperature. This is based on readings from a heat meter and an electricity meter which monitors the heat pump.

The COP values below 3 are a little bit lower than I would like, but still acceptable. On subsequent cold days I will be seeing if there are adjustments I can make to the heat pump operation which can improve this.

Click on graph for a larger version. Graph of daily average COP versus daily average outside temperature (°C) for the first 10 days of December 2022. Extrapolating the trend suggests that the COP would reach unity at a temperature of −12 °C.
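Here is a sketch of the same calculation for anyone with a heat meter and a dedicated electricity meter. Again the numbers are illustrative: the COP is simply the daily heat output divided by the daily electricity consumption, and the extrapolation is a linear fit.

```python
# A minimal sketch: daily COP = heat delivered / electricity consumed,
# then a linear fit against outside temperature. Illustrative data only.
import numpy as np

outside_temp_c = np.array([-1.0, 0.5, 2.0, 4.0])      # °C
heat_out_kwh = np.array([70.0, 64.0, 58.0, 50.0])     # kWh/day (heat meter)
electricity_kwh = np.array([26.4, 22.3, 18.7, 14.7])  # kWh/day (electricity meter)

cop = heat_out_kwh / electricity_kwh
slope, intercept = np.polyfit(outside_temp_c, cop, 1)
print(f"Extrapolated COP = 1 at {(1 - intercept) / slope:.0f} °C")  # ≈ -12 °C
```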

Out of curiosity, I also evaluated the heat output using the Vaillant sensoApp. The figures were massively in error: for example, on 10th December the app suggested the total heat delivered to the house by the heat pump was 58 kWh, when the correct answer – from the heat meter – was 73.9 kWh.

Conclusion

Cold weather offers an opportunity to assess the so-called Fabric Efficiency of a dwelling by direct measurements of its Heat Transfer Coefficient.

The cold weather will be with us for a few more days so there’s still a chance to make measurements in your dwelling.

Tony Seba has got me thinking

December 4, 2022

Friends, while browsing on YouTube, The Algorithm suggested I watch videos by Tony Seba. And just as The Algorithm foresaw, I have found them fascinating.

The reason for my fascination is that he makes very specific predictions for the rate at which legacy industries (coal, oil, gas, automotive, and animal farming) will collapse.

In general he anticipates rapid changes – much more dramatic than conventional forecasts envision. He anticipates that all these industries will be dramatically disrupted, and some of them eliminated, by 2030.

His reasoning is based on the idea of rational deployment of capital in a free market. His predictions do not require people to make choices on moral or environmental grounds. Rather he believes that in a free market, the collapse of these legacy industries will happen because it is economically inevitable.

I would very much like to see the collapse of the oil and automotive industries, and so some part of me would really like to believe his predictions. But while his arguments are compelling, I remain sceptical: I find it hard to believe it will all just ‘happen’: there are strong forces seeking to maintain the status quo. [Edit: The first comment on the article was also sceptical, but so well-worded that I have promoted it to the main text at the end of the article.]

So this article is about Tony Seba’s predictions, and my thoughts about whether or not they will come to pass.

Who is Tony Seba? 

Tony Seba’s web site says:

Tony Seba is a world-renowned thought leader, author, speaker, educator, angel investor and Silicon Valley entrepreneur

His work focuses on technology disruption, the convergence of technologies, business model innovation, organizational capabilities and product innovation that leads to the creation of new industries and societies and the collapse of existing ones.

Tony Seba’s basic ideas add up to Market Disruption

Tony Seba's ideas focus on two types of technological change – which he calls change from above and change from below – and on developments in business models.

Change from above is when a new technology is superior to an existing technology, but initially much more expensive than the standard product. This is the case for many new products. However, through the action of the learning curve (see below), the price of the superior product falls exponentially as its production volume increases, i.e. the price falls by a characteristic factor (say 20%) for each doubling of production. The compounding of these factors year upon year leads to dramatic and initially inconceivable falls in prices. Think flat-screen TVs.

Change from below is when a new technology is inferior to an existing technology, but much cheaper than the standard product. Through the action of the learning curve (see below), the quality of the inferior product increases exponentially as its production volume increases, i.e. some quality metric improves by a characteristic factor (say 20%) for each doubling of production. The compounding of these factors year upon year leads to the new product becoming superior to the standard product at a much lower price. Think digital cameras.

And aside from technological innovations, he discusses the importance of business models. For example, he discusses the demise of Kodak, a global giant in photography which made money every time anyone took a photograph or had a print made. Kodak invented digital imaging, and foresaw a world in which it would take a cut of every digital photograph taken, just as it had in the previous era. Of course, digital photography doesn't work like that: digital photos are essentially free, and the companies that make money from digital photography are Facebook and Instagram – who use completely different business models.

Tony Seba calls the collective impact of these changes market disruption. In such processes, existing markets collapse, stable businesses operating on small margins go bankrupt even in the early stages of a transition, and new businesses emerge that work in ways that seem initially quite foreign.

In retrospect, these changes can appear to be inevitable, but that is not how they feel at the time: during a technology transition, things probably appear chaotic and confusing, with lots of hype and mis-information. Often long-standing traditions – ways of working and living that have stood for generations and seemed unalterable – disappear over short periods of time. These disruptions to the status quo are typically accompanied by the personal distress of many individuals and families, and societal upheaval. They also typically involve the creation of new industries and the destruction of old ones.

Tony Seba’s Predictions

Tony Seba has a list of technological changes (13m47s into the video below), and he looks at how convergence of these technologies, coupled with exponential changes in price or performance, will lead to disruption. [Note: exponential means changing by a constant factor per unit time, rather than changing by a constant increment per unit time]

Amongst the key technologies he looks at are:

  • Solar PV
  • Batteries
  • Computing: Artificial Intelligence

He predicts that the falling costs of Solar PV and Batteries, coupled with AI, will lead to disruption of the entire energy industry (oil, gas and coal) and of transportation (automobiles, distribution). His predictions are often quite specific, and this – in general – means they are not exactly right. But they are also not far off.

Overall, I think Tony Seba’s analysis is interesting and broadly sound, and I am grateful for even the chance to believe that Fossil Fuel industries will collapse in my lifetime. But his analysis is not beyond criticism and I have a list of comments on his work below the embedded video.

There are loads of Tony Seba videos on YouTube, and many of them are very similar. I’ve selected one long video below from April 2020 that covers most of his thoughts about this field.

Comment#1: The Learning Curve

The learning curve was something I had not fully appreciated as a general phenomenon.

Tony Seba does not concern himself with the specifics of what gives rise to a particular learning curve. He takes the learning curve as an input, and then extrapolates to see what would happen if the learning trend continues.

In a way this is a weakness, because understanding the details of how a learning curve arises can really help in judging how it is likely to continue in the future. But in a way it is also a strength, because it is very easy to get lost in those details.

As an example, lithium-ion batteries have fallen in price by around 19% for each doubling of cumulative production. Compounding this change year on year has resulted in prices falling by a factor of roughly 40 as cumulative production increased around 50,000-fold.

Click on image for a larger version. Learning Curves. The graph (with a logarithmic vertical axis) is from Our World in Data showing the decline of battery prices as cumulative production increased.
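The arithmetic is easily checked: count the doublings in cumulative production, and compound the fractional price fall once per doubling. Note that the figures quoted above are rough: compounding 19% per doubling over the roughly 16 doublings in a 50,000-fold increase gives a price fall of a factor of nearer 30 than 40, so the numbers should be treated as approximate.

```python
# Learning-curve arithmetic: the price falls by a fixed fraction
# for every doubling of cumulative production.
import math

learning_rate = 0.19        # fractional price fall per doubling
production_growth = 50_000  # increase in cumulative production

doublings = math.log2(production_growth)         # ≈ 15.6
price_factor = (1 - learning_rate) ** doublings  # ≈ 0.037
print(f"{doublings:.1f} doublings -> price falls by a factor of "
      f"{1 / price_factor:.0f}")
```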

Comment#2: The S curve

A large part of Tony Seba's analyses involve so-called S-curves that describe the way in which innovations diffuse through society. In particular, he points out that despite initially low market penetration, innovations can transform societies remarkably quickly. In almost all his talks he contrasts photographs of the New York City Easter Parade in 1900 and 1913: in just 13 years the parade switched from being 99% horse-drawn carriages to 99% motor cars.

Click on image for a larger version. The S-curve describing the fractional market penetration of an innovation from 0% to 100%. In the early stages of the curve the growth is typically exponential, even though the innovation may still represent a very small fraction of the market; once established, full market penetration can happen very quickly. This graph is stolen from this excellent essay.

The importance of the S-curve is that it qualitatively describes the way technological disruptions occur: to use a literary metaphor, they happen "slowly, then suddenly". And when one is in the lower part of the S-curve, it can be very difficult to anticipate what may be about to happen.
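A logistic function is the standard mathematical form of an S-curve, and a few lines of code show the 'slowly, then suddenly' character. The growth rate and midpoint below are arbitrary illustrative choices.

```python
# The logistic S-curve: exponential growth at first, saturation later.
import math

def s_curve(t: float, t_mid: float = 0.0, rate: float = 1.0) -> float:
    """Fractional market penetration (0 to 1) at time t."""
    return 1.0 / (1.0 + math.exp(-rate * (t - t_mid)))

for t in range(-6, 7, 2):
    print(f"t = {t:+d}: {100 * s_curve(t):5.1f}% of market")
# 0.2%, 1.8%, 11.9%, 50.0%, 88.1%, 98.2%, 99.8% - slowly, then suddenly.
```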

Comment#3: Are the Markets Free?

Perhaps my biggest concern about Tony Seba's analysis is that a very large number of institutions and governments are entangled with the oil and gas industries. These institutions value the 'stability' of seeing oil and gas continue exactly as they do now: every extra year of production is a year in which existing investments in oil and gas infrastructure yield extra profit.

The switch to a renewable energy infrastructure will require colossal amounts of capital investment, but will also result in the destruction of the colossal value of existing investments which will become ‘stranded’. Consequently, existing investors, and the institutions they influence, are heavily incentivised to do everything they can to prolong the lifetime of oil and gas by slowing the development of wind, solar PV and batteries.

Tony Seba and his colleague warn about this in the video below, but perhaps don't clearly communicate the underhand ways in which the oil and gas industries seek to influence discussion of their business.

Conclusion

This article is about the future, and the future is fundamentally unknowable. But I think Tony Seba’s appreciation of some of the non-linear dynamics of markets is insightful. And it chimes with my own personal experience that the technology changes I have experienced in my lifetime have happened much faster than I personally anticipated.

But changes on the scale he foresees will bring phenomenal disruption across society and will be resisted furiously by those with vested interests in the status quo.

In the 2030s will we look at the ruined infrastructure of coal, oil and gas as we now look at canals or closed-down shipyards? Will these mighty industries be reduced to ruins, like the broken statue of Ozymandias in the desert? I do hope so.

Promoted Comment

Your skepticism is well founded because there is a fundamental error that most “free market” advocates make.

The belief is that companies will be driven to change by the pursuit of profit – that the drive to maximize shareholder value will drive them to pursue maximum profits.

However, the real motivation seems to be something adjacent but distinctly different: Companies – at least established ones – will always pursue the path of profitable least risk.

This makes sense through the “maximize shareholder value” lens: if we are making money now, why make major changes and put that at risk?

The answer of course is when the risk of getting upstaged by a competitor or upstart is sufficiently great, then _and_only_then_ will the company embrace radical change in pursuit of new profit opportunities.

The profit-maximization curve and the risk curve are related, but distinct, because the risk curve is subject to restrictions in market access, industry inertia, and other factors that tend to retard change. Those factors are amplified significantly when there are other entities (either businesses or governments – in this case, both) that are heavily invested in the status quo.

In other words, your skepticism is warranted. The market is a useful tool, but it is far from perfect and it doesn't work the way its biggest fans think it does. The economic case for non-fossil energy sources is compelling and becoming more so every day. But we will not be able to rely on market effects alone – we're already far behind where we need to be, and the incumbent market will do everything in its power to make sure we stay that way.

Greta’s Climate Book: An antidote to hope

November 21, 2022

Friends, as I have written before, I love and admire Greta Thunberg.

So when I heard that Greta had edited a collection of short essays on Climate Change, I ordered a copy immediately. Quietly I thought to myself: “Well that is Christmas gifts for everyone sorted.”

And when I collected my copy from the local bookshop I was delighted to find that Greta herself had signed the book! When I got home I sat down eagerly to read.

The book is attractive, covered in Climate Stripes, but at 446 pages and 1.383 ± 0.002 kg it was larger and heavier than I had anticipated.

It is also well-written. Greta’s essays that introduce the various sections are excellent: she writes with outstanding clarity. And the general standard of the short essays is excellent. I learned a lot about many different aspects of Climate Change that I had not previously focussed on.

However, I will not be gifting this book to anyone I love. Why? Because I found it overwhelmingly depressing.

Antidote to hope

Friends, Climate Change scares me. I feel the fragility of our way of life and I feel terrified for my children. And I am acutely aware of just how profoundly bad our situation is. In many ways, I am a natural ‘Doomer‘. But I resist that temptation and prefer to focus on what I can do to try to improve things – however marginally.

My resistance is not really supported by the weight of evidence which is probably on the side of the doomers. It’s a choice I have made.

And that’s my problem with the book. It amplifies every negative aspect of our situation in a way which I found overwhelmingly depressing. I appreciate the book’s straightforward honesty, but it doesn’t help me get by from day to day.

The book implies that there is no solution to the problem of climate change without simultaneously solving multiple problems of inter-national, inter-ethnic, inter-gender and inter-generational justice – problems that seem to me to be much harder than the fundamentally technical problem of stopping emitting carbon dioxide.

There is more than one thing happening

At the moment on Earth there are two epochal changes taking place. Climate Change is one of them, and the multiple levels, scales and implications of that change are well described in Greta's book.

But we are also undergoing an Energy Transition which I estimate will have impacts on the same scale as the Industrial Revolution.

Solar Energy, Wind Energy and Battery storage have plummeted in price and their deployment is accelerating exponentially. I’ll be writing more about this in coming weeks, but by most measures, some combination of these technologies provides the cheapest electricity humanity has ever known.

As someone who is planning to operate their home entirely from solar power for 6 months of next year, this technological shift feels very real. And this change has taken place in my lifetime.

Cost is the key. Because these technologies are cheaper than building any other kind of power, they will – even in the face of strong opposition – inevitably win. In the end, the fossil fuel technologies will simply not be able to compete. In the end we will make the energy transition, not because it is the moral thing to do, but because it is economically essential.

And this transition seems to me to offer some hope to people living in both developed and developing countries.

The Energy Transition will not bring with it solutions to the multiple problems of inter-national, inter-ethnic, inter-gender and inter-generational justice. But it does offer at least a realistic opportunity to reduce carbon dioxide emissions relatively quickly.

And for me, that would be enough.


