Are fusion scientists crazy?

July 8, 2020

Preamble

I was just about to write another article (1, 2, 3) about the irrelevance of nuclear fusion to the challenges of climate change.

But before I sharpened my pen, I thought I would look again to see if I could understand why a new breed of fusion scientists, engineers and entrepreneurs seem to think so differently. 

Having now listened to two and a half hours of lectures – links at the bottom of the page – I have to say, I am no longer so sure of myself.

I still think that the mainstream routes to fusion should be shut down immediately.

But the scientists and engineers advocating the new “smaller faster” technology make a fair case that they could conceivably have a relevant contribution to make. 

I am still sceptical. The operating conditions are so extreme that it is likely that there will be unanticipated engineering difficulties that could easily prove fatal.

But I now think their proposals should be considered seriously, because they might just work.

Let me explain…

JET and ITER

Deriving usable energy from nuclear fusion has been a goal for nuclear researchers for the past 60 years.

After a decade or two, scientists and engineers concluded (correctly) that deriving energy from nuclear fusion was going to be extraordinarily difficult.

But using a series of experiments culminating in JET – the Joint European Torus – fusion scientists identified a pathway to create a device that could release fusion energy, and proceeded to build ITER, the International Thermonuclear Experimental Reactor.

ITER is a massive project with lots of smart people, but I am unable to see it as anything other than a $20 billion dead end – a colossal and historic error. 

Image of ITER from Wikipedia modified to show cost and human being. Click for larger view.

In addition to its cost, the ITER behemoth is slow. Construction was approved in 2007 but first tests are only expected to begin in 2025; first fusion is expected in 2035; and the study would be complete in 2045.

I don’t think anyone really doubts that ITER will “work”: the physics is well understood.

But even if everything proceeds according to plan, and even if the follow-up DEMO reactor were built in 2050 – and even if it also worked perfectly – it would be a clear 40 years or so from now before fusion began to contribute low-carbon electricity. This is just too late to be relevant to the problem of tackling climate change. I think the analysis in my previous three articles still applies to ITER.

I would recommend we stop spending money on ITER right now and leave its rusting carcass as a testament to our folly. The problem is not that it won’t ‘work’. The problem is that it just doesn’t matter whether it works or not.

But it turns out that ITER is no longer the only credible route to fusion energy generation.

High Temperature Superconductors

While ITER was lumbering onwards, science and technology advanced around it.

Back in 1986 people discovered high-temperature superconductors (HTS). The excitement around this discovery was intense. I remember making a sample of YBCO at Bristol University that summer and calling up the inestimable Balázs Győrffy near to midnight to ask him to come in to the lab and witness the Meissner effect – an effect which hitherto had been understood, but rarely seen.

But dreams of new superconducting technologies never materialised. And YBCO and related compounds became scientific curiosities with just a few niche applications.

But after 30 years of development, engineers have found practical ways to exploit them to make stronger electromagnets. 

The key property of HTS that makes them relevant to fusion engineering is not specifically the high temperature at which they become superconducting. Instead it is their ability – when cooled to well below their transition temperature – to remain superconducting in extremely high magnetic fields.

Magnets and fusion

As Zach Hartwig explains at length (video below) the only practical route to fusion energy generation involves heating a mixture of deuterium and tritium gases to immensely high temperatures and confining the resulting plasma with magnetic fields.

Stronger electromagnets allow the ‘burning’ plasma to be more strongly confined, and the fusion power density in the burning plasma varies as the fourth power of the magnetic field strength. 

In the implementation imagined by Hartwig, the HTS technology enables magnetic fields 1.7 times stronger, which allows an increase in power density by a factor 1.7 x 1.7 x 1.7 x 1.7 ≈ 9.

Or alternatively, for the same power output, the apparatus could be made roughly 9 times smaller by volume. So using no new physics, it has become feasible to make a fusion reactor which is much smaller than ITER.

A smaller reactor can be built quicker and cheaper. The cost is expected to scale roughly as the linear size cubed – i.e. roughly with the volume – so the cost would be around 9 times lower: still expensive, but no longer tens of billions of dollars.
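
For what it’s worth, the scaling arithmetic can be sketched in a few lines of Python using the field values quoted in Hartwig’s talk (9.2 T with HTS magnets versus 5.3 T for ITER). This is just an illustration of the scaling argument, not a design calculation.

```python
# Rough scaling arithmetic for a high-field tokamak (illustrative only).
# Field values are those quoted in the talk: 9.2 T (HTS magnets) vs 5.3 T (ITER).

B_iter = 5.3                                # tesla
B_hts = 9.2                                 # tesla

field_ratio = B_hts / B_iter                # ~1.7
power_density_gain = field_ratio ** 4       # fusion power density scales as B^4

# For the same fusion power, the plasma volume can shrink by the same factor,
# and if cost scales roughly as the linear size cubed (i.e. with volume),
# the cost falls by roughly the same factor too.
volume_reduction = power_density_gain
linear_reduction = volume_reduction ** (1 / 3)

print(f"Field ratio:        {field_ratio:.2f}")
print(f"Power density gain: {power_density_gain:.1f}x")
print(f"Volume reduction:   {volume_reduction:.1f}x")
print(f"Linear size factor: {linear_reduction:.1f}x smaller")
```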

And crucially it would take just a few years to build rather than a few decades. 

And that gives engineers a chance to try out a few designs and optimise them. All of fusion’s eggs would no longer be in one basket.

The engineering vision

Dennis Whyte’s talk (link below) outlines the engineering vision driving the modern fusion ‘industry’.

A fusion power station would consist of small modular reactors, each one generating perhaps only 200 MW of electrical power. The reactors could be made on a production line, which could lower their cost substantially.

This would allow a power station to begin generating electricity and revenue after the first small reactor was built. This would shorten the time to payback after the initial investment and make the build out of the putative new technology more feasible from both a financial and an engineering perspective.

The reactors would be linked in clusters so that a single reactor could come on-line for extra generation and be taken off-line for maintenance. Each reactor would be built so that the key components could be replaced every year or so. This reduces the demands on the materials used in the construction. 

Each reactor would sit in a cooling flow of molten salt containing lithium that when irradiated would ‘breed’ the tritium required for operation and simultaneously remove the heat to drive a conventional steam turbine.

You can listen to Dennis Whyte’s lecture below for more details.

But…

Dennis Whyte and Zach Hartwig seem to me to be highly credible. But while I appreciate their ingenuity and engineering insight, I am still sceptical.

  • Perhaps operating a reactor with 500 MW of thermal power in a volume of just 10 cubic metres or so at 100 million kelvin might prove possible for seconds, minutes, hours or even days. But it might still prove impossible to operate 90% of the time for extended periods. 
  • Perhaps the unproven energy harvesting and tritium production system might not work.
  • Perhaps the superconductor so critical to the new technology would be damaged by years of neutron irradiation.

Or perhaps any one of a large number of complexities inconceivable in advance might prove fatal.

But on the other hand it might just work.

So I now understand why fusion scientists are doing what they are doing. And if their ideas did come to fruition on the 10-year timescale they envision, then fusion might yet have a contribution to make towards solving the defining challenge of our age.

I wish them luck!

===========================================

Videos

===========================================

Video#1: Pathway to fusion

Zach Hartwig goes clearly through the MIT plan to make a fusion reactor.

Timeline of Zach Hartwig’s talk

  • 2:20: Start
  • 2:52: The societal importance of energy
  • 3:30: Societal progress has been at the expense of CO2 emissions
  • 3:51: Fusion is an attractive alternative in principle – but how to compare techniques?
  • 8:00: 3 Questions
  • 8:10: Question 1: What are viable fusion fuels?
  • 18:00 Answer to Q1: Deuterium-Tritium is optimal fuel.
  • 18:40: Question 2: Physical Conditions
    • Density, Temperature, Energy confinement
  • 20:00 Plots of Lawson Criterion versus Temperature.
    • Shows contours of the energy ratio Q
    • Regions of the plot divided into ‘pointless’, ‘possible’, and ‘achieved’
  • 22:35: Question 3: Confinement Methods compared on Lawson Criterion/Temperature plots
    1. Cold Fusion 
    2. Gravity
    3. Hydrogen Bombs
    4. Inertial Confinement by Laser
    5. Particle accelerator
    6. Electrostatic well
    7. Magnetic field: Mirrors
    8. Magnetic field: Magnetized Targets or Pinches
    9. Magnetic field: Torus of Mirrors
    10. Magnetic field: Spheromaks
    11. Magnetic field: Stellarator
    12. Magnetic field: Tokamak
  • 39:35 Summary
  • 40:00 ITER
  • 42:00 Answer to Question 3: Tokamak is better than all other approaches.
  • 43:21 Combining the previous answers: the existing pathway from JET to ITER is logical, but too big, too slow, too complex.
  • 46:46 The importance of magnetic field: Power density proportional to B^4. 
  • 48:00 Use of higher magnetic fields reduces size of reactor
  • 50:10 High Temperature Superconductors enable larger fields
  • 52:10 Concept ARC reactor
    • 3.2 m versus 6.2 m for ITER
    • B = 9.2 T versus 5.3 T for ITER: (9.2/5.3)^4 = 9.1
    • Could actually power an electrical generator
  • 52:40 SPARC = Smallest Possible ARC
  • 54:40 End: A viable pathway to fusion.

Video#2: The Affordable, Robust, Compact (ARC) Reactor: an engineering approach to fusion.

Dennis Whyte explains how improved magnets have made fusion energy feasible on a more rapid timescale.

Timeline of Dennis Whyte’s talk

  • 4:40: Start and Summary
    • New Magnets
    • Smaller Sizes
    • Entrepreneurially accessible
  • 7:30: Fusion Principles
  • 8:30: Fuel Cycle
  • 10:00: Fusion Advantages
  • 11:20: Lessons from the scalability and growth of nuclear fission
  • 12:10 Climate change is happening now. No time to waste.
  • 12:40 Science of Fusion:
    • Gain
    • Power Density
    • Temperature
  • 13:45 Toroidal Magnetic Field Confinement
  • 15:20: Key formulae
    • Gain requires pressure × confinement time ≈ 10 bar-s
    • Power Density ∝ pressure squared ≈ 10 MW/m^3
  • 17:20 JET – 10 MW but no energy gain
  • 18:20 Progress in fusion beat Moore’s Law in the 1990s, but the science stalled as the devices needed to be too big.
  • 19:30 ITER: Energy gain Q = 10, pressure = 3 bar, no tritium breeding, no electricity generation.
  • 20:30 ITER is too big and slow
  • 22:10 Magnetic Field Breakthrough
    • Energy gain ∝ B^3 and ∝ R^1.3 
    • Power Density ∝ B^4 and ∝ R 
    • Cost ∝ R^3 
  • 24:30 Why ITER is so large
  • 26:26 Superconducting Tape
  • 28:19 Affordable, Robust, Compact (ARC) Reactor. 
    • 500 MW thermal
    • 200 MW electrical
    • R = 3.2 m – the same as JET but with B^4 scaling 
  • 30:30 HTS Tape and Coils.
  • 37:00 High fields stabilise plasma which leads to low science risks
  • 40:00 ARC Modularity and Repairability
    • De-mountable coils 
    • Liquid Blanket Concept
    • FLiBe 
    • Tritium Breeding with gain = 1.14
    • 3-D Printed components
  • 50:00 Electrical cost versus manufacturing cost.
  • 53:37 Accessibility to a ‘start-up’ entrepreneurial attitude.
  • 54:40 SPARC – Soonest Possible / Smallest Practical ARC to demonstrate fusion
  • 59:00 Summary & Questions

Be Constructive!

July 5, 2020

Friends, I am very excited.

Yesterday I signed and returned the contract to have external wall insulation applied to my house. The work will start in mid-September in time for what I hope will be a really cold winter!

My calculations suggest this should reduce direct heat loss through the 133 square metres of walls on my house by a factor which might be as large as 5 – WOW!

There are still many uncertainties. For example:

  • I don’t know the extent of heat loss to the Earth through the ground floor.
  • And I am unsure about the significance of air flow in losing heat.

But by continuing my measurements through this summer and next winter I hope to gain insight that should help me plan the next steps.

Be Constructive

The company I have engaged to do this is charmingly called Be Constructive, and they seemed very professional in their assessment of the work.

The work itself is conceptually easy to understand. But it has many time-consuming steps that are required in order to get a finish which will last for many years. The video below shows some of the basics of the process.

When the work is being done I will add more details but here are some of the decisions I have made.

I want the house to look visually similar before and after…

The reason for this is that I want to show that this can be done by ‘normal people’ – and not just measurement obsessives such as myself. Consequently:

  • I have resisted my son’s request to paint the whole house bright yellow.
  • I have restricted insulation to a thickness of 100 mm. I think this is thin enough that the insulation will not be immediately visually obvious.  
  • To get the best thermal performance from this thickness, I have reluctantly used a proprietary insulator – Kingspan K5. I would have preferred ‘Rock wool’ but the Be Constructive surveyor thought the thermal performance would not satisfy me! 
  • The lower part of the house – and the neighbouring houses – has exposed brickwork. This will be matched as closely as possible using ‘brick slips’ – thin ‘faux bricks’ – on top of the render. This looks incredibly tedious and I am glad not to be doing it myself!

This work is expensive. The whole job will cost around £20,000, or about £150 per square metre – rather more than the guide price (link) of £90 per square metre. This increased cost per square metre is due to the improved insulation, the use of brick slips, and one or two ‘fiddly bits’. My guess is that about 40% of the cost is associated with the rendering rather than the insulation. 

But in terms of heat loss, the work is considerably more cost effective than the triple-glazing I had done previously, and should represent a big step towards making the house carbon-neutral. 

The return on investment – in terms of reduced bills – will probably be around 2%. But my rationale is moral rather than financial.

I see wasting heat and putting carbon dioxide in the atmosphere as being in the same category as leaving a sewer to spill onto the street. Given that I have the wherewithal to do something about this, I feel it would be shameful not to act.

..but I do have a large empty wall…

..and my son did want the house painted yellow… Perhaps I should get a mural like this fantastic painting of William Morris?

COVID-19: Day 190: Population Prevalence Projections and Models

July 3, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.
========================================

This post is a 3rd July update on the likely prevalence of COVID-19 in the UK population. (Previous update).

The summary is that the population prevalence of COVID-19 is likely not declining as previously expected, but is either stable or increasing.

This should be a matter of concern as on 4th July (tomorrow) the ‘opening up’ of society is likely – to some extent – to lead to increased transmission.

Population Prevalence

On 2nd July the Office for National Statistics (ONS) updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link), incorporating data for four survey periods covering 3rd May up until 21st June.

Start of survey period | End of survey period | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
03/05/2020 | 10/05/2020 | 131 | 0.27 | 0.13 | 0.47
17/05/2020 | 24/05/2020 | 145 | 0.10 | 0.05 | 0.19
31/05/2020 | 07/06/2020 | 159 | 0.06 | 0.02 | 0.14
14/06/2020 | 21/06/2020 | 173 | 0.04 | 0.02 | 0.08

Data from ONS

Plotting this on a graph we see a decreasing trend.

Click to see the graph in more detail.

The new data indicate a broadly similar story to the old data, with the (unweighted) exponential fit indicating a factor 10 reduction in prevalence every 51 days, similar to the previous estimate of 45 days.
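
To illustrate the kind of (unweighted) exponential fit involved, the sketch below fits a straight line to log10(prevalence) versus day-of-year for the four ONS estimates in the table above, and reports the implied number of days per factor-of-10 fall. This is a minimal reconstruction of the idea, not the ONS analysis.

```python
import numpy as np

# ONS survey estimates from the table above:
# (middle day of the survey window, % testing positive)
days = np.array([131, 145, 159, 173])
prevalence = np.array([0.27, 0.10, 0.06, 0.04])

# Unweighted straight-line fit to log10(prevalence) versus day of year
slope, intercept = np.polyfit(days, np.log10(prevalence), 1)

days_per_factor_10 = -1.0 / slope
print(f"Prevalence falls by a factor of 10 every {days_per_factor_10:.0f} days")

# If the trend held, the day of year on which a target prevalence would be reached
def projected_day(target_percent):
    return (np.log10(target_percent) - intercept) / slope

print(f"0.01% (1 in 10,000) reached around day {projected_day(0.01):.0f} of 2020")
```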

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Mid July | About 6,000
1 in 100,000 | Early September | About 600
Projected dates for a given level of COVID-19 prevalence

We can see the data in more detail if we plot them on a log-linear graph.

Click to see the graph in more detail.

More detailed modelling

We are probably approaching the lower limit of the population prevalence that this kind of survey can detect. Each fortnightly data point is based on testing roughly 20,000 people, and the four data points on the graph above correspond to:

  • 35 positive cases detected from a sample of 16,205
  • 21 positive cases detected from a sample of 20,259
  • 10 positive cases detected from a sample of 24,978
  • 12 positive cases detected from a sample of 23,203

I don’t understand how the weighting process that ONS uses then converts these last two data points from a nominal increase in incidence into a decline in incidence, but I am sure they have their reasons.

However the ONS also include details of a model that they believe underlies the data. Presumably this is based on detailed analysis of the daily data. This is shown on the graph below as a blue dotted line.

Click for a larger graph

This model suggests that the incidence of COVID-19 is no longer declining.

ONS also include a separate model based on the estimated chance that 100 people followed for one week would yield a positive test. This analysis – below – indicates that the chance of acquiring infection has not just stabilised, but has actually increased.

Click for a larger graph.

These models have large uncertainties associated with them. But it is concerning that they no longer indicate a declining prevalence. 

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll, along with the World-o-meter projection from the start of June.

Click to see the graph in more detail.

This data seems to tell a different story from that told by the main prevalence survey – one more in line with the modelling.

This data shows a death rate that is not falling in line with the expected trend. The trend value of deaths (~100 per day) is consistently higher than the 60 or so deaths expected.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

===========================
Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

NPL Reflections: The Serco Legacy

June 30, 2020

It’s been two full months since I left NPL, and it still feels great!

But in quieter moments I have been reflecting on my time at NPL. And in particular, I have been reflecting on how NPL reached its current state, which – as I experienced it – featured a poisonous working environment, abysmal staff morale, and a management detached from reality. From comments on my previous post, my experience is not unique.

Happily the state of NPL is no longer my problem. But even so, I have decided to write about it because, as I experienced it, it was traumatic and tragic. And for years it was impossible to speak openly – NPL’s culture of fear was such that any discussion of NPL’s difficulties would be considered a disciplinary offence.

A recent comment on my previous post by “Bob” asked whether there was any “safe space” at NPL, or any way to start “a conversation” with management. “Bob” asked how it was that one had to leave NPL in order to be able to discuss the abusive working culture! It’s a good question.

So this article is for “Bob” and his or her colleagues who still have to live with NPL’s poor working culture. It describes some previous attempts to “start a conversation” and what happened. It’s quite likely many people – even those working at NPL – were not aware of these past events.

Some people might think this kind of culture of fear is OK, and perhaps in private companies it might be understandable. But what about in public institutions that are being run on behalf of the Government by a private company? This has been the case at NPL, where private companies have established a culture of fear to protect themselves from criticism – even criticism which raises important issues that should be aired in the public interest, since they are mainly spending public money.

What has happened at NPL is not, I think, unusual: it is part of the march of ‘managerialism’ – the belief by ‘managers’ in the special powers of ‘managers’. But that makes it no less regrettable.

In this article I am simply stating what has happened – as I experienced it. I am doing this because the alternative – staying silent now that I no longer have to experience it daily – feels like “letting the bullies win“. Writing this feels like the very minimum that I can do, but it still feels very difficult.

Serco

Looking back, the management company Serco – which ran NPL from 1995 until 2012 – seems to be at the root of many of NPL’s problems. Serco can do some things efficiently, but in retrospect, it was spectacularly ill-suited to running NPL, and – as I discuss below – it created a rift in the organisation between ‘management’ and ‘staff’ – the scientists and engineers who actually embody what NPL is about. As I see it, the poor working culture – the culture of fear that now permeates NPL – stems from this rift.

When I started working at NPL in April 2000, I was 40 years old and Serco had already been running NPL as a management contract for five years. For the first year or two, I came to work each day, went into a laboratory and did scientific experiments – it was great! I had very little to do with Serco and gave them no mind.

From my perspective, the changes at NPL came slowly, but they all came to one thing: a unique (albeit imperfect) working culture with two-way trust between management and scientists was trashed. In its place there has been a progressive glorification of ‘management’ and progressive growth in the institutionalised contempt for science and scientists, engineering and engineers.

How did this come about?

1:   Managers with Serco-vision

Back in 1995, Serco were unused to running scientific establishments and were proud of having won the contract to manage NPL. They saw NPL as a prestigious ‘win’ that added credibility to their expanding ambitions of managing activities on behalf of government. So, for the first few years, they adopted a hands-off management style that was quite well-suited to NPL.

A key step came when ‘management roles’ were made full-time. In retrospect it is surprising this took so long: Serco is a ‘management’ company and their core belief is that organisations are made of ‘managers’ and ‘staff’. Serco see managers as the key to every successful organisation, and ‘staff’ as ‘the problem’: what they – the managers – have to deal with.

Serco see ‘managers’ as embodying the organisation and they see ‘staff’ as being there to follow instructions: this is Serco-vision. Serco’s management challenge was that most people at NPL still thought of NPL as an institution of government that was, for the time-being, being managed on the government’s behalf by Serco. In contrast, Serco wanted to convince staff that NPL was a business; a part of Serco; and that it existed to make a profit.

The move to full-time managers created a management ‘cadre’ who – even if they were not actually paid by Serco – saw NPL through a Serco lens. Key parts of Serco-vision were focused on (paraphrasing) minimally-fulfilling contract specifications, and optimising profits. This vision generated a conflict with NPL working culture which (paraphrasing) sought to do the best possible work, but on a generally prolonged timescale.

Previously, scientists had taken on management tasks to a greater or lesser extent depending on their disposition and the needs of their particular group. ‘Science work’ and ‘management work’ both needed to be done. Having grown through the same system, managers and scientists shared – at least some – cultural assumptions about the value of NPL’s activities.

Serco instituted a policy where managers had to be full-time i.e. scientists had to choose to let go of management responsibilities, or let go of their scientific career. In general, staff who excelled at science chose science: staff who were not so good at science, chose management. This was the seed for a schism that has grown vast in recent years. And thus, the ‘failed’ scientists found themselves in charge!

Over time, managers who had been promoted from within their area of work (and thus had some amount of local expert knowledge and cultural understanding) came to be seen as ‘suspect’ – i.e. loyal to their team rather than to NPL senior management, and slowly managers with no knowledge of the details of the work of a team became the norm.

Really? Oh Yes. Managers with no knowledge of the technically complex areas they managed were preferred over those with knowledge of the area who might be sympathetic to the ‘staff’. In many areas the results were laughable. But managers with no knowledge of what their team were doing became the norm at NPL. And in fact, this is still common.

The managers cope using Serco-vision. They see all work as part of ‘a project’ which is viewed in terms of profit and loss, and capital and staff requirements. The technical aspects of a project (i.e. how it gets done) are seen as mere details that ‘staff’ can deal with.

2     Contempt for Science and Engineering

It had been commented many times that NPL was not really a single institution but a collection of one- or two-person ’boutique’ activities. This meant that there were problems in equitably assessing promotions and adequately rewarding staff.

Serco addressed this by instituting their vision of scientists’ jobs which they described in terms of Role Profiles and a Competency Dictionary. Each ‘role’ within NPL was characterised in extraordinary detail by the extent to which ‘staff’ required certain levels of competency in 15 key areas:

Customer Focus, Building Networks, Strategic Thinking, Commercial Awareness, Planning & Organising, Managing Change, Scientific Awareness, Quality Focus, Conceptual Thinking, Leadership & Team Motivation, Application of Knowledge, Working Together, Understanding Others, Communicating & Influencing, Achievement Drive.

Sharp-eyed readers may have noticed that just one of those 15 key areas had the word ‘science’ in it. But even in this category, scientific or technical excellence was not valued. I won’t bore you with the details but even at the highest level, scientific knowledge was only useful to the extent that it generated business.

It shouldn’t have been a surprise: Serco only sees value in management. So the more like a manager you were – the more valuable you were. Technical skills such as “understanding science“, or “being good at doing experiments” or “understanding Maxwell’s equations“, did not even register.

So one’s career as a scientist depended on fulfilling these competencies and exhibiting particular ‘behaviours’ that demonstrated them. But only one tiny part of one competency referred to actually being professionally excellent at science! Damningly, the words ‘engineer’ and ‘engineering’ did not occur in the entire publication!

Whereas physics and engineering degrees devote perhaps 5% of a course to ‘soft skills’, in the eyes of Serco, 100% of what they valued were non-technical skills. And that legacy of contempt for science and engineering lives on at NPL – stronger than ever!

Management simply did not care about creating an organisation in which great science and engineering was valued. The entire organisation had been taken over by a cadre of people actively indifferent to science and engineering, and with a Serco-vision focus on profit. This was true back then and, based on my experience, I think it is true to an even higher order now.

3     2007 Grand Meeting

The strains on the organisation were becoming clear and in 2007, the then managing director Steve McQuillan called a couple of meetings of senior scientists with senior managers at (bizarrely) local racecourses: first Kempton Park and then Sandown. I should stress these meetings were not on race days ;-).

At the end of the second almost entirely pointless meeting there was a Q&A, and the senior managers sat on stools at the front, ready to answer questions. After a few evasive answers to simple questions, the silence of people NOT asking questions became deafening. At this point Steve McQuillan took the microphone and said (to my great surprise!) “Well, if people feel unable to ask questions in this forum, why don’t they feed them through Michael?” and then he looked at me.

No one contacted me.

I thought about just letting it go, but then I reflected that this stuff mattered, and the next week I emailed all the senior scientists and said I would agree to act as a conduit to Steve McQuillan. I offered to anonymise any comments they had and feed them on. I got 14 responses which I duly anonymised and forwarded.

I was honoured that my colleagues had trusted me, because the culture of fear had already taken root. Even these very senior members of NPL feared the possibility of retribution if they were seen to openly disagree with managers.

I won’t offer the responses here, but they were all very tame. They consisted of the senior scientists respectfully suggesting how to do things better. Reading them, I thought this would be gold dust for managers!

And the response of management was… nothing. Despite having undertaken this task at the specific request of the managing director himself, I did not even receive an acknowledgement.

The reality was becoming clearer still: management did not care in the slightest about scientists’ unhappiness at the changes and did not care about ‘improving things‘. We were ‘staff’: Why should they care?

4     Serco-vision in action

NPL earned its money from government by deploying staff on activities agreed with the relevant government department. Projects would have ‘deliverables’ and when a ‘deliverable’ was complete, NPL would be paid.

Through the years, as managers grew in number and ‘professionalised’, the old habits of NPL began to fade. And new habits arose. Including the habit of needing to meet revenue targets for Serco Head Office.

Using Serco-Vision, managers looked at deliverables differently. The upshot of this was that we transitioned to a culture of marking things complete when they had only minimally been completed. So, for example, if a deliverable specified that “a prototype would be produced“, using Serco-vision, it might not matter whether or not the prototype worked.

When I read about the fraudulent activities that Serco oversaw in other contracts I immediately recognised the scenarios in which people felt their integrity was being challenged by a conflict between their loyalty to an institution, and their loyalty to the company who happened to be running the institution at the moment.

Summary

Looking back now, I reflect that, ten years in to my NPL career, there were several personal achievements.

  • I had been learning a lot of new physics.
  • I had been trusted by my colleagues to reflect their views to management.
  • I had begun the Protons for Breakfast course.
  • I had been awarded a medal by the Queen!
  • And, thanks to the foresight of my colleague Graham Machin, I had become involved in the most complex and challenging task of my career – measuring the Boltzmann constant.

But the general situation of NPL was developing badly. There was a growing schism between managers and scientists, and a culture of fear had been established by management to discourage any questioning of their decisions. And the tempo of work and the focus on profit was building.

I had thought that perhaps in 2012, when Serco lost the contract to manage NPL, we might have had the equivalent of a Truth and Reconciliation Commission. I had thought we might have taken that opportunity to speak about the impact that Serco had had on NPL culture – and it was not all bad – and to think of a new way of working. But instead senior management doubled down on Serco-vision and have thus driven NPL to its current state.

Of necessity, I have written here in general terms. And this article is already too long. But I do feel obliged to mention that there were, and still are, many kind and talented individuals amongst managers. And most of them are doing their best to cope with the way things are. And not all scientists and engineers are saints and geniuses. But the culture of fear is real, and it stems from the top of the organisation which glorifies ‘managers’ and ‘leaders’ even more than Serco.

Finally, I am aware that all institutions must change – and that this is not an easy process. But in retrospect, Serco simply had no meaningful ideas for running NPL other than making it look like their other contracts. And having worked under the current management for several years, they simply have no meaningful ideas at all – except for more managers!

Despite its problems, some good work is still being carried out at NPL. But as ‘old NPL’ staff have retired, and as replacement staff are employed on short-term contracts, I get the sense that fewer and fewer people believe in the importance of the institution of NPL. And like Tinker Bell in Peter Pan, when people stop believing in something, it dies. And, ultimately that is why I am writing this. I feel that something is dying at NPL, and although I am sad about it, I am personally glad to be away from the stench.

Last words

I have set this article aside for a few days, and I have now looked to see if it is worth making public. Is it just me feeling angry or resentful towards NPL?

Well actually I don’t feel angry or resentful. As I mentioned at the start, I feel great!

But I do feel empathy for my ex-colleagues who have to put up with the culture of fear that emanates from senior management and HR. And although this article is not perfect, it does say one or two things that I feel need saying in public. And it tries to look at how things reached this sorry state. On balance I think it is worth publishing.

Note: If any staff at NPL would like to comment – privately or publicly – but are not able to create anonymous dummy accounts then please feel free to e-mail me at michael@depodesta.net . If you would like your comment to be public then please let me know and I will anonymise your comments and post them here.

COVID-19: Day 177: Population Prevalence Projections

June 26, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.

========================================

This post is a 26th June update on the likely prevalence of COVID-19 in the UK population. (Previous update)

Population Prevalence

The Office for National Statistics (ONS) have updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link).

Start of survey period | End of survey period | Middle day of survey (day of year 2020) | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
27/04/2020 | 10/05/2020 | 125 | 0.26 | 0.17 | 0.40
11/05/2020 | 24/05/2020 | 139 | 0.22 | 0.10 | 0.44
25/05/2020 | 07/06/2020 | 153 | 0.05 | 0.02 | 0.10
08/06/2020 | 21/06/2020 | 167 | 0.09 | 0.04 | 0.19
Data from ONS

Plotting this data on a graph we see a decreasing trend, but note that the most recent data point worryingly shows an increase in prevalence.

Click for a larger view

The new data indicate a slightly more concerning story than the previous data, with the (unweighted) exponential fit indicating a factor 10 reduction in prevalence every 70 days, significantly slower than the previous estimate of 43 days. We can see the data in more detail if we plot them on a log-linear graph.

Click for a larger image.

The new data significantly shifts the dates at which key low prevalence values might be expected.

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Early August | About 6,000
1 in 100,000 | Early October | About 600
Projected dates for a given level of COVID-19 prevalence

It is a matter of some concern that levels may not have fallen to the 1 in 100,000 level by the start of the new school term in September.

Limits of the survey power

I mentioned last week that we are probably approaching the lower limit of the population prevalence that this kind of survey can detect.

The last two fortnightly data points were based on testing of 22,523 people with 11 positives and 24,256 people with 14 positives.

The statistical rule-of-thumb is that the expected variability of a count is roughly the square root of the number counted. So the true population incidence amongst these samples could easily have been (say) 12 ± 3. So the difference between 11 and 14 is not strong evidence of an increase in prevalence.

However, based on the previous trend, the expected number of positives would have been (roughly) 5 ± 2. So 14 ± 3 is reasonably strong evidence that the previous trend is not continuing.
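
For what it’s worth, here is the rule-of-thumb arithmetic in a few lines of Python, using the counts quoted above; the ‘expected’ figure of 5 positives is the assumption made in the text, not an ONS number.

```python
from math import sqrt

# Positive counts in the last two fortnightly ONS samples (quoted above)
for count in (11, 14):
    print(f"count {count} ± {sqrt(count):.1f}")

# Compare the latest count with the ~5 positives that would have been expected
# had the earlier downward trend continued (the assumption made in the text)
expected, latest = 5, 14
n_sigma = (latest - expected) / sqrt(expected)
print(f"Observed {latest} vs expected ~{expected}: "
      f"about {n_sigma:.1f} standard deviations above the trend")
```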

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll, along with the World-o-meter projection from late May.

Click for a larger view.

The data again are close to the predicted rate of decline, but lie consistently above the predicted curve for the last 10 days. In short, the rate of decline appears to be slowing.

A slowing in the rate of reduction of deaths now corresponds to additional infections acquired in late May or early June.

Personal View

Personally, I am not surprised by the failure of either the prevalence data or the death rate to continue falling at their previous rates.

In order for that to have happened, societal restrictions would have to have remained as they were in May.

My perception is that restrictions, as observed in practice on the mean streets of Teddington, have become somewhat more relaxed.

If this is the new normal, then we may need to get used to living with coronavirus in circulation at a population prevalence of ill people of around 1 in 1,000.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

===========================

Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

Estimating the thermal performance of my house

June 24, 2020

My house of shame

As I mentioned the other day, I want to make my house as close to carbon-neutral as I can manage.

The largest source of emissions is from the gas I use each winter to heat the house: the emissions have been around 2.5 tonnes of carbon dioxide in each of the last two years. Yes, that’s tonnes.

I could replace this with electrical heating or with a heat pump, but if the house is not properly insulated then it will be very expensive to heat, and the heat pump will need to manage a larger load.

Before splashing cash on fancy heat pumps and solar panels and batteries, my intention is first to improve the thermal performance of the house by:

  1. Replacing the windows – which were old anyway – with modern triple glazing.
  2. External wall insulation
  3. Managing the air flow through the house.

Step 1 is almost complete: Step 2 is planned for the summer: and Step 3 is planned for next year.

I am aware that the return on these investments will amount to only a few percent per annum in savings. But I honestly feel ashamed to live in a house that performs so badly.

But in order to plan rationally and to assess the effect of my improvements, I first need a way to assess how the house is performing at the moment.

Assessing current performance

One obvious approach is to record one’s household gas consumption and then plot it over time. This involves looking back through old bills and finding out the consumption of gas (in cubic metres or cubic feet for example) over time.

But typically one only makes readings every few months, and the amount of gas consumed will vary from one year to the next depending on the weather.

My approach is both extremely tedious and not very accurate. But I feel it does offer some insight which I hope you will find interesting!

  • Firstly, I read the gas meter every weekend and record the readings on a spreadsheet  that calculates my weekly consumption of gas.
  • Secondly I calculate the total energy contained in the gas (in joules) and divide it by the time between readings (in seconds) to give the average power (in joules per second or watts)
  • Thirdly I look up the average weekly external temperature near my house for the corresponding period. I use data from my own weather station, but you can pick data from any nearby station on the Weather Underground’s ‘Wundermap‘.
  • Finally I subtract the average external temperature from the average internal temperature which I think is around 18 °C. So if the average external temperature for a week is 2 °C then I record the temperature difference as 16°C. I expect that the gas consumption will be proportional to this figure.
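
To make the procedure concrete, here is a minimal sketch of the weekly calculation. The meter reading and external temperature are made-up illustrative numbers, and the calorific value is an approximate UK figure rather than my supplier’s exact value.

```python
# Estimating average heating power from weekly gas-meter readings.
# The gas volume and temperatures below are made-up illustrative numbers;
# the calorific value is an approximate UK figure (~39-40 MJ per cubic metre).

CALORIFIC_VALUE = 39.5e6      # joules released per cubic metre of natural gas
INTERNAL_TEMP = 18.0          # assumed average internal temperature, °C

def average_heating_power(gas_used_m3, seconds_elapsed):
    """Average power (watts) implied by the gas used between two meter readings."""
    energy_joules = gas_used_m3 * CALORIFIC_VALUE
    return energy_joules / seconds_elapsed

# Example: a cold winter week using 90 cubic metres of gas,
# with an average external temperature of 0 °C.
week_seconds = 7 * 24 * 3600
power = average_heating_power(90.0, week_seconds)
delta_t = INTERNAL_TEMP - 0.0

print(f"Average heating power: {power:.0f} W")
print(f"Heat loss coefficient: {power / delta_t:.0f} W/°C")
```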

What do the data look like?

Here is the rate at which my house uses gas (average heating rate in heating watts) versus days since the 1st January 2019.

Estimated average heating power due to gas heating in watts. Click for a larger view

And here is the difference of the average weekly external temperature from 18 °C versus days since the 1st January 2019.

Difference between the average weekly external temperature and 18 °C. Click for a larger view

The correlation between these data sets is striking. And it becomes even more striking if one plots both data sets on the same graph:

The data sets from the two data sets above plotted on the same graph. The weather data is referenced to the left hand axis and the gas consumption data is referenced to the right-hand axis. Click for a larger view.

Notice how the weekly rises or falls in the difference from average external temperature are reflected in corresponding increases or decreases in the rate of gas consumption.

Looking at the data up until day 250 (August 2019), the two data sets broadly overlap. Comparing the two scales on this graph, this means the house required about 6000 W to stay 18 °C above ambient, or roughly 333 W per °C above ambient.

This is rather more than the 280 W per °C I estimated previously.

Since then, the gas consumption data fall consistently below the weather data. I believe this is the effect of the triple-glazing which I installed last summer.

I calculated this would improve the thermal performance of the house by about 10%, and the data are broadly consistent with that.

What’s next?

I would love to have better data than this: but I don’t. For example, both internal and external temperatures change hourly. The internal temperature is affected by how many people are at home and at what time of day the heating is required. Perhaps the relevant internal temperatures should be 19 °C.

But being able to spot – albeit with the eye of faith – a 10% improvement in building performance is as good as most building engineers would hope for. And this data can be had for free without purchasing any monitoring equipment.

My understanding last year was that about 20% of the building heat loss was through the windows, and the triple-glazing has – roughly – halved that.

External wall insulation (EWI) should tackle the 80% of the building heat loss that goes through the walls. In principle EWI could cut this by 75% – see the figure below – but I am sceptical that this can be achieved, and I am hoping for a 50% cut in heat loss through the walls. The reason for my scepticism is that I have not included the effects of heat loss through the floors, or through draughts, neither of which I know how to estimate.

If a 50% improvement in heat loss through the walls is achievable then as I plot this data through next winter I should be able to reduce peak demand in the winter to below 2000 W – comfortably within the range of an electrically-powered air-source heat pump.

Operating with a coefficient of performance of 3, this should require an average of just 600 watts of electrical power – which can all be low carbon.

Exciting times…

Intuition and Experience

June 23, 2020

Or how thermal modelling taught me to appreciate the obvious.

It is a special kind of pleasure to find one’s intuition about something to be seriously wrong.

I recall the pleasure at learning what happened when one dropped a stretched slinky spring – check out the video if you haven’t seen it.

And concerning the pandemic I am still realising that despite being well-informed and numerate, I really have no intuition about what is happening. On 27th March I wrote “Well, I didn’t see this coming” – and on 25th April – with 10,000 deaths I thought we were ‘about halfway‘. 

But even much closer to home – in the realm of thermal physics – I can still get things spectacularly wrong. Which brings us to today’s case in point.

External Wall Insulation 

For some time now I have been on a quest to reduce the carbon footprint of my house: just heating the house has led to a shameful 2.5 tonnes of CO2 emissions per year.

So I have made a thermal model of the house and identified a sequence of steps to achieve as close to carbon-neutral operation as is feasible. Those steps are:

  • Triple Glazing
  • Draught-proofing
  • External Wall Insulation
  • Possible mechanical air flow with heat recovery
  • Replace gas boiler with heat pump
  • Add solar panels and a battery

Steps one and two are almost complete and my measurements show that they have had the expected 10% reduction in heat flow – more on how I estimated this in a follow-up article.

Now I have been considering external wall insulation. As this authoritative literature review makes clear, it is difficult to assess heat transfer through walls, and so it is difficult to assess the effectiveness of EWI. So in addition to the long-term energy monitoring that I undertake, I thought it would be useful to do a specific experiment to test heat flow through my walls.

My External Wall Insulation Experiment

I thought I would install some EWI with embedded temperature sensors to try to understand the likely effect. 

So back in January I bought two 50 mm thick polystyrene panels and stuck them to the external wall of the house, with temperature loggers monitoring the internal temperature, the external temperature, and the temperature at the junction between the wall and the insulation.

Click for a larger view

I set the loggers to monitor every 10 minutes and left them for the month of February during which external night-time temperatures reached 0 °C. The data are shown below:

Data showing the internal (red), external (blue) and interface (green) temperatures measured every 10 minutes during the month of February 2020. Click for a larger view.

When I recovered the data I discovered that to my surprise, the 100 mm thick external wall insulation appeared to make almost no difference! The temperature underneath the insulation was much closer than I expected to the external temperature!

I was puzzled.

Then the reason dawned on me. I had used my “intuition” to assume that a panel 450 mm high and 1000 mm wide had enough area to minimise “edge-effects”. 

My “intuition” told me that heat flow would be perpendicular to the wall: like this:

Click for a larger view

But I was wrong. The heat flow is much more like this:

Click for a larger view

 

In retrospect, the reason is easy to appreciate. The thermal conductivity of brick is around 0.8 W/m/K, but the thermal conductivity of polystyrene is 24 times lower. In fact the insulating properties of polystyrene are so good – and so different from the properties of brick – that heat from the house can reach the external environment more easily by flowing “upwards” through 225 mm of brick than it can “outwards” through 100 mm of polystyrene.

Just to be sure, I then wrote an elaborate two-dimensional simulation to verify my new experience-based “understanding”. Or dare I call it, my new “experience-based intuition”? And sure enough the model showed clearly that heat was flowing easily through the walls.

The model considered a 500 mm x 500 mm cross-section of the wall as being made of 60 x 60 = 3600 small elements, each just 8.3 mm square.

It then considered second-by-second the heat flow from each element into its neighbours, depending on their relative temperatures, and the thermal conductivity and heat capacity of brick, air or polystyrene. After about an hour of run time, several days of real time had been simulated.
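
For anyone curious what such a model involves, here is a heavily simplified sketch of the same idea in Python: an explicit finite-difference update of the temperature on a small 2-D grid of brick with a patch of polystyrene on the outside face, with the surrounding air treated as a 0 °C heat bath. The grid layout, heat capacities and time step are illustrative assumptions of mine, not the parameters of the model described above.

```python
import numpy as np

# Heavily simplified 2-D explicit finite-difference sketch of heat flow through
# a brick wall with a patch of external polystyrene insulation.
# All dimensions and material properties are illustrative assumptions.

N = 60                  # 60 x 60 grid of elements
dx = 0.0083             # element size in metres (~8.3 mm, so the grid spans ~0.5 m)
dt = 2.0                # time step in seconds (small enough for numerical stability)

# Thermal conductivity k (W/m/K) and volumetric heat capacity C (J/m^3/K)
k = np.full((N, N), 0.8)                 # brick by default
C = np.full((N, N), 1.6e6)

solid = np.zeros((N, N), dtype=bool)
solid[:, :27] = True                     # columns 0-26: ~225 mm of brick
solid[30:, 27:39] = True                 # ~100 mm polystyrene patch covering the
k[30:, 27:39] = 0.033                    # lower half of the outside face
C[30:, 27:39] = 2.0e4

T = np.zeros((N, N))                     # start everything at the outside temperature

def step(T):
    """One explicit time step of heat diffusion between neighbouring elements."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    Tn = T + dt * k * lap / C            # crude: uses each element's own k
    Tn[~solid] = 0.0                     # outside air treated as a 0 °C heat bath
    Tn[:, 0] = 18.0                      # inside face of the wall held at 18 °C
    return Tn

for _ in range(50_000):                  # roughly a day of simulated time
    T = step(T)

# Compare the brick temperature just under the insulation patch with the
# brick temperature behind the exposed (uninsulated) outside face.
print("Brick surface under insulation:", round(T[45, 26], 1), "°C")
print("Exposed brick surface:         ", round(T[15, 26], 1), "°C")
```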

Click for a larger view

I arranged to colour the elements of the model by their temperature in 0.2 °C steps to produce contours. This showed clearly that in the region of the interface sensor, much more heat was flowing vertically along the wall and around the polystyrene, than was flowing “outward” through it.

My Conclusions

I have come to a few conclusions.

Firstly, and most generally, my capacity for stupidity is depth-less and humiliating: Intuition is only useful when combined with experience.

Secondly, I have been reminded that measurement is what connects us to reality – it is indifferent to our predispositions or expectations. That’s why I love it!

Thirdly, I will just have to insulate the entire house and then measure the effect! I will write later about my plan to achieve that.

And finally, I have been reminded of the ingenuity of my colleagues at NPL who built systems for actually measuring properly what I have tried so inexpertly to measure – heat flow through building structures.

As building insulation and windows have improved, it has become harder and harder to make actual measurements of the thermal performance of building elements as opposed to making (possibly optimistic?) calculations.

But my NPL colleagues persisted until their unique facility was put in a skip a couple of years ago. Really? Indeed. Sadly, NPL management couldn’t think of a way to make money from it!

COVID-19: Day 170: Population Prevalence Projections

June 19, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.
========================================

This post is a 19th June update on the likely prevalence of COVID-19 in the UK population. (Previous update)

Population Prevalence

The Office for National Statistics (ONS) have updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link).

Previously they published the data week-by-week, then (irritatingly) fortnight-by-fortnight, and now they have shifted the dates of their fortnights so that the data cannot be compared directly with previous data. (Grrr)

Start | End | % testing positive for COVID-19 | Lower confidence limit (%) | Upper confidence limit (%)
3/4/2020 | 16/5/2020 | 0.27 | 0.13 | 0.47
17/5/2020 | 30/5/2020 | 0.10 | 0.05 | 0.19
31/5/2020 | 13/6/2020 | 0.06 | 0.02 | 0.13
Data from ONS

Plotting this on a graph we see a pleasingly decreasing trend.

Right-click and “open in a new tab” to see the graph in more detail.

I have not shown the old data on the figure above because the figure then becomes too confusing. The new data indicate a broadly similar story to the old data, with the (unweighted) exponential fit indicating a factor 10 reduction in prevalence every 43 days, similar to the previous estimate of 45 days.

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Early July | About 6,000
1 in 100,000 | Late August | About 600
Projected dates for a given level of COVID-19 prevalence

We can see the data in more detail if we plot them on a log-linear graph.

Right-click and “open in a new tab” to see the graph in more detail.

Limits of the survey power

We are probably approaching the lower limit of the population prevalence that this kind of survey can detect. Each fortnightly data point is based on testing roughly 20,000 people, and for the last data point, only 10 positive cases were detected.

The statistical rule-of-thumb is that the expected variability of a count is roughly the square root of the number counted. So the true population incidence amongst the 20,000 samples could easily have been 10 ± 3.

If the population incidence were 10 times lower – as we hope it will be in July/August – then surveying 20,000 people would result in only a single positive result. Or none. Estimates made using this sampling technique would then have large uncertainties.
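
One way to see this sampling limitation is to simulate the survey: the sketch below draws the number of positives that a 20,000-person sample would find at a few illustrative true prevalences (the prevalence values are examples, not ONS figures).

```python
import numpy as np

# Simulate how many positives a survey of 20,000 people would find
# at various (illustrative) true prevalences.
rng = np.random.default_rng(1)
sample_size = 20_000

for prevalence in (1e-3, 1e-4, 1e-5):            # 1 in 1,000 / 10,000 / 100,000
    positives = rng.binomial(sample_size, prevalence, size=1000)  # 1000 simulated surveys
    print(f"prevalence 1 in {int(round(1/prevalence)):>7,}: "
          f"median positives found = {int(np.median(positives))}, "
          f"surveys finding none = {np.mean(positives == 0):.0%}")
```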

This weakness highlights the ability of the virus to spread at a low level without detection, and the only way to counter this is with truly massive numbers of tests.

It is perhaps worth mentioning that, according to a few searches, a test kit and analysis costs on the order of £100. So 10,000 tests per week costs on the order of £1 million a week. In my opinion, this is a lot, but it is small compared to what we have spent so far – and what is at risk if the infection returns without our being able to detect it at the lowest levels.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projections.

Right-click and “open in a new tab” to see the graph in more detail.

The data again lie close to the predicted rate of decline. This is consistent with the population prevalence projection, falling by a factor 10 in about 50 days.

Notice that if recent relaxations of social restrictions are currently giving rise to more cases in the community now, this will not show up in this curve for several weeks. The deaths of people recorded now are from infections acquired in late May.

In future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

===========================
Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

COVID-19: Day 164: Population Prevalence Projections

June 14, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.
========================================

This post is an update on the likely prevalence of COVID-19 in the UK population in the coming weeks:

Population Prevalence

The Office for National Statistics (ONS) now have updated their survey data on the prevalence of people actively ill with COVID-19 in the general population (link).

Previously they published the data week-by-week, but now (irritatingly) they have grouped the data fortnight-by-fortnight.

Start | End | % testing positive for COVID-19 | Lower confidence limit (%) | Upper confidence limit (%)
26/04/2020 | 10/05/2020 | 0.27 | 0.17 | 0.41
11/05/2020 | 24/05/2020 | 0.22 | 0.10 | 0.43
25/05/2020 | 07/06/2020 | 0.06 | 0.02 | 0.12
Data from the ONS

Plotting this on a graph we see a pleasingly decreasing trend.

Right-click and “open in a new tab” to see the graph in more detail. The GREY data are the old data and the RED data are the new analysis.

The old data is shown in grey on the figure above. The new data (shown in red) indicate a broadly similar story. The population prevalence in mid-June (now) is below 0.1% and extrapolating using an exponential function implies – if the trend continued – a factor 10 reduction in prevalence every 45 days.

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Mid-July | About 6,000
1 in 100,000 | Start of September | About 600
1 in 1,000,000 | Mid October | About 60
Projected dates for a given level of COVID-19 prevalence

We can see the data in more detail if we plot them on a log-linear graph.

Right-click and “open in a new tab” to see the graph in more detail. The GREY data are the old data and the RED data are the new analysis

The analysis suggests that at the start of September the population prevalence of COVID-19 cases might be close to 10 in a million.

Re-stating what I said last week, I personally think that level would be low enough for life to proceed reasonably normally – albeit with some of our ‘new normal’ behaviours – and it is probably low enough for schools to operate safely with minimal fuss.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projections.

Right-click and “open in a new tab” to see the graph in more detail.

The data lie close to the predicted rate of decline. This is consistent with the population prevalence projection, falling by a factor 10 in about 50 days.

In my future updates I will continue to use the same World-o-meter projection to gauge whether the death rate is falling faster or slower than the model led us to expect.

===========================
Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

COVID-19: Day 157: Population Prevalence Projections

June 6, 2020

Warning: Discussing death is difficult, and if you feel you will be offended by this discussion, please don’t read any further.
========================================

Previously (link), I made estimates of the likely prevalence of COVID-19 in the UK population by combining:

  • A projection for daily deaths from world-o-meter with…
  • A single measure of population prevalence from the Office for National Statistics (ONS).

Now we have more data and so I have produced updated projections for both daily deaths and population prevalence.

Population Prevalence

The ONS now have survey data (link) on the prevalence of people actively ill with COVID-19 in the general population. The data covers 5 weeks:

Survey week | % testing positive for COVID-19 | Lower confidence limit | Upper confidence limit
26 April to 2 May | 0.44 | 0.25 | 0.72
3 May to 9 May | 0.31 | 0.20 | 0.50
10 May to 16 May | 0.22 | 0.15 | 0.36
17 May to 23 May | 0.16 | 0.10 | 0.25
24 May to 30 May | 0.11 | 0.06 | 0.19
Data from the ONS

Plotting this on a graph we see a pleasingly decreasing trend.

Right-click and “open in a new tab” to see the graph in more detail.

The population prevalence at the start of June (now) is around 0.1% and the trend is well described by an exponential function which – if the trend continued – would imply a factor 10 reduction in prevalence every 47 days.

Prevalence | Date | Cases in the UK
1 in 1,000 | Start of June | About 60,000
1 in 10,000 | Mid-July | About 6,000
1 in 100,000 | Start of September | About 600
1 in 1,000,000 | Mid October | About 60
Projected dates for a given level of COVID-19 prevalence.

We can see the data in more detail if we plot them on a log-linear graph.

Right-click and “open in a new tab” to see the graph in more detail.

This rate of decline is slower than any of us would like, but it is similar to the projection I made a couple of weeks ago.

It suggests that at the start of September the population prevalence of COVID-19 cases might be close to 10 in a million.

Personally, I think this is low enough for life to proceed reasonably normally – albeit with some of our ‘new normal’ behaviours – and probably low enough for schools to operate safely with minimal fuss.

Daily Deaths

Below I have also plotted the 7-day retrospective rolling average of the daily death toll along with the World-o-meter projections.

Right-click and “open in a new tab” to see the graph in more detail.

The predicted rate of decline is similar to the population prevalence projection, falling by a factor 10 in about 50 days.

In my future updates I will use the current World-o-meter projection to gauge whether the death rate is falling faster or slower than we currently hope for.

===========================
Discussing death is difficult, and if you have been offended by this discussion, I apologise. The reason I have written this is that I feel it is important that we all try to understand what is happening.

