Archive for the ‘Protons for Breakfast’ Category

Rocket Science

January 14, 2021

One of my lockdown pleasures has been watching SpaceX launches.

I find the fact that they are broadcast live inspiring. And the fact they will (and do) stop launches even at T-1 second shows that they do not operate on a ‘let’s hope it works’ basis. It speaks to me of confidence built on the application of measurement science and real engineering prowess.

Aside from the thrill of the launch and the beautiful views, one of the brilliant features of these launches is that the screen view gives lots of details about the rocket: specifically it gives time, altitude and speed.

When coupled with a little (public) knowledge about the rocket one can get to really understand the launch. One can ask and answer questions such as:

  • What is the acceleration during launch?
  • What is the rate of fuel use?
  • What is Max Q?

Let me explain.

Rocket Science#1: Looking at the data

To do my study I watched the video above starting at launch, about 19 minutes 56 seconds into the video. I then repeatedly paused it – at first every second or so – and wrote down the time, altitude (km) and speed (km/h) in my notebook. Later I wrote down data for every kilometre or so in altitude, then later every 10 seconds or so.

In all I captured around 112 readings, and then entered them into a spreadsheet (Link). This made it easy to convert the speeds to metres per second.
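If you want to repeat the exercise, the first step looks something like the minimal sketch below. The file name and column names are my own inventions for illustration – use whatever your spreadsheet exports.

```python
# A sketch of reading in the data and converting speeds to m/s.
# The file name and column names are illustrative assumptions.
import csv

times, altitudes_km, speeds_ms = [], [], []
with open("turksat5a_readings.csv") as f:
    for row in csv.DictReader(f):
        times.append(float(row["time_s"]))
        altitudes_km.append(float(row["altitude_km"]))
        # 1 km/h = 1000 m / 3600 s
        speeds_ms.append(float(row["speed_kmh"]) * 1000.0 / 3600.0)
```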

Then I plotted graphs of the data to see how they looked: overall I was quite pleased.

Speed (m/s) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The velocity graph clearly showed the stage separation. In fact looking in detail, one can see the Main Engine Cut Off (MECO), after which the rocket slows down for stage separation, and then the Second Engine Start (SES) after which the rocket’s second stage accelerates again.

Detail from the graph above showing the speed (m/s) of Falcon 9 versus time (s) after launch. After MECO the rocket is flying upwards without power and so slows down. After stage separation, the second stage then accelerates again.

It is also interesting that acceleration – the slope of the speed-versus-time graph – increases up to stage separation, then falls and then rises again.

The first stage acceleration increases because the thrust of the rocket is almost constant – but its mass is decreasing at an astonishing 2.5 tonnes per second as it burns its fuel!

After stage separation, the second stage mass is much lower, but there is only one rocket engine!

Then I plotted a graph of altitude versus time.

Altitude (km) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The interesting thing about this graph is that much of the second stage burn is devoted to increasing the speed of the spacecraft at almost constant altitude – roughly 164 km above the Earth. It’s not pushing the spacecraft higher and higher – but faster and faster.

About 30 minutes into the flight the second stage engine re-started, speeding up again and raising the altitude further to put the spacecraft on a trajectory towards a geostationary orbit at 35,786 km.

Rocket Science#2: Analysing the data for acceleration

To estimate the acceleration I subtracted the previous measurement of speed from each measurement of speed and then divided by the time between the two readings. This gives acceleration in units of metres per second per second, but I thought it would be more meaningful to plot the acceleration as a multiple of the strength of Earth’s gravitational field g (9.81 m/s/s).

The data as I calculated them had spikes in them because the small time differences between speed measurements (of the order of a second) were not very accurately recorded. So I smoothed the data by averaging 5 data points together.
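In code, the whole procedure is just a finite difference followed by a moving average. This sketch continues from the lists built in the snippet above.

```python
# Estimate acceleration (in multiples of g) by finite differences,
# then smooth with a 5-point moving average to suppress the spikes.
g = 9.81  # m/s^2

accel_g = [
    (speeds_ms[i + 1] - speeds_ms[i]) / (times[i + 1] - times[i]) / g
    for i in range(len(times) - 1)
]

def moving_average(values, n=5):
    """Average n consecutive points; the list shrinks by n - 1."""
    return [sum(values[i:i + n]) / n for i in range(len(values) - n + 1)]

accel_g_smooth = moving_average(accel_g, 5)
```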

Smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming the rocket used up fuel at a uniform rate.

The acceleration increased as the rocket’s mass reduced, reaching approximately 3.5g just before stage separation.

I then wondered if I could explain that behaviour.

  • To do that I looked up the launch mass of a Falcon 9 (data sources are given at the end of the article) and saw that it was 549 tonnes (549,000 kg).
  • I then looked up the mass of the second stage: 150 tonnes (150,000 kg).
  • I then assumed that the mass of the first stage was almost entirely fuel and oxidiser and guessed that the mass would decrease uniformly from T = 0 to MECO at T = 156 seconds. This gave a burn rate of 2558 kg/s – over 2.5 tonnes per second!
  • I then looked up the launch thrust from the 9 rocket engines and found it was 7,600,000 newtons (7.6 MN).
  • I then calculated the ‘theoretical’ acceleration using Newton’s Second Law (a = F/m) at each time step – remembering to decrease the mass by 2,558 kilograms each second, and remembering to subtract g: the rocket cannot leave the ground until its thrust exceeds its weight! A sketch of this calculation is given below.
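Here is a minimal sketch of that ‘theoretical’ calculation, using the figures quoted above. It deliberately ignores the throttle-down around Max Q and the changing direction of flight, so it will slightly overshoot near MECO.

```python
# Theoretical acceleration from Newton's Second Law, a = F/m - g,
# with the mass decreasing uniformly up to MECO at T = 156 s.
THRUST_N = 7.6e6            # launch thrust of the 9 engines
LAUNCH_MASS_KG = 549_000.0
BURN_RATE_KG_S = 2_558.0    # (549,000 - 150,000) / 156
G = 9.81

def theoretical_acceleration_in_g(t_s):
    """Predicted acceleration, as a multiple of g, t_s seconds after launch."""
    mass = LAUNCH_MASS_KG - BURN_RATE_KG_S * t_s
    net = THRUST_N / mass - G   # the weight must be overcome first
    return max(net, 0.0) / G    # the pad holds the rocket until thrust > weight

for t in (0, 50, 100, 150):
    print(f"T+{t:3d} s: {theoretical_acceleration_in_g(t):.2f} g")
```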

The theoretical line (– – –) catches the trend of the data pretty well. But one interesting feature caught my eye – a period of constant acceleration around 50 seconds into the flight.

This is caused by the Falcon 9 throttling back its engines to reduce stresses on the rocket as it experiences maximum aerodynamic pressure – so-called Max Q – around 80 seconds into flight.

Detail from the previous graph showing the smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming the rocket used up fuel at a uniform rate. Highlighted in red is the region around 50 seconds into flight when the engines are throttled back to reduce stresses as the craft experiences maximum aerodynamic pressure (Max Q) about 80 seconds into flight.

Rocket Science#3: Maximum aerodynamic pressure

Rockets look like they do – rocket-shaped – because they have to get through Earth’s atmosphere rapidly, pushing the air in front of them as they go.

The amount of work needed to do that is generally proportional to three factors:

  • The cross-sectional area A of the rocket. Narrower rockets require less force to push through the air.
  • The speed of the rocket squared (v²). One factor of v arises from the fact that travelling faster requires one to move the same amount of air out of the way faster. The second factor arises because moving air more quickly out of the way is harder due to the viscosity of the air.
  • The air pressure P. The density of the air in the atmosphere falls roughly exponentially with height, reducing by approximately 63% every 8.5 km.

The work done by the rocket on the air results in so-called aerodynamic stress on the rocket. These stresses – forces – are expected to vary as the product of the above three factors: A P v². The cross-sectional area of the rocket A is constant so in what follows I will just look at the variation of the product P v².

As the rocket rises, the pressure falls and the speed increases. So the product P v – and functions like P v² – will naturally have a maximum value.

The importance of the maximum of the product P v² (known as Max Q) as a point in flight is that if the aerodynamic forces are not uniformly distributed, then the rocket trajectory can easily become unstable – and Max Q marks the point at which the danger of this is greatest.

The graph below shows the variation of pressure P with time during flight. The pressure is calculated using:

P = 1000 × exp(−h/h0)

where the ‘1000’ is the approximate pressure at the ground (in mbar), h is the altitude at a particular time, and h0 is called the scale height of the atmosphere and is typically 8.5 km.

The atmospheric pressure calculated from the altitude h versus time after launch (s) during the Turksat 5A launch.

I then calculated the product P v², and divided it by 10 million to make it plot easily.
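The same two steps in code, continuing from the earlier sketches (it reuses the times, altitudes_km and speeds_ms lists): the barometric formula for P, then the scaled product P v² and the time at which it peaks.

```python
# Pressure from the barometric formula, then the aerodynamic-stress
# proxy P * v^2 (divided by 1e7, as in the graph) and its maximum.
import math

H0_KM = 8.5  # scale height of the atmosphere

pressures_mbar = [1000.0 * math.exp(-h / H0_KM) for h in altitudes_km]
stress = [p * v ** 2 / 1e7 for p, v in zip(pressures_mbar, speeds_ms)]

i_maxq = max(range(len(stress)), key=stress.__getitem__)
print(f"Max Q at about T+{times[i_maxq]:.0f} s")
```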

The aerodynamic stresses calculated from the altitude and speed versus time after launch during the Turksat 5A launch.

This calculation predicts that Max Q occurs about 80 seconds into flight, long after the engines throttled down, and in good agreement with SpaceX’s more sophisticated calculation.

Summary 

I love watching the SpaceX launches, and having analysed one of them just a little bit, I feel like I understand better what is going on.

These calculations are well within the capability of advanced school students – and there are many more questions to be addressed.

  • What is the pressure at stage separation?
  • What is the altitude of Max Q?
  • The vertical velocity can be calculated by measuring the rate of change of altitude with time.
  • The horizontal velocity can be calculated from the speed and the vertical velocity.
  • How does the speed vary from one mission to another?
  • Why does the craft aim for a particular speed?

And then there are the satellites themselves to study!

Good luck with your investigations!

Resources

And finally thanks to Jon for pointing me towards ‘Flight Club – One-Click Rocket Science’. This site does what I have done but with a good deal more attention to detail! Highly recommended.


Research into Nuclear Fusion is a waste of money

November 24, 2019

I used to be a Technological Utopian, and there has been no greater vision for a Technical Utopia than the prospect of limitless energy at low cost promised by Nuclear Fusion researchers.

But glowing descriptions of the Utopia which awaits us all, and statements by fusion Utopians such as:

Once harnessed, fusion has the potential to be nearly unlimited, safe and CO2-free energy source.

are deceptive. And I no longer believe this is just the self-interested optimism characteristic of all institutions.

It is a damaging deception, because money spent on nuclear fusion research could be spent on actual solutions to the problem of climate change. Solutions which exist right now and which could be implemented inside a decade in the UK.

Reader: Michael? Are you OK? You seem to have come over a little over-rhetorical?

Me: Thanks. Just let me catch my breath and I’ll be fine. Ahhhhhh. Breathe…..

What’s the problem?

Well let’s just suppose that the current generation of experiments at JET and ITER are ‘successful’. If so, then having started building in 2013:

  • By 2025 the plant should be ready for initial plasma experiments.
  • Unbelievably, full deuterium–tritium fusion experiments will not start until 2035!
    • I could not believe this so I checked. Here’s the link.
    • I can’t find a source for it, but I have been told that the running lifetime of ITER with deuterium and tritium is just 4000 hours.
  • The cost of this experiment is hard to find written down – ITER has its own system of accounting! – but will probably be around 20 billion dollars.

And at this point, without having ever generated a single kilowatt of electricity, ITER will be decommissioned and its intensely radioactive core will be allowed to cool down until it can be buried.

The ‘fusion community’ would then ask for another 20 billion dollars or so to fund a DEMO power station which might be operational around 2050. At which point after a few years of DEMO operation, commercial designs would become available.

So the overall proposal is to spend about 40 billion dollars over the next 30 years to find out if a ‘commercial’ fusion power station is viable.

This plan is the embodiment of madness that could only be advocated by Technological Utopians who have lost track of the reason that fusion might once have been a good idea.

Let’s look at the problems in the most general terms.

1. Cost

Fusion will not be cheap. If we look at the current generation of nuclear fission stations, such as Hinkley C, then these will cost around £20 billion each.

Despite the fact that the technology for building nuclear fission reactors is now half a century old, earlier versions of the Hinkley C reactor design being built at Olkiluoto and Flamanville are many years late, massively over budget and in fact may never be allowed to operate.

Assuming Hinkley C does eventually become operational, the cost of the electricity it produces will be barely affected by the fuel it uses. More than 90% of the cost of the electricity is paying back the debt used to finance the reactor. It will produce the most expensive electricity ever supplied in the UK.

Nuclear fusion reactors designed to produce a gigawatt of electricity would definitely be engineering behemoths in the same category of engineering challenge as Hinkley C, but with much greater complexity and many more unknown failure modes. 

The ITER Torus. The scale and complexity are hard to comprehend. Picture produced by Oak Ridge National Laboratory [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)]

Even in the most optimistic case – an optimism which we will see is not easy to justify – it is inconceivable that fusion technology could ever produce low cost electricity.

I don’t want to live in a world with
nuclear fusion reactors, because
I don’t want to live in a world
where electricity is that expensive.
Unknown author

2. Sustainable

One of the components of the fuel for a nuclear fusion reactor – deuterium – is readily available on Earth. It can be separated from sea water at modest cost.

The other component – tritium – is extraordinarily rare and expensive. It is radioactive with a half-life of about 12 years.

To become <irony>sustainable</irony>, a major task of a fusion reactor is to manufacture tritium.

The ‘plan’ is to do this by bombarding lithium-6 with neutrons causing a reaction yielding tritium and helium.

Ideally, every single neutron produced in the fusion reaction would be captured, but in fact many of them will be lost. Instead, a ‘neutron multiplication’ process is conceived of, despite the intense radioactive waste this will produce.

3. Technical Practicality

I have written enough here and so I will just refer you to this article published on the web site of the Bulletin of Atomic Scientists.

This article considers:

  • The embedded carbon and costs
  • Optimistic statements of energy balance that fail to recognise the difference between:
    • The thermal energy of particles in the plasma
    • The thermal energy extracted – or extractable.
    • The electrical energy supplied for operation
  • Other aspects of the tritium problem I mentioned above.
  • Radiation and radioactive waste
  • The materials problems caused by – putatively – decades of neutron irradiation.
  • The cooling water required.

I could add my own concerns about neutron damage to the immense superconducting magnets that are just a metre or so away from the hottest place in the solar system.

In short, there are really serious problems that have no obvious solution.

4. Alternatives

If there were no alternative, then I would think it worthwhile to face down all these challenges and struggle on.

But there are really good alternatives based on that fusion reactor in the sky – the Sun.

We can extract energy directly from sunlight, and from the winds that the Sun drives around the Earth.

We need to capture only 0.02% of the energy in the sunlight reaching Earth to power our entire civilisation!

The complexity and cost of fusion reactors even make fission reactors look good!

And all the technology that we require to address what is acknowledged as a climate emergency exists here and now.

By 2050 – when (optimistically?) the first generation of fusion reactors might be ready to be built – carbon-free electricity production could be a solved problem.

Nuclear fusion research is, at its best, a distraction from the problem at hand. At worst, it sucks money and energy away from genuinely renewable energy technologies which need it.

We should just stop it all right now.

Hazards of Flying

November 17, 2019

Radiation Dose

RadEye Geiger Counter on my lap in the plane.

It is well-known that by flying in commercial airliners, one exposes oneself to increased intensity of ionising radiation.

But it is one thing to know something in the abstract, and another to watch it in front of you.

Thus on a recent flight from Zurich I was fascinated to use a Radeye B20-ER survey meter to watch the intensity of radiation rise with altitude as I flew home.

Graph showing the dose rate in microsieverts per hour as a function of time before and after take-off. The dose rate at cruising altitude was around 25 times that on the ground.

During the flight from Zurich, the accumulated radiation dose was almost equal to my entire daily dose in the UK.

The absolute doses are not very great (some typical doses). The dose on the flight from Zurich (about 2.2 microsieverts) was roughly equivalent to the dose from a dental X-ray, or one whole day’s dose in the UK.
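The accumulated dose is just the dose rate integrated over the flight. A sketch with made-up readings (not the actual RadEye log) shows the idea:

```python
# Integrate dose rate over time (trapezium rule) to get the total dose.
# These readings are illustrative placeholders, not the real log.
times_h = [0.0, 0.25, 0.5, 1.0, 1.5]     # hours after departure
rate_usv_h = [0.1, 0.5, 2.5, 2.5, 0.2]   # microsieverts per hour

dose_usv = sum(
    0.5 * (rate_usv_h[i] + rate_usv_h[i + 1]) * (times_h[i + 1] - times_h[i])
    for i in range(len(times_h) - 1)
)
print(f"Accumulated dose = {dose_usv:.1f} microsieverts")  # about 2.4 here
```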

But for people who fly regularly the effects mount up.

Given how skittish people are about exposing themselves to any hazard, I am surprised that more is not made of this – it is certainly one more reason to travel by train!

CO2 Exposure

Although I knew that by flying I was exposing myself to higher levels of radiation – I was not aware of how high the levels of carbon dioxide can become in the cabin.

I have been using a portable detector for several months. I was sceptical that it really worked well, and needed to re-assure myself that it reads correctly. I am now more or less convinced and the insights it has given have been very helpful.

In fresh air the meter reads around 400 parts per million (ppm) – but in the house, levels can exceed this by a factor of two – especially if I have been cooking using gas.

One colleague plotted levels of CO2 in the office as a function of the number of people using the office. We were then able to make a simple airflow model based on standard breathing rates and the specified number of air changes per hour (a sketch of this kind of model is given below).

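For anyone who wants to try the same thing, here is a minimal sketch of such a model. At steady state the CO2 generated by the occupants balances what the ventilation removes; the generation rate and air-change figures below are typical textbook values, not our office’s actual specification.

```python
# Steady-state CO2 in a ventilated room:
# indoor = outdoor + (occupant generation) / (ventilation flow).
def steady_state_co2_ppm(n_people, room_volume_m3, air_changes_per_hour,
                         outdoor_ppm=400.0, co2_per_person_m3_h=0.02):
    """CO2 concentration (ppm) once generation and ventilation balance."""
    ventilation_m3_h = room_volume_m3 * air_changes_per_hour
    return outdoor_ppm + 1e6 * n_people * co2_per_person_m3_h / ventilation_m3_h

# Six people in a 150 m3 office with 2 air changes per hour -> about 800 ppm.
print(steady_state_co2_ppm(6, 150.0, 2.0))
```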

However I was surprised at just how high the levels became in the cabin of an airliner.

The picture below shows CO2 levels in the bridge leading to the plane at Zurich Airport. Levels around 1500 ppm are indicative of very poor air quality.


Carbon dioxide concentration on the bridge leading to the plane – notice the rapid rise.

The picture below shows that things were even worse in the aeroplane cabin as we taxied on the tarmac.


Carbon dioxide concentration measured in the cabin while we taxied on the ground in Zurich.

Once airborne, levels quickly fell to around 1000 ppm – still a high level – but much more comfortable.

I have often felt preternaturally sleepy on aircraft and now I think I know why – carbon dioxide concentrations at this level can easily induce drowsiness.

One more reason not to fly!


Getting there…

November 14, 2019

Life is a journey to a well-known destination. It’s the ‘getting there’ that is interesting.

The journey has been difficult these last few weeks. But I feel like I am ‘getting there’.

Work and non-work

At the start of 2019 I moved to a 3-day working week, and at first I managed to actually work around 3 days a week, and felt much better for it.

But as the year wore on, I have found it more difficult to limit my time at work. This has been particularly intense these last few weeks.

My lack of free time has been making me miserable. It has limited my ability to focus on things I want to do for personal, non-work reasons.

Any attention I pay to a personal project – such as writing this blog – feels like a luxurious indulgence. In contrast, work activities acquire a sense of all-pervading numinous importance.

But despite this difficulty – I feel like I am better off than last year – and making progress towards the mythical goal of work-life balance on the way to a meaningful retirement.

I am getting there!

Travelling 

Mainly as a result of working too much, I am still travelling too much by air. But on some recent trips to Europe I was able to travel in part by train, and it was surprisingly easy and enjoyable.

I am getting there! By train.

My House

The last of the triple-glazing has been installed in the house. Nine windows and a door (around £7200 since you asked) have been replaced.

Many people have knowingly asked: “What’s the payback time?”

  • Using financial analysis the answer is many years.
  • Using moral and emotional analysis, the payback has been instantaneous.

It would be shameful to have a house which spilt raw sewage onto the street. I feel the same way about the 2.5 tonnes of carbon dioxide my house currently emits every winter.

This triple-glazing represents the first steps in bringing my home up to 21st Century Standards and it is such a relief to have begun this journey.

I will monitor the performance over the winter to see if it coincides with my expectations, and then proceed to take the next steps in the spring of 2020.

I am getting there! And emitting less carbon dioxide in the process.

Talking… and listening

Physics in Action 3

Yesterday I spoke about the SI to more than 800 A level students at the Emmanuel Centre in London. I found the occasion deeply moving.

  • Firstly, the positivity and curiosity of this group of young people was palpable.
  • Secondly, their interest in the basics of metrology was heartwarming.
  • Thirdly, I heard Andrea Sella talk about ‘ice’.

Andrea’s talk linked the extraordinary physical properties of water ice to the properties of ice on Earth: the dwindling glaciers and the retreat of sea-ice.

He made the connection between our surprise that water ice was in any way unusual and the journalism of climate change denial perpetrated by ‘newspapers’ such as the Daily Mail.

This link between the academic and the political was shocking to hear in this educational context – but essential as we all begin our journey to a new world in which we acknowledge what we have done to Earth’s climate.

We have a long way to go. But hearing Andrea clearly and truthfully denounce the lies to which we are being exposed was personally inspiring.

We really really are getting there. 

What it takes to heat my house: 280 watts per degree Celsius above ambient

August 16, 2019


The climate emergency calls on us to “Think globally and act locally“. So moving on from distressing news about the Climate, I have been looking to reduce energy losses – and hence carbon dioxide emissions – from my home.

One of the problems with doing this is that one is often working ‘blind’ – one makes choices – often expensive choices – but afterwards it can be hard to know precisely what difference that choice has made.

So the first step is to find out the thermal performance of the house as it is now. This is as tedious as it sounds – but the result is really insightful and will help me make rational decisions about how to improve the house.

Using the result from the end of the article I found out that to keep my house comfortable in the winter, for each degree Celsius that the average temperature falls below 20 °C, I currently need to use around 280 W of heating. So when the temperature is 5 °C outside, I need to use 280 × (20 – 5) = 4200 watts of heating.

Is this a lot? Well that depends on the size of my house. By measuring the wall area and window area of the house, this figure allows me to work out the thermal performance of the walls and windows. And then I can estimate how much I could reasonably hope to improve the performance by using extra insulation or replacing windows. These details will be the topic of my next article.

In the rest of this article I describe how I made the estimate for my home which uses gas for heating, hot water, and cooking. My hope is it will help you make similar estimates for your own home.

Overall Thermal Performance

The first step to assessing the thermal performance of the house was to read the gas meter – weekly: I did say it was tedious. I began doing that last November.

One needs to do this in the winter and the summer. Gas consumption in winter is dominated by heating, and the summer reading reveals the background rate of consumption for the other uses.

My meter reads gas consumption in units of ‘hundreds of cubic feet’. This archaic unit can be converted to energy units – kilowatt-hours using the formula below.

Energy used in kilowatt-hours = Gas Consumption in 100’s of cubic feet × 31.4

So if you consume 3 gas units per day i.e. 300 cubic feet of gas, then that corresponds to 3 × 31.4 = 94.2 kilowatt-hours of energy per day, and an average power of 94.2 / 24 = 3.925 kilowatts, i.e. 3,925 watts.
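As a small helper, the conversion looks like this (the 31.4 factor is the one quoted above):

```python
# Convert gas-meter units (hundreds of cubic feet) to energy and power.
def gas_units_to_power(units_per_day):
    """Return (kWh per day, average power in watts) for a daily meter reading."""
    kwh_per_day = units_per_day * 31.4
    average_watts = kwh_per_day * 1000.0 / 24.0
    return kwh_per_day, average_watts

kwh, watts = gas_units_to_power(3.0)
print(f"{kwh:.1f} kWh/day, average power {watts:.0f} W")  # 94.2 kWh/day, 3925 W
```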

The second step is to measure the average external temperature each week. This sounds hard but is surprisingly easy thanks to Weather Underground.

Look up their ‘Wundermap‘ for your location – you can search by UK postcode. They have data from thousands of weather stations available.

To get historical data I clicked on a nearby weather station (it was actually the one in my garden [ITEDDING4] but any of the neighbouring ones would have done just as well). I then selected ‘weekly’ mode and noted down the average weekly temperature for each week in the period from November 2018 to August 2019.


Weather history for my weather station. Any nearby station would have done just as well. Select ‘Weekly Mode’ and then just look at the ‘Average temperature’. You can navigate to any week using the ‘Next’ and ‘Previous’ buttons, or by selecting a date from the drop-down menus.

Once I had the average weekly temperature, I then worked out the difference between the internal temperature in the house – around 20 °C and the external temperature.

I expected the gas consumption to be correlated with the difference from 20 °C, but I was surprised by how close the correlation was.


Averaging the winter data in the above graph, I estimate that it takes approximately 280 watts to keep my house at 20 °C for each 1 °C that the temperature falls below 20 °C.
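The fit itself is a one-line least-squares slope. This sketch uses made-up weekly readings in place of my real ones, chosen to give roughly the same answer:

```python
# Least-squares slope (through the origin) of heating power versus
# the temperature difference (20 C - outdoor temperature).
# The weekly readings here are illustrative placeholders.
temps_c = [4.0, 6.5, 9.0, 12.0, 15.0]                # weekly average outdoor temp
powers_w = [4500.0, 3800.0, 3100.0, 2200.0, 1400.0]  # weekly average heating power

diffs = [20.0 - t for t in temps_c]
slope = sum(d * p for d, p in zip(diffs, powers_w)) / sum(d * d for d in diffs)
print(f"~ {slope:.0f} W per degree C below 20 C")  # ~ 280 here
```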

Discussion

I have ignored many complications in arriving at this estimate.

  • I ignored the variability in the energy content of gas
  • I ignored the fact that less than 100% of the energy of the gas is used for heating

But nonetheless, I think it fairly represents the thermal performance of my house with an uncertainty of around 10%.

In the next article I will show how I used this figure to estimate the thermal performance – the so-called ‘U-values’ – of the walls and windows.

Why this matters

As I end, please let me explain why this arcane and tedious stuff matters.

Assuming that the emissions of CO2 were around 0.2 kg of CO2 per kWh of thermal energy, my meter readings enable me to calculate the carbon dioxide emissions from heating my house last winter.
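The arithmetic is simple: each day’s gas energy multiplied by the assumed emission factor, accumulated through the winter. The 85 kWh/day figure below is an illustrative round number consistent with the totals quoted here, not my actual meter log.

```python
# Daily and cumulative CO2 from gas: energy (kWh) x emission factor (kg/kWh).
CO2_KG_PER_KWH = 0.2        # assumed emission factor for natural gas
daily_kwh = [85.0] * 147    # ~147-day heating season at ~85 kWh/day (illustrative)

daily_kg = [kwh * CO2_KG_PER_KWH for kwh in daily_kwh]
total_tonnes = sum(daily_kg) / 1000.0
print(f"{daily_kg[0]:.0f} kg/day, {total_tonnes:.1f} tonnes over the winter")
```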

The graph below shows the cumulative CO2 emissions…


Through the winter I emitted 17 kg of CO2 every day – amounting to around 2.5 tonnes of CO2 emissions in total.

2.5 tonnes????!!!!

This is around a factor of 10 more than the waste we dispose of or recycle. I am barely conscious that 2.5 tonnes of ANYTHING have passed through my house!

I am stunned and appalled by this figure.

Without stealing the thunder from the next article, I think I can see a way to reduce this by a factor of three at least – and maybe even six.

Gravity Wave Detector#1

July 6, 2017
Not Charlie Chaplin: That’s me and Albert Einstein. A special moment for me. Not so much for him.

I belong to an exclusive club! I have visited two gravity wave detectors in my life.

Neither of the detectors has ever detected gravity waves, but nonetheless, both of them filled me with admiration for their inventors.

Bristol, 1987 

In 1987, the buzz of the discovery of high-temperature superconductors was still intense.

I was in my first post-doctoral appointment at the University of Bristol and I spent many late late nights ‘cooking’ up compounds and carrying out experiments.

As I wandered around the H. H. Wills Physics department late at night I opened a door and discovered a secret corridor underneath the main corridor.

Stretching for perhaps 50 metres along the subterranean hideout was a high-tech arrangement of vacuum tubing, separated every 10 metres or so by a ‘castle’ of vacuum apparatus.

It lay dormant and dusty and silent in the stillness of the night.

The next day I asked about the apparatus at morning tea – a ritual amongst the low-temperature physicists.

It was Peter Aplin who smiled wryly and claimed ownership. Peter was a kindly antipodean physicist, a generalist – and an expert in electronics.

New Scientist article from 1975

He explained that it was his new idea for a gravity wave detector.

In each of the ‘castles’ was a mass suspended in vacuum from a spring made of quartz.

He had calculated that by detecting ‘ringing’ in multiple masses, rather than in a single mass, he could make a detector whose sensitivity scaled as its Length² rather than as its Length.

He had devised the theory; built the apparatus; done the experiment; and written the paper announcing that gravity waves had not been detected with a new limit of sensitivity.

He then submitted the paper to Physical Review. It was at this point that a referee had reminded him that:

When a term in L² is taken from the left-hand side of the equation to the right-hand side, it changes sign. You will thus find that in your Equation 13, the term in L² will cancel.

And so his detector was not any more sensitive than anyone else’s.

And so…

If it had been me, I think I might have cried.

But as Peter recounted this tale, he did not cry. He smiled and put it down to experience.

Peter was – and perhaps still is – a brilliant physicist. And amongst the kindest and most helpful people I have ever met.

And I felt inspired by his screw up. Or rather I was inspired by his ability to openly acknowledge his mistake. Smile. And move on.

30 years later…

…I visited Geo 600. And I will describe this dramatically scaled-up experiment in my next article.

P.S. (Aplin)

Peter S Aplin wrote a review of gravitational wave experiments in 1972 and had a paper at a conference called “A novel gravitational wave antenna“. Sadly, I don’t have easy access to either of these sources.


Not everything is getting worse!

April 19, 2017

Carbon Intensity April 2017

Friends, I find it hard to believe, but I think I have found something happening in the world which is not bad. Who knew such things still happened?

The news comes from the fantastic web site MyGridGB which charts the development of electricity generation in the UK.

On the site I read that:

  • At lunchtime on Sunday 9th April 2017,  8 GW of solar power was generated.
  • On Friday all coal power stations in the UK were off.
  • On Saturday, strong winds and solar combined with low demand to briefly provide 73% of power.

All three of these facts fill me with hope. Just think:

  • 8 gigawatts of solar power. In the UK! IN APRIL!!!
  • And no coal generation at all!
  • And renewable energy providing 73% of our power!

Even a few years ago each of these facts would have been unthinkable!

And even more wonderfully: nobody noticed!

Of course, these were just transients, but they show we have the potential to generate electricity with a very low carbon intensity.

Carbon Intensity is a measure of the amount of carbon dioxide emitted into the atmosphere for each unit (kWh) of electricity generated.

Wikipedia tells me that electricity generated from:

  • Coal has a carbon intensity of about 1.0 kg of CO2 per kWh
  • Gas has a carbon intensity of about 0.47 kg of CO2 per kWh
  • Biomass has a carbon intensity of about 0.23 kg of CO2 per kWh
  • Solar PV has a carbon intensity of about 0.05 kg of CO2 per kWh
  • Nuclear has a carbon intensity of about 0.02 kg of CO2 per kWh
  • Wind has a carbon intensity of about 0.01 kg of CO2 per kWh

The graph at the head of the page shows that in April 2017 the generating mix in the UK had a carbon intensity of about 0.25 kg of CO2 per kWh.
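The overall figure is just a weighted average of the intensities listed above. A sketch with an illustrative mix (not the actual April 2017 breakdown) shows how a gas-heavy, low-coal mix lands near 0.25 kg per kWh:

```python
# Overall carbon intensity = sum over sources of (intensity x share of mix).
INTENSITY_KG_PER_KWH = {  # from the list above
    "coal": 1.0, "gas": 0.47, "biomass": 0.23,
    "solar": 0.05, "nuclear": 0.02, "wind": 0.01,
}
# An illustrative generating mix (fractions summing to 1), not real data.
mix = {"gas": 0.40, "nuclear": 0.20, "wind": 0.18,
       "solar": 0.10, "biomass": 0.09, "coal": 0.03}

overall = sum(INTENSITY_KG_PER_KWH[s] * f for s, f in mix.items())
print(f"{overall:.2f} kg CO2 per kWh")  # ~0.25 with this mix
```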

MyGridGB’s mastermind is Andrew Crossland. On the site he has published a manifesto outlining a plan which would actually reduce our carbon intensity to less than 0.1 kg of CO2 per kWh.

What I like about the manifesto is that it is eminently doable.

And who knows? Perhaps we might actually do it?

Ahhhh. Thank you Andrew.

Even thinking that a good thing might still be possible makes me feel better.


Science in the face of complexity

February 4, 2016
Jeff Dahn: Battery Expert at Dalhousie University

My mother-in-law bought me a great book for Christmas: Black Box Thinking by Matthew Syed: Thanks Kathleen 🙂

The gist of the book is easy to state: our cultural attitude towards “failure” – essentially one of blame and shame – is counterproductive.

Most of the book is spent discussing this theme in relation to the practice of medicine and the law, contrasting attitudes in these areas to those in modern aviation. The stories of unnecessary deaths and of lives wasted are horrific and shocking.

Engineering

But when he moves on to engineering, the theme plays out more subtly. He discusses the cases of James Dyson, the Mercedes Formula 1 team, and David Brailsford from Sky Cycling. All of them have sought success in the face of complexity.

In the case of Dyson, his initial design of a ‘cyclone-based’ dust extractor wasn’t good enough, and the theory was too complex to guide improvements. So he started changing the design and seeing what happened. As recounted, he investigated 5,127 prototypes before he was satisfied with the results. The relevant point here is that his successful design created 5,126 failures.

One of his many insights was to devise a simple measurement technique that detected tiny changes in the effectiveness of his dust extraction: he sucked up fine white dust and blew the exhaust over black velvet.

Jeff Dahn

This approach put me in mind of Jeff Dahn, a battery expert I met at Dalhousie University.

Batteries are really complicated and improving them is hard because there are so many design features that could be changed. What one wants is a way to test as many variants as quickly and as sensitively as possible in order to identify what works and what doesn’t.

However when it comes to battery lifetime – the rate at which the capacity of a battery falls over time – it might seem inevitable that testing would take years.

Not so. By charging and discharging batteries in a special manner and at elevated temperatures, it is possible to accelerate the degradation. Jeff then detects this with precision measurements of the ‘coulombic efficiency’ of the cell.

‘Coulombic efficiency’ sounds complicated but is simple. One first measures the electric current as the cell is charged. If the electric current is constant during charging then the electric current multiplied by the charging time gives the total amount of electric charge stored in the cell. One then measures the same thing as the cell discharges.

For the lithium batteries used in electric cars and smart phones, the coulombic efficiency is around 99.9%. But it is that tiny amount (less than 0.1%) of the electric charge which doesn’t come back that is progressively damaging the cell and limiting its life.

One of Jeff’s innovations is the application of precision measurement to this problem. By measuring electric currents with uncertainties of around one part in a million, Jeff can measure that 0.1% of non-returned charge with an uncertainty of around 0.1%. So he can distinguish between cells that are 99.95% efficient and 99.96% efficient. That may not sound much, but the second one is 20% better!
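In code the efficiency itself is trivial – the hard part, and Jeff’s innovation, is measuring the currents precisely enough for the answer to mean anything. A sketch, assuming constant charge and discharge currents:

```python
# Coulombic efficiency: charge recovered on discharge / charge stored.
# With constant currents, charge is just current x time.
def coulombic_efficiency(charge_a, charge_s, discharge_a, discharge_s):
    """Fraction of the stored charge that comes back out of the cell."""
    return (discharge_a * discharge_s) / (charge_a * charge_s)

eff = coulombic_efficiency(1.000, 3600.0, 1.000, 3596.4)
print(f"{eff:.4%}")  # 99.9000% - the missing 0.1% is what degrades the cell
```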

By looking in detail at the coulombic efficiency, Jeff can tell in a few weeks whether a new design of electrode will improve or degrade battery life.

The sensitivity of this test is akin to the ‘white dust on black velvet’ test used by Dyson: it doesn’t tell him why something got better or worse – he has to figure that out for himself. But it does tell him quickly which things were bad ideas.

I couldn’t count the ammeters in Jeff’s lab – each one attached to a test cell – but he was measuring hundreds of cells simultaneously. Inevitably, most of these tests will make the cells perform worse and be categorised as ‘failures’.

But this system allows him to fail fast and fail often: and it is this capability that allows him to succeed at all. I found this application of precision measurement really inspiring.

Thanks Jeff.


Explanations are not always possible.

April 28, 2015
I asked Google how to get from NPL to Richmond and it assumed I meant Richmond, Virginia, USA instead of Richmond upon Thames which is 5 kilometres away.

I have spent a fair amount of time in my life trying to explain things to people. But I think now – in all but the most basic of cases – explanations are impossible.

The reason I feel this is because I think that giving an explanation is like giving directions. And most people will acknowledge that unless you know where someone is ‘starting from’, it is impossible to give general directions to a given ‘destination’.

But while we accept that every set of directions should start with the question: “Where are you now?“, people are reluctant to acknowledge that logically every explanation should start with the question: “What do you know now?”.

Instead there seems to be a widespread belief that explanations can exist ‘by themselves’.

Of course we can draw maps and then explain how to navigate the map. And if someone can follow this, then they can learn the route to a particular ‘destination’. Or someone might already be familiar with the ‘landscape’. In these cases explanations are possible.

But many people find maps difficult. However:

  • Getting someone to drive you to a destination does not in general teach you the route.
  • And programming a ‘sat-nav’ to take someone to a particular location will also – in general – fail to teach them the route. They may have ‘arrived’ but they will be ‘lost’.
  • Travelling by tube to a location teaches you nothing about where you are!

Similarly, by sophistry, or by entertaining imagery, it is possible to give people the illusion that they understand something. But unless they can link this new state of insight to their previous understanding, they will still be ‘lost’.

I thought I would illustrate the general idea with a picture of a route on a Google map. But when I tried to generate a route from Teddington to nearby Richmond (upon Thames), Google assumed that the word ‘Richmond’ referred to the much more populous Richmond, Virginia!

And the impossibility of explanations is clear in this video of Dave Allen ‘explaining’ how to tell the time. It features the classic lines:

“There are three hands on a clock. The first hand is the hour hand, the second hand is the minute hand, and the third hand is the second hand.”

Enjoy.

Theories and Facts about Santa Claus

December 21, 2014

My friend Alom Shaha recently scripted the video above to try to explain the concept of ‘a scientific theory’.

I like the video, but I feel one point gets lost. And that is that ‘theories’ are like ‘plot lines’ in a TV detective programme – they link together ‘the facts’ to tell ‘a story’.

Example#1: Café Mimmo

  • I am observed at 10:15 a.m. leaving my house on Church Road
  • I am observed at 10:21 a.m. at the junction of Church Road and Walpole St.
  • I am observed at 10:27 a.m. near to Café Mimmo on Broad Street.
  • I am observed at 10:28 a.m. in Café Mimmo on Broad Street.
  • I am observed at 10:58 a.m. walking North on Church Road.

These are ‘the facts’. But what is really going on? Is there a ‘story’ that links all these facts together? Let me propose a theory:

  • Theory: Michael goes for coffee every day at Café Mimmo

This theory links all these facts i.e. it explains how they relate to each other.

If this is a good theory, it ought to be able to predict new facts – and these predictions could then be checked.

Notice that the theory doesn’t specify the route I take to the Café. So even though the theory explains why I was seen on Church Road, it doesn’t predict that I always will be.

But the theory does predict that I will go to Café Mimmo every day. This could be easily tested. And if I didn’t visit every day, the theory could either be invalidated, or it might need to be revised to state: ‘Michael goes for coffee most days at Café Mimmo’.

I am sure you get the idea. But notice how the theory is simpler and more meaningful than a large number of facts. Facts tell us ‘what happened’, but theories tell us ‘what’s going on’.

Example#2: Santa Claus

  • On Christmas Day (usually, but sometimes on Christmas Eve) children all around the world receive presents.
  • On Christmas Eve parents leave out gifts – typically whisky (please) and a carrot – and these gifts are at least partly consumed by the morning.

These are ‘the facts’. But is there a theory that links these facts together? In fact there are two theories.

The traditional theory is that a mysterious being known as ‘Santa Claus’ delivers the presents on a sleigh pulled by flying reindeer and filled with an improbably large bag of toys.

Alternatively, some people contend that there is a vast conspiracy in operation in which otherwise honest and rational parents consistently lie to their children, year after year. According to this theory, it is in fact parents that buy all the presents, and fabricate forensic evidence of a visit by the fictional being ‘Santa Claus’.

The traditional theory has been heavily criticised, largely due to the unknown identity and quasi-magical abilities of the ‘Santa Claus’ figure.

However, I have never been a fan of conspiracy theories – they always seem rather unlikely to me. In particular I am sceptical that millions upon millions of parents would systematically lie to their children. I would never do such a thing.

So I will leave it up to you to imagine experiments that you could perform that would help to decide which theory is in fact correct.

One obvious experiment is to stay up late on Christmas Eve and watch for the arrival of ‘Santa Claus’.

But can I please ask you to be careful: Santa Claus is known never to bring gifts to boys and girls who don’t go to sleep at night. So use a webcam.

But whatever you believe: I hope you have a Happy Christmas 🙂

