Archive for the ‘Uncategorized’ Category

Summer Solstice 2022: Solar PV update

June 21, 2022

The Summer Solstice seems like a good time to take a look at the first half-year of generation from our 12 solar panels and the effect of our Tesla Powerwall battery.

Last year at around this time I wrote that – having been off-grid for 90 days – I felt like I was ‘floating’. And then just a couple of days later I came back down to Earth after several consecutive days of unseasonably dull weather led to me having to buy some full-price electricity! In mid-summer!

This year we have been off-grid for only 50 days so far and I will discuss the reason for the difference below.

In case you don’t have time to read the article: the system is performing pretty much as expected, and very similarly to last year.

Let’s begin.

Solar Generation.

One of the difficulties in communicating data from a solar PV system is its variability. The data obviously change seasonally, but also on daily, weekly and monthly time scales. So I have chosen to present the same data in several ways.

Let’s start with a basic chart showing the average daily generation (in kWh) over the last two years.

Click image for a larger version. Average daily solar generation (kWh) for each month of this year and last year.

This chart shows that generation this year is generally a little better than last year, with the exception of last May, which was enchantingly sunny.

Now let’s look at the daily data, and different averages.

Click image for a larger version: you’ll need to do this to see anything! This chart shows the daily generation this year, and the 5-day running average  of generation this year and last year. See the text for details of the other data on the graph.

The graph above is complicated, showing how various quantities (expressed as kWh/day) change versus day of the year.

Primarily it shows daily solar generation as a light green line. Notice the day-to-day variation: even in June, daily generation can fall to 3.8 kWh/day even when the average generation is ~15 kWh/day.

Also shown are the 5-day averages of solar generation this year and last year. The 5-day average shows the smoothing effect of the use of a 13.5 kWh battery. When the 5-day average falls below demand, then it’s likely we will need to buy some electricity from the grid. You can see that this happened in mid-summer last year.

Notice that the 5-day averages show peaks and troughs that can last for weeks.
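For anyone curious, the 5-day running average used in these charts is easy to reproduce. Here is a minimal Python sketch, using made-up illustrative generation figures rather than my actual data:

```python
# 5-day trailing average of daily solar generation (kWh/day).
# The generation figures below are illustrative, not my real readings.
daily_generation = [14.2, 3.8, 9.5, 16.1, 12.0, 15.3, 7.7, 13.9, 11.2, 14.8]

def running_average(values, window=5):
    """Trailing average over the previous `window` days."""
    averages = []
    for i in range(window - 1, len(values)):
        averages.append(sum(values[i - window + 1:i + 1]) / window)
    return averages

five_day = running_average(daily_generation)
print([round(x, 1) for x in five_day])
```

The window of 5 days is chosen to match the smoothing that a 13.5 kWh battery provides in practice: a dull day or two is bridged by stored energy, so only sustained troughs matter.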

My expectations for the system are shown as a dotted green line (just a simple mathematical guess)  and as yellow monthly data points estimated using a multi-year European database. Generation is broadly in line with expectations.

The graph also shows nominal household demand as two red-dotted lines.

  • The horizontal line (10 kWh/day) corresponds to daily demand throughout the year.
  • The demand peaking in winter represents the electricity used by the heat pump to heat the house. This demand has only been present this winter – previously, heating was supplied by a gas boiler.

So the generation has interesting day-to-day variability, but when the 5-day average exceeds the average demand, then with the aid of the battery, we stand a good chance of being able to be off-grid for a sustained period.

Looking at the graph, the heating demand was still non-zero in April – and this delayed the point at which we were able to go off-grid.

The graph below shows actual (rather than nominal)  domestic use of electricity and the electricity drawn from the grid.

Click image for a larger version: This chart shows the ±7-day averages of the electricity we use in the house and the electricity we draw from the grid. The difference between the two curves is supplied by the solar PV/battery system. Periods when the house has been off-grid are highlighted in red.

Another way of looking at the data.

Plotting day-to-day generation tends to emphasise the variability of the data.

Plotting cumulative generation through the year tends to de-emphasise the variability and highlight the similarity of generation from one year to the next.
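A cumulative curve is just a running sum of the daily figures. A tiny Python sketch, again with illustrative numbers:

```python
# Cumulative generation from daily figures (kWh): a running sum
# de-emphasises day-to-day variability. Values are illustrative.
daily = [12.0, 4.5, 15.2, 9.8, 14.1]

cumulative = []
total = 0.0
for kwh in daily:
    total += kwh
    cumulative.append(total)

print(cumulative)
```

Because each day's contribution is small relative to the accumulated total, a dull spell shows up only as a slight flattening of the curve rather than a dramatic dip.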

The graphs below show cumulative generation versus day-of-the-year for this year and last year. The upper graph shows the whole year and the lower graph the first half of the year only.

Also shown as dotted lines are the MCS suggestion for likely annual generation and an empirical curve based on the MCS figures that I use to guide my expectations.

Click image for a larger version: This chart shows the cumulative generation this year and last year. See the text for details of the other data on the graph. Broadly speaking, the data are very similar.

Click image for a larger version: This chart shows the cumulative generation for the first 6-months of  this year and last year. See the text for details of the other data on the graph. Broadly speaking, the data are very similar. The very dull period in midsummer last June can be seen as a flat portion on the 2021 generation curve.

Remember that year-to-year variability in generation is typically ±5% (link), so yearly variations of ~200 kWh on an expected generation of 3,780 kWh are quite normal.
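The arithmetic behind that ~200 kWh figure, for anyone checking:

```python
# ±5% of the expected annual generation of 3,780 kWh.
expected_annual_kwh = 3780
variability = 0.05 * expected_annual_kwh
print(round(variability))  # ~190 kWh, i.e. the ~200 kWh quoted above
```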


Overall, everything is proceeding as expected. The combination of 12 x 340 Watt Q-Cells panels and the 13.5 kWh Tesla Powerwall battery has been astonishing.

  • We save money in summer – because we are off-grid.
  • We save money in spring and autumn because solar PV is still significant.
  • We save money in winter because we can buy cheap-rate electricity and use it during the day.

The reality of running the house – with fridges and freezers and computers and washing machines and air-conditioning and even tumble dryers – solely from the sunshine for several months a year still warms my heart.

Another reason to stop using gas

March 6, 2022

Friends, many people are considering reducing, or stopping entirely, their use of natural gas for heating and cooking. Perhaps you are one of these people.

It may be that your motivation is that burning this gas is altering the climate of our planet.

Or it may be that your motivation is that buying the gas supports murderous and megalomaniacal regimes across the planet.

But perhaps these motivations aren’t quite enough. If so, then please consider this:

  • Cooking with gas is poisoning you and your family

Yes, cooking with gas emits nitric oxide (NO) and nitrogen dioxide (NO2) into your kitchen. Collectively these gases are known as NOx.

When NOx reaches the membranes of your skin or nose, it quickly forms nitric acid, which irritates the membranes and can cause asthma and sensitise people to other allergens.

If you are concerned about air pollution in cities, then before worrying about vehicle emissions, you should probably first focus your attention on your own home where NOx levels are likely to be very much higher.

Let me explain

Air consists of very roughly 80% nitrogen (N2) and 20% oxygen (O2).

When burning natural gas, methane (CH4), in air, the majority of the combustion products (water (H2O) and carbon dioxide (CO2)) arise from reactions between methane and oxygen.

The nitrogen molecules – despite making up the bulk of the air – are relatively inert. But they are not completely inert.

At the high temperatures – approaching 2000 °C – of a methane flame, the nitrogen and oxygen molecules dissociate into atomic nitrogen and oxygen and in this state they react to form oxides of nitrogen, primarily NO.

This NO then converts to NO2 over a time frame that depends on what else is in the atmosphere. Thus even when the amount of NOx is constant, the fractions of NO and NO2 are likely to change over time.

When methane combustion takes place in a boiler, none of the combustion products enter your home.

But when you cook with gas, the combustion products are all vented directly into your home – including the NO and NO2, i.e. indoor NOx pollution.

Is this really a problem?

I don’t know for sure, but I suspect it must be.

Whereas professional kitchens frequently have strong extraction over open burners and ovens, domestic kitchens often do not. And where extraction is present, it is often not used, and when it is used, it only covers burners and not ovens.

Concentrations of NOx are difficult to measure for several reasons.

Firstly, a meter to measure NO2 costs thousands of pounds versus a hundred pounds or so for a CO2 meter, and so there are very few reported measurements in kitchens.

And secondly, the ratio of NO to NO2 is generally not well-known in any particular circumstance.

Consequently using measurements of NO2 to estimate NOx will always give an underestimate of the NOx level.

One measurement in a kitchen is reported in this article. It shows measurements of NO2 during an evening of cooking in one US household. I have reproduced the figure below.

Click on figure for a larger version. While cooking with gas in this US household, NO2 levels rose to almost 300 ppb. This figure is modified from the linked article.

In the UK the exposure limits for NO2 are an annual average of 40 μg/m³, with no more than 18 exceedances per year of levels above 200 μg/m³ averaged over 1 hour.

So it looks like the occupants of this household are being exposed to very high levels of NO2. But the NO levels close to the cooker are likely to be even higher.
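To compare the ~300 ppb peak with the UK limits, the concentration needs converting from ppb to μg/m³. A sketch of the conversion, assuming a molar volume of 24.45 L/mol (i.e. 25 °C and 1 atm):

```python
# Converting an NO2 concentration in ppb to μg/m³ so it can be
# compared with the UK limit values.
M_NO2 = 46.01          # g/mol, molar mass of NO2
MOLAR_VOLUME = 24.45   # L/mol, assumed: 25 °C and 1 atm

def ppb_to_ug_per_m3(ppb, molar_mass):
    return ppb * molar_mass / MOLAR_VOLUME

peak = ppb_to_ug_per_m3(300, M_NO2)
limit_in_ppb = 200 * MOLAR_VOLUME / M_NO2
print(round(peak))          # ≈ 565 μg/m³ – well above the 200 μg/m³ 1-hour limit
print(round(limit_in_ppb))  # the 200 μg/m³ limit corresponds to ≈ 106 ppb
```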

My Measurements and Calculations

I wondered if the measurements above were plausible. The peak did not have the shape I would have expected: it seems to fall very rapidly, suggesting there was strong airflow through the house.

Unfortunately, I can’t measure NO or NO2 directly but I routinely monitor CO2 in the central part of the house, well away from the oven and hob. Nonetheless I regularly see the CO2 levels rise to over 1000 ppm during cooking. For this article I also took measurements with the detector at roughly head height next to the hob.

Click on figure for a larger version. The location of the CO2 meter relative to the hob for the  measurements in red in the graph below.

The graph below shows the CO2 data.

  • For the detector near the hob, the burner was on for 15 minutes and the CO2 levels rose immediately.
  • For the detector in the neighbouring room, the burner was on for 17 minutes and there was a delay of many minutes before CO2 levels began to rise.

Click on figure for a larger version. The rise in carbon dioxide concentration above background resulting from a single gas burner on the hob. The measurements in black were made several metres away in a different room. The measurements in red were made at head height above the hob.

What is clear from both these measurements is that CO2 concentrations of at least 500 ppm above background are likely to be commonplace in all the rooms in homes which use gas hobs, ovens or grills.

I wondered if NO and CO2 might be produced in a fixed ratio. If so, that would allow me to use measurements of CO2 concentration to estimate likely levels of NO.

I wasn’t quite sure how to do this but an old friend suggested using the free and excellent GasEq software to calculate the likely combustion products and their relative concentrations.

Using the methane combustion in air example, I calculated the ratio of the NO in the exhaust gases to CO2. Then from measuring the CO2 rise due to combustion, I could estimate the NO concentration in the house.
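The estimate works like this: multiply the CO2 rise attributable to combustion by the NO:CO2 exhaust ratio. A sketch, with a purely illustrative ratio – the real value is what GasEq calculates, and it depends strongly on flame temperature and mixture:

```python
# Estimating indoor NO from a measured CO2 rise, using an exhaust-gas
# NO:CO2 ratio of the kind GasEq produces. The ratio below is purely
# illustrative - the real value varies strongly with flame conditions.
no_to_co2_ratio = 0.002     # illustrative assumption, NOT a GasEq output

measured_co2_ppm = 900      # indoor reading during cooking
background_co2_ppm = 400    # outdoor background
co2_from_combustion = measured_co2_ppm - background_co2_ppm  # 500 ppm

no_ppm = co2_from_combustion * no_to_co2_ratio
no_ppb = no_ppm * 1000
print(no_ppb)  # 1000.0 ppb for this illustrative ratio
```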

Click on figure for a larger version. Logarithmic graph showing estimates of the NO concentration in parts per billion (ppb), assuming a 500 ppm CO2 concentration from combustion and a 400 ppm background CO2 concentration, i.e. a measured CO2 concentration of around 900 ppm. See text for further details.

At first I was shocked. The calculation suggested that NO levels of several thousand ppb were likely. But this was based on two assumptions: that the gas flame was adiabatic and stoichiometric. What wonderful words.

  • Adiabatic means that no heat is lost from the flame and so the products would be at their maximum possible temperature, approximately 2225 K. However in a domestic gas burner, heat will be lost to both the burner itself and to saucepans, which typically reach only 250 °C. So I repeated the calculation for lower temperatures.
  • Stoichiometric means that exactly the right amount of oxygen was mixed with the methane so that all of the methane and oxygen reacted.
    • If the gas mixture has excess methane (a so-called fuel-rich mixture) then less oxygen will be available to react with the nitrogen, and NO production will be reduced.
    • Similarly, if the gas mixture has not quite enough methane (a so-called fuel-lean mixture) then some un-reacted oxygen will be available to react with the nitrogen, and NO production will be increased.

So I repeated the calculations for a range of stoichiometries (±5% and ±10% from ideal) and a range of temperatures, extending down to more than 200 °C below the adiabatic flame temperature.

My conclusion from this calculation is that even with very conservative assumptions, when CO2 levels from combustion rise 500 ppm above background, the levels of NO in the air are likely to be several hundred ppb. Eventually some fraction of this NO will convert to NO2 and yield NO2 levels well above safe exposure levels.

Of course without direct measurements, I don’t know this for sure, but I am surprised that this issue is not discussed more.


My conclusion is simple. Based on measurements of CO2 concentration in my own home, and calculations of the likely ratio of NO to CO2, I think that NOx exposure in UK households with open gas hobs, ovens, and grills is likely to routinely exceed exposure guidelines.

For people standing over a hob, or people routinely working in a domestic kitchen, exposure levels could easily be dramatically higher.

If anyone has problems with asthma or is concerned about their own – or their children’s – exposure to air pollution, then it is likely that the best thing they can do is to stop using gas for cooking, and to instead use microwaves, electric ovens and induction hobs.

This archaic ‘burning’ technology is funding Putin’s war machine, changing the Earth’s climate. AND polluting my home!

Personally, I just can’t wait to get rid of this gas hob as soon as possible.

Talk on 2nd February in Twickenham

January 31, 2022

Friends, if you live in West London, and have nothing better to do on Wednesday 2nd February at 7:30 p.m., then you might like to consider coming to hear me talk.

I’ll be talking about the steps I have taken to reduce household carbon dioxide emissions, and suggesting first steps that anyone interested might take.

The talk is in a room above a pub in Twickenham (Google Maps Link)

The Royal Oak,
13, Richmond Road,
TW1 3AB.

This is located opposite a Shell Petrol Station and York House.

I have been trying very hard to keep the talk as short as possible to allow more time for… talking i.e. actual conversations.


If you would like to come along, please e-mail the Richmond and Twickenham Friends of the Earth at


If you aren’t able to come, but have a question which you would like answered, please just drop me a line at .


Happy Christmas and Best Wishes for 2022

December 18, 2021

Click image for a larger version

Friends, it’s the end of the year and there is still so much to write about. But for the next couple of weeks, it won’t be me doing the writing.

I feel the need for a break and so I will be hunkering down in Podesta Towers and dreaming of spring sunshine on my solar panels.

My aim is to stay warm, catch up on some other projects, and try to avoid catching Omicron!

I wish you all the best for the Christmas Season and a splendidly low-carbon 2022.

Journey from the Centre of the Sun

August 15, 2021

Click image for a larger version. Some of the stages in the energy conversions and transfers that allow me to have hot water in the mornings without any carbon dioxide emissions. Simple heh?

Friends, just the other day I wrote about how my heat pump produced hot water each day.

The way in which the heat pump extracts heat from the air is ingenious in the extreme.

But as I reflected on it, I realised that this ingenuity occurred in the middle of a long series of energy transformations taking – very roughly – 170,000 years.

Please allow me to explain.

#1 Where does the energy come from?

The source of nearly* all the energy humans exploit on Earth is sunlight.

Using sunlight and carbon dioxide, plants produce oxygen (thank you) and carbohydrates. This so-called ‘photosynthesis’ captures energy from the sunlight in the form of re-arranged chemical bonds within carbohydrate molecules.

When we use animals for work – horsepower and ox-power – the energy the animals use is derived from the carbohydrate molecules in their food.

When we burn plants – primarily wood – for heat, the energy released is from the reverse of the reaction that created the carbohydrate molecules. And so, the carbon dioxide which was captured when the plant grew is released. But since burned wood is generally only a few decades old, burning plants can be (almost) neutral in the production of carbon dioxide.

Fossil fuels are all derived from plants, and the energy of the captured sunlight has been ‘distilled’ over millions of years by a variety of physical processes into coal, oil and gas. Unfortunately, burning fossil fuels also releases carbon dioxide, but not carbon dioxide that was recently captured. It releases carbon dioxide that was captured eons ago.

Burning fossil fuels is still the main way in which we make electricity – so electrical energy is in some sense the energy of ancient sunlight. How charming.

#2 Where does the energy of sunlight come from?

That the Sun is hot has been obvious to all humans since the dawn of time.

But the source of its immense heat was a mystery until just about 100 years ago when it was suggested that hydrogen nuclei (a.k.a. protons) might ‘fuse’ together to make helium nuclei, and release energy.

Segueing past a few decades of research and speculation, we now know for sure that nuclear fusion deep within the Sun is indeed the source of the energy that makes the Sun hot.

The environment deep within the Sun is extraordinary, with a temperature of roughly 15 million degrees Celsius.

The hot dense gas emits electromagnetic radiation – γ-rays, X-rays, ultra violet and visible light – in all directions. The nuclei and electrons in the Sun form a plasma, which is opaque to radiation. So the radiation is constantly absorbed, causing local heating, and then re-emitted by the nuclei and electrons in the various layers of the Sun.

Because the Sun is so vast and so opaque, it takes roughly 170,000 years for the energy to travel from the core of the Sun to the surface of the Sun. Just so you know it was not a typo: I did indeed say 170,000 years.
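A quick plausibility check: treating the energy transport as a photon random walk with mean free path l, the transit time is roughly R²/(l·c). The mean free path used below is an assumed effective value – real estimates vary widely with depth inside the Sun:

```python
# Back-of-envelope check on the 170,000-year figure: a photon
# random-walking with mean free path l takes roughly t = R^2 / (l * c)
# to cover a radius R.
R_SUN = 6.96e8         # m, solar radius
C = 2.998e8            # m/s, speed of light
MEAN_FREE_PATH = 3e-4  # m - assumed effective value; estimates vary widely

t_seconds = R_SUN**2 / (MEAN_FREE_PATH * C)
t_years = t_seconds / (365.25 * 24 * 3600)
print(f"{t_years:.0f} years")  # on the order of 10^5 years
```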

Eventually the energy reaches the outer layers of the Sun which are at a tepid 5,500 °C (ish). The glow of this hot plasma sends visible light out in all directions and after roughly 8 minutes, a tiny fraction of it reaches Earth.

#3 My hot water: a summary

Click image for a larger version. Some of the stages in the energy conversions and transfers that allow me to have hot water in the mornings without any carbon dioxide emissions. Simple heh?

So where does the energy that heats my hot water come from? The letters in the bullet points below refer to the diagram above.

  • Nuclear fusion (A) around 170,000 years ago created energy from the fusion of hydrogen nuclei that were themselves created in the primordial ‘big bang’.
  • This energy travelled through the Sun’s layers as a variety of forms of electromagnetic radiation – γ-rays, X-rays, ultra violet and visible light – until it reached the outer layers, where the radiation could travel uninterrupted into space (A, B).
  • A tiny tiny fraction of this radiation was intercepted by solar panels on my roof, which converted some of the visible light into an electrical current (B, C, D).
  • This electrical energy was stored in a battery by electrically forcing ions of lithium metal (shown in the figure as red dots) to cluster together against their desire to diffuse away from one another (E).
  • This energy was then released at night by allowing ions of lithium metal to diffuse away from one another (F), forcing electrons around an external inverter circuit that created AC currents to power a motor in a compressor (G) and electronics which ran the heat pump.
  • The heat pump then chilled a refrigerant (G) that extracted heat from molecules in the air that had also been heated by sunlight over the preceding few days.
  • This energy was transferred as heat to water flowing in a circuit through the heat pump (G).
  • And then this energy was further transferred as heat to fresh water in a hot water tank (H).

And then finally (I) a few hours later, this energy was transferred to the outer layers of my face and hands, where transient receptor proteins in thermoreceptors in my skin sent signals to my brain that caused me to realise the water coming from the tap into the sink was ‘just right’.

Simple heh?

* Nearly?

Humans do exploit one source of energy which did not originate in the fusion of nuclei within the Sun: nuclear power.

Nuclear power exploits energy released by splitting the nuclei of heavy atoms that were created – as I understand it – during the last few destructive moments of a previous generation of stars.

These elements – uranium primarily – were then deposited on the primordial Earth as it formed at the same time as the Sun was ‘born’.


COVID 19: What have we learned? Nothing.

June 4, 2021

Click for a larger image. Logarithmic graph showing positive cases, hospital admissions and deaths since the start of the pandemic. The blue arrows show the dates of ‘opening’ events. See text for further details. The red dotted line shows cases doubling every 15 days as they did in September 2020.

Friends, so here we are, 4th June 2021, and I am reluctantly concluding that – as they did last summer – the government are about to screw things up.

The graph at the head of the page shows cases, admissions and deaths throughout the pandemic.

The situation now is strikingly similar to July last year, except that the growth rate of cases is more similar to September last year.

The statistics for admissions and deaths represent ‘ground truth’ – but when the situation is changing rapidly they lag the spread of the virus by several weeks.

So to assess the spread we should look at cases. And with the best part of 1 million tests a day, mostly in asymptomatic people, we should have a reasonably good track on what is happening.

In my previous blog posts, I suggested we should not care about:

  • the absolute number of cases,
  • the population prevalence of cases,
  • or even the rate of change of cases.

What mattered was:

  • Is there the potential for the pandemic to expand into the general population and kill hundreds of thousands of people?

Last summer the answer was definitely ‘Yes’.

This summer I previously thought the answer was probably ‘No’.

Now I think that in fact the virus has run away from us – spreading through schools – and has the potential to reach the general unvaccinated population.

And although I don’t think it can kill ‘hundreds of thousands’, it could easily kill ‘thousands’ and cause serious illness in many more.


First of all, please let me warn you about statistics which state the fraction of the ‘adult’ population which has been vaccinated. Adulthood is not relevant.

It seems that unvaccinated and previously uninfected people can catch COVID and spread it, no matter how young, even if their symptoms are not strong.

As I write: 59% of the entire population, including practically all of the most vulnerable groups have received a first dose of the vaccine. Vaccination is reaching an additional 8% of the population per month.

Together with the 10% – 20% (roughly) of the population who have had the disease, we are close to herd immunity. This would be relevant if the virus were spreading randomly through the population. But it isn’t.

The virus appears to be spreading amongst exactly the fraction of the population who have not been vaccinated. This is an inevitable consequence of our choice to vaccinate the elderly first. And as social restrictions have eased, viral spread is barely hindered by social distancing.

There are three problems with this.

#1: More Death 

If we consider the population of people who could be infected to be the roughly 20 million people under 30, then with a fatality ratio of 0.01% this corresponds to a summer with a further 2,000 dead people under 30. If we are lucky the number might be only a few hundred.
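The arithmetic behind that estimate:

```python
# 20 million people at risk with a fatality ratio of 0.01%.
population_at_risk = 20_000_000   # roughly, people under 30
fatality_ratio = 0.0001           # 0.01%

deaths = population_at_risk * fatality_ratio
print(int(deaths))  # 2000
```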

To me, these wholly preventable deaths seem like those who died in WW1 after the armistice: more tragic somehow than the previous 129,000 deaths.

This does not take account of the fact that vaccinated people are not invulnerable – merely less vulnerable.

#2: More Illness

Without further interventions, the current case rate appears to be growing at the same rate it did in schools last September. Cases are doubling roughly every 15 days. By the end of June they will exceed 10,000 per day and approach 40,000 per day at the end of the school term.
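The projection is just exponential growth with a 15-day doubling time. A sketch – the starting figure of ~4,000 cases/day on 4 June is an assumed round number for illustration, not an official statistic:

```python
# Projecting daily cases under a 15-day doubling time.
# The starting figure is an assumed round number, not official data.
start_cases = 4000
doubling_time_days = 15

def projected_cases(days_ahead):
    return start_cases * 2 ** (days_ahead / doubling_time_days)

print(round(projected_cases(26)))  # end of June: comfortably over 10,000/day
print(round(projected_cases(49)))  # end of term: approaching 40,000/day
```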

Aside from the deaths, this corresponds to a lot more illness – some of it chronic ‘Long COVID’.

#3: Rolling the variant dice

The larger the pool of infected people, the more chance the virus has to mutate and find variants which might escape the vaccine, or – heaven forbid – take a more dangerous form.

As far as I understand, nobody knows why elderly people are more vulnerable to COVID-19. But imagine a hypothetical COVID-21 which was more deadly to children? Is that an experiment we really want to conduct?


The latest outbreaks have not been contained locally – yet another failure of Track, Trace, and Isolate.

The vaccination program means that unlike last summer, we are unlikely to face a wave of a further 80,000 dead people.

But I am expecting a further wave of wholly unnecessary deaths – I just don’t know how large a wave to expect.

I did write out a list of recommendations for what we should do about this situation. But having edited it, and reflected on it, I realised that the recommendations were all obvious, but that writing them down was pointless, because the Government just doesn’t care!

Stay safe.




The Cat Sat on the Mat

March 14, 2021

While walking through Teddington the other day I saw a tender sight which brought a smile to my eyes.

A man was mending his very old car – an Austin Maxi – and had tools and components laid out around the car.

And just by the car was a small mat on which his cat was very contentedly sat.

I commented to him that it was very considerate of him to put down a mat for the cat.

He smiled.

Then he told me that the mat was there to cover a drain so that he didn’t accidentally lose any parts. And the cat was sitting there opportunistically rather than by invitation.


Correlation does not imply causation

It had seemed so obvious that the man had placed the mat down for the cat. I had immediately intuited his state of mind and fondness for his cat.

In order to have fully appreciated what was happening, I would have needed to:

  • Imagine into being a drain – for which I had no evidence, since it was completely covered.
  • And then understand that it would be sensible to cover the drain if working near it – a mat would be ideal.
  • And finally understand that cats will sit on mats unbidden.

And so I was reminded that even the simplest and most apparently obvious things are sometimes not what they seem.


See also these links suggested by astute commenter Dave Burton

In the bleak midwinter

January 19, 2021

So here we are in the bleak mid-winter – the place that everyone with External Wall Insulation loves to be.

As I remain-at-home-to-protect-the-NHS-and-save-lives I have spent a great deal of time staring at the following graph which shows the impact of the triple-glazing and External Wall Insulation.

Click for a larger version. Plotted in blue against the left-hand axis: the average daily consumption of gas (kWh per day). Plotted in green against the right-hand axis: the average difference of the outside temperature from 19 °C (°C).

The graph shows two quantities plotted versus the number of days since the start of 2019.

  • In blue, I have plotted the average daily consumption of gas (kWh per day)
    • This is shown against the left-hand axis
  • In green, I have plotted the average difference of the outside temperature from 19 °C (°C)
    • This is shown against the right-hand axis

The dotted red line shows that the weather now (circled in green) is colder than it was at this time two years ago.

However the amount of gas (circled in blue) that I am using to maintain the temperature of the house is now about half what it was then: just over 50 kWh per day now versus just over 100 kWh per day then.

The carbon dioxide emissions associated with heating the house look set to be about 1.25 tonnes this winter – still a terrible figure – but much lower than 3 tonnes emitted in the winter of 2018/2019.

To go further we need to ditch the gas boiler and switch to a heat pump. Hopefully we will achieve this in the summer and then we can reasonably hope that next winter we will lower the carbon dioxide emissions associated with heating the house to about 0.4 tonnes – just 13% of what it was in 2018/2019.
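The emissions arithmetic is simply gas consumption (kWh) multiplied by an emission factor. A sketch, assuming a typical UK factor of 0.185 kg CO2 per kWh of natural gas (the exact figure varies slightly from year to year):

```python
# Gas burned (kWh) times an emission factor gives CO2 emitted.
# The 0.185 kg CO2/kWh factor is a typical UK value - an assumption here.
EMISSION_FACTOR = 0.185  # kg CO2 per kWh of natural gas

def heating_co2_tonnes(kwh_per_day, days):
    return kwh_per_day * days * EMISSION_FACTOR / 1000

# Roughly 50 kWh/day over a ~135-day heating season:
print(round(heating_co2_tonnes(50, 135), 2))  # ≈ 1.25 tonnes
```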



Rocket Science

January 14, 2021

One of my lockdown pleasures has been watching SpaceX launches.

I find the fact that they are broadcast live inspiring. And the fact they will (and do) stop launches even at T-1 second shows that they do not operate on a ‘let’s hope it works’ basis. It speaks to me of confidence built on the application of measurement science and real engineering prowess.

Aside from the thrill of the launch  and the beautiful views, one of the brilliant features of these launches is that the screen view gives lots of details about the rocket: specifically it gives time, altitude and speed.

When coupled with a little (public) knowledge about the rocket one can get to really understand the launch. One can ask and answer questions such as:

  • What is the acceleration during launch?
  • What is the rate of fuel use?
  • What is Max Q?

Let me explain.

Rocket Science#1: Looking at the data

To do my study I watched the video above starting at launch, about 19 minutes 56 seconds into the video. I then repeatedly paused it – at first every second or so – and wrote down the time, altitude (km) and speed (km/h) in my notebook. Later I wrote down data for every kilometre or so in altitude, then later every 10 seconds or so.

In all I captured around 112 readings, and then entered them into a spreadsheet (Link). This made it easy to convert the  speeds to metres per second.

Then I plotted graphs of the data to see how they looked: overall I was quite pleased.

Click for a larger image. Speed (m/s) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The velocity graph clearly showed the stage separation. In fact looking in detail, one can see the Main Engine Cut Off (MECO), after which the rocket slows down for stage separation, and then the Second Engine Start (SES) after which the rocket’s second stage accelerates again.

Click for a larger image. Detail from graph above showing the speed (m/s) of Falcon 9 versus time (s) after launch. After MECO the rocket is flying upwards without power and so slows down. After stage separation, the second stage then accelerates again.

It is also interesting that acceleration – the slope of the speed-versus-time graph – increases up to stage separation, then falls and then rises again.

The first stage acceleration increases because the thrust of the rocket is almost constant – but its mass is decreasing at an astonishing 2.5 tonnes per second as it burns its fuel!

After stage separation, the second stage mass is much lower, but there is only one rocket engine!

Then I plotted a graph of altitude versus time.

Click for a larger image. Altitude (km) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The interesting thing about this graph is that much of the second stage is devoted to increasing the speed of the second stage at almost constant altitude – roughly 164 km above the Earth. It’s not pushing the spacecraft higher and higher – but faster and faster.

About 30 minutes into the flight the second stage engine re-started, speeding up again and raising the altitude further to put the spacecraft on a trajectory towards a geostationary orbit at 35,786 km.

Rocket Science#2: Analysing the data for acceleration

To estimate the acceleration I subtracted each measurement of speed from the following measurement of speed and then divided by the time between the two readings. This gives acceleration in units of metres per second squared (m/s²), but I thought it would be more meaningful to plot the acceleration as a multiple of the strength of Earth’s gravitational field g (9.81 m/s²).

The data as I calculated them had spikes in them, because the small time differences between speed measurements (of the order of a second) were not very accurately recorded. So I smoothed the data by averaging 5 data points together.
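
The same finite-difference-and-smoothing procedure can be sketched in a few lines of Python. The times and speeds below are hypothetical stand-ins for the values transcribed from the video:

```python
# Estimate acceleration (in multiples of g) as the finite difference of
# successive speed readings, then apply a 5-point moving average to
# suppress the spikes caused by imprecise timing.
g = 9.81  # m/s^2

times = [0, 2, 4, 6, 8, 10, 12, 14]                         # s (hypothetical)
speeds = [0.0, 15.0, 32.0, 50.0, 70.0, 91.0, 113.0, 136.0]  # m/s (hypothetical)

# Finite-difference acceleration, expressed in multiples of g
accel = [(speeds[i + 1] - speeds[i]) / (times[i + 1] - times[i]) / g
         for i in range(len(speeds) - 1)]

# 5-point moving average
smoothed = [sum(accel[i:i + 5]) / 5 for i in range(len(accel) - 4)]
print([round(a, 2) for a in smoothed])   # [0.93, 1.0, 1.06]
```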

Click for a larger image. Smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming fuel is used up at a uniform rate.

The acceleration increased as the rocket’s mass reduced, reaching approximately 3.5 g just before stage separation.

I then wondered if I could explain that behaviour.

  • To do that I looked up the launch mass of a Falcon 9 (data sources at the end of the article) and saw that it was 549 tonnes (549,000 kg).
  • I then looked up the mass of the second stage: 150 tonnes (150,000 kg).
  • I then assumed that the mass of the first stage was almost entirely fuel and oxidiser, and guessed that this mass would decrease uniformly from T = 0 to MECO at T = 156 seconds. This gave a burn rate of 2,558 kg/s – over 2.5 tonnes per second!
  • I then looked up the launch thrust from the 9 rocket engines and found it was 7,600,000 newtons (7.6 MN).
  • I then calculated the ‘theoretical’ acceleration using Newton’s Second Law (a = F/m) at each time step – remembering to decrease the mass by 2,558 kilograms every second. And also remembering to subtract 1 g for gravity: the rocket only leaves the ground once a = F/m exceeds g!
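
These steps can be sketched in Python using the figures quoted above (launch mass, second-stage mass, MECO time and thrust). The function below illustrates the method; it is not real telemetry:

```python
# 'Theoretical' acceleration from Newton's Second Law, a = F/m, with the
# rocket's mass falling uniformly from launch to MECO. Figures are those
# quoted in the text; this is a sketch of the method, not real telemetry.
m0 = 549_000.0        # launch mass (kg)
m_stage2 = 150_000.0  # second-stage mass (kg)
t_meco = 156.0        # time of Main Engine Cut Off (s)
thrust = 7.6e6        # launch thrust of the 9 engines (N)
g = 9.81              # m/s^2

burn_rate = (m0 - m_stage2) / t_meco   # ~2558 kg/s

def acceleration_in_g(t):
    """Net acceleration (in multiples of g) at time t seconds after launch."""
    m = m0 - burn_rate * t             # mass falls uniformly to MECO
    return (thrust / m - g) / g        # subtract 1 g for gravity

print(round(burn_rate))                  # 2558
print(round(acceleration_in_g(0), 2))    # ~0.41 g just off the pad
print(round(acceleration_in_g(150), 2))  # ~3.7 g approaching MECO
```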

The theoretical line (– – –) catches the trend of the data pretty well. But one interesting feature caught my eye – a period of constant acceleration around 50 seconds into the flight.

This is caused by the Falcon 9 throttling back its engines to reduce stresses on the rocket as it experiences maximum aerodynamic pressure – so-called Max Q – around 80 seconds into flight.

Click for a larger image. Detail from the previous graph showing smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration assuming fuel is used up at a uniform rate. Highlighted in red is the region around 50 seconds into flight when the engines are throttled back to limit stresses as the craft experiences maximum aerodynamic pressure (Max Q) about 80 seconds into flight.

Rocket Science#3: Maximum aerodynamic pressure

Rockets look like they do – rocket shaped – because they have to get through Earth’s atmosphere rapidly, pushing the air in front of them as they go.

The amount of work needed to do that is generally proportional to three factors:

  • The cross-sectional area A of the rocket. Narrower rockets require less force to push through the air.
  • The speed of the rocket squared (v²). One factor of v arises from the fact that travelling faster requires one to move the same amount of air out of the way faster. The second factor arises because moving air more quickly out of the way is harder due to the viscosity of the air.
  • The air pressure P. The density of the air in the atmosphere falls roughly exponentially with height, reducing by approximately 63% every 8.5 km.

The work done by the rocket on the air results in so-called aerodynamic stress on the rocket. These stresses – forces – are expected to vary as the product of the three factors above: A P v². The cross-sectional area A of the rocket is constant, so in what follows I will just look at the variation of the product P v².

As the rocket rises, the pressure falls and the speed increases. So their product P v, and functions like P v², will naturally have a maximum value.

The importance of the maximum of the product P v² (known as Max Q) as a point in flight is that if the aerodynamic forces are not uniformly distributed, then the rocket trajectory can easily become unstable – and Max Q marks the point at which this danger is greatest.

The graph below shows the variation of pressure P with time during flight. The pressure is calculated using:

P = 1000 × exp(−h/h₀)

where the ‘1000’ is the approximate pressure at the ground (in mbar), h is the altitude at a particular time, and h₀ is called the scale height of the atmosphere and is typically 8.5 km.

Click for a larger image. The atmospheric pressure calculated from the altitude h versus time after launch (s) during the Turksat 5A launch.
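
This exponential-atmosphere model can be sketched in Python, using the 1000 mbar ground pressure and 8.5 km scale height quoted above:

```python
import math

# Exponential-atmosphere model: pressure falls by ~63% for every
# scale height (8.5 km) of altitude gained.
def pressure_mbar(h_km, p_ground=1000.0, h0=8.5):
    """Approximate atmospheric pressure (mbar) at altitude h_km (km)."""
    return p_ground * math.exp(-h_km / h0)

print(round(pressure_mbar(0.0)))   # 1000 mbar at the ground
print(round(pressure_mbar(8.5)))   # ~368 mbar, one scale height up
```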

I then calculated the product P v², and divided it by 10 million to make it plot easily.

Click for a larger image. The aerodynamic stresses calculated from the altitude and speed versus time after launch during the Turksat 5A launch.
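
The Max Q estimate can be sketched in Python. The flight profile below is purely hypothetical – the real altitude and speed pairs would come from the video transcript – but it shows how one finds the time at which P v² peaks:

```python
import math

# Locate Max Q by finding the time at which the product P*v^2 peaks.
# The (time, altitude, speed) profile below is hypothetical, for
# illustration only; real values would come from the video transcript.
flight = [(t, 0.12 * t**1.5, 8.0 * t)   # (time s, altitude km, speed m/s)
          for t in range(1, 160)]

def pressure_mbar(h_km, h0=8.5):
    """Exponential-atmosphere pressure model (mbar)."""
    return 1000.0 * math.exp(-h_km / h0)

# Aerodynamic stress proxy P*v^2, divided by 10 million for easy plotting
stress = [(t, pressure_mbar(h) * v**2 / 1e7) for t, h, v in flight]
t_maxq = max(stress, key=lambda s: s[1])[0]
print(t_maxq)
```

With real flight data in place of the made-up profile, the same `max` over P v² picks out the Max Q time directly.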

This calculation predicts that Max Q occurs about 80 seconds into flight, long after the engines throttled down, and in good agreement with SpaceX’s more sophisticated calculation.


I love watching the SpaceX launches, and having analysed one of them just a little bit, I feel like I understand better what is going on.

These calculations are well within the capability of advanced school students – and there are many more questions to be addressed.

  • What is the pressure at stage separation?
  • What is the altitude of Max Q?
  • The vertical velocity can be calculated by measuring the rate of change of altitude with time.
  • The horizontal velocity can be calculated from the speed and the vertical velocity.
  • How does the speed vary from one mission to another?
  • Why does the craft aim for a particular speed?

And then there are the satellites themselves to study!

Good luck with your investigations!


And finally, thanks to Jon for pointing me towards ‘Flight Club – One-Click Rocket Science’. This site does what I have done but with a good deal more attention to detail! Highly recommended.


Everything is Rubbish!

December 22, 2020

All the factories in all the world are just making rubbish. All that differs is the speed and path of the trajectory from Factory to Dump.

Friends, when I say that “Everything is Rubbish”, this is not the moaning of a 60-year old man dissatisfied with new-fangled ways.

This is the insight of a 60-year old man who has seen with perfect clarity that, with very few exceptions, every object one ever ‘owns’ is really just ‘leased’ as it makes its way from a factory to a dump.


Recently I have been sorting my way through a loft filled with the detritus of bringing up two children, along with a few items of memorabilia from earlier in my own life.

And the following thought is irrepressible:

“If anyone else looked at this they would call it junk”.

And in a related theme, a couple of close friends have recently been charged with sorting the belongings of a deceased parent.

And items which were once preciously hoarded as treasures are revealed, in the cold light of a parent’s absence, to have negative monetary value: they are impossible even to give away.

And in a further related theme, I tried to play a VHS-Video Cassette the other day – and the player would not play. [PAUSE for younger readers to laugh at this folly].

I looked inside and poked around – they really are ingenious! – but to no avail. This package of metal and components is now junk. And so are all the 100 or so video cassettes. In their day I probably paid £1000 for them. Now, all the subtlety and artistry that went into their creation is worth nothing.

What has lasted?

I do have a small number of items which have lasted longer than the average.

  • I have a couple of photographs of my parents’ wedding – these are 68 years old and still in excellent condition.
  • Most days I still use the calculator that I bought in 1978 before I went to University. And I have a few books from that era too.
  • And I still regularly listen to music through a pair of Wharfedale Denton loudspeakers. These were a present from my father for my 18th birthday in 1978. I recall that he could not believe that a pair of loudspeakers could conceivably cost £55 – but if he were alive today he would be pleased at their longevity.

But even for these items, it is not that they will last forever, but simply that the arc of their trajectory from factory to dump is slightly longer.

Why do I mention this?

Because the truth has struck me hard in the last few weeks.

  • All the factories in the world are really Rubbish Factories

All that differs is the category of rubbish and the arc of its path from Factory to Dump.

I know I am not the first person to mention this.

And I know that my own life is not a good exemplar of a life which minimises the amount of rubbish generated.

But it just struck me as being deeply, deeply true.
