Archive for the ‘Simple Science’ Category

How big is that fire?

August 12, 2022

Click on the image for a larger version. The picture is courtesy of Michael Newbry.

Friends, you may have noticed that we have recently entered a period of what is euphemistically called “enhanced risk of wildfires”.

And reports of wildfires from around the world include some truly apocalyptic images.

But many of these reports fail to communicate clearly one of the key metrics for fires: the size of the fire.

Some reports do mention the area affected in hectares (abbreviated as ha) or acres, but while I can just about grasp the meaning of one acre or one hectare – I struggle to appreciate the size of a fire covering, say, 6,000 hectares.

In order to convert these statistics to something meaningful, I work out the length of one side of a square with the same area.

Areas expressed in hectares.

A hectare is an area of 100 m x 100 m, or 0.1 km x 0.1 km so that there are 100 hectares in a square kilometre.

So to convert an area expressed in hectares to the side of the square of equal area one takes two steps.

  • First one takes the square root of the number of hectares.
  • One then divides by 10.

So for a fire with an area of 6,000 hectares the calculation looks like this:

  • √6,000 = 77.4
  • 77.4÷10 = 7.74 km

Since the original area was probably quite uncertain I would express this as being equivalent to a square with a side of 7 or 8 km.
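For anyone who would rather let a computer do the arithmetic, here is a minimal sketch of the same two-step rule in Python (the function name is just my own):

```python
import math

def hectares_to_square_side_km(hectares):
    """Side (in km) of a square with the same area: sqrt(hectares) / 10."""
    return math.sqrt(hectares) / 10

# A 6,000 hectare fire is equivalent to a square roughly 7.7 km on a side.
print(hectares_to_square_side_km(6000))
```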

Areas expressed in acres.

An acre is an area of roughly 63.6 m x 63.6 m, or 0.064 km x 0.064 km, so that there are roughly 2.5 acres in a hectare.

I can’t think of an easy way to get a good approximation for acres, but a bad approximation is better than no estimate at all. So I recommend the following 3- or 4-step process:

  • First one divides the number of acres by 2.
  • Then one takes the square root of the result.
  • One then divides by 10.
  • This answer will be about 10% too large.

So for a fire with an area of 15,000 acres the calculation looks like this:

  • 15,000÷2 = 7,500
  • √7,500 = 86.6
  • 86.6÷10 = 8.7 km

At this point one can either just bear in mind that this is a slight over-estimate, or correct by 10%. In this context, the overall uncertainty in the estimate means the last step is barely worthwhile.
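Here is a similar sketch for acres, showing the quick rule (halve, square root, divide by 10) alongside the exact conversion using 1 acre ≈ 4,047 m², so the roughly 10% over-estimate is visible:

```python
import math

ACRE_IN_KM2 = 0.0040469  # 1 acre is about 4,047 square metres

def acres_to_square_side_km_quick(acres):
    """Quick rule: halve, take the square root, divide by 10 (about 10% too large)."""
    return math.sqrt(acres / 2) / 10

def acres_to_square_side_km_exact(acres):
    """Exact conversion via the area in square kilometres."""
    return math.sqrt(acres * ACRE_IN_KM2)

print(acres_to_square_side_km_quick(15000))  # about 8.7 km
print(acres_to_square_side_km_exact(15000))  # about 7.8 km
```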

How bad is the situation in Europe?

Click on Image for larger version. Estimates of the cumulative area (in hectares) burned by wildfires in each of the EU countries. The red bars show data for this year, and the blue bars show the average area burned between 2006 and 2021.

There is a wonderful website (link) which publishes estimates of wildfire prevalence in all the countries of the EU. One output of the website is shown above:

  • The blue bars show the average area burned between 2006 and 2021.
  • The red bars show the cumulative area burned so far this year.

You can immediately see that Spain, Romania, and France are having bad years for wildfires.

But how big an area is 244,924 hectares – the area burned in Spain so far? Using the rule above, one can see that it is an area equivalent to a square with a side of 50 km – roughly equivalent to (say) the area of Cheshire.

The area burned in France so far this year is 60,901 hectares. Using the rule above, one can see that it is an area equivalent to a square with a side of 25 km.

Michael, what was the point of this article?

When trying to visualise large areas expressed in hectares (or acres) I find it useful to work out the length of side of a square which would have the same area.

Sodium Acetate: Fun in the Kitchen with Phase Change Experiments

August 7, 2022

Friends, you may recall that in a recent article I wrote about Phase Change Materials (PCMs) used for thermal storage. I illustrated that article with a measurement of the temperature versus time as some molten candle wax solidified. I then tried to work out how much so-called ‘latent’ heat was released as the wax solidified.

A Twitter source then told me that the actual material used in commercial thermal storage units was sodium acetate trihydrate, and within 18 hours, a kilogram of the substance was delivered to my door.

NOTE: In this article I have used the term sodium acetate to mean sodium acetate trihydrate and in some locations it is abbreviated to SAT.

NOTE: Sodium acetate is pretty safe from a toxicity perspective: it’s an allowed food ingredient E262, but one needs to be careful not to scald oneself – or others – when handling the hot liquid.

So I began a series of experiments in which I made a great variety of very different, but similarly basic, errors. There really is nothing like a practical experiment for making one feel incompetent and stupid! Part of the problem was that I was trying to do other things at the same time as reading the temperature of the two samples (wax and sodium acetate).

To overcome these difficulties, I eventually bought a thermocouple data-logger which can read up to 4 thermocouples simultaneously and save the data on an SD card. This allowed me (a) to get on with life and (b) to do something clever: to measure the cooling curve of a sample of water at the same time. I’ll explain why this was important later.

Eventually – after a series of new basic mistakes such as setting the logging interval to 30 minutes rather than 30 seconds – I began to get some interesting data. And sodium acetate really is an extraordinary substance.

Of course my experiments are not complete and I would really like to repeat the whole series of experiments based on the golden rule, but I really need to clean up the kitchen.

Experiment#1

As shown below, I heated three samples of equal volumes of wax, sodium acetate and water to roughly 90 °C for around 10 minutes – sufficient to melt all the SAT.

I then transferred the samples – while logging their temperature – into a cardboard stand where I guessed that the cooling environment of each sample would be similar.

The results of the first experiment are shown below.

Click on image for a larger version. The temperature of the three samples of water, wax and sodium acetate as a function of time.

The first thing to notice is how odd the curves are for the wax and the sodium acetate. They both have discontinuities in their rate of cooling.

And strikingly, although they start at similar temperatures, they both stay hotter than the water for longer – this is what makes them candidate thermal storage materials. But precisely how much more heat have they released?

To work this out we need to start with the cooling curve for the water which (happily) behaves normally i.e. smoothly. We would expect…

  • …the cooling rate (°C/s) to be proportional to…
  • …the difference between the temperature at any particular time, and the temperature of the environment (roughly 27 °C during Experiment #1).

Using the magic of spreadsheets we can check if this is the case, and as the graph below demonstrates, it is indeed approximately so.

Click on image for a larger version. The cooling rate of the water as a function of the difference between water temperature and the temperature of the environment.

Because the heat capacity of water is reasonably constant over this temperature range, we can now convert this cooling rate into an estimate of how much heat was leaving the water sample at each temperature. To do this we note that for each °C that each gram of water cools, 4.2 J of heat must leave the sample. So if 1 gram of water cools at a rate of 1 °C/s, then the rate of heat loss must be 4.2 J/s or 4.2 W.

Click on image for a larger version. Estimate for the rate of loss of heat (in watts) of the water as a function of the difference between water temperature and the temperature of the environment.

This last graph tells us that when the temperature difference from the environment is (say) 10 °C, then the water is losing 0.104 x 10 = 1.04 watts of heat. Based on the closeness of the fit to the data, I would estimate there is about a 10% uncertainty in this figure.

Finally, if we add the amount of heat lost during the time interval between each data point, we can estimate the cumulative total amount of heat lost.

It is this cumulative total that indicates the capacity of a substance to store heat.

Importantly, because all the samples are held similarly, I think that at any particular temperature the heat loss from each of the other samples must be similar to that for water when it was at the same temperature – even though the cooling rates are quite different.

Using this insight, I converted the cooling curves (temperature versus time) for these materials into curves showing cumulative heat loss versus time.
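For anyone who would like to try the same analysis, here is a minimal sketch of the spreadsheet steps in Python. The heat-loss coefficient (0.104 W/°C, from the water fit above) and the ambient temperature are the only inputs; the temperature readings below are made-up placeholders, not my measured data:

```python
import numpy as np

AMBIENT_C = 27.0        # room temperature during Experiment #1
COEFF_W_PER_C = 0.104   # heat-loss coefficient fitted from the water cooling curve

def cumulative_heat_lost_J(times_s, temps_C):
    """Estimate cumulative heat lost (J) from a cooling curve.

    Assumes the rate of heat loss depends only on the temperature difference
    from the environment, using the coefficient fitted from the water sample.
    """
    times_s = np.asarray(times_s, dtype=float)
    temps_C = np.asarray(temps_C, dtype=float)
    power_W = COEFF_W_PER_C * (temps_C - AMBIENT_C)             # instantaneous heat loss
    dt = np.diff(times_s)
    interval_heat = 0.5 * (power_W[:-1] + power_W[1:]) * dt     # trapezium rule
    return np.concatenate([[0.0], np.cumsum(interval_heat)])

# Made-up example: an exponential cooling curve from 90 °C towards 27 °C
t = np.arange(0, 3600, 30)
T = AMBIENT_C + (90 - AMBIENT_C) * np.exp(-t / 1200)
print(cumulative_heat_lost_J(t, T)[-1], "J")
```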

Click on image for a larger version. Estimates for the cumulative heat lost from the water, wax and SAT (sodium acetate) samples as a function of time. Also shown as dotted lines are the limiting extrapolations from (a) the first part of the cooling curve of the SAT and (b) the final part of the cooling curve. The difference between these two extrapolations is an estimate for the latent heat of the SAT.

We can apply a couple of sanity checks here. The first is that the heat lost from the water comes to about 10.7 kilojoules. Since the 60 g of water cooled from 70 °C to 28 °C then based on a heat capacity of water of 4,200 J/°C/kg we would expect a heat loss of (0.06 x 4200 x 42 =)10.6 kJ. This rough numerical agreement just indicates that the spreadsheet analysis has not resulted in any gross errors.

Looking at the difference between the extrapolation of the first part of the SAT curve, and the extrapolation of the final curve, we see a difference of approximately 23.8 kJ. This heat evolved from 88 g of SAT in the tube and so corresponds to 23.8/0.088 = 270 kJ/kg. We can check that against an academic paper, which suggests values in the range 264 to 289 kJ/kg. So that too seems to check out.

With everything sort of working, I tried the experiment a couple more times.

Further Experiments: coping with super-cooling

The most striking feature of these experiments is that when the sodium acetate freezes, it releases its ‘latent heat’ and warms up to its equilibrium freezing temperature of roughly 58 °C.

From the first experiment – and the experiments I had done previously – it became clear that the sodium acetate tended to supercool substantially. This is the process whereby a substance remains a liquid even when it is cooled below its equilibrium freezing temperature.

[The physics of supercooling is fascinating but I don’t really have time to discuss it here. In facile terms, it is like when a cartoon character runs over the edge of a cliff but doesn’t fall until it realises that there is nothing holding it up!]

In this context, the supercooling is just an irritation! So I tried a different technique in each experiment:

  • In Experiment #1, I stirred the sample to initiate the freezing.
  • In Experiment #2, I placed spoons in each sample in the hope that some additional cooling would initiate the freezing. It didn’t.
  • In Experiment #3, I left the sample for as long as was practical in the hope it would spontaneously freeze. It didn’t.
  • In Experiment #4, I left the sample for longer than was practical. But it still didn’t freeze spontaneously.

So I didn’t manage to control the supercooling – in each case I initiated the freeze by poking, shaking or stirring. I’ll comment on this failure at the end of the article.

The data and analysis from experiments 2, 3 and 4 are shown below.

Click on image for a larger version. The upper three graphs show 3 cooling curves for wax and SAT. The water sample is not shown to simplify the graphs. The Lower 3 graphs show estimates for the cumulative heat lost from the wax and SAT samples as a function of time. Also shown as dotted lines are the limiting extrapolations from (a) the first part of the cooling curve of the SAT and (b) the final part of the cooling curve. The difference between these two extrapolations is an estimate for the latent heat of the SAT.

Conclusions

The most important conclusion from the analysis above is that a given volume of SAT releases much more thermal energy on cooling than the equivalent volume of either water or wax. This is what makes it useful for thermal storage.

If we consider heat released above 40 °C, then the SAT releases around 3 times as much heat as a similar volume of water. This means an equivalent thermal store built using SAT can be up to 3 times smaller than the equivalent thermal store using a hot water cylinder.

The experiments gave four estimates for the heat released as latent heat, which are summarised in the table below. Pleasingly all are in reasonable agreement with the suggested likely range of results from 264 to 289 kJ/kg.

Click on image for a larger version. Estimates of the latent heat of Sodium Acetate Trihydrate (SAT).

Practical Devices

Scaling to a larger sample, 100 kg of sodium acetate would occupy a volume of 68 litres and fit in a cube with a side of just 40 cm or so, and release around 27 MJ (7.5 kWh) of latent heat. This is roughly the equivalent of the heat stored in a 200 litre domestic hot water cylinder.
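A quick back-of-the-envelope check of those figures (the density of roughly 1.45 kg per litre for SAT is a typical literature value, not something I measured):

```python
mass_kg = 100
density_kg_per_litre = 1.45       # typical literature value for SAT
latent_heat_kJ_per_kg = 270       # from the experiments above

volume_litres = mass_kg / density_kg_per_litre           # about 69 litres
cube_side_cm = (volume_litres * 1000) ** (1 / 3)          # about 41 cm
heat_MJ = mass_kg * latent_heat_kJ_per_kg / 1000          # 27 MJ
heat_kWh = heat_MJ / 3.6                                  # about 7.5 kWh

print(volume_litres, cube_side_cm, heat_MJ, heat_kWh)
```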

Sodium acetate is the thermal storage medium in a range of devices that can serve the same purpose as a domestic hot water cylinder but which occupy (in practice) rather less than half the volume. Clever!

Heat is stored by melting the sodium acetate in an insulated box, and released by running cold water through pipes immersed in the sodium acetate: the water is heated and emerges piping hot through your taps! As the sodium acetate freezes, the temperature remains stable – and the water delivered similarly remains piping hot.

But what about the supercooling? How do the devices prevent the sodium acetate from supercooling? I’m afraid I don’t know. This paper discusses some practical considerations for thermal storage devices made using SAT, and it lists a number of additives that apparently rectify shortcomings in SAT behaviour. One of the additives is – curiously – wallpaper paste. I did try experiments with this but I didn’t observe any consistent change in behaviour.

In any case, have fun with your sodium acetate experiments. It is available from here.

The Physics of Guitar Strings

January 24, 2022

Friends, regular readers may be aware that I play the guitar.

And sleuths amongst you may have deduced that if I play the guitar, then I must occasionally change guitar strings.

The physics of tuning a guitar – the extreme stretching of strings of different diameters – has fascinated me for years, but it is only now that my life has become pointless that I have managed to devote some time to investigating the phenomenon.

What I have found is that the design of guitar strings is extraordinarily clever, leaving me in awe of the companies which make them. They are everyday wonders!

Before I get going, I just want to mention that this article will be a little arcane and a bit physics-y. Apologies.

What’s there to investigate?

The first question is why are guitar strings made the way they are? Some are plain ‘rustless steel’, but others have a steel core around which are wound super-fine wires of another metal, commonly phosphor-bronze.

The second question concerns the behaviour of the thinnest string: when tuned initially, it spontaneously de-tunes, sometimes very significantly, and it does this for several hours after initially being stretched.

Click image for a larger version. The structure of a wire-wound guitar string. Usually the thickest four strings on a guitar are made this way.

Of course, I write these questions out now like they were in my mind when I started! But no, that’s just a narrative style.

When I started my investigation I just wanted to see if I understood what was happening. So I just started measuring things!

Remember, “two weeks in the laboratory can save a whole afternoon in the library”.

Basic Measurements

The frequency with which a guitar string vibrates is related to three things:

  • The length of the string: the longer the string, the lower the frequency of vibration.
  • The tension in the string: for a given length, the tighter the string, the higher the frequency of vibration.
  • The mass per unit length of the string: for a given length and tension, heavier strings lower the frequency of vibration

In these experiments, I can’t measure the tension in the string directly. But I can measure all the other properties:

  • The length of string (~640 mm) can be measured with a tape measure with an uncertainty of 1 mm
  • The frequency of vibration (82 Hz to 330 Hz) can be measured by tuning a guitar against a tuner with an uncertainty of around 0.1 Hz
  • The mass per unit length can be determined by cutting a length of the string and measuring its length (with a tape measure) and its mass with a sensitive scale. So-called Jewellery scales can be bought for £30 which will weigh 50 g to the nearest milligram!

Click image for a larger version. Using a jewellery scale to weigh a pre-measured length of guitar string. Notice the extraordinary resolution of 1 mg. Typically repeatability was at the level of ± 1 mg.

These measurements are enough to calculate the tension in the string, but I am also interested in the stress to which the material of the string is subjected.

The stress in the string is defined as the tension in the string divided by the cross-sectional area of the string. I can work out the area by measuring the diameter of the string with digital callipers. These devices can be bought for under £20 and will measure string diameters with an uncertainty of 0.01 mm.

However there is a subtlety when it comes to working out the stress in the wire-wound strings. The stress is not carried across the whole cross-section of the wire, but only along its core – so one must measure the diameter of the core of these strings (which can be done at the ends) rather than the full diameter of the wound playing length.

Click image for a larger version. Formulas relating the tension T in a string (measured in newtons); the frequency of vibration f (measured in hertz); the length of the string L (measured in metres) and the mass per unit length of the string (m/l) measured in kilograms per metre.
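Rearranging the formula in the figure, the tension is T = 4 L² f² (m/l). As a rough check of the sort of numbers involved, here is a minimal sketch using illustrative values for the thickest (E2) string – the mass per unit length here is a typical figure, not my measured one:

```python
def string_tension_N(frequency_Hz, length_m, mass_per_length_kg_per_m):
    """Tension of a stretched string from its fundamental frequency: T = 4 L^2 f^2 (m/l)."""
    return 4 * length_m**2 * frequency_Hz**2 * mass_per_length_kg_per_m

# Illustrative values for the low E (E2) string
T = string_tension_N(frequency_Hz=82.4, length_m=0.64, mass_per_length_kg_per_m=0.010)
print(T)         # roughly 110 N
print(T / 9.81)  # equivalent to hanging a mass of roughly 11 kg from the string
```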

Results#1: Tension

The graph below shows the calculated tension in the strings in their standard tuning.

Click image for a larger version. The calculated tension in the 6 strings of a steel-strung guitar. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The tension is high – roughly 120 newtons per string. If the tension were maintained by a weight stretching the string over a wheel, there would be roughly 12 kg on each string!

Note that the tension is reasonably uniform across the neck of the guitar. This is important. If it were not so, the tension in the strings would tend to bend the neck of the guitar.

Results#2: Core Diameter

The graph below shows the measured diameters of each of the strings.

Click image for a larger version. The measured diameter (mm) of the stainless steel core of the 6 strings of a steel-strung guitar and the diameter of the string including the winding. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

First the obvious. The diameter of the second string is 33% larger than that of the first string, which increases its mass per unit length and lowers its frequency of vibration by a factor of 1.33 – the interval between the first (E4) and second (B3) strings. This is just the basic physics.

Now we get into the subtlety.

The core of the third string is smaller than that of the second string. And the core diameter of each of the heavier strings is just a little larger than that of the preceding string.

But these changes in core diameter are small compared to changes in the diameter of the wound string.

The density of the phosphor bronze winding (~8,800 kg/m^3) is similar to, but actually around 10% higher than, the density of the stainless steel core (~8,000 kg/m^3). This is not a big difference.

If we simply take the ratio of the outer diameters of the thickest and thinnest strings (1.34/0.31 ≈ 4.3), this is sufficient to explain the required two-octave (factor of 4) change in frequency.

Results#3: Why are some strings wire-wound?

The reason that the thicker strings on a guitar are wire-wound can be appreciated if one imagines the alternative.

A piece of stainless steel 1.34 mm in diameter is not ‘a string’, it’s a rod. Think about the properties of the wire used to make a paperclip.

So although one could attach such a solid rod to a guitar, and although it would vibrate at the correct frequency, it would not move very much, and so could not be pressed against the frets, and would not give rise to a loud sound.

The purpose of using wire-wound strings is to increase their flexibility while maintaining a high mass per unit length.

Results#4: Stress?

The first thing I calculated was the tension in each string. By dividing that result by the cross-sectional area of each string I can calculate the stress in the wire.

But it’s important to realise that the tension is only carried within the steel core of each string. The windings only provide mass-per-unit-length but add nothing to the resistance to stretching.

The stress has units of newtons per square metre (N/m^2) which in the SI has a special name: the pascal (Pa). The stresses in the strings are very high so the values are typically in the range of gigapascals (GPa).
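A minimal sketch of the stress calculation, using round numbers similar to those in the graphs (roughly 110 N of tension and a 0.31 mm core for the thinnest string):

```python
import math

def core_stress_GPa(tension_N, core_diameter_mm):
    """Stress in the steel core: tension divided by the core's cross-sectional area."""
    area_m2 = math.pi * (core_diameter_mm * 1e-3 / 2) ** 2
    return tension_N / area_m2 / 1e9

print(core_stress_GPa(tension_N=110, core_diameter_mm=0.31))  # roughly 1.5 GPa
```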

Click image for a larger version. The estimated stress within the stainless-steel cores of the 6 strings of a steel-strung guitar. Notice that the first and third strings have considerably higher stress than the other strings. In fact the stress in these cores just exceeds the nominal yield stress of stainless steel. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

This graph contains the seeds of an explanation for some of the tuning behaviour I have observed – that the first and third strings are tricky to tune.

With new strings one finds that – most particularly with the 1st string (the thinnest string) – one can tune the string precisely, but within seconds, the frequency of the string falls, and the string goes out of tune.

What is happening is a phenomenon called creep. The physics is immensely complex, but briefly, when a high stress is applied rapidly, the stress is not uniformly distributed within the microscopic grains that make up the metal.

To distribute the stress uniformly requires the motion of faults within the metal called dislocations. And these dislocations can only move at a finite rate. As the dislocations move they relieve the stress.

After many minutes and eventually hours, the dislocations are optimally distributed and the string becomes stable.

Results#5: Yield Stress

The yield stress of a metal is the stress beyond which the metal is no longer elastic i.e. after being exposed to stresses beyond the yield stress, the metal no longer returns to its prior shape when the stress is removed.

For strong steels, stretching beyond their yield stress will cause them to ‘neck’ and thin and rapidly fail. But stainless steel is not designed for strength – it is designed not to rust! And typically its yield curve is different.

Typically stainless steels have a smooth stress-strain curve, so being beyond the nominal yield stress does not imply imminent failure. It is because of this characteristic that the creep is not a sign of imminent failure. The ultimate tensile strength of stainless steel is much higher.

Results#6: Strain

Knowing the stress to which the core of the wire is subjected, one can calculate the expected strain, i.e. the fractional extension of the wire.

Click image for a larger version. The estimated strain of the 6 strings of a steel-strung guitar. Also shown on the right-hand axis is the actual extension in millimetres. Notice that the first and third strings have considerably higher strain than the other strings. String 1 is the thinnest (E4) and String 6 is the thickest (E2).

The calculated fractional string extension (strain) ranges from about 0.4% to 0.8% and the actual string extension from 2.5 mm to 5 mm.
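The strain follows from dividing the stress by the Young’s modulus of the steel. In sketch form, taking a textbook value of roughly 200 GPa for steel (an assumption – I did not measure it):

```python
YOUNGS_MODULUS_GPA = 200   # typical textbook value for steel (assumed)

def strain_and_extension(stress_GPa, length_mm):
    """Fractional extension (strain = stress / E) and the extension in millimetres."""
    strain = stress_GPa / YOUNGS_MODULUS_GPA
    return strain, strain * length_mm

strain, extension_mm = strain_and_extension(stress_GPa=1.5, length_mm=640)
print(f"strain = {strain:.1%}, extension = {extension_mm:.1f} mm")  # about 0.8% and 4.8 mm
```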

This is difficult to measure accurately, but I did make an attempt by attaching a small piece of tape to the end of the old string as I removed it, and to the end of the new string as I tightened it.

Click image for a larger version. Method for estimating the strain in a guitar string. A piece of tape is tied to the top of the old string while it is still tight. On loosening, the tape moves with the part of the string to which it is attached.

For the first string my estimate was between 6 mm and 7 mm of extension, so it seems that the calculations are a bit low, but in the right ball-park.

Summary

Please forgive me: I have rambled. But I think I have eventually got to a destination of sorts.

In summary, the design of guitar strings is clever. It balances:

  • the tension in each string.
  • the stress in the steel core of each string.
  • the mass-per-unit length of each string.
  • the flexibility of each string.

Starting with the thinnest string, these are typically available in a range from 0.23 mm to 0.4 mm. The thinnest strings are easy to bend, but reducing the diameter increases the stress in the wire and makes it more likely to break. They also tend to be less loud.

The second string is usually unwound like the first string but the stresses in the string are lower.

The thicker strings are usually wire-wound to increase the flexibility of the strings for a given tension and mass-per-unit-length. If the strings were unwound they would be extremely inflexible, impossible to push against the frets, and would vibrate only with a very low amplitude.

How does the flexibility arise? When these wound strings are stretched, small gaps open up between the windings and allow the windings to slide past each other when the string is bent.

Click image for a larger version. Illustration of how stretching a wound string slightly separates the windings. This allows the wound components to slide past each other when the string is bent.

Additionally modern strings are often coated with a very thin layer of polymer which prevents rusting and probably reduces friction between the sliding coils.

Remaining Questions 

I still have many  questions about guitar strings.

The first set of questions concerns the behaviour of nylon strings used on classical guitars. Nylon has a very different stress-strain curve from stainless steel, and the contrast in density between the core materials and the windings is much larger.

The second set of questions concerns the ageing of guitar strings. Old strings sound dull, and changing strings makes a big difference to the sound: the guitar sounds brighter and louder. But why? I have a couple of ideas, but none of them feel convincing at the moment.

And now I must get back to playing the guitar, which is quite a different matter. And sadly understanding the physics does not help at all!

P.S.

The strings I used were Elixir 12-53 with a Phosphor Bronze winding and the guitar is a Taylor model 114ce

The James Webb Space Telescope

December 24, 2021

Friends, a gift to humanity!

On Christmas Day at 12:20 GMT/UTC, the James Webb Space Telescope will finally be launched.

You can follow the countdown here and watch the launch live via NASA or on YouTube – below.

In May 2018 I was fortunate enough to visit the telescope at the Northrop Grumman facility where it was built, and to speak with the project’s former engineering director Jon Arenberg.

Everything about this telescope is extraordinary, and so as the launch approaches I thought that it might be an idea to re-post the article I wrote back in those pre-pandemical days.

As a bonus, if you read to the end you can find out what I was doing in California back in 2018!

Happy Christmas and all that.

===================================

Last week I was on holiday in Southern California. Lucky me.

Lucky me indeed. During my visit I had – by extreme good fortune – the opportunity to meet with Jon Arenberg – former engineering director of the James Webb Space Telescope (JWST).

And by even more extreme good fortune I had the opportunity to speak with him while overlooking the JWST itself – held upright in a clean room at the Northrop Grumman campus in Redondo Beach, California.

[Sadly, photography was not allowed, so I will have to paint you a picture in words and use some stock images.]

The JWST

In case you don’t know, the JWST will be the successor to the Hubble Space Telescope (HST), and has been designed to exceed the operational performance of the HST in two key areas.

  • Firstly, it is designed to gather more light than the HST. This will allow the JWST to see very faint objects.
  • Secondly, it is designed to work better with infrared light than the HST. This will allow the JWST to see objects whose light has been extremely red-shifted from the visible.

A full-size model of the JWST is shown below and it is clear that the design is extraordinary, and at first sight, rather odd-looking. But the structure – and much else besides – is driven by these two requirements.

JWST and people

Requirement#1: Gather more light.

To gather more light, the main light-gathering mirror in the JWST is 6.5 metres across rather than just 2.4 metres in the HST. That means it gathers around 7 times more light than the HST and so can see fainter objects and produce sharper images.

Comparison of the sizes of the JWST and HST primary mirrors. Image courtesy of Wikipedia.

But in order to launch a mirror this size from Earth on a rocket, it is necessary to use a  mirror which can be folded for launch. This is why the mirror is made in hexagonal segments.

To cope with the alignment requirements of a folding mirror, the mirror segments have actuators to enable fine-tuning of the shape of the mirror.

To reduce the weight of such a large mirror it had to be made of beryllium – a highly toxic metal which is difficult to machine. It is however 30% less dense than aluminium and also has a much lower coefficient of thermal expansion.

The ‘deployment’ or ‘unfolding’ sequence of the JWST is shown below.

Requirement#2: Improved imaging of infrared light.

The wavelength of visible light varies from roughly 0.000 4 mm for light which elicits the sensation we call violet, to 0.000 7 mm for light which elicits the sensation we call red.

Light with a wavelength longer than 0.000 7 mm does not elicit any visible sensation in humans and is called ‘infrared’ light.

Imaging so-called ‘near’ infrared light (with wavelengths from 0.000 7 mm to 0.005 mm) is relatively easy.

Hubble can ‘see’ at wavelengths as long as 0.002 5 mm. To achieve this, the detector in HST was cooled. But to work at longer wavelengths the entire telescope needs to be cold.

This is because every object emits infrared light and the amount of infrared light it emits is related to its temperature. So a warm telescope ‘glows’ and offers no chance to image dim infrared light from the edge of the universe!

The JWST is designed to ‘see’ at wavelengths as long as 0.029 mm – 10 times longer wavelengths than the HST – and that means that typically the telescope needs to be on the order of 10 times colder.

To cool the entire telescope requires a breathtaking – but logical – design. There were two parts to the solution.

  • The first part involved the design of the satellite itself.
  • The second part involved the positioning the satellite.

Cooling the telescope part#1: design

The telescope and detectors were separated from the rest of the satellite that contains elements such as the thrusters, cryo-coolers, data transmission equipment and solar cells. These parts need to be warm to operate correctly.

The telescope is separated from the ‘operational’ part of the satellite with a sun-shield roughly the size of a tennis court. When shielded from the Sun, the telescope is exposed to the chilly universe, and cooled gas from the cryo-coolers cools some of the detectors to just a few degrees above absolute zero.

Cooling the telescope part#2: location

The HST is only 300 miles or so from Earth, and orbits every 97 minutes. It travels in-to and out-of full sunshine on each orbit. This type of orbit is not compatible with keeping a gigantic telescope cold.

So the second part of the cooling strategy is to position the JWST approximately 1 million miles from Earth – beyond the orbit of the Moon – at a location known as the second Lagrange point, L2. But JWST does not orbit the Earth like Hubble: it orbits the Sun.

Normally the period of an orbit around the Sun gets longer as satellites orbit at greater distances from the Sun. But at the L2 position, the gravitational attraction of the Earth and Moon adds to the gravitational attraction of the Sun and speeds up the orbit of the JWST so that it orbits the Sun with a period of one Earth year – and so JWST stays in the same position relative to the Earth.

  • The advantage of orbiting at L2 is that the satellite can maintain the same orientation with respect to the Sun for long periods. And so the sun-shade can shield the telescope very effectively, allowing it to stay cool.
  • The disadvantage of orbiting at L2 is that it is beyond the orbit of the moon and no manned space-craft has ever travelled so far from Earth. So once launched, there is absolutely no possibility of a rescue mission.

The most expensive object on Earth?

I love the concept of the JWST. At an estimated cost of between $8 billion and $10 billion, if this is not the most expensive single object on Earth, then I would be interested to know what is.

But it has not been created to make money or as an act of aggression.

Instead, it has been created to answer the simple question:

“I wonder what we would see if we looked into deep space at infrared wavelengths.”

Ultimately, we just don’t know until we look.

In a year or two, engineers will place the JWST on top of an Ariane rocket and fire it into space. And the most expensive object on Earth will then – hopefully – become the most expensive object in space.

Personally I find the mere existence of such an enterprise a bastion of hope in a world full of worry.

Thanks

Many thanks to Jon Arenberg  and Stephanie Sandor-Leahy for the opportunity to see this apogee of science and engineering.

Resources

Breathtaking photographs are available in galleries linked to from this page

Christmas Bonus

Re-posting this article, I remembered why I was in Southern California back in May 2018 – I was attending Dylanfest – a marathon celebration of Bob Dylan’s music as performed by people who are not Bob Dylan.

The pandemic hit Dylanfest like a Hard Rain, but in 2020 they went on-line and produced a superb cover of Subterranean Homesick Blues which I gift to you this Christmas. Look out for the fantastic guitar solo at 1’18” into the video.

And since I am randomly posting performances inspired by Dylan songs, I can’t quite leave without reminding you of the entirely palindromic (!) version of the song by Weird Al Yankovic.

Our Old Car is Dead. Long Live The New Car!

July 28, 2021

Click for larger image. Our old and new Zafira cars

After 20 years and 96,000 miles, our 2001 Vauxhall Zafira is close to death.

We bought it for £7000 when it was three years old in 2004. Back then it was all shiny and new, but over the last 17 years it has developed a very long list of faults.

Alexei Sayle once said of his car: “When one door closes, another one opens.”. This was one of the faults our car did not have. But it did have a feature such that: “When one door closes, all the electric windows operate simultaneously.”

Over the last few weeks the engine has begun making horrific noises, the engine warning light is on permanently, and there is an acrid stench of burning oil in the cabin.

After much deliberation, we have replaced it with a closely similar car, a 2010 Zafira with only 52,000 miles on its ‘clock’. The new car lacks our old car’s charmingly idiosyncratic list of faults, but what can you expect for £3,200?

In this post I would like to explain the thinking behind our choice of car.

Do we need a car?

Strictly speaking, no. We could operate with a combination of bikes and taxis and hire cars. But my wife and I do find having a car extremely convenient.

Having a car available simplifies a large number of mundane tasks and gives us the sense of – no irony intended – freedom.

Further, although I am heavily invested in reducing my carbon dioxide emissions, I do not want to live the life of a ‘martyr’. I am keen to show that a life with low carbon dioxide emissions can be very ‘normal’.

So why not an electric car? #1: Cost

Given the effort and expense I have gone to in reducing carbon dioxide emissions from the house, I confess that I did want to get an electric car.

I have come to viscerally hate the idea of burning a few kilograms of hydrocarbon fuel in order to move myself around. It feels dirty.

But sadly buying a new electric car didn’t really make financial sense.

There are lots of excellent electric family cars available in the UK, but they all cost in the region of £30,000.

There are not many second-hand models available but amongst those that were available, there appeared to be very few for less than £15,000.

Click for larger version. Annual Mileage of our family cars since 1995 taken from their MOT certificates. The red dotted line is the Zafira’s average over its lifetime.

Typically my wife and I drive between 3,000 and 5,000 miles per year, and we found ourselves unable to enthuse about the high cost of these cars.

And personally, I feel like I have spent a fortune on the house. Indeed I have spent a fortune! And I now need to just stop spending money for a while. But Michael: What about the emissions?

So why not an electric car? #2: Carbon Dioxide

Sadly, buying an electric car didn’t quite make sense in terms of carbon emissions either.

Electric cars have very low emissions of carbon dioxide per kilometre. But they have – like conventional cars – quite large amounts of so-called ’embedded’ carbon dioxide arising from their manufacture.

As a consequence, at low annual mileages, it takes several years for the carbon dioxide emissions of an electric car to beat the carbon dioxide emissions from an already existing internal combustion engine car.

The graph below compares the anticipated carbon dioxide emissions from our old car, our new car, and a hypothetical EV over the next 10 years. The assumptions I have made are listed at the end of the article.

Click for larger version. Projected carbon dioxide emissions from driving 5,000 miles per year in: Our current car (2001 Zafira); Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

For an annual mileage of 5000 miles, the breakeven point for carbon dioxide emissions is 6 or 7 years away. If we reduced our mileage to 3000 miles per year, then the breakeven point would be even further away.

Click for larger version. Projected carbon dioxide emissions from driving 3,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

However, we are a low mileage household. If we drove a more typical 10,000 miles per year then the breakeven point would be just a couple of years away. Over 10 years, the Zafira would emit roughly 12 tonnes more carbon dioxide than the EV.

If we took account of embodied carbon dioxide in a combustion engine car, i.e. if we were considering buying a new car, the case for an EV would be very compelling.

Click for larger version. Projected carbon dioxide emissions from driving 10,000 miles per year in: Our new car (2010 Zafira); and a typical EV. The dotted line shows the effect of grid carbon intensity falling from around 200 gCO2/kWhe now to 100 gCO2/kWhe in 2030.

So…

By replacing our old car with a closely similar model we have minimised the cognitive stress of buying a new car. Hopefully it will prove to be reliable.

And however many miles we drive in the coming years, our new car will reduce our carbon dioxide emissions compared to what they would have been in the old car by about 17%. And no new cars will have been built to achieve that saving.

Assuming that our new car will last us for (say) 5 years, I am hopeful that by then the cost of electric cars will have fallen to the point where an electric car – new or second-hand – might make sense to us.

Additionally, if the electricity used to both manufacture and charge electric cars increasingly comes from renewable sources, then the reduction in carbon dioxide emissions associated with driving electric cars will (year-on-year) become ever more compelling.

However, despite being able to justify this decision to myself, I must confess that I am sad not to be able to join the electric revolution just yet.

Assumptions

For the Zafiras:

  • I used the standard CO2 emissions per kilometre (190 and 157 gCO2/km respectively) from the government database.

For the hypothetical EV

  • I took a typical high efficiency figure of 16 kWh per 100 km taken from this article.
  • I assumed a charging inefficiency of 10%, and a grid carbon intensity of 200 gCO2/kWhe reducing to 100 gCO2/kWhe in 10 years time.
  • I assumed that the battery size was 50 kWh and that embodied carbon emissions were 65 kg per kWh (link) of battery storage yielding 3.3 tonnes of embodied carbon dioxide.
  • I assumed the embodied carbon dioxide in the chassis and other components was 4.6 tonnes.
  • For comparison, the roughly 8 tonnes of embodied carbon dioxide in an EV is only just less than the combined embodied carbon dioxide in all the other emission reduction technology I have bought recently:
    • Triple Glazing, External Wall Insulation, Solar Panels, Powerwall Battery, Heat Pump, Air Conditioning

I think all these numbers are quite uncertain, but they seem plausible and broadly in line with other estimates one can find on the web.
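For anyone who wants to play with the numbers, here is a simplified sketch of the projection for the old car versus a hypothetical EV. It uses the assumptions above, but ramps the grid intensity down linearly and ignores some smaller terms, so it will not reproduce the graphs exactly:

```python
miles_per_year = 5000
km_per_year = miles_per_year * 1.609

old_zafira_gCO2_per_km = 190
ev_kWh_per_100km = 16 * 1.1        # including ~10% charging losses
ev_embodied_tonnes = 3.3 + 4.6     # battery plus chassis and other components

zafira_total_t = 0.0
ev_total_t = ev_embodied_tonnes
for year in range(1, 11):
    grid_gCO2_per_kWh = 200 - 10 * year                  # falling from ~200 to ~100 gCO2/kWh
    zafira_total_t += km_per_year * old_zafira_gCO2_per_km / 1e6
    ev_total_t += km_per_year * ev_kWh_per_100km / 100 * grid_gCO2_per_kWh / 1e6
    print(year, round(zafira_total_t, 1), round(ev_total_t, 1))
# At 5,000 miles per year the cumulative totals cross after roughly 6 to 7 years.
```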

 

The Last Artifact – At Last!

May 20, 2021

Friends, at last a film to which I made a minor contribution – The Last Artifact – is available in full online!

It’s the story of the redefinition of the kilogram which took place on this day back in May 2019.

The director Ed Watkins and his team carried out interviews at NPL back in August 2017 (link) and then headed off on a globe-trotting tour of National Metrology Laboratories.

Excerpts from the film were released last year (link), but somehow the entire film was unavailable – until now!

So set aside 90 minutes or so, put it onto as big a screen as you can manage, and relax as film-making professionals explain what it was all about!

 

The Last Artifact

Gas Boilers versus Heat pumps

May 18, 2021

Click for a larger version. A recent quote for gas and electricity from Octopus Energy. The electricity is six times more expensive than gas.

We are receiving strong messages from the Government and the International Energy Agency telling us that we must stop installing new gas boilers in just a year or two.

And I myself will be getting rid of mine within the month, replacing it with an Air Source Heat Pump (ASHP).

But when a friend told me his gas boiler was failing, and asked for my advice, I paused.

Then after considering things carefully, I recommended he get another gas boiler rather than install an ASHP.

Why? It’s the cost, stupid!

Air Source Heat Pumps:

  • cost more to buy than a gas boiler,
  • cost more to install than a gas boiler,
  • cost more to run than a gas boiler.

I am prepared to spend my own money on this type of project because I am – slightly neurotically – intensely focused on reducing my carbon dioxide emissions.

But I could not in all conscience recommend it to someone else.

More to Buy

Using the services of Messrs. Google, Google and Google I find that:

And this does not even touch upon the costs of installing a domestic hot water tank if one is not already installed.

More to Install

Having experienced this, please accept my word that the installation costs of an ASHP exceed those of replacing an existing boiler by a large factor – probably less than 10.

More to Run

I have a particularly bad tariff from EDF, so I got a quote from Octopus Energy, a popular supplier at the moment.

They offered me the following rates: 19.1 p/kWh for electricity and 3.2 p/kWh for gas.

Using an ASHP my friend would be likely to generate around 3 units of heat for every 1 unit of electricity he used: a so-called Coefficient of Performance (COP) of 3.

But electricity costs 19.1/3.2 = 6.0 times as much as gas. So heating his house would cost twice as much!
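In sketch form, including a typical boiler efficiency of around 90% (my own assumed figure, not part of the quote):

```python
electricity_p_per_kWh = 19.1
gas_p_per_kWh = 3.2

heat_pump_cop = 3.0        # roughly 3 units of heat per unit of electricity
boiler_efficiency = 0.90   # assumed typical value; not part of the quote

ashp_p_per_kWh_heat = electricity_p_per_kWh / heat_pump_cop     # about 6.4 p per kWh of heat
boiler_p_per_kWh_heat = gas_p_per_kWh / boiler_efficiency       # about 3.6 p per kWh of heat

print(ashp_p_per_kWh_heat / boiler_p_per_kWh_heat)  # roughly 1.8, i.e. about twice as much
```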

More to buy, install and run and they don’t work as well!

Without reducing the heating demand within a house – by insulation – it is quite possible that my friend would not be able to heat his house at all with an ASHP!

Radiator output is specified assuming that water flowing through the radiators is 50 °C warmer than the room. For rooms at 20 °C, this implies water flowing at 70 °C.

A gas boiler has no problem with this, but an ASHP can normally only heat water to 55 °C i.e. the radiators would be just 35 °C above room temperature.

As this document explains, the heating output would be reduced (de-rated to use the correct terminology) to just 63% of its output with 70 °C flow.
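That de-rating follows from the standard radiator characteristic, in which output scales roughly as ΔT to the power 1.3, where ΔT is the difference between the mean water temperature and the room. A minimal sketch (the exponent of 1.3 is a typical value, assumed here):

```python
def radiator_output_fraction(delta_T_C, rated_delta_T_C=50, exponent=1.3):
    """Fraction of rated output at a given water-to-room temperature difference."""
    return (delta_T_C / rated_delta_T_C) ** exponent

print(radiator_output_fraction(35))  # about 0.63, i.e. roughly 63% of rated output
```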

What would you do?

Now consider that my friend is not – as you probably imagined – a member of the global elite, a metropolitan intellectual with a comfortable income and savings. I have friends outside that circle too.

Imagine perhaps that my friend was elderly and on a limited pension.

Or imagine that they were frail or confused?

Or imagine perhaps that they had small children and were on a tight budget.

Or imagine that they were just hard up.

Could you in all honesty have recommended anything different? 

These problems are well known (BBC story) but until this cost landscape changes the UK doesn’t stand a chance of reaching net-zero.

 

Recalescence

May 18, 2021

When I used to work at NPL, I remember being really impressed by the work of my colleague Andrew Levick.

Magnetic Resonance Imaging (MRI) machines are able to image many physical details inside human bodies.

One of their little-used features is that they can also be used to image the variation of temperatures throughout bodies.

Andrew was working on a temperature standard that could be used to calibrate temperature measurements in MRI imaging machines.

This would be a device placed inside the imager that could create a volume of imageable organic material at a known temperature. But one of the difficulties was that there must be no metal parts – so it could not contain any heaters or conventional temperature sensors.

So Andrew had the idea of using a vessel containing a supercooled organic liquid. If the transition to a solid was initiated, then the released latent heat – the recalescence –  would warm the liquid back to the melting/freezing temperature, creating a region of liquid-solid mixture at a stable, known and reproducible temperature, ideal for calibrating MRI machines.

Anyway…

Early on in the research he was doing experiments on the different ways in which the liquid crystallised.

He was supercooling organic liquids and then seeding the solidification, and making videos of the solidification process using a thermal imaging camera.

I thought the results were beautiful and put them to music, and that’s what the movie is.

The music is Bach’s Jesu Joy of Man’s Desiring arranged for 12-string guitar by the inimitable Leo Kottke.

If you are interested, YouTube has many versions and lessons!

That’s all. I hope you enjoy it.

How Many Naturally-Occurring Elements are there? Corrigendum

January 28, 2021

And in non-COVID news…

… I received an e-mail from ‘Claire’ the other day pointing out that there was an error in one of my blog articles.

I try quite hard to be ‘right’ on this blog, so despite her politeness, I was distressed to hear this.

The error was in an article written on 15th February 2010 – yes, more than 10 years ago – entitled: Just How Many Naturally Occurring Elements are there?

Reading it again after all these years I was pleased with it. The gist of the article is that there is not a clear answer to the question.

It turns out that the nuclei of the atoms of some elements are so radioactively unstable that even though they do exist on Earth naturally, at any one time there are only a handful of atoms of the substance in existence.

These elements seemed to be in a different category from, say, carbon (which has some stable isotopes) or uranium (which has no stable isotopes). But some of the isotopes of uranium have very long half-lives: 238U has a half-life of 4.468 billion years – roughly the length of time that the Earth has existed.

So of all the 238U has which was donated to the Earth at its formation – very roughly half of it has decayed (warming the Earth in the process) and half of it is still left.

So I had no problem saying that 238U was ‘naturally-occurring’, but that it was a moot point whether Francium, of which there are just a few atoms in existence on Earth at one time, could really be said to be ‘naturally-occurring’.

So in the article I stated that I had stopped giving an exact number for the number of ‘naturally-occurring’ elements – I just say it is ‘about 100’ – and then discuss the details should anyone ask for them.

What was my error?

In the article I stated that Bismuth – atomic number 83 – is the heaviest element which has at least one stable isotope. For elements with larger atomic numbers than Bismuth, every isotope is radioactively unstable.

What Claire told me was that in fact the one apparently stable isotope of bismuth (209Bi, the one which occurs naturally) had been found to be unstable against alpha decay, but with an exceedingly long half-life. The discovery had been announced in Nature in 2003: link

Click image for a larger version. Link to Nature here

What I want to comment on here is the length of the half-life: the authors estimated the half-life of 209Bi to be:

  • 1.9 (± 0.2) x 10^19 years.
  • 19 billion billion years

This is an extraordinarily long time. For comparison, the age of the Universe – the time since the Big Bang – is estimated to be about:

  • 1.4 x 10^10 years.
  • 14 billion years

Imagining that 1 kilogram of pure 209Bi was gifted to the Earth when it was formed roughly…

  • 0.4 x 10^10 years.
  • 4 billion years

…ago. Then since that time less than 1 in a billion atoms (0.15 micrograms) of the 209Bi would have decayed.

We might expect a single nuclear decay in 1 kilogram of pure 209Bi every 5 minutes.
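That ‘one decay every 5 minutes or so’ figure follows from the half-life and the number of atoms in a kilogram. A quick sketch of the arithmetic:

```python
import math

half_life_years = 1.9e19
atoms_per_kg = 1000 / 209 * 6.022e23   # moles of 209Bi in 1 kg times Avogadro's number

decay_constant_per_year = math.log(2) / half_life_years
decays_per_year = atoms_per_kg * decay_constant_per_year     # roughly 1e5 decays per year
minutes_per_decay = 365.25 * 24 * 60 / decays_per_year       # roughly 5 minutes between decays

print(decays_per_year, minutes_per_decay)
```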

How could one measure such a decay? The authors used a transparent crystal of Bismuth Germanate (Bi4Ge3O12) which scintillates when a radioactive particle – such as an alpha particle – passes through it. In this case, the crystal would ‘self-scintillate’.

But the background rate of scintillation due to other sources of radiation is much higher than the count due to the decay of the 209Bi.

To improve the discrimination against the background the authors cooled the crystal down to just 0.1 K. At this very low temperature its heat capacity becomes a tiny fraction of its heat capacity at room temperature, and the energy of even a single radioactive decay can be detected with a thermometer!

Combining light detection and heat detection (scintillating bolometry) helps to discriminate against spurious events.

And my point was…?

For all practical purposes 209Bi is stable. Anything with a half-life a billion times longer than the age of the Universe is at least stable-ish!

But Claire’s e-mail caused me to reflect that the apparently binary distinction between ‘stable’ and ‘unstable’ is not as obvious as I had assumed.

By this extraordinary measurement, the authors have reminded me that instead of saying that something is ‘stable’ we should really state that it may be stable, but that if it decays, its rate of decay is beyond our current limit of detectability.

So for example, we know that neutrons – outside a nucleus – decay with a radioactive half-life of just 10.2 minutes. But what about protons? Are they really unconditionally stable?

People have searched for the decay of the proton and established that protons may be stable, but if they do decay, their half-life is greater than 1.7 x 10^34 years – or more than a million, billion, billion times the age of the Universe.

So now we know.

Rocket Science

January 14, 2021

One of my lockdown pleasures has been watching SpaceX launches.

I find the fact that they are broadcast live inspiring. And the fact they will (and do) stop launches even at T-1 second shows that they do not operate on a ‘let’s hope it works’ basis. It speaks to me of confidence built on the application of measurement science and real engineering prowess.

Aside from the thrill of the launch  and the beautiful views, one of the brilliant features of these launches is that the screen view gives lots of details about the rocket: specifically it gives time, altitude and speed.

When coupled with a little (public) knowledge about the rocket one can get to really understand the launch. One can ask and answer questions such as:

  • What is the acceleration during launch?
  • What is the rate of fuel use?
  • What is Max Q?

Let me explain.

Rocket Science#1: Looking at the data

To do my study I watched the video above starting at launch, about 19 minutes 56 seconds into the video. I then repeatedly paused it – at first every second or so – and wrote down the time, altitude (km) and speed (km/h) in my notebook. Later I wrote down data for every kilometre or so in altitude, then later every 10 seconds or so.

In all I captured around 112 readings, and then entered them into a spreadsheet (Link). This made it easy to convert the  speeds to metres per second.

Then I plotted graphs of the data to see how they looked: overall I was quite pleased.

Click for a larger image. Speed (m/s) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The velocity graph clearly showed the stage separation. In fact looking in detail, one can see the Main Engine Cut Off (MECO), after which the rocket slows down for stage separation, and then the Second Engine Start (SES) after which the rocket’s second stage accelerates again.

Click for a larger image. Detail from graph above showing the speed (m/s) of Falcon 9 versus time (s) after launch. After MECO the rocket is flying upwards without power and so slows down. After stage separation, the second stage then accelerates again.

It is also interesting that acceleration – the slope of the speed-versus-time graph – increases up to stage separation, then falls and then rises again.

The first stage acceleration increases because the thrust of the rocket is almost constant – but its mass is decreasing at an astonishing 2.5 tonnes per second as it burns its fuel!

After stage separation, the second stage mass is much lower, but there is only one rocket engine!

Then I plotted a graph of altitude versus time.

Click for a larger image. Altitude (km) of Falcon 9 versus time after launch (s) during the Turksat 5A launch.

The interesting thing about this graph is that much of the second stage is devoted to increasing the speed of the second stage at almost constant altitude – roughly 164 km above the Earth. It’s not pushing the spacecraft higher and higher – but faster and faster.

About 30 minutes into the flight the second stage engine re-started, speeding up again and raising the altitude further to put the spacecraft on a trajectory towards a geostationary orbit at 35,786 km.

Rocket Science#2: Analysing the data for acceleration

To estimate the acceleration I subtracted each measurement of speed from the next one and then divided by the time between the two readings. This gives acceleration in units of metres per second per second, but I thought it would be more meaningful to plot the acceleration as a multiple of the strength of Earth’s gravitational field g (9.81 m/s/s).

The data as I calculated them had spikes in them because the small time differences between speed measurements (of the order of a second) were not very accurately recorded. So I smoothed the data by averaging 5 data points together.
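In code, the finite-difference and smoothing steps look something like the sketch below (the example readings are made up – the real ones came from pausing the video):

```python
import numpy as np

g = 9.81  # m/s^2

def acceleration_in_g(times_s, speeds_m_per_s, window=5):
    """Finite-difference acceleration from speed readings, smoothed over `window` points."""
    times_s = np.asarray(times_s, dtype=float)
    speeds = np.asarray(speeds_m_per_s, dtype=float)
    accel = np.diff(speeds) / np.diff(times_s)            # m/s^2 between readings
    kernel = np.ones(window) / window
    smoothed = np.convolve(accel, kernel, mode="valid")   # simple 5-point moving average
    return smoothed / g                                   # express as multiples of g

# Made-up example readings
t = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
v = [0, 4, 9, 14, 20, 26, 32, 39, 46, 53]
print(acceleration_in_g(t, v))
```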

Click for a larger image. Smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration, assuming it used up fuel at a uniform rate.

The acceleration increased as the rocket’s mass reduced, reaching approximately 3.5g just before stage separation.

I then wondered if I could explain that behaviour.

  • To do that I looked up the launch mass of a Falcon 9 (data sources at the end of the article) and saw that it was 549 tonnes (549,000 kg).
  • I then looked up the mass of the second stage 150 tonnes (150,000 kg).
  • I then assumed that the mass of the first stage was almost entirely fuel and oxidiser and guessed that the mass would decrease uniformly from T = 0 to MECO at T = 156 seconds. This gave a burn rate of 2558 kg/s – over 2.5 tonnes per second!
  • I then looked up the launch thrust from the 9 rocket engines and found it was 7,600,000 newtons (7.6 MN)
  • I then calculated the ‘theoretical’ acceleration using Newton’s Second Law (a = F/m) at each time step – remembering to decrease the mass by 2,558 kilograms (2.558 tonnes) per second. And also remembering that the thrust has to exceed the rocket’s weight before it will leave the ground! (A short sketch of this calculation is below.)
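A minimal sketch of that ‘theoretical’ calculation, treating gravity as a constant 1 g penalty and ignoring the throttling around Max Q:

```python
import numpy as np

g = 9.81
launch_mass_kg = 549_000
second_stage_mass_kg = 150_000
meco_time_s = 156
thrust_N = 7.6e6

burn_rate_kg_per_s = (launch_mass_kg - second_stage_mass_kg) / meco_time_s  # ~2,558 kg/s

t = np.arange(0, meco_time_s + 1)
mass_kg = launch_mass_kg - burn_rate_kg_per_s * t
accel_in_g = (thrust_N / mass_kg) / g - 1     # subtract 1 g for gravity
accel_in_g = np.clip(accel_in_g, 0, None)     # the rocket cannot accelerate downwards on the pad

print(accel_in_g[0], accel_in_g[-1])          # ~0.4 g at lift-off, rising steeply towards MECO
```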

The theoretical line (– – –) catches the trend of the data pretty well. But one interesting feature caught my eye – a period of constant acceleration around 50 seconds into the flight.

This is caused by the Falcon 9 throttling back its engines to reduce stresses on the rocket as it experiences maximum aerodynamic pressure – so-called Max Q – around 80 seconds into flight.

Click for a larger image. Detail from the previous graph showing smoothed acceleration (measured in multiples of Earth gravity g) of Falcon 9 versus time after launch (s) during the Turksat 5A launch. Also shown as a blue dotted line is a ‘theoretical’ estimate for the acceleration, assuming it used up fuel at a uniform rate. Highlighted in red are the regions around 50 seconds into flight when the engines are throttled back to reduce the speed as the craft experiences maximum aerodynamic pressure (Max Q) about 80 seconds into flight.

Rocket Science#3: Maximum aerodynamic pressure

Rockets look like they do – rocket shaped – because they have to get through Earth’s atmosphere rapidly, pushing the air in front of them as they go.

The amount of work needed to do that is generally proportional to the product of three factors:

  • The cross-sectional area A of the rocket. Narrower rockets require less force to push through the air.
  • The speed of the rocket squared (v²). One factor of v arises from the fact that travelling faster requires one to move the same amount of air out of the way faster. The second factor arises because moving air more quickly out of the way is harder due to the viscosity of the air.
  • The air pressure P. The density of the air in the atmosphere falls roughly exponentially with height, reducing by approximately 63% every 8.5 km.

The work done by the rocket on the air results in so-called aerodynamic stress on the rocket. These stresses – forces – are expected to vary as the product of the above three factors: A P v². The cross-sectional area of the rocket A is constant, so in what follows I will just look at the variation of the product P v².

As the rocket rises, the pressure falls and the speed increases. So their product P v, and functions like P v², will naturally have a maximum value.

The importance of the maximum of the product P v² (known as Max Q) as a point in flight is that if the aerodynamic forces are not uniformly distributed, then the rocket trajectory can easily become unstable – and Max Q marks the point at which the danger of this is greatest.

The graph below shows the variation of pressure P with time during flight. The pressure is calculated using:

P ≈ 1000 × exp(−h/h0)

where the ‘1000’ is the approximate pressure at the ground (in mbar), h is the altitude at a particular time, and h0 is called the scale height of the atmosphere and is typically 8.5 km.

Click for a larger image. The atmospheric pressure calculated from the altitude h versus time after launch (s) during the Turksat 5A launch.

I then calculated the product P v², and divided by 10 million to make it plot easily.

Click for a larger image. The aerodynamic stresses calculated from the altitude and speed versus time after launch during the Turksat 5A launch.

This calculation predicts that Max Q occurs about 80 seconds into flight, long after the engines throttled down, and in good agreement with SpaceX’s more sophisticated calculation.
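Putting the pieces together, here is a sketch of the Max Q estimate using the same scale-height model for pressure (the readings below are illustrative stand-ins, loosely based on a Falcon 9 ascent, not my recorded data):

```python
import numpy as np

SCALE_HEIGHT_KM = 8.5
GROUND_PRESSURE_MBAR = 1000

def aero_stress_proxy(altitude_km, speed_m_per_s):
    """P * v^2 (scaled), a proxy for the aerodynamic stress on the rocket."""
    pressure = GROUND_PRESSURE_MBAR * np.exp(-np.asarray(altitude_km) / SCALE_HEIGHT_KM)
    return pressure * np.asarray(speed_m_per_s) ** 2 / 1e7   # divide by 10 million for plotting

# Illustrative (time, altitude, speed) readings
t = np.array([0, 20, 40, 60, 80, 100, 120])            # s
h = np.array([0.0, 0.8, 3.0, 7.0, 12.5, 20.0, 30.0])   # km
v = np.array([0, 100, 230, 380, 550, 760, 1000])       # m/s

q = aero_stress_proxy(h, v)
print(t[np.argmax(q)], "s")   # with these numbers, Max Q falls at around 80 s
```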

Summary 

I love watching the SpaceX launches and having analysed one of them just a little bit, I feel like I understand better what is going on.

These calculations are well within the capability of advanced school students – and there are many more questions to be addressed.

  • What is the pressure at stage separation?
  • What is the altitude of Max Q?
  • The vertical velocity can be calculated by measuring the rate of change of altitude with time.
  • The horizontal velocity can be calculated from the speed and the vertical velocity.
  • How does the speed vary from one mission to another?
  • Why does the craft aim for a particular speed?

And then there’s the satellites themselves to study!

Good luck with your investigations!

Resources

And finally thanks to Jon for pointing me towards ‘Flight Club – One-Click Rocket Science‘. This site does what I have done but with a good deal more attention to detail! Highly Recommended.

 

