Archive for April, 2012

What a nuclear future used to look like.

April 30, 2012
Nuclear Closures

The scheduled closures of nuclear power stations in the UK. The vertical scale shows the generating capacity in GWe - the UK needs around 60 GWe. The horizontal scale runs from 2003 to 2035. The very earliest a nuclear power station could be opened would be in around 2022 - a decade from now. (Click for Larger version)

I may be wrong, but my impression is that the UK has no electricity generation strategy. I do not doubt that everyone in government wishes fervently for an electricity supply which is sustainable and affordable. But wishing is not enough: they need to make sure that it happens. Achieving ‘sustainable electricity generation’ and ‘affordable electricity’ is a really difficult engineering problem – quite apart from any political dimension.

Listening to energy minister Charles Hendry talking today (Sunday 29th April 2012 – listenable for 1 week) I was struck by his utter failure to address the engineering challenge we face. In contrast Sue Ion – a nuclear enthusiast – was completely on the ball even on the magnitude of the renewable commitment we needed to make. I decided to find out who she was.

Sue Ion

Professor Sue Ion OBE FREng

In 2009 – i.e. pre-Fukushima – she gave a talk at the Royal Academy of Engineering in which she presented her vision of how we could build a nuclear-powered future in the UK. It is interesting to listen now and see how much things have changed. As I mentioned recently, I am skeptical that we will build even one more nuclear power station – and doubly skeptical that the nuclear future she outlined could ever come true. But this is one of the clearest presentations I have yet seen of the case for nuclear power. I am sure not everyone will agree with her views, but they are well expressed. Take a look and see what you think.

With three years of hindsight two excerpts struck me as significant. In the first, she pointed out that nuclear fuel was only a small fraction of the cost of nuclear electricity. The largest component was the repayment of the capital borrowed for construction of the plant. Since the financial crash of 2008, it has become considerably harder to find anyone willing to lend the £10 billion required for each power station. The second salient point arose in the Q&A session at the end, in which she summed up our problem in one sentence:

What is required is a stable situation in which investors will place their orders

We don’t have that stability, and I see no prospect that it will return in the near future.

Useful notes (from which the figure at the head of the page was taken) can be found here.

The Gravity Gnome

April 27, 2012
Weighing a Gnome

Weighing a gnome is actually a way of probing the gravity field around us.

Gravity is the most mysterious of the forces we experience in our lives. Impossible to screen against, it extends throughout space to the farthest corners (corners?) of the cosmos, causing every piece of matter in the Universe to affect every other. Wow!

More prosaically, gravity gives rise to the phenomenon of ‘weight’ – the force which pulls us ‘down’ to the Earth. GCSE students are tutored on the difference between mass and weight, and are told that the weight of an object, say a Gnome, varies from one planet to another, but its mass is the same on any planet. However, the Kern instrument company are keen to point out that if you use a sensitive force balance, the Gnome’s weight also changes from place to place around the Earth.

The balance doesn’t even need to be that sensitive. I was surprised to find out by how much the weight of an object measured at a fixed height above sea level varies with location around the Earth: by around 0.5%. So for a Gnome weighing around 300 g, changes of around 1.5 g should be seen, and this is easily detectable. The rationale for the publicity stunt is explained here, and you can follow the Kern Gnome on his journey here. I like this experiment because the measurement is so simple – and yet the physics it uncovers is so profound.
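The size of the effect can be checked from the standard formula for sea-level gravity as a function of latitude. A minimal Python sketch, using the 1980 International Gravity Formula and the gnome’s 300 g mass from the text:

```python
import math

def g_at_latitude(lat_deg):
    """Sea-level gravity (m/s^2) from the 1980 International Gravity Formula."""
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(2 * math.radians(lat_deg)) ** 2
    return 9.780327 * (1 + 0.0053024 * s - 0.0000058 * s2)

g_equator = g_at_latitude(0)   # ~9.780 m/s^2
g_pole = g_at_latitude(90)     # ~9.832 m/s^2
variation = (g_pole - g_equator) / g_equator   # ~0.53%

# A 300 g gnome weighed on a force balance calibrated at the equator
# would read roughly 1.6 g heavier at the pole.
apparent_change_g = 300 * variation
print(f"g varies by {100 * variation:.2f}% -> gnome reading changes by {apparent_change_g:.1f} g")
```

This ignores altitude and local geology, which add further (smaller) variations on top of the latitude effect.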

The light-hearted video below shows my children’s reflections on the mystery of Gravity.

P.S. After a period of steady decline, my own weight has been mysteriously increasing. I think this may be due to a fluctuation in the gravitational constant G. More about this in future articles.

Open access is not such a smart idea

April 25, 2012
Elsevier ban

Should readers pay to read academic journals, or should the authors have to pay to publish their articles?

The recent move by the Wellcome Trust to create a new Open Access journal has highlighted a long-standing problem with the publication of academic and scientific works. The Wellcome Trust’s proposed solution has been reviewed favourably in all the newspapers I have seen, but I think the extent of the problem has been overstated, and there are serious problems with the proposed solution.

Let me explain:

  • Publishing journals is an essential activity for science. Journals form the archive of our collective endeavours and they are precious.
  • Publishing journals costs money and somebody has to pay for it. This can be either (a) Readers, or (b) Authors. Option (c) would be ‘somebody else’, but there are no candidates that I know of who would be willing to do this. There are difficulties with both choices (a) and (b).

The problems with the current system (a) have been well highlighted, but in summary the main objection is this: since the work was generally paid for with public money, the public should have the right of free access to it. What possible objections could there be to such an obviously fair proposal? Well, slightly to my surprise, I have four!

  • My first objection is that the arrangement by which someone who wants to publish something pays to have it published is well established: it is called vanity publishing. Another name for people who pay to have their work published is advertisers. Neither vanity-publishers nor advertisers are in general associated with high quality output. With the ‘author-pays’ dynamic, it is the author who is the ‘customer’ and in business, the ‘customer is king’. There is a serious danger that journals will lower the bar to publishing for those that are able to pay. In contrast, with the current system, anyone can publish for free – if the quality of their article is high enough – and so it is in the interest of journals to simply pick the best papers.
  • My second objection concerns the cost of making a journal article ‘Open Access’: typically £1500. This is easy to find for well-funded researchers, but it is not so easy for poorer researchers. Imagine a PhD student who wants to publish their work but whose supervisor objects. Imagine a researcher in a poorer country for whom the idea of spending £1500 is a dream! I know retired scientists who publish excellent work but who could not afford to pay for it to be published. And consider this: would Einstein have published all his papers in 1905, or would he have only submitted what he could afford? Since academics are judged in large part on the number of papers published (not necessarily a good idea, but a fact of life), this would inevitably favour wealthier students and Universities. Is it not better to have all the information published, with a criterion based solely on the quality of the work rather than on what people can afford to publish?
  • My third objection is that the problem with access is overstated. All the abstracts of each paper are available for free (e.g. here), and if you write to the authors they will usually be happy to send a pre-print of the paper – an unformatted version containing all the information and data – for nothing. I admit this is slower than ‘clicking’ and downloading – but it is free. Remember journals can only copyright the presentation of (say) a table of data, not the data in the table.
  • My final objection is that if the budgets for science are fixed, then this proposal would cut the amount of money spent on science. If (say) £100,000 of research produces 1 paper, then paying £1500 for the publication involves spending £1500 less on the project – a 1.5% cut. If the research project was more successful and produced two papers, the cut would be 3%!
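The arithmetic of that final objection is simple enough to sketch – the grant size and £1500 fee are the figures used above:

```python
def science_budget_cut(grant_pounds, papers, fee_per_paper=1500):
    """Fraction of a fixed research grant diverted to open-access publication fees."""
    return papers * fee_per_paper / grant_pounds

# More successful projects (more papers) suffer a proportionally larger cut.
print(science_budget_cut(100_000, 1))  # a 1.5% cut
print(science_budget_cut(100_000, 2))  # a 3% cut
```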
I am not denying that there are problems with the current system of academic publications – principally its expense, which makes access expensive (typically $30 per article): academic publishing is run by businesses, not charities or not-for-profit foundations.

Speaking entirely personally, my main problem with academic journals is the declining ratio of the number of papers I read to the number of papers I write. I am under intense personal pressure to publish papers and increasingly I find it hard to find the time to read as many papers as I would like. If my experience is anything close to typical, then that bodes ill for the quality of published work.

Prosperity without Growth. Is it possible?

April 23, 2012
Prosperity without Growth

Prosperity without Growth by Tim Jackson. Is it possible to create a prosperous society in which there is no economic growth? Tim Jackson thinks it is, but he didn't convince me.

The idea of a non-growing economy may be anathema to an economist. But the idea of a continually growing economy is anathema to an ecologist.

This quotation is from Prosperity without Growth by Tim Jackson, a book which focuses on the conflict between economic success and ecological sustainability. The book asks the fundamental question:

  • Is it possible to have a prosperous society which is not continually growing?

Jackson asserts that prosperity is indeed possible without growth, but only if we re-consider what we mean by prosperity.

  • Growth of Gross Domestic Product (GDP), argues Jackson, is not a good indicator of anything that really corresponds to prosperity in people’s lives. GDP growth is linked with increased consumption of all kinds of objects and services, and while that may act as a proxy for prosperity in the developing world, this is no longer true in the developed world. Jackson asks whether a society in which people have ‘too much’ is meaningfully more prosperous than one in which everyone has ‘enough’?
  • Prosperity, he argues, is multi-dimensional: it involves community, access to education, chances to be with our family, and the security that comes from an ecologically sustainable lifestyle. It involves people ‘flourishing’. He argues that ‘consumerism’ is actually at the root of what makes people miserable. If we could just find a way to incorporate indicators of genuine prosperity (as is done in Bhutan) and optimise these, then the absence of conventional economic GDP growth would not be catastrophic.

I have never fully understood why ‘growth’ plays such a central role in our capitalist system. I understand that year-on-year people will tend to improve processes and produce more with the same resources. And I understand that over the last two hundred years, astonishing technological changes have driven new ways of doing things, and symbiotic social changes have created new lifestyles that allow (or possibly compel?) us to both produce more and to consume more. So growth has been a feature of the economy we have all grown up in. But is it essential?

So I profoundly sympathise with Jackson’s key points, but I am appalled by his vision of a zero-growth prosperous society. It reminds me of many of the worst regimes on Earth.

  • Jackson identifies ‘novelty’ as a fundamental problem. A problem? In my lifetime the world has been utterly changed by ‘novelty’, most notably that arising from computing technology. Would a Jacksonian society have stopped with the 386 processor and said: “that’s enough” or “Who could need more than 640k of memory?”. Is he in favour of body scanners, or are they too novel? How about X-rays? Maybe tractors are too modern because they cause unemployment on farms and employment is a way of ‘flourishing’? What about new alloys? Or modern telecommunications? Or transport? Or vaccines? At some point a Jacksonian society would try to stop the clock, and the rest of the world would move on. It reminds me of Cuba.
  • A Jacksonian society would have full employment, and economic activities would be focussed on the sustainable provision of food and energy. It is a world in which ‘the humble broom would be preferred to the diabolical leaf blower’. This sounds very much like a planned self-sufficient economy in which individual ‘flourishing’ would substitute for material wealth. It reminds me of China during the Cultural Revolution.
  • Jackson’s vision fails to account for the chaos of human life and our aspiration to do the best we can for ourselves and our family. In the real world there will be dissenting views, and people will be able to leave Jackson-Land for other parts of the world. Peeping across the border, the bright lights and fast cars of unsustainable Jeremy Clarkson-Land would probably look pretty attractive.

I applaud Jackson for trying to be clear about how different his hypothetical world would be, but ultimately it just doesn’t seem like a world in which individual people would choose to live. Do I have an alternative vision? No.

Ultimately, the problem is one of sharing finite resources, and here the technology that Jackson so objects to has given us previously unimaginable opportunities for novel ways of working and living – and has, for the first time in history, given humanity a shared and truly global perspective.

Given the UK’s previously privileged position, it seems inevitable – whether we like it or not – that we will have to consume less than we have previously. This will feel strange and even just beginning to do it will present severe social challenges. These first steps will be really hard, but some of the principles Jackson expounds could highlight the fact that we have more choices than we might otherwise have imagined.

Sound into Light

April 20, 2012

A Laser Doppler Vibrometer shining a laser onto a resonator. The vibrometer could detect motion of a membrane vibrating with an amplitude of less than one thousandth of a millimetre. The inset shows the laser spot in detail.

Recently I have encountered two pieces of technology which have left me speechless: gobsmacked in amazement. And as I write about it now, I realise that they both did the same thing – they turned sound into light – but in completely different ways.

The first device was a Laser Doppler Vibrometer – a device that could detect tiny motions of a surface. It worked by shining a low-power laser – like a laser pointer – onto a surface and analysing the light which scattered off the surface. What was amazing was from how far away it could do this – and how sensitive it was. We tested it on our resonator – the copper object in the photograph above. The device was about a metre away and yet it could easily detect vibrations of just a thousandth of a millimetre – it was as sensitive as our (very expensive) microphones! The company that makes the device (Polytec) have made a video that explains how it works.

The second device was an optical fibre that could sense sound. What? A company (Silixa) have developed a special optical fibre – which can be up to 10 km long – and a device which turns this fibre into the equivalent of 10,000 independent microphones. It can simultaneously listen to the sound at every metre along the fibre.

Imagine hanging the fibre in a room – you could listen at any point along the fibre without hearing the sounds from everywhere else! Silixa have thankfully produced products that are dramatically more useful than a party eavesdropper!

Silixa haven’t disclosed how it works but I think I can guess. I think it is an optical fibre with two cores. Light can travel in each of the two cores of the fibre almost completely independently. But when the fibre is strained – bent – even ever so slightly, light can leak from one core to the other.

I think the device works by shining a short pulse of bright laser light down one fibre. The pulse is only around 1 nanosecond long and so is only around 0.2 metres from start to finish. It travels from the laser to the end of a 10 kilometre fibre in around 50 millionths of a second. If the fibre is unstrained, very little light leaks from this brightly illuminated core to the dark core. But if the fibre is strained – perhaps because a sound wave has bent it microscopically – some light leaks from the bright core to the dark core. It then has to travel back to the source where it can be detected.

  • The time delay between sending the pulse into one core and detecting the light in the second core allows one to work out where on the fibre the light came from.
  • If one sends several thousand pulses per second, one can evaluate the state of strain of the fibre several thousand times per second.
  • If one just detects the light with a particular delay after the initial pulse, then one is sensitive to vibrations of the fibre at a particular distance from the source:
    • If we detect light 10.000 microseconds after the pulse, we are detecting light from 1.000 km along the fibre.
    • If we detect light 10.010 microseconds after the pulse, we are detecting light from 1.001 km along the fibre.
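If this guess at the mechanism is right, the delay-to-distance conversion is straightforward. A minimal sketch, assuming light travels at roughly 2 × 10⁸ m/s in glass fibre (refractive index ~1.5) and that the leaked light makes a round trip – out in one core, back in the other:

```python
V_FIBRE = 2.0e8  # speed of light in glass fibre, m/s (assumed)

def strain_location(delay_s):
    """Distance along the fibre at which the leaked light originated.

    The light travels out in one core and back in the other, so the
    round-trip path length is twice the distance to the strain point.
    """
    return V_FIBRE * delay_s / 2

# A 1 ns pulse occupies about 0.2 m of fibre at this speed.
pulse_extent_m = V_FIBRE * 1e-9

print(f"{strain_location(10e-6):.1f} m")   # 1 km along the fibre
print(f"{strain_location(100e-6):.1f} m")  # the far end of a 10 km fibre
```

The ~0.2 m pulse length is what limits the device to roughly metre-scale position resolution, consistent with the ‘10,000 independent microphones’ figure for a 10 km fibre.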

What is amazing is not that I can think up an explanation – right or wrong I don’t know – but that a company can make a product which actually does this in the field!

I take my hat off to Polytec and Silixa – they have made products which made my jaw drop. Thanks.

Could fracking be our least worst option?

April 18, 2012

Shale Gas Operations in Wyoming extend over the landscape in an environmentally devastating way. Shale gas in the UK would be unlikely to develop in this way, but there are still risks. Picture from Nature courtesy of National Geographic.

News today that a committee of experts has recommended a resumption of ‘fracking’ in the UK. At first I was surprised that this earthquake-inducing technology had been approved. But on reflection I see that the approval is really a measure of just how addicted we are to hydrocarbon consumption.

I expect the media will shortly have stories about the prospect of a financial bonanza, lower gas prices and independence from ‘foreign’ influence. These are all excellent economic reasons to invest in fracking. There will also be stories about the environmental risks, which are real and significant. And the balance between environmental risk and economic benefit will be the conflict at the heart of the media ‘stories’. But the stories will probably miss the one real reason why fracking may just possibly be justified: it could reduce carbon emissions.

As I write this, the UK is using 31.87 GW of electricity of which 45% is being generated by coal – the most carbon intensive fuel we have. Only 19% is being generated by gas. Because each unit of electricity generated by coal emits twice as much carbon dioxide as a unit of electricity generated by gas, this means that right now, 83% of electricity-associated carbon dioxide emissions are coming from coal-fired power stations. If we replaced them with gas we would make a big contribution to reducing our carbon dioxide emissions. This is the real argument for increasing the supply of gas: switching off our coal-fired stations and building new gas stations could be achieved in a decade or so and would be less controversial and have a lower capital cost than switching to nuclear.
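The 83% figure follows directly from the generation shares quoted above, treating the remaining (nuclear, wind, etc.) generation as effectively carbon-free for this comparison:

```python
coal_share, gas_share = 0.45, 0.19  # fraction of UK generation at that moment
coal_intensity = 2.0                # CO2 per unit of electricity, relative to gas = 1.0

coal_emissions = coal_share * coal_intensity
gas_emissions = gas_share * 1.0
coal_fraction = coal_emissions / (coal_emissions + gas_emissions)

print(f"{100 * coal_fraction:.0f}% of fossil-generation CO2 comes from coal")  # ~83%
```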

Could fracking really be a sensible option? The answer is ‘possibly’, but there is considerable reason to be skeptical. The problem arises because (evaluated over a century) methane is roughly 20 times more effective than carbon dioxide as a greenhouse warming gas. By switching from burning coal to burning methane, we can reduce carbon dioxide emissions by 50%. However, if in mining and delivering methane to the power stations we leak just 1 part in 20 (about 5%), then we save nothing in carbon emissions. And unless we had measured the loss, we might not even know about it. And if we leaked more than 1 part in 20 of the methane, we would actually make things worse.
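The break-even arithmetic can be sketched with the rough figures above – a factor of 20 for methane’s century-scale warming effect, and a 50% CO2 saving for gas over coal. This is the article’s own simplified model, not a rigorous greenhouse-gas accounting:

```python
GWP_METHANE = 20   # CO2-equivalent warming of methane over a century (rough figure)
GAS_VS_COAL = 0.5  # CO2 from a gas plant relative to a coal plant

def warming_vs_coal(leak_fraction):
    """Total CO2-equivalent emissions of gas generation, relative to coal.

    Rough model: each leaked unit of methane counts GWP_METHANE times
    the CO2 that burning it would have produced.
    """
    burned = GAS_VS_COAL
    leaked = leak_fraction * GWP_METHANE * GAS_VS_COAL
    return burned + leaked

print(f"{warming_vs_coal(0.00):.2f}")  # the hoped-for 50% saving
print(f"{warming_vs_coal(0.05):.2f}")  # 5% leakage: no better than coal
print(f"{warming_vs_coal(0.10):.2f}")  # 10% leakage: worse than coal
```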

Can you tell what comes next? Yes, recent measurements of the amount of methane which leaks from US fracking fields reveal that the losses almost completely cancel the benefit of burning gas.

If I were in charge, I would skip the fracking adventure and try to drag the UK kicking and screaming into a genuinely sustainable way of living. I think we need an economic and social response to the hazard from carbon emissions on the scale of a war. But I am not in charge. So if we do embark on this fracking adventure then it is essential that we:

  • do not leak methane hither and thither, and
  • use the gas to eliminate coal-fired electricity generation.

Otherwise fracking will become the UK’s tar sands disaster.

Learning: Exploring the Landscape of Knowledge

April 16, 2012

Different people – ‘learners’ – experience the ‘knowledge’ landscape differently.

‘Learning is mysterious’. And I was reminded of this by comments on the use of science demonstrations by Alom Shaha, and by other reflective comments from practising teachers, such as Candlelighter’s objection to the assertion that

Students behave in broadly similar ways, and, like machines, if the correct inputs are submitted, predictable outputs will be emitted.

At the nub of Candlelighter’s objection is the idea that ‘knowledge’ can be broken down into sub-units, often called ‘modules’. The central thesis of this idea is that if an individual accretes enough of these ‘modules’ they will then ‘know something’. It is analogous, I think, to the idea that if a child owns enough Lego™ bricks they will be able to build something of interest – say, a bridge.

I do not hold with this ‘modular theory of knowledge’. Within certain limited realms, the theory may work, but it simply does not explain how people come to understand complex ideas, or become expert in fields such as physics. On meeting experts I am amazed that they seem able to converse about ‘modules’ about which they have never been taught. How can they possibly do that? So on these grounds alone I reject this modular idea as inadequate. It also does not match my own experience of either teaching or learning. I would like to put forward a different representation of human knowledge.

The ‘geographical theory of knowledge’ holds that human knowledge can be envisaged as a landscape, and what we experience as we move through the landscape varies from person to person. Some people experience almost nothing, whereas others are acutely aware of, say, the colour of the terrain. Others understand the geology or history of the area and others its folklore. In short, there are a large number of ways in which people can become familiar with a region of the ‘knowledge landscape’.

If this were an accurate representation of the way human beings ‘experience’ knowledge, this idea would have consequences for the process we call ‘learning’. In the knowledge landscape, ‘learning’ consists of (a) moving from ‘where’ you now are to a new ‘location’, and (b) becoming familiar with the new location. Facilitating this journey corresponds to ‘teaching’. And the normal process of class teaching corresponds to ‘giving people directions’ – a bit like being a tour guide.

People in a learning group will only rarely be starting from the same place. So issuing them all with the same directions is as mad as giving everyone ‘directions to London’ when they are starting from different places: inevitably people will get lost. Being ‘lost’ in the knowledge landscape corresponds to a state of confusion. In learning environments it is inevitable that people will get lost (i.e. get confused) as they embark on journeys to and fro across the landscape. But in the same way that people who are lost eventually find their way back home, confusion is the precursor to learning. And being able to cope with this confusion without panicking is critically important.

How can an institution devoted to learning (i.e. to promoting journeys across the knowledge landscape) respond to the fact that people require individual directions to (a) first find out where they are, and then (b) get to a new place on the landscape? The only way that I know is to talk to people, i.e. to engage in dialogue. This is in stark contrast to many experiences of education in which people are subjected to monologue. Monologue is characteristic of the modular theory, in which it optimises the ‘amount of knowledge’ passed from the teacher (talker) to the learner (listener).

Insisting on a uniform learning procedure does not guarantee a uniform experience or that people will eventually ‘know’ the same thing. Imagine insisting that everyone goes to the top of, say, the Eiffel Tower. For some the experience is terrifying, for others a drudge, for some a pleasure, and others a great joy. And similarly with many ‘intellectual destinations’. Some find thermodynamics deadly dull, others serenely beautiful. Others useful. So just making sure that people have ‘passed through’ a particular point on the knowledge landscape does not predetermine what they then ‘know’.

Where am I? This is the most profound question of all. Issuing directions – personalised or not – to direct people from where they are to another place only makes sense if people know where they are in the first place. It doesn’t matter if the teacher ‘knows where they are’: the student needs to find that out. Many people engaging with scientific topics, especially initially, are in a state of great ignorance. In this analogy, this corresponds to being utterly lost. Before any directions can be acted upon, a person has first to find out where they are. This learning step is in many ways the most difficult step of all. It involves acknowledging that the knowledge landscape exists and that they don’t know where they are on it. This can engender fear and confusion – especially in adults, who feel embarrassed very easily. But finding out ‘where one is’ can also engender elation – it is often something which people have sought for a long time.


Rocket Boys

April 12, 2012

On a previous visit to California, I introduced my friends to the thrills of launching water rockets. On this visit, they returned the favour by introducing me to chemical rockets. It was every bit as enjoyable as water rockets, with the added frisson of danger, especially when launching on the beach under the flight path from LA International Airport.

Rocket Engine

The internal structure of a rocket engine. After ignition, the fuel burns and the ejected hot gas drives the rocket forward. After a delay a second charge blasts the top off the rocket, releasing the parachute. (Click for larger version)

The rocket bodies were simply cardboard tubes, but the rocket ‘motors’ are ingenious. They are built into a cardboard tube, like a firework, and contain an electrical ignition ‘fuse’, rocket fuel, a ceramic nozzle to channel the hot gases, a delay element, and a second charge which blows the top off the rocket and releases the parachute. Very clever.

If you haven’t read ‘Rocket Boys’ or seen the movie, then please allow me to recommend them both. My experience on the beach backs up the lesson from the book: if you have young boys and want to channel their fascination with explosives, fire, weapons and space in a relatively positive and harmless direction (upwards!), then firing rockets must be just about the best activity. By the way, despite this gender-biased recommendation, Stephanie was the only bona fide space scientist amongst us!

001 Leahy Family Launch cropped

Team Sandor-Leahy pose for a picture after three successful launches. One of the team was not a boy.

Brownian Motion observed in Milk

April 11, 2012

One of the best things about working at NPL is the people one gets to work with. The other day, during a Science Ambassador meeting, James Miall and Robert Ferguson showed a live video projection of the Brownian motion of fat globules in semi-skimmed milk. (There are fat globules in milk? Ughhh!) I was blown away! The one minute video above is typical of what they projected.

I think I was supposed to have seen the demonstration as a child, but the education system failed me – not everything was perfect in the ‘old days’  ;-). When I was a lecturer I bought a ‘smoke cell’ to rectify this shortcoming in my education, but I was unable to convince even myself that I could see the effect. So in my whole life I had never seen ‘Brownian’ motion. I thought the clarity of their demo was fantastic and I loved the fact that it just used normal milk.

So what is Brownian Motion? It is the random motion of large particles caused by the motion of the much smaller molecules that are all around them. In the case above, the circles you see are the fat globules in milk suspended in water as viewed through a web-cam down a microscope. The globules vary in diameter from 0.5 to 5 micrometres. The water in which they are suspended is made of H2O molecules which are typically 0.0003 micrometres across. The picture below tries to capture the scale – but I haven’t managed to draw the water molecules small enough. Brownian motion is the jiggling of the gigantic fat globules due to the motion of the tiny water molecules.

Fat Globules

Rough indication of the relative size of water molecules (open circles) and an individual globule of fat in the movie at the head of this article (shown as a giant incomplete circle). I have failed to draw the water molecules small enough to truly represent their minuscule size.

Is that really plausible? Well, it might not seem so because each fat globule weighs around 60 billion times more than a water molecule. In other words, if the water molecules weighed as much as a UK penny coin (3.6 g), then in proportion the globule would weigh around 220,000 tonnes. Think about millions of people continually throwing pennies at an ocean liner: could they really move it? And consider also that there are molecules on all sides of the globule and (since their motions are random) their effect should almost exactly cancel out.

In fact, that is the explanation. And the reason that the motion is so (relatively) easily visible is due to one key fact:

  • The speed of the water molecules is stupendous. On average the speed of water molecules at room temperature is around 620 metres per second – well over 1000 m.p.h.

If you are viewing this on a normal computer screen and could somehow have seen an individual water molecule move uninterrupted by collisions with its neighbours, then at this scale of viewing it would have travelled way past the orbit of the moon in one second. So although the molecular mass is tiny, their momentum (the mass multiplied by the velocity) is much larger than one might think.
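The quoted molecular speed can be checked from kinetic theory. A rough sketch, using the root-mean-square speed √(3kT/m) and the mean speed √(8kT/πm) for a single water molecule at room temperature (the ~620 m/s figure in the text sits between the two):

```python
import math

K_B = 1.380649e-23           # Boltzmann constant, J/K
M_WATER = 18 * 1.66054e-27   # mass of one H2O molecule, kg
T = 293                      # room temperature, K

v_rms = math.sqrt(3 * K_B * T / M_WATER)               # ~640 m/s
v_mean = math.sqrt(8 * K_B * T / (math.pi * M_WATER))  # ~590 m/s

print(f"rms speed {v_rms:.0f} m/s, mean speed {v_mean:.0f} m/s")
```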

So although the cancellation of the random jigglings of the water molecules is indeed almost exact, it is not quite perfect. Judging from the video, the globules jiggle at around one tenth of a globule diameter per second – which is around 0.1 micrometres per second, roughly 6 billion times slower than the average speed of the water molecules.

Putting these two factors together we see that the jiggling of the millions of molecules on either side of the fat globules can be almost perfectly balanced – but it only takes an imbalance of around 10 molecules to explain the level of jiggling observed.

Working out detailed numbers for Brownian Motion is hard, and so you won’t be surprised to find that the person who sorted it all out was Albert Einstein.

RI: Re-Inventing itself

April 9, 2012

Alok Jha Hosts the Guardian Science Weekly podcast from the RI

The Royal Institution is arguably the home of ‘Science Communication‘. From 1965 onwards, the broadcasts of the Christmas Lectures gave eminent scientists several hours in which to explain their work and place it in a context that could be readily appreciated by younger people. This unique format has had an enduring impact on UK culture.

But in this modern age the RI has had a hard time re-inventing itself. Producing a series of 6 lectures once a year is not enough.  I am not sure it is quite there yet, but watching the podcast at the head of this article over at the RI Channel, I get the feeling it might be getting close.

I have written previously that if the RI were really committed to science communication it should leave its hyper-posh headquarters in the most exclusive part of London, and move to the Midlands. Maybe I spoke too soon. This kind of podcast brings a topicality and accessibility to modern science communication that television – with its obsession with short, punchy ‘packages’ – just can’t touch. And the location definitely adds a little something.

The features I liked were:

  • The relaxed and informal atmosphere: Alok Jha does a great job, acknowledging the posh surroundings, and then ignoring them.
  • Some great, simple, demonstrations and excellent conversation that brought the demonstrations alive.
  • I loved Anna Starkey’s ‘perception’ demonstrations (9 to 24 minutes) and Alom Shaha’s simple motor (24 to 30 minutes) – I have ordered the magnet already!
  • The self-consciousness, shown particularly by Alom, that science demonstrations need to be more than Scientertainment

The things I didn’t like were:

  • The random explosion using liquid nitrogen trapped in a bottle (47 to 49 minutes). This is a seriously dangerous demonstration and in this context is pointless.
  • The condescending attitude towards the ‘technician’ Andrew Marmery. There is a fair chance he was the most knowledgeable scientist in the room.
  • The fact that the combined intellects of the demonstrators could not explain why the bottle explodes (56 minutes to the end). It has nothing to do with the ‘critical’ temperature!

Watch the full video with related content here. And while you are over there, check out the back catalogue of Christmas lectures and the excellent ‘Tales from the Prep Room’.
