On 1 June 2009, a well-trained crew flew a fully functioning, state-of-the-art aircraft into the Atlantic Ocean, killing all on board. It took less than five minutes to go from cruising at altitude to crashing into the sea. At enormous expense and against all odds, the French government recovered the cockpit voice and flight data recorders, and now we know what happened. Reading the account of the last few minutes of the flight, it is difficult to fathom how the pilots could behave as they did. But the essential element appears to have been tragically simple: the crew were confused.
The confusion appears to have arisen at the interface between automatic and manual control of the aeroplane. As the crew took manual control, they did not know which instruments to believe, and so, time after time, in the face of repeated warnings, they made exactly the wrong decisions. This is a tragedy in itself, but I think there is a wider lesson to be learned. Imagine, if you will, that the crew were not flying an aeroplane but running a nuclear power station.
The safety criticality of control systems for nuclear power stations and for aircraft, while not identical, is of the same order. Engineers exhaustively plan out all possible routes to catastrophe, and then block those routes with a combination of hardware, software and procedural training. The lesson of Flight AF447, however, is that a risk of catastrophe always remains, even with a well-trained, highly motivated crew and perfectly functional hardware. If the controllers are confused, they can make the wrong decisions.
One solution is to build control systems for aircraft and power stations that are driven entirely by software, with no human override. This would avoid mistakes arising from confusion and panic. It would also be cheaper, since there would be less need for highly trained staff. What could possibly go wrong?