No Man's Land

by Kevin Sullivan


  28.

  We’re in the age of automation addiction.

  There’s no doubt that automation in aviation has saved lives. Advances in software design and performance, fly-by-wire and electronic flight-control systems, and flight simulators have also generated savings to airline operators competing for profits and market share.

  But I’m a survivor of an automation disaster, and I have a responsibility to issue a warning.

  If increased automation is incorporated into an aircraft’s design, then it must be a good crew member. It should enhance and improve the human pilots’ operational effectiveness and aircraft management.

  Unfortunately, on QF72 we had a bad crew member. It presented us with incorrect information and distracted us with erroneous warnings that couldn’t be silenced. It refused to cooperate and denied interaction. After the second pitch-down, its faulty systems announced themselves in such numbers that they overloaded the aircraft’s ability to display and refresh the flood of messages, yet it never displayed the faulted operation of ADIRU 1. Its PRIMs commanded the simultaneous activation of two protections whose outputs were combined at the aircraft’s elevator, a behaviour omitted from the manufacturer’s manuals. How many protection manoeuvres can the computers add together after sensing a deviation from normal operation?
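
  To illustrate the concern, here is a minimal sketch of how two independently computed protection demands, if simply added together with no overall limit, can command a larger elevator deflection than either protection would on its own. The protection names, thresholds and gains below are invented for illustration; this is not the actual PRIM logic, only the arithmetic behind the question above.

```python
# Hypothetical illustration only; not the Airbus PRIM algorithm.
# Two protections each compute a nose-down elevator demand from the
# (false) air data they are fed; a naive combiner simply adds them.

def high_aoa_demand(false_aoa_deg: float) -> float:
    """Nose-down elevator demand (deg) driven by a spuriously high angle of attack."""
    return max(0.0, (false_aoa_deg - 12.0) * 0.5)    # invented threshold and gain

def high_speed_demand(false_mach: float) -> float:
    """Nose-down elevator demand (deg) driven by a spuriously high Mach number."""
    return max(0.0, (false_mach - 0.86) * 20.0)      # invented threshold and gain

def combined_demand(false_aoa_deg: float, false_mach: float) -> float:
    # Each demand is individually bounded, but nothing bounds their sum.
    return high_aoa_demand(false_aoa_deg) + high_speed_demand(false_mach)

# A single corrupted data spike can trigger both protections at once.
print(high_aoa_demand(50.0))          # ~19.0 deg from the AoA path alone
print(high_speed_demand(0.95))        # ~1.8 deg from the high-speed path alone
print(combined_demand(50.0, 0.95))    # ~20.8 deg: more than either path alone
```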

  A pilot’s startle response can be provoked by an automated system that doesn’t warn of the impending activation of protections – protections that can generate violent manoeuvres in its confused state and prevent pilots from intervening with intuitive control inputs. Such design flaws can increase workload and cause bewilderment for the crew responsible for managing the aircraft.

  Yes, I label them ‘flaws’, not ‘limitations’. If the automation is to be an enhancement then its design should allow the operator to troubleshoot problems with speed and confidence. In the case of QF72, the design generated a complete loss of confidence in the automated systems.

  Automation is a sleeping pill for today’s pilots. It hypnotises them into thinking, I can relax because everything is being monitored, and the automation will tell me what I need to do. Pilots now wait for automation to wake them up, and to reconfigure systems in response to faults or failures. Many airline operators perpetuate this erosion of skills by requiring their pilots to fly exclusively with the automation active.

  In 2013 Asiana Flight 214, a Boeing 777, impacted the seawall on approach to San Francisco Airport. In this situation, automation was desired but couldn’t be used for the approach flown.

  The ground-based instrument landing system that links to the autopilot was out of service, forcing the pilots to fly a visual, manual approach with only the automated engine thrust available. The pilots’ incomplete understanding of how the autothrust operated allowed a low-energy condition to develop prior to landing. A contributing factor was that, quoting the final NTSB report, ‘Asiana’s automation policy emphasised the full use of all automation and did not encourage manual flight during line operations.’

  This new generation of automated aircraft has an Achilles heel. The suite of external sensors and probes – which provide the critical values of speed, altitude, temperature and body angle – is vulnerable to the extremes of the air mass in which they operate. Blocked sensors can generate false, unreliable speed indications, a potentially dynamic and confusing scenario on an Airbus, and the associated software may not be capable of recognising that the data is false. Human pilots are required to recognise unreliable speed through their interpretation of the instruments. If this assessment is inaccurate, or the pilot is distracted by simultaneous failures or conditions such as turbulence, then aircraft management can degrade rapidly.
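
  As a minimal sketch of why the software can be fooled, consider a simple median-vote monitor across three air data sources, broadly the kind of plausibility check such systems rely on. The source names, threshold and numbers below are invented for illustration; the point is that a vote catches a single rogue probe but not a common-mode failure, which is exactly when the pilots’ own interpretation becomes the last check.

```python
# Minimal sketch of a median-vote airspeed monitor, assuming three
# independent sources. Names and thresholds are invented for illustration.

from statistics import median

def voted_airspeed(adr1: float, adr2: float, adr3: float,
                   reject_threshold_kt: float = 16.0):
    """Return (voted_speed, rejected_sources)."""
    speeds = {"ADR1": adr1, "ADR2": adr2, "ADR3": adr3}
    mid = median(speeds.values())
    rejected = [name for name, value in speeds.items()
                if abs(value - mid) > reject_threshold_kt]
    return mid, rejected

# Single bad probe: the vote works and the outlier is flagged.
print(voted_airspeed(272.0, 120.0, 270.0))   # (270.0, ['ADR2'])

# Common-mode failure: two probes ice up the same way, so the false
# value wins the vote and the good probe is rejected instead.
print(voted_airspeed(272.0, 120.0, 118.0))   # (120.0, ['ADR1'])
```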

  When unreliable speed is practised in the flight simulator, it usually involves a singular, uncomplicated scenario that causes the pilot’s primary display to ‘blank’. This isn’t simulating an environmental degradation of speed, only an electronic failure of the display. That’s completely unrealistic and doesn’t prepare the pilot for the cascading problems caused by corrupted data.

  In this non-normal scenario, pilots are directed to disconnect the automation – autopilot and autothrust – which they rely upon to make their job easier. Manual flying is the forced result. Pilots don’t practise high-altitude manual flight in the aircraft, and manual flight isn’t an easy task in Alternate or Direct modes of operation at high altitude. Throw in night, bad weather, convective activity or turbulence, and the workload increases markedly. The reconfiguration procedures to address these systems failures, although automated, are time-consuming and distracting, and the management and control of the aircraft can quickly be compromised.

  An example of this sensor failure occurred on Air France 447. While the aircraft was flying through thunderstorms, its pitot probes, which measure airspeed, were blocked with ice crystals. This also generated a false altitude loss display to the pilots. The corrupted air data, produced by the blocked probes, forced a disconnection of the autopilot and autothrust. The pilot, through startle response and in reaction to the falsely displayed altitude loss, pulled back on his sidestick. The aircraft climbed rapidly and exceeded the normal pitch parameters, resulting in the automatic removal of protections. The pilot was now permitted to stall the aircraft. The supposedly stall-proof Airbus was stalled so badly that it was described as a ‘super stall’. The flight crew didn’t have the training or experience to recognise their situation and recover from this extreme state.

  This accident was caused by a combination of automation failure and inappropriate pilot response. It could have been avoided.

  Information from the flight data recorder and cockpit voice recorder was publicly provided by the state investigative body, the French BEA. Reading the voice-recorder transcript sends chills down my spine, strongly reminding me of QF72.

  Embedded in the BEA report are some significant findings. Regarding unreliable speed events and training, the report notes: ‘the number and the type of manifestation linked to erroneous speed indications makes training and exhaustive information to pilots impossible’. The report also comments on the link between displayed error messages and the pilots’ ability to determine the unreliable speed situation: ‘In the absence of a specific message expressing detection of unreliable speed by the systems, the crew was unable to identify any logical link between the symptoms perceived and these ECAM (fault) messages.’

  Basically, the report accepts that a void exists in the aviation industry regarding training for unreliable speed scenarios. However, the report doesn’t criticise the Airbus design and its contribution to the fatal outcome. The BEA is a government agency, and participating governments from the European Union subsidise the Airbus consortium.

  There are striking similarities between the QF72 and AF447 accidents: unreliable speed, pilot overload, flight-control reversion to Alternate Law, automation-induced confusion, increased workload and the pilots’ startle response. How the two crews got there is quite different, but the destination is the same. If the Air France crew had ‘sat on their hands’ and not reacted as aggressively to the unreliable speed indication that was recorded, then it would have been a non-event.

  I won’t be an armchair quarterback and comment adversely on these pilots’ actions, their response to an abnormal event they hadn’t experienced before. It’s a tragedy, and they fought gallantly to regain control once the captain returned to the flight deck, but they ran out of altitude.

  NASA also conducted an analysis of the AF447 accident, concluding that the automated systems generated more abnormal failures than the pilot is trained to cope with.

  *

  Automation comes with side effects that are often unintended, and accidents continue to happen. Many of them are now caused by confusion in the interface between the pilot and an automated machine programmed by a human.

  A goat in a tuxedo is still a goat. In the words of Earl Wiener, an engineer who taught at the University of Miami:

  •Every device creates its own opportunity for human error.

  •Exotic devices create exotic problems.

  •Digital devices tune out small errors while creating opportunities for large errors.

  •Invention is the mother of necessity.

  •Some problems have no solution.

  •It takes an aeroplane to bring out the worst in a pilot.

  •Whenever you solve a problem, you usually create one. You can only hope that the one you created is less critical than the one you eliminated.

  To ensure pilots understand all the complex scenarios that can be generated through displayed failures, their training has to extend well beyond normal modes of operation. If not, confusion can develop, causing pilots to question – most times in seconds – what’s real and what’s false, which diminishes their ability to make necessary interventions. It should be the manufacturers’ responsibility to provide enough training for pilots to be confident in all the system’s modes of operation. If the manufacturers don’t define which scenarios present an unacceptable risk, what chance does a human pilot have to intervene effectively when the automation fails?

  Airbus states that the piloting experience required to operate their equipment can be minimised due to the enhanced safety afforded by increased automation and ease of manipulation. But the Airbus design has a history of provoking confusion and startle responses, so is that really the place for an inexperienced pilot? One thing’s for sure: a novice can easily be taught basic manipulation and operation in only a few simulator sessions. But aviation, even in a protected fly-by-wire aircraft, is unforgiving of error. How will an airline operator manage a pilot’s degradation of core manipulation skills if the pilot begins their career with a minimum experience base?

  On QF72, my fallback position of extreme military flying experience was pivotal in our survival. So, too, for the other Sully.

  Is there a need to train flight crews for loss of automation? The answer is yes, now more than ever. It’s also time for manufacturers to share information with simulator developers so they can create more useful modules for pilots.

  Fly-by-wire leads to the erosion of basic flying skills, along with a remarkable increase in pilot overconfidence. This isn’t my unique opinion, but a view expressed by regulatory organisations and aviation professionals alike. Automation bias and a reduced motivation to explore issues outside the normal grind of commercial flying make it easier for a pilot to think, I’ll deal with it on the day. Automation events can then push pilots well out of their limited experiences and abilities. This is part and parcel of the new age of automation reliance.

  Maybe their egos are writing cheques that their abilities can’t cash.

  Pilots flying the new generation of electronic flight control and envelope-protected aircraft are conditioned to manage flight-path tasks with a pristine, by-the-book mentality that doesn’t give much consideration to unexpected failures of the installed technology – even though, historically, there are plenty to choose from, most occurring in an Airbus. It’s also easy to dismiss concerns with the causes of automation events after the manufacturer has said, ‘We fixed that.’

  If you’re a pilot, especially one operating on this particular piece of equipment, and you think you don’t have to worry about it – well, I have some sobering news for you. You should worry about it, and so should your training departments. Simply relying on ‘book knowledge’ provides a false sense of security. The books can’t teach the decision-making and time-management skills required to manage the overload of simultaneous failures in automated systems. The books don’t delve into the software’s complexities and the automated systems’ reliance on air data, as these topics aren’t considered need-to-know information for the pilots who fly these aircraft.

  Software should be enhanced to help pilots identify unreliable speed, and it should effectively deliver critical fault information through its warning systems and displays. Warnings should be designed to minimise pilots’ startle responses and increase their situational awareness. Angle-of-attack data, a critical value to fly-by-wire aircraft, should be displayed to the pilots as standard equipment.
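
  As one hedged example of what such an enhancement might look like, the sketch below cross-checks the displayed airspeed against the speed implied by angle of attack, weight and the lift equation, and flags a gross disagreement as unreliable speed. The lift-curve model, constants and tolerance are invented for illustration and assume low-altitude, 1-g flight; they are not Airbus values.

```python
# Hedged sketch of an AoA-based cross-check on indicated airspeed.
# Constants and thresholds are illustrative only.

from math import sqrt

RHO_SL = 1.225        # sea-level air density, kg/m^3 (standard atmosphere)
WING_AREA = 361.6     # approximate A330 reference wing area, m^2

def speed_from_aoa(aoa_deg: float, weight_n: float, load_factor: float = 1.0,
                   cl_per_deg: float = 0.09, cl_zero: float = 0.10) -> float:
    """Airspeed (m/s) implied by AoA if lift balances weight (simple linear lift curve)."""
    cl = cl_zero + cl_per_deg * aoa_deg
    return sqrt(2.0 * load_factor * weight_n / (RHO_SL * WING_AREA * cl))

def unreliable_speed(indicated_ms: float, aoa_deg: float, weight_n: float,
                     tolerance: float = 0.25) -> bool:
    """Flag the indicated speed if it disagrees with the AoA-implied speed by more than 25%."""
    estimate = speed_from_aoa(aoa_deg, weight_n)
    return abs(indicated_ms - estimate) / estimate > tolerance

weight = 200_000 * 9.81                          # a 200-tonne aircraft, in newtons
print(round(speed_from_aoa(4.0, weight)))        # ~139 m/s (about 270 knots)
print(unreliable_speed(60.0, 4.0, weight))       # True: 60 m/s is not credible at this AoA and weight
```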

  Pilots must be trained to manage startle. They should also be taught to manage the overload of multiple, complex system failures as recorded in the QF72 and AF447 accidents. And they should train for a loss of automation to ensure their core skills are present if manual manipulation is needed to recover a degraded aircraft. In 2019, the US Federal Aviation Administration mandated this training for all commercial aircraft operators. Australia will follow suit.

  Meanwhile, manufacturers are driven by traditional statistics from the accident history of previous-generation aircraft to design systems with increased automation while minimising direct pilot manipulation. After all, it’s a question of securing market share in providing global infrastructure, and in this instance, it goes beyond Airbus versus Boeing to the degree of the EU versus the US.

  Are manufacturers and airline operators loath to admit to serious issues with their machines because of liability? If so, are passengers and crew being relegated to the bottom end of the corporate food chain, with acceptable losses factored in by risk analysis and actuaries? Perhaps it’s just plain corporate greed, and we’re expendable in order to protect brands and profits.

  For most of the parties involved, it’s very convenient to praise the automation and blame the pilot. But as a survivor, I know on which side of the fence my opinion lies. If you believe the marketing departments of automated systems, then you must accept that humans are unsafe operators of aircraft – and of motor vehicles.

  *

  I read an estimate that the autonomous vehicle industry is worth US$1 trillion – a juicy carrot for the many technology companies racing to get their products to market.

  Bolstered by an increase in motor-vehicle deaths, the ‘driving’ force towards automation in cars is the goal of reducing accidents and enhancing safety. In the US, an estimated 41,000 people lost their lives in motor vehicle accidents in 2017; the year before, it was 37,000. In China, the World Health Organization estimated that 260,000 deaths – including pedestrians – were attributed to road-traffic accidents in 2016. Worldwide, that estimate is 1.25 million deaths due to road-traffic injuries. These numbers are unacceptable, so why are they increasing instead of decreasing?

  One likely reason is that drivers are being distracted by mobile phones, satnav and music. How are drivers being trained to interact with these distracting systems when operating their vehicles? The obvious answer is that no training is required or legislated. I always joke that pilots are the only people trained to talk while operating their machines.

  Car manufacturers are striving to offer automated information systems to be used without any specialised training. The vehicles’ handbooks advise drivers against interacting with these systems while on the road, but who reads those manuals anyway? Perhaps their warnings aren’t enough to deter drivers from playing around with their infotainment when they should be concentrating on driving their cars.

  The suite of vehicle automation has been steadily increasing: adaptive cruise control, lane assist, collision avoidance and stability augmentation, to name but a few. These systems enhance vehicle safety. The addition of semi-autonomous systems like ‘autopilot’, however, may be generating a false sense of invincibility in drivers’ minds. In fact, they may just be providing the tools for enhanced bad driving.

  The overconfident mentality of many young pilots appears to be shared by many car owners who have these systems installed. But, unlike pilots, drivers receive little or no practical instruction in their operation.

  In the past few years there have been four high-profile fatal accidents involving self-driving cars in the US. There’s a recurring theme: the sensor technology doesn’t guarantee an appropriate response from the installed algorithms to avert a collision. The human operator has neither the training nor the awareness to take over manually when a situation requires rapid action.

  The sensor–algorithm conundrum present in aviation is becoming a prominent black hole in the design of autonomous vehicles. This technology places drivers in a No Man’s Land when automation fails and it’s too late for them to save the situation. Human operators are the last line of defence to address complex automation failures – and the first in line to be blamed if they can’t manage them.

  In March 2018 a pedestrian, 49-year-old Elaine Herzberg, was struck and killed by an autonomous Uber taxi in Arizona. In the wake of this tragic accident, the CEO of Toyota North America told a journalist, ‘The reality is there will be mistakes along the way. A hundred or five hundred or a thousand people could lose their lives in accidents like we’ve seen in Arizona.’ He estimated that autonomous vehicles could eventually save 35,000 lives in the US annually, given that 98 per cent of deadly crashes are blamed on driver error. If hundreds may die so that thousands may live, perhaps a ‘martyr’s clause’ should be included with each autonomous vehicle; this would guarantee the payment of compensation when a death or injury exposes a flaw that the manufacturer can then rectify. It seems we’re signing up as part of manufacturers’ research-and-development teams to fine-tune the design and safe operation of their products.

  But where does the liability for automation-related accidents lie: with the manufacturer, the software engineers outsourced to provide the algorithms, or the owners or operators? In the heavily regulated world of aviation, laws in every country identify the pilot-in-command as ultimately responsible for any accident. There’s a history of legal cases where pilots were found culpable because they didn’t use their automation, in the form of the autopilot, and an accident occurred.

  Human operators of automated systems are positioned to absorb the blame when an accident occurs, in what’s known as a ‘moral crumple zone’. Just as the crumple zone of a car is designed to absorb the force of impact in a crash, a human operating a highly complex automated system may become simply a component that bears the brunt of moral and legal responsibilities when the system malfunctions. A widely accepted notion is that the autopilot and associated automation are smart enough to save the human every time. It isn’t thought possible that the software and automation could both fail. It seems the operator’s central role becomes that of soaking up fault, even if they only had partial control of the automated system.

 
