The two 737 MAX crashes that killed 346 people and led to what is, so far, a six-month grounding of the jet stemmed in part from Boeing’s failure to accurately anticipate how pilots would respond to a malfunctioning feature that pointed the jets toward the ground. That’s the key finding from a report the National Transportation Safety Board published Thursday, which included a series of recommendations to the Federal Aviation Administration. The NTSB advised the regulator to have Boeing consider how 737 MAX pilots would respond not just to problems with the MCAS system alone, but to multiple simultaneous alerts and indicators. In short, the NTSB says Boeing was wrong to assume pilots would respond correctly to the problem that ended up killing them.
The crashes of Lion Air Flight 610, in October 2018, and Ethiopian Airlines Flight 302, in March, stemmed from a feature Boeing designed to prevent stalls. In both cases, the Maneuvering Characteristics Augmentation System, or MCAS, activated in response to a false reading from a faulty angle-of-attack sensor. The pilots fought to counteract the system, which pushed the nose of the plane down, but ultimately lost that fight.
When Boeing tested what would happen if the MCAS malfunctioned, it didn’t account for other elements. The Lion Air and Ethiopian pilots on the doomed planes dealt with a cascade of problems and warnings: Their control sticks shook. Various alarms sounded. When the pilots retracted the flaps, the plane’s downward push required extra force to keep the jet aloft. The result: Their reactions “did not match [Boeing’s] assumptions,” the NTSB found. “An aircraft system should be designed such that the consequences of any human error are limited.”
The FAA hasn’t said whether it will adopt the recommendations of the NTSB, which has no regulatory or enforcement power. And this is far from the end of the 737 MAX saga: Boeing and the FAA are still negotiating a fix to the plane’s software, and congressional, international, and criminal investigations into the crashes are ongoing.
But as its title—“Assumptions Used in the Safety Assessment Process and the Effects of Multiple Alerts and Indications on Pilot Performance”—indicates, the NTSB report is about more than one troubled jet, one feature, one company, or even one country. The safety board wants the FAA to apply this sort of thinking to all the planes it certifies. And it hopes the agency will encourage its peers around the world to do the same. That’s because the report is all about the question at the core of modern aviation safety: How to ensure that pilots can work with the computers that have taken on more of the work in the cockpit. It’s about a field of study called “human factors.”
“The field of aviation has been the cradle of human factors, and its biggest beneficiary,” says Najmedin Meshkati, who studies the field at the University of Southern California. Where ergonomics and biomechanics center on people’s physical responses, human factors tends to center on the gray stuff packed into their skulls. It matters in fields from self-driving cars to coal mines—anywhere people interact with machines. It’s long been a major focus in aviation because so many crashes trace back to pilots’ failure to understand what the plane’s myriad and complex systems are doing, why, or how to influence them. “Whenever you have a human error, and the consequence isn’t immediately noticeable or reversible, human factors is important,” Meshkati says.