Feds say Boeing 737 MAX needs to be better designed for humans



The two 737 MAX crashes that killed 346 people and led to what is, so far, a six-month grounding of the jet stemmed in part from Boeing’s failure to accurately anticipate how pilots would respond to a malfunctioning feature that pointed the jets toward the ground. That’s the key finding of a report the National Transportation Safety Board published Thursday, which included a series of recommendations to the Federal Aviation Administration. The NTSB advised the regulator to have Boeing consider how 737 MAX pilots would handle not just a malfunction of the MCAS system in isolation, but the multiple simultaneous alerts and indicators that can accompany it. In short, the NTSB says Boeing was wrong to assume pilots would respond correctly to the problem that ended up killing them.

The crashes of Lion Air Flight 610, in October 2018, and Ethiopian Airlines Flight 302, in March 2019, stemmed from a feature Boeing designed to prevent stalls. In both cases, the Maneuvering Characteristics Augmentation System, or MCAS, activated in response to a false reading from a faulty angle of attack sensor. The pilots fought to counteract the system, which pushed the nose of the plane down, but ultimately failed.

When Boeing tested what would happen if MCAS malfunctioned, it didn’t account for other elements. The Lion Air and Ethiopian pilots dealt with a cascade of problems and warnings: Their control sticks shook. Various alarms sounded. And once the pilots retracted the flaps, MCAS’s downward push on the nose demanded extra force on the controls to keep the jet aloft. The result: Their reactions “did not match [Boeing’s] assumptions,” the NTSB found. “An aircraft system should be designed such that the consequences of any human error are limited.”

The FAA hasn’t said whether it will adopt the recommendations of the NTSB, which has no regulatory or enforcement power. And this is far from the end of the 737 MAX saga: Boeing and the FAA are still negotiating a fix to the plane’s software, and congressional, international, and criminal investigations into the crashes are ongoing.


But as its title—“Assumptions Used in the Safety Assessment Process and the Effects of Multiple Alerts and Indications on Pilot Performance”—indicates, the NTSB report is about more than one troubled jet, one feature, one company, or even one country. The safety board wants the FAA to apply this sort of thinking to all the planes it certifies. And it hopes the agency will encourage its peers around the world to do the same. That’s because the report is all about the question at the core of modern aviation safety: How to ensure that pilots can work with the computers that have taken on more of the work in the cockpit. It’s about a field of study called “human factors.”

“The field of aviation has been the cradle of human factors, and its biggest beneficiary,” says Najmedin Meshkati, who studies the field at the University of Southern California. Where ergonomics and biomechanics center on physical responses, human factors tends to center on the gray stuff packed into people’s skulls. It matters in fields from self-driving cars to coal mines—anywhere people interact with machines. It’s long been a major focus in aviation because so many crashes trace back to pilots’ failure to understand what the plane’s myriad and complex systems are doing, why, or how to influence them. “Whenever you have a human error, and the consequence isn’t immediately noticeable or reversible, human factors is important,” Meshkati says.

That’s often the case in aviation—and the error doesn’t always come from the human. The rising use of automation in aviation has produced major safety and practical benefits, but also distanced humans from the workings of the planes they’re commanding. Meshkati draws a distinction between decision making and problem solving. The former is usually routine and procedure-based, like using your altitude, airspeed, and heading to calculate a landing path. Computers are very good at this. Problem solving comes in when some combination of factors means the procedures don’t work, when a person needs to absorb information and devise a new formula that will keep them safe. This is where humanity has the edge, but hardly a guaranteed victory.
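To make Meshkati’s distinction concrete, here is a minimal sketch (in Python, with illustrative numbers that are not from the NTSB report) of the kind of routine, procedure-based calculation he means: given altitude, groundspeed, and a standard 3-degree glidepath, the descent profile falls out of fixed geometry—exactly the sort of work cockpit computers handle well.

```python
import math

def descent_profile(altitude_ft: float, groundspeed_kt: float,
                    glidepath_deg: float = 3.0) -> tuple[float, float]:
    """Return (distance to touchdown in nautical miles,
    required descent rate in ft/min) for a constant-angle descent.

    Standard rule-of-thumb aviation geometry; the 3-degree default
    matches a typical instrument-approach glideslope. Illustrative only.
    """
    FT_PER_NM = 6076.12  # feet in one nautical mile
    slope = math.tan(math.radians(glidepath_deg))
    # Horizontal distance needed to descend altitude_ft at this angle
    distance_nm = altitude_ft / slope / FT_PER_NM
    # Groundspeed in knots -> feet per minute along track, scaled by slope
    descent_fpm = groundspeed_kt * FT_PER_NM / 60.0 * slope
    return distance_nm, descent_fpm

# Example: 4,000 ft above the runway at 140 knots groundspeed
dist, rate = descent_profile(4000, 140)
print(f"Begin descent {dist:.1f} nm out at about {rate:.0f} ft/min")
```

The inputs and formulas are fixed, and no judgment is involved. Problem solving begins when the instruments feeding those inputs disagree—as they did on the two MAX flights.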


According to the NTSB report, Boeing counted on pilots following a procedure that would get them out of a situation where MCAS malfunctioned. But Lion Air 610 and Ethiopian 302 demanded problem solving: Each set of pilots was fighting a plane that wanted to dive, while sorting through a cascade of malfunctions and signals. Better human factors thinking, Meshkati says, would have required less, or easier, problem solving. It could have produced a procedure that fit the actual conditions of the flights, allowing for good old decision making.

Of course, the FAA has other things to consider. The NTSB’s recommendations are “absolutely valid,” says Clint Balog, a flight test pilot and human factors expert with the College of Aeronautics at Embry-Riddle Aeronautical University. But, he says, the safety board trends toward idealism. “The FAA has to consider, what is realistic testing?” If airplane makers had to test for every possible combination of malfunctions and cockpit alarms, they’d never get another plane certified, he says. Not all pilots are equally skilled, owing to differences in natural talent, training, and experience. It doesn’t make sense, Balog says, to design for the worst of the bunch—or the best. Cockpits as physical spaces, he points out, are designed for pilots of many shapes and sizes, but designers had to settle on limits on who can sit comfortably or reach every control. “We’ve got to figure out how to do the same thing for cognitive capability,” Balog says.

This story first appeared on wired.com.


