Lessons for a coder from the Boeing 737 Max crashes
Before you read any further, you may want to read this rather long article explaining the reasons behind the two Boeing 737 Max crashes. It is quite technical in nature and will surely appeal to engineers. A couple of quotes from the article should convince you to read it in full.
A funny joke about the way automation (and now AI) is taking over our lives:
Long ago there was a joke that in the future planes would fly themselves, and the only thing in the cockpit would be a pilot and a dog. The pilot’s job was to make the passengers comfortable that someone was up front. The dog’s job was to bite the pilot if he tried to touch anything.
About the difference between a computer and a human:
The flight management computer is a computer. What that means is that it’s not full of aluminum bits, cables, fuel lines, or all the other accoutrements of aviation. It’s full of lines of code. And that’s where things get dangerous.
Those lines of code were no doubt created by people at the direction of managers. Neither such coders nor their managers are as in touch with the particular culture and mores of the aviation world as much as the people who are down on the factory floor, riveting wings on, designing control yokes, and fitting landing gears. Those people have decades of institutional memory about what has worked in the past and what has not worked. Software people do not.
On the difference between a hardware glitch and a software bug:
The 737 Max saga teaches us not only about the limits of technology and the risks of complexity, it teaches us about our real priorities. Today, safety doesn’t come first—money comes first, and safety’s only utility in that regard is in helping to keep the money coming. The problem is getting worse because our devices are increasingly dominated by something that’s all too easy to manipulate: software.
Hardware defects, whether they are engines placed in the wrong place on a plane or O-rings that turn brittle when cold, are notoriously hard to fix. And by hard, I mean expensive. Software defects, on the other hand, are easy and cheap to fix. All you need to do is post an update and push out a patch.
I wanted to highlight some points that are quite relevant to developers and implementers of technology solutions in an organization.
- Software applications are becoming increasingly powerful; they control machines and ultimately affect humans. A case in point is the code behind the meal coupon system at JGU. The system generates a random 5-digit code for each student, which is entered on a tab to record meal consumption (see the sketch after this list). A single line of code shapes how students respond to the system because it directly affects their “time-to-eat”, and for a hungry stomach that can mean a change in behaviour. You may recall the Snickers advertisement below; I have actually seen it in action on campus!
- The tendency to release software in a hurry, with the intent to fix things in future releases, sounds like a compelling argument, but it can mean the difference between success and failure. I have been guilty of this too, and I am now more careful about prioritizing the feature list so that the first release does not fail.
- Sometimes we trust our own ability to imagine the user's requirements, and that bias creeps into our design thinking. This is quite dangerous, as Boeing must have discovered after letting software designers stand in for domain experts when certifying a plane as airworthy. We can be great at designing and developing code for a user, but the starting point must be the user; otherwise, we will only see disasters on the ground. Of course, designers also need to dig deeper into the requirements and prod the user to go deeper too.
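To make the first point concrete, here is a minimal sketch in Python of what such a meal-coupon flow might look like. The class and method names are hypothetical (the actual JGU system is not described beyond the 5-digit code), but it shows how a single extra check inside `record_meal` is exactly the kind of line of code that lengthens a hungry student's wait.

```python
import secrets


def generate_meal_code() -> str:
    """Return a random 5-digit code, zero-padded (e.g. '04217')."""
    return f"{secrets.randbelow(100000):05d}"


class MealCouponSystem:
    """Hypothetical sketch of a per-student meal-code registry."""

    def __init__(self) -> None:
        self._codes: dict[str, str] = {}   # student_id -> active code
        self._consumed: set[str] = set()   # students who have already eaten

    def issue_code(self, student_id: str) -> str:
        """Issue (or reissue) a code for a student before the meal."""
        code = generate_meal_code()
        self._codes[student_id] = code
        return code

    def record_meal(self, student_id: str, entered_code: str) -> bool:
        """Record consumption only if the code matches and the student
        has not already eaten. Each extra check here adds a little to
        the student's time-to-eat."""
        if student_id in self._consumed:
            return False
        if self._codes.get(student_id) != entered_code:
            return False
        self._consumed.add(student_id)
        return True


if __name__ == "__main__":
    system = MealCouponSystem()
    code = system.issue_code("JGU-001")
    print(system.record_meal("JGU-001", code))   # True: meal recorded
    print(system.record_meal("JGU-001", code))   # False: already recorded
```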