SW Engineering, NASA & how things go wrong
-
I really can't stop with this book, Modern Software Engineering[^], because so much of it resonates with me after working in IT/Dev for over 30 years. I came to Dev through QA, so I've always focused on "repeatable processes, errors & failing safely".
Quote:
One of the driving forces behind [Margaret] Hamilton’s[^] approach was the focus on how things fail—the ways in which we get things wrong. "There was a fascination on my part with errors, a never ending pass-time of mine was what made a particular error, or class of errors, happen and how to prevent it in the future." This focus was grounded in a scientifically rational approach to problem-solving. The assumption was not that you could plan and get it right the first time, rather that you treated all ideas, solutions, and designs with skepticism until you ran out of ideas about how things could go wrong. Occasionally, reality is still going to surprise you, but this is engineering empiricism at work. The other engineering principle that is embodied in Hamilton’s early work is the idea of “failing safely.” The assumption is that we can never code for every scenario, so how do we code in ways that allow our systems to cope with the unexpected and still make progress? Famously it was Hamilton’s unasked-for implementation of this idea that saved the Apollo 11 mission and allowed the Lunar Module Eagle to successfully land on the moon, despite the computer becoming overloaded during the descent. As Neil Armstrong and Buzz Aldrin descended in the Lunar Excursion Module (LEM) toward the moon, there was an exchange between the a
-
One of the hardest, or at least most memorable, software problems I ever had to chase was a programming error in a seldom-used error recovery routine. It seems my programmer, who was highly experienced in other programming languages such as COBOL, coded a "=" instead of an "==" inside an if statement in a C program--kinda like "if ( A = B)....". That expression assigns the value of B to A and then evaluates to B's value, so the condition is true whenever B is nonzero. Unfortunately, this little error caused the system to crash. It took about six months to find the cause of the crash and understand what was happening. Looking at the code under the pressure of a "down system", we kept asking why the condition was true, because our minds read it as "if A equals B" and never considered that A was not equal to B before the if statement. I cursed (and still curse to this day) whoever decided that allowing an assignment inside of a conditional statement was a good idea. And I have to wonder how many systems, like autonomous cars, have a statement like that buried way down deep in an infrequently used piece of critical code.
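To make the failure mode concrete, here is a minimal sketch of my own (not the poster's code; the variable names and values are made up) showing the accidental assignment and two guards that catch it today: building with warnings enabled, since any modern C or C++ compiler flags an assignment used as a condition (e.g. GCC/Clang with -Wall), and writing the constant on the left so the typo becomes a compile error.

    #include <stdio.h>

    int main(void)
    {
        int a = 0;          /* recovery flag: 0 means "no error pending" */
        const int b = 7;    /* some nonzero status code                  */

        if (a = b)          /* BUG: assigns b to a; true whenever b != 0 */
            printf("recovery branch taken, a is now %d\n", a);

        a = 0;
        if (b == a)         /* intended comparison; putting the constant first   */
            printf("values match\n");  /* turns a typo (b = a) into a compile error */

        return 0;
    }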
-
I found that (user) logging reduces a lot of "errors". Another term is graceful degradation. But that requires understanding when a try-catch block should continue or not. And, yes, it may require asking the user whether it should proceed (e.g., a file not being available). It then comes down to transparency (of the software) ... which is what Boeing failed to do with their software changes: tell the user what they did and how it might impact them.
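As a rough illustration of that decision point (my own sketch, not the poster's code; the file name and prompt are invented), here is one way a catch block can log the failure, tell the user exactly what went wrong, and let them decide whether to continue with defaults:

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <stdexcept>
    #include <string>

    // Hypothetical example: load an optional settings file, degrade gracefully
    // if it is missing, and be transparent with the user about what happened.
    std::string loadSettings(const std::string& path)
    {
        std::ifstream in(path);
        if (!in)
            throw std::runtime_error("cannot open " + path);
        return std::string(std::istreambuf_iterator<char>(in),
                           std::istreambuf_iterator<char>());
    }

    int main()
    {
        std::string settings;
        try {
            settings = loadSettings("settings.cfg");   // made-up file name
        } catch (const std::exception& e) {
            // Log it, tell the user what failed and what continuing means,
            // then let them choose: graceful degradation, not a silent default.
            std::cerr << "warning: " << e.what() << "\n";
            std::cout << "Settings unavailable; continue with built-in defaults? [y/n] ";
            char answer = 'n';
            std::cin >> answer;
            if (answer != 'y' && answer != 'Y')
                return 1;                              // fail safely: stop here
            settings = "";                             // proceed with defaults
        }
        std::cout << "running with " << (settings.empty() ? "default" : "loaded")
                  << " settings\n";
        return 0;
    }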
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
-
It's just shocking to me how many "professional" developers don't embrace defensive programming.