SW Engineering, NASA & how things go wrong
-
Telecom call servers.
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing.
-
Since we receive so many files per day (and at times it's X per second -- & I know other apps stress things much further), my system has pushed the network, the file storage h/w & the Windows file system to their limits (other apps are also running on our medium-sized company's file system). Anyway, my point here is that there are times we are receiving files but the network cannot reach the SSD, or the network node where the SSD lives, or whatever, so I get low-level errors back in my system when I'm just trying to save a file. Meanwhile, the infrastructure h/w & OS types are like, "You shouldn't ever get write errors. It's just not possible." So I had to make sure my app doesn't crash and somehow handles the situation without losing everything. It's a challenge, & those kinds of errors almost always occur at 3am-4am local time. X| I really don't want to wake up (or wake anyone else up) in the middle of the night. My final point: people who may be wakened in the middle of the night are the closest to being/becoming engineers. Cuz it's on them. :laugh:
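Just to illustrate the idea (a made-up sketch, not my production code -- the paths, retry counts and function names are invented): retry the write a few times, and if the network location still won't take it, park the file in a local spool so nothing is lost and nobody has to be woken up:

```cpp
// Illustrative sketch only: retry a file write with backoff, then fall back to a
// local spool directory so nothing is lost when the network share is unreachable.
#include <chrono>
#include <filesystem>
#include <fstream>
#include <string>
#include <thread>

namespace fs = std::filesystem;

// Try to write 'data' to 'target'; returns true on success.
static bool try_write(const fs::path& target, const std::string& data)
{
    std::ofstream out(target, std::ios::binary | std::ios::trunc);
    if (!out) return false;
    out.write(data.data(), static_cast<std::streamsize>(data.size()));
    out.flush();
    return out.good();
}

// Write with a few retries; if the primary (network) location keeps failing,
// park the file in a local spool directory for later re-delivery instead of crashing.
bool save_incoming_file(const std::string& name, const std::string& data)
{
    const fs::path primary = fs::path("//fileserver/incoming") / name;   // hypothetical share
    const fs::path spool   = fs::path("C:/spool/incoming") / name;       // hypothetical fallback

    for (int attempt = 0; attempt < 3; ++attempt)
    {
        if (try_write(primary, data)) return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(200 * (attempt + 1)));
    }

    std::error_code ec;
    fs::create_directories(spool.parent_path(), ec);   // best effort; ignore failure here
    return try_write(spool, data);                      // caller logs and alerts if this also fails
}
```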
Curious - how are these files arriving? ftp? email? etc... rad, Check with your employer but I would love to see an article on the engineering description of your system. I consulted with a firm for a few years that processed motor vehicle records for insurance companies. The amount of data and file processing was incredible.
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
-
I worked on large systems (tens of millions of lines of code) that were "five nines" and later "six nines", which translates to roughly 5 minutes or 30 seconds of downtime per year, respectively. Probably 90% of our code dealt with failure scenarios.
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing.
-
Somewhat related: I have several times had to explain to youngsters why .NET JIT code generation from MSIL is not a problem. Part of it is that lexical analysis etc. has already been done earlier in the process, but the one, single factor is that the JIT compiler does not have to do error checking. It assumes error-free input and saves a lot of time that way, compared to a full from-source-code compiler, which cannot take error-free input for granted.
-
I really can't stop with this book, Modern Software Engineering[^], because so much of it resonates with me after working in IT/Dev for over 30 years. I came to Dev thru QA so I've always focused on "repeatable processes, errors & failing safely".
Quote:
One of the driving forces behind [Margaret] Hamilton’s[^] approach was the focus on how things fail—the ways in which we get things wrong. "There was a fascination on my part with errors, a never ending pass-time of mine was what made a particular error, or class of errors, happen and how to prevent it in the future." This focus was grounded in a scientifically rational approach to problem-solving. The assumption was not that you could plan and get it right the first time, rather that you treated all ideas, solutions, and designs with skepticism until you ran out of ideas about how things could go wrong. Occasionally, reality is still going to surprise you, but this is engineering empiricism at work. The other engineering principle that is embodied in Hamilton’s early work is the idea of “failing safely.” The assumption is that we can never code for every scenario, so how do we code in ways that allow our systems to cope with the unexpected and still make progress? Famously it was Hamilton’s unasked-for implementation of this idea that saved the Apollo 11 mission and allowed the Lunar Module Eagle to successfully land on the moon, despite the computer becoming overloaded during the descent. As Neil Armstrong and Buzz Aldrin descended in the Lunar Excursion Module (LEM) toward the moon, there was an exchange between the a
-
Margaret is legend. If you are serious about engineering software, read everything and I mean EVERYTHING she's written. She took her job a hell of a lot more seriously than most of us software weenies do - no offense. She knew three people were going to be on top of some serious energy, go to the moon and hopefully come back without going splat. dammit rad, I'm ordering the book now. On a side note, if anyone has good references on the material behind "Hidden Figures", please post.
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
-
charlieg wrote:
Margaret is legend.
I agree. She is an amazing person. :thumbsup: Every time I read an article or excerpt about her work I am always inspired. :thumbsup: And you won't regret getting this book; so far (chapter 2) it is a fantastic read. So much great stuff in there.
-
In my early days of programming I had the mindset that keeping the program running at all costs makes the source code hard to read and maintain because of all the extra safety checks. Then I learned about all the safety features and warnings in a plane's cockpit. I came to realize that getting notifications while still keeping things under control has a point.
I agree. I didn't do flight software, but I have written quite a lot of software (mostly for engineering and business applications). I try to handle every error condition possible with a message saying that an error occurred, what the error was and, if possible, what caused it. That is followed by a recommendation of how to correct or avoid it, and/or what one should document for later review. This is just a summary; it gets more complicated in a real-time system where the software has to correct and control without user intervention. BTW, all this back and forth on software engineering is very good info. In my grad school days, software engineering was in its formative stages. I was lucky to be a part of that and used what I learned with success. Just a simple thing like considering naming conventions in the early stages made a big difference. It forced one to think of the architecture as a whole. Do it wrong and woe to you and others.
"A little time, a little trouble, your better day" Badfinger
-
Nice. We built a new internal service this year for inbound files of the same magnitude. Because it's very important data, I designed the system to automatically recover from a broad set of unknown failures, with redundancy in place that gives us a 2-month leeway to get the automated system back on track. Extra attention was paid to making it easy to re-create scenarios in a disconnected environment, and to auditing both the production and simulation processes with very basic tools. So far we've had 7 failures, including a critical versioning failure last week. No impact at all because of our redundancy. When dealing with large volumes or critical data, you really need a sensible and simple failsafe.
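Purely as an illustration (this is not our actual design -- the directory names and processing step are invented), the core of a failsafe like that is keeping a durable copy of every inbound file before touching it, so any failure can be replayed later and the whole flow can be audited with nothing more than a directory listing:

```cpp
// Generic sketch: archive each inbound file before processing; move failures
// aside for replay instead of losing them or blocking the rest of the batch.
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

bool ingest(const fs::path& incoming, const fs::path& archive, const fs::path& failed)
{
    std::error_code ec;
    fs::create_directories(archive, ec);
    fs::create_directories(failed, ec);

    for (const auto& entry : fs::directory_iterator(incoming))
    {
        const fs::path file = entry.path();
        fs::copy_file(file, archive / file.filename(),
                      fs::copy_options::overwrite_existing, ec);   // durable copy first
        if (ec) { std::cerr << "could not archive " << file << '\n'; return false; }

        bool processed = true;   // placeholder for the real processing step
        if (!processed)
            fs::rename(file, failed / file.filename(), ec);        // park it for replay
    }
    return true;
}
```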
-
I found that (user) logging reduces a lot of "errors". Another term is graceful degradation. But that requires understanding when a try-catch block should continue or not. And, yes, it may require asking the user if it should proceed (e.g. a file not available). It then comes down to transparency (of the software) ... what Boeing failed to do with their software changes: tell the user what they did and how it might impact them.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
-
The Boeing thing shows that big engineering can be really hard, and that in many cases software is just another unreliable component, somewhat misdesigned and underappreciated (in its failure modes!). The world is eating software, with the usual outcomes.
-
One of the hardest, or at least most memorable, software problems I ever had to chase was a programming error in a seldom-used error recovery routine. It seems my programmer, who was highly experienced in other programming languages such as COBOL, coded "=" instead of "==" inside an if statement in a C program -- something like "if ( A = B )....". This assigns the value of B to A and then tests the result, so the condition is true whenever B is non-zero. Unfortunately, this little error caused the system to crash. It took about six months to find the cause of the crash and understand what was happening. Looking at the code under the pressure of a "down system", we always asked why the condition was true, because our minds were reading "if A equals B" and not considering that A was not equal to B before the if statement. I cursed (and still curse to this day) whoever decided that allowing an assignment inside a conditional statement was a good idea. And I have to wonder how many systems, like autonomous cars, have a statement like that buried way down deep in an infrequently used piece of critical code.
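For anyone who hasn't been bitten by it yet, here is the shape of the bug and one common guard (illustrative snippet only; modern compilers will at least warn about it with -Wall):

```cpp
// The classic assignment-in-a-condition bug, and a defensive habit that guards against it.
#include <iostream>

int main()
{
    int A = 1, B = 2;

    if (A = B)          // BUG: assigns B to A, then tests the new value of A (2, i.e. true)
        std::cout << "always taken when B is non-zero; A is now " << A << '\n';

    A = 1;
    if (A == B)         // the intended comparison; not taken here
        std::cout << "equal\n";

    // One defensive habit: put the constant on the left ("Yoda condition"),
    // so an accidental '=' becomes a compile error instead of a latent bug.
    if (2 == A)
        std::cout << "A is 2\n";
}
```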
-
It's just shocking to me how many "professional" developers don't embrace defensive programming.