10 eureka moments of coding in the community
-
Open Source[^]:
We asked our community to share about a time they sat down and wrote code that truly made them proud.
Bath tub not required
Posted to see if we can get a few of our own stories here
-
Two stories (the two I was thinking about when I replied to honey the codewitch[^]):

1a: Detecting hydrogen fires in real time after the Challenger accident. We had a multispectral camera: basically a camera with a spinning wheel holding six narrowband filters in front of the CCD, with the spin rate synced to the camera's 60 Hz scan rate. I managed to do two things (quite impressive given this was 30 years ago): flip the image capture board into capture mode for the frame of the desired filter, then flip back to display that captured frame for the next five frames, rinse and repeat. (By removing the IR filter in front of the CCD, it was just barely able to detect the emissions around 950 nm from burning hydrogen.) All of this happened during the vertical refresh interval, so it had to be assembly code. The code also let you move to the previous/next filter on the wheel, the point being that you could see just the filter you wanted to see.

1b: The PhD people had created a complicated FFT to analyze all six frames of a captured set of images to determine whether a hydrogen fire existed. It took something like 30 minutes to run (remember, this was 30 years ago), produced a questionable image result, and then you had to tweak the parameters and try again. I realized that the entire process was just a lookup table of intensity for each filter band, so I wrote a near-real-time translation to produce a single video frame from the six filter frames.

2: The PhDs had been working on analyzing the failure modes of switch rings in satellites (this[^] is a simple but good example). The idea being: analyze the ring that the engineers dreamed up for handling failed TWTAs (Travelling Wave Tube Amplifiers), which would be switched to spare TWTAs on the satellite, and determine which failure modes couldn't be handled even if there were available spares. This is not as simple as one might think, as the output of one switch can be the input of another switch as an alternate input. The point being, the PhDs were using the tools in their PhD t…
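To make the lookup-table idea in 1b concrete, here is a minimal sketch in Python/NumPy of what such a translation could look like. This is purely illustrative, not the original code; aside from the six bands, the quantization depth, table contents, and frame size are all assumptions.

    import numpy as np

    # Sketch: replace the per-set FFT with a precomputed lookup table.
    # Each pixel's six quantized band intensities index the table, and
    # the result is the output pixel value, e.g. a fire-likelihood score.

    N_BANDS = 6    # one frame per narrowband filter on the wheel
    LEVELS = 16    # assumed: quantize 0..255 intensities into 16 bins

    # Placeholder table; in practice it would be built offline from the
    # known emission signature of burning hydrogen across the six bands.
    lut = np.zeros((LEVELS,) * N_BANDS, dtype=np.uint8)

    def translate(frames):
        """Collapse six filter frames, shape (6, H, W), into one (H, W) frame."""
        q = (frames.astype(np.uint16) * LEVELS // 256).astype(np.intp)
        return lut[tuple(q)]   # one fancy-indexing operation per output frame

    # Example: six 480x640 frames -> a single classified frame.
    frames = np.random.randint(0, 256, (N_BANDS, 480, 640), dtype=np.uint8)
    out = translate(frames)

Since the whole translation is one table read per pixel, it runs at frame rate even on modest hardware, which is the difference between a 30-minute FFT pass and near real time.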
-
Fascinating stories - thanks for sharing them!

> originally written almost 30 years ago in C++, then rewritten in C#, without performance degradation mind you.

I'd be interested in hearing an expansion on this. C++ folks tend to be rather religious that there's just _no way_ that C# could be as performant as C++ for anything other than contrived scenarios, and a real-world example with an explanation would be an interesting tidbit to add to that ageless argument. (Possibly in the Lounge, or an article, if not appropriate here.) I say this with love, as I used to be one of those C++ folks. :)
-
The situation: a Xerox Data Systems (XDS) Sigma 5 was being used to collect and process telemetry data transmitted by a satellite. This was in 1974; the network had probably been designed in the 1960s and was state-of-the-art equipment for its time. The highest-speed line was 220 kbps, and the communications controller occupied two full cabinets. The satellite in question would transmit three tape reels' worth of data every day. If the data came through low-speed lines (because that was what certain ground stations could support), the computer could process about two tapes' worth of data, saving the last reel as unprocessed data. If the high-speed line was in use, then it was all the computer could do just to write the data to tape reels. Over one year, about 300 unprocessed reels of data accumulated in the tape library. The contract called for the system to support three satellites in orbit simultaneously, and the computer was choking on the data from just one. A second satellite was to go up in about six months, with a third scheduled for a year later.

At the time, I was working in an obscure field known as computer performance evaluation. This called for probes to be connected to certain pins available on the motherboard. For the IBM 360 series, these pins were known; data obtained from these probes would tell you which parts of the CPU were being used frequently, and there was software written to analyze this data. Unfortunately, the pin output information was not available for other computers. In fact, it was not in the interest of the computer vendor to optimize performance, as they could sell faster and bigger processors to the customer. The hardware monitoring equipment was in fact sold by two vendors independent of IBM. Thus, I had to figure out how to simulate a hardware monitor in software.

The program, conceptually, was trivially simple. Every 100 milliseconds or so, my program would interrupt the computer, look at what instruction was being executed, and note in which part of memory that instruction resided: was it in the operating system or in the application program? It turned out that the vast majority of time was being spent in the area reserved for program overlaying. Ah, yes, this particular OS didn't have virtual memory (hardly any OS on any computer had virtual memory at that time, and certainly not on the XDS Sigma 5, which would hardly qualify as a minicomputer), so we programmatically swapped overlays of the application program as needed into main memory…
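For anyone wondering what "simulating a hardware monitor in software" amounts to, here is a minimal sampling-profiler sketch of the same idea in modern Python (Unix only; every name and the toy workload are assumptions for illustration, and the Sigma 5 original obviously looked nothing like this):

    import signal
    import collections

    # Sampling profiler: interrupt the program on a timer, record where
    # execution currently is, and tally the hits. Function names here
    # stand in for the memory regions (OS vs. application vs. overlay
    # area) that the original program classified.

    samples = collections.Counter()

    def on_tick(signum, frame):
        # 'frame' is whatever was executing when the timer fired,
        # analogous to reading the current instruction address.
        samples[f"{frame.f_code.co_filename}:{frame.f_code.co_name}"] += 1

    signal.signal(signal.SIGPROF, on_tick)
    signal.setitimer(signal.ITIMER_PROF, 0.1, 0.1)  # fire every ~100 ms of CPU time

    # ... the workload under measurement ...
    total = sum(i * i for i in range(20_000_000))

    signal.setitimer(signal.ITIMER_PROF, 0)  # stop sampling
    for where, hits in samples.most_common(10):
        print(f"{hits:6d}  {where}")

Statistically, the region that turns up in the most samples is the region consuming the most CPU time, which is exactly how the overlay area was identified as the bottleneck.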
-
Gjeltema wrote:
C++ folks tend to be rather religious that there's just no way that C# could be as performant as C++
The main issue was my use of the STL and all the memory allocations/deallocations that were sucking up a ton of time. I improved on that in the C# code (one could argue that if I did the same thing in C++, it would still be even faster than the C# code). However, C#'s bytecode, JIT-compiled to native CPU code, is really, in my experience, just as fast as C++ unless you spend a lot of time optimizing the C/C++ code. Separately, in the C# code, I discovered this issue[^] with memory allocations in threads and wrote that article about it.
Latest Articles:
DivWindow: Size, drag, minimize, and maximize floating windows with layout persistence -
-
Very cool!
Latest Articles:
DivWindow: Size, drag, minimize, and maximize floating windows with layout persistence -
-
Singapore in 1980 had a hundred-plus banks trading in foreign currencies. Some banks didn't have regular retail or commercial banking operations but had a trading floor for trading US dollars against British pounds, Japanese yen, Italian lira, Deutsche Marks, French francs, etc. The profits were really minuscule, as exchange rates normally varied within a very narrow band; one could trade a million dollars against the British pound and show just a few thousand dollars in profit when lucky.

This particular bank had a young man from London as their trader. About nine months into the job, he started drinking heavily during lunch and showing other erratic behavior. The alarmed manager called the audit firm where I was employed to look into the books. They discovered that the trader had been booking false profits by entering into the computer system incorrect (and favorable to his trades) exchange rates. The profits he had booked were about $6 million. The trader was fired, and a very experienced and much older trader was brought in to fix the mess. The new trader closed out all the trades and said that this should stop the losses. The next day, the books showed a new loss of $450,000. The trader said the computer software was screwed up and there was no way there could be additional losses.

The auditors refused to certify the books unless and until the trades were re-run on the computer for every single day, with the correct exchange rates for that day for each currency. Presumably one could get these from the daily newspapers, but where does one go for six-month-old newspapers? Each day's run would take five hours or so, and six months of daily processing would be in excess of 1,000 hours. This was around December 1; the books had to be closed on December 31, and shortly thereafter the audited results had to be submitted to the relevant authorities. There were hardly 700 hours left in the rest of December even running the system 24 hours a day, so this was an impossible task.

I came to know of the situation when an auditor ran up to me in the office, asked if I had heard about the major disaster, and related the story to me. I went to the audit partner and offered to look into the computer system to determine what could be done. She said the decision was to re-run all the processing with the correct exchange rates and that I could do nothing to alter the situation. I told her politely that perhaps the client should make the decision about involving an IT consultant. She called the bank manager who, h…
-
:thumbsup::thumbsup::thumbsup:
Vivi Chellappa wrote:
I just couldn’t believe that bankers haven’t understood...
Bankers and politicians are reeeeeaaaaaalllllyyyyyy slow learners when it comes to what responsibility means.
M.D.V. ;)
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you.
Rating helpful answers is nice, but saying thanks can be even nicer.