What is the longest programming misconception you've held (that you are aware of)?
-
Until just now I believed negative integers were just a flip of the first bit. Wow, how wrong I was, for MANY years! Are there some architectures where that is the case, to make myself feel a little better?
-
There are two ways to represent negative numbers: "sign & magnitude" and "two's complement" - the "flip the top bit" approach is the former, and was used extensively by IBM until around the 70's, by which time it was clear that two's complement was a "better" solution (i.e. easier to implement in hardware, and didn't have a "negative zero", which is an odd concept all on its own).
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!
-
OTOH, two's complement has an oddity that -MINVALUE == MINVALUE, which for some use cases is even worse than "negative zero". (e.g. in a 16-bit system, MINVALUE = -32768 == 0x8000, and -MINVALUE == (~MINVALUE + 1) == 0x8000)
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
That depends: some systems have a "negative space" that is one larger than the positive space (or consider 0 to be a positive number, which is also an odd idea). We'd need to move away from binary computers to sort all this crap out! Can I suggest trinary? "True", "False", and "Dunno"? :laugh:
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!
-
If you have a "negative space" that is one larger than the "positive space", then you must either have the anomaly that I discussed, or raise a flag/generate an exception when calculating -MINVALUE. Both are bad solutions, but the latter is safer, in the sense that you won't get bad results without knowing about them. We could avoid the anomalies and use any base we wished, if we didn't insist on encoding the sign as part of the number. For example, we could use a trinary value (positive, zero, negative) to represent the sign, and whatever base was convenient to represent the magnitude. If the sign is zero, the magnitude would be ignored. The problem, as you stated in your original answer, is that this is much more difficult to implement in hardware. EDIT: IIRC, the Zuse Z3 (?) actually had a trinary sign bit.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
And can you imagine the fun of explaining XOR in a trinary system in QA? It makes my head hurt to think what a XOR b would actually work out as in trinary ... :laugh:
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!
-
In a trinary-sign computer, there would be a much bigger difference between logical and arithmetic operations. One way to do so would be to enforce that only non-negative values may be used in logical operations. A better solution IMO would be to ignore positive or negative signs, performing the logical operation only on the magnitudes. A zero sign would indicate that the magnitude must be "normalized" to zero before performing the operation. The result of the operation would either have a positive sign (if non-zero) or a zero sign (if zero). I leave the design of the hardware as an exercise to our hardware colleagues... :)
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
The longest programming misconception that I have held? That one day I will understand asynchronous functions in javascript. Seriously, despite having worked often and in different javascript frameworks with asynchronous functions and await calls to those functions, I keep getting tripped up by the asynchronous nature of javascript.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
-
You've got 1's complement as well, which I believe was far more common in the '60s and '70s than sign-magnitude. Wasn't the Univac 1100 series all 1's complement? Some CDC mainframes as well, I believe. I think you have to go back to designs from the '50s to find sign-magnitude integer representation. For floating point, though, I have never seen anything but sign-magnitude.
-
The IBM 7090 apparently had that. For me, I think it was the idea that local variables are created one by one, at the moment they are declared, leading to silly conclusions such as "obviously you should reuse an existing variable instead of making a new one". Maybe that's a thing in scripting languages?
-
My answer changed today because of the above post[^]. :-D I was about to reply to it saying that, unlike C++, it's interesting that C# doesn't insist that default be the last label in a switch statement. But I figured I should check this, and it turns out that C++ also allows it! I'd always believed otherwise since starting to use C++ about 20 years ago, perhaps because that's the way it is in the language I used for a long time, though it uses OUT instead of default. EDIT: That's the longest known misconception; there are probably tons of others!
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing.
-
Note: The following is a personal statement of preference, not an invitation to a jihad. Not really a programming misconception, but a coding style choice. For a very long time, from the mid-1980's through about 2010 or so, I used K&R braces exclusively. When I started writing C#, I used Allman[^] braces, following the style recommended by Microsoft and a couple of the books I was using. As time has gone on, Allman has become my preferred style. I have some vision problems due to age and glaucoma, so my code needs frequent blank lines to separate logical blocks. Allman braces provide white space that isn't merely cosmetic. I've even got an editor macro that converts K&R braces to Allman, and I have a large body of C++ that I recently converted as part of a refactor-and-refresh effort on an old product I'm maintaining.
Software Zen:
delete this;
-
You don't need to fade it by saying it's personal preference when it's the Correct™️ way. Bring the jihad! :laugh: My rationale is that other coding styles often waste horizontal space but use vertical space miserly. The control statement before the { needs to stand out so that you don't have to squint to read its condition. It also aligns the { … } and reduces the number of broken lines, which is another thing I try to avoid (hence 3-space indentation instead of 4 or even 8, whose users should be forced to edit all their spaces manually.)
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing.
-
Visual Studio has quite extensive options for reformatting code according to your preferred style, and the preferences follow the logged-in user. You may want to set up two user names with different formatting preferences: log in with one name, go to the end brace, delete it and retype it, and you have the code the way you want it; log in with the other name, do the same exercise, and the code is formatted the way it should be delivered to others.
-
I'm unaware that I suffer from any. :~ But, related to "negative integers", one misconception which I have seen at least one person state is the idea that signed integers (two's complement) are lower level (more native to the hardware) than unsigned integers -- that the CPU has to work harder to perform unsigned math. I'm pretty sure that I saw someone state that you should avoid using unsigned integers because they're slower! :omg:
-
+5 for Allman.
-
Greg Utas wrote:
edit all their spaces manually
As on a VT100, with an eighty-character limit.
-
The arrival of VT100s in our university computing lab was momentous! Our DECwriters were then used mostly for printouts.
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing.
-
I use Allman for C#, but for Javascript (mostly Typescript) I make use of K&R. I do this because it's the generally accepted style for both languages and I am used to swapping between them. It's also because, working as part of a small team within a larger group (a team of 5 developers within a group of 20+ developers), it's easier to follow the generally accepted standards - or rather, code doesn't get past code review if it doesn't follow those standards. At home I do the same, using Allman for C# and K&R for Typescript or any other form of Javascript - it just kind of 'feels' right.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
-
Save us all our pain: just use a unary system of 'Dunno'! All our problems would be solved, and none of them could be! How very Schrödinger-ish! :laugh: