Math with string, but why?
-
I often see questions of the form: "I have strings such as '0011001010010' and I want to calculate the bitwise AND/OR/XOR between them, how do I do that?" One person even wanted to implement SHA1 this way. Where is this coming from*? Who is teaching people that this is a good idea? Why is this allowed to continue? Another variant of this anti-pattern works with decimal strings; often zeroes are appended to the string because "that's faster than multiplying by 10, right?".
* Some theories I have are that the people suffering from these ideas have a fundamental misunderstanding either of how math works on a computer or of types in general (possibly because debuggers show string representations of everything), or that their brains are too visually oriented (and therefore see integers merely as strings of glyphs).
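To be clear about what the sane version looks like: parse once, do the math on integers, format once for display. A minimal sketch in Java (the input strings are invented examples):
----
public class BitwiseOnStrings {
    public static void main(String[] args) {
        // Parse the binary strings into integers once.
        int a = Integer.parseInt("0011001010010", 2);
        int b = Integer.parseInt("0001110001110", 2);

        // Do the math on integers, where AND/OR/XOR are single instructions.
        System.out.println(Integer.toBinaryString(a & b));
        System.out.println(Integer.toBinaryString(a ^ b));

        // The decimal variant, done right: one multiply, not a string append.
        int n = 123;
        n *= 10;
        System.out.println(n); // prints 1230
    }
}
----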
-
I know what you mean, it does seem that some educators need a swift kick up the bottom. But then, they don't seem to teach the basics of computing any more - it's as if the computer is an irrelevant part of computing! Start 'em with assembler - that'll learn 'em! The one which annoys me most is SQL queries built by string concatenation rather than teaching the little buggers about parametrized queries from day one. Stupid, dangerous, and time-wasting when they try to add binary data to the table...
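And for the youngsters reading along: the parameterized version is no extra work. A minimal JDBC sketch (the table and column names are invented for illustration):
----
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ParamQuery {
    // Insert a row that includes binary data; hypothetical table "users".
    static void insertUser(Connection conn, String name, byte[] avatar) throws Exception {
        // Placeholders keep the data out of the SQL text entirely:
        // no injection, no escaping headaches, and binary data just works.
        String sql = "INSERT INTO users (name, avatar) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            ps.setBytes(2, avatar); // try doing THIS with string concatenation
            ps.executeUpdate();
        }
    }
}
----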
Ideological Purity is no substitute for being able to stick your thumb down a pipe to stop the water
-
Well, contrary to what other people will tell you, be a good person: teach them quickly and show them the way to understanding how it works. Something like:
----
Well, that number is a string representation of a number; you need to convert it to a real number with function x, y, or z. After that, you can bitwise it to your liking and convert the number back to its string representation with function a, b, or c, to be able to display it on screen.
Here are a few internal/external links that can help you further:
http://link_to_good_explanation
http://link_to_another_explanation
----
IMO, this is what the forum should be. If you cannot or do not want to answer the question, just let it be and do something else that will make you happy. M.
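To make the x/y/z and a/b/c placeholders concrete: in Java, for example, those roles are filled by real JDK methods:
----
public class ConvertOperateConvert {
    public static void main(String[] args) {
        // The "function x, y, or z" step: string to real number.
        int value = Integer.parseInt("0011001010010", 2);
        // Bitwise it to your liking.
        value ^= 0b101010;
        // The "function a, b, or c" step: number back to string for display.
        System.out.println(Integer.toBinaryString(value));
    }
}
----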
Watched code never compiles.
-
"Please explain me how to convert binary to hexadecimal" :mad: "How can I read a binary file into text" :mad: etc. I do get the feeling that some of the people posting questions here are the actual teachers rather than the students.
-
OriginalGriff wrote:
I know what you mean, it does seem that some educators need a swift kick up the bottom. But then, they don't seem to teach the basics of computing any more - it's as if the computer is an irrelevant part of computing! Start 'em with assembler - that'll learn 'em!
I have to fully agree with this. One of my interests is electronics, so I feel it is even more important to teach them the low level parts of a computer, including assembler. Heck, some of my projects are entirely written in assembler.
-
"Please explain me how to convert binary to hexadecimal" :mad: "How can I read a binary file into text" :mad: etc. I do get the feeling that some of the people posting questions here are the actual teachers rather than the students.
:laugh: I'm just waiting for the first "PLZ MRK THS HMWRK URGENTZZZZZ!!!!" in QA
Ideological Purity is no substitute for being able to stick your thumb down a pipe to stop the water
-
This just seems to be a common pattern for 'noobs'. My guess is that they don't quite get datatypes, get lots of compiler errors and then automatically try to use strings for everything, including really awful string conversions.
At least artificial intelligence already is superior to natural stupidity
-
harold aptroot wrote:
Math with string, but why?
Because string theory[^] desperately needs math? ;P
Veni, vidi, vici.
-
OK, I took a look at the section "The mathematics"... stuff for nightmares. It desperately needs less math, I'd say.
-
That's all.
-
harold aptroot wrote:
Another variant of this anti-pattern works with decimal strings; often zeroes are appended to the string because "that's faster than multiplying by 10, right?".
I would expect appending to be faster myself.
-
harold aptroot wrote:
Another variant of this anti-pattern works with decimal strings; often zeroes are appended to the string because "that's faster than multiplying by 10, right?".
I would expect appending to be faster myself.
You haven't met .net yet, have you? :sigh:
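For the record, the append is not faster: strings in .NET (and in Java, used for the sketch below) are immutable, so every append allocates a fresh string and copies the old contents, making n appends cost O(n^2), while the multiply is a single machine instruction. A rough illustration:
----
public class AppendVsMultiply {
    public static void main(String[] args) {
        // The "fast" version: each += allocates a new string and copies
        // every character accumulated so far.
        String s = "1";
        for (int i = 0; i < 10; i++) {
            s += "0";
        }

        // The thing it was trying to avoid: one multiply per step.
        long n = 1;
        for (int i = 0; i < 10; i++) {
            n *= 10;
        }

        System.out.println(s + " vs " + n); // 10000000000 vs 10000000000
    }
}
----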
-
But the weather's OK, yeah? Can we bitch about the weather?
I wanna be a eunuchs developer! Pass me a bread knife!
-
That's all.
Best comment I've heard all day! :-D :-D
Full-fledged Java/.NET lover, full-fledged PHP hater. Full-fledged Google/Microsoft lover, full-fledged Apple hater. Full-fledged Skype lover, full-fledged YM hater.
-
I worked on a system where the DEA encryption program used strings of '0's and '1's instead of trying to work out the possibly more efficient bitwise operations. That code was initially written in DataBASIC on a Pick system. In Pick all numbers are just ASCII strings; they get converted implicitly when you do an arithmetic expression, but when assigned to a variable they are converted back to an ASCII string. It works fine, and it sure makes the files easy to edit, as all numbers are in their ASCII representation.
The Pick system was much too slow for encryption. Pick works well in an I/O-bound system, with DataBASIC actually running as interpreted P-code. Fine for today's mega-fast systems doing Java, but it was slow on 20 MHz 68020 systems. So the encryption was re-implemented on a PC-XT under C. The binary-as-strings logic was kept, and a good thing too.
Several years later we upgraded to UniData running on a Motorola 88K under Unix. The C code for the encryption could now run on the same box as the 'system', with no need for a clunky comms connection to a PC. And the C code ported just fine. I hate to think what that conversion would have been like with native integers: from 16-bit ints to 32-bit, and from the little-endian 8088 to the big-endian 88K. There was another upgrade to 64-bit Alpha; again, that would have needed the code adjusting for another increase in the size of integers. No doubt that system is now running on Xeons, a change in endianness again, irrelevant to the binary-as-string code.
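The portability claim holds up because character-by-character logic on '0'/'1' strings never touches the machine's integer width or byte order. A minimal sketch of that kind of operation (in Java here, rather than the original DataBASIC or C):
----
public class StringXor {
    // XOR two equal-length binary strings character by character.
    // Nothing here depends on int size or endianness, which is why
    // such code ports unchanged across 16/32/64-bit machines.
    static String xor(String a, String b) {
        StringBuilder out = new StringBuilder(a.length());
        for (int i = 0; i < a.length(); i++) {
            out.append(a.charAt(i) == b.charAt(i) ? '0' : '1');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(xor("0011", "0101")); // prints 0110
    }
}
----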