What I want.....
-
I faced a similar problem just recently. The solution was to let the compiler handle the endianness. Integers and chars (multibyte/Unicode in that context) all work the same way in the same language. I don't cast/convert between integers and chars because one is numbers and the other is characters, letters. If a conversion is needed, the compiler offers conversion functions that work the same independent of the platform. Same for data declarations: in C(++), a uint16_t is unambiguous. Across languages, however, I don't see how that's supposed to work. What's even the point of piping a C source file into a Delphi compiler or vice versa? This technique naturally results in my tools behaving the same on every platform the compiler offers.

Your post reminds me of a nightmare from a coworker that I fixed a while ago. He was parsing a protocol with the byte stream containing, amongst other things, integers. The protocol is LE, the system the parser runs on is LE (Windows x86-64), so all things work splendidly--for 1-, 2- and 4-byte integers, that is. Just cast it! And to parse 3-byte integers, he built a monster of bit shifts and whatnot. I've replaced his horrible mess with somewhat simple code that initializes the result to 0 and then adds byte by byte, multiplying accordingly. Voila, problem solved; the function is simple and works for an arbitrary number of bytes.
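For reference, a minimal sketch of that accumulate-byte-by-byte approach in C; the function name and the uint64_t result type are my own choices, not the code from that parser:

```c
#include <stdint.h>
#include <stddef.h>

/* Accumulate an unsigned little-endian integer of 1..8 bytes.
   Starts at 0 and adds each byte scaled by its position, so it
   works for "odd" widths like 3 bytes without bit-shift tricks. */
static uint64_t read_le_uint(const unsigned char *buf, size_t nbytes)
{
    uint64_t result = 0;
    uint64_t scale = 1;
    for (size_t i = 0; i < nbytes; ++i) {
        result += (uint64_t)buf[i] * scale;
        scale *= 256;
    }
    return result;
}

/* Example: the 3-byte LE sequence 0x01 0x02 0x03 decodes to 0x030201. */
```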
Your 3-byte integer problem is exactly the kind of work-around that causes problems. I like your solution to the LE problem--simple and directly understandable. But what if we didn't have to think about it? What if the variable could be defined as a "3-byte little-endian integer" and the conversion to/from machine requirements took place automatically on all future references? How could something like that be implemented at the language, compiler or machine level?

I cannot tell you how many times I've seen "char" definitions used inappropriately when a "byte" definition would make more sense. Or things like: "memcpy( (char *) &structa, (char *) &structb, 100 );" (And if someone says they have never done something like that, I say "BULL!" They've done it one way or another in whatever language, except perhaps LISP or SNOBOL.) Just thinking.
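One way something like that could look at the library level in C, as a sketch rather than an existing feature; the le24_t name and its helpers are hypothetical:

```c
#include <stdint.h>

/* A hypothetical "3-byte little-endian integer" value type.
   The in-memory representation is always the wire format; callers
   only ever see a normal uint32_t, so host endianness never leaks. */
typedef struct { unsigned char b[3]; } le24_t;

static uint32_t le24_get(le24_t v)
{
    return (uint32_t)v.b[0]
         | ((uint32_t)v.b[1] << 8)
         | ((uint32_t)v.b[2] << 16);
}

static le24_t le24_set(uint32_t x)
{
    le24_t v;
    v.b[0] = (unsigned char)(x & 0xFF);
    v.b[1] = (unsigned char)((x >> 8) & 0xFF);
    v.b[2] = (unsigned char)((x >> 16) & 0xFF);
    return v;
}
```

A language or compiler could hide even these helpers behind ordinary assignment; the point of the sketch is only that the conversion can live with the type instead of at every use site.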
-
But what if we didn't have to think about it? What if the variable could be defined as a "3-byte little-endian integer" and the conversion to/from machine requirements took place automatically on all future references? [...]
But we already have that! Let's forget the 3-byte integer for a moment; that's rather specific to that protocol and usually not needed. When I work with an int32_t (a rather common type), I don't have to care which endianness the underlying system uses. The compiler takes care of everything! Same goes for, let's say, uintptr_t. I don't have to care about endianness, bitness, none of that. The compiler does it for me. Well, I of course have to work with the compiler, but your world, the one where the programmer doesn't have to care, is already there.

The other topic here is that people will always find a way to circumvent the compiler and shoot themselves in the foot. There are languages that make that easier or harder; C is the worst offender I've ever met (save for assembly, but that's in a league of its own). C doesn't even have a byte type! A char is a byte in C, so you can't blame the programmer (except possibly for a poor choice of C as a tool). Switch the language. C# or Delphi, on the other hand--those compilers yell at you when you're doing questionable things. And if the thing in question may work just fine while still remaining questionable, you'll at least get a warning.

Feel free to yell BULL, by the way. I've never done such a thing, because the question is not whether it'll blow up in my face but merely when. It will blow up sooner or later.

Fun fact: I've now spent about two weeks fixing a binary communication layer. Some predecessor of mine thought that using strings, data structures designed for text, where binary information is processed would be a splendid idea. And then came Unicode. Trying to convert 99h to a Unicode string member yields 3Fh and some other byte I've forgotten. I bet it was a C programmer who grew up in the 60s riding the "Learned it once, never relearn" mentality. I converted all of this nonsense to TArray (Delphi nomenclature) and stuff works now.

I still wonder whether your topic is about programming in general or C in particular, as you seem insistent on issues that are long solved by several programming languages (granted, both Delphi and C# still allow you to shoot yourself in the foot, but you gotta fight hard against the compiler to do that).
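To make that concrete (my own illustration, not code from the thread): value-level operations on an int32_t behave identically everywhere, and endianness only shows up once you reinterpret the object's raw bytes:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int32_t x = 0x11223344;

    /* Value-level code: identical result on big- and little-endian hosts. */
    printf("high byte via shift:  %02x\n", (unsigned)((x >> 24) & 0xFF)); /* always 11 */

    /* Memory-level code: the answer depends on the host's byte order. */
    unsigned char first;
    memcpy(&first, &x, 1);
    printf("first byte in memory: %02x\n", (unsigned)first); /* 44 on LE, 11 on BE */
    return 0;
}
```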
-
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. I just finished chasing a bug in some vintage commercial software that was written to accommodate big-endian and little-endian integers depending on the platform. It turned out that it applied big-endian logic in one place where little-endian was required. The whys and wherefores of doing this are not important. What is important is the line of thought I reached upon conclusion, i.e., what I want to see in software.

1) I want to be able to declare my variables of a type and then have the necessary conversions take place automatically. E.g., I want to declare "X" as a 2-byte integer and "S" as an ASCII (or utf8, utf16, whatever...) character string and be able to simply specify 'S = X AS "format"'--not "S = X.toString("fmt");", "S = itoa(i);" or whatever language conversion functions are appropriate. I know what I have going in, I know what I want coming out--make it easy to get from A to B!! (A rough sketch of how close standard C already gets to this appears a bit further down.)

2) I want my data declarations to be consistent across all platforms/languages/environments/...! Before some reader gets all hot and bothered about using C typedefs, etc., in declarations, I know this can be done. What I want is consistency--i.e., a "short" to be a 2-byte integer, a "long" to be a 4-byte integer... get the drift. Part of the problem I was chasing had to do with the software expecting a C int to be 32 bits, but some 64-bit environments define an int as 64 bits.

3) I want my utilities and commands to operate the same way, with the same results, across all platforms. If I do a "myutil -x" command on Linux, I want to see EXACTLY the same output and results across Cygwin, Windows 10 Ubuntu, Debian, etc.

4) I want clear, simple, understandable, comprehensible programming constructs. I am tired of chasing errors such as where somebody fat-fingered "=" when "==" was meant, or where a brace was misplaced or omitted. I want to be able to look at a piece of code and understand what the author intended easily and clearly.

5) I want clear, complete commercial documentation. I have seen thousands of circular definitions such as: returntype FunctionXYZ(int p1, int p2); Returns XYZ of p1 and p2. BIG WHOOPING DEAL! I'm not an idiot--I can plainly see that from the function call. I often need to know HOW it uses p1 and p2 to arrive at XYZ. (Of course, by now,
Sounds just like VB6 to me. I still can't figure out why the "elitists" hate it so much. Other than the name, of course. ;P If you don't like "GoTo", don't use it. :cool:
Slow Eddie
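Regarding points 1 and 2 of the wishlist above: within a single language, C's <stdint.h> and <inttypes.h> already get part of the way there. A minimal sketch, offered as my own illustration rather than anything from this thread:

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int16_t  x = -1234;        /* exactly 2 bytes on every conforming platform */
    uint32_t y = 0xDEADBEEF;   /* exactly 4 bytes, regardless of what "long" is */

    /* About the closest standard C gets to  S = X AS "format":
       convert through a format string into a char buffer. */
    char s[32];
    snprintf(s, sizeof s, "%" PRId16, x);
    printf("x as decimal text: %s\n", s);

    snprintf(s, sizeof s, "%08" PRIX32, y);
    printf("y as hex text:     %s\n", s);
    return 0;
}
```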
-
Just one word: Mickeysoft :-) .Net can never be so good that they get me to marry them and then let them move in and do whatever they like.
I have lived with several Zen masters - all of them were cats. His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
CodeWraith wrote:
just one word: Mickeysoft :)
Subjective religious reasons, then. Any objective reasons?
-
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... What is important is the line of thought I reached upon conclusion, i.e., what I want to see in software. [...]
You and every other quality technician... Seriously though, the problem is that as a profession we have no real enforced standards. Microsoft attempted to be a standard bearer in the 1990s and early 2000s but everyone complained and now we have the morass that technicians are just starting to rightly complain about. The other issue is that we have too many technicians in our ranks that are too eager to promote their own vision of things at the drop of a hat since using a tool that may be one or more years old may not be cool. First the major organizations destroyed the vital functions of IT development and then the technical community got on board and started doing the job for them. Now you have aberrations like MVC replacing Web Forms when it was already available and no one on the Microsoft side of things was interested until that company promoted its own version of MVC (most likely a direct copy of the original). Now you have JavaScript as a major language though it is a major headache to code with. Now you have Agile and DevOps, which in reality are diametrically opposed to quality software engineering standards. And now you have new tool-sets being introduced on a daily basis by anyone who thinks they know what they are doing. In short, the entire profession is a complete mess. And it ain't going to get better in the current economic environments of barbaric capitalism...
Steve Naidamast Sr. Software Engineer Black Falcon Software, Inc. blackfalconsoftware@outlook.com
-
.NET is useful - it provides a huge library of classes that work in a consistent way, unlike the libraries you had to play with in C/C++, and it reduces the memory leak problems endemic to pointer-based code written by people who think they know what they are doing ... :laugh: It's a tool - and one that works across multiple platforms with a high degree of "similarity". Try that with a native C compiler, and it becomes a struggle to get anything to work in a short timeframe. Don't get me wrong, I miss my assembler days (or decades, more accurately) - but .NET is here to stay and I'm happy using it.
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
"but .NET is here to stay and I'm happy using it." I can happily say I hardly ever used .NET. As an embedded developer (now retired), .NET and C# were never options for the projects I worked on. Heck, I think only one project in my career had more than a meg of RAM. For my projects it's been C/C++ for at least the last 15 years of my career.
-
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... What is important is the line of thought I reached upon conclusion, i.e., what I want to see in software. [...]
I feel your pain. Here is the dilemma: ease of use for the programmer. As a programmer, I don't want to declare int4, int8, int16. The concept was that int was the DEFAULT machine's bitness, which allowed it to move across platforms. In memory this is great; reading/writing to disk created the endian problem. Then, when you are not looking, you get coercion of types, etc. Add in signed/unsigned, and pretty soon you realize you need a completely object-based system. To fix what? Write once, test everywhere? I feel your pain, but I see no solution outside of custom-managing the types that go into and out of some kind of storage!
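That custom managing usually reduces to a pair of helpers at the storage boundary; a minimal sketch in C, with the name and the unsigned-only handling as my own assumptions (it pairs with a byte-by-byte reader like the one sketched earlier in the thread):

```c
#include <stdint.h>
#include <stddef.h>

/* Serialize 'value' into 'buf' as a little-endian integer of 'nbytes'
   bytes (1..8), independent of the host's own byte order. Pair this
   with a matching reader at load time and the in-memory int type can
   stay whatever is natural for the platform. */
static void write_le_uint(unsigned char *buf, size_t nbytes, uint64_t value)
{
    for (size_t i = 0; i < nbytes; ++i) {
        buf[i] = (unsigned char)(value & 0xFF);
        value >>= 8;
    }
}
```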
-
You and every other quality technician... Seriously though, the problem is that as a profession we have no real enforced standards. [...]
-
I feel your pain. Here is the dilemma: ease of use for the programmer. [...] I see no solution outside of custom-managing the types that go into and out of some kind of storage!
Write once, run everywhere. How many times have you heard a senior management type say "We need it to run on Windows laptops for Accounting but Marketing uses Macs and Engineering uses Linux. And by the way, can you make it accessible from my phone?" If I had a nickel for every time I've heard that, I would make Bill Gates and Warren Buffet combined look like paupers.
-
Write once, run everywhere. How many times have you heard a senior management type say "We need it to run on Windows laptops for Accounting but Marketing uses Macs and Engineering uses Linux. And by the way, can you make it accessible from my phone?" [...]
That's why the DOCKER concept was so exciting to me, to be honest. UCSD Pascal had a runtime. Under DOS it was dog slow, but the idea was basically a VM... It ran on Linux and DOS the same, and anywhere else they ported it to, if memory serves me. Imagine a world (X Windows tried this) where you run your application and the GUI attaches to it! Meaning you need only configure standard I/O parameters. Now, this was the beginning of EVERY COBOL program (remember: Environment Division, etc.). We have evolved really far, and we are getting places. The one upside of the web was a "standard" GUI available to program to, making things like Proton or Docker, with a port to talk to, workable across platforms. And I see that is where we seem to be going. But like Scotty in Star Trek said, "The fancier the plumbing, the bigger the problems" (or some such).