What I want.....
-
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. I just finished chasing a bug in some vintage commercial software that was written to accommodate big-endian and little-endian integers depending on the platform. It turned out that it applied big-endian logic in one place where little-endian was required. The whys and wherefores of doing this are not important. What is important is the line of thought I reached upon conclusion, i.e., what I want to see in software:

1) I want to be able to declare my variables of a type and then have the necessary conversions take place automatically. E.g., I want to declare "X" as a 2-byte integer and "S" as an ASCII (or UTF-8, UTF-16, whatever...) character string, and be able to simply specify 'S = X AS "format"' -- not "S = X.toString("fmt");", "S = itoa(i);" or whatever language conversion functions are appropriate. I know what I have going in, I know what I want coming out -- make it easy to get from A to B!!

2) I want my data declarations to be consistent across all platforms/languages/environments/...! Before some reader gets all hot and bothered about using C typedefs, etc., in declarations, I know this can be done. What I want is consistency -- i.e., a "short" to be a 2-byte integer, a "long" to be a 4-byte integer... get the drift? Part of the problem I was chasing had to do with the software expecting a C int to be 32 bits, but some 64-bit environments define an int as 64 bits.

3) I want my utilities and commands to operate the same way, with the same results, across all platforms. If I do a "myutil -x" command on Linux, I want to see EXACTLY the same output and results across Cygwin, Windows 10 Ubuntu, Debian, etc.

4) I want clear, simple, understandable, comprehensible programming constructs. I am tired of chasing errors such as where somebody fat-fingered "=" when "==" was meant, or where a brace was misplaced or omitted. I want to be able to look at a piece of code and understand what the author intended, easily and clearly.

5) I want clear, complete commercial documentation. I have seen thousands of circular definitions such as: returntype FunctionXYZ(int p1, int p2); Returns XYZ of p1 and p2. BIG WHOPPING DEAL! I'm not an idiot -- I can plainly see that from the function call. I often need to know HOW it uses p1 and p2 to arrive at XYZ. (Of course, by now,
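To make the endianness trap above concrete, here is a minimal C# sketch (illustrative only, not taken from the software in question): the same two bytes give different values depending on which byte order the reader assumes, and the language's fixed-width types pin the sizes down on every platform.

using System;
using System.Buffers.Binary;

class EndianSketch
{
    static void Main()
    {
        // Two raw bytes as they might arrive from a file or the wire.
        byte[] raw = { 0x01, 0x02 };

        // Interpreting the same bytes with each byte order gives different numbers.
        short big    = BinaryPrimitives.ReadInt16BigEndian(raw);     // 0x0102 = 258
        short little = BinaryPrimitives.ReadInt16LittleEndian(raw);  // 0x0201 = 513

        // In C#, short is always 16 bits and int is always 32 bits,
        // whatever the platform (the consistency asked for in point 2).
        Console.WriteLine($"big-endian: {big}, little-endian: {little}");
    }
}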
-
I don't. Some languages - like C# - are strongly typed for a reason: to catch errors early. Firstly, catching type conversions at compile time means that the code does exactly what you wanted, or it doesn't compile. Secondly, it makes you explicitly convert things like user input to the type you want, and provides exceptions (or "failed" responses as appropriate) if the user input doesn't match up. The global implicit typing you seem to prefer leads to errors, because the compiler has to "guess" what you wanted - and that means that bad data gets into the system undetected. And until you've had to unpick a 100,000-row DB to try and fix dates that were entered as dd-MM-yy and MM-dd-YY, you probably don't realise just how much of a PITA that is.

And then there is the "pointer problem": a pointer to an ASCII char is a pointer to a byte, but a pointer to a Unicode character is a pointer to a word. You can - by casting - explicitly convert a byte pointer to a word pointer, but that doesn't change the underlying data, and it means that half the data accesses aren't going to work properly, because the byte pointer can be "half way up" a word value.

And strong typing (in C#, if not in C or C++) also eliminates your "=" vs "==" problem in most cases, because the result of the assignment is not a bool, so it can't be used in a conditional. You can set the C or C++ compiler to give warnings or errors when you type it wrong, but it is still valid code because those languages didn't start life with a native boolean type: any nonzero value is "true".

You want to get away from the complexity and want consistent declarations? Try C# ... (but hurry, it's getting complicated as well now).
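A minimal C# sketch of two of those points - checked conversion of user input, and why "=" where "==" was meant simply will not compile (the input string is made up for illustration):

using System;

class StrongTypingSketch
{
    static void Main()
    {
        // User input arrives as text; the conversion is explicit and checked.
        string input = "123x";   // hypothetical user input
        if (int.TryParse(input, out int value))
            Console.WriteLine($"Parsed {value}");
        else
            Console.WriteLine("Bad input rejected before it reaches the data");

        int x = 0;
        // if (x = 5) { }   // does not compile: cannot implicitly convert 'int' to 'bool'
        if (x == 5)
            Console.WriteLine("x is five");
    }
}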
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
What I want is for my local 5-Guys burger joint to stop burning the bacon on my double-bacon cheeseburgers.
-
According to Microsoft, VB6 should meet all your needs! ;P
- I would love to change the world, but they won’t give me the source code.
-
I concur - but I skip the cheese part.
- I would love to change the world, but they won’t give me the source code.
...All I Want[^]
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
He's playing with you: VB6 died for new projects in 2002 or so when .NET was released.
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
OMG!! Pass the mind-bleach!
- I would love to change the world, but they won’t give me the source code.
OK - Mind Bleach[^]
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
I knew what it was going to be and yet I still went there! You b&^$@$*&! ;P
- I would love to change the world, but they won’t give me the source code.
Took your mind off of Melissa Lynn tho' ... My work here is done. :-D
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
I agree with everything you wrote except the last paragraph. I despise everything about dot nyet.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
.NET is useful - it provides a huge library of classes that work in a consistent way, unlike the libraries you had to play with in C / C++, and it reduces the memory leak problems endemic to pointer-based code written by people who think they know what they are doing ... :laugh: It's a tool - and one that works across multiple platforms with a high degree of "similarity". Try that with a native C compiler, and it becomes a struggle to get anything to work in a short timeframe. Don't get me wrong, I miss my assembler days (or decades, more accurately) - but .NET is here to stay and I'm happy using it.
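For what it's worth, a small sketch of that "consistent library" point (the file name is hypothetical): the same framework calls behave the same way whichever OS the runtime is on.

using System;
using System.IO;
using System.Linq;

class ConsistencySketch
{
    static void Main()
    {
        // Path.Combine picks the right separator for the host OS,
        // and the string/LINQ APIs behave identically everywhere.
        string path = Path.Combine("data", "notes.txt");   // hypothetical file
        string[] lines = File.Exists(path)
            ? File.ReadAllLines(path)
            : new[] { "delta", "alpha", "charlie" };

        foreach (string line in lines.OrderBy(l => l, StringComparer.Ordinal))
            Console.WriteLine(line);
    }
}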
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
"These people looked deep within my soul and assigned me a number based on the order in which I joined." - Homer
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions,....get the idea? I have specialized over the years in doing weird stuff.
Good! I got a PhD by asking questions about things that everyone else assumed, which seemed weird at the time, but I was right to question the assumptions; it turns out they were wrong. The weird problems are the most interesting.
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
-
I concur - but I skip the cheese part.
Agreed, and also skip the burger part. What's wrong with a plain bacon sandwich?
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
-
OriginalGriff wrote:
Some languages - like C# - are strongly typed for a reason: to catch errors early.
I found going from Object Pascal (Delphi) to C# was pretty easy, since they are both strongly typed. Of course, going from ALGOL to Pascal was also fairly easy, but going from FORTRAN to ALGOL was painful.
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
-
Walt Fair, Jr. wrote:
I got a PhD by asking questions about things that everyone else assumed ... The weird problems are the most interesting.
Never neglect the "trivial" roots of an equation. :)
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.