What I want.....
-
Rick York wrote:
I despise everything about dot nyet.
I'm curious as to why - particularly since I have quite the opposite reaction. :)
Latest Article - Azure Function - Compute Pi Stress Test Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
Just one word: Mickeysoft :-) .Net can never be so good that they get me to marry them and then let them move in and do whatever they like.
I have lived with several Zen masters - all of them were cats. His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
-
.NET is useful - it provides a huge library of classes that work in a consistent way, unlike the libraries you had to play with in C / C++, and reduces the memory leak problems endemic to pointer-based code written by people who think they know what they are doing ... :laugh: It's a tool - and one that works across multiple platforms with a high degree of "similarity". Try that with a native C compiler, and it becomes a struggle to get anything to work in a short timeframe. Don't get me wrong, I miss my assembler days (or decades more accurately) - but .NET is here to stay and I'm happy using it.
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
OriginalGriff wrote:
and reduces the memory leak problems endemic to pointer-based code written by people who think they know what they are doing ... :laugh:
In other words: .Net is for those who don't know what they are doing. Excellent argument. :-)
OriginalGriff wrote:
It's a tool - and one that works across multiple platforms with a high degree of "similarity". Try that with a native C compiler, and it becomes a struggle to get anything to work in a short timeframe.
I have been trying that over the last few days. It's not as bad as you think anymore.
OriginalGriff wrote:
Don't get me wrong, I miss my assembler days (or decades more accurately) - but .NET is here to stay and I'm happy using it
In other words: You have become too comfortable and now you finally love Big Brother. :-)
I have lived with several Zen masters - all of them were cats. His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
-
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. I just finished chasing a bug in some vintage commercial software that was written to accommodate big-endian and little-endian integers depending on the platform. It turned out that it applied big-endian logic in one place where little-endian was required. The whys and wherefores of doing this are not important. What is important is the line of thought I reached upon conclusion, i.e., what I want to see in software.
1) I want to be able to declare my variables of a type and then have the necessary conversions take place automatically. E.g., I want to declare "X" as a 2-byte integer and "S" as an ASCII (or UTF-8, UTF-16, whatever...) character string and be able to simply specify 'S = X AS "format"' -- not 'S = X.toString("fmt");', 'S = itoa(i);' or whatever language conversion functions are appropriate. I know what I have going in, I know what I want coming out -- make it easy to get from A to B!
2) I want my data declarations to be consistent across all platforms/languages/environments/...! Before some reader gets all hot and bothered about using C typedefs, etc., in declarations, I know this can be done. What I want is consistency -- i.e., a "short" to be a 2-byte integer, a "long" to be a 4-byte integer... get the drift? Part of the problem I was chasing had to do with the software expecting a C int to be 32 bits, but some 64-bit environments define an int as 64 bits.
3) I want my utilities and commands to operate the same way, with the same results, across all platforms. If I do a "myutil -x" command on Linux, I want to see EXACTLY the same output and results across Cygwin, Windows 10 Ubuntu, Debian, etc.
4) I want clear, simple, understandable, comprehensible programming constructs. I am tired of chasing errors such as where somebody fat-fingered "=" when "==" was meant, or where a brace was misplaced or omitted. I want to be able to look at a piece of code and understand what the author intended easily and clearly.
5) I want clear, complete commercial documentation. I have seen thousands of circular definitions such as: returntype FunctionXYZ(int p1, int p2); "Returns XYZ of p1 and p2." BIG WHOPPING DEAL! I'm not an idiot -- I can plainly see that from the function call. I often need to know HOW it uses p1 and p2 to arrive at XYZ. (Of course, by now,
Once you start writing significantly sized software, automatic conversions are the last thing you want. They're a minefield of errors waiting to happen, IMO. As is often the case in life, you can sort of pick one: easy or good. You can't really have both unless it's a fairly modest undertaking. When it gets serious, the old saying of "measure twice, cut once" really applies. The time you spend being very specific to the compiler about what you want to happen will save you endless woe. This is one of the things that really makes me shake my head at modern C++, where people are using 'auto' all over the place.
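To make the trade-off concrete, here is a minimal sketch assuming C# (which much of this thread revolves around); the variable names and format string are illustrative only, not from the original post. The explicit call states the format and culture - exactly the information a hypothetical 'S = X AS "format"' would have to guess at.

using System;
using System.Globalization;

class ExplicitVsImplicit
{
    static void Main()
    {
        short x = 1234;   // the OP's "2-byte integer"

        // Explicit: we state the format and culture, so nothing is guessed.
        string s = x.ToString("D6", CultureInfo.InvariantCulture);   // "001234"

        // A hypothetical 'S = X AS "format"' would have to pick defaults
        // (culture, padding, sign handling) for us - the same defaults that
        // later differ between machines and bite during porting.
        Console.WriteLine(s);
    }
}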
Explorans limites defectum
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. [snip]
rjmoses wrote:
What's your thoughts?
After pondering this post all day, I finally came up with a response. IDIC[^] you must learn and become one with. (Argh, a Star Trek and Star Wars reference in one sentence.)
Latest Article - Azure Function - Compute Pi Stress Test Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny Artificial intelligence is the only remedy for natural stupidity. - CDP1802
-
I don't. Some languages - like C# - are strongly typed for a reason: to catch errors early. Firstly, catching type conversions at compile time means that the code does exactly what you wanted, or it doesn't compile. Secondly, it makes you explicitly convert things like user input to the type you want, and provides exceptions (or "failed" responses as appropriate) if the user input doesn't match up. The global implicit typing you seem to prefer leads to errors because the compiler has to "guess" what you wanted - and that means that bad data gets into the system undetected. And until you've had to unpick a 100,000-row DB to try and fix dates that were entered as dd-MM-yy and MM-dd-yy, you probably don't realise just how much of a PITA that is. And then there is the "pointer problem": a pointer to an ASCII char is a pointer to a byte, but a pointer to a Unicode character is a pointer to a word. You can - by casting - explicitly convert a byte pointer to a word pointer, but that doesn't change the underlying data, and it means that half the data accesses aren't going to work properly, because the byte pointer can be "half way up" a word value. And strong typing (in C#, if not in C or C++) also eliminates your "=" vs "==" problem in most cases, because the result is not a bool, so it can't be used in a conditional. You can set the C or C++ compiler to give warnings or errors when you type it wrong, but it is valid code because, being old languages, they don't have a native boolean type: any nonzero value is "true". You want to get away from the complexity and want consistent declarations? Try C# ... (but hurry, it's getting complicated as well now).
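To illustrate those two points, a small sketch assuming C# (the date string and names are invented for the example): the assignment-in-a-conditional slip refuses to compile, and converting user input explicitly forces the format question out into the open.

using System;
using System.Globalization;

class StrongTypingDemo
{
    static void Main()
    {
        int count = 0;

        // if (count = 5) { }   // compile error in C#: an int is not a bool,
        //                      // so the classic "=" vs "==" slip is caught here.
        if (count == 5) { Console.WriteLine("five"); }

        // User input has to be converted explicitly, and the format is stated,
        // so "01-02-2019" cannot silently flip between 1 Feb and 2 Jan.
        string input = "01-02-2019";
        if (DateTime.TryParseExact(input, "dd-MM-yyyy",
                CultureInfo.InvariantCulture, DateTimeStyles.None, out DateTime parsed))
        {
            Console.WriteLine(parsed.ToString("yyyy-MM-dd"));
        }
        else
        {
            Console.WriteLine("Not a valid dd-MM-yyyy date");
        }
    }
}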
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
I also have had to fix problems such as the MM-DD-YY problem you described. Strong typing often causes as many problems as weak typing, because people want to find a solution for their problem. I'm thinking along the lines of "definable" strong type conversions. For arithmetic conversions such as integer to character, I would simply like to say "string S = i", having previously declared "i" as an integer. Then, add optional meta-data such as format. Regarding pointers: C++ pointers have made debugging difficult, and having to cast causes even more confusion. I'm thinking the majority of pointer and casting problems are caused by less experienced or lazy developers trying to find a quick, workable (in most cases) solution to their problem. I'm just wondering if there isn't perhaps a better solution.
-
I also have had to fix problems such as the MM-DD-YY problem you described. Strong typing often causes as many problems as weak typing, because people want to find a solution for their problem. I'm thinking along the lines of "definable" strong type conversions. For arithmetic conversions such as integer to character, I would simply like to say "string S = i", having previously declared "i" as an integer. Then, add optional meta-data such as format. Regarding pointers: C++ pointers have made debugging difficult, and having to cast causes even more confusion. I'm thinking the majority of pointer and casting problems are caused by less experienced or lazy developers trying to find a quick, workable (in most cases) solution to their problem. I'm just wondering if there isn't perhaps a better solution.
I disagree - they are a different type of problem. Strong typing reduces problems because they can be located at compile time, while your weak typing problem exists because the developer hasn't thought about the code sufficiently. If that forces him to look at what he's doing instead of assuming that the compiler will do the right thing, then that improves the code and reduces the chances of bad data. You make typing mistakes, I make them - we all do. The more the compiler can pick up, instead of assuming that it's correct and blindly converting the wrong data, the better, no? With your implicit conversion, this compiles without complaint:
string s = id;
which is bad if you actually meant to do this:
string s = IansName;
Why is it such a problem for you to spell out what you want the compiler to do? Adding formatting defaults and suchlike makes it more confusing if the developer doesn't realise what format is being used ... which is why you get dd/MM/yy and MM/dd/yy confusion because different users use different defaults!
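To make the "different defaults" point concrete, a short sketch assuming C# (the culture names are only examples): the same date, formatted with each user's default, comes out in two different orders, while an explicit format does not.

using System;
using System.Globalization;

class DefaultFormatConfusion
{
    static void Main()
    {
        var date = new DateTime(2019, 2, 1);   // 1 February 2019

        // Relying on each user's default culture: same value, different text.
        Console.WriteLine(date.ToString("d", new CultureInfo("en-GB"))); // 01/02/2019
        Console.WriteLine(date.ToString("d", new CultureInfo("en-US"))); // 2/1/2019

        // Spelling out the format removes the guesswork.
        Console.WriteLine(date.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture)); // 2019-02-01
    }
}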
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions,....get the idea? I have specialized over the years in doing weird stuff.
Good! I got a PhD by asking questions about things that everyone else assumed, which seemed weird at the time, but I was right to question the assumptions: it turns out they were wrong. The weird problems are the most interesting.
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
Weird problems are the most fun, most satisfying, but usually take more time than the budget allows. Result: People take shortcuts because somebody is crawling all over them to "get the job done". I, like many people, am inherently lazy. But I work very hard at being lazy efficiently. I do not like solving the same problems over and over. I do like solving a problem--once! The second and subsequent times are a waste of my time. I will never, ever, buy a self-driving car with the current state of software development.
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. [snip]
A colleague was just complaining about a new language + library that he was using for big data. The complaint: too many "magical" conversions were taking place under the covers, often processing the data in non-optimal ways, with no easy way to force better processing because of poor library design. As for dates and formats: dates/DateTimes in code should be some sort of numeric type (possibly wrapped) and always in UTC. As someone else mentioned, display of dates (including time zones) should be a user preference, or a replaceable component that mimics being a user preference.
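A minimal sketch of that separation, assuming C# (the time zone id and variable names are illustrative): the stored value is always UTC, and the conversion happens only at the display edge, driven by a per-user preference.

using System;

class UtcStorageDemo
{
    static void Main()
    {
        // Stored value: always UTC, numeric under the covers (ticks).
        DateTime storedUtc = DateTime.UtcNow;

        // Display preference: looked up per user, applied only when rendering.
        // Windows-style id; on Linux it would be e.g. "America/Chicago".
        TimeZoneInfo userZone = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");
        DateTime local = TimeZoneInfo.ConvertTimeFromUtc(storedUtc, userZone);

        Console.WriteLine($"stored:   {storedUtc:o}");
        Console.WriteLine($"shown as: {local:yyyy-MM-dd HH:mm} ({userZone.DisplayName})");
    }
}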
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. [snip]
I faced a similar problem just recently. The solution was to let the compiler handle the endianness. Integers, chars (multibyte/Unicode in that context), all work the same way in the same language. I don't cast/convert between integers and chars, because the one is numbers and the other is characters, letters. If a conversion is needed, the compiler offers conversion functions that work the same independent of the platform. Same for data declarations: in C(++), a uint16_t is unambiguous. Across languages, however, I don't see how that's supposed to work. What's even the point of piping a C source file into a Delphi compiler or vice versa? This technique naturally results in my tools behaving the same on every platform the compiler supports. Your post reminds me of a nightmare from a coworker that I fixed a while ago. He was parsing a protocol, with the byte stream containing, amongst other things, integers. The protocol is LE, the system the parser runs on is LE (Windows x86-64), so all things work splendidly. For 1-, 2- and 4-byte integers, that is. Just cast it! But to parse 3-byte integers, he built a monster of bit shifts and whatnot. I've replaced his horrible mess with somewhat simple code that initializes the result to 0 and then adds byte by byte, multiplying accordingly. Voila, problem solved; the function is simple and works for an arbitrary number of bytes.
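Something along the lines of that fix, sketched in C# rather than whatever the original was written in (the actual code isn't shown here): initialize to zero and accumulate byte by byte with a running multiplier, so the same routine reads 1-, 2-, 3- or any-byte little-endian integers.

using System;

static class LittleEndian
{
    // Reads 'count' bytes (1..8) starting at 'offset' as an unsigned
    // little-endian integer: result = b0 + b1*256 + b2*65536 + ...
    public static ulong Read(byte[] buffer, int offset, int count)
    {
        ulong value = 0;
        ulong multiplier = 1;
        for (int i = 0; i < count; i++)
        {
            value += buffer[offset + i] * multiplier;
            multiplier *= 256;
        }
        return value;
    }

    static void Main()
    {
        byte[] stream = { 0x01, 0x02, 0x03 };   // a 3-byte LE integer
        Console.WriteLine(Read(stream, 0, 3));  // 197121 = 0x030201
    }
}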
-
What I want is for my local 5-Guys burger joint to stop burning the bacon on my double-bacon cheeseburgers.
-
I faced a similar problem just recently. The solution was to let the compiler handle the endianness. Integers, chars (multibyte/Unicode in that context), all work the same way in the same language. I don't cast/convert between integers and chars, because the one is numbers and the other is characters, letters. If a conversion is needed, the compiler offers conversion functions that work the same independent of the platform. Same for data declarations: in C(++), a uint16_t is unambiguous. Across languages, however, I don't see how that's supposed to work. What's even the point of piping a C source file into a Delphi compiler or vice versa? This technique naturally results in my tools behaving the same on every platform the compiler supports. Your post reminds me of a nightmare from a coworker that I fixed a while ago. He was parsing a protocol, with the byte stream containing, amongst other things, integers. The protocol is LE, the system the parser runs on is LE (Windows x86-64), so all things work splendidly. For 1-, 2- and 4-byte integers, that is. Just cast it! But to parse 3-byte integers, he built a monster of bit shifts and whatnot. I've replaced his horrible mess with somewhat simple code that initializes the result to 0 and then adds byte by byte, multiplying accordingly. Voila, problem solved; the function is simple and works for an arbitrary number of bytes.
Your 3-byte integer problem is exactly the kind of work-around that causes problems. I like your solution to the LE problem -- simple and directly understandable. But what if we didn't have to think about it? What if the variable could be defined as a "3-byte little-endian integer" and the conversion to/from the machine's requirements took place automatically on all future references? How could something like that be implemented at the language, compiler or machine level? I cannot tell you how many times I've seen "char" definitions used inappropriately when a "byte" definition would make more sense. Or things like: "memcpy( (char *) &structa, (char *) &structb, 100 );" (And if someone says they have never done something like that, I say "BULL!" They've done it one way or another in whatever language, except perhaps LISP or SNOBOL.) Just thinking.
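One hedged sketch of what such a declaration might look like, assuming C# and a made-up type (UInt24LE is hypothetical, not an existing framework type): the struct owns its little-endian layout and converts on every use, so calling code never sees the byte order.

using System;

// Hypothetical "3-byte little-endian integer": the bytes are always stored
// LE, and conversion happens implicitly wherever the value is used.
readonly struct UInt24LE
{
    private readonly byte _b0, _b1, _b2;   // least significant byte first

    public UInt24LE(int value)
    {
        _b0 = (byte)value;
        _b1 = (byte)(value >> 8);
        _b2 = (byte)(value >> 16);
    }

    public static implicit operator int(UInt24LE v) => v._b0 | (v._b1 << 8) | (v._b2 << 16);
    public static implicit operator UInt24LE(int value) => new UInt24LE(value);
}

class Demo
{
    static void Main()
    {
        UInt24LE length = 197121;   // stored as bytes 01 02 03
        int doubled = length * 2;   // converts back automatically
        Console.WriteLine(doubled); // 394242
    }
}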
-
Your 3-byte integer problem is exactly the kind of work-around that causes problems. I like your solution to the LE problem -- simple and directly understandable. But what if we didn't have to think about it? What if the variable could be defined as a "3-byte little-endian integer" and the conversion to/from the machine's requirements took place automatically on all future references? How could something like that be implemented at the language, compiler or machine level? I cannot tell you how many times I've seen "char" definitions used inappropriately when a "byte" definition would make more sense. Or things like: "memcpy( (char *) &structa, (char *) &structb, 100 );" (And if someone says they have never done something like that, I say "BULL!" They've done it one way or another in whatever language, except perhaps LISP or SNOBOL.) Just thinking.
But we already have that! Let's forget the 3-byte integer for a moment; that's rather specific to that protocol and usually not needed. When I work with an int32_t (a rather common type), I don't have to care which endianness the underlying system uses. The compiler takes care of everything! Same goes for, let's say, uintptr_t. I don't have to care about endianness, bitness, none of that. The compiler does it for me. Well, I of course have to work with the compiler, but your world, the one where the programmer doesn't have to care, is already there. The other topic here is that people will always find a way to circumvent the compiler and shoot themselves in the foot. There are languages that make that easier or harder; C is the worst offender I've ever met (save for assembly, but that's in a league of its own). C doesn't even have a byte type! A char is a byte in C; you can't blame the programmer (except for possibly a poor choice of C as a tool). Switch the language. C# or Delphi, on the other hand - those compilers yell at you when you're doing questionable things. And if that thing in question may work just fine while still remaining questionable, you'll at least get a warning. Feel free to yell BULL, by the way. I've never done such a thing, because the question is not whether it'll blow up in my face but merely when. It will blow up sooner or later. Fun fact: I've now spent about two weeks fixing a binary communication layer. Some predecessor of mine thought that using strings, data structures designed for text, where binary information is processed would be a splendid idea. And then came Unicode. Trying to convert 99h to a Unicode string member yields 3Fh and some other byte I've forgotten. I bet it was a C programmer who grew up in the 60s, riding the "learned it once, never relearn" mentality. I converted all of this nonsense to TArray (Delphi nomenclature) and stuff works now. I still wonder whether your topic is about programming in general or C in particular, as you seem insistent on issues that are long solved by several programming languages (granted, both Delphi and C# still allow you to shoot yourself in the foot, but you gotta fight hard against the compiler to do that).
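The 99h-to-3Fh corruption is easy to reproduce. A small sketch assuming C# (the original layer was Delphi, so this only mirrors the idea): pushing raw bytes through a text encoding silently replaces anything the encoding can't represent, while a plain byte array keeps the data intact.

using System;
using System.Text;

class BinaryInTextDemo
{
    static void Main()
    {
        byte[] payload = { 0x01, 0x99, 0xFF };   // binary data, not text

        // Wrong tool: squeeze the bytes through a text encoding and back.
        string asText = Encoding.ASCII.GetString(payload);
        byte[] roundTripped = Encoding.ASCII.GetBytes(asText);
        Console.WriteLine(BitConverter.ToString(roundTripped));   // 01-3F-3F : 0x99 and 0xFF are gone

        // Right tool: keep binary data in a byte[] (a TArray of bytes in Delphi terms).
        byte[] copy = (byte[])payload.Clone();
        Console.WriteLine(BitConverter.ToString(copy));           // 01-99-FF
    }
}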
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. [snip]
Sounds just like VB6 to me. I still can't figure out why the "elitists" hate it so much. Other than the name, of course. ;P If you don't like "GoTo" don't use it.:cool:
Slow Eddie
-
Just one word: Mickeysoft :-) .Net can never be so good that they get me to marry them and then let them move in and do whatever they like.
I have lived with several Zen masters - all of them were cats. His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
CodeWraith wrote:
just one word: Mickeysoft :)
Subjective religious reasons, then. Any objective reasons?
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. [snip]
You and every other quality technician... Seriously though, the problem is that as a profession we have no real enforced standards. Microsoft attempted to be a standard bearer in the 1990s and early 2000s but everyone complained and now we have the morass that technicians are just starting to rightly complain about. The other issue is that we have too many technicians in our ranks that are too eager to promote their own vision of things at the drop of a hat since using a tool that may be one or more years old may not be cool. First the major organizations destroyed the vital functions of IT development and then the technical community got on board and started doing the job for them. Now you have aberrations like MVC replacing Web Forms when it was already available and no one on the Microsoft side of things was interested until that company promoted its own version of MVC (most likely a direct copy of the original). Now you have JavaScript as a major language though it is a major headache to code with. Now you have Agile and DevOps, which in reality are diametrically opposed to quality software engineering standards. And now you have new tool-sets being introduced on a daily basis by anyone who thinks they know what they are doing. In short, the entire profession is a complete mess. And it ain't going to get better in the current economic environments of barbaric capitalism...
Steve Naidamast Sr. Software Engineer Black Falcon Software, Inc. blackfalconsoftware@outlook.com
-
.NET is useful - it provides a huge library of classes that work in a consistent way, unlike the libraries you had to play with in C / C++, and reduces the memory leak problems endemic to pointer-based code written by people who think they know what they are doing ... :laugh: It's a tool - and one that works across multiple platforms with a high degree of "similarity". Try that with a native C compiler, and it becomes a struggle to get anything to work in a short timeframe. Don't get me wrong, I miss my assembler days (or decades more accurately) - but .NET is here to stay and I'm happy using it.
Sent from my Amstrad PC 1640 Never throw anything away, Griff Bad command or file name. Bad, bad command! Sit! Stay! Staaaay... AntiTwitter: @DalekDave is now a follower!
"but .NET is here to stay and I'm happy using it." I can happily say I hardly ever used .NET. As an embedded developer (now retired), .NET and C# were never options for the projects I worked on. Heck, I think only one project in my career had more than a meg of RAM. For my projects it's been C/C++ for at least the last 15 years of my career.
-
rjmoses wrote:
I do a lot of development, migrating, porting, etc., in different environments, different systems, different languages, different versions... get the idea? I have specialized over the years in doing weird stuff. [snip]
I feel your pain. Here is the dilemma: ease of use for the programmer. As a programmer, I don't want to declare int4, int8, int16. The concept was that "int" was the machine's DEFAULT bitness, which allowed code to move across platforms. In memory this is great; reading/writing to disk is what created the endian problem. Then, when you are not looking, you get coercion of types, etc. Add in signed/unsigned, and pretty soon you realize you need a completely object-based system. To fix what? Write once, test everywhere? I feel your pain, but I see no solution outside of custom-managing the types that go into and out of some kind of storage!
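For the storage side of that dilemma, one way to pin things down, sketched in C# and assuming System.Buffers.Binary is available (recent .NET): the on-disk format names its width and endianness explicitly, whatever the machine underneath happens to be.

using System;
using System.Buffers.Binary;

class FixedStorageDemo
{
    static void Main()
    {
        int value = 0x01020304;
        Span<byte> record = stackalloc byte[4];

        // On-disk/on-wire format: always 4 bytes, always little-endian,
        // regardless of the endianness of the machine writing it.
        BinaryPrimitives.WriteInt32LittleEndian(record, value);

        // Reading it back makes the same promise explicit.
        int readBack = BinaryPrimitives.ReadInt32LittleEndian(record);
        Console.WriteLine($"{readBack:X8}");   // 01020304 on any platform
    }
}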
-
I feel your pain. Here is the dilemma: ease of use for the programmer. As a programmer, I don't want to declare int4, int8, int16. The concept was that "int" was the machine's DEFAULT bitness, which allowed code to move across platforms. In memory this is great; reading/writing to disk is what created the endian problem. Then, when you are not looking, you get coercion of types, etc. Add in signed/unsigned, and pretty soon you realize you need a completely object-based system. To fix what? Write once, test everywhere? I feel your pain, but I see no solution outside of custom-managing the types that go into and out of some kind of storage!
Write once, run everywhere. How many times have you heard a senior management type say "We need it to run on Windows laptops for Accounting but Marketing uses Macs and Engineering uses Linux. And by the way, can you make it accessible from my phone?" If I had a nickel for every time I've heard that, I would make Bill Gates and Warren Buffet combined look like paupers.
-
Write once, run everywhere. How many times have you heard a senior management type say "We need it to run on Windows laptops for Accounting but Marketing uses Macs and Engineering uses Linux. And by the way, can you make it accessible from my phone?" If I had a nickel for every time I've heard that, I would make Bill Gates and Warren Buffet combined look like paupers.
That's why the Docker concept was so exciting to me, to be honest. UCSD Pascal had a runtime. Under DOS it was dog slow, but the idea was basically a VM... It ran on Linux and DOS the same, and anywhere else they ported it to, if memory serves me. Imagine a world where (X Windows tried this) you run your application and the GUI attaches to it! Meaning you need only configure standard I/O parameters. Now, this was the beginning of EVERY COBOL program (remember: Environment Division, etc.). We have evolved really far, and we are getting places. The one upside of the web was a "standard" GUI available to program to, making things like Proton or Docker with a port workable across platforms. And I see that is where we seem to be going. But as Scotty in Star Trek said, "The fancier the plumbing, the bigger the problems" (or some such).