do any of you others have little coding mantras that save your behind?

The Lounge
Tags: csharp, css, visual-studio, question
kalberts wrote:

If you remember to put the constant on the left side, you would also remember to double the equals sign. And why the h*** do we have to double the equals sign? This thread makes me miss Pascal so much! In Norwegian, "yoda" sounds like "joda...", usually pronounced with a sigh, meaning "yes, but...": I hear what you say, but I am certainly not sure that you are right. So "yoda" may be an appropriate term :-) Pascal, and several other languages from that period, were designed by experts on formal languages, parsing etc. C is based on a collection of scraps left over from an early-days space-invasion game implementation. OK, those students were certainly clever, but they were not experienced language designers.

honey the codewitch replied (#50):

It's not just about remembering, it's about typos. A better argument is that compilers these days catch accidental assignment, but some of us have had certain practices drummed into us for years, and they stick. The double equals sign is necessary in the C family of languages because equality and assignment are distinct operations that each need their own operator. And you may find the C language family inelegant, but there's a reason it carried the day and Pascal, well... didn't.
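To make the typo point concrete, here is a minimal C# sketch (the names are hypothetical). The classic slip is typing = for ==; for int operands C# already refuses it, and the yoda order turns even the bool case into a hard compile error:

    class YodaDemo
    {
        static void Main()
        {
            bool isReady = false;

            // Typo: '=' instead of '=='. With int operands this is a
            // compile error in C#, but with bool it compiles quietly
            // (and the condition is then always true):
            //     if (isReady = true) { ... }

            // Yoda order: the same typo becomes a compile error,
            // because a literal cannot be assigned to:
            if (true == isReady)
                System.Console.WriteLine("ready");
        }
    }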

Member 9167057 wrote:

Oddly enough, I can't remember any of the problems you're talking about from my own experience, and I am not even a programmer by trade: I studied physics, and programming was a side gig at first. To me, integer numbers are exact, and floats are approximations, since it is impossible to represent arbitrary numbers with discrete values. They may be good enough for daily use, but they can fail, and when they do, they fail. Maybe that's why I didn't have any problems: the concept of approximations is deeply nested in a physicist's mind. Well, that and I recently built a system which used integers for its measurement values (mostly because the sensor returns integers in units of 0.01 °C). So your vocabulary would have spectacularly failed me :D

kalberts replied (#51):

Students insist that when you measure out 3 kg of flour for your bread, that is a count of the number of kilograms. Their body height is a count of centimeters. It goes the other way, too: they may use a float for the number of students in the class, arguing that when they increase the number by 1.0 for each newcomer, the float still represents the count of students. And, the more advanced ones argue, with a float you can count any number of units; a plain int can't even count the number of living humans!

Sure, most of these problems come with students who have been playing with computers in their bedroom since they were ten, all self-taught, having picked up one little bit here and one there, with no trace of discipline whatsoever. But frequently, these become class heroes: other students learn "smart tricks" from them, and "how real programmers do it, not the way that silly lecturer tells us to". So they can have a large influence on otherwise "innocent" students.

This is mostly a problem with students. With professional programmers, the problem is with those who do not fully realize that e.g. a comparison does NOT return an integer (-1, 0, 1) but "less", "equal", "greater", and you should NOT compare it to numerical values. If you declare non-numeric, yet ordered, values as an enum, and create an array of, say, weather[january..december], you canNOT index this array with an integer: "Because may is the fifth month, I can use 5 as an index... no, wait, I have to use 4, because it is zero-based!"

One specific example: in my own C code, I used to define "ever" as ";;" so that an infinite loop is made explicit as "for (ever) {...}" (inspired by the CHILL language, where "for ever" is recognized by the compiler). I used this in one of the code modules I was responsible for at work. It was discovered by one of the young and extremely self-confident programmers, who was immensely provoked by it: he promptly replaced it with the "proper" way of writing an infinite loop, "while(1){..}". He then searched through our entire codebase for other places where I had committed similar sins, adding a very nasty remark in the SVN log for each and every occurrence, requesting that everybody in the future refrain from such inappropriate funniness - we should do our programming in a serious manner. Oh, well - I didn't care to argue. Why should I? Readable, easily comprehensible code is more essential when it will be read by people who are not into embedded systems code. Or rather, to a developer of embedded C code,
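The zero-based month trap above is easy to show in C#; a small sketch with hypothetical names, not code from this thread. Indexing through the enum keeps the off-by-one out of everyday code:

    enum Month { January, February, March, April, May, June,
                 July, August, September, October, November, December }

    class Weather
    {
        static readonly double[] AvgTempC = new double[12];

        static double For(Month m)
        {
            // May is the fifth month, but January == 0,
            // so (int)Month.May == 4. Let the cast do the bookkeeping.
            return AvgTempC[(int)m];
        }
    }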

TrinityRaven wrote:

Different use cases. I'm stuck debugging an app that is crashing because NULL is not a valid value. Part of what makes programming fun (?) is the various ways to solve a particular problem.

honey the codewitch replied (#52):

Ha! I can understand your sentiment. I recently developed a caching JSON entity framework for accessing TMDb's REST API, and all their JSON fields are marked optional, which means potential for nulls everywhere. It made error handling hell.
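For illustration, a minimal sketch of what all-optional JSON forces on the caller, using System.Text.Json; the Movie shape and field names here are made up, not TMDb's actual schema:

    using System;
    using System.Text.Json;

    // Every field optional in the API, so every property must be nullable.
    record Movie(string? Title, string? ReleaseDate, double? VoteAverage);

    class JsonDemo
    {
        static void Main()
        {
            Movie? movie = JsonSerializer.Deserialize<Movie>(
                "{\"Title\": \"Example\"}");

            // And every access then needs a guard or a fallback:
            string title  = movie?.Title ?? "(untitled)";
            double rating = movie?.VoteAverage ?? 0.0;
            Console.WriteLine($"{title}: {rating}");
        }
    }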

honey the codewitch wrote:

The problem with that is they may not be 1, 0 or -1. Any positive value and 1 are going to have to be treated the same, and the same goes for the negative values - they're all -1, basically. But other than that, yeah. Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated. So usually in my classes where I don't want to burn extra clocks, like my pull parsers, I use an int to keep state, and cast it to an enum before the user of my code touches it.


kalberts replied (#53):

honey the codewitch wrote:

Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated.

The very first compiler I dug into was the Pascal P4 compiler - those who think "open source" is something that came with Linux are completely wrong. Pascal provides enums as first-class types, not something derived from integer. The compiler source showed very clearly how the compiler treats enums just like integers; it simply doesn't mix the two types up, it doesn't allow you to use them interchangeably. It is like having intTypeA and intTypeB which are 100% incompatible. If you do a cast to (or from) int, it is a pure compile-time thing: it short-circuits the error handling that would otherwise report the types as incompatible. There is nothing that causes more instructions to be executed when you use enums rather than int - not even when you cast. Why would there be? Why should .NET make them slower?

If you have a full enum implementation (like that of Pascal) and make more use of it, then there may of course be a few more instructions generated. E.g. if you have a 12-value enum from january to december, and define an array with indexes from april to august, then the runtime code must skew the actual index values so that an "april" index is mapped to the base address of the array, not three elements higher. Index values must be checked against the array declaration: january to march and september to december must generate an exception. But that is extended functionality - if you wanted that with integer indexes, and the same checking, you would generate a lot more code writing the index skewing and testing as explicit C statements.

Maybe the current C# .NET compiler is not doing things "properly" - similar to that Pascal compiler written in the early 1970s. I guess it could. I see no reason why it shouldn't be able to; there is nothing in the semantics of C#'s "semi-enums" making it more difficult than Pascal's full enum implementation.
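For comparison, C# today sits in between: the enum is a distinct compile-time type over an underlying integer, the cast itself costs nothing at runtime, and notably no range check happens on the way in. A small sketch:

    enum Month { January, February, March, April, May, June,
                 July, August, September, October, November, December }

    class EnumDemo
    {
        static void Main()
        {
            Month m = Month.May;
            // int i = m;    // compile error: no implicit enum/int mixing
            int i = (int)m;  // explicit cast required; free at runtime

            Month bogus = (Month)99;  // compiles and does NOT throw:
                                      // the cast performs no range check
            System.Console.WriteLine($"{i} {bogus}");  // prints "4 99"
        }
    }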

TrinityRaven wrote:

Don't return null. Throw an exception instead. It removes the need to null-check everything, and hopefully gives a more meaningful error when a problem occurs.

kalberts replied (#54):

TrinityRaven wrote:

Don't return null. Throw an exception instead.

Sure, if it really is an exception. But I don't want to handle, say, a person who has no middle name as an exception just because his middle name is null. Or a person without a spouse, or without children. I can guess your reply: the middle name should be a zero-length string, not null! In some cases, a zero-size value may be conceptually correct. Far from always. There is a semantic difference between something being there, regardless of size, and something not being there. You easily end up testing for nonzero size rather than null, which may in some contexts be both confusing and give more complex code. And it might require more data space.

I guess that you still accept null checks in loops and list traversals, as long as no function calls are involved: "while (nextobject != null) {process it and determine the next object}" is perfectly fine ... until "determine the next object" becomes so complex that you factor it out as a function. By your rule, the while condition must then be dropped; you will treat the end of the list as something exceptional that requires exception handling. But it isn't "exceptional" to reach the end of a list in a list traversal! And if you do not process all elements but factor out the code that decides which elements to skip, that doesn't make the end of the list any more exceptional.

I started learning programming when access to computer resources was scarce. Maybe that was one reason why many of the first hand-ins were to be made in pseudocode: somewhat formalized English, but remote from coding syntax. Actually, if we got even close to a programming language syntax, the professor used his red pen: Why do you restrict it this way? Is there, or isn't there, a semantic difference between this kind of value and that kind? Is it appropriate to add #apples to #oranges here - you tell me that there isn't?

I like pseudocode. It relieves you from language syntax and lets you describe the problem solution at a logical level. If I had it my way, every software design would include documentation of the solution logic in a form of pseudocode completely removed from any programming language. It should be equally valid if it was decided to re-implement the C++ system in Fortran, or Visual Basic or Erlang or APL. Even if the system is never reimplemented in another language, I think that kind of documentation would be valuable.
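A small C# sketch of the distinction being argued here, with hypothetical types: null models "not there", the empty string models "there, but empty", and a Try-pattern is one conventional middle ground that needs neither exceptions nor surprise nulls:

    class Person
    {
        public string FirstName = "Ada";
        public string? MiddleName = null;  // absent - not merely empty ("")
        public Person? Spouse = null;      // also legitimately absent

        // Try-pattern: no exception for the perfectly normal "no spouse",
        // and the boolean result makes the check hard to forget.
        public bool TryGetSpouse(out Person? spouse)
        {
            spouse = Spouse;
            return spouse != null;
        }
    }

Note that string.IsNullOrEmpty(p.MiddleName) deliberately conflates the two states; comparing against null asks only about absence.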

Ira Greenstein wrote:

My Mantra: "I'm too old for this ***t"

kalberts replied (#55):

I'd be curious to see an expansion of "this ***t". It might very well overlap a great deal with my list. I know the feelings you are expressing very well.

kalberts wrote (#53, quoted in full above):

The very first compiler I dug into was the Pascal P4 compiler ... there is nothing in the semantics of C#'s "semi-enums" making it more difficult than Pascal's full enum implementation.

honey the codewitch replied (#56):

It depends on what you do with them, but casting them back and forth to int requires a CLI check, I think - maybe for invalid values. Ints don't require that.

kalberts wrote (#54, quoted in full above):

Sure, if it really is an exception. But I don't want to handle, say, a person who has no middle name as an exception just because his middle name is null. ...

TrinityRaven replied (#57):

I didn't say don't use null. I said don't return null. NULL can be useful in a data structure, and to use your example, in a Person or Name class having null for the middle name could be (I won't say "is") better than "NMN" (No Middle Name) or similar. The question is what helps save [my] behind. There are times when "yoda conditionals" make sense. There are use cases where they don't. I didn't specifically chime in on that discussion because I can see both sides, and I use them (or not) depending on readability and what is being tested for. Returning null, in my not so humble opinion, is a code smell. Using null in a data structure is not. But ultimately, it depends on the team's (or single developer's) style and agreements. And do you accept the related overhead - null checks (or the Elvis operator), or try ... catch?

honey the codewitch wrote:

One of mine, when dealing with IComparable in .NET, is "greater than is less than". What it means: converting

    if (10 > 5)

to IComparable, it reads

    if (0 < (10).CompareTo(5))

Note '>' vs '<'.
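A short sketch of the mantra in action (Max3 is a hypothetical helper). The only contract CompareTo gives is the sign of the result - negative, zero, or positive, not necessarily -1/0/1 - so test the sign and nothing else:

    using System;

    static class ComparableDemo
    {
        // "a > b" becomes "a.CompareTo(b) > 0", i.e. "0 < a.CompareTo(b)".
        static T Max3<T>(T a, T b, T c) where T : IComparable<T>
        {
            T m = 0 < a.CompareTo(b) ? a : b;   // m = (a > b) ? a : b
            return 0 < m.CompareTo(c) ? m : c;
        }

        static void Main() => Console.WriteLine(Max3(10, 5, 7));  // 10
    }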


Roger House replied (#58):

Don't Be in a Hurry.

Roger House wrote:

Don't Be in a Hurry.

honey the codewitch replied (#59):

I hear you. Usually it's my code that I want to be in a hurry. =) Go! Compute that LALR(1) table! Factor that grammar!

kalberts wrote (#55, quoted above):

I'd be curious to see an expansion of "this ***t". ...

Ira Greenstein replied (#60):

This catchy phrase was uttered by Roger Murtaugh (Danny Glover) in the original Lethal Weapon movie, and then carried through the rest of that franchise.

Rick York wrote:

It's a metaphorical twice, as in more than once. Although I have found that sometimes just taking a shot in the dark can be useful, if you can learn from failure.

Tony ADV replied (#61):

When thinking fails, code and fail to move forward.

honey the codewitch wrote (quoted above):

One of mine, when dealing with IComparable in .NET, is "greater than is less than". ...

firegryphon replied (#63):

Mostly I just follow the Babylon 5 mantra. I also often catalog the stupidity I'm about to do prior to doing it.

honey the codewitch wrote (#56, quoted above):

It depends on what you do with them, but casting them back and forth to int requires a CLI check ...

kalberts replied (#64):

Casting is not something you do very often - unless you continue to think of enums as just names for ints, so you continue to mix the two types up. So it shouldn't be essential (or even noticeable) to system performance. In many cases, the compiler can suppress the runtime check, e.g. for int literals, or when a simple flow analysis reveals that an int variable couldn't possibly be outside the enum range (or most certainly would be outside, in which case the compiler should barf). For enum-to-int casts, there should be very little need for runtime checks: very few systems define more than 32K values for one enum type, and very little code nowadays uses ints of less than 16 bits. Especially: in contexts where 8-bit ints are relevant, you very rarely see huge enum definitions with more than 128 alternatives (or 256 for uint8).

If you declare enums by forcing the internal representation to be given by the bit pattern of some int value, then you show that you do not recognize enums as a distinct type. Forcing the internal representation is as bad for enums as it would be to force a pointer or float to a specific bit pattern given by the representation of a specific integer literal. You shouldn't do that. Even assuming that enum values form a dense sequence from 0000 and upwards is on the edge - they are not ints, and you cannot assume any given similarity between int and enum implementations. Really, int/enum casts are as meaningless as int/pointer casts; we have the casts only because lots of C programmers can't stop thinking of enums as "just slightly different ints".

Even for ints, the compiler should generate code that verifies that e.g. an int32 cast to an int16 is within the int16 range. Maybe the instruction set provides some hardware support, creating an interrupt if not. Support may be available even for enum use: the last machine I programmed in assembly had a four-operand instruction "LoadIndex register, value, min, max": if "value" was not in the range from min to max, an "illegal index" interrupt was generated. The Pascal compiler used this instruction for int-to-enum casts, specifying the first and last permitted enum values. (In Pascal, it is not given that "min" is zero; an array may e.g. be indexed from may to september.) I haven't spent time learning the .NET "instruction set", and don't know if it has something similar. But since it does index checks, I'd expect it to.
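For the record, in today's C# the int-to-enum cast itself performs no such range check, so validation is opt-in. A minimal sketch using the real Enum.IsDefined API; ToMonth is a hypothetical helper, the explicit equivalent of the check a Pascal compiler would have emitted automatically:

    using System;

    enum Month { January, February, March, April, May, June,
                 July, August, September, October, November, December }

    static class EnumGuard
    {
        static Month ToMonth(int raw)
        {
            // Opt-in range check; the cast alone would accept any int.
            if (!Enum.IsDefined(typeof(Month), raw))
                throw new ArgumentOutOfRangeException(nameof(raw));
            return (Month)raw;
        }

        static void Main()
        {
            Console.WriteLine(ToMonth(4));   // May
            Console.WriteLine(ToMonth(42));  // throws ArgumentOutOfRangeException
        }
    }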

Roger House wrote:

Don't Be in a Hurry.

kalberts replied (#65):

Hmmmm... "Hurry", maybe that's a name I should consider for my new car. "I'm in Hurry, don't hinder me". (My present one is a red Ford, so I call it Robert.)

TrinityRaven wrote (#57, quoted in full above):

I didn't say don't use null. I said don't return null. ...

kalberts replied (#66):

TrinityRaven wrote:

I didn't say don't use null. I said don't return null.

Yes, that's exactly what I pointed out in my loop example: you accept a loop that runs until the next element is null, unless determining the next element is so complex that it has been pulled out as a function. If you do pull it out as a function and follow your rule, then the function cannot return the next element the way the simpler inline code (with no function definition) did. The function would have to raise an exception when reaching the end of the list, the call to the function would have to be wrapped in a try-catch, and the exception handler would treat the exception as "OK, so then we set the next element to null, so that the while check will terminate the loop" - rather than simply accepting a null next element from the function. I find that an outright silly way of coding - and I don't think you seriously suggest it. "Don't return null" wasn't meant that absolutely; there are cases where communicating a null value to a calling function as something perfectly normal is ... perfectly normal. I say: that happens quite often. You say: OK, in some very special circumstances, like the one with "next object", you could accept it, as an exceptional case. The question is where to draw the line. But the line is there.

I have seen code that tries to hide nulls by returning pseudo-objects: if you ask for, say, a person's spouse, you never receive "null" or "none" or "void", but a person object that has a special identifier member like "no person". Testing for the returned person object being a person with a "no person" identifier is not more convenient by any criteria. You might forget to do that check, too, and reference attributes of this person object that it doesn't have, because it is a "no person".

Finally: you make an absolute assumption that the called routine always remembers to set the return value to something non-null. I have had cases where the null check on the return value revealed errors in the called function, in a "graceful" way. If my programming style had been "you don't have to check for null returns, because functions do not return null", the error would have been caught much later. Nowadays, we are using static code analysis tools that do a very thorough check on pointer use. If there is any chance whatsoever that a pointer is null or unassigned when dereferenced, you receive a warning.
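Spelled out as a C# sketch (Node and the helper names are hypothetical): under a strict "never return null" rule, the factored-out step function must signal end-of-list with an exception, and the caller immediately converts it back into the null it wanted all along:

    using System;

    class Node { public int Value; public Node? Next; }

    static class Traversal
    {
        // Natural version: end of list is a normal value, not an event.
        static Node? NextOf(Node current) => current.Next;

        // "Never return null" version: end of list becomes an exception...
        static Node NextOrThrow(Node current) =>
            current.Next ?? throw new InvalidOperationException("end of list");

        static void Walk(Node? head)
        {
            for (Node? n = head; n != null; )
            {
                Console.WriteLine(n.Value);
                // ...which the caller promptly translates back into null:
                try { n = NextOrThrow(n); }
                catch (InvalidOperationException) { n = null; }
            }
        }
    }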

kalberts wrote (#64, quoted in full above):

Casting is not something you do very often - unless you continue to think of enums as just names for ints ...

honey the codewitch replied (#67):

Member 7989122 wrote:

In many cases, the compiler can suppress the runtime check, e.g. for int literals,

Plenty of developers overestimate the compilers that ship with .NET. They don't typically do optimizations like that. (Although in .NET you can explicitly turn overflow checks on and off on a per-cast basis in your code.) Experience has shown me time and again: when it comes to .NET, if there's any doubt about whether the compiler will optimize something, no matter how obvious, assume it won't. You're far more likely to be right than wrong that way. Spend enough time decompiling .NET assemblies and you learn the hard way to optimize your own code.
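The per-cast switch mentioned is presumably C#'s checked/unchecked operators; a small sketch of both directions, with arbitrary values:

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            int big = 100_000;

            // unchecked: silently keeps only the low 16 bits.
            short a = unchecked((short)big);
            Console.WriteLine(a);  // -31072

            // checked: the same narrowing cast now throws.
            try { short b = checked((short)big); }
            catch (OverflowException) { Console.WriteLine("overflow!"); }
        }
    }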

honey the codewitch wrote (#50, quoted in full above):

It's not just about remembering, it's about typos. ...

kalberts replied (#68):

The big mistake was to use the single equals sign for assignment. Many languages, from Algol to Pascal to Ada, use := for assignment. APL has a special assignment character. Lisp uses keywords. Classic Basic uses LET. The real problem is: why does C use the equals operator for assignment? Pointing to that is an explanation for why a double == is needed for equality, but not an excuse.

If you try to suggest that C squeezed out Pascal because C is "better", you suggest (with great force) that your main field of expertise is not in formal language design. VHS won the market because it was better, didn't it? And MP3 won over SACD/DVD-A because it was better? TCP/IP won over the OSI protocol stack because it was better? Well, that depends on the criteria. If your only criterion is "degree of market penetration", all of these were "best". But please don't pretend that this is the only imaginable criterion.

kalberts wrote (#68, quoted in full above):

The big mistake was to use the single equals sign for assignment. ...

honey the codewitch replied (#69):

"Better" is subjective. I'm saying more people found it usable, which speaks to its versatility. Perhaps it would have been better for the C family languages not to use equals as an assignment operator. But it's also not the first thing about the language I'd change, nor does it say much to me about formal language design. As someone who has written plenty of parsers and parser generators that accept formal grammars, I can tell you C's biggest sin is that type declarations need to be fed back into the lexer to resolve grammar constructs (the classic example: "A * B;" is either an expression multiplying A by B, or a declaration of B as pointer-to-A, depending on whether A currently names a type). This breaks the separation of lexer and parser. It's not quite as bad as Python's significant whitespace, but it's a pretty ugly thing to have to hack together in a parser. But then, I'm not Niklaus Wirth. I'm just someone who writes code. That being said, I don't holy-roll. I use what works. Pascal doesn't; there just aren't modern tools for it. It's not quite as dead as Latin, but it's catching up.
