Do any of you others have little coding mantras that save your behind?
-
The problem with that is they may not be 1, 0 or -1. Any positive value has to be treated the same as 1, and the same goes for the negative values - they're all -1, basically. But other than that, yeah. Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated. So usually in my classes where I don't want to burn extra clocks, like my pull parsers, I use an int to keep state and cast it to an enum before the user of my code touches it.
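For illustration, a minimal sketch of the normalization I mean (the helper name here is made up):

using System;

static class CompareHelpers
{
    // CompareTo/Compare only promise a negative, zero or positive result - not -1, 0, +1 -
    // so collapse it with Math.Sign before switching on it or comparing it to literals.
    public static int NormalizedCompare(string left, string right)
    {
        return Math.Sign(string.Compare(left, right, StringComparison.Ordinal));
    }
}

After that, case -1, 0 and 1 really do cover everything.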
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
honey the codewitch wrote:
Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated.
The very first compiler I dug into was the Pascal P4 compiler - those who think "open source" is something that came with Linux are completely wrong. Pascal provides enums as first-class types, not something derived from integer. The compiler source showed very clearly how the compiler treats enums just like integers; it just doesn't mix the two types up, it doesn't allow you to use them interchangeably. It is like having intTypeA and intTypeB which are 100% incompatible. If you cast to (or from) int, it is a pure compile-time thing: it bypasses the error reporting that the types are incompatible. There is nothing that causes more instructions to be executed when you use enums rather than int - not even when you cast. Why would there be? Why should .NET make them slower?
If you have a full enum implementation (like that of Pascal) and make more use of it, then of course a few more instructions may be generated. E.g. if you have a 12-value enum from january to december, and define an array with indexes from april to august, then the runtime code must skew the actual index values so that an "april" index is mapped to the base address of the array, not three elements higher. Index values must also be checked against the array declaration: january to march and september to december must generate an exception. But that is extended functionality - if you wanted the same skewing and checking with integer indexes, you would generate a lot more code writing it as explicit C statements.
Maybe the current C# .NET compiler is not doing things "properly", the way that Pascal compiler written in the early 1970s did. I guess it could. I see no reason why it shouldn't be able to; nothing in the semantics of C# "semi-enums" makes it more difficult than Pascal's full enum implementation.
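Spelled out by hand in C#, the skewing and checking for that april..august array would look roughly like this (the names are only illustrative; a full enum implementation emits the equivalent for you):

using System;

enum Month { January, February, March, April, May, June,
             July, August, September, October, November, December }

class SummerReadings
{
    // Conceptually "array [april..august] of double": five slots, April maps to element 0.
    private readonly double[] values = new double[Month.August - Month.April + 1];

    public double this[Month m]
    {
        get
        {
            // The range check: anything outside april..august is an error...
            if (m < Month.April || m > Month.August)
                throw new IndexOutOfRangeException($"{m} is outside April..August");
            // ...and the skew: subtract the lower bound so April lands at element 0.
            return values[m - Month.April];
        }
    }
}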
-
Don't return null. Throw an exception instead. Removes the need to null-check everything. Hopefully gives a more meaningful error when a problem occurs.
TrinityRaven wrote:
Don't return null. Throw an exception instead.
Sure, if it really is an exception. But I don't want to handle, say, a person who has no middle name as an exception just because his middle name is null. Or a person without a spouse, or without children. I can guess your reply: the middle name should be a zero-length string, not null! In some cases, a zero-size value may be conceptually correct. Far from always. There is a semantic difference between something being there, regardless of size, and something not being there. You easily end up testing for nonzero size rather than null, which may in some contexts be both confusing and give more complex code. And it might require more data space.
I guess that you still accept null checks in loops and list traversals, as long as no function calls are involved: "while (nextobject != null) {process it and determine the next object}" is perfectly fine ... until "determine the next object" becomes so complex that you factor it out as a function. By your rule, the while condition can be dropped; you will treat the end of the list as something exceptional that requires exception handling. But it isn't "exceptional" to reach the end of a list in list traversal! If you do not process all elements but factor out the code that decides which elements to skip, that doesn't make the end of the list any more exceptional.
I started learning programming when access to computer resources was scarce. Maybe that was one reason why many of the first hand-ins were to be made in pseudocode: somewhat formalized English, but remote from coding syntax. Actually, if we got even close to a programming language syntax, the professor used his red pen: Why do you restrict it this way? Is there, or isn't there, a semantic difference between this kind of value and that kind? Is it appropriate to add #apples to #oranges here - you tell me that there isn't?
I like pseudocode. It relieves you from language syntax and lets you describe the problem solution at a logical level. If I had it my way, every software design would include documentation of the solution logic in a form of pseudocode completely removed from any programming language. It should be equally valid if it was decided to re-implement the C++ system in Fortran, or Visual Basic or Erlang or APL. Even if the system is never reimplemented in another language, I think that kind of documentation would be valuable.
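To put the loop example in concrete terms - a minimal C# sketch, with the type names invented only for illustration - factoring out "determine the next object" should not turn null from a normal end-of-list marker into an exception:

using System;

class Node
{
    public string Value = "";
    public Node Next;
}

static class Traversal
{
    // "Determine the next object", factored out as a function.
    // Returning null simply signals the end of the list - nothing exceptional about it.
    static Node NextNonEmpty(Node node)
    {
        while (node != null && node.Value.Length == 0)
            node = node.Next;
        return node;
    }

    public static void Process(Node head)
    {
        for (Node n = NextNonEmpty(head); n != null; n = NextNonEmpty(n.Next))
            Console.WriteLine(n.Value);
    }
}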
-
My Mantra: "I'm too old for this ***t"
-
honey the codewitch wrote:
Although I hate enums, because .NET made them slow. I still use them, but they make me frustrated.
The very first compiler I dug into was the Pascal P4 compiler - those who think "open source" is something that came with Linux are completely wrong. Pascal provides enums as first-class types, not something derived from integer. The compiler source showed very clearly how the compiler treats enums just like integers; it just doesn't mix the two types up, it doesn't allow you to use them interchangeably. It is like having intTypeA and intTypeB which are 100% incompatible. If you cast to (or from) int, it is a pure compile-time thing: it bypasses the error reporting that the types are incompatible. There is nothing that causes more instructions to be executed when you use enums rather than int - not even when you cast. Why would there be? Why should .NET make them slower?
If you have a full enum implementation (like that of Pascal) and make more use of it, then of course a few more instructions may be generated. E.g. if you have a 12-value enum from january to december, and define an array with indexes from april to august, then the runtime code must skew the actual index values so that an "april" index is mapped to the base address of the array, not three elements higher. Index values must also be checked against the array declaration: january to march and september to december must generate an exception. But that is extended functionality - if you wanted the same skewing and checking with integer indexes, you would generate a lot more code writing it as explicit C statements.
Maybe the current C# .NET compiler is not doing things "properly", the way that Pascal compiler written in the early 1970s did. I guess it could. I see no reason why it shouldn't be able to; nothing in the semantics of C# "semi-enums" makes it more difficult than Pascal's full enum implementation.
It depends on what you do with them, but casting them back and forth to int requires a CLI check, I think maybe for invalid values. Ints don't require that.
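A minimal sketch of the pattern I mentioned above - keep the state as a plain int internally and only surface it as an enum at the public boundary (the names are made up for the example); whatever the cast costs, it is paid once at the boundary instead of in the inner loop:

enum ParseState { Initial, InText, InTag, Error }

class PullParser
{
    // The hot path works with plain ints.
    const int StateInitial = 0, StateInText = 1, StateInTag = 2, StateError = 3;
    int _state = StateInitial;

    // Only when the caller asks do we cast - once, at the boundary.
    public ParseState State => (ParseState)_state;

    public void Advance()
    {
        // ...inner loop mutates _state as an int...
        if (_state == StateInitial) _state = StateInText;
    }
}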
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
TrinityRaven wrote:
Don't return null. Throw an exception instead.
Sure, if it really is an exception. But I don't want to handle, say, a person who has no middle name as an exception just because his middle name is null. Or a person without a spouse, or without children. I can guess your reply: the middle name should be a zero-length string, not null! In some cases, a zero-size value may be conceptually correct. Far from always. There is a semantic difference between something being there, regardless of size, and something not being there. You easily end up testing for nonzero size rather than null, which may in some contexts be both confusing and give more complex code. And it might require more data space.
I guess that you still accept null checks in loops and list traversals, as long as no function calls are involved: "while (nextobject != null) {process it and determine the next object}" is perfectly fine ... until "determine the next object" becomes so complex that you factor it out as a function. By your rule, the while condition can be dropped; you will treat the end of the list as something exceptional that requires exception handling. But it isn't "exceptional" to reach the end of a list in list traversal! If you do not process all elements but factor out the code that decides which elements to skip, that doesn't make the end of the list any more exceptional.
I started learning programming when access to computer resources was scarce. Maybe that was one reason why many of the first hand-ins were to be made in pseudocode: somewhat formalized English, but remote from coding syntax. Actually, if we got even close to a programming language syntax, the professor used his red pen: Why do you restrict it this way? Is there, or isn't there, a semantic difference between this kind of value and that kind? Is it appropriate to add #apples to #oranges here - you tell me that there isn't?
I like pseudocode. It relieves you from language syntax and lets you describe the problem solution at a logical level. If I had it my way, every software design would include documentation of the solution logic in a form of pseudocode completely removed from any programming language. It should be equally valid if it was decided to re-implement the C++ system in Fortran, or Visual Basic or Erlang or APL. Even if the system is never reimplemented in another language, I think that kind of documentation would be valuable.
I didn't say don't use null. I said don't return null.
NULL can be useful in a data structure, and to use your example, in a Person or Name class having null for the middle name could be (I won't say "is") better than "NMN" (No Middle Name) or similar. The question is what helps save [my] behind. There are times when "yoda conditionals" make sense. There are use cases where they don't. I didn't specifically chime in on that discussion because I can see both sides and use them (or not) depending on readability and what is being tested for. Returning null, in my not so humble opinion, is a code smell. Using null in a data structure is not. But ultimately, it depends on the team's (or single developer's) style and agreements. And do you accept the related overhead - null checks (or the Elvis operator), or try ... catch?
-
One of mine is - when dealing with IComparable in .NET - "Greater than is less than." What it means is: converting if (10 > 5) to IComparable, it reads if (0 < 10.CompareTo(5)). Note '>' vs '<'.
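As a tiny generic helper, just to spell the mantra out:

using System;

static class Ordering
{
    // "Greater than is less than": a > b becomes 0 < a.CompareTo(b) -
    // the zero moves to the left-hand side and the operator flips direction.
    public static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return 0 < a.CompareTo(b) ? a : b;
    }
}

Ordering.Max(10, 5) returns 10, because 0 < 10.CompareTo(5).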
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Don't Be in a Hurry.
-
Don't Be in a Hurry.
I hear you. Usually it's my code that I want to be in a hurry. =) Go! Compute that LALR(1) table! Factor that grammar!
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
I'd be curious to see an expansion of "this ***t". It might very well have great overlaps with my list. I know very well the feelings that you are expressing.
This catchy phrase was uttered by Roger Murtaugh (Danny Glover) in the original Lethal Weapon movie and then carried to the rest of that franchise.
-
It's a metaphorical twice as in more than once. Although I have found sometimes just taking a shot in the dark can be useful if you can learn from failure.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
-
One of mine is - when dealing with IComparable in .NET - "Greater than is less than." What it means is: converting if (10 > 5) to IComparable, it reads if (0 < 10.CompareTo(5)). Note '>' vs '<'.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Mostly I just follow the Babylon 5 mantra. I also often catalog the stupidity I'm about to do prior to doing it.
-
It depends on what you do with them, but casting them back and forth to int requires a CLI check, I think maybe for invalid values. Ints don't require that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Casting is not something you do very often - unless you continue to think of enums as just names for ints, and so continue to mix the two types up. So it shouldn't be essential (or even noticeable) to system performance. In many cases, the compiler can suppress the runtime check, e.g. for int literals, or when a simple flow analysis reveals that an int variable couldn't possibly be outside the enum range (or most certainly would be outside, in which case the compiler should barf). For enum-to-int casts there should be very little need for runtime checks - very few systems define more than 32K values for one enum type, and very little code nowadays uses ints of less than 16 bits. Especially: in contexts where 8-bit ints are relevant, you very rarely see huge enum definitions with more than 128 alternatives (or 256 for uint8).
If you declare enums by forcing the internal representation to be given by the bit pattern of some int value, then you show that you do not recognize enums as a distinct type. Forcing the internal representation is as bad for enums as it would be to force a pointer or float to a specific bit pattern given by the representation of a specific integer literal. You shouldn't do that. Even assuming that enum values form a dense sequence from 0000 and upwards is on the edge - they are not ints, and you cannot assume any given similarity in int and enum implementation. Really, int/enum casts are as meaningless as int/pointer casts; we have casts only because lots of C programmers can't stop thinking of enums as "just slightly different ints".
Even for ints, the compiler should generate code that verifies that e.g. an int32 cast to an int16 is within the int16 range. Maybe the instruction set provides some hardware support, creating an interrupt if not. Support may be available even for enum use: the last machine I programmed in assembly had a four-operand instruction "LoadIndex register, value, min, max": if "value" was not in the range from min to max, an "illegal index" interrupt was generated. The Pascal compiler used this instruction for int-to-enum casts, specifying the first and last permitted enum value. (In Pascal, it is not given that "min" is zero; e.g. an array may be indexed from may to september.) I haven't spent time learning the .NET "instruction set", and don't know if it has something similar. But since it does index checks, I'd expect it to.
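If you want that kind of check in today's C#, you write it by hand; a minimal sketch, reusing the month example (Enum.IsDefined is the library alternative for enums with gaps):

using System;

enum Month { January, February, March, April, May, June,
             July, August, September, October, November, December }

static class EnumCast
{
    // A plain (Month)value cast accepts any int, so the range test is explicit here -
    // roughly what the LoadIndex instruction above did in hardware.
    public static Month ToMonth(int value)
    {
        if (value < (int)Month.January || value > (int)Month.December)
            throw new ArgumentOutOfRangeException(nameof(value));
        return (Month)value;
    }
}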
-
Don't Be in a Hurry.
-
I didn't say don't use null. I said don't return null.
NULL can be useful in a data structure, and to use your example, in a Person or Name class having null for the middle name could be (I won't say "is") better than "NMN" (No Middle Name) or similar. The question is what helps save [my] behind. There are times when "yoda conditionals" make sense. There are use cases where they don't. I didn't specifically chime in on that discussion because I can see both sides and use them (or not) depending on readability and what is being tested for. Returning null, in my not so humble opinion, is a code smell. Using null in a data structure is not. But ultimately, it depends on the team's (or single developer's) style and agreements. And do you accept the related overhead - null checks (or the Elvis operator), or try ... catch?
TrinityRaven wrote:
I didn't say don't use null. I said don't return null.
Yes, that's exactly what I pointed out in my loop example: you accept a loop that runs until the next element is null, unless determining the next element is so complex that it has been pulled out as a function. If you do pull it out as a function and follow your rule, then the function cannot return the next element the way the simpler inline code (with no function definition) did. The function would have to raise an exception when reaching the end of the list, the call to the function would have to be wrapped in a try-catch, and the exception handler would treat the exception as "ok, so then we set the next element to null, so that the while check will terminate the loop", rather than simply accepting a null next element from the function. I find that to be an outright silly way of coding - and I don't think that you seriously suggest it.
"Don't return null" wasn't meant that absolutely; there are cases where communicating a null value to a calling function as something perfectly normal is ... perfectly normal. I say: that happens quite often. You say: OK, in some very special circumstances, like the one with "next object", you could accept it, as an exceptional case. The question is where to draw the line. But the line is there.
I have seen code that tries to hide nulls by returning pseudo objects: if you ask for, say, a person's spouse, you never receive "null" or "none" or "void", but a person object that has a special identifier member like "no person". Testing whether the returned person object is a person with a "no person" identifier is not more convenient by any criterion. You might forget to do that check, too, and reference attributes of this person object that it doesn't have, because it is a "no person".
Finally: you make an absolute assumption that the called routine remembers to always set the return value to something non-null. I have had cases where the null check on the return value revealed errors in the called function, in a "graceful" way. If my programming style had been "you don't have to check for null returns, because functions do not return null", the error would have been caught much later. Nowadays, we are using static code analysis tools that do a very thorough check on pointer use. If there is any chance whatsoever that a pointer is null or unassigned when dereferenced, you receive a warning.
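Spelled out, the roundabout version I am describing would look roughly like this (a sketch of what I would not do; the types are invented for illustration):

using System;

class Node
{
    public string Value = "";
    public Node Next;
}

class EndOfListException : Exception { }

static class RoundaboutTraversal
{
    // "Determine the next element", now forbidden from returning null,
    // so the perfectly ordinary end of the list has to become an exception.
    static Node GetNext(Node node)
    {
        if (node.Next == null) throw new EndOfListException();
        return node.Next;
    }

    public static void Process(Node head)
    {
        Node current = head;
        while (current != null)
        {
            Console.WriteLine(current.Value);
            try { current = GetNext(current); }
            // "ok, so then we set the next element to null, so that the while check terminates the loop"
            catch (EndOfListException) { current = null; }
        }
    }
}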
-
Casting is not something you do very often - unless you continue to think of enums as just names for ints, and so continue to mix the two types up. So it shouldn't be essential (or even noticeable) to system performance. In many cases, the compiler can suppress the runtime check, e.g. for int literals, or when a simple flow analysis reveals that an int variable couldn't possibly be outside the enum range (or most certainly would be outside, in which case the compiler should barf). For enum-to-int casts there should be very little need for runtime checks - very few systems define more than 32K values for one enum type, and very little code nowadays uses ints of less than 16 bits. Especially: in contexts where 8-bit ints are relevant, you very rarely see huge enum definitions with more than 128 alternatives (or 256 for uint8).
If you declare enums by forcing the internal representation to be given by the bit pattern of some int value, then you show that you do not recognize enums as a distinct type. Forcing the internal representation is as bad for enums as it would be to force a pointer or float to a specific bit pattern given by the representation of a specific integer literal. You shouldn't do that. Even assuming that enum values form a dense sequence from 0000 and upwards is on the edge - they are not ints, and you cannot assume any given similarity in int and enum implementation. Really, int/enum casts are as meaningless as int/pointer casts; we have casts only because lots of C programmers can't stop thinking of enums as "just slightly different ints".
Even for ints, the compiler should generate code that verifies that e.g. an int32 cast to an int16 is within the int16 range. Maybe the instruction set provides some hardware support, creating an interrupt if not. Support may be available even for enum use: the last machine I programmed in assembly had a four-operand instruction "LoadIndex register, value, min, max": if "value" was not in the range from min to max, an "illegal index" interrupt was generated. The Pascal compiler used this instruction for int-to-enum casts, specifying the first and last permitted enum value. (In Pascal, it is not given that "min" is zero; e.g. an array may be indexed from may to september.) I haven't spent time learning the .NET "instruction set", and don't know if it has something similar. But since it does index checks, I'd expect it to.
Member 7989122 wrote:
In many cases, the compiler can suppress the runtime check, e.g. for int literals,
Plenty of developers overestimate the compilers that ship with .NET. They don't typically do optimizations like that. (Although in .NET you can explicitly turn overflow checks on and off on a per-cast basis in your code.) Experience has shown me time and again: when it comes to .NET, if there's any doubt about whether or not the compiler will optimize something, no matter how obvious, assume it won't. You're far more likely to be right than wrong that way. Spend enough time decompiling .NET asms and you learn the hard way to optimize your own code.
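For reference, the per-cast switch I'm referring to is C#'s checked/unchecked keywords; a quick sketch:

using System;

class OverflowDemo
{
    static void Main()
    {
        int big = int.MaxValue;

        // unchecked: the narrowing conversion silently truncates (prints -1).
        Console.WriteLine(unchecked((short)big));

        // checked: the same conversion throws OverflowException at runtime.
        try { Console.WriteLine(checked((short)big)); }
        catch (OverflowException) { Console.WriteLine("overflow caught"); }
    }
}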
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
It's not just about remembering. It's about typos. A better argument is that compilers these days catch accidental assignment, but some of us have just had certain practices drummed into us for years and they stick. The double equals sign is necessary in the C family of languages because there are different ways to do equality and assignment. And you may find the C language family inelegant, but there's a reason they carried the day and Pascal, well... didn't.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
The big mistake was to use the single equals sign for assignment. Many languages, from Algol to Pascal to Ada, use := for assignment. APL has a special assignment character. Lisp uses keywords. Classic Basic uses LET. The real problem is: why does C use the equals operator for assignment? Pointing to that is an explanation for why a double == is needed for equality tests, but not an excuse.
If you try to suggest that C squeezed out Pascal because C is "better", you suggest (with great force) that your main field of expertise is not in formal language design. VHS won the market because it was better, didn't it? And MP3 won over SACD/DVD-A because it was better? TCP/IP won over the OSI protocol stack because it was better? Well, that depends on the criteria. If your only criterion is "degree of market penetration", all of these were "best". But please don't pretend that this is the only imaginable criterion.
-
The big mistake was to use the single equals sign for assignment. Many languages, from Algol to Pascal to Ada, use := for assignment. APL has a special assignment character. Lisp uses keywords. Classic Basic uses LET. The real problem is: why does C use the equals operator for assignment? Pointing to that is an explanation for why a double == is needed for equality tests, but not an excuse.
If you try to suggest that C squeezed out Pascal because C is "better", you suggest (with great force) that your main field of expertise is not in formal language design. VHS won the market because it was better, didn't it? And MP3 won over SACD/DVD-A because it was better? TCP/IP won over the OSI protocol stack because it was better? Well, that depends on the criteria. If your only criterion is "degree of market penetration", all of these were "best". But please don't pretend that this is the only imaginable criterion.
Better is subjective. I'm saying more people found it usable, which speaks to its versatility. Perhaps it would have been better for C-family languages not to use equals as an assignment operator. But it's also not the first thing about the language I'd change, nor does it say much to me about formal language design. As someone who has written plenty of parsers and parser generators that accept formal grammars, I can tell you C's biggest sin is that type declarations need to be fed back into the lexer to resolve grammar constructs. This breaks the separation of lexer and parser. It's not quite as bad as Python's significant whitespace, but it's a pretty ugly thing to have to hack together in a parser. But then, I'm not Niklaus Wirth. I'm just someone that writes code. That being said, I don't holy-roll. I use what works. Pascal doesn't. There just aren't modern tools for it. It's not quite as dead as Latin, but it's catching up.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
Member 7989122 wrote:
In many cases, the compiler can suppress the runtime check, e.g. for int literals,
Plenty of developers overestimate the compilers that ship with .NET. They don't typically do optimizations like that. (Although in .NET you can explicitly turn overflow checks on and off on a per-cast basis in your code.) Experience has shown me time and again: when it comes to .NET, if there's any doubt about whether or not the compiler will optimize something, no matter how obvious, assume it won't. You're far more likely to be right than wrong that way. Spend enough time decompiling .NET asms and you learn the hard way to optimize your own code.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
dotNET is neither a compiler, an instruction set nor a machine architecture. (For the last one: contrary to e.g. P4 or the JVM.) It is like the specification of the interface in the gcc compiler suite between the (programming-language-dependent) front end and the (CPU-architecture-dependent) back end. "Like" not in the sense of "somewhat similar, roughly speaking", but rather "an alternative competing directly with the gcc intermediate format for solving exactly the same set of problems". The back end exists in .NET exactly as in gcc: CPU-architecture-dependent code is generated when the code is first loaded into the machine, and cached for later use. Do you ever "blame" the intermediate gcc format for front ends that don't optimize the code much? That's what you do with dotNET. Put the blame where it deserves to be put.
I haven't studied the source code of a single compiler targeting the intermediate code of either gcc or dotNET. Maybe they don't do even the very simplest optimizations. Why not? Most techniques have been known for more than fifty years. I may suggest that they make a tradeoff: compile time is far more essential with an interactive IDE than in the days when you submitted a card deck for compilation and picked up the compiler listing hours later. So we can't spend time on optimizations while the developer is impatiently twiddling their thumbs waiting for the compilation to complete. Execution time is far less essential today - CPUs are fast enough! Optimizing for space is almost meaningless: adding another 16 GiByte of RAM costs almost nothing, so why delay compilation to avoid that? To some degree, they are right. And also: modern pipelined CPUs drastically reduce the benefit of shaving a few instructions off a linear sequence, compared to a hardwired RISC CPU where every instruction (ideally) requires one clock cycle. Some of the old optimization techniques can safely be left out, as they have no measurable effect at all given today's hardware.
Example, although not from compilers: remember the "interleaving factor" when formatting DOS disks? If logical disk blocks 0, 1, 2... were physically laid out at sectors 0, 2, 4..., then you could read an entire track in only two disk revolutions. If you laid them out at sectors 0, 1, 2..., after reading sector 0, sector 1 passed the disk head while the controller was still stuffing away the data from sector 0. So it had to wait until sector 1 came around next time, a full revolution later. Reading an entire track, when laid out that way, took almost one full revolution per sector.
-
dotNET is neither a compiler, an instruction set nor a machine architecture. (For the last one: contrary to e.g. P4 or the JVM.) It is like the specification of the interface in the gcc compiler suite between the (programming-language-dependent) front end and the (CPU-architecture-dependent) back end. "Like" not in the sense of "somewhat similar, roughly speaking", but rather "an alternative competing directly with the gcc intermediate format for solving exactly the same set of problems". The back end exists in .NET exactly as in gcc: CPU-architecture-dependent code is generated when the code is first loaded into the machine, and cached for later use. Do you ever "blame" the intermediate gcc format for front ends that don't optimize the code much? That's what you do with dotNET. Put the blame where it deserves to be put.
I haven't studied the source code of a single compiler targeting the intermediate code of either gcc or dotNET. Maybe they don't do even the very simplest optimizations. Why not? Most techniques have been known for more than fifty years. I may suggest that they make a tradeoff: compile time is far more essential with an interactive IDE than in the days when you submitted a card deck for compilation and picked up the compiler listing hours later. So we can't spend time on optimizations while the developer is impatiently twiddling their thumbs waiting for the compilation to complete. Execution time is far less essential today - CPUs are fast enough! Optimizing for space is almost meaningless: adding another 16 GiByte of RAM costs almost nothing, so why delay compilation to avoid that? To some degree, they are right. And also: modern pipelined CPUs drastically reduce the benefit of shaving a few instructions off a linear sequence, compared to a hardwired RISC CPU where every instruction (ideally) requires one clock cycle. Some of the old optimization techniques can safely be left out, as they have no measurable effect at all given today's hardware.
Example, although not from compilers: remember the "interleaving factor" when formatting DOS disks? If logical disk blocks 0, 1, 2... were physically laid out at sectors 0, 2, 4..., then you could read an entire track in only two disk revolutions. If you laid them out at sectors 0, 1, 2..., after reading sector 0, sector 1 passed the disk head while the controller was still stuffing away the data from sector 0. So it had to wait until sector 1 came around next time, a full revolution later. Reading an entire track, when laid out that way, took almost one full revolution per sector.
I didn't say it was. I'm well aware of .NET and what it is and isn't. I worked on VStudio Whidbey, FFS.
Member 7989122 wrote:
Do you ever "blame" the intermedate gcc format for front ends that doesn't optimize the code much?
This isn't about blame. This is about stating a fact. None of the compilers shipped with any flavor of .NET that targets any flavor of the CLI optimizes very much, if at all. And when one writes code it pays to know what the compiler is actually doing with it, lest someone make some bad design decisions (like trying to use enums for state machine internals) It's all punted to the JIT, and the JIT just doesn't optimize much. You can wrap a response to that in as many words as you like, but it doesn't change the facts on the ground. And it won't get me to start using enums in my state machine code internals.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
-
Pure dates are generally not a problem, as long as you (or the coder from whose output you are getting the date) don't/didn't do something very stupid. Even here, however, timezone issues can cause problems. I once had an issue which resulted from the author of the firmware of a device I was getting data from recording what should have been a pure (midnight) datestamp as the corresponding local datetime. Since I am in the GMT-5 timezone, midnight on April 4 became 7 pm on April 3! Trying to compare datetimes for simultaneity, however, is almost always a severe PITA, when the source clocks are not both perfectly synchronized and using the same basic internal representation for clock time.
Storing dates in UTC internally solves almost all issues. Not all of them, but almost all. Comparing timestamps in milliseconds for equality may work or may fail, depending on the context. In a scientific context, all clocks involved are precise down to milliseconds at worst and way more precise at best. That, and time differences of a few milliseconds can make huge differences. But it all boils down to context. And yeah, I've seen some very stupid date handling myself. My point is: while comparing floats for equality is a horrible idea by default and always, comparing dates for equality may work very well depending on the circumstances. Well, that, and dates are like encryption: there are heaps of ways to get it wrong, many of them very subtle but still destructive, and only a few (if not only one) ways to get it right.
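For instance, a minimal sketch of the tolerance approach (the helper name and the one-second tolerance are just examples):

using System;

static class TimeCompare
{
    // Compare in UTC and within a tolerance rather than for exact equality.
    // Assumes DateTimeKind is set correctly on both values.
    public static bool RoughlyEqual(DateTime a, DateTime b, TimeSpan tolerance)
    {
        return (a.ToUniversalTime() - b.ToUniversalTime()).Duration() <= tolerance;
    }
}

Something like TimeCompare.RoughlyEqual(stampA, stampB, TimeSpan.FromSeconds(1)) is fine for correlating logs, while a scientific pipeline might demand exact ticks.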