Gary R. Wheeler wrote:
Bit-twiddling in VB just seems like you're asking for trouble.
The same problem exists in C. If I were writing a language spec, I would specify that the 'not' operator shall always behave as though it returned the longest existing integer type, and that signed/unsigned comparisons shall always behave in numerically-correct fashion (i.e. if the signed number is negative, it's less than the unsigned one; otherwise the values compare numerically). Actually, I'd specify that any integer expression all of whose intermediate subexpressions fit in the largest integer type must be evaluated as though all calculations were done in that largest type. In many cases, a halfway-intelligent compiler should be able to figure out what size operands are actually necessary, and I can't think of any case where such behavior would break decently-written code. Do you know of any languages that work that way?

BTW, a couple more things:

(1) A decent compiler should be able to recognize the cases where casting to a long doesn't really mean casting to a long, such as "longvar = int1 * (long)int2;" or "longvar &= ~2;". In the former case, the hardware should use one int*int->long multiply rather than sign-extending the two integers, performing four uint*uint->ulong multiplies, and adding up the partial products; a decent compiler should also recognize "longvar &= ~(long)smallpositivevalue;" and not bother doing anything with the upper bits of longvar.

(2) I wonder why more languages and CPUs don't have an "and not" operator/instruction. I think VAX Basic included such an operator, and the ARM instruction set does, but I've not seen them often.

One more thing: I think it's cool that the formula for computing sum(i=0..inf)(2^i) yields -1. So computers and "real math" agree.