Conversion errors in convert endian routine
-
Rather new to C#, so I'm still getting used to the foibles. I'm working on a personal MIDI project and need to convert some ints and shorts to big endian before output. After a few trials I got the following to work for UInt32:

    private UInt32 CVTEndian(UInt32 u)
    {
        byte[] numb = { Convert.ToByte((u >> 24) & 0xff),
                        Convert.ToByte((u >> 24) & 0xff),
                        Convert.ToByte((u >> 8) & 0xff),
                        Convert.ToByte(u & 0xff) };
        return Convert.ToUInt32(numb[3] << 24) | Convert.ToUInt32(numb[2] << 16) |
               Convert.ToUInt32(numb[1] << 8) | Convert.ToUInt32(numb[0]);
    }

However, in trying what should be the same but shorter and easier for UInt16, I came up with the following:

    private UInt16 CVTEndian(UInt16 u)
    {
        UInt16 temp;
        byte[] numb = { Convert.ToByte((u >> 8) & 0xff), Convert.ToByte(u & 0xff) };
        temp = (Convert.ToUInt16(numb[1]) << 8) | Convert.ToUInt16(numb[0]);
        return temp;
    }

Mind you, I originally just returned the expression; assigning to a temp was an attempt to narrow down the problem, and the extra set of parentheses is there to make sure I wasn't screwing up on order or precedence. At any rate, the assignment to temp gives me:

    Cannot implicitly convert type 'int' to 'ushort'. An explicit conversion exists

It's not really any different from the code in the other method. It compiles fine if I leave off the expression on the right side of the or operation, but that doesn't give the proper results. I'd be happy with an explanation of where I'm going wrong, or a better method of converting endianness, especially if it's part of the .NET paradigm.

TIA, Lilith
-
Hi, yes, there is an inherent difference: [Corrected] all your integer expressions are evaluated using 32-bit integers since they include several 32-bit values (all the constants) [/Corrected]. Hence for a 32-bit byteswap there is no problem, but for a 16-bit byteswap the intermediate value is 32-bit and cannot be assigned to a 16-bit temp without a cast, so use (UInt16)(myexpression). BTW: there is a mistake at the beginning of the UInt32 version, you have >> 24 twice! Remark: there are slightly easier ways to do this; I am preparing an article on it, which will take up to 10 days from now. :) -- modified at 19:50 Friday 3rd August, 2007
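For example, here is a minimal sketch of the 16-bit version with the cast applied (same method name as in your post; only the cast at the end is new):

    private UInt16 CVTEndian(UInt16 u)
    {
        byte[] numb = { Convert.ToByte((u >> 8) & 0xff), Convert.ToByte(u & 0xff) };
        // the shift and the | are evaluated as int, so narrow the result back down
        return (UInt16)((numb[1] << 8) | numb[0]);
    }

The same cast would also make your original version compile: keep the temp and the Convert calls if you prefer, the only requirement is that the final int result is explicitly narrowed back to UInt16 before the assignment.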
Luc Pattyn
-
I'd have to go play with it, but one thing I'd wonder about is whether the compiler is assuming 0xff is an int. Typically it would.
I'm largely language agnostic
After a while they all bug me :doh:
-
That did it, Luc. Thanks. And thanks for the heads-up on the reuse of >> 24; I would have found it eventually, but it's better to have someone else spot it than to have to discover on my own that I had an error at all. :-) Looking forward to your article. I could have worked this out a few other ways, but they'd all be just as involved and less obvious. Lilith
-
That wasn't where the error occurred. However, Luc Pattyn's response, which turned out to be the answer, also explains why the 0xff you mention, even if assumed to be an int, wouldn't have been mismatched against the UInt16: the UInt16 itself would have been promoted to int for the operation. I do appreciate your answering. Lilith
-
In fact MidwestLimey got that part of the answer right where I got it wrong: the constants in the expression (0xFF and the shift counts 8, 16, and 24) are 32-bit int quantities by definition (unless an L suffix makes them 64-bit), so every subexpression they appear in gets evaluated as a 32-bit int. In the meantime I have applied the necessary correction to my earlier post. Greetings
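A small illustration of the promotion, with hypothetical variable names (note that the C# shift and bitwise-or operators give an int result whenever their operands are byte, short or ushort, so the cast back to UInt16 is needed in any case):

    byte hiByte = 0x12, loByte = 0x34;
    // does not compile: (hiByte << 8) | loByte is typed int, not ushort
    // UInt16 combined = (hiByte << 8) | loByte;
    UInt16 combined = (UInt16)((hiByte << 8) | loByte);   // OK, yields 0x1234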
Luc Pattyn
-
Have a look at IPAddress.HostToNetworkOrder in System.Net.
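Something like this, as a minimal sketch (HostToNetworkOrder is a static method on System.Net.IPAddress and only has signed overloads, so unsigned values have to be cast through Int16/Int32):

    using System.Net;

    UInt16 u16 = 0x1234;
    UInt32 u32 = 0x12345678;

    // On a little-endian machine this swaps the bytes (0x3412 and 0x78563412);
    // on a big-endian machine it is a no-op, which is what you want before output.
    UInt16 be16 = (UInt16)IPAddress.HostToNetworkOrder((Int16)u16);
    UInt32 be32 = (UInt32)IPAddress.HostToNetworkOrder((Int32)u32);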