Int16 is not Int16: are you missing a cast?
-
Hello. I have this little method (yes, I know it makes no sense, but it is a test):
public void AddNumbers()
{
    Int16 a = 2;
    Int16 b = 4;
    Int16 c = 6;
    Int16 result = a + b + c;
}
The above will not compile. I get the CS0266 compiler error: "Cannot implicitly convert type 'int' to 'short'. An explicit conversion exists (are you missing a cast?)" But this will:

public void AddNumbers()
{
    Int16 a = 2;
    Int16 b = 4;
    Int16 c = 6;
    Int32 result = a + b + c;
    // or this line:
    // Int16 result = (Int16)(a + b + c);
}
Now why is this happening? I've checked the IL code for the method; it looks like this:

.method public hidebysig instance void AddNumbers() cil managed
{
  // Code size 14 (0xe)
  .maxstack 2
  .locals init ([0] int16 a,
                [1] int16 b,
                [2] int16 c,
                [3] int32 result)
  IL_0000: nop
  IL_0001: ldc.i4.2
  IL_0002: stloc.0
  IL_0003: ldc.i4.4
  IL_0004: stloc.1
  IL_0005: ldc.i4.6
  IL_0006: stloc.2
  IL_0007: ldloc.0
  IL_0008: ldloc.1
  IL_0009: add
  IL_000a: ldloc.2
  IL_000b: add
  IL_000c: stloc.3
  IL_000d: ret
} // end of method Program::AddNumbers
As far as I can see, IL loads the numbers with ldc.i4. Is that a 4-byte integer? Because if it is, why does it not load them with ldc.i2 instead, since my variables are of type Int16? Can anybody tell me why? Best regards, Soeren
-
Bad Robot wrote:
As far as I can see, IL loads the numbers with ldc.i4. Is that a 4-byte integer? Because if it is, why does it not load them with ldc.i2 instead, since my variables are of type Int16? Can anybody tell me why?
That is what is happening. It's needed because adding two Int16s could overflow and require an Int32 to store the result. Forcing an explicit cast is intended to make the developer aware of that failure case at design time rather than having a user discover it at run time.
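To see the overflow the cast is guarding against, here is a minimal sketch of my own (not from the original post):

using System;

public class OverflowDemo
{
    public static void Main()
    {
        short x = short.MaxValue;      // 32767
        short y = 1;

        int wide = x + y;              // 32768: the addition is done in 32 bits, so it fits
        short narrow = (short)(x + y); // -32768: the unchecked cast truncates to 16 bits

        Console.WriteLine($"{wide} {narrow}");
    }
}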
-
This is strange indeed. The compiler also complains when using the unchecked keyword. I googled a bit and found out that ldc only supports i4, i8, r4 and r8 as parameters (i = integer, r = float), which would mean that Int16 values are internally interpreted as Int32 - can anyone confirm this? "The ldc instruction can support a 4-byte integer (i4), an 8-byte integer (i8), a 4-byte float (r4) or an 8-byte float (r8)." -> source
Regards
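A quick illustration of the unchecked behaviour (my own sketch, assuming only the snippets above): unchecked suppresses overflow checking, but it does not change the compile-time type of the expression, so the narrowing cast is still required.

public void UncheckedDemo()
{
    Int16 a = 2, b = 4, c = 6;

    // Still CS0266: the type of a + b + c is int, and 'unchecked'
    // does not add an implicit int -> short conversion.
    // Int16 bad = unchecked(a + b + c);

    // The cast does the narrowing; 'unchecked' merely suppresses
    // overflow checking on that cast.
    Int16 ok = unchecked((Int16)(a + b + c));
}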
-
I googled a bit and found out that ldc only supports i4, i8, r4 and r8, which would mean that Int16 values are internally interpreted as Int32 - can anyone confirm this?

That makes sense, considering that the processor registers supported by the .NET Framework are either 32 or 64 bits wide. According to the documentation for the LDC_I4[^] and LDC_I8[^] opcodes, yes, this is the case: integers of any size are stored in either 4 or 8 bytes.
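You can observe the same split between storage width and arithmetic width from C# itself (a small sketch of my own):

public void WidthDemo()
{
    Console.WriteLine(sizeof(short));  // 2: the variable itself occupies 16 bits
    short s = 1;
    var sum = s + s;                   // compile-time type of s + s is int
    Console.WriteLine(sum.GetType());  // System.Int32: the arithmetic runs at 32-bit width
}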
Dave Kreskowiak Microsoft MVP - Visual Basic
-
shorts are promoted - converted - to ints before performing calculations. This is true in C and C++ as well as in C#. Therefore the type of 'a + b' is Int32. When you then try to assign the result of the expression (which is Int32) back to an Int16, there's a type mismatch, which produces the error message.

Stability. What an interesting concept. -- Chris Maunder
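Putting the two fixes side by side (a minimal sketch of my own, using only the code from the question):

public void AddNumbers()
{
    Int16 a = 2, b = 4, c = 6;

    // Option 1: accept the promoted type.
    Int32 wide = a + b + c;

    // Option 2: cast the whole expression back to Int16.
    // The cast must cover the entire sum: (Int16)a + b + c would
    // still be an int, because the operands are promoted again.
    Int16 narrow = (Int16)(a + b + c);

    Console.WriteLine($"{wide} {narrow}"); // 12 12
}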