#pragma pack(1) producing wrong structure sizes!!!
-
I return to the Windows development community in need of some anger- and frustration-relieving guidance from you gurus. I have the following code:

#pragma pack(push, 1)
typedef struct
{
    u16 _id          : 10;
    u8  _hflip       : 1;
    u8  _vflip       : 1;
    u8  _palettetype : 4;
} MAPDATA;

Correct me if I'm wrong, but 10 bits + 1 bit + 1 bit + 4 bits = 16 bits, which equates to 2 bytes, yet with #pragma pack, push, pop and every blinking option Microsoft has to offer, sizeof(MAPDATA) returns 3!!! If I use #pragma pack(2), sizeof then miraculously returns 4. This is completely messing up my quantize-and-save algorithm, and I've been up until 2am trying to figure out why gcc's __attribute__((packed)) works like a dream while Microsoft's #pragma pack counts like an epileptic monkey in a bright multi-coloured jacket. Thanks all, and apologies for the frustration.

"When I left you I was but the learner, now I am the master" - Darth Vader
-
1. Why are you using the u8 type? If you used u16 throughout, both compilers might do what you want.
2. I have no idea why you think the pack pragma would do anything here. Everything is already being packed byte-aligned.
3. The Microsoft and GNU compilers are both working as expected. If you look at the C++ standard, how bitfields are stored is totally implementation-defined.

Tim Smith
I'm going to patent thought. I have yet to see any prior art.
-
Hey, thank you for the tip-off. It came like a dose of paracetamol to me :). Here are my answers to your questions.

1. Because I didn't see the point in declaring a 16-bit type for a bitfield of less than 8 bits when the 8-bit type is nearer the size I wanted (well, that was the thinking).
2. I was assuming the compiler would notice it was trying to pack a 16-bit struct and therefore pack it on a 2-byte boundary as I was requesting.
3. Mmm, the weird thing was that I'd read you're supposed to use the plain 'unsigned' declaration with no size typing, so the compilers can make the same choices in their different ways. I think I was just extremely tired, either that or ignorant :).

I also never bother referencing the C++ standard when using a Microsoft compiler because it tends to break it at will, although the .NET 2003 Visual Studio one is much more compliant (and I'm hoping 2005 will be even better, with platform-independent project management, yay :) ). I could see that the two compilers had implemented it in their different ways; I just wanted to know what I had to do to get the Microsoft implementation to give me the same semantics as gcc. I was only trying to make a tool for my blinking program, but became frustrated as I wasted the entire evening after coming home from work at 9.30pm (long day) doing something this simple until 2am (British time). Your response solved it in two seconds. There is a lesson learned now, though: I shall use the 'unsigned' type declaration instead of a u16 or u8 declaration and leave the packing all up to the compiler. Thanks again :)

"When I left you I was but the learner, now I am the master" - Darth Vader
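Since the goal is a quantize-and-save tool, one way to sidestep compiler-specific bitfield layout entirely is to pack and unpack the 16-bit entry by hand with shifts and masks. A sketch under the assumption that the intended layout is id in the low 10 bits, then hflip, vflip, and palettetype (the function names and bit order are illustrative, not from the original post):

```c
#include <stdint.h>

/* Pack the four fields into one 16-bit value.  Bit layout assumed:
   bits 0-9 = id, bit 10 = hflip, bit 11 = vflip, bits 12-15 = palette. */
static uint16_t mapdata_pack(uint16_t id, int hflip, int vflip, int pal) {
    return (uint16_t)((id & 0x3FF)
                    | ((hflip & 1) << 10)
                    | ((vflip & 1) << 11)
                    | ((pal & 0xF) << 12));
}

/* Recover the fields from a packed 16-bit value. */
static void mapdata_unpack(uint16_t v, uint16_t *id, int *hflip,
                           int *vflip, int *pal) {
    *id    = (uint16_t)(v & 0x3FF);
    *hflip = (v >> 10) & 1;
    *vflip = (v >> 11) & 1;
    *pal   = (v >> 12) & 0xF;
}
```

Because the layout is spelled out explicitly, the same bytes come out of every compiler, and there is no dependence on #pragma pack or __attribute__((packed)) at all.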
I take it back: when I make all the struct elements plain 'unsigned' instead of specifying the size (u16), it packs them on a four-byte boundary, which is just absolutely crap because it's so obvious I want it on a two-byte boundary. Not only am I specifying bitfields up to a total size of 16 bits, I'm also TELLING the thing via #pragma pack(2) to put it on a 2-byte boundary, and yet it still cocks it up! It never ceases to amaze me.

"When I left you I was but the learner, now I am the master" - Darth Vader