Binary serialization
-
I have a public [Serializable] class with some simple strings and ints. I wrote an app that populates an instance of this class, and I can serialize as well as deserialize without problems. I then wrote a second app that should deserialize the file produced by the first program. The deserialization code is identical to that in the first program, and the class definition file is identical. However, it generates an exception with the message "Unable to find assembly 'Q-Sort-Setup, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.", where Q-Sort-Setup is the first app. I have been tearing out my (already sparse) hair. Thanks for any suggestions, Tom
-
That's probably because .NET's standard serialization embeds the name of the assembly in which the type was declared, and it refuses to deserialize into another type that "just happens to have the same structure". If you declare the type in a DLL referenced by both programs, it should work.

Alternatively, you could (de)serialize your data manually (with a BinaryWriter and a BinaryReader, for example). The resulting data will also be a lot smaller, since you don't really need to include insanely accurate type information; a couple of bytes are enough to identify the type even if you serialize lots of different kinds of objects. In my experience it's rare to have more than 256 different "top-level" types, and the type of everything else can be determined by position. I'm going to throw in a stupid example because I'm bored:

    abstract class Message
    {
        int messageID;

        public Message(BinaryReader r)
        {
            messageID = r.ReadInt32();
        }

        public static Message Deserialize(BinaryReader r)
        {
            switch (r.ReadByte())
            {
                case 0: return new SomeSpecificMessageWithData(r);
                // case 1: some other message type
                default: throw new InvalidDataException("Unknown message type");
            }
        }

        public virtual void Serialize(BinaryWriter w)
        {
            w.Write(messageID);
        }
    }

    class SomeSpecificMessageWithData : Message
    {
        int somethingOrOther;
        float a_float;
        string name;
        const byte TypeID = 0;

        public SomeSpecificMessageWithData(BinaryReader r) : base(r)
        {
            somethingOrOther = r.ReadInt32();
            name = r.ReadString();
            a_float = r.ReadSingle();
        }

        public override void Serialize(BinaryWriter w)
        {
            w.Write(TypeID);
            base.Serialize(w);
            w.Write(somethingOrOther);
            w.Write(name);
            w.Write(a_float);
        }
    }

I know many people prefer the built-in serialization of .NET, and it's certainly easier (less code to write), but it has some very big disadvantages: it's slow as hell (so slow that questions about making it faster come up in this forum quite often), it adds a lot of overhead to the data, it's hard to customize, and the resulting data makes little sense to other platforms (though that may not always be a consideration).
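For a class like Tom's (a few strings and ints), the manual approach really is just a handful of reads and writes. Here's a minimal self-contained sketch of a round trip through a MemoryStream; the Profile type and its members are invented for illustration. Because the stream contains only the raw values and no type metadata, a file written by one exe can be read by any other program that has the same Load code:

```csharp
using System;
using System.IO;

class Profile
{
    public string Name;
    public int Score;

    public void Save(BinaryWriter w)
    {
        w.Write(Name);
        w.Write(Score);
    }

    public static Profile Load(BinaryReader r)
    {
        // Read the fields back in the exact order they were written.
        return new Profile { Name = r.ReadString(), Score = r.ReadInt32() };
    }
}

class Program
{
    static void Main()
    {
        using (var ms = new MemoryStream())
        {
            new Profile { Name = "tom", Score = 42 }.Save(new BinaryWriter(ms));
            ms.Position = 0;
            Profile copy = Profile.Load(new BinaryReader(ms));
            Console.WriteLine(copy.Name + " " + copy.Score); // prints "tom 42"
        }
    }
}
```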
-
I agree with you, but would add one other thing: the built-in serializer makes it difficult to change the class structure in later versions and still read older data files correctly. The time saved at the beginning can be paid back several times over when changes occur. For that reason I prefer to serialize it all myself right from the start!
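One common way to handle that with the manual approach is to write a format version at the front of the file and branch on it when reading. A rough sketch, with invented version numbers and fields:

```csharp
using System.IO;

class SaveData
{
    public string Name;
    public int Score;   // field added in format version 2

    const int CurrentVersion = 2;

    public void Save(BinaryWriter w)
    {
        // Always write the current version first.
        w.Write(CurrentVersion);
        w.Write(Name);
        w.Write(Score);
    }

    public static SaveData Load(BinaryReader r)
    {
        int version = r.ReadInt32();
        var d = new SaveData();
        d.Name = r.ReadString();
        // Older files simply lack the newer fields; fall back to a default.
        d.Score = version >= 2 ? r.ReadInt32() : 0;
        return d;
    }
}
```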
If Barbie is so popular, why do you have to buy her friends? Eagles may soar, but weasels don't get sucked into jet engines. If at first you don't succeed, destroy all evidence that you tried.