Strange unicode rendering problem
-
Hey all, I have a little project I am messing around with; it just wraps up some WinAPI functions in C++ classes. I just changed all the std::strings to std::wstrings, all my string literals are L"string" now, and UNICODE and _UNICODE are defined. Anyway, everything compiles and runs; the only problem is nothing renders. Buttons do not show and text controls do not show - the only things that show are the scrollbars for the textarea. Any ideas? I bet it is something simple I am missing. Hope you guys and girls have something to point me to. Cheers
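For reference, the kind of change being described might look like this - a before/after sketch of a hypothetical wrapper method (the class and method names here are made up, not from the actual project):

    #include <windows.h>
    #include <string>

    class Window  // hypothetical wrapper class
    {
        HWND m_hWnd;
    public:
        explicit Window( HWND hWnd ) : m_hWnd( hWnd ) { }

        // Before the switch: narrow strings, ANSI API.
        // void SetText( const std::string &sText )
        // { ::SetWindowTextA( m_hWnd, sText.c_str() ); }

        // After the switch: wide strings; with UNICODE defined,
        // SetWindowText expands to SetWindowTextW.
        void SetText( const std::wstring &sText )
        { ::SetWindowText( m_hWnd, sText.c_str() ); }
    };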
-
Well, one thing that instantly comes to mind is that buffer byte sizes that were previously calculated using std::string::size must now be calculated using std::wstring::size * sizeof(wchar_t):

    const std::string s1( "non unicode" );
    const std::wstring s2( L"unicode" );

    const size_t bytesize1 = s1.size() * sizeof( char );
    const size_t bytesize2 = s2.size() * sizeof( wchar_t );

and, of course, bytesize1 != bytesize2. This, in combination with memcpy, memset, etc., is a common trap when changing to Unicode.

-- The Blog: Bits and Pieces
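To make the trap concrete, here is a sketch (variable names are mine) of the kind of copy that silently breaks after the switch:

    #include <cstring>
    #include <string>

    void copy_example()
    {
        const std::wstring s( L"unicode" );
        wchar_t buffer[ 32 ];

        // Wrong after the switch: s.size() counts CHARACTERS, but
        // memcpy wants BYTES, so only half the string is copied
        // (a quarter where wchar_t is 4 bytes).
        std::memcpy( buffer, s.c_str(), s.size() );

        // Right: characters times bytes per character
        // (+1 if you also want the terminating null).
        std::memcpy( buffer, s.c_str(), ( s.size() + 1 ) * sizeof( wchar_t ) );
    }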
-
If your code always used the underlying string type's value_type, you would always be able to determine the per-element size (and thus any buffer size requirements) correctly - there is no requirement that basic_string only contain narrow or wide character types:

    typedef std::basic_string< int > IntStr;

    int iValue = 1024;
    IntStr isInt( &iValue );

    const std::wstring s2( L"unicode" );
    const std::string s1( "non unicode" );

    const size_t stByteSize1 = s1.size() * sizeof( std::string::value_type );  // 11
    const size_t stByteSize2 = s2.size() * sizeof( std::wstring::value_type ); // 14 (2-byte wchar_t)
    const size_t stByteSize3 = isInt.size() * sizeof( IntStr::value_type );    // 4

Peace! -=- James
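A small generic helper (the name byte_size is mine, not from the post) captures that idea, so the element size can never get out of sync with the string type:

    #include <cstddef>
    #include <string>

    // Byte size of a basic_string's character data, derived from the
    // string's own value_type.
    template< typename StringT >
    std::size_t byte_size( const StringT &s )
    {
        return s.size() * sizeof( typename StringT::value_type );
    }

    // byte_size( std::string( "non unicode" ) ) -> 11
    // byte_size( std::wstring( L"unicode" ) )   -> 14 (2-byte wchar_t)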
-
Thanks Johann and James. I'll look for size calculation errors tonight; I believe that may be part of the problem. As for memory functions, I am not using them currently (I'll double check, though). Thanks
-
I answered nitpickingly correctly at first, but decided that the answer would be easier to read if I used char/wchar_t and size_t, because your

    const size_t stByteSize1 = s1.size() * sizeof( std::string::value_type );
    const size_t stByteSize2 = s2.size() * sizeof( std::wstring::value_type );

is only half correct:

    const std::string::size_type stByteSize1 = s1.size() * sizeof( std::string::value_type );
    const std::wstring::size_type stByteSize2 = s2.size() * sizeof( std::wstring::value_type );

is correct. And now it's easy to see, for the sake of this exercise, that char/wchar_t and size_t make reading and understanding easier - but are still not correct.

James R. Twine wrote:
    typedef std::basic_string< int > IntStr;

Ohhh... dangerous! Using that correctly, so that you get the expected results, means you probably also need a char_traits specialization for int to get where you want.

-- The Blog: Bits and Pieces
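Applied to the byte_size sketch above, the fix would be to let the string type supply the result type as well as the element type:

    #include <string>

    // The string type supplies BOTH the element type and the result
    // type, so the calculation stays correct for any basic_string.
    template< typename StringT >
    typename StringT::size_type byte_size( const StringT &s )
    {
        return s.size() * sizeof( typename StringT::value_type );
    }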
-
You are correct about needing to use the class's size_type to get the correct variable type. The example I gave of using a string class for int types was contrived - I just wanted to demonstrate the use of value_type. But you are correct there as well - more would need to be done to make a completely usable IntStr type.

Peace! -=- James
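For the curious, a sketch of what that "more" might involve - a minimal traits class for int elements, roughly along the lines of the char_traits specialization Johann mentioned. This is an illustration only (the member set follows std::char_traits but is trimmed to what basic_string typically needs), and note that James's IntStr isInt( &iValue ) would still be dangerous: it relies on finding a terminating 0 in memory after iValue.

    #include <cstddef>
    #include <cstring>
    #include <string>

    struct IntTraits  // minimal char_traits-style class for int "characters"
    {
        typedef int  char_type;
        typedef long int_type;  // a real traits class would keep eof()
                                // distinct from every valid char_type value

        static void assign( char_type &d, const char_type &s ) { d = s; }
        static bool eq( const char_type &a, const char_type &b ) { return a == b; }
        static bool lt( const char_type &a, const char_type &b ) { return a < b; }

        static int compare( const char_type *a, const char_type *b, std::size_t n )
        {
            for ( std::size_t i = 0; i < n; ++i )
            {
                if ( lt( a[ i ], b[ i ] ) ) return -1;
                if ( lt( b[ i ], a[ i ] ) ) return  1;
            }
            return 0;
        }

        static std::size_t length( const char_type *s )
        {
            std::size_t n = 0;
            while ( s[ n ] != 0 )  // 0 acts as the terminator
                ++n;
            return n;
        }

        static const char_type *find( const char_type *s, std::size_t n,
                                      const char_type &c )
        {
            for ( std::size_t i = 0; i < n; ++i )
                if ( eq( s[ i ], c ) ) return s + i;
            return 0;
        }

        static char_type *move( char_type *d, const char_type *s, std::size_t n )
        { return ( char_type * )std::memmove( d, s, n * sizeof( char_type ) ); }

        static char_type *copy( char_type *d, const char_type *s, std::size_t n )
        { return ( char_type * )std::memcpy( d, s, n * sizeof( char_type ) ); }

        static char_type *assign( char_type *d, std::size_t n, char_type c )
        {
            for ( std::size_t i = 0; i < n; ++i ) d[ i ] = c;
            return d;
        }

        static int_type  to_int_type( const char_type &c ) { return c; }
        static char_type to_char_type( const int_type &i ) { return ( char_type )i; }
        static bool      eq_int_type( const int_type &a, const int_type &b ) { return a == b; }
        static int_type  eof() { return -1L; }
        static int_type  not_eof( const int_type &i ) { return eq_int_type( i, eof() ) ? 0 : i; }
    };

    typedef std::basic_string< int, IntTraits > IntStr;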