Perhaps I should write an article
-
Thanks, but the point is to port everything away from Microsoft. And in unmanaged C++ every type is nullable, isn't it?
Ah ok. In native C++ though, ints, bytes, chars, and bools aren't nullable as far as I've ever known.
-
Ah ok. In native C++ though, ints, bytes, chars, and bools aren't nullable as far as I've ever known.
-
Yeah, but a pointer isn't any specific type, it's a reference to a location in memory.
-
Yeah, but a pointer isn't any specific type, it's a reference to a location in memory.
-
Yeah, you could create pointers for doing nulls, but it would probably make more sense to have some sort of default convention.
int MyInt = 0;
if(MyInt == 0) // equivalent to null

// or, if you need to use 0:
int MyInt = -1;
if(MyInt == -1) // null

// or, if you need the entire integer range:
int MyInt = 0;
bool isIntNull = true;
// do work here
if(isIntNull) // int is null, regardless of the int's value

OR (it's pretty damn basic, and I just threw it together in Notepad so it may not compile, but the idea would work):
template <typename theType>
class Nullable{
    bool isNull;
    theType Value;
public:
    Nullable(){
        isNull = true;
    }
    bool IsNull(){
        return isNull;
    }
    theType GetValue(){
        return Value;
    }
    void SetValue(theType val){
        isNull = false;
        Value = val;
    }
};
-
Yeah, you could create pointers for doing nulls, but it would probably make more sense to have some sort of default convention.
int MyInt = 0;
if(MyInt == 0) // equivalent to null

// or, if you need to use 0:
int MyInt = -1;
if(MyInt == -1) // null

// or, if you need the entire integer range:
int MyInt = 0;
bool isIntNull = true;
// do work here
if(isIntNull) // int is null, regardless of the int's value

OR (it's pretty damn basic, and I just threw it together in Notepad so it may not compile, but the idea would work):
template <typename theType>
class Nullable{
    bool isNull;
    theType Value;
public:
    Nullable(){
        isNull = true;
    }
    bool IsNull(){
        return isNull;
    }
    theType GetValue(){
        return Value;
    }
    void SetValue(theType val){
        isNull = false;
        Value = val;
    }
};
Have a look at boost::optional; these things tend to be tricky in today's C++. I have a question for the thread author: you've apparently decided to rewrite a working piece of software to "move away from Microsoft", if I may paraphrase... It appears to be more like "moving from managed to native", but why? This is exactly the kind of thing where the managed world offers you a fast, efficient and safe way to get your job done. Trying to rewrite this in native "bare metal" C++ is awkward, error-prone and lengthy...
-
Ah ok. In natural C++ though, ints,bytes,chars,and bools aren't nullable as far as I've ever known.
-
CDP1802 wrote:
int * PointerToSomeInteger = NULL;
However that doesn't solve what happens when you do have a nullable int, unless you are going to solve the problem by making every data type into a pointer. Thus int -> int*, int* -> int**, int** -> int***, string -> string*, string* -> string**, etc. That is similar to what I already said: the only way to implement this is to add a flag for every property. Making every type into a pointer is just a different way of adding a flag. (And, repeating what I also said, I have in fact implemented a DTO structure with a flag for every property and used it to indicate whether the value was set - and ultimately I did not find the representation useful.)
-
CDP1802 wrote:
int * PointerToSomeInteger = NULL;
However that doesn't solve what happens when you do have a nullable int, unless you are going to solve the problem by making every data type into a pointer. Thus int -> int*, int* -> int**, int** -> int***, string -> string*, string* -> string**, etc. That is similar to what I already said: the only way to implement this is to add a flag for every property. Making every type into a pointer is just a different way of adding a flag. (And, repeating what I also said, I have in fact implemented a DTO structure with a flag for every property and used it to indicate whether the value was set - and ultimately I did not find the representation useful.)
Well, I already allow nulls in the database sparingly, which is why it is not a big issue. Detecting unfilled properties has mostly proven useful when external modules or web services are involved. I have already seen such things happen without an error because the client had been compiled without using the current WSDL of the service. By itself it may not be important; it's just an additional benefit that comes at no additional cost.
-
Have a look at boost::optional; these things tend to be tricky in today's C++. I have a question for the thread author: you've apparently decided to rewrite a working piece of software to "move away from Microsoft", if I may paraphrase... It appears to be more like "moving from managed to native", but why? This is exactly the kind of thing where the managed world offers you a fast, efficient and safe way to get your job done. Trying to rewrite this in native "bare metal" C++ is awkward, error-prone and lengthy...
Over the last few years I have found myself constantly rewriting parts of the code because of Microsoft's changing strategies. Each time I got a little more off the beaten path and did things my own way. The third big revision moved on to a completely self-made UI that depended on nothing more than the .Net framework - and XNA. I was finally making some progress on the program itself when they pulled the plug on XNA. This time I will not waste my time rewriting everything in whatever way they have come up with this time. I'm sure they will be on another course again by the time I would be finished with that.

But what makes you think that C++ is so scary? I already had to rely on my own code more than the .Net framework anyway. In fact, the only things I'm going to miss are Reflection and XAML. The ability to load views and styles in the UI from XAML markup with only a few lines of code was really a treat. Still, it's not worth dooming yourself to rewriting your code forever. I also don't really see the lengthy and error-prone part. Mostly I would use the same class design as before, and I have also never had any problems with pointers or memory management. Before .Net arrived we actually got things done as well. I worked for a company that made solutions for document archiving. Even a smaller customer could want 100,000 or more documents processed each day. An error rate of 1% was not acceptable, since there would be no chance to correct so many documents manually. I think that's fast, efficient and safe enough. Going in that direction again actually feels good.
-
Over the last few years I have found myself constantly rewriting parts of the code because of Microsoft's changing strategies. Each time I got a little more off the beaten path and did things my own way. The third big revision moved on to a completely self-made UI that depended on nothing more than the .Net framework - and XNA. I was finally making some progress on the program itself when they pulled the plug on XNA. This time I will not waste my time rewriting everything in whatever way they have come up with this time. I'm sure they will be on another course again by the time I would be finished with that.

But what makes you think that C++ is so scary? I already had to rely on my own code more than the .Net framework anyway. In fact, the only things I'm going to miss are Reflection and XAML. The ability to load views and styles in the UI from XAML markup with only a few lines of code was really a treat. Still, it's not worth dooming yourself to rewriting your code forever. I also don't really see the lengthy and error-prone part. Mostly I would use the same class design as before, and I have also never had any problems with pointers or memory management. Before .Net arrived we actually got things done as well. I worked for a company that made solutions for document archiving. Even a smaller customer could want 100,000 or more documents processed each day. An error rate of 1% was not acceptable, since there would be no chance to correct so many documents manually. I think that's fast, efficient and safe enough. Going in that direction again actually feels good.
Well, yeah, XNA is a Microsoft-specific technology which might change or go away. But this is a common problem with any vendor-bound framework, so you are always making a kind of bet. This is, however, still not a reason which would make me move away from the managed world. You will agree that a well-designed application can move dependencies on specific technologies into distinct layers which can be replaced by some other implementation if a technology is no longer appropriate. In the case of XNA I'd even give the various clones and ports a chance. I assume you know that software written for .Net is not bound to be executed on Microsoft's platforms... But that's not the point. I didn't say that native C++ is "scary" (and it's not a bad attribute at all), it is just, uhm... awkward to get certain things done. It's mainly because of the lack of (good and out-of-the-box available) high-level libraries, but also due to the limitations of C++'s deployment and compatibility models. With C++11 you got another dimension of incompatibility between the various implementations, and another dimension of complexity. Things have gotten quite a bit better lately, with Boost growing strongly and getting integrated into the standard fast, and good libraries being thrown to the public by Microsoft, Facebook and others, but still: for many kinds of problems, doing certain things (e.g. those you have described in your post) remains awkward. Cheers, Paul P.S. I have to add that in my job I am a native C++ developer and have been for a very long time... I got so frustrated by certain awkward trivialities that I started to write and publish the vex library, which I am using to circumvent some really "vexing" parts of a common developer's life...
-
Have a look at boost::optional; these things tend to be tricky in today's C++. I have a question for the thread author: you've apparently decided to rewrite a working piece of software to "move away from Microsoft", if I may paraphrase... It appears to be more like "moving from managed to native", but why? This is exactly the kind of thing where the managed world offers you a fast, efficient and safe way to get your job done. Trying to rewrite this in native "bare metal" C++ is awkward, error-prone and lengthy...
It was just an example.
-
Well, I already allow nulls in the database sparingly, which is why it is not a big issue. Detecting unfilled properties has mostly proven useful when external modules or web services are involved. I have already seen such things happen without an error because the client had been compiled without using the current WSDL of the service. By itself it may not be important; it's just an additional benefit that comes at no additional cost.
There is an additional cost - complexity. I can note that when I tried the solution I was generating everything, so there was no actual manual requirement to create the code. I still considered it overly complex, and if one doesn't generate the solution then there is an increased cost due to getting it correct. In general, external source mapping is not a one-to-one process. Thus an external client (human or otherwise) might incorrectly use an external web service in such a way that it has nothing to do with a database field. Which is why validation must occur, regardless, at the boundary level. And validation must continue to occur, since one cannot rely on the client not changing. (I should note that although I don't work with client GUIs, I have implemented a lot of mapping APIs, and that is why I always validate at the boundary layer now.)
-
Well, yeah, XNA is a Microsoft-specific technology which might change or go away. But this is a common problem with any vendor-bound framework, so you are always making a kind of bet. This is, however, still not a reason which would make me move away from the managed world. You will agree that a well-designed application can move dependencies on specific technologies into distinct layers which can be replaced by some other implementation if a technology is no longer appropriate. In the case of XNA I'd even give the various clones and ports a chance. I assume you know that software written for .Net is not bound to be executed on Microsoft's platforms... But that's not the point. I didn't say that native C++ is "scary" (and it's not a bad attribute at all), it is just, uhm... awkward to get certain things done. It's mainly because of the lack of (good and out-of-the-box available) high-level libraries, but also due to the limitations of C++'s deployment and compatibility models. With C++11 you got another dimension of incompatibility between the various implementations, and another dimension of complexity. Things have gotten quite a bit better lately, with Boost growing strongly and getting integrated into the standard fast, and good libraries being thrown to the public by Microsoft, Facebook and others, but still: for many kinds of problems, doing certain things (e.g. those you have described in your post) remains awkward. Cheers, Paul P.S. I have to add that in my job I am a native C++ developer and have been for a very long time... I got so frustrated by certain awkward trivialities that I started to write and publish the vex library, which I am using to circumvent some really "vexing" parts of a common developer's life...
Paul, I began programming long ago on a small computer I built from a kit, with 4k RAM and great 32 x 64 pixel monochrome graphics. And the programs were machine code, entered directly with the hexadecimal keyboard. And I still do that today, since I still have the old computer. I think things can't get any more spartan or awkward than beginning with an absolutely empty memory and writing the first instructions which are to be executed after the CPU's reset. That's why one of my first ideas was to put every little fragment of code into some kind of library. The 'library' was a cassette tape on which I had saved all kinds of routines, and a sheet of paper where I had written down which routine was to be found at which position on the tape. 'Linking' was done by loading the routines to the desired memory location and adapting the branching instructions manually. Here[^] you have a recreation of my little 'library'. I did it in two boring days before Christmas, using an assembler and an emulator on my notebook. Pure luxury, just like using the memory-wasting 64 x 64 resolution. :) I have been writing libraries for and against everything ever since. At first that may be slow, but over time you have so much useful and working code that you can do great things very quickly. It has always been like Lego, with the difference that I made more of my own pieces than buying boring sets off the shelf. And, if I have the choice, I prefer to spend my time crafting some new parts to the best of my ability rather than trying to read up on the next great thing a company wants to sell me and struggling to adapt my code just in time for the next big change. Or perhaps I just enjoy being a black sheep that refuses to run along with the rest of the herd :)
-
There is an additional cost - complexity. I can note that when I tried the solution I was generating everything so there was no actual manual requirement to create the code. I still considered it overly complex and if one doesn't generate the solution then there is an increased cost due getting it correct. In general external source mapping is not a one to one process. Thus an external client (human or otherwise) might incorrectly use an external web service in such a way that it has nothing to do with a database field. Which is why validation must occur, regardless, at the boundary level. And validation must continue to occur since one cannot rely on the client not changing. (Should note that although I don't work with client GUI I have implemented a lot of mapping APIs and that is why I always validate at the boundary layer now.)
I see your point. I tend to keep service layers as thin as possible. They deal only with exposing an interface to the underlying application logic and whatever security may be required. Data is only passed on in both directions. A little like a post office, where they are mostly concerned with receiving and delivering packages, but not very much with what's inside the packages. The first thing that must happen in the logic layer, of course, is a complete validation. But that's exactly why I moved the validation into the data object. Any layer can then perform a complete validation of any data object at any time. There are no redundant implementations of this validation code, and everything is set up in just one method, usually the constructor of the data object.
-
Paul, I began programming long ago on a small computer I built from a kit, with 4k RAM and great 32 x 64 pixel monochrome graphics. And the programs were machine code, entered directly with the hexadecimal keyboard. And I still do that today, since I still have the old computer. I think things can't get any more spartan or awkward than beginning with an absolutely empty memory and writing the first instructions which are to be executed after the CPU's reset. That's why one of my first ideas was to put every little fragment of code into some kind of library. The 'library' was a cassette tape on which I had saved all kinds of routines, and a sheet of paper where I had written down which routine was to be found at which position on the tape. 'Linking' was done by loading the routines to the desired memory location and adapting the branching instructions manually. Here[^] you have a recreation of my little 'library'. I did it in two boring days before Christmas, using an assembler and an emulator on my notebook. Pure luxury, just like using the memory-wasting 64 x 64 resolution. :) I have been writing libraries for and against everything ever since. At first that may be slow, but over time you have so much useful and working code that you can do great things very quickly. It has always been like Lego, with the difference that I made more of my own pieces than buying boring sets off the shelf. And, if I have the choice, I prefer to spend my time crafting some new parts to the best of my ability rather than trying to read up on the next great thing a company wants to sell me and struggling to adapt my code just in time for the next big change. Or perhaps I just enjoy being a black sheep that refuses to run along with the rest of the herd :)
That, sir, is really cool :cool: Also, your Lego metaphor is completely right and gets straight to the point: building software from blocks that fit together like Lego bricks is of course always possible, but some platforms make it easy and others make it hard... Cheers, Paul