Perhaps I should write an article

The Lounge
Tags: csharp, c++, database, design, regex
  • jschell

    CDP1802 wrote:

    I think each property should be initialized to a certainly invalid value,

    There are two problems with that. First, it assumes that default valid values do not exist. Second, it assumes that all data types will always have an invalid 'value'. That is of course a false presumption.

    CDP1802 wrote:

    Anyway, the initially invalid values help in detecting bugs when properties of the data objects are accidentally not filled.

    So will unit tests and system tests. Which you must have anyways.

    CDP1802 wrote:

    In the constructor of the data object this field and all others that are needed

    This idiom is not always suitable for object initialization. For example, if several methods need to fill in different data for one object, and construction is the only way to set the data, then one would need to come up with a different container for each method to collect the data first.

    CDP1802 wrote:

    The code to implement the base class of the data objects

    Best I can suppose is that you are suggesting using inheritance for convenience and nothing else. And that is a bad idea. You should be using helper classes and composition.

    CDP1802 wrote:

    They make preparing new data objects that pass validation much easier.

    I doubt that assertion. Validation can encompass many aspects including but not limited to pattern checks, range checks, multiple field checks, cross entity checks, duplication checks, context specific checks, etc. There is no single catch-all strategy that allows one to solve all of those.

    CDP1802 wrote:

    A small dispute with some Java developers over this (I actually only wanted to add a validation method to the data objects, not the whole bible) also cost me my last job in the end. Anything but struct-like data objects was not their 'standard'.

    Presuming that you did in fact want to do nothing but add simple validation then their stance was idiotic. However you could have just as easily created a shadow tree that mimicked all of the data objects to provide validation.

    CDP1802 wrote:

    and now the whole world is religiously imitating the guru's 'standard'?

    Lost User
    #19

    jschell wrote:

    There are two problems with that.
     
    First it assumes that default valid values do not exist.
     
    Second it assumes that all data types will always have an invalid 'value'. That is of course a false presumption.

    No assumptions at all. At initialisation I want to have each property set to a value that says 'this property has not yet been filled'. I also do not assume that the data objects, wherever they may come from, have been properly filled and checked. When I encounter a 'not filled' value, I know that there is something wrong. I do not want this to go undetected and quietly use a valid default. That's sweeping an existing problem under the rug. As to the values themselves: Fortunately there are such things as 'NaN' for numerical types and you can also define values for that purpose which are highly unlikely ever to be needed. How often did you need a DateTime with a value like 23 Dec 9999 23:59:59?
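    (A rough C++ sketch of what such 'not filled' sentinels could look like - the struct, field names and sentinel choices here are made-up examples, not actual library code:)

    #include <cmath>     // std::nan, std::isnan
    #include <cstdint>
    #include <limits>
    #include <string>

    // Made-up sentinels that mean "this property has not been filled yet".
    const double      UNSET_AMOUNT = std::nan("");
    const int64_t     UNSET_ID     = std::numeric_limits<int64_t>::min();
    const std::string UNSET_MAIL   = "<unset>";   // no user would ever enter this

    struct CustomerRecord {
        int64_t     id     = UNSET_ID;
        double      amount = UNSET_AMOUNT;
        std::string mail   = UNSET_MAIL;

        // True only when every property has been overwritten with real data.
        bool IsFilled() const {
            return id != UNSET_ID && !std::isnan(amount) && mail != UNSET_MAIL;
        }
    };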

    jschell wrote:

    So will unit tests and system tests. Which you must have anyways.

    Having seen often enough how unit tests are treated (especially when deadlines are close), I don't invest too much trust in them. Even then a unit test will have a hard time detecting an omission when it has been filled with a valid(!) default. And, by the way, a unit test that tests a single validation method like I want can already be a nightmare. Just think of a data object with dozens of properties with more complex validation rules. Having the same nightmare in every layer redundantly does not really make anything better. Anyway, I'm much more concerned about what happens at runtime. I have seen too many imprecise specifications, unexpected data or even 'clever' users that made a sport out of trying to crash the server. That particular application had no unit tests at all, but extensive diagnostics and logging under the hood. My last test went over the entire production database and was repeated until the job was completed successfully. And then it ran without a single incident for years until I left the company. I must have done something right.

    jschell wrote:

    I doubt that assertion. Validation can encompass many aspects including but not limited to pattern checks, range checks, multiple field checks, cross entity checks, duplication checks, context specific checks, etc. There is no single catch-all strategy that allows one to solve all of those.

    • Lost User

      I'm sitting here rewriting my former C# libraries in C++, and have come to a subject which I obviously see very differently than the rest of the world. I'm talking about data objects, those objects which are passed between all layers of an application from the UI down to the database. Wherever you look, you are told that the data objects should be simple containers. That's where I start to see things differently. I think each property should be initialized to a certainly invalid value, not just left to whatever defaults the properties may have in a freshly created data object. Picking such values may not be so easy. Just think of an integer database column that allows NULL. The definition of invalid values should also be done in a non-redundant way, not in the constructor of some data object. Anyway, the initially invalid values help in detecting bugs when properties of the data objects are accidentally not filled. That assumes, of course, that the values of data objects are validated at all.

      How should the validation be done? The application logic must validate the data objects before doing anything with them. That's its job. It can't simply assume that validation has already been done in the UI. Who guarantees that the validation in the UI was complete and correct or was done at all? How do we guarantee that the UI and the application logic validate exactly in the same manner? My answer: A smarter data object, not just a simple struct. To begin with, the data objects get a collection to hold data field objects which now represent the properties. The data fields define invalid and (where needed) maximum and minimum values for all basic data types. They form a small class hierarchy and allow you to create more project-specific types by inheritance.

      Let's take a string as an example. In the database a column may be declared as VARCHAR(some length). The corresponding data field should then make sure that the string never exceeds the size of the column. Exceptions or truncation may otherwise be the result, both not wanted. Now let's say that not just any string of up to this length will do. Let's say it's supposed to hold a mail address and has to be checked against a regex. It's just a matter of deriving a regex data field from the string data field and overriding its Validate() method. In the constructor of the data object this field and all others that are needed are created. In this case the maximum length and the regex to check against would have to be set. Now we have the constructor of the data
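      (Just to make the described idea concrete - a bare-bones C++ sketch with made-up class names, not the actual library:)

      #include <cstddef>
      #include <regex>
      #include <string>

      // A string field that knows its column size...
      class StringField {
      public:
          explicit StringField(std::size_t maxLength) : maxLength_(maxLength) {}
          virtual ~StringField() {}

          void Set(const std::string& value) { value_ = value; filled_ = true; }

          // Valid only if it was filled and fits into VARCHAR(maxLength).
          virtual bool Validate() const { return filled_ && value_.size() <= maxLength_; }

      protected:
          std::string value_;
          bool        filled_ = false;

      private:
          std::size_t maxLength_;
      };

      // ...and a mail-address field derived from it that adds a regex check.
      class MailAddressField : public StringField {
      public:
          MailAddressField(std::size_t maxLength, const std::string& pattern)
              : StringField(maxLength), regex_(pattern) {}

          bool Validate() const override {
              return StringField::Validate() && std::regex_match(value_, regex_);
          }

      private:
          std::regex regex_;
      };

      (The data object itself would then just hold a collection of such fields, and its own Validate() would loop over them.)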

      etkid84
      #20

      Only instantiate a typed container that you know will have all of its fields filled with valid data at the time you instantiate it? This reminds me of some user interfaces that have menu items grayed out instead of not being present at all, because they're inappropriate or there is no reason to have them exist in the list. What say you?

      David

      • Lost User

        I'm sitting here rewriting my former C# libraries in C++, and have come to a subject which I obviously see very differently than the rest of the world. …

        Member_5893260
        #21

        I think there's probably a trade-off between how smart it gets and how inefficient it becomes. Beware of doing anything which requires another programmer to spend a week learning how your stuff works before he can do anything with it. Unless there's a marked gain in security or elegance, it's probably not worth going to an extreme with it. Also note that by doing this, you're forcing the data model to conform to your ideas - i.e. it becomes harder to break the rules when you need to. I'd be careful of deciding ahead of time that all data will always conform to the way these objects work. Again, it's a trade-off... but make sure it's still efficient and still flexible.

        • Lost User

          I'm sitting here rewriting my former C# libraries in C++, and have come to a subject which I obviously see very differently than the rest of the world. …

          RafagaX
          #22

          CDP1802 wrote:

          Who guarantees that the validation in the UI was complete and correct or was done at all?

          I believe you're a bit paranoid... ;P Seriously, I believe the idea is good, but given that it will introduce some overhead in the normal development workflow, you must first justify why you want such validations, then make them generic enough so they can be used with (ideally) no modification in any project and finally release them as a nice open source (MIT licensed) library/framework. :)

          CEO at: - Rafaga Systems - Para Facturas - Modern Components for the moment...

          • Lost User

            No assumptions at all. At initialisation I want to have each property set to a value that says 'this property has not yet been filled'. …

            jschell
            #23

            CDP1802 wrote:

            At initialisation I want to have each property set to a value that says 'this property has not yet been filled'.

            I understand your proposed solution. Because I have done it. And tried various variations as well. And the ONLY way to do it for all cases is to have a second flag property for every real property which indicates whether it has been set yet.
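            (For illustration, a rough C++ sketch of that flag-per-property variant - the struct and names are hypothetical:)

            #include <string>

            // Every real property gets a companion flag saying whether it was ever set.
            struct CustomerDto {
                int         id      = 0;
                bool        idSet   = false;
                std::string name;
                bool        nameSet = false;

                void SetId(int value)                  { id = value;   idSet = true; }
                void SetName(const std::string& value) { name = value; nameSet = true; }
                bool IsComplete() const                { return idSet && nameSet; }
            };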

            CDP1802 wrote:

            I do not want this to go undetected and quietly use a valid default. That's sweeping an existing problem under the rug.

            A default value is often an appropriate solution and I haven't seen any evidence that the problem that you are attempting to solve is significant. (I should note that I create a lot of APIs that use data transfer objects and have been doing so for years.)

            CDP1802 wrote:

            As to the values themselves

            By definition a magic value is magic. The value chosen doesn't alter that it is intended to be magic.

            CDP1802 wrote:

            I must have done something right.

            And you are suggesting that the only reason for this success is this proposed idiom?

            • Lost User

              I'm sitting here rewriting my former C# libraries in C++, and have come to a subject which I obviously see very differently than the rest of the world. …

              Lost User
              #24

              "I'm talking about data objects, those objects which are passed between all layers of an application from the UI down to the database." I thought "data objects" (generally) only moved between the DAL (data access layer) and the database. It was the "business object" that talked to the DAL via the "business layer" (or "model") and talked to UI via the presentation layer (or "view") (and vise versa). By themselves, data objects have no knowledge of referential integrity or what is required to complete a "transaction" (which may require "many" data objects); that's the domain of the business object and it's (business) "rules". A data object might contain some "basic" validations, but it can't know all the possibities without having some idea of the overall context it is operating in (and which may change as the transaction is being constructed).

              • Lost User

                I'm sitting here rewriting my former C# libraries in C++, and have come to a subject which I obviously see very differently than the rest of the world. …

                kelton5020
                #25

                If you're rewriting in C++ with the CLR you have Nullable types, with which any value type can be null (this is how most ORMs deal with ints and bools, etc.).

                • kelton5020

                  If you're rewriting in C++ with the CLR you have Nullable types, with which any value type can be null (this is how most ORMs deal with ints and bools, etc.).

                  Lost User
                  #26

                  Thanks, but the point is to port everything away from Microsoft. And in unmanaged C++ every type is nullable, isn't it?

                  • Lost User

                    Thanks, but the point is to port everything away from Microsoft. And in unmanaged C++ every type is nullable, isn't it?

                    kelton5020
                    #27

                    Ah ok. In natural C++ though, ints, bytes, chars, and bools aren't nullable as far as I've ever known.

                    • kelton5020

                      Ah ok. In natural C++ though, ints,bytes,chars,and bools aren't nullable as far as I've ever known.

                      Lost User
                      #28

                      It's called a pointer :)

                      • Lost User

                        It's called a pointer :)

                        kelton5020
                        #29

                        Yeah, but a pointer isn't any specific type, it's a reference to a location in memory.

                        • kelton5020

                          Yeah, but a pointer isn't any specific type, it's a reference to a location in memory.

                          Lost User
                          #30

                          How about int * PointerToSomeInteger = NULL;

                          • Lost User

                            How about int * PointerToSomeInteger = NULL;

                            kelton5020
                            #31

                            Yeah, you could create pointers for doing nulls, but it would probably make more sense to have some sort of default convention.

                            int MyInt = 0;
                            if (MyInt == 0) // equivalent to null

                            or if you need to use 0

                            int MyInt = -1;
                            if (MyInt == -1) // null

                            or if you need the entire integer range

                            int MyInt = 0;
                            bool isIntNull = true;
                            // do work here
                            if (isIntNull) // int is null, regardless of the int's value

                            OR (it's pretty damn basic, and I just threw it together in notepad so it may not compile, but the idea would work):

                            template<typename theType>
                            class Nullable {
                                bool isNull;
                                theType Value;
                            public:
                                Nullable() {
                                    isNull = true;
                                }
                                bool IsNull() {
                                    return isNull;
                                }
                                theType GetValue() {
                                    return Value;
                                }
                                void SetValue(theType val) {
                                    isNull = false;
                                    Value = val;
                                }
                            };
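                            (Usage would then be along these lines:)

                            void Example() {
                                Nullable<int> count;                  // starts out "null"
                                if (count.IsNull()) { /* not filled yet */ }
                                count.SetValue(42);
                                int value = count.GetValue();         // 42, and IsNull() now returns false
                                (void)value;
                            }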

                            • kelton5020

                              Yeah, you could create pointers for doing nulls, but it would probably make more sense to have some sort of default convention. …

                              Paul Michalik
                              #32

                              Have a look at boost::optional, these things tend to be tricky in today's C++. I've a question to the thread author: You've apparently decided to rewrite a working piece of software, to "move away from Microsoft", if I may paraphrase... It appears to be more like "moving from managed to native", but why? This kind of thing is where the managed world offers you a fast, efficient and safe way to get your job done. Trying to rewrite this in native "bare metal" C++ is awkward, error prone and lengthy...
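                              (For reference, a minimal sketch of the boost::optional idea mentioned above - roughly what the hand-rolled Nullable earlier in the thread does:)

                              #include <boost/optional.hpp>

                              void Example() {
                                  boost::optional<int> maybeCount;      // empty: "not filled"
                                  if (!maybeCount) { /* nothing set yet */ }
                                  maybeCount = 42;                      // now holds a value
                                  int value = *maybeCount;              // 42
                                  (void)value;
                              }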

                              • kelton5020

                                Ah ok. In natural C++ though, ints,bytes,chars,and bools aren't nullable as far as I've ever known.

                                jschell
                                #33

                                kelton5020 wrote:

                                Ah ok. In natural C++ though, ints,bytes,chars,and bools aren't nullable as far as I've ever known

                                Correct.

                                • Lost User

                                  How about int * PointerToSomeInteger = NULL;

                                  jschell
                                  #34

                                  CDP1802 wrote:

                                  int * PointerToSomeInteger = NULL;

                                  However that doesn't solve what happens when you do have a nullable int. Unless you are going to solve the problem by making every data type into a pointer. Thus int -> int*, int* -> int**, int** -> int***, string -> string*, string* -> string**, etc. That is similar to what I already said in that the only way to implement this is to add a flag for every property. Making every type into a pointer is just a different way of adding a flag. (And repeating what I also said: I have in fact implemented a DTO structure with flags for every property and used it to indicate whether the value was set - and ultimately I did not find the representation useful.)

                                  • jschell

                                    CDP1802 wrote:

                                    int * PointerToSomeInteger = NULL;

                                    However that doesn't solve what happens when you do have a nullable int. …

                                    Lost User
                                    #35

                                    Well, I allow nulls in the database sparingly anyway, that's why it is not a big issue. Detecting unfilled properties has mostly proven useful when external modules or web services are involved. I have already seen such things happen without an error because the client had been compiled without using the current WSDL of the service. In itself it may not be important. It's just an additional benefit that comes at no additional cost.

                                    • Paul Michalik

                                      Have a look at boost::optional, these things tend to be tricky in today's C++. …

                                      Lost User
                                      #36

                                      Over the last few years I have found myself constantly rewriting parts of the code because of Microsoft's changing strategies. Each time I got a little more off the beaten path and did things my own way. The third big revision moved on to a completely self-made UI that depended on nothing more than the .Net framework - and XNA. I was finally making some progress on the program itself when they pulled the plug on XNA. This time I will not waste my time rewriting everything in whatever way they have come up with now. I'm sure that they will be on another course again by the time I would be finished with that.

                                      But what makes you think that C++ is so scary? I already had to rely on my own code more than the .Net framework anyway. In fact, the only things I'm going to miss are Reflection and XAML. The ability to load views and styles in the UI from XAML markup with only a few lines of code was really a treat. Still, it's not worth dooming yourself to rewriting your code forever. I also don't really see the lengthy and error-prone part. Mostly I would use the same class design as before, and I also never had any problems with pointers or memory management. Before .Net arrived we actually got things done as well. I worked for a company that made solutions for document archiving. Even a smaller customer could want to have 100,000 or more documents processed each day. An error rate of 1% was not acceptable since there would not be a chance to correct so many documents manually. I think that's fast, efficient and safe enough. Going in that direction again actually feels good.

                                      • Lost User

                                        Over the last few years I have found myself constantly rewriting parts of the code because of Microsoft's changing strategies. …

                                        Paul Michalik
                                        #37

                                        Well, yeah, XNA is a Microsoft-specific technology which might change or go away. But this is a common problem with any vendor-bound framework, so you are always making a kind of bet. This is however still not a reason which would make me move away from the managed world. You will agree that a well-designed application can move dependencies on specific technologies to distinguished layers which can be replaced by some other implementation if a technology is not appropriate any more. In the case of XNA I'd even give the various clones and ports a chance. I assume you know that software written for .Net is not bound to be executed on Microsoft's platforms...

                                        But that's not the point. I didn't say that native C++ is "scary" (but it's not a bad attribute at all), it is just, uhm, awkward to get certain things done. It's mainly because of the lack of (good and out-of-the-box available) high-level libraries, but also due to limitations of the deployment and compatibility models of C++. With C++11 you got another dimension of incompatibility between the various implementations and another one for the complexity. Things have gotten quite a bit better lately, with Boost growing strongly and getting integrated into the standard fast, and good libraries being thrown to the public by Microsoft, Facebook and others, but still: for many kinds of problems, doing certain things (e.g. those you have described in your post) remains awkward.

                                        Cheers, Paul

                                        P.S. I have to add that in my job I am a native C++ developer and have been for a very long time... I got so frustrated by certain awkward trivialities that I started to rewrite and publish the vex library which I am using to circumvent some really "vexing" parts of common developer's life...

                                        • Paul Michalik

                                          Have a look at boost::optional, these things tend to be tricky in today's C++. …

                                          kelton5020
                                          #38

                                          It was just an example.
