Database connection

  • D dojohansen

We largely agree then. You're definitely right that using a reader and assigning the properties/fields of a class is faster to load than datasets. I just don't think the difference actually matters in all applications.

More importantly, data sets have some very nice features. They lend themselves to AJAX and web services rather well since their internal representation is XML, meaning they serialize and deserialize to and from XML very efficiently. You can save them to files and modify them in disconnected mode, such as on a laptop on the road, and easily sync back to some other store ("the central database") at a later time. You can filter the data and sort it very easily, add relations between entities of data dynamically, verify constraints, cascade deletes, and detect concurrency violations. All of this is out-of-the-box functionality you get just by using datasets and adapters.

If you want to get the same functionality but use your own custom data objects, you run into a bunch of other constraints. Of course it is possible to recreate this sort of flexible and rich functionality in your entity objects, but trust me, if you do they will not be as lightweight anymore. To implement things like the DataTable.Select() method or DataViews and filters, or relations, you need to start including metadata in the classes or use reflection to discover it, and you then lose the raw speed of primitive data objects with hardcoded relations, as in obj.Name = reader.GetString("name");. Sometimes this extra speed is necessary, or at least highly desirable, for an app to do its job properly, but other times the user wouldn't even be able to tell any difference at all. So for me, dismissing datasets and adapters isn't a decision to be taken before you've asked yourself - and answered - this question: How much of the functionality it offers is useful to me? How much might become useful down the road?

A specific example where datasets may be an excellent fit: you're writing an AJAX-enabled web app and have this idea: what if we simply use the same schema for the XML data on the wire as that used by the DataSet itself? We can have a client-side component (such as a table with in-place editing capability) effectively perform insert, update, and delete operations on the disconnected dataset without having to contact the server at all, neither by postback nor any AJAX callback. The UI would then basically just become a specialized XML editor. After multiple edits have been made and are ready to be persisted, the client makes an AJAX request the body of which is the edited XML document, and the server simply creates a dataset from the XML and uses DataAdapter.Update() to delete, insert, and update as required, detecting concurrency violations if any.
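A minimal sketch of the out-of-the-box behavior described above - filtering, sorting, and an XML round trip on a disconnected DataSet. The schema and file name are made up for illustration:

using System;
using System.Data;

class DataSetFeaturesDemo
{
    static void Main()
    {
        // Build a small disconnected DataSet (schema is illustrative).
        var ds = new DataSet("Shop");
        DataTable customers = ds.Tables.Add("Customer");
        customers.Columns.Add("Id", typeof(int));
        customers.Columns.Add("Name", typeof(string));
        customers.Rows.Add(1, "Ada");
        customers.Rows.Add(2, "Bob");

        // Filter and sort in memory; no database involved.
        DataRow[] hits = customers.Select("Id > 1", "Name ASC");
        Console.WriteLine(hits.Length);

        // Persist to a file for disconnected editing, then reload later.
        ds.WriteXml("shop.xml", XmlWriteMode.WriteSchema);
        var copy = new DataSet();
        copy.ReadXml("shop.xml");
    }
}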

Colin Angus Mackay
#26

    dojohansen wrote:

    data sets have some very nice features. They lend themselves to AJAX and web services rather well since their internal representation is XML

The internal representation of a DataSet is not XML. That would be incredibly inefficient.

    dojohansen wrote:

    To implement things like the DataTable.Select() method or DataViews and filters, or relations, you need to start including metadata in the classes or use reflection to discover it, and you then lose the raw speed of primitive data objects with hardcoded relations

Using LINQ to Objects you don't have these problems: it is built into the language, and the type discovery is done at compile time, so it won't slow your application down at run time.
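For example, a minimal sketch with a hypothetical Customer entity, showing compile-time-checked filtering and sorting over plain objects:

using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical lightweight entity - no metadata, no reflection.
class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class LinqToObjectsDemo
{
    static void Main()
    {
        var customers = new List<Customer>
        {
            new Customer { Id = 1, Name = "Ada" },
            new Customer { Id = 2, Name = "Bob" }
        };

        // Roughly the equivalent of DataTable.Select("Id > 1", "Name ASC"),
        // but checked by the compiler instead of parsed from a string.
        var hits = customers.Where(c => c.Id > 1).OrderBy(c => c.Name);
        foreach (Customer c in hits)
            Console.WriteLine(c.Name);
    }
}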

    dojohansen wrote:

    How much of the functionality it offers is useful to me? How much might become useful down the road?

I've already discussed the use cases where I think DataAdapters/Sets/Tables work. They are few, because that functionality is more useful to me in the business model. I don't like data artifacts to exist outside the data layer, because it means I can't swap the database for a flat file, XML file, web service, or any number of other things that don't natively talk in DataSets. I like my applications to be clean and easily understood. DataSets have no business functionality. They cannot tell me the business rules regarding refunds, or how certain discounts can be applied. Sure, I can see that the data is there to support refunds or discounts, but without the business rules it isn't nearly rich enough.

    dojohansen wrote:

    the client makes an AJAX request the body of which is the edited XML document, and the server simply creates a dataset from the XML and uses DataAdapter.Update() to delete, insert, and update as required, detecting concurrency violations if any.

And where do the business rules fit into all this? Maybe we are creating different types of application, and yours is an interface onto a database, but in every application I've ever worked on there have been rules, validation, and conditions that have to be met before the data is allowed anywhere near a database. There is more to persisting data than concurrency violations.

    dojohansen wrote:

    Personally, I quite like the datasets.

    • D dojohansen

Larger amounts of data compared to what? The alternatives are many, and JSON might be a great alternative in some ways, but the truth is that JSON has its own problems and you'd have to write a bunch of code to deal with it. And it's simply not true that DataSets "aren't interoperable". They're just XML, and if using Java means "a lot of code" is "going to have to be written at the Java end" to pick out nodes from an XML document, then Java is pretty weak. That said, I'd agree they're not the best choice for web services if (as ought to be the case) you intend the service to be easily consumed by non-.NET clients.

However, it's not like entity objects solve this problem! If you don't implement ISerializable but try to go down the .NET XML serialization route, you can no longer encapsulate any of the object's state properly, because anything you'd like to serialize must be public and read-write. This is because .NET actually uses reflection to generate code for a serializer, incurring the reflection cost once, and then uses the compiled serializer to do the actual serialization. Just like compiled regex this gives great performance, but the problem of course is that you can't serialize any properly designed OOP objects. I personally wish that the Serializable attribute would instead cause the automatic inclusion of an implementation of ISerializable within the same type, so that we could serialize and deserialize otherwise readonly or private members without incurring any reflection cost.

So I think you're getting off the hook too easily if we just let you point out what overhead is involved with one solution without saying anything about how it should actually be done. I am sure that no matter what you suggest as an alternative solution, it will have some drawbacks of its own.
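A minimal sketch of that trade-off, with hypothetical types: XML serialization forces public read-write members, while ISerializable lets a type keep its state encapsulated at the cost of hand-written plumbing:

using System;
using System.Runtime.Serialization;

// XmlSerializer can only round-trip public, read-write state, so a type
// aimed at .NET XML serialization has to give up encapsulation:
public class XmlFriendlyCustomer
{
    public string Name { get; set; }   // must be public and settable
}

// ISerializable lets a type serialize private state by hand instead:
[Serializable]
public class EncapsulatedCustomer : ISerializable
{
    private readonly string name;

    public EncapsulatedCustomer(string name) { this.name = name; }

    // Special constructor the runtime calls to deserialize.
    protected EncapsulatedCustomer(SerializationInfo info, StreamingContext context)
    {
        name = info.GetString("name");
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("name", name);
    }
}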

Colin Angus Mackay
#27

      dojohansen wrote:

They're just XML, and if using Java means "a lot of code" is "going to have to be written at the Java end" to pick out nodes from an XML document, then Java is pretty weak.

      I'm going to defer to one of the industry experts on interoperability here (Christian Weyer) who advises against DataSets when dealing with non-.NET systems. Sure, it is easy for you, but for the guy on the other end who has to replicate the schema it is hellish. It isn't just about picking out a few XML nodes. When there is a response back you have to form a response that will deserialise into a dataset too.


      • D dojohansen

Hi, the answer is that in ADO.NET you should always close the connection when it isn't in use. The connection classes manage the underlying TCP connection for you, so although you are "logically" closing the connection, you are not in fact incurring the cost of tearing down and re-establishing the database connection each time.

In practice this makes the most difference in server apps, where connection pooling is of great use. Whenever your code closes a connection it in fact simply releases that connection to the pool, and the next time a connection is constructed with the exact same connection string, the pre-existing connection is returned. (There is something involved to make the connection state as if it were freshly established - see 'sp_reset_connection' in SQL Server; I'm not sure exactly how this is implemented by the other providers.)

But even in a desktop app where each client has a dedicated connection to the database and no pooling takes place, it's considered good practice to open and close the connection. I think it is; it makes error handling a little easier. You still have to catch exceptions and perhaps log and present errors, but if the user wishes to retry an operation there's no additional logic to check the state of the connection or find out if it's necessary to open it first, because you simply *always* open it where you need it and close it when you're done with it.

I personally use a simple connection wrapper class to centralize the code implementing the patterns I wish to use. You may not bother doing this if you use code generation for most of your data access code, but if you hand-code this stuff it makes a huge difference - much less code, far fewer errors, and much easier debugging. And if you just put this class in a separate library and never put anything app-specific in it, you'll start building reusable code that has applications everywhere. For example, with my Connection class you can do very common tasks like these very easily:

// Get a connection from configuration. Why rewrite the same code all over the place?
Connection c = Connection.FromConfig("mainDB");

// Similar with the open-and-close logic. Why not centralize it so it's transparent?
int count = c.ExecuteScalar("select count(*) from [table] where [col] < @p0", value);

// SqlTransaction is a mess; it doesn't support nesting, and you must write code to associate each
// SqlCommand with the transaction object before executing it. Why not automate this tedious work?
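The post is cut off at this point. A sketch of what such an automated transaction helper might look like - an assumption for illustration, not the Connection class from the post:

using System;
using System.Data.SqlClient;

// Hypothetical helper in the spirit described above: open the connection,
// start the transaction, enlist the command, and commit or roll back.
public static class Db
{
    public static void InTransaction(string connectionString, Action<SqlCommand> work)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                SqlCommand cmd = conn.CreateCommand();
                cmd.Transaction = tx;   // the association SqlTransaction makes you write by hand
                try
                {
                    work(cmd);
                    tx.Commit();
                }
                catch
                {
                    tx.Rollback();
                    throw;
                }
            }
        }
    }
}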

Colin Angus Mackay
#28

        dojohansen wrote:

        The connection classes manage the underlying TCP connection for you

Assuming you are connecting to SQL Server over a TCP/IP connection. :-D


        • P Pete OHanlon

          dojohansen wrote:

          Personally I think data adapters are useful and the disconnected data model can be enough for many things.

Don't forget memory intensive, and a real no-no when it comes to interoperability. Speaking as somebody who spent a lot of time writing code that communicates with Java-based systems, I can tell you that DataSets etc. are just plain evil.

          "WPF has many lovers. It's a veritable porn star!" - Josh Smith

          As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.

          My blog | My articles | MoXAML PowerToys | Onyx

dojohansen
#29

          I think you mean "speaking as someone who considers himself quite the expert".

          Pete O'Hanlon wrote:

          Don't forget memory intensive, and a real no-no when it comes to interoperability.

          I believe you mean "scalability". If I am mistaken, I invite you to explain to us all how whether or not something is "memory intensive" (again, it's not very clear what you mean by this) affects whether or not it is interoperable. I for one have no idea. Exposed in a web service the dataset becomes an xml stream that the client may process as it comes in over the wire, save to a file and then process as a stream, or load into memory in full and then process in a random-access manner. I cannot imagine how the "memory intensity" of datasets can possibly affect their interoperability. If you mean "takes up a large amount of memory" then datasets are "intensive" in the sense that you can't easily work with them as streams. But again, a drawback that represents a *potential* issue in *some* usage scenarios is presented in a totally dogmatic fashion as if it was a disqualifying feature of the technology. And again, it's not like entity objects solve this problem either. If you need to keep a dataset in memory there's some reason why that does not magically disappear just because you choose to represent the data in a different way. It may *help* with a more compact representation, but only a totally different approach like streaming would really *solve* such an issue. So again, my challenge to you: If Datasets are so universally bad, describe a solution that is universally better. I assure you it will not be difficult to do like you did and just point out some potential problems that may exist in your solution and pretend that these automtically make it useless, even though the simple truth is that *ANY* solution has drawbacks and advantages compared to any other.
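For comparison, a minimal sketch of such a streaming approach, reading elements as they arrive instead of loading the whole document into memory (element and attribute names are illustrative):

using System;
using System.Xml;

class StreamingDemo
{
    static void Main()
    {
        // Forward-only read; memory use stays flat regardless of document size.
        using (XmlReader reader = XmlReader.Create("data.xml"))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "Customer")
                    Console.WriteLine(reader.GetAttribute("Name"));
            }
        }
    }
}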

          • C Colin Angus Mackay


dojohansen
#30

            Colin Angus Mackay wrote:

            I'm going to defer to one of the industry experts on interoperability here (Christian Weyer) who advises against DataSets when dealing with non-.NET systems.

Unless your only aim in this discussion is to create the impression that you are right and everybody else is wrong, why would you quote someone saying the same thing I just said, pretending we disagree on something we do not? I already said I agree it's not ideal for non-.NET clients, although I maintain that it would not in fact be much of a challenge for a moderately skilled developer to read this XML and turn it into objects native to the client. But please don't quote just that bit and pretend I don't agree that the WSDL should let his tools do that for him and that he shouldn't have to concern himself with XML, because I do agree. It's still valid to point out that it wouldn't be "hellish".

            • C Colin Angus Mackay


dojohansen
#31

              True. It's hard to comment on anything here with all the gotchas hiding in the bushes. :-\ ;P

              • C Colin Angus Mackay


dojohansen
#32

                Colin Angus Mackay wrote:

The internal representation of a DataSet is not XML. That would be incredibly inefficient.

I'll take your word for it. I've read in several places that it is, for example here[^], which is supposed to be about interview questions for programmers. The relevant bit is point 3, which reads: "For a dataset XML is the internal representation as well as the medium used for output, but for a recordset XML is merely an output format." I am not really sure why you say it would be "incredibly inefficient", as trees aren't generally inefficient structures, at least not in terms of time. Space-wise they might be, although that really depends on whether there are a lot of nodes with little data or a few nodes with plenty - I guess most database data would lend itself towards the first case though.

                Colin Angus Mackay wrote:

Using LINQ to Objects you don't have these problems: it is built into the language, and the type discovery is done at compile time, so it won't slow your application down at run time.

LINQ may be better, but once again you're only proposing alternatives that work if you already have an entity layer. I think you are wrong to assume every application has a lot of business logic. Some servers, for example, do little more than expose (read-only) data to the world.

                Colin Angus Mackay wrote:

And where do the business rules fit into all this?

Oh come on! It's not like basing the business code on datasets is much harder than building it into your entity types - especially if those are generated and the darn tool can't make partial classes. Sure, the typed datasets have awful names, but if I include a few aliases in my code file I can write code that is virtually identical to your beloved entity code to do validation. You don't seem to acknowledge that if you don't use datasets for CRUD and serialization/deserialization, you have to use something else. LINQ may be a good alternative for CRUD; I must admit I don't like it though, as it's totally opaque and one never quite knows what's going on under the hood, even if it does seem to do a pretty good job, at least with LINQ to SQL. I haven't got the experience with it to really judge. But for serialization
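A sketch of the kind of dataset-based validation meant above, assuming a plain DataTable with an illustrative schema rather than a generated typed dataset:

using System;
using System.Data;

class DataSetValidationDemo
{
    static void Main()
    {
        var orders = new DataTable("Order");
        orders.Columns.Add("Discount", typeof(decimal));

        // A business rule hung directly off the dataset.
        orders.ColumnChanging += (sender, e) =>
        {
            if (e.Column.ColumnName == "Discount" && (decimal)e.ProposedValue > 0.5m)
                throw new ArgumentException("Discount may not exceed 50%.");
        };

        DataRow row = orders.NewRow();
        orders.Rows.Add(row);
        row["Discount"] = 0.1m;    // passes the rule
        // row["Discount"] = 0.9m; // would throw
    }
}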

                • D dojohansen


dojohansen
#33

By the way, I forgot to say that I tried to look at DataSet in Reflector, and though I didn't dig very deep, it seems pretty clear that you're correct and that site is wrong: XML, it seems, is indeed not the internal representation. I just wanted to excuse myself by showing this misconception exists elsewhere. I do not, however, wish to continue spreading it!

                  • C Colin Angus Mackay


dojohansen
#34

And a final note (I hope - this is fun, but it's getting a bit long and I fear I might be starting to annoy people by now, even if they don't have to read my scribblings). Wikipedia, that sometimes authoritative source of information about anything (it always makes me think of THHGTTG), has an article on ADO.NET. It says this about datasets:

                    A DataSet is populated from a database by a DataAdapter whose Connection and Command properties have been set. However, a DataSet can save its contents to XML (optionally with an XSD schema), or populate itself from XML, making it exceptionally useful for web services, distributed computing, and occasionally-connected applications.
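A minimal sketch of that save/load round trip, using a DiffGram so that pending inserts, updates, and deletes survive for the occasionally-connected case (schema and file name are illustrative):

using System.Data;

class DiffGramDemo
{
    static void Main()
    {
        var ds = new DataSet("Shop");
        DataTable t = ds.Tables.Add("Customer");
        t.Columns.Add("Id", typeof(int));
        t.Rows.Add(1);
        ds.AcceptChanges();          // baseline: no pending changes

        t.Rows.Add(2);               // pending insert
        t.Rows[0].Delete();          // pending delete

        // A DiffGram carries row states, not just values, so the
        // changes can be replayed against the central store later.
        ds.WriteXml("changes.xml", XmlWriteMode.DiffGram);

        DataSet offline = ds.Clone();                        // schema only
        offline.ReadXml("changes.xml", XmlReadMode.DiffGram);
    }
}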

                    • D dojohansen


Pete OHanlon
#35

                      dojohansen wrote:

They're just XML, and if using Java means "a lot of code" is "going to have to be written at the Java end" to pick out nodes from an XML document, then Java is pretty weak.

It's the fact that you HAVE to write the code to extract the data that's the problem. A DataSet, for example, is a Microsoft-only construct. This means that you have to write code at the other end to navigate the structure and extract the data - yes, you get this for free in .NET because the framework takes care of it for you, but having dealt with the Java world enough, I know that you'll just end up with people using other resources. The larger amount of data refers to the fact that a DataSet isn't just data - it's the whole metamodel for a non-specific data structure. We use well-defined business objects that contain just the information we're interested in, and use WCF to make this available.
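A minimal sketch of that approach - a hypothetical WCF contract exposing only the fields the consumer needs, rather than a DataSet's whole metamodel:

using System.Runtime.Serialization;
using System.ServiceModel;

// Only the data the consumer cares about crosses the wire,
// described by a schema any platform can consume.
[DataContract]
public class CustomerInfo
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerInfo GetCustomer(int id);
}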

                      "WPF has many lovers. It's a veritable porn star!" - Josh Smith

                      As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.

                      My blog | My articles | MoXAML PowerToys | Onyx

                      • D dojohansen


Pete OHanlon
#36

                        dojohansen wrote:

                        I think you mean "speaking as someone who considers himself quite the expert".

Nope. And don't try to put words into my mouth. I'm speaking as somebody who writes software that has to interoperate with systems running on Java, or other non-.NET systems, where DataSets just add complications and overheads.

                        dojohansen wrote:

                        I believe you mean "scalability".

You're right. That's what comes of thinking three sentences ahead of what I'm typing. Also - where did I say that DataSets are universally bad? At no stage did I state that. I did state that they are a no-no when it comes to interoperable systems, and this is based on hard-learned lessons, with paying clients. Please stop reading more into posts than was ever intended. If I needed to use a DataSet internally, I would. I can't think of any instance where I've needed to off the top of my head, but that wouldn't stop me. In most cases, I prefer the flexibility of plain old business entities, especially as we use them in conjunction with change notification and validation.

                        "WPF has many lovers. It's a veritable porn star!" - Josh Smith

                        As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.

                        My blog | My articles | MoXAML PowerToys | Onyx

                        • E EmZan

I'm wondering which is better:

1. Opening the connection to the database and leaving it open until the application is closed.
2. Closing that connection and opening it again whenever I need to connect, which means I will open and close the connection many times while the application is running.

Thanks in advance :)

saanj
#37

It's a very good and important question for beginners. It is not wise to keep the connection open for the whole application's lifetime: it ties up memory and other resources and costs performance. Closing and opening database connections may be a bit more work than leaving one open, but it dramatically improves performance, and it is the best practice as well.
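The usual shape of option 2 in ADO.NET, as a sketch (the table name is illustrative); thanks to connection pooling, the close is cheap:

using System.Data.SqlClient;

class OpenLateCloseEarly
{
    static int CountRows(string connectionString)
    {
        // Open just before use; Dispose() returns the connection to
        // the pool instead of tearing down the network session.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("select count(*) from [Customer]", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}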


                          • S saanj


EmZan
#38

Thanks for your reply.

                            • D dojohansen


Paulo Zemek
#39

Always opening and closing the connection, without a helper object, can become really problematic. I started to work on a project that counted on this technique, and here is the problem: method A opens a connection to do its queries and then calls method B. Method B opens a connection to do its queries and then calls method C. Method C - can you see the pattern?

I created some "helper" objects for this. For example, the application considers one database to be the default database, so I use:

using (var connection = new ThreadConnection())
{
}

This ThreadConnection is a wrapper class that:

1. Checks if a connection for this thread exists and uses it, or opens a new connection if none is open.
2. At Dispose(), closes the connection if it created it; if it didn't create the connection, it does nothing.

This solution looks very similar, as methods A, B, and C will all create a ThreadConnection object, but only the outer method (A in this example) will create and dispose of the connection. But if B is called from another method when there is no already-created connection, then B will create and close it. This is much better than having methods with overloads that either create a connection or use an existing one (which was the "original solution" in the project I work on now), and it is absolutely better than opening hundreds of connections as each method opens its own.
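A sketch of such a wrapper, assuming a [ThreadStatic] slot and an explicit connection string (the class described above uses a configured default database, so the real code may differ):

using System;
using System.Data.SqlClient;

public sealed class ThreadConnection : IDisposable
{
    [ThreadStatic] private static SqlConnection current;

    private readonly bool owner;

    public ThreadConnection(string connectionString)
    {
        if (current == null)
        {
            current = new SqlConnection(connectionString);
            current.Open();
            owner = true;             // only the outermost scope closes it
        }
    }

    public SqlConnection Connection
    {
        get { return current; }
    }

    public void Dispose()
    {
        if (owner)
        {
            current.Dispose();        // returns the connection to the pool
            current = null;
        }
    }
}

Methods A, B, and C each wrap their work in using (var tc = new ThreadConnection(connectionString)) { ... }; only the outermost block actually opens and closes the connection.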
