Horror, or not?

The Weird and The Wonderful
Tags: database, question, com, tools, xml
30 Posts, 19 Posters
  • D darkelv

    Once, in my previous company, in a system that someone else maintained, all primary keys and relationships were removed because "they causes problems when they need to patch data (due to bugs such as multiple rows were inserted) by using scripts"......

    Vasudevan Deepak Kumar
    #4

    darkelv wrote:

    "they causes problems when they need to patch data (due to bugs such as multiple rows were inserted) by using scripts"......

    :wtf: That management should actually adopt the policy of abolishing all SQL Server licenses and prohibiting RDBMS in their realm. They can simply live with plain vanilla text files, which would save them pretty good bucks in software license costs, recurring DBA charges and more. :mad:

    Vasudevan Deepak Kumar Personal Homepage
    Tech Gossips
    A pessimist sees only the dark side of the clouds, and mopes; a philosopher sees both sides, and shrugs; an optimist doesn't see the clouds at all - he's walking on them. --Leonard Louis Levinson

    • P Pete OHanlon

      Enforcing referential integrity takes clock cycles, and this is where you end up getting into a battle with DBAs. A DBA will typically point out that it is up to your application to ensure integrity, but you argue back that you have the tools in the database to do it - so why not let the database do what it is designed for? In some cases, the DBA has a point, because they have a legacy database where the referential integrity checking is a real kludge (i.e. slow). In more modern DBs, though, referential integrity is enforced much more quickly (generally by using a quick index scan).

      Now the issue becomes how to react to a referential integrity problem, and this becomes an architectural issue. If you leave it to the database to inform you, then you've gone through the whole process of submitting the data and waiting for the database to verify (or not) that the operation has succeeded. If it fails, you have to notify the user/do some remedial work. If your application checks the integrity, though, then theoretically this becomes less of an issue. There is a problem with this line of thinking, though - you could only guarantee this if the database were single user; in the time between you performing the check and you actually attempting the insert (or update), the record could have been deleted, at which point you've broken the integrity rules.

      Another issue boils down to this - if you leave it to your code to check the integrity, then EVERY update/insert/delete statement must check the integrity (and in the case of deletes this can be across multiple tables - which means your selects must be redone every time a new table is added into the referential mix).

      Bottom line - the DB provides the tools to do this. It's efficient, and means you don't have to worry about forgetting to perform a referential check.

      Deja View - the feeling that you've seen this post before.

      My blog | My articles
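      To put the "the DB provides the tools" point in concrete terms, here is a minimal T-SQL sketch (table and column names are invented for illustration): the constraint is declared once and the engine enforces it atomically, which is exactly the guarantee an application-side "check then insert" cannot give under concurrency.

      -- Hypothetical parent/child pair; the FOREIGN KEY makes the engine reject
      -- any OrderLines row whose OrderId has no matching Orders row.
      CREATE TABLE dbo.Orders
      (
          OrderId INT NOT NULL PRIMARY KEY
      );

      CREATE TABLE dbo.OrderLines
      (
          OrderLineId INT NOT NULL PRIMARY KEY,
          OrderId     INT NOT NULL
              CONSTRAINT FK_OrderLines_Orders
              FOREIGN KEY REFERENCES dbo.Orders (OrderId)
      );

      -- Fails with a constraint violation (error 547) if there is no Orders row
      -- with OrderId = 42; checked at insert time, regardless of which
      -- application or ad hoc script issued the statement.
      INSERT INTO dbo.OrderLines (OrderLineId, OrderId) VALUES (1, 42);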

      Vasudevan Deepak Kumar
      #5

      Pete O'Hanlon wrote:

      Bottom line - the DB provides the tools to do this. It's efficient, and means you don't have to worry about forgetting to perform a referential check.

      There must be a harmonious combination of the application and the database to minimize heartburn on both sides.

      Vasudevan Deepak Kumar Personal Homepage
      Tech Gossips
      A pessimist sees only the dark side of the clouds, and mopes; a philosopher sees both sides, and shrugs; an optimist doesn't see the clouds at all - he's walking on them. --Leonard Louis Levinson

      • P Philip Laureano

        A few years back, I used to work on an 'enterprise' system that touted itself for the 'increased' data accuracy that it provides its clients, and one day my employers wanted me to change their DB schema to accommodate a new feature for their system, except there was one problem: the database had no referential integrity! Each table had a primary key and some foreign keys pointing to other tables, but none of the tables were actually linked together.

        When I asked the 'senior' programmer why they did this, his explanation was that their system maintained the links automatically, despite the fact that the DB itself was designed to have 'soft' deletes, and none of these soft deletes actually cascaded across the entire system. When I browsed the entire code base, however, there was nothing to indicate this sort of behavior. In short, the whole DB (and the application) was a mess, and not even the upper management knew about it.

        Now my first impression of this was "WTF? That's just...immoral!", but it got me thinking...is not linking the DB tables together a viable strategy? Traditional DBA wisdom (from "within the box", so to speak) would say that referential integrity enforced by the DB is important, but is it possible to do without it?

        Anyway, here's my question: Is it a horror, or not? And if it isn't a horror, why would you say it isn't?

        Do you know...LinFu?
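        For a sense of what the "no links" design costs in practice, this is the kind of orphan-hunting query (hypothetical table names) that has to be written, and kept current, for every parent/child pair once the database no longer enforces the relationship itself:

        -- With no FOREIGN KEY declared, nothing stops child rows from pointing at
        -- parents that were deleted (soft or hard). Finding the damage is a manual
        -- query per relationship:
        SELECT ol.OrderLineId, ol.OrderId
        FROM dbo.OrderLines AS ol
        LEFT JOIN dbo.Orders AS o
            ON o.OrderId = ol.OrderId
        WHERE o.OrderId IS NULL;   -- orphaned child rows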

        Paddy Boyd
        #6

        Philip Laureano wrote:

        Is it a horror, or not? And if it isn't a horror, why would you say it isn't?

        Are you the person that has to clean up the data once it is a mess? The answer to this question is the same as the answer to your question above...

        • V Vasudevan Deepak Kumar

          Expert Coming
          #7

          They should both do the checks. Your application should send the type of information the database wants, and the database should expect a specific type of data.

          The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
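          One way to read "both do the checks" in T-SQL terms, as a sketch with invented names: the column definitions are the database's side of the contract, and a typed procedure signature spells out exactly what the application is expected to send.

          -- The table pins down what the database will accept...
          CREATE TABLE dbo.Customers
          (
              CustomerId INT           NOT NULL PRIMARY KEY,
              Email      NVARCHAR(256) NOT NULL,
              BirthDate  DATE          NULL
          );
          GO
          -- ...and the typed parameters tell the application what to supply; a call
          -- with the wrong types fails before any row is touched.
          CREATE PROCEDURE dbo.AddCustomer
              @CustomerId INT,
              @Email      NVARCHAR(256),
              @BirthDate  DATE = NULL
          AS
          BEGIN
              SET NOCOUNT ON;
              INSERT INTO dbo.Customers (CustomerId, Email, BirthDate)
              VALUES (@CustomerId, @Email, @BirthDate);
          END;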

          • P Philip Laureano

            Paul Conrad
            #8

            Philip Laureano wrote:

            Is it a horror, or not?

            Yes, it is.

            "Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon

            • V Vasudevan Deepak Kumar

              Paul Conrad
              #9

              Vasudevan Deepak K wrote:

              That management should actually adopt the policy of abolishing all SQL Server licenses and prohibiting RDBMS in their realm.

              Maybe that management consists of a bunch of drunken lemurs :rolleyes:

              "Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon

              • P Pete OHanlon

                PIEBALDconsult
                #10

                Pete O'Hanlon wrote:

                it is up to your application to ensure integrity

                Yeah, on a previous job (using RDB on OpenVMS) we had referential integrity on the dev systems only; the code was expected to be correct and fully-tested before it was deployed to production, so the database needn't check. They also said that metadata slows down the database, so I wasn't allowed to create functions in the database. Now that I get to use SQL Server, I do set up referential integrity... but turning on cascaded deletes still feels like cheating.
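                For reference, the "cheating" is a single clause on the constraint (same hypothetical tables as in the earlier sketch; assume the plain foreign key has been dropped first). Deleting a parent then silently removes its children, which is precisely what makes people nervous about it:

                ALTER TABLE dbo.OrderLines
                    ADD CONSTRAINT FK_OrderLines_Orders
                    FOREIGN KEY (OrderId) REFERENCES dbo.Orders (OrderId)
                    ON DELETE CASCADE;

                -- No error, no warning: the matching OrderLines rows disappear as well.
                DELETE FROM dbo.Orders WHERE OrderId = 42;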

                • P PIEBALDconsult

                  Pete OHanlon
                  #11

                  PIEBALDconsult wrote:

                  but turning on cascaded deletes still feels like cheating

                  It feels dirty - so dirty. And it's one of the reasons we don't do deletes - we use statuses to control whether a record is visible or not (and that way we don't worry about accidentally deleting something important).

                  Deja View - the feeling that you've seen this post before.

                  My blog | My articles
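                  A minimal sketch of the "statuses instead of deletes" approach, with an invented flag and view name: rows are only ever marked inactive, and the rest of the code reads through a view that filters them out.

                  -- Add a status flag instead of ever issuing DELETE.
                  ALTER TABLE dbo.Orders
                      ADD IsActive BIT NOT NULL
                          CONSTRAINT DF_Orders_IsActive DEFAULT (1);
                  GO
                  -- "Deleting" an order is just flipping the flag...
                  UPDATE dbo.Orders SET IsActive = 0 WHERE OrderId = 42;
                  GO
                  -- ...and queries go through a view so logically deleted rows stay out of sight.
                  CREATE VIEW dbo.ActiveOrders
                  AS
                      SELECT OrderId
                      FROM dbo.Orders
                      WHERE IsActive = 1;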

                  • P Pete OHanlon

                    PIEBALDconsult
                    #12

                    Pete O'Hanlon wrote:

                    don't do deletes

                    I agree with that.

                    • V Vasudevan Deepak Kumar

                      Philip Laureano
                      #13

                      The scary part is that they're a Microsoft Certified Gold Partner.

                      Do you know...LinFu?

                      • P Pete OHanlon

                        Lost User
                        #14

                        In my line of work, speed is less of an issue than robustness and making the solution as error-proof as possible. So I think the database and the application that uses it must both be able to gracefully handle whatever crap is thrown at them (i.e. checks on both sides). That works for me, and it's my opinion based on my experience so far. Of course, I'm always open to well-argued ideas.

                        • P Pete OHanlon

                          leppie
                          #15

                          Pete O'Hanlon wrote:

                          DBA

                          Dumb bloody A$$@#$!@'s ;P

                          xacc.ide - now with IronScheme support
                          IronScheme - 1.0 alpha 1 out now

                          • E Expert Coming

                            leppie
                            #16

                            Expert Coming wrote:

                            They should both do the checks.

                            I disagree.

                            Expert Coming wrote:

                            the database should expect a specific type of data.

                            It should expect nothing. Like any code, the caller should never be trusted (unless of course you are the guaranteed only caller).

                            xacc.ide - now with IronScheme support
                            IronScheme - 1.0 alpha 1 out now
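                            In schema terms, "never trust the caller" roughly means declaring the rules in the database itself rather than assuming well-behaved clients. A small sketch with invented names:

                            -- The engine enforces these no matter which application, script or
                            -- ad hoc query performs the insert.
                            CREATE TABLE dbo.Payments
                            (
                                PaymentId INT            NOT NULL PRIMARY KEY,
                                Amount    DECIMAL(18, 2) NOT NULL
                                    CONSTRAINT CK_Payments_Amount CHECK (Amount > 0),
                                Currency  CHAR(3)        NOT NULL
                                    CONSTRAINT CK_Payments_Currency CHECK (Currency IN ('USD', 'EUR', 'GBP'))
                            );

                            -- Rejected by CK_Payments_Amount, whoever the caller is.
                            INSERT INTO dbo.Payments (PaymentId, Amount, Currency) VALUES (1, -5.00, 'USD');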

                            • L leppie

                              Pete OHanlon
                              #17

                              More along the lines of "does b*gger all".

                              Deja View - the feeling that you've seen this post before.

                              My blog | My articles

                              • P Philip Laureano

                                Yusuf
                                #18

                                Philip Laureano wrote:

                                and not even the upper management knew about it.

                                As if they know what is under the code :laugh: :laugh: :laugh: I've yet to come across upper management who know what goes on in the code. Of course there are some, but those are the exceptions: try { } catch (UpperManagmentException ex) { }

                                /* I can C */ // or !C Yusuf

                                • P Pete OHanlon

                                  leppie
                                  #19

                                  True also :)

                                  xacc.ide - now with IronScheme support
                                  IronScheme - 1.0 alpha 1 out now

                                  • Y Yusuf

                                    Dan Neely
                                    #20

                                    Compile Time Error #OMGWTF: UpperManagmentException can never be caught.

                                    Otherwise [Microsoft is] toast in the long term no matter how much money they've got. They would be already if the Linux community didn't have it's head so firmly up it's own command line buffer that it looks like taking 15 years to find the desktop. -- Matthew Faithfull

                                    • P Philip Laureano

                                      AEternal
                                      #21

                                      I believe it is. You're leaving data integrity up to the USERS! Are you kidding me!? :omg: Data integrity is everything. You can always index and perform other optimizations if you want to speed things up, even throw hardware at it if necessary, but fixing corrupted data is a nightmare with no easy solution. Given the horsepower of modern systems, there's no excuse for not using this important feature.
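                                      On the "you can always index" point, one detail worth knowing: SQL Server indexes the primary key automatically but not the referencing foreign key column, so the usual cheap fix when integrity checks on deletes get slow is an index on the child side (hypothetical names again):

                                      -- Deleting or updating a parent row forces a lookup of referencing
                                      -- child rows; without an index on the foreign key column that lookup
                                      -- is a scan of the child table.
                                      CREATE NONCLUSTERED INDEX IX_OrderLines_OrderId
                                          ON dbo.OrderLines (OrderId);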

                                      • L leppie

                                        Expert Coming
                                        #22

                                        Expect isn't the right word, but I do think that the database needs to know what it is storing, and the application needs to know what kind of data the database wants.

                                        The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo

                                        • D Dan Neely

                                          PIEBALDconsult
                                          #23

                                          dan neely wrote:

                                          can never be caught

                                          Even when they do get caught they get big bonuses. (boni?) :mad:
