Horror, or not?
-
Enforcing referential integrity takes clock cycles, and this is where you end up in a battle with DBAs. A DBA will typically point out that it is up to your application to ensure integrity, but you argue back that you have the tools in the database to do it - so why not let the database do what it is designed for? In some cases the DBA has a point, because they have a legacy database where the referential integrity checking is a real kludge (i.e. slow). In more modern databases, though, referential integrity is enforced much more quickly (generally via a quick index scan).

Now, the issue becomes how to react to a referential integrity problem, and this becomes an architectural issue. If you leave it to the database to inform you, then you've gone through the whole process of submitting the data and waiting for the database to verify (or not) that the operation has succeeded; if it fails, you have to notify the user or do some remedial work. If your application checks the integrity, then theoretically this becomes less of an issue. There is a problem with this line of thinking, though - you could only guarantee it if the database were single user. In the time between you performing the check and you actually attempting the insert (or update), the record could have been deleted, at which point you've broken the integrity rules.

Another issue boils down to this: if you leave it to your code to check the integrity, then EVERY update/insert/delete statement must check it (and in the case of deletes this can span multiple tables - which means your existence-check selects must be redone every time a new table is added into the referential mix).

Bottom line - the DB provides the tools to do this. It's efficient, and it means you don't have to worry about forgetting to perform a referential check.
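To make both points concrete - the check-then-insert race and the catchable constraint error - here's a minimal C#/ADO.NET sketch. The Customers/Orders tables, the column names and the foreign key are invented for illustration; 547 is SQL Server's constraint-conflict error number.

using System;
using System.Data.SqlClient;

static class OrderWriter
{
    public static void InsertOrder(SqlConnection conn, int customerId)
    {
        // Application-side check: inherently racy. Another session can
        // delete the customer between this SELECT and the INSERT below.
        using (var check = new SqlCommand(
            "SELECT COUNT(*) FROM Customers WHERE Id = @id", conn))
        {
            check.Parameters.AddWithValue("@id", customerId);
            if ((int)check.ExecuteScalar() == 0)
                throw new InvalidOperationException("No such customer");
        }

        // Database-side check: the FOREIGN KEY constraint on
        // Orders.CustomerId is evaluated atomically with the insert,
        // so the race disappears; a violation surfaces as an error.
        using (var insert = new SqlCommand(
            "INSERT INTO Orders (CustomerId) VALUES (@id)", conn))
        {
            insert.Parameters.AddWithValue("@id", customerId);
            try
            {
                insert.ExecuteNonQuery();
            }
            catch (SqlException ex) when (ex.Number == 547) // constraint conflict
            {
                // Notify the user / do the remedial work here.
            }
        }
    }
}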
Deja View - the feeling that you've seen this post before.
In my line of work, speed is not as much of an issue as robustness and making the solution as error-proof as possible. So I think the database and the application which uses it must both be able to gracefully handle whatever crap is thrown at them (i.e. checks on both sides). That works for me, and it's my opinion based on experience so far. Of course, I'm always open to well-argued ideas.
-
They should both do the checks. Your application should send the type of information the database wants, and the database should expect a specific type of data.
The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
Expert Coming wrote:
They should both do the checks.
I disagree.
Expert Coming wrote:
the database should expect a specific type of data.
It should expect nothing. Like any code, the caller should never be trusted (unless of course you are the guaranteed only caller).
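A trivial sketch in C# of the same attitude (the class and the rules are invented for illustration) - every public entry point validates for itself instead of assuming the caller behaved:

using System;

public sealed class OrderService
{
    // Never trust the caller. Even if today's only caller validates,
    // tomorrow's caller (or a test, or a script) may not.
    public void PlaceOrder(int customerId, int quantity)
    {
        if (customerId <= 0)
            throw new ArgumentOutOfRangeException(nameof(customerId));
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));

        // ...proceed. The database's own constraints remain the final
        // backstop for any caller that bypasses this class entirely.
    }
}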
xacc.ide - now with IronScheme support
IronScheme - 1.0 alpha 1 out now
-
Pete O'Hanlon wrote:
DBA
Dumb bloody A$$@#$!@'s ;P
xacc.ide - now with IronScheme support
IronScheme - 1.0 alpha 1 out now
-
More along the lines of "does b*gger all".
Deja View - the feeling that you've seen this post before.
-
A few years back, I used to work on an 'enterprise' system that touted the 'increased' data accuracy it provided to its clients. One day, my employers wanted me to change their DB schema to accommodate a new feature for their system, except there was one problem: the database had no referential integrity! Each table had a primary key and some foreign key columns pointing to other tables, but none of the tables were actually linked together.

When I asked the 'senior' programmer why they did this, his explanation was that their system maintained the links automatically - despite the fact that the DB itself was designed to have 'soft' deletes, and none of these soft deletes actually cascaded across the entire system. When I browsed the entire code base, however, there was nothing to indicate this sort of behavior. In short, the whole DB (and the application) was a mess, and not even the upper management knew about it.

Now, my first impression was "WTF? That's just...immoral!", but it got me thinking: is not linking the DB tables together a viable strategy? Traditional DBA wisdom (from 'inside the box', so to speak) says that enforcing referential integrity in the DB is important, but is it possible to do without it? Anyway, here's my question: is it a horror, or not? And if it isn't a horror, why would you say it isn't?
Do you know...LinFu?
Philip Laureano wrote:
and not even the upper management knew about it.
As if they'd know what's in the code :laugh: :laugh: :laugh: I've yet to come across upper management who know what goes on in the code. Of course there are some, but they are the exceptions.
try { } catch (UpperManagementException ex) { }
/* I can C */ // or !C Yusuf
-
Compile Time Error #OMGWTF: error: UpperManagementException can never be caught.
Otherwise [Microsoft is] toast in the long term no matter how much money they've got. They would be already if the Linux community didn't have its head so firmly up its own command line buffer that it looks like taking 15 years to find the desktop. -- Matthew Faithfull
-
I believe it is a horror. You're leaving data integrity up to the USERS! Are you kidding me!? :omg: Data integrity is everything. You can always add indexes and other optimizations if you want to speed things up, or even throw hardware at it if necessary, but fixing corrupted data is a nightmare with no easy solution. Given the horsepower of modern systems, there's no excuse for not using this important feature.
-
Expect isn't the right word, but I do think that the database needs to know what it is storing, and the application needs to know what kind of data the database wants.
The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
-
dan neely wrote:
can never be caught
Even when they do get caught they get big bonuses. (boni?) :mad:
-
A few years ago I worked on an open source PHP application that implemented its own object model of the database. All the relationships (and types) were defined in each extension of the base class, and that worked quite well in making coding easy (only the code in the base class was used, driven by data from the actual objects themselves - the only things defined in the objects were structure, relationships and some name formatting). But letting the end user do all the deletes/updates by hand? Hahaha.
-
In my mind, it's simple: each tier should take care of itself. The database should take care of ensuring data integrity. Beyond that, all business rules belong in the middle tier. Data integrity should also be enforced at the middle-tier level, but there's no excuse for the database not taking care of itself. You cannot ensure that all applications accessing the database will enforce the rules (ever hear of SQL Server Management Studio, TOAD, et al.?); therefore it's up to the database to protect itself. Even when using IsDeleted flags instead of true deletes, you can enforce a certain amount of consistency using stored procedures, triggers, etc.
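For the soft-delete case, here's one possible sketch - the Orders/Customers tables, the IsDeleted columns and the trigger name are all invented for illustration - installing a T-SQL trigger from C# so the rule holds no matter which tool performs the write:

using System.Data.SqlClient;

static class SoftDeleteGuard
{
    // Illustrative T-SQL: reject inserts/updates on Orders that point
    // at a Customer whose IsDeleted flag is set. Because this runs in
    // the database, it holds for the app, SSMS, TOAD and everyone else.
    const string CreateTrigger = @"
CREATE TRIGGER trg_Orders_CheckParent
ON Orders
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN Customers c ON c.Id = i.CustomerId
               WHERE c.IsDeleted = 1)
    BEGIN
        RAISERROR('Order references a soft-deleted customer.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END";

    public static void Install(SqlConnection conn)
    {
        using (var cmd = new SqlCommand(CreateTrigger, conn))
            cmd.ExecuteNonQuery();
    }
}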
-
The scary part is that they're a Microsoft Certified Gold Partner.
Do you know...LinFu?
-
At Microsoft, it's never a bug. :) It's just a Microsoft Certified "Gold" Feature...:P
Do you know...LinFu?
-
We can maintain integrity from the front-end tool, but it is better to use relations between tables; in some cases we need to keep the database flexible, so don't underestimate your seniors.
Best Regards, Chetan Patel
-
That made my day! :laugh: