No more stored procedures
-
Rocky Moore wrote:
don't you have to add code to make sure that never happens, such as checking if the GUID is already used?
I don't. I really don't expect it to be an issue. Beyond the already tiny probability of two GUIDs ever being identical, they would also have to collide in the same table. It seems extremely remote.
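For a rough sense of scale (a back-of-envelope estimate, assuming version-4 GUIDs with 122 random bits): by the birthday approximation, the chance of any collision among n random GUIDs is about n^2 / 2^123. Even with a billion rows in one table, that is roughly (10^9)^2 / 1.06×10^37, or about 10^-19 - far below the odds of, say, undetected disk corruption.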
Rocky Moore wrote:
Also, isn't it a lot of overhead in the indexes for the larger data of a GUID compared to a int?
There's a lot of debate[^] on the subject (and a lot of links you can read up on). Someone published some code somewhere that helps with clustering, but I'd rather not touch it. The thing I like about GUIDs is that they make it easy to merge offline changes. And I have to deal with that occasionally. Marc
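For reference, the clustering help alluded to above usually amounts to generating sequential GUIDs, so inserts append to the clustered index instead of splitting pages at random. A minimal sketch (SQL Server 2005+ syntax; the table and names are hypothetical):

    -- NEWSEQUENTIALID() yields ordered GUIDs; rows still merge cleanly
    -- offline because the keys remain globally unique.
    CREATE TABLE Customers (
        CustomerID uniqueidentifier NOT NULL
            CONSTRAINT DF_Customers_ID DEFAULT NEWSEQUENTIALID(),
        Name nvarchar(100) NOT NULL,
        CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)
    )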
People are just notoriously impossible. --DavidCrow
There's NO excuse for not commenting your code. -- John Simmons / outlaw programmer
People who say that they will refactor their code later to make it "good" don't understand refactoring, nor the art and craft of programming. -- Josh Smith
-
Marc Clifton wrote:
The thing I like about GUID's is it makes it easy to merge offline changes.
Not to mention that they're required for replication, anyway.
Grim
(aka Toby)
MCDBA, MCSD, MCP+SB
SELECT * FROM users WHERE clue IS NOT NULL GO
(0 row(s) affected)
-
That's the reason! Use a piece of code that runs a stored procedure and worry about the syntax in the database. The only reason I've found not to use stored procedures (in real life) is that the debugging is too hard. There is no reason to use ten layers and 100 automatically generated stored procedures when all you want is to write a single value into a simple table or something.
---------- Siderite
So, you don't like it because it's harder? You're one of those developers that does everything the easy way? Hmmm....
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!
-
Joe Woodbury wrote:
I actually know of cases where they cause lower performance
Can you post an example? I've been doing this for many years and I have never seen that happen except in cases where the stored proc was horribly written. Embedded/dynamic SQL can be poorly written as well.
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!
-
One case that makes sense is when the execution plan of the proc (at least of the static SQL within it) is compiled according to the set of parameters the proc is called with at (first) compilation. That plan may not be the best one for other parameter sets. I imagine unchanging dynamic SQL (text the same, but parameterized) behaves the same way. However, dynamically structured SQL (not just parameterized SQL, but SQL whose actual text changes) is recompiled for each new text, so the plan can reflect the actual query. This probably only matters when the stored proc is structured to handle multiple scenarios (e.g. search permutations) while composing case-specific dynamic SQL would be leaner and produce a better execution plan. E.g. the stored proc could have lots of ORs and join on too much in order to cover every parameter scenario, like a general-purpose search procedure with lots of parameters. There might be too many search permutations to have a proc for each one, whereas dynamic SQL's recompilation overhead could be less than the benefit of a better execution plan.
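To make the trade-off concrete, here is a minimal sketch of the two patterns being compared (SQL Server syntax; the Orders table, columns, and parameters are hypothetical):

    -- Catch-all proc: one cached plan must cover every parameter permutation,
    -- and it is compiled for whichever parameter values happen to arrive first.
    CREATE PROCEDURE SearchOrders
        @CustomerID int = NULL,
        @Status varchar(20) = NULL
    AS
        SELECT OrderID, CustomerID, Status
        FROM Orders
        WHERE (@CustomerID IS NULL OR CustomerID = @CustomerID)
          AND (@Status IS NULL OR Status = @Status)
    GO

    -- Dynamic alternative: build only the predicates actually needed, so each
    -- distinct query shape gets its own, tighter plan. Using sp_executesql
    -- keeps the text parameterized, so plans are still reused per shape.
    CREATE PROCEDURE SearchOrdersDynamic
        @CustomerID int = NULL,
        @Status varchar(20) = NULL
    AS
        DECLARE @sql nvarchar(4000)
        SET @sql = N'SELECT OrderID, CustomerID, Status FROM Orders WHERE 1 = 1'
        IF @CustomerID IS NOT NULL SET @sql = @sql + N' AND CustomerID = @CustomerID'
        IF @Status IS NOT NULL SET @sql = @sql + N' AND Status = @Status'
        EXEC sp_executesql @sql,
            N'@CustomerID int, @Status varchar(20)',
            @CustomerID, @Status
    GO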
-
Rocky Moore wrote:
Yeah, I sure hope mission critical apps think about it
I love how "mission critical apps" is the ultimate "ooh, now I'm scared" buzzword to drop on people. Consider: While each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128, or about 3.40282366×10^38) is so large that the probability of the same number being generated twice is very small. Now your basic int, a 32-bit value (4 bytes), is going to roll over much sooner than a GUID is going to collide. What does your DB do when the autonumbering rolls over? I actually tested that once. Now consider some other numbers. Even at 2^31 for a signed int, that's about 2.1 billion records. Hmmm. I don't know a lot of mission critical apps that will hit 2.1 billion. Ever. I can think of a couple, like cataloging stars or grains of sand. More numbers. At 4 bytes for an int just to store the ID, that's 8GB if you were to have a record using every possible integer ID. Let's say, ooh, on average, another 1000 bytes for data per row? So that's another 2^31 * 1000, or about 2 terabytes of data. That terabyte-scale star catalog is the only thing I can think of that requires that much space. So, let's be real when we talk about mission critical apps. 2.1 billion records? 2 terabytes of disk space? And those numbers are using an integer ID and based on the range of values that it is capable of.
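If you want to reproduce the rollover test yourself, something along these lines does it (SQL Server syntax; the table is hypothetical):

    -- Reseed the identity to one below int's maximum, then insert twice.
    CREATE TABLE RolloverTest (ID int IDENTITY(1,1) PRIMARY KEY, Payload char(1))
    DBCC CHECKIDENT ('RolloverTest', RESEED, 2147483646)
    INSERT INTO RolloverTest (Payload) VALUES ('a')  -- succeeds: ID 2147483647
    INSERT INTO RolloverTest (Payload) VALUES ('b')  -- fails with an arithmetic
                                                     -- overflow; it does not wrap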
Rocky Moore wrote:
In a large application, they would have to be unique to maybe 20-100 tables.
No. Only the primary key needs to be unique. Not the FKs! And identical keys in separate tables aren't an issue either!
Rocky Moore wrote:
Not to mention that they have no order, so you end up using another field to force order.
Ummm...PK's should be abstracted anyways. You'd never order on a PK!
Rocky Moore wrote:
Little things like this bug me so bad
Well, we pick our battles. If you actually stop and think about the issue, I think there's very little to be bugged about, and actually more of a case for being bugged about your mission critical apps using an integer ID. Marc
People are just notoriously impossible. --DavidCrow
There's NO excuse for not commenting your code. -- John Simmons / outlaw programmer
-
I've worked with one database where having an integer instead of a GUID made a difference. That was a table with information on every Dell system sold, driving their support website. At the time it was 700 million rows, but servers have gotten more powerful since then, and in that case 64-bit systems solve it. The main limit we were running into wasn't disk space, but memory space for the SQL Server data cache. Since we were running with a lot of users, getting the fastest possible response for the queries was essential.
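To put rough numbers on the memory-pressure point (a back-of-envelope estimate that ignores page and row overhead): at 700 million rows, the key column alone is 700,000,000 × 4 bytes ≈ 2.8 GB as an int versus 700,000,000 × 16 bytes ≈ 11.2 GB as a GUID, and because the clustered key is duplicated into every nonclustered index, each secondary index pays the 12-byte-per-row difference again. That gap is exactly the kind of thing that decides whether the hot portion of the indexes fits in the data cache.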
Using the GridView is like trying to explain to someone else how to move a third person's hands in order to tie your shoelaces for you. -Chris Maunder
-
There is exactly no protection with encrypted stored procedures; take a quick look on the internet and you'll find plenty of free tools to decrypt them.
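For anyone unfamiliar with the feature under discussion, this is the whole mechanism (SQL Server syntax; the procedure and table are made up for illustration):

    -- WITH ENCRYPTION merely obfuscates the procedure text stored in the
    -- database (syscomments on SQL Server 2000); it is not cryptography.
    CREATE PROCEDURE GetSecretReport
    WITH ENCRYPTION
    AS
        SELECT ReportID, Title FROM Reports
    GO
    EXEC sp_helptext 'GetSecretReport'
    -- sp_helptext now reports that the text is encrypted, but freely
    -- available tools reverse the obfuscation in seconds.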
-
Sure. But how is this any different from your executable with embedded SQL? As long as your user has a sysadmin login, there's nothing you can do to stop a determined user from doing what they want.
-
mfhobbs wrote:
determined user
That being the key distinction here as I've said repeatedly in this thread.
Yeah, but the point is that you're trying to have your cake and eat it too. You can't give a user high levels of security rights (so they can modify or replace stored procedures - encrypted or not) and then pretend that by embedding your SQL in the exe you are gaining anything in this regard, when the same user can very easily do anything they want (with those rights) directly on the db or even use SQL Profiler to grab your SQL and play with it in Query Analyzer.
-
Yeah, but the point is that you're trying to have your cake and eat it too. You can't give a user high levels of security rights (so they can modify or replace stored procedures - encrypted or not) and then pretend that by embedding your SQL in the exe you are gaining anything in this regard, when the same user can very easily do anything they want (with those rights) directly on the db or even use SQL Profiler to grab your SQL and play with it in Query Analyzer.
mfhobbs wrote:
so they can modify or replace stored procedures
But there would be none to modify. The point (one of my former employer's points, anyway) is that with stored procedures, the code can be modified in the field with no one the wiser: a properly permitted but malicious person can alter how the system behaves, you know, like ignoring his own transactions or something. Certainly the same can be done with code too, but it's not as easy.
-
Yeah, but the point is that you're trying to have your cake and eat it too. You can't give a user high levels of security rights (so they can modify or replace stored procedures - encrypted or not) and then pretend that by embedding your SQL in the exe you are gaining anything in this regard, when the same user can very easily do anything they want (with those rights) directly on the db or even use SQL Profiler to grab your SQL and play with it in Query Analyzer.
The main point for me is to protect the database from casual users who may be well meaning but break something. Sure, they can easily modify the schema themselves, but that's *many* orders of magnitude easier to discover than a modified stored procedure query. When I'm supporting someone remotely in another country who can barely write English, I much prefer to get a plain error message that a column or table is missing than some weird, mysterious behaviour that could take forever to track down because they are unwilling to admit, or simply don't know, that someone changed a query in some subtle way. You have to understand I'm not just arguing this from some ivory tower philosophical point of view; we actually have to support, at no cost because that's our policy, software we develop and sell commercially, used worldwide in over 50 countries by thousands of different users. What may work or make sense for someone working on an in-house app is a vastly different prospect for myself. Offering free technical support to widely used software does wonders to ensure that you are always doing things in the most practical and defensive way humanly possible. There is never an outright right and wrong, just different points of view; my point of view and years of experience have led me to the best possible method, which is not stored procedures.
-
The main point for me is to protect the database from casual users who may be well meaning but break something. Sure, they can easily modify the schema themselves, but that's *many* orders of magnitude easier to discover than a modified stored procedure query. When I'm supporting someone remotely in another country who can barely write English, I much prefer to get a plain error message that a column or table is missing than some weird, mysterious behaviour that could take forever to track down because they are unwilling to admit, or simply don't know, that someone changed a query in some subtle way. You have to understand I'm not just arguing this from some ivory tower philosophical point of view; we actually have to support, at no cost because that's our policy, software we develop and sell commercially, used worldwide in over 50 countries by thousands of different users. What may work or make sense for someone working on an in-house app is a vastly different prospect for myself. Offering free technical support to widely used software does wonders to ensure that you are always doing things in the most practical and defensive way humanly possible. There is never an outright right and wrong, just different points of view; my point of view and years of experience have led me to the best possible method, which is not stored procedures.
I can understand your situation now. But your post talks about all commercial projects. In our current scenario (not an in-house project) we have a system more than 10 years old with hundreds of clients in only a few countries. There used to be problems with some stored procs being changed... but mostly only by our own 'onsite' consultants! (Encrypting the procs certainly reduced that problem on another project I worked on, though it is not done here, if only to allow consultants to make rapid changes as need be.) But for us, managing dynamic SQL pushed from tens of disparate applications has been a historic nightmare (gridlocking data model changes) that we are moving away from.
-
One case that makes sense is when the execution plan of the proc (at least of the static SQL within it) is compiled according to the set of parameters the proc is called with at (first) compilation. That plan may not be the best one for other parameter sets. I imagine unchanging dynamic SQL (text the same, but parameterized) behaves the same way. However, dynamically structured SQL (not just parameterized SQL, but SQL whose actual text changes) is recompiled for each new text, so the plan can reflect the actual query. This probably only matters when the stored proc is structured to handle multiple scenarios (e.g. search permutations) while composing case-specific dynamic SQL would be leaner and produce a better execution plan. E.g. the stored proc could have lots of ORs and join on too much in order to cover every parameter scenario, like a general-purpose search procedure with lots of parameters. There might be too many search permutations to have a proc for each one, whereas dynamic SQL's recompilation overhead could be less than the benefit of a better execution plan.
I'll agree with that, but I think that's a case of poor design. In cases where the system can't be redesigned, sometimes dynamic statements are necessary, but if you can plan ahead for this, you can design the system to not require dynamic statements.

The issue with many concurrent users doesn't apply. The proc itself doesn't have to be locked or anything, so multiple users can run it just fine. Multiple users calling the same stored proc is no different from multiple users executing their own SQL statements. Concurrency issues will be the same with either method.

That's not what I meant when I asked for an example. That's a theoretical case. Show me an actual real-world example where using procs slowed things down. I have data showing that procs are faster. You're making a claim based on theory. Show me the numbers... I have numbers for this somewhere, and the difference wasn't huge. I find it funny that after an extensive Google search on this topic, I found a lot of people swearing up and down on both sides of the issue, but with no hard numbers. If you're going to talk about performance, theory is useless. There are many examples of things that theoretically ought to be faster but are slower in practice. When I get home I'll dig up my testing program and post some numbers about this. It's not a surprising difference, but it's a difference.

With very special cases, like a search proc, it may be difficult to write a proper stored proc, but that's no reason to throw them out completely. Each case must be considered on its own merit. Most of the time a stored proc is the way to go.
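For reference, a timing harness of the kind described might look like this (SQL Server syntax; the proc, table, and iteration count are hypothetical, and a fair test would also discard the returned rows so network time doesn't dominate):

    DECLARE @i int, @start datetime
    SET @i = 0
    SET @start = GETDATE()
    WHILE @i < 10000
    BEGIN
        EXEC GetCustomerByID @CustomerID = 42      -- stored proc version
        SET @i = @i + 1
    END
    PRINT 'Proc:    ' + CAST(DATEDIFF(ms, @start, GETDATE()) AS varchar(12)) + ' ms'

    SET @i = 0
    SET @start = GETDATE()
    WHILE @i < 10000
    BEGIN
        EXEC sp_executesql                         -- parameterized dynamic version
            N'SELECT Name FROM Customers WHERE CustomerID = @ID',
            N'@ID int', @ID = 42
        SET @i = @i + 1
    END
    PRINT 'Dynamic: ' + CAST(DATEDIFF(ms, @start, GETDATE()) AS varchar(12)) + ' ms'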
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!
-
Whoa! Why so harsh? The devil, as they say, is in the details. I never once said what you seem to think I said; what I said was that there is not enough of a difference (in many cases no difference) in time to be perceived by the end user of the software. I don't know who you write software for, but I write software for people, and people don't give a damn how the program works; they just want the features they need in an easy-to-use program. Arguing over something that, even if it were true (and it clearly isn't as much of an issue as you seem to think anymore), would add up to perhaps at most 10 seconds of extra time over the course of an entire day for a single user is... well... geeky at best. ;)
Jasmine2501 wrote:
Test it yourself before you go around saying things like this.
The fact is I did nothing but testing for a solid month with 4 different database back ends when I was considering the switch to dynamic SQL in the first place. My experience is based on real-life testing and profiling with a slew of complex stored procedures ported to ANSI SQL from Microsoft SQL Server to 4 other platforms. In fact, as I was developing the DAL (Data Access Layer) in my app and moving each former stored procedure to dynamic SQL, I profiled each and every one. You are operating under assumptions that date back to at least SQL Server 6.5. I'll quote you the relevant bits from SQL Server Books Online: "In SQL Server version 6.5 and earlier, stored procedures were a way to partially precompile an execution plan. At the time the stored procedure was created, a partially compiled execution plan was stored in a system table. Executing a stored procedure was more efficient than executing an SQL statement because SQL Server did not have to compile an execution plan completely, it only had to finish optimizing the stored plan for the procedure. Also, the fully compiled execution plan for the stored procedure was retained in the SQL Server procedure cache, meaning that subsequent executions of the stored procedure could use the precompiled execution plan. SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement."
Uh, I have that book too. I've read those very paragraphs several times. Sorry to be a bitch about it. You never said you tested it, so I assumed you didn't. I still maintain that executing the same statements with dynamic calls will always be slower than executing the same statements within a proc. I have data for this (MSSQL 2000), but if you have something contradictory, I'd like to see it. I'm open to improvement if I have a good reason why. Geeky yes, but applies sometimes. Most users won't notice a half-second delay, and in some cases it's cool to ignore that. It's certainly not worth spending months of development dollars eliminating things like that, unless the problem compounds itself to the point where the users start to see a slowdown. Then you've got to fix it, and if the problem is being caused by bad design, it's going to be harder to fix.
Some Microsoft Flunky wrote:
This reduces the relative performance benefit of precompiling stored procedures by extending execution plan reuse to all SQL statements
Note: They said "Reduces", not "eliminates".
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!
-
Rocky Moore wrote:
Yeah, I sure hope mission critical apps think about it
I love how "mission critical apps" is the ultimate "ooh, now I'm scared" buzzword to drop on people. Consider: While each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128, or about 3.40282366×10^38) is so large that the probability of the same number being generated twice is very small. Now your basic int, a 32-bit value (4 bytes), is going to roll over much sooner than a GUID is going to collide. What does your DB do when the autonumbering rolls over? I actually tested that once. Now consider some other numbers. Even at 2^31 for a signed int, that's about 2.1 billion records. Hmmm. I don't know a lot of mission critical apps that will hit 2.1 billion. Ever. I can think of a couple, like cataloging stars or grains of sand. More numbers. At 4 bytes for an int just to store the ID, that's 8GB if you were to have a record using every possible integer ID. Let's say, ooh, on average, another 1000 bytes for data per row? So that's another 2^31 * 1000, or about 2 terabytes of data. That terabyte-scale star catalog is the only thing I can think of that requires that much space. So, let's be real when we talk about mission critical apps. 2.1 billion records? 2 terabytes of disk space? And those numbers are using an integer ID and based on the range of values that it is capable of.
Rocky Moore wrote:
In a large application, they would have to be unique to maybe 20-100 tables.
No. Only the primary key needs to be unique. Not the FKs! And identical keys in separate tables aren't an issue either!
Rocky Moore wrote:
Not to mention that they have no order, so you end up using another field to force order.
Ummm...PK's should be abstracted anyways. You'd never order on a PK!
Rocky Moore wrote:
Little things like this bug me so bad
Well, we pick our battles. If you actually stop and think about the issue, I think there's very little to be bugged about, and actually more of a case for being bugged about your mission critical apps using an integer ID. Marc
People are just notoriously impossible. --DavidCrow
There's NO excuse for not commenting your code. -- John Simmons / outlaw programmer
-
Marc Clifton wrote:
While each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128, or about 3.40282366×10^38) is so large that the probability of the same number being generated twice is very small.
Small, but it can still happen...
Marc Clifton wrote:
Now your basic int, a 32-bit value (4 bytes), is going to roll over much sooner than a GUID is going to collide. What does your DB do when the autonumbering rolls over? I actually tested that once. Now consider some other numbers. Even at 2^31 for a signed int, that's about 2.1 billion records. Hmmm. I don't know a lot of mission critical apps that will hit 2.1 billion. Ever. I can think of a couple, like cataloging stars or grains of sand. More numbers. At 4 bytes for an int just to store the ID, that's 8GB if you were to have a record using every possible integer ID. Let's say, ooh, on average, another 1000 bytes for data per row? So that's another 2^31 * 1000, or about 2 terabytes of data. That terabyte-scale star catalog is the only thing I can think of that requires that much space. So, let's be real when we talk about mission critical apps. 2.1 billion records? 2 terabytes of disk space? And those numbers are using an integer ID and based on the range of values that it is capable of.
Yep, I agree, an autoincrementing int field is usually good for most tables while still providing order. A GUID seems like overkill unless you have to merge with other data.
Marc Clifton wrote:
No. Only the primary key needs to be unique. Not the FKs! And identical keys in separate tables aren't an issue either!
Yeah, I was thinking more along the lines of merging records, such as in an enterprise, but my mind must have wandered on the post ;) One of those days, I guess, where the mind fog was a bit stronger ;)
Marc Clifton wrote:
Ummm...PK's should be abstracted anyways. You'd never order on a PK!
I often order on the PK when using IDs, as they are sequential and always keep the latest entries last in the data; but then most data is accessed via the primary key ID, making it quick.
Marc Clifton wrote:
Well, we pick our battles. If you actually stop and think about the issue, I think there's very little to be bugged about, and actually more of a case for being bugged about your mission critical apps using an integer ID.
-
I've just received an email from my supervisor, asking me not to use any server-side functions, stored procedures, views or queries, and to keep all database coding within the code itself - just in case we need to change databases or sell to a client that doesn't use the same database that we do. We write in-house web apps (classic ASP and C#) using SQL Server 2000 and have so far sold a total of zero applications to third parties (we are not a software house - just a small IT department serving the rest of the company). Pseudo-code for the offending stored procedure that prompted the new policy is shown below:
begin
    insert data into table
    select scope_identity() as userid
end
I was instructed to change it to two separate calls from within the code:
recordset.open( "insert data into table" )
...
recordset.open( "select @@identity" )
Any thoughts? I have mine, but I'd be interested in hearing from others...
The StartPage Randomizer | The Timelapse Project | A Random Web Page
begin
    insert data into table
    select scope_identity() as userid
end
Ok, sure. I can see where this would not work in every available dbms. There may be a slight syntax difference or, like Oracle, it could use an entirely different way of generating identity values and returning them to the caller. However...
recordset.open( "insert data into table" )
...
recordset.open( "select @@identity" )
...will suffer from the exact same problem! Only now, instead of being centrally located in one known place where we can easily find it and work on it -- it is scattered in who knows how many places (and who the hell is going to know?) around the app(s) code.
Stored procedures, besides all the other advantages they provide (security not being the least of them), give us the opportunity to gather as much dbms-specific code as possible into one central location. This is precisely what you want if you ever attempt to port the code.
If your supervisor is indeed worried about having to port your app to a different dbms, then he has taken exactly the wrong step and will be making the problem worse. Talk him out of this lunacy if you can.
TommCatt -- In theory, there is no difference between theory and practice. In practice, there is.
-
I'll agree with that, but I think that's a case of poor design. In cases where the system can't be redesigned, sometimes dynamic statements are necessary, but if you can plan ahead for this, you can design the system to not require dynamic statements.

The issue with many concurrent users doesn't apply. The proc itself doesn't have to be locked or anything, so multiple users can run it just fine. Multiple users calling the same stored proc is no different from multiple users executing their own SQL statements. Concurrency issues will be the same with either method.

That's not what I meant when I asked for an example. That's a theoretical case. Show me an actual real-world example where using procs slowed things down. I have data showing that procs are faster. You're making a claim based on theory. Show me the numbers... I have numbers for this somewhere, and the difference wasn't huge. I find it funny that after an extensive Google search on this topic, I found a lot of people swearing up and down on both sides of the issue, but with no hard numbers. If you're going to talk about performance, theory is useless. There are many examples of things that theoretically ought to be faster but are slower in practice. When I get home I'll dig up my testing program and post some numbers about this. It's not a surprising difference, but it's a difference.

With very special cases, like a search proc, it may be difficult to write a proper stored proc, but that's no reason to throw them out completely. Each case must be considered on its own merit. Most of the time a stored proc is the way to go.
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps!
-
I agree, I like procs too; I'm not saying they are slower at all. I think a proc-based design 'architecturally scales' better (my buzzphrase for the day). On projects I get paid to work on, I would only use dynamic SQL as a last resort. On the other extreme, on a different type of project where I think an off-the-shelf ORM could do the job, I'd use one (it hasn't happened yet, but only because of fate).
-
And just use one SQL call: recordset.open( "insert data into table; select scope_identity()" )
or just use one SQL call with no response chatter (NO ACK)... recordset.open( "SET NOCOUNT ON; insert data into table; select scope_identity()" )
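This single-batch form also sidesteps a real bug in the two-call version: scope_identity() reports only identities generated in the current scope, while @@identity reports the last identity generated anywhere on the connection, so a trigger can silently change what comes back. A minimal sketch of the difference (SQL Server syntax; the tables and trigger are hypothetical):

    CREATE TABLE Users (UserID int IDENTITY PRIMARY KEY, Name varchar(50))
    CREATE TABLE AuditLog (LogID int IDENTITY PRIMARY KEY, Note varchar(50))
    GO
    -- An audit trigger that does its own identity insert:
    CREATE TRIGGER trg_Users_Audit ON Users AFTER INSERT AS
        INSERT INTO AuditLog (Note) VALUES ('user added')
    GO
    INSERT INTO Users (Name) VALUES ('alice')
    SELECT SCOPE_IDENTITY() AS UserID,  -- the Users identity, as intended
           @@IDENTITY       AS LastID   -- the AuditLog identity instead!

There is a second hazard with two separate recordset.open calls: if each open draws a different connection from the connection pool, the "select @@identity" may not even run on the connection that performed the insert.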
-
David Stone wrote:
That's the dumbest thing I've ever heard.
Not quite the dumbest, but it ranks pretty highly.
Grim
(aka Toby)
MCDBA, MCSD, MCP+SB
SELECT * FROM users WHERE clue IS NOT NULL GO
(0 row(s) affected)
I think most developers have enjoyed the distinct separation of programmable data objects and the OO code. Correct me if I am wrong, but I am pretty sure that there are important differences between, for example, an Oracle DB and SQL Server. I know @@Identity does not exist in Oracle.
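To make the porting gap concrete: Oracle (at least in the versions contemporary with this thread) has no identity columns or @@identity; keys typically come from a sequence. A minimal sketch (Oracle syntax; the table, sequence, and bind variable are hypothetical):

    CREATE SEQUENCE users_seq;

    -- The new key can be generated and captured in one statement:
    INSERT INTO users (user_id, name)
    VALUES (users_seq.NEXTVAL, 'alice')
    RETURNING user_id INTO :new_id;  -- :new_id is a client-side bind variable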
-
I've just received an email from my supervisor, asking me not to use any server-side functions, stored procedures, views or queries, and to keep all database coding within the code itself - just in case we need to change databases or sell to a client that doesn't use the same database that we do. We write in-house web apps (classic ASP and C#) using SQL Server 2000 and have so far sold a total of zero applications to third parties (we are not a software house - just a small IT department serving the rest of the company). Pseudo-code for the offending stored procedure that prompted the new policy is shown below:
begin
    insert data into table
    select scope_identity() as userid
end
I was instructed to change it to two separate calls from within the code:
recordset.open( "insert data into table" )
...
recordset.open( "select @@identity" )
Any thoughts? I have mine, but I'd be interested in hearing from others...
The StartPage Randomizer | The Timelapse Project | A Random Web Page
Keeping the focus off performance issues, since performance was not one of the supervisor's requirements, I would like to point out that this change will not achieve portability. Most programmers use Microsoft-supplied libraries to access SQL Server from a client program, and Oracle libraries to access Oracle databases. Even when you use only Microsoft code, you often have to rewrite your SQL syntax to be Oracle compatible. So just saying "keep all logic on the client side" is not sufficient. You need to architect and test your code against all possible targets. There are many ways to do it; one is to develop a stored procedure library for each target database and version. Another would be to buy a 3rd-party middleware package to use in client code that relies only on the lowest common data types and functions that all target databases support identically, with no stored procedures required, or with the middleware supplying the procedures. Good luck with the company debates, TC
-
I've just received an email from my supervisor, asking me not to use any server-side functions, stored procedures, views or queries, and to keep all database coding within the code itself - just in case we need to change databases or sell to a client that doesn't use the same database that we do. We write in-house web apps (classic ASP and C#) using SQL Server 2000 and have so far sold a total of zero applications to third parties (we are not a software house - just a small IT department serving the rest of the company). Pseudo-code for the offending stored procedure that prompted the new policy is shown below:
begin
    insert data into table
    select scope_identity() as userid
end
I was instructed to change it to two separate calls from within the code:
recordset.open( "insert data into table" )
...
recordset.open( "select @@identity" )
Any thoughts? I have mine, but I'd be interested in hearing from others...
The StartPage Randomizer | The Timelapse Project | A Random Web Page
1. Every technique has its strengths and weaknesses.
2. If you really want to sell your applications to more than one party, then using stored procedures gives tremendous advantages, as follows:
a) It separates your user interface and source code from data management. You do not need to touch your source code, thereby eliminating the need to maintain multiple versions of it.
b) If, say, you want to offer Oracle instead of SQL Server, the syntactical changes need to be made only in the stored procedures, so long as the result set (or result, e.g. a count of records) stays the same.
c) Many databases allow you to store procedures in an encrypted form, thereby preventing accidental changes to them.
d) It totally eliminates the need to grant rights to base tables or crucial views, and forces all users to go through your consistent stored procedure path.
Overall, it provides tremendous advantages as against few disadvantages.
Dilip Nagle
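Point (d) above is worth a concrete illustration, since it is often the decisive argument. A minimal sketch (SQL Server syntax; the table, procedure, and role names are hypothetical):

    -- Users get EXECUTE on the procedure but no rights at all on the table;
    -- ownership chaining lets the procedure perform the insert on their behalf.
    CREATE PROCEDURE AddOrder @CustomerID int, @Amount money AS
        INSERT INTO Orders (CustomerID, Amount) VALUES (@CustomerID, @Amount)
    GO
    DENY SELECT, INSERT, UPDATE, DELETE ON Orders TO OrderEntryRole
    GRANT EXECUTE ON AddOrder TO OrderEntryRole

Every write is now forced through the one consistent, auditable path, whatever tool the user connects with.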