No more stored procedures
-
I've just received an email from my supervisor asking me not to use any server-side functions, stored procedures, views or queries, and to keep all database coding within the application code itself - just in case we need to change databases or sell to a client that doesn't use the same database that we do. We write in-house web apps (classic ASP and C#) using SQL Server 2000 and have so far sold a total of zero applications to third parties (we are not a software house - just a small IT department serving the rest of the company). Pseudo-code for the offending stored procedure that prompted the new policy is shown below:
begin
    insert data into table
    select scope_identity() as userid
end
I was instructed to change it to two separate calls from within the code: recordset.open( "insert data into table" ) ... recordset.open( "select @@identity" )
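For concreteness, here is a hedged T-SQL sketch of the two approaches (the table, column and proc names are invented for illustration):

```sql
-- Original approach: insert and identity retrieval in one scope.
-- SCOPE_IDENTITY() returns the last identity value generated in this
-- scope, so triggers and other connections cannot pollute the result.
CREATE PROCEDURE dbo.AddUser
    @Name varchar(50)
AS
BEGIN
    INSERT INTO dbo.Users (Name) VALUES (@Name)
    SELECT SCOPE_IDENTITY() AS userid
END

-- Requested replacement: two separate client-side batches.
-- @@IDENTITY returns the last identity value on the *connection*,
-- so a trigger that inserts into another identity table makes it
-- return the wrong value - and both batches must share one connection.
-- Batch 1:  INSERT INTO dbo.Users (Name) VALUES ('...')
-- Batch 2:  SELECT @@IDENTITY
```

Note that neither version is portable as written: SCOPE_IDENTITY() requires SQL Server 2000 or later, and @@IDENTITY is also SQL Server-specific.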
Any thoughts? I have mine, but I'd be interested in hearing from others...
The StartPage Randomizer | The Timelapse Project | A Random Web Page
-
Most major version control systems, such as SourceSafe, CVS and PVCS, allow for version control of the underlying scripts. In fact, SQL Server 2005 integrates with SourceSafe on a project basis out of the box. So I do not understand this as a reason, but hey, to each his own.
-
Using only client-side SQL opens you up to the possibility of SQL injection attacks!
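To illustrate the injection concern, a hedged sketch (table and column names invented): dynamic SQL built by string concatenation is injectable, while the parameterized form via sp_executesql is not:

```sql
DECLARE @UserInput varchar(50)
SET @UserInput = 'O''Brien'  -- imagine this came straight from the web form

-- Vulnerable: user input becomes part of the statement text, so input
-- like  '; DROP TABLE Users;--  would execute as SQL.
DECLARE @bad nvarchar(500)
SET @bad = N'SELECT * FROM dbo.Users WHERE Name = ''' + @UserInput + ''''
EXEC (@bad)

-- Safer: the input is passed as a typed parameter and is never parsed
-- as part of the SQL text.
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE Name = @name',
    N'@name varchar(50)',
    @name = @UserInput
```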
-
hmmm... maybe the semi-colon is not general across databases. Perhaps detect whether you are on SQL Server before omitting it; on other databases you may have to use it. Also... is SCOPE_IDENTITY() database-general? If not, you may have to have 'dialects' for different databases... e.g. http://coldfusion.sys-con.com/read/43794.htm
I believe SCOPE_IDENTITY() is not database-general. Does your boss know this? If you really want to be database independent, you'll have to use 'dialects' as mfhobbs suggested, or use an O/R mapping tool such as NHibernate.
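As a hedged illustration of why a 'dialect' layer is needed, the same insert-and-get-key operation differs by vendor (table names invented; the non-T-SQL forms are shown as comments because they will not parse on SQL Server):

```sql
-- SQL Server (T-SQL):
INSERT INTO Users (Name) VALUES ('alice')
SELECT SCOPE_IDENTITY() AS userid

-- MySQL:      SELECT LAST_INSERT_ID();
-- PostgreSQL: INSERT ... RETURNING id;   (or SELECT currval('users_id_seq');)
-- Oracle:     INSERT ... RETURNING id INTO :out_id;  (sequence-generated keys)
```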
-
Wow! Your question has started a huge debate on so many related topics. I haven't read the entire thread, so I'm sure some of what I say will just be a rehash, but here's my 2 cents:

1) To be truly DB-agnostic you need a Data Access Layer written for each DB. Your sample is proof of that (@@IDENTITY and SCOPE_IDENTITY()).

2) Stored procedures DO reduce network traffic, but the network is where the load is seen, not the database. Given the exact same resultset, a batch of 10 lines multiplied over hundreds of queries per second WILL create a greater network load than using stored procedures. However, as long as your dynamic queries use parameters you will not see any significant load on the db. This brings us back to #1 - SQL Server parameters use @paramName while other vendors just use '?' (I have to admit I haven't used other vendors; this is just based on sample code I've seen). Anyhow, using stored procedures vs. dynamic SQL is simply a design choice, which must be weighed against your program's requirements. But your 'dynamic' SQL should ALWAYS use parameters. The difference between using parameters and not using them can be as much as a factor of 200 in favor of parameters.

3) Using @@IDENTITY in the same batch runs the risk of pulling the wrong identity value, and the risk increases dramatically when it is run in a separate batch.

4) Do your web apps have any use to third parties? That's the first question to be asked. Unless your company intends to change its business model - adding technical support, investing time and money into deploying and supporting web applications on external systems, and pulling resources away from your core functions - I don't see the point in even addressing the issue. I would say in most cases, applications written for internal use and then retro-fitted for third parties are begging for problems. I'm not just referring to portability issues here; I'm talking about culture.

Applications are written with specific groups of end users in mind - in this case your internal company culture. That means the UI is designed around your company and the way it uses the application. If a different company starts using your application, they have different goals and will want to use the application in different ways, based on their business processes and how they want to integrate it with their existing IT systems.
-
Don't get your hopes up that inline SQL will just run on any database you want. Even coding directly to the ANSI specification (92, 99, or 03) does not guarantee you can point your app at any database and have it work, or work without issues (like performance issues). Even in your simple example, "select @@identity" will only work on MSSQL databases. Just like Java: "Write once, test everywhere..." You might want to think about abstracting your data access into different interfaces - something like an MSSQL interface, an Access DB interface, a MySQL interface. This, combined with a good unit testing plan, can get you close to DB independence...
Cheers
-
Who is this idiot? Somebody needs to have a talk with him. Using stored procs is the only way to make sure that the application doesn't have to be re-designed when you change database vendors. A stored proc call has the same syntax on almost all servers; the instructions within those procs may be vastly different depending on the flavour of SQL. To keep things consistent when you change database vendors, you simply make sure the new database has the same stored procs and that those procs return the same resultsets as the old ones. This makes the change of database vendor *transparent* to your applications, which is what you want. Without procs, you have to rewrite and recompile the application. With procs, you can make database installer/setup scripts for many DB vendors and simply install the databases with those, while your application remains the same for all installations. This is pretty simple - even your idiot boss should understand it.

There are about 500 other reasons to use stored procs, performance and security being the big ones, but your boss's reasoning is just plain wrong. This is not a case of a difference of opinion - what he's saying is incorrect. You might not mention this to him, but not all vendors support SQL user-defined functions, so he's right about those; he's not right about stored procs, since almost all DB vendors support them (even MySQL has them now).

If he's really reluctant after having his reasoning corrected, you might look for another job. There are people out there who deliberately try to make things harder on developers. Typically they are expert programmers with a sadistic streak, so they know they are asking you to do something wrong. Don't tolerate this treatment... if bosses don't respect your expertise, it reflects badly on them, and eventually someone will notice that he's asking employees to waste time and money simply so he can assert his authority. Weak... very weak.
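The uniform-call claim can be sketched as follows (proc, table and parameter names invented for illustration): the application always issues the same call, while each vendor's installer script defines the body in its own dialect.

```sql
-- What the application sends stays the same everywhere:
--   SQL Server:             EXEC AddUser @Name = 'alice'
--   ODBC/JDBC escape form:  { CALL AddUser(?) }   (works on most vendors)

-- SQL Server body (T-SQL); an Oracle installer would create the same
-- proc in PL/SQL, returning the same resultset.
CREATE PROCEDURE dbo.AddUser
    @Name varchar(50)
AS
BEGIN
    INSERT INTO dbo.Users (Name) VALUES (@Name)
    SELECT SCOPE_IDENTITY() AS userid
END
```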
"Quality Software since 1983!"
http://www.smoothjazzy.com/ - see the "Programming" section for (freeware) JazzySiteMaps, a simple application to generate .Net and Google-style sitemaps! -
Joe Woodbury wrote:
Stored procedures improve performance
To whom? Not the end user - that's been proven time and again, so often that I'm stunned to see some people here still contributing to that fallacy.
No it has not. On the Yahoo SQL mailing list, we argued about this for a while, then someone did some tests. Stored procs ALWAYS performed faster. The difference was small, but there was a difference. Test it yourself before you go around saying things like this. I think I know the real reason people don't like stored procs, and it has nothing to do with performance - it has to do with lazy and ignorant programmers. Overall, it doesn't bother me a whole lot. My apps will be better than yours. Mine will be more secure, perform better, and be more portable and easier to update. I'm happy to compete with you on that playing field. -- modified at 12:49 Thursday 4th January, 2007 It's possible that it's round-trip time (RTT) that makes the dynamic version slower, and it definitely contributes, but saying it's not the proc itself that's hurting things is beside the point - it's still slower.
-
After I posted that I realized I should have written "stored procedures CAN improve performance" since I actually know of cases where they cause lower performance (which I eluded to in a later sentence.) (I modified my original post with a note indicated that I had done so.)
Anyone who thinks he has a better idea of what's good for people than people do is a swine. - P.J. O'Rourke
Joe Woodbury wrote:
I actually know of cases where they cause lower performance
Can you post an example? I've been doing this for many years and I have never seen that happen except in cases where the stored proc was horribly written. Embedded/dynamic SQL can be poorly written as well.
-
There are many good reasons why you should use stored procedures over dynamic sql. I'd recommend that you continue to use stored procedures - they can actually isolate your application from changes to the database schema. If you embed code like this in your application:
recordset.open( "insert data into table" ) ... recordset.open( "select @@identity" )
- you will need to re-write your application for compatibility with Oracle and other database vendors. It might be easier to convert the T-SQL SPs to PL/SQL SPs and avoid a re-compile of your application. You can also use SPs to enhance security - does your application really need write access to the underlying tables? You can grant execute permissions on the SPs and not require write access to the underlying tables. They can also help prevent SQL injection attacks - something you want to be careful of with dynamic SQL. My advice: use dynamic SQL if you have to; otherwise stick with SPs. Dynamic SQL has its uses, though... a web application I developed for work relies heavily on dynamic SQL - in that particular case, dynamic SQL was the only option available. -
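The grant-execute point above can be sketched as a hedged example (object and login names invented):

```sql
-- The web app's login can run the proc but cannot touch the table.
GRANT EXECUTE ON dbo.AddUser TO WebAppUser
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Users TO WebAppUser
-- Under SQL Server ownership chaining, the proc can still read and
-- write dbo.Users, because the proc and the table share an owner.
```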
That's the reason! Use a piece of code that runs a stored procedure and worry about the syntax in the database. The only reason I've found not to use stored procedures (in real life) is that the debugging is too hard. There is no reason to use ten layers and 100 automatically generated stored procedures when all you want is to write a single value into a simple table or something.
---------- Siderite
I suppose if maintenance weren't an issue. BTW, if you are having trouble debugging stored procedures, then they are too big.
On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. - Charles Babbage
-
Jasmine2501 wrote:
Can you post an example?
If the server has a very large number of clients running queries which cause the same stored proc to be used.
Anyone who thinks he has a better idea of what's good for people than people do is a swine. - P.J. O'Rourke
-
Sadly, that's much less protection than simply using dynamic SQL in an obfuscated, string-encrypted application - there are numerous well-publicized ways of decrypting SPs.
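For reference, a hedged sketch of the (weak) proc encryption being discussed (proc name invented): SQL Server's WITH ENCRYPTION option obfuscates the stored text, but as noted above, published techniques can recover it.

```sql
-- The proc body is stored obfuscated rather than as plain text.
CREATE PROCEDURE dbo.GetSecret
WITH ENCRYPTION
AS
BEGIN
    SELECT 42 AS secret
END
-- sp_helptext 'dbo.GetSecret' now refuses to show the source,
-- but this is obfuscation, not real protection.
```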
I submit that if your "customer" is willing to go through that much hassle to muck up your application, there's nothing you can do to protect it, anyway.
Grim
(aka Toby)
MCDBA, MCSD, MCP+SB
SELECT * FROM users WHERE clue IS NOT NULL GO
(0 row(s) affected)
-
Whoa! Why so harsh? The devil, as they say, is in the details. I never once said what you seem to think I said; what I said was that there is not enough of a difference (in many cases no difference) in time to be perceived by the end user of the software. I don't know who you write software for, but I write software for people, and people don't give a damn how the program works - they just want the features they need in an easy-to-use program. Arguing over something that, even if it were true (and it clearly isn't the issue you seem to think it is anymore), would add up to a sum total of perhaps at most 10 seconds of extra time over the course of an entire day for a single user is... well... geeky at best. ;)
Jasmine2501 wrote:
Test it yourself before you go around saying things like this.
The fact is I did nothing but testing for a solid month with 4 different database back ends when I was considering the switch to dynamic SQL in the first place. My experience is based on real-life testing and profiling with a slew of complex stored procedures ported from Microsoft SQL Server to ANSI SQL on 4 other platforms. In fact, as I was developing the DAL (Data Access Layer) in my app and moving each former stored procedure to dynamic SQL, I profiled each and every one.

You are operating under assumptions that date back to at least SQL Server 6.5. I'll quote you the relevant bits from SQL Server Books Online:

"In SQL Server version 6.5 and earlier, stored procedures were a way to partially precompile an execution plan. At the time the stored procedure was created, a partially compiled execution plan was stored in a system table. Executing a stored procedure was more efficient than executing an SQL statement because SQL Server did not have to compile an execution plan completely, it only had to finish optimizing the stored plan for the procedure. Also, the fully compiled execution plan for the stored procedure was retained in the SQL Server procedure cache, meaning that subsequent executions of the stored procedure could use the precompiled execution plan. SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement."
-
Rocky Moore wrote:
don't you have to add code to make sure that never happens, such as checking if the GUID is already used?
I don't. I really don't expect it to be an issue. Besides the already tiny probability of two GUIDs being identical, they would also have to be identical in the same table. It seems extremely remote.
Rocky Moore wrote:
Also, isn't it a lot of overhead in the indexes for the larger data of a GUID compared to a int?
There's a lot of debate on the subject (and a lot of links you can read up on). Someone published some code somewhere that helps with clustering, but I'd rather not touch it. The thing I like about GUIDs is that they make it easy to merge offline changes. And I have to deal with that occasionally. Marc
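A hedged T-SQL sketch of the GUID-key scheme being discussed (table and column names invented): because each client generates its own statistically unique keys, rows created offline can be merged later without collisions.

```sql
-- NEWID() generates a GUID per row, so disconnected clients can
-- insert without coordinating key ranges with the central server.
CREATE TABLE dbo.Orders (
    OrderId uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    Placed  datetime         NOT NULL DEFAULT GETDATE()
)
-- Caveat: random GUIDs fragment a clustered index; SQL Server 2005's
-- NEWSEQUENTIALID() mitigates this, but it is not available on 2000.
```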
People are just notoriously impossible. --DavidCrow
There's NO excuse for not commenting your code. -- John Simmons / outlaw programmer
People who say that they will refactor their code later to make it "good" don't understand refactoring, nor the art and craft of programming. -- Josh Smith
Marc Clifton wrote:
The thing I like about GUID's is it makes it easy to merge offline changes.
Not to mention that they're required for replication, anyway.
Grim
(aka Toby)
MCDBA, MCSD, MCP+SB
SELECT * FROM users WHERE clue IS NOT NULL GO
(0 row(s) affected)
-
So, you don't like it because it's harder? You're one of those developers that does everything the easy way? Hmmm....
-
Jasmine2501 wrote:
Can you post an example?
One case that makes sense is when the execution plan of the proc (at least of the static SQL within it) is compiled according to the set of parameters the proc is called with at its (first) compilation. That plan is not necessarily the best one for other cases. I imagine unchanging dynamic SQL (text the same, but parameterized) behaves the same way. However, dynamically structured SQL (not just parameterized, but with changing statement text) is recompiled for each new statement, so the plan can reflect the actual query. This probably only matters if the original stored proc is structured to handle multiple scenarios (e.g. search permutations) while composing case-specific dynamic SQL would be leaner and produce a better execution plan. E.g. the stored proc could have lots of ORs and join on too much in order to cover every parameter scenario - like a general-purpose search procedure with lots of parameters. There might be too many search permutations to have a proc for each one, whereas dynamic SQL's recompilation overhead could be less than the benefit of a better execution plan.
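A hedged sketch of the 'general-purpose search' shape being described (table and parameters invented): the catch-all WHERE clause forces one plan to serve every permutation of parameters.

```sql
-- Catch-all proc: the plan compiled for the first caller's parameters
-- must serve every later combination, and may be poor for most of them.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId int     = NULL,
    @Status     char(1) = NULL
AS
BEGIN
    SELECT OrderId, CustomerId, Status
    FROM dbo.Orders
    WHERE (@CustomerId IS NULL OR CustomerId = @CustomerId)
      AND (@Status     IS NULL OR Status     = @Status)
END
-- Case-specific dynamic SQL would emit only the predicates actually
-- supplied, letting the optimizer build a tighter plan per permutation.
```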
-
Rocky Moore wrote:
Yeah, I sure hope mission critical apps think about it
I love how "mission critical apps" is the ultimate "ooh, now I'm scared" buzzword to drop on people. Consider: while each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128, or about 3.4×10^38) is so large that the probability of the same number being generated twice is very small. Now your basic int, a 32-bit value (4 bytes), is going to roll over much sooner than a GUID is going to collide. What does your DB do when the autonumbering rolls over? I actually tested that once. Now consider some other numbers. Even at 2^31 for a signed int, that's about 2.1 billion records. Hmmm. I don't know a lot of mission critical apps that will ever hit 2 billion rows. I can think of a couple, like cataloging stars or grains of sand. More numbers: at 4 bytes per int ID, that's roughly 8 GB just for the IDs if you had a record for every possible integer ID. Let's say, on average, another 1000 bytes of data per row? That's another 2^31 * 1000 bytes, or about 2 terabytes of data. That terabyte-scale star catalog is the only thing I can think of that requires that much space. So let's be real when we talk about mission critical apps: 2 billion records? 2 terabytes of disk space? And those numbers assume an integer ID exhausting its entire range.
Rocky Moore wrote:
In a large application, they would have to be unique to maybe 20-100 tables.
No. Only the primary key needs to be unique. Not the FK's! And identical keys in separate tables isn't an issue either!
Rocky Moore wrote:
Not to mention that have no order so you end up using another field to force order.
Ummm...PK's should be abstracted anyways. You'd never order on a PK!
Rocky Moore wrote:
Little things like this bug me so bad
Well, we pick our battles. If you actually stop and think about the issue, I think there's very little to be bugged about, and actually a case more for being bugged about your mission critical apps using an integer ID. Marc
People are just notoriously impossible. --DavidCrow
I've worked with one database where having an integer instead of a GUID made a difference. That was a table with information on every Dell system sold, driving their support website. At the time it was 700 million rows, but servers have gotten more powerful since then, and in that case 64-bit systems solve it. The main limit we were running into wasn't disk space, but memory space for the SQL Server data cache. Since we were running with a lot of users, getting the fastest possible response for the queries was essential.
Using the GridView is like trying to explain to someone else how to move a third person's hands in order to tie your shoelaces for you. -Chris Maunder