Would this interest anyone?

The Lounge · Tags: database, question, csharp, sql-server, algorithms · 17 Posts, 4 Posters
PIEBALDconsult wrote:

    I agree, but wouldn't you still land the incoming data in a staging table before performing such checks and moving the data to its destination?

lmoelleb replied (#8):

Most likely yes. And if tokens need to be extracted, I would try to do it in the middle tier before streaming the data into temp tables. But I can't rule out that there are use cases where that is just too slow - and if you are locked into a relational database, this might be a solution in that case... my solution would probably be to change jobs. :)
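
For reference, the stage-validate-move pattern being discussed might look roughly like this in T-SQL; the table and column names are hypothetical, and TRY_CAST assumes SQL Server 2012 or later:

-- Hypothetical landing zone: every column arrives as text.
CREATE TABLE dbo.ImportStaging
(
    RowId      INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId NVARCHAR(50),
    Quantity   NVARCHAR(50),
    OrderDate  NVARCHAR(50)
);

-- After the bulk load, validate and move in one transaction so the
-- destination table never sees a partially converted batch.
BEGIN TRANSACTION;

INSERT INTO dbo.Orders (CustomerId, Quantity, OrderDate)
SELECT TRY_CAST(CustomerId AS INT),
       TRY_CAST(Quantity   AS INT),
       TRY_CAST(OrderDate  AS DATE)
FROM   dbo.ImportStaging
WHERE  TRY_CAST(CustomerId AS INT)  IS NOT NULL
  AND  TRY_CAST(Quantity   AS INT)  IS NOT NULL
  AND  TRY_CAST(OrderDate  AS DATE) IS NOT NULL;

COMMIT TRANSACTION;
-- Rows that fail conversion stay behind in ImportStaging for inspection.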

lmoelleb wrote:

So far I have not had a use case that would justify this, and I will fight hard to keep it like that. :) My validation belongs in the middle tier (so whatever hosts your REST API or the like). The database maintains relational integrity, null checks - that type of stuff. My web server is the ONLY system with a connection to my tables. Some validation is also present client side for usability (fail fast), but then it is always duplicated in the middle tier - it has to be, as the REST API can of course be called from anything, not just your own UI. So the only real use case I can see is when you need to stream large datasets where the validation code needs access to a lot of data already in the database - that would be too slow in my model, as the middle tier would have to do a lot of reads while validating the data being streamed through it. Or am I misunderstanding the benefit of this?

honey the codewitch replied (#9):

Have you ever worked in an environment where your servers are not allowed to do direct table access? If so, how do you update multiple records from a stored procedure? Or even do a request for multiple results using multiple keys from a stored proc *while maintaining atomic operations?* I've run into this situation several times, where at the very least I needed to send a delimited list of ints to a stored proc and parse it in the proc. That can be done with splits of course, but to do it cleanly you actually need to do more than that: check for multiple runs of whitespace and such (or commas, if you're doing that). What then? Have you ever run into that?

      Real programmers use butterflies
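
For what it's worth, the split-and-parse approach described here can be done inside the procedure itself on SQL Server 2016+ using STRING_SPLIT; the procedure, table, and column names below are hypothetical:

-- One EXEC, many keys, a single (and therefore atomic) UPDATE.
CREATE PROCEDURE dbo.MarkOrdersShipped
    @OrderIds NVARCHAR(MAX)   -- e.g. ' 1, 2 ,,  3 '
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE o
    SET    o.Status = 'Shipped'
    FROM   dbo.Orders AS o
    WHERE  o.OrderId IN
           (
               SELECT TRY_CAST(LTRIM(RTRIM(s.value)) AS INT)
               FROM   STRING_SPLIT(@OrderIds, ',') AS s
               WHERE  LTRIM(RTRIM(s.value)) <> ''   -- skip empty tokens left by doubled commas or whitespace runs
           );
END;

Because the update is a single statement, it is atomic without an explicit transaction; the trimming and the empty-token filter handle the stray whitespace and repeated delimiters mentioned above.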

lmoelleb replied (#10), quoting honey the codewitch's question above:

Nope. Using stored procedures for security was the norm back when VB applications accessed the database directly. Modern .NET or Java makes it a lot easier to write secure code than stored procedures do - and if you want security, you want the code to be as simple as possible. I have run into people who questioned this (and typically their security knowledge was... well... not as up to date as one could wish), but I have never had any serious issue deploying into pretty secure environments. I am sure those environments exist, as all it takes is a CTO stuck in the past. I would avoid a work environment where this is the case - and the product I work on now will not be sold to a customer that has this requirement.

PIEBALDconsult replied (#11), also quoting honey the codewitch's question above:

          No, I've only worked for worthwhile employers.

honey the codewitch wrote:

I think you're not hearing me. This isn't about users. It's about overcoming a situation where you're literally not allowed to do direct access to core tables in a DB from an external server like a webserver - from anything in the "DMZ", for that matter. So what bogs down a DB server more: running EXEC 50 times, or running EXEC 1 time and processing a bunch of data in batch, even if doing so requires string ops? Furthermore, what of maintaining concurrency and doing things atomically in that situation? I'm not sure you've even run into the situation where you need to send multiple keys to a stored procedure. You write as though you've never had to deal with that.

            Real programmers use butterflies

PIEBALDconsult replied (#12):

I have always been in the back end, never even in the DMZ. Were I to dabble farther out, I would have a Web Service performing all of that stuff. And still, it wouldn't deal with CSV; that would be handled by something else -- such as SSIS. It always comes down to this: something has to access the tables, and adding needless complexity is a Very Bad Idea.

It sounds like you are misusing Procedures, like so many who think they must. If I recall correctly, SQL Server (v6 and v7*) prior to 2000 (?) -- or maybe it was just the ODBC driver, I don't fully remember -- didn't support transactions, so using Procedures to handle multi-statement transactions was critical. But that simply is not the case anymore. -- PTSD over converting Oracle PRO*C code to T-SQL with ODBC in the late 90s. *shudder*

What I find most irksome is that so many developers think that even single-statement transactions must be performed by Procedures and that doing so somehow magically provides "security".

* My copy of the "SQL Server 7 Developer's Guide" (1999) seems to indicate that transactions are supported by DAO and ODBCDirect, but maybe not by ODBC. In the late 90s we were using SQL Server 6 on Windows NT.

DerekT P replied (#13), again quoting honey the codewitch's question above:

Yes. The standard framework I developed and use for pretty much all my web-based apps is access via SPs only; no direct table access at all. Generally I don't need to update multiple rows, but I certainly can do so, though I prefer to avoid it. If there are multiple updates required, then I wrap the lot in a transaction persisted across SP calls. (For one thing, if something fails, the webserver then has a much easier time of working out what failed.)

And yes, there are also occasions when I need to send delimited lists to an SP (typically keys to match an IN clause), but then I just use a UDF to parse it and return a table result. (And the webserver is responsible for ensuring the list is "clean" wrt whitespace etc.)

Conceptually, I'm not sure I'd feel comfortable loading an unverified CSV straight into the database using generic type validation routines. Generally there's more to checking input than just checking field type, and that sort of stuff belongs in the business layer, well away from the database. It feels a bit like this is a workaround to gain direct table access through the back door, bypassing the strong typing that SP parameters at least partially enforce.
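
A sketch of the kind of UDF described here, again assuming SQL Server 2016+ and hypothetical object names:

-- Inline table-valued function: delimited string in, table of ints out.
CREATE FUNCTION dbo.SplitIntList (@List NVARCHAR(MAX))
RETURNS TABLE
AS
RETURN
    SELECT TRY_CAST(LTRIM(RTRIM(s.value)) AS INT) AS Id
    FROM   STRING_SPLIT(@List, ',') AS s
    WHERE  LTRIM(RTRIM(s.value)) <> '';

Inside a stored procedure, the result then feeds the IN clause, e.g. WHERE OrderId IN (SELECT Id FROM dbo.SplitIntList(@OrderIds)).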

honey the codewitch replied (#14), quoting lmoelleb's reply above:

                That's good to know. It has been years since I coded for SQL professionally. I didn't realize that had changed so much.

                Real programmers use butterflies

honey the codewitch replied (#15), quoting DerekT P's reply above:

I don't disagree with what you wrote; I'd just like to clarify that "typed" is my term. What's literally happening is that each field is being regex-checked for consistency, such that integers are matched by one pattern and strings by another. The text is "tokenized".

                  Real programmers use butterflies

honey the codewitch replied (#16), quoting PIEBALDconsult's reply above:

What doing that does is allow the DBA to restrict the servers' access to the database tables, so if a box in the DMZ is compromised, it has only limited access to the database - usually read-only, aside from specific functions. There's nothing magic about it.

                    Real programmers use butterflies
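
A minimal sketch of how that restriction is commonly expressed in SQL Server, with a hypothetical role and user for the web tier:

-- Procedures only for the DMZ/web tier: no direct table access.
CREATE ROLE web_api_role;

DENY  SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO web_api_role;
GRANT EXECUTE                        ON SCHEMA::dbo TO web_api_role;

-- The account the web server connects as gets only this role.
ALTER ROLE web_api_role ADD MEMBER web_api_user;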

lmoelleb replied (#17), quoting honey the codewitch's reply above:

One thing I forgot to mention - if the rule is "no machine with direct internet access is allowed to connect to the database", then a standard reverse proxy between the internet-exposed endpoint and the "real" webserver will do the trick.
