Imposing data type and length restrictions at the database level is stupid! - Part II

Lost User wrote:

I had my students develop the DB first; from that, one can easily build a web or a desktop UI, independent of each other. Other apps can access it too, and when using a mainstream DB this decouples the data layer from the rest.

    5teveH wrote:

    First of all, we already do this. I'm pretty sure most developers would define the MaxLength for a TextBox and use a Calendar Widget for date input. Secondly, validation logic should be developed as reusable code - which minimises the need for future developers to learn and re-code that logic.

The max length, as a simple example, applies as much to the DB as to the UI. Most tools generate their UI from the restrictions that reflection over the schema provides.
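As a rough sketch of that idea (SQLite and the customer table here are invented stand-ins, not anything from the thread), the declared column lengths can be read back from the schema and reused as the UI's MaxLength values:

```python
import re
import sqlite3

# Hypothetical table; the point is that the length limit lives in the schema only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name VARCHAR(40), email VARCHAR(120))"
)

def declared_max_lengths(conn: sqlite3.Connection, table: str) -> dict[str, int]:
    """Map column name -> declared max length, parsed from the column's declared type."""
    lengths = {}
    for _cid, name, decl_type, *_rest in conn.execute(f"PRAGMA table_info({table})"):
        match = re.search(r"\((\d+)\)", decl_type or "")
        if match:
            lengths[name] = int(match.group(1))
    return lengths

limits = declared_max_lengths(conn, "customer")
print(limits)  # {'name': 40, 'email': 120}
# A UI layer can now set textbox.max_length = limits["name"] instead of hard-coding 40.
```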

    5teveH wrote:

    Performance. You are going to have to take my word for it, but I am 100% sure that with less disk, less CPU and less memory, I can deliver better performance than could be achieved using a traditional RDBMS. Also, indexing data does not have a performance hit. And the greater the volume of data, the more confident I would be that performance would be better.

You don't have to take mine; you give me a model, I put it into BCNF and hand you the metrics. I do not ask for belief, I provide measurements. Indexing data HAS a performance hit; the PC has to do something additional, so how can you claim that the extra processing has no performance hit?
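For what it's worth, here is a toy illustration of the kind of decomposition being talked about; the tables are made up, and this is a sketch, not a claim about anyone's actual data model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Before: customer and product facts repeat on every order row, so
-- customer_email -> customer_name and product_code -> product_name
-- are dependencies on attributes that are not keys of this table.
CREATE TABLE orders_flat (
    order_id       INTEGER PRIMARY KEY,
    customer_email TEXT,
    customer_name  TEXT,
    product_code   TEXT,
    product_name   TEXT
);

-- After: each dependency sits in a table where its determinant is the key,
-- and the order row only carries the keys.
CREATE TABLE customer (email TEXT PRIMARY KEY, name TEXT);
CREATE TABLE product  (code  TEXT PRIMARY KEY, name TEXT);
CREATE TABLE orders (
    order_id       INTEGER PRIMARY KEY,
    customer_email TEXT REFERENCES customer(email),
    product_code   TEXT REFERENCES product(code)
);
""")
```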

    Bastard Programmer from Hell :suss: "If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.

5teveH (#21) wrote:

    Eddy Vluggen wrote:

    Indexing data HAS a performance hit

True. I was responding to a comment about indexing variable-length data having a much bigger performance hit - and that was not at all clear. Yes, I totally accept that everything you do on a system is going to take some resource - including indexing data. But, certainly on the database I work with (which is completely variable-length), there is no real-world impact when you index data. Which I don't think is surprising: if you design a database from the ground up to be variable-length, you are going to deal with things like indexing variable-length data as the rule, rather than the exception.
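A back-of-the-envelope way to check a claim like this (SQLite stands in here; it is not Pick/UniVerse, and the numbers are only illustrative) is to time the same lookup on a variable-length TEXT column before and after adding an index:

```python
import random
import sqlite3
import string
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT)")

# 100k rows of variable-length text, 5..200 characters each.
rows = [(i, "".join(random.choices(string.ascii_lowercase, k=random.randint(5, 200))))
        for i in range(100_000)]
conn.executemany("INSERT INTO doc VALUES (?, ?)", rows)
target = rows[54_321][1]

def time_lookup() -> float:
    start = time.perf_counter()
    conn.execute("SELECT id FROM doc WHERE body = ?", (target,)).fetchall()
    return time.perf_counter() - start

before = time_lookup()                          # full table scan
conn.execute("CREATE INDEX doc_body ON doc(body)")
after = time_lookup()                           # index seek
print(f"scan: {before * 1000:.2f} ms, indexed: {after * 1000:.2f} ms")
```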

5teveH wrote:

I thought this might stir up a few of the 'traditionalists' on here! :-D Rather than answer the many comments individually, I'll try to respond with this single post.

I had spent nearly 5 years developing COBOL applications before our boss told us about the Pick database that we were going to be moving to. To be honest, I couldn't see how it could possibly work and raised the same concerns that have been posted here today. There are a couple of common observations:

Including 'DB rules' in the code/UI. First of all, we already do this. I'm pretty sure most developers would define the MaxLength for a TextBox and use a Calendar Widget for date input. Secondly, validation logic should be developed as reusable code - which minimises the need for future developers to learn and re-code that logic.

Performance. You are going to have to take my word for it, but I am 100% sure that with less disk, less CPU and less memory, I can deliver better performance than could be achieved using a traditional RDBMS. Also, indexing data does not have a performance hit. And the greater the volume of data, the more confident I would be that performance would be better.
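On the "validation logic as reusable code" point, a minimal sketch (field names and limits invented for illustration) of one rule set shared by the UI and the data layer might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldRule:
    max_length: int
    required: bool = True

# Hypothetical rules; in practice these could be loaded from the schema itself.
CUSTOMER_RULES = {
    "name":  FieldRule(max_length=40),
    "email": FieldRule(max_length=120),
}

def validate(record: dict[str, str], rules: dict[str, FieldRule]) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for field, rule in rules.items():
        value = record.get(field, "")
        if rule.required and not value:
            problems.append(f"{field} is required")
        elif len(value) > rule.max_length:
            problems.append(f"{field} longer than {rule.max_length} characters")
    return problems

# The UI calls validate() on submit; the data layer calls it again before
# writing, so the rule is defined in exactly one place.
print(validate({"name": "Ada", "email": "ada@example.com"}, CUSTOMER_RULES))  # []
```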

Asday (#22) wrote:

> You are going to have to take my word for it, but I am 100% sure that with less disk, less CPU and less memory, I can deliver better performance than could be achieved using a traditional RDBMS.

I'm not gonna. If you were such a wizard, someone would have hired you to write the next PostgreSQL and make a mint.


Lost User (#23) wrote, in reply to 5teveH (#21):

        Variable length is a pointer to a blob. Indexing gives a performance gain if done right. Enjoy. I'll be asking double to clean up after you.
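As an aside on "if done right": whether an index actually gets used depends on how the query is phrased. A small sketch with SQLite as a stand-in (exact plan wording varies by version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE INDEX doc_body ON doc(body)")

def plan(sql: str) -> list[str]:
    """Return the planner's description of each step for the given query."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

# Plain equality on the column: the planner can seek the index
# (reported as something like "SEARCH doc USING COVERING INDEX doc_body (body=?)").
print(plan("SELECT id FROM doc WHERE body = 'x'"))

# Wrapping the column in a function hides it from that same index,
# so the plan falls back to a full scan ("SCAN doc").
print(plan("SELECT id FROM doc WHERE lower(body) = 'x'"))
```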

        Bastard Programmer from Hell :suss: "If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.


jschell (#24) wrote:

          5teveH wrote:

          You are going to have to take my word for it, but I am 100% sure that with less disk, less CPU and less memory, I can deliver better performance than could be achieved using a traditional RDBMS. Also, indexing data does not have a performance hit. And the greater the volume of data, the more confident I would be that performance would be better.

          And from a different thread...

          5teveH wrote:

          after nearly 40 years of working with this database, (Pick/Universe),

I have been dealing with databases for 40 years. Different databases. Different industries. Different types of enterprise systems. And during that time I have also seen databases change. For instance, just the infrastructure they run on has changed enough that things that used to matter no longer do, and things that were never even looked at before matter much more now. So that 40 years of experience doesn't translate well into deciding what one should do now versus what one did then. About the only real thing it allows is telling stories and being better able to recognize 'tribal knowledge' that is based on something no longer valid.

The first comment is completely open-ended, without restrictions and without even providing specific definitions. In my experience, making broad sweeping claims about anything always leads to one thing - failure. Followed by a lot of rationalization and backsliding about what was really meant by the original claims.

For starters, "performance" can mean almost anything, but in the real world customers have real needs for what "performance" means. They don't care about benchmarks. They do care about how long they have to sit around waiting for results to show up on the screen, and how long it takes to come up with new features that they think they need. (The two are not complementary.)

Moreover, in terms of actual performance at the enterprise level, performance is achieved by requirements modifications and not technology. One might gain a 1% boost with technology but might gain 6,000% by changing requirements. That last is based on a real-world example.
