Database makers not paying attention?
-
I recently had a project that I would call forms processing. While it is typical in that situation to have over 100 fields of data, there are at most a dozen fields you work with programmatically or even care about, except to store and print out later. If you are only using the database as a data store and have no other need, why should you be forced to break the data up into some kind of grouping just to cater to the database or tool maker? Sometimes data has no relationship within itself (or none we care about) to justify modularizing it; it is always read and stored as a whole. This is the second time in the last couple of years I have needed a large data store of this nature.

I was ripping along this morning when I got an error from the Data Adapter wizard saying it could not generate the SELECT. After a while searching the net, I found that the wise Microsoft developers decided you will never need more than 100 fields and that they were going to enforce this on you. I use the tool because it handles my needs and does so very quickly. Not the case for large tables. There are several ways around this (including building the SELECT, INSERT, and UPDATE code by hand - hey, 117 fields, I do not think so!), but they all require time I did not plan on nor expect to waste because someone decided 100 fields are too many.

Databases seem to always try to limit you. I understand that some limits exist for performance, but as a developer I know you can provide options to override limitations and make your product fit more situations. It is like MS Access files still being limited to 2,000 bytes of actual data per record - come on, most of my email is larger than that. At least MS SQL Server has an option to override its default record limit. I know some databases do have options to help out, but most still limit you and try forcing you into a relational grouping when sometimes it just does not exist.

Anyone else get bit by these limitations?

Rocky <>< www.HintsAndTips.com www.GotTheAnswerToSpam.com
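[Editor's note: for anyone stuck behind the same wizard cap, one workaround is to generate the wide statements from a column list instead of typing them by hand. The sketch below is not from the original post; the table and field names are hypothetical placeholders.]

```python
# Minimal sketch: build the SELECT/INSERT text for a very wide table from a
# column list, rather than typing 117 column names by hand. "FormArchive" and
# the FieldNNN names are made up for illustration; substitute the real schema.

def build_statements(table, columns):
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)   # one parameter marker per column
    select_sql = f"SELECT {col_list} FROM {table}"
    insert_sql = f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})"
    return select_sql, insert_sql

if __name__ == "__main__":
    columns = [f"Field{i:03d}" for i in range(1, 118)]   # 117 placeholder fields
    select_sql, insert_sql = build_statements("FormArchive", columns)
    print(select_sql[:80] + "...")
    print(insert_sql[:80] + "...")
```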
-
Rocky Moore wrote: While it is typical in that situation to have over 100 fields of data, there are at most a dozen fields you work with programmatically or even care about, except to store and print out later. Um, maybe you should think about redesigning your db. Rocky Moore wrote: Anyone else get bit by these limitations? Refer to my first comment. ;) Jeremy Falcon
-
Hiya! Oh, I *LOVE* these discussions :) Incidentally, your overall problem sounds like an issue with the data adapter wizard and not your RDBMS. I don't know the limit for SQL Server off the top of my head, but typically it's comparable to Oracle. Oracle 9i's column limit is 1000 per table and roughly 30-ish per index depending on the index type. I would be very surprised if SQL Server's were below 256, which was a fairly common limit as of about 4 years ago. 100 fields in a table is NO problem for most RDBMSs.

The limits on columns are there because of the storage mechanism, not a logical limit of the table. Basically, the way the block or page (depending on flavor) is stored just can't handle any more than that set limit, typically assuming the smallest datatype.

Also, a suggestion: don't complain about an RDBMS being relational. That's like asking a dog not to pee on a hydrant. That's what it does. If you want it to be non-relational, go use a flat database that allows SQL queries. There are a few out there, and that's what they are for.

Basically, as a DBA, my first look would be at your data for proper table structure. If I can't find ANY relation, I would either A) make one, or B) choose a different technology (like a flat db).

peace! :) Mark Conger Sonork:100.28396
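[Editor's note: a rough back-of-the-envelope calculation makes the storage-mechanism point concrete. The byte figures below are assumptions for illustration only, not the documented limits of any particular engine.]

```python
# Illustrative arithmetic: a fixed page/row byte budget is what bounds the
# column count, not relational theory. All numbers here are assumed.

page_bytes = 8 * 1024      # assume an 8 KB page
row_overhead = 100         # assumed per-row header/bookkeeping bytes
bytes_per_column = 30      # assumed average fixed-width column size

max_columns = (page_bytes - row_overhead) // bytes_per_column
print(f"Roughly {max_columns} columns of ~{bytes_per_column} bytes fit in a {page_bytes}-byte page")
```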
-
Jeremy Falcon wrote: Um, maybe you should think about redesigning your db. Uh, what redesigning? You have data that is only stored, has no value other than as an archive, is not related within itself, and serves no purpose to relate, so what design? Spend hours breaking it down into different tables trying to make some phony relationship, just so that you can query all the tables for all the fields every time you want to update, insert, or delete a record? Doesn't that sound like a lot of work for no value? Modularize data for the simple purpose of putting it back together again? Rocky <>< www.HintsAndTips.com www.GotTheAnswerToSpam.com
-
Rocky Moore wrote: You have data that is only stored and has no value other than archive that is not related within itself and servers no purpose to relate them, so what design? You should be thinking in rows and not columns. Why not have an archive table? That design would not limit the number of archives. Also, you can always relate it to the data you’re archiving. That's not a phony relationship. Jeremy Falcon
-
Mark Conger wrote: Basically, as a DBA, my first look would be at your data for proper table structure. If I can't find ANY relation, I would either A) make one, or B) choose a different technology (like a flat db). Be careful, Mark [edit]Freudian slip; I meant Rocky[/edit] doesn't like suggestions for improvement. :) Jeremy Falcon
-
Mark Conger wrote: Incidentally, your overall problem sounds like an issue with the data adapter wizard and not your RDBMS. Yes, this specific limitation was with just the VS.NET database tool, and I did mention databases and their tools. It is the same thinking, though, that has kept databases limited for years. As I mentioned, they are getting better, but it has been a long time coming. As an example, I believe the default record size limit on SQL Server is 8K; you can now adjust that. One of the problems I have seen in recent years is that a company will expose a SQL server across their LANs and end up using it as the sole communication channel. If you need to store something, it has to go in SQL Server because that is their access point. So in many instances that places you in a situation where you must make the SQL server work as a file server or chat server or whatever they want you to build. Is it that people are expecting too much out of databases, or have the major databases simply overshadowed other technologies?

Mark Conger wrote: Also, a suggestion: don't complain about an RDBMS being relational. That's like asking a dog not to pee on a hydrant. That's what it does. If you want it to be non-relational, go use a flat database that allows SQL queries. There are a few out there, and that's what they are for. I do not, and I use them that way 99% of the time, but for those times you must make a solution work with the tools at hand, it seems they could also serve as a simple data store without jumping through hoops. The point is simply that some data has no relationships, or it would be a waste of time to create them since all the data will be required all the time.

Mark Conger wrote: If I can't find ANY relation, I would either A) make one, Why? Just for the fun of joining tables on every query? If there is no need for multiple tables, why would you want to make one? I really do not understand: if it costs you more work, slows down performance, and brings zero benefit, why would you want to?

Rocky <>< www.HintsAndTips.com www.GotTheAnswerToSpam.com
-
Rocky Moore wrote: I really do not understand: if it costs you more work, slows down performance, and brings zero benefit, why would you want to? One: doing a simple join is as easy as falling off a log. Two: doing a single one-to-one join won't hinder performance in any noticeable way; that's what RDBMSs were created for in the first place. Three: using columns in that manner limits you to the number of columns you allocate. Databases were just not meant to work that way. That's thinking in a flat-file model. Jeremy Falcon
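[Editor's note: a minimal sketch of the one-to-one split Jeremy is describing - keep the handful of fields you actually query in one table and park the rest in a companion table sharing the same key. Table and column names are hypothetical, and SQLite stands in for whatever RDBMS is actually in use.]

```python
# One-to-one join sketch: a narrow "hot" table plus a wide companion table,
# joined on the primary key only when the full record is needed.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE FormCore (
        FormId     INTEGER PRIMARY KEY,
        Applicant  TEXT,
        Status     TEXT
    );
    CREATE TABLE FormDetail (
        FormId     INTEGER PRIMARY KEY REFERENCES FormCore(FormId),
        Field013   TEXT,
        Field014   TEXT
        -- ... remaining rarely-used fields ...
    );
""")
conn.execute("INSERT INTO FormCore VALUES (1, 'Rocky', 'Filed')")
conn.execute("INSERT INTO FormDetail VALUES (1, 'abc', 'xyz')")

# Pull the whole record back together only when printing/archiving it.
row = conn.execute("""
    SELECT c.Applicant, c.Status, d.Field013, d.Field014
    FROM FormCore c JOIN FormDetail d ON d.FormId = c.FormId
    WHERE c.FormId = ?
""", (1,)).fetchone()
print(row)
```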
-
The Answer: "CSV files" The Question: "How can I make a bad situation worse?"
--- the work, which will become a new genre unto itself, will be called... :rolleyes: Jeremy Falcon
-
Jeremy Falcon wrote: You should be thinking in rows and not columns. Not sure - are you thinking of a vertical table? Why would I want that? If it has a record for each field, but the only data in that record is that one field, that would not do much other than increase the size of the database and slow down the system. Vertical tables work great when you have to keep information about a field, like the last time it was modified and who modified it, but for bulk data that would not work well. You kind of lost me on the "archive table". I was actually thinking of maybe an XML store for most fields, cramming them into one, but that just seems like a lot of overhead for no real value. Rocky <>< www.HintsAndTips.com www.GotTheAnswerToSpam.com
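[Editor's note: a hedged sketch of the "cram most fields into one" idea Rocky floats here - serialize the stored-only fields into a single XML text column and keep only the queried columns as real columns. Names are hypothetical; whether the serialization overhead is worth it is exactly the trade-off he is weighing.]

```python
# XML-blob sketch: a few real columns plus one text column holding the rest.

import sqlite3
import xml.etree.ElementTree as ET

def fields_to_xml(fields):
    root = ET.Element("form")
    for name, value in fields.items():
        ET.SubElement(root, "field", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def xml_to_fields(blob):
    return {el.get("name"): el.text for el in ET.fromstring(blob)}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FormArchive (FormId INTEGER PRIMARY KEY, Applicant TEXT, Payload TEXT)")

bulk = {f"Field{i:03d}": f"value {i}" for i in range(13, 118)}   # ~105 stored-only fields
conn.execute("INSERT INTO FormArchive VALUES (?, ?, ?)", (1, "Rocky", fields_to_xml(bulk)))

applicant, payload = conn.execute(
    "SELECT Applicant, Payload FROM FormArchive WHERE FormId = 1").fetchone()
print(applicant, xml_to_fields(payload)["Field017"])
```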
-
--- the work, which will become a new genre unto itself, will be called... :laugh::laugh::laugh: VisiCalc? Heard in Bullhead City - "You haven't lost your girl - you've just lost your turn..." [sigh] So true...
-
Jeremy Falcon wrote: One: doing a simple join is as easy as falling off a log. It is not the join that is at issue; it is the fact that the system must "make" that join, costing performance for no benefit. It is faster to stay in one table if that is possible and the data is not related for use by other sections.

Jeremy Falcon wrote: doing a single one-to-one join won't hinder performance in any noticeable way Noticeable? It still has to go through the work of performing a join it would not otherwise have to do, and it will take resources from the server every time the data is required.

Jeremy Falcon wrote: using columns in that manner limits you to the number of columns you allocate Hmmm???

Jeremy Falcon wrote: Databases were just not meant to work that way. That's thinking in a flat-file model. Databases are a data store, not just an index. Their purpose is to store and organize data. While this is "flat" data, that does not mean I should have to purchase another product just because I have run into flat data. Databases in today's world should make it easy to work with either format, and the truth is they mostly do. It is the mindset that all data should be broken into smaller objects, all linked together at all costs. There are times when it takes less time and gives better performance not to break up the data, and when the data does not benefit in any way from being broken up. That is the point I am getting at. Sometimes you have flat, boring data that just wants to remain that way, and it will have to be stored in a relational database.

Rocky <>< www.HintsAndTips.com www.GotTheAnswerToSpam.com
-
ROFL! Rocky, honestly, if you are THAT worried about excessive operations you should NOT be running on an NT/SQL Server platform. SQL Server is the only RDBMS I know of that will actually put interactive sessions on hold when its background processing queue reaches a backlog threshold. I have crashed it many times this way, not to mention the extra overhead of threading on NT. Joins are not a quantifiable performance hit. If you believe they are, chances are you are dealing with a poor table design and should fire your DBA. Joins are what RDBMSs are good at.

The rest of the commentary aside, I don't know your app. All the info in a single row is a perfectly viable table organization when done within reason. I have about 85 logical tables that use this premise, some around 75 million rows at the low end. It is a technique commonly used in data warehousing in the ETL process, and it's not unheard of in OLAP dbs. Data warehousing, data warehousing, data warehousing...

The mindset you referred to is a pretty common one, and it comes from the fact that data will typically organize itself so that it CAN be narrowed down in that manner. What you are dealing with sounds like the exception to the norm to me. I have been in your situation many times and ultimately found some way to better organize the data. To me, it really sounds like your DBA needs a kick in the hindquarters. DBAs tend to be lazy when it comes to large-scale table reorgs.

Mark Conger Sonork:100.28396
-
Rocky Moore wrote: Not sure - are you thinking of a vertical table? Yes. Rocky Moore wrote: If it has a record for each field, but the only data in that record is that one field, that would not do much other than increase the size of the database and slow down the system. Says who? Rocky Moore wrote: Vertical tables work great when you have to keep information about a field, like the last time it was modified and who modified it, but for bulk data that would not work well. There is no difference (outside of milliseconds) in efficiency between a horizontal and a vertical design - especially when doing a single-pass lookup. It's simply a matter of design. For example, using a vertical table in this instance allows you to have x amount of backups instead of just, say, 100. Rocky Moore wrote: You kind of lost me on the "archive table". Just trying to picture your db in my head. Never mind that bit. :) Rocky Moore wrote: I was actually thinking of maybe an XML store for most fields, cramming them into one, but that just seems like a lot of overhead for no real value. :wtf: Jeremy Falcon
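[Editor's note: a minimal sketch of the vertical, row-per-field layout Jeremy is arguing for - these are illustrative assumptions, not his actual schema. Each stored form becomes N rows in one narrow table, so the column count of the source form never matters.]

```python
# Vertical (row-per-field) layout: one narrow table, one row per field value.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE FormValue (
        FormId    INTEGER,
        FieldName TEXT,
        FieldText TEXT,
        PRIMARY KEY (FormId, FieldName)
    )
""")

fields = {f"Field{i:03d}": f"value {i}" for i in range(1, 118)}   # 117 hypothetical fields
conn.executemany(
    "INSERT INTO FormValue VALUES (?, ?, ?)",
    [(1, name, value) for name, value in fields.items()])

# Single pass to rebuild the whole form (or fetch just one field by name).
whole_form = dict(conn.execute(
    "SELECT FieldName, FieldText FROM FormValue WHERE FormId = ?", (1,)).fetchall())
print(len(whole_form), whole_form["Field042"])
```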