I've never been a SQL genius, and I've never been formally taught it. I'm working on an app that needs every bit of data in the DB pulled into datasets. Easy, I thought: just use SELECT * ... it keeps the queries clean and simple, and there's no wasted bulk because we need every field anyway. One of my co-programmers gave me an earful and told me that SELECT * is a lot slower than listing the fields explicitly.

I put this to the test, though I'm not sure how representative it is. I mocked up a simple console app that reads from a table with 100+ columns and 5,000+ rows, fills a DataSet, adds each DataSet to a list, and then prints the count of the list. The 25s in the output below are that count, which is also the number of times the test was repeated. (I print it because I've learned that, when code gets optimised, variables that are never accessed can be discarded.) Going by these numbers, listing the columns explicitly comes out only marginally faster than *, which hardly seems worth typing out 100+ column names. A trimmed-down sketch of the harness follows the numbers. What are your thoughts, findings, comments or experiences with this?
With * : 25 //times test is executed
00:00:10.8395254 //time it takes in seconds
Without * : 25
00:00:10.3142815
With * : 25
00:00:10.1382555
Without * : 25
00:00:09.9356686
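For reference, here is roughly what the test harness looks like, trimmed down to a sketch. The table name, column list, and connection string are placeholders rather than the real ones:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

class SelectStarBenchmark
{
    // Placeholder connection string and table; the real ones are different.
    const string ConnectionString = "...";
    const string StarQuery   = "SELECT * FROM BigTable";
    const string ColumnQuery = "SELECT Col1, Col2, /* ...all 100+ columns... */ ColN FROM BigTable";

    static void Main()
    {
        RunTest("With *   ", StarQuery);
        RunTest("Without *", ColumnQuery);
    }

    static void RunTest(string label, string query)
    {
        var results = new List<DataSet>();
        var timer = Stopwatch.StartNew();

        for (int i = 0; i < 25; i++)   // repeat the fill 25 times
        {
            var ds = new DataSet();
            using (var adapter = new SqlDataAdapter(query, ConnectionString))
            {
                adapter.Fill(ds);      // pull every row into the DataSet
            }
            results.Add(ds);
        }

        timer.Stop();
        // Printing the count forces the list to be used, so the
        // optimiser can't throw the whole loop away.
        Console.WriteLine("{0}: {1}", label, results.Count);
        Console.WriteLine(timer.Elapsed);
    }
}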
Sorry if this should've gone in the SQL forum, but it's not really a request for help; I just hope to get a better understanding of SELECTs.