.NET performance test. Can you guess which method is the most efficient?
-
ToddHileHoffer wrote:
public void initList3(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList3.Items.Add(new ListItem(r[0].ToString(), r[1].ToString()));
    }
}
Because it doesn't have to map a name to an index; your DataBind is essentially going to look items up by name, assign a reference to the data table as the source, etc.
Christian Graus Driven to the arms of OSX by Vista.
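The name-to-ordinal mapping cost mentioned above can be paid once instead of once per row. A minimal sketch (assuming a plain DataTable with the `empName`/`empNumber` columns from the original post) of hoisting the ordinal lookup out of the loop:

```csharp
using System;
using System.Data;

class OrdinalLookupSketch
{
    static void Main()
    {
        var dt = new DataTable();
        dt.Columns.Add("empName", typeof(string));
        dt.Columns.Add("empNumber", typeof(string));
        dt.Rows.Add("Alice", "1001");

        // r["empName"] resolves the column name to an ordinal on every call;
        // looking the ordinal up once outside the loop pays that cost only once,
        // while keeping the code readable (unlike hard-coded r[0] / r[1]).
        int nameOrdinal = dt.Columns["empName"].Ordinal;
        int numberOrdinal = dt.Columns["empNumber"].Ordinal;

        foreach (DataRow r in dt.Rows)
        {
            string text = r[nameOrdinal].ToString();
            string value = r[numberOrdinal].ToString();
            Console.WriteLine(text + " / " + value);
        }
    }
}
```

This keeps the maintainability of name-based access with roughly the speed of index-based access.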
Agreed. I actually just got back from a TechEd conference where they discussed exactly that scenario. But they did not go over DataBind(), so I'm not sure whether the control virtualizes; if your list is larger than what is displayed, that will have a big impact. I would suggest doing two bind scenarios: one large list (at least 10x the visible rows) and one small list (just the number of visible rows). *Note: I am not an ASP.NET developer, so I'm not sure how controls work on an ASP.NET page, or even whether there is a difference.
-
I never use DataBind... never have, never will.
I am guessing WPF is not on your "TODO" list either since Microsoft pushes binding on that majorly.
-
I'm not a technical guru, and I have often heard people argue about best performance when the difference is milliseconds (like in this case), yet they will "forget" to index a SQL table, or write bad SQL queries, etc. I hardly ever bother with performance in the UI layer; in data-driven applications, your data I/O is the crucial hit point. Reading 1000 rows badly from a SQL table, compared to reading them well, will have a far bigger impact than how you add the results to a list. I suppose if you are already doing high-performance data I/O then UI performance can become important, but who has the luxury of time for such low-yield tweaking :-D
____________________________________________________________ Be brave little warrior, be VERY brave
Agreed. You can always swap out a control, but your underlying data mechanism is usually not going to change. Why does it even matter? Doesn't everyone load everything async now?
-
I hate using databinding. Considering the code to populate manually is about the same amount of code as binding, I always populate manually. Databinding is so limited and causes more problems than it is worth.
Need software developed? Offering C# development all over the United States, ERL GLOBAL, Inc is the only call you will have to make.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
Most of this sig is for Google, not ego.
-
I'm not a technical guru, and I have often heard people argue about best performance when the difference is milliseconds (like in this case), yet they will "forget" to index a SQL table, or write bad SQL queries, etc. I hardly ever bother with performance in the UI layer; in data-driven applications, your data I/O is the crucial hit point. Reading 1000 rows badly from a SQL table, compared to reading them well, will have a far bigger impact than how you add the results to a list. I suppose if you are already doing high-performance data I/O then UI performance can become important, but who has the luxury of time for such low-yield tweaking :-D
Thanks for the reply. Actually, I am only reading the data from the employees table once per day, because it is copied down from an HR application. The DataTable is kept in the cache and refreshed only once per day. I wrote an article about how I did this a few months ago: http://www.codeproject.com/KB/aspnet/LookUpDataCache.aspx[^]
I didn't get any requirements for the signature
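The once-per-day cache pattern described above can be sketched roughly as follows. This is an assumption about how such a cache might look, not the article's actual code; `EmployeeLookup`, `LoadFromHrDatabase`, and the `"Employees"` key are hypothetical names:

```csharp
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class EmployeeLookup
{
    // Hypothetical loader; substitute the real query against the HR copy.
    static DataTable LoadFromHrDatabase()
    {
        return new DataTable();
    }

    public static DataTable GetEmployees()
    {
        var dt = HttpRuntime.Cache["Employees"] as DataTable;
        if (dt == null)
        {
            dt = LoadFromHrDatabase();
            // Refresh once per day: absolute expiration at the next midnight.
            HttpRuntime.Cache.Insert("Employees", dt, null,
                DateTime.Today.AddDays(1), Cache.NoSlidingExpiration);
        }
        return dt;
    }
}
```

With the table cached, the per-request cost is just the dropdown population itself, which is what makes the bind-vs-manual comparison the remaining variable.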
-
Jordon4Acclaim wrote:
I am guessing WPF is not on your "TODO" list either since Microsoft pushes binding on that majorly.
I have spent the last 8 years learning HTML, JavaScript, ASP.NET and AJAX. Not to mention that my company has purchased the RAD AJAX controls from Telerik for me. Learning WPF is not on the radar for me at all. If WPF becomes ubiquitous then I will use it, but I'm not ready to commit to a new front end just yet. I mean, the RAD AJAX from Telerik is really great at this point; I'm not ready to switch. Besides, I'm not sure Silverlight is all that great. My favorite websites, such as digg, my (bank site), codeproject, etc., are all done in HTML. Most of the time I don't even enable Flash in my browser.
I didn't get any requirements for the signature
-
maxxx# wrote:
Isn't this one of those times, though, that a) you rarely have 1000 items in a dropdown and b) the real time taken (from the user's perspective) doesn't usually affect the app significantly enough to bother.
Try financial applications. Where I work, we have applications that get Inventory (and other) Items out of QuickBooks and send them to a handheld computer so that users can do inventory management-type things with them. All these Items go into a dropdown, which is really the best way to make them available. It would be nice to have *only* 1000 items in a dropdown. We previously had a problem where loading a new Item list would take a couple of days once you passed about 10 or 20 thousand items. And, of course, since binding has to check for duplicates, you would begin to see, after the first couple thousand items were loaded, individual items taking over 10 seconds to load. We did eventually get the load for 20,000-30,000 items to happen in well under 30 seconds, by changing the data binding settings (I'm not completely sure about the details, as I wasn't the person who actually performed the fix). So, yeah, this can definitely be a very real issue. Don't knock the theory just because you don't have a use for it.
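For what it's worth, one common fix for that kind of per-item slowdown (assuming a WinForms-style control on the client, since the poster's fix isn't specified) is to suspend painting and add the items in one batch rather than one at a time:

```csharp
using System.Windows.Forms;

static class BulkLoad
{
    // Adding items one by one forces a repaint (and, depending on settings,
    // extra per-item work such as sorting) for every item; suspending
    // painting and adding in a single batch keeps the load roughly linear.
    public static void Fill(ComboBox box, object[] items)
    {
        box.BeginUpdate();              // suspend repainting while we load
        try
        {
            box.Items.Clear();
            box.Items.AddRange(items);  // single batched insert
        }
        finally
        {
            box.EndUpdate();            // resume repainting once, at the end
        }
    }
}
```

This is only a sketch of the general technique; the actual fix described above involved data binding settings, which may differ.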
It's hard to imagine a situation where 20,000 items in a drop down is anything other than bad user interface design.
-
I know there are many ways to add data to a DropDownList in ASP.NET. So I thought I would use JetBrains dotTrace to see whether it is more efficient to call a control's DataBind() method or to add new ListItems with your own code. The results were a bit surprising. I will post them later tonight after you all have a chance to guess. Which method do you think will be the most efficient, and why?

public void initList(DataTable dt)
{
    DropDownList1.DataSource = dt;
    DropDownList1.DataTextField = "empName";
    DropDownList1.DataValueField = "empNumber";
    DropDownList1.DataBind();
}

public void initList2(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList2.Items.Add(new ListItem(r["empName"].ToString(), r["empNumber"].ToString()));
    }
}

public void initList3(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList3.Items.Add(new ListItem(r[0].ToString(), r[1].ToString()));
    }
}
I didn't get any requirements for the signature
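For anyone who wants to guess empirically rather than profile with dotTrace, the row-access difference between the second and third methods can be timed in a plain console app (DataBind() itself needs a page, so it is left out). A rough Stopwatch sketch, with a made-up row count:

```csharp
using System;
using System.Data;
using System.Diagnostics;

class BindBenchmarkSketch
{
    static void Main()
    {
        var dt = new DataTable();
        dt.Columns.Add("empName", typeof(string));
        dt.Columns.Add("empNumber", typeof(string));
        for (int i = 0; i < 100000; i++)
            dt.Rows.Add("Employee " + i, i.ToString());

        // Name-based access: resolves the column name on every indexer call.
        Time("by name", delegate
        {
            foreach (DataRow r in dt.Rows)
            {
                string s = r["empName"].ToString() + r["empNumber"].ToString();
            }
        });

        // Ordinal access: skips the name lookup entirely.
        Time("by ordinal", delegate
        {
            foreach (DataRow r in dt.Rows)
            {
                string s = r[0].ToString() + r[1].ToString();
            }
        });
    }

    static void Time(string label, Action body)
    {
        body();                            // warm-up run (JIT, caches)
        var sw = Stopwatch.StartNew();
        body();
        sw.Stop();
        Console.WriteLine(label + ": " + sw.ElapsedMilliseconds + " ms");
    }
}
```

The absolute numbers will vary by machine, which is exactly why a profiler run like the dotTrace one above is the more reliable comparison.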
While academically interesting, it is commercially meaningless, for several reasons.

Firstly, taking a single, small sampling of data and then generating a theory from it is easy to punch holes through. Start with the fact that memory and processor utilization can easily skew any sampling you might do. You would need to run the test several times to start normalizing the results; the more you run it, the less the utilization noise takes effect.

The second problem is the sample size itself. Small sample sizes cause the overhead of the mechanism used to matter more than the work itself; larger sample sizes tend to reduce that impact. For example, in databinding the string names have to be mapped to the data source, which can be an expensive operation. Say it takes 10ms, but each actual insert thereafter takes 1ms. Unless your sample set is large enough to amortize the 10ms over time, it will dominate your experiment. You can argue that it should, but caching and processor pipelining are based on these same amortize-over-time concepts, and they seem to be really helping out performance, so there must be something to it.

I would say that in a code review I might let you get away with the first or second code blocks, but the third block will send you back to programming school. The third block of code is unmaintainable: you are trading off maintainability for performance. The general rule is 90/10 or 80/20, and I doubt that this data binding scenario is in the top 10% of your performance issues. Remember that performance should be analyzed not by how much time something takes but by how often it impacts the program; hence a little-used feature (like reporting) might run acceptably slowly provided the main UI is still fast. So trying to do local optimization of this code block is a poor idea. If you really feel that performance is more important than maintainability, then I would question why you are using classes and methods at all.

A single main function with all the logic contained within it will always run faster than classes/methods. Performance concerns must always be gauged against maintainability and other factors.

Finally, I would tend to agree that your second code block would normally perform better. Data binding (as with most auto-generated features) isn't designed to run faster than hand-written code but rather to be more maintainable and require less time to implement. Data binding requires 3 lines of code and is easy to read. Your second block…
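The run-several-times point above is easy to put into practice. A small sketch (the `MedianTimer` helper and the run count of 11 are illustrative choices, not anything from the thread) of taking the median over repeated runs so that one noisy sample doesn't dominate:

```csharp
using System;
using System.Diagnostics;

static class MedianTimer
{
    // Run the body many times and report the median elapsed time, so a
    // single noisy run (GC pause, scheduler hiccup) can't skew the result.
    public static double MedianMs(Action body, int runs)
    {
        body();  // warm-up: JIT compilation and cache effects land here

        var samples = new double[runs];
        for (int i = 0; i < runs; i++)
        {
            var sw = Stopwatch.StartNew();
            body();
            sw.Stop();
            samples[i] = sw.Elapsed.TotalMilliseconds;
        }

        Array.Sort(samples);
        return samples[runs / 2];  // median of the sorted samples
    }
}

class Demo
{
    static void Main()
    {
        double ms = MedianTimer.MedianMs(delegate
        {
            long sum = 0;
            for (int i = 0; i < 1000000; i++) sum += i;
        }, 11);
        Console.WriteLine("median: " + ms + " ms");
    }
}
```

The warm-up run also addresses the amortization point: one-time setup costs (like the name-to-source mapping) fall outside the timed samples unless you deliberately include them.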
-
It's hard to imagine a situation where 20,000 items in a drop down is anything other than bad user interface design.
Couldn't agree more. So many ways to prevent loading 20k+ items into a DDL. :wtf: