.NET performance test. Can you guess which method is the most efficient?
-
I never use DataBind... never have, never will.
I'm with Christian on this one, in that I use DataBind... whenever I can, or at least whenever it makes sense to me to do so. However, I don't write apps for public consumption, only for my own use/enjoyment, and therefore rarely have truly large amounts of data to handle. Nonetheless, those apps that I do write seem to run at a reasonable speed.
Henry Minute Never read Medical books. You could die of a misprint. - Mark Twain
-
http://farm4.static.flickr.com/3024/3101270498_dc5b5bfb95_o.png[^] Amazing that initList is 10X slower than doing it manually. It really makes me wonder... I was also surprised that initList2 and initList3 took exactly the same amount of time.
I didn't get any requirements for the signature
I hate using databinding. Considering the code to populate manually is about the same as the code to bind, I always populate manually. Databinding is so limited and causes more problems than it is worth.
Need software developed? Offering C# development all over the United States, ERL GLOBAL, Inc is the only call you will have to make.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
Most of this sig is for Google, not ego. -
Yes, I agree. Slow or not, I can assume that databinding works. One line to debug instead of 10 is a powerful argument. For our iPhone app, I wrote my own scrolling/zooming code, and abandoned it for generic code that worked no better, simply because going from 200 lines of code to two meant a lot less work when it came to tracking down bugs.
Christian Graus Driven to the arms of OSX by Vista.
I'd back this up. I'd say I find databinding faster where it matters most, i.e. getting a demo out to the client so they can confirm it's what they actually wanted. I tend to go for: 1. Get something to the client so I know I'm not wasting my time developing something they aren't going to be happy with. 2. Sort out performance issues if (and only if) needed. Databinding is faster to set up, leaving more time in the budget for everything else. I'll change it if I need to, but ultimately (from a business point of view) the only two things that matter are: is the client happy, and is it in budget? Personally I'll always go for the fast and elegant solution if I have a choice, but that's often slower to write and not exactly what I get paid for.
-
So true. At my previous-previous job, I kinda did my own version of databinding, which turned out to be at least twice as fast as MS's implementation. Funny thing, I didn't understand databinding that well to begin with, which made me do my own implementation, which turned out to be faster. And then a new dude joined, and I was heavily criticized for rolling my own. Good thing it would take too long to change to databinding. heh.
:badger:
I've always done the DataBinding manually as it gives me more control over handling data, and it is more fun. Now I just have another good reason to keep doing it my way.
-
You ain't seen nothing; it slows down by 2 to 5 orders of magnitude for dynamic, updating scenarios, and that is across techs: web, desktop, browser. MS never did data-binding right apart from those basic, static form apps with the occasional dynamic update, but nothing scalable, you know: the IT Programming Kind. (add some XML to make it faster :laugh: )
Try setting the DataSource after you set the DisplayMember and ValueMember and you will find that the time taken to bind is reduced significantly. My case (with approx. 600 rows in a C# Windows app): DataSource set before DisplayMember and ValueMember = 106 ms; DataSource set after DisplayMember and ValueMember = 36 ms. When I tried to bind manually, I got a result of 106 ms.
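Something like this, in other words (a minimal WinForms sketch, assuming a ComboBox bound to the empName/empNumber table from the original post):

// Each property assignment can trigger a rebind, so assigning
// DataSource last means the control only binds once.
comboBox1.DisplayMember = "empName";
comboBox1.ValueMember = "empNumber";
comboBox1.DataSource = dt;              // single bind happens here

// The slow order from the 106 ms case, for comparison:
// comboBox1.DataSource = dt;           // binds with default members
// comboBox1.DisplayMember = "empName"; // rebinds
// comboBox1.ValueMember = "empNumber"; // rebinds again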
-
I know there are many ways to add data to a DropDownList in ASP.NET. So I thought I would use JetBrains dotTrace to see if it is more efficient to call a control's DataBind() method or add new ListItems with your own code. The results were a bit surprising. I will post them later tonight after you all have a chance to guess. Which method do you think will be the most efficient, and why?

public void initList(DataTable dt)
{
    DropDownList1.DataSource = dt;
    DropDownList1.DataTextField = "empName";
    DropDownList1.DataValueField = "empNumber";
    DropDownList1.DataBind();
}

public void initList2(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList2.Items.Add(new ListItem(r["empName"].ToString(), r["empNumber"].ToString()));
    }
}

public void initList3(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList3.Items.Add(new ListItem(r[0].ToString(), r[1].ToString()));
    }
}
I didn't get any requirements for the signature
I'm not a technical guru, and I have often heard people argue about best performance where the difference is milliseconds (like in this case), yet they will "forget" to index a SQL table, or write bad SQL queries, etc. I hardly ever bother with performance in the UI layer; in data-driven applications your data I/O is your crucial hit point. Reading 1000 rows badly from a SQL table, compared to reading them well, will have a far bigger impact than how you add the results to a list. I suppose if you are already doing high-performance data I/O then UI performance can get important, but who has the luxury of time for such low-yield tweaking :-D
____________________________________________________________ Be brave little warrior, be VERY brave
-
Isn't this one of those times, though, when a) you rarely have 1000 items in a dropdown and b) the real time taken (from the user's perspective) doesn't usually affect the app significantly enough to bother? I confess to rarely using databinding - more because I am from the old school of liking to control what I am doing than for any performance considerations. Where drop downs have a handful of items - especially where it's a simple case of selecting the ID for something from a list of options - it's far easier to just bind 'em and to hell with the extra 37 milliseconds. (In the real world, I have my own controls for doing this, and they don't use binding, but again, not because of performance issues.)
If I knew then what I know today, then I'd know the same now as I did then - then what would be the point? .\\axxx (That's an 'M')
maxxx# wrote:
Isn't this one of those times, though, when a) you rarely have 1000 items in a dropdown and b) the real time taken (from the user's perspective) doesn't usually affect the app significantly enough to bother?
Try financial applications. Where I work, we have applications that get Inventory (and other) Items out of QuickBooks and send them to a handheld computer so that users can do inventory management-type things with them. All these Items go into a dropdown, which is really the best way to make them available. It would be nice to have *only* 1000 items in a dropdown. We previously had a problem where loading a new Item list would take a couple of days once you passed about 10 or 20 thousand items. And, of course, since binding has to check for duplicates, you would begin to see, after the first couple of thousand items were loaded, load times of over 10 seconds for each individual item. We did eventually get the load time for 20,000-30,000 items well under 30 seconds, by changing the data binding settings (I'm not completely sure about the details, as I wasn't the person who actually performed the fix). So, yeah, this can definitely be a very real issue. Don't knock the theory just because you don't have a use for it.
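For what it's worth, one common mitigation for bulk loads like this (a rough WinForms sketch, not necessarily the fix that was applied here; LoadInventoryItems is a hypothetical helper):

// Suspend per-item repainting and add everything in one call
// instead of N individual Add() calls.
object[] items = LoadInventoryItems();   // hypothetical: returns the item list
comboBox1.BeginUpdate();                 // stop the control repainting per item
try
{
    comboBox1.Items.AddRange(items);     // single bulk add
}
finally
{
    comboBox1.EndUpdate();
}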
-
ToddHileHoffer wrote:
public void initList3(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList3.Items.Add(new ListItem(r[0].ToString(), r[1].ToString()));
    }
}
Because it doesn't have to map a name to an index, while your databind is essentially going to look items up by name, assign a reference to the data table as the source, etc.
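Roughly speaking, the difference is this (a sketch, not the actual DataRow internals; the hoisted-ordinal loop at the end is just a suggested variant, not from the original post):

// r["empName"] has to resolve the column name to an ordinal on every access,
// roughly equivalent to:
int ordinal = dt.Columns["empName"].Ordinal; // name lookup, per access
object byName = r[ordinal];

// r[0] skips the lookup and indexes the column directly:
object byIndex = r[0];

// Hoisting the lookup out of the loop gives the readability of names
// with (close to) the speed of ordinals:
int nameCol = dt.Columns["empName"].Ordinal;
int numberCol = dt.Columns["empNumber"].Ordinal;
foreach (DataRow row in dt.Rows)
{
    DropDownList3.Items.Add(new ListItem(row[nameCol].ToString(), row[numberCol].ToString()));
}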
Christian Graus Driven to the arms of OSX by Vista.
Agreed. Actually, I just got back from a TechEd conference where they discussed just that scenario. But they did not go over DataBind(), so I'm not sure if they virtualize, because if your list is bigger than what is displayed then that will have a big impact. I would suggest doing 2 bind scenarios: one large list (at least 10x the visible rows) and one small list (the number of visible rows). *Note: I am not an asp.net developer, so I'm not sure how controls work on an asp.net page or even if there is a difference.
-
I never use DataBind... never have, never will.
I am guessing WPF is not on your "TODO" list either since Microsoft pushes binding on that majorly.
-
I'm not a technical guru, and I have often heard people argue about best performance where the difference is milliseconds (like in this case), yet they will "forget" to index a SQL table, or write bad SQL queries, etc. I hardly ever bother with performance in the UI layer; in data-driven applications your data I/O is your crucial hit point. Reading 1000 rows badly from a SQL table, compared to reading them well, will have a far bigger impact than how you add the results to a list. I suppose if you are already doing high-performance data I/O then UI performance can get important, but who has the luxury of time for such low-yield tweaking :-D
____________________________________________________________ Be brave little warrior, be VERY brave
Agreed. You can always swap out a control, but your underlying data mechanism is usually not going to change. Why does it even matter? Doesn't everyone load everything async now?
-
I hate using databinding. Considering the code to populate manually is about the same as the code to bind, I always populate manually. Databinding is so limited and causes more problems than it is worth.
Need software developed? Offering C# development all over the United States, ERL GLOBAL, Inc is the only call you will have to make.
Happiness in intelligent people is the rarest thing I know. -- Ernest Hemingway
Most of this sig is for Google, not ego. -
I'm not a technical guru, and I have often heard people argue about best performance where the difference is milliseconds (like in this case), yet they will "forget" to index a SQL table, or write bad SQL queries, etc. I hardly ever bother with performance in the UI layer; in data-driven applications your data I/O is your crucial hit point. Reading 1000 rows badly from a SQL table, compared to reading them well, will have a far bigger impact than how you add the results to a list. I suppose if you are already doing high-performance data I/O then UI performance can get important, but who has the luxury of time for such low-yield tweaking :-D
____________________________________________________________ Be brave little warrior, be VERY brave
Thanks for the reply. Actually, I am only reading the data from the employees table once per day because it is copied down from an HR application. The DataTable is kept in the cache and refreshed only once per day. I wrote an article about how I did this a few months ago. http://www.codeproject.com/KB/aspnet/LookUpDataCache.aspx[^]
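The pattern amounts to something like this (a simplified sketch; LoadEmployees() is a hypothetical stand-in for the real data access, see the linked article for the actual implementation):

// Once-a-day cache of the employees DataTable in the ASP.NET cache.
DataTable dt = (DataTable)Cache["Employees"];
if (dt == null)
{
    dt = LoadEmployees();   // hypothetical helper hitting the HR copy
    Cache.Insert("Employees", dt, null,
                 DateTime.Now.AddDays(1),                  // absolute expiry: refresh daily
                 System.Web.Caching.Cache.NoSlidingExpiration);
}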
I didn't get any requirements for the signature
-
I am guessing WPF is not on your "TODO" list either since Microsoft pushes binding on that majorly.
Jordon4Acclaim wrote:
I am guessing WPF is not on your "TODO" list either since Microsoft pushes binding on that majorly.
I have spent the last 8 years learning HTML, JavaScript, ASP.NET and AJAX. Not to mention that my company has purchased the RAD AJAX controls from Telerik for me. Learning WPF is not on the radar for me at all. If WPF becomes ubiquitous then I will use it, but I'm not ready to commit to a new front end just yet. I mean, the RAD AJAX from Telerik is really great at this point; I'm not ready to switch. Besides, I'm not sure if Silverlight is all that great. My favorite websites such as digg, my (bank site), codeproject etc... are all done in HTML. Most of the time I don't even enable flash in my browser.
I didn't get any requirements for the signature
-
maxxx# wrote:
Isn't this one of those times, though, when a) you rarely have 1000 items in a dropdown and b) the real time taken (from the user's perspective) doesn't usually affect the app significantly enough to bother?
Try financial applications. Where I work, we have applications that get Inventory (and other) Items out of QuickBooks and send them to a handheld computer so that users can do inventory management-type things with them. All these Items go into a dropdown, which is really the best way to make them available. It would be nice to have *only* 1000 items in a dropdown. We previously had a problem where loading a new Item list would take a couple of days once you passed about 10 or 20 thousand items. And, of course, since binding has to check for duplicates, you would begin to see, after the first couple of thousand items were loaded, load times of over 10 seconds for each individual item. We did eventually get the load time for 20,000-30,000 items well under 30 seconds, by changing the data binding settings (I'm not completely sure about the details, as I wasn't the person who actually performed the fix). So, yeah, this can definitely be a very real issue. Don't knock the theory just because you don't have a use for it.
It's hard to imagine a situation where 20,000 items in a drop down is anything other than bad user interface design.
-
I know there are many ways to add data to a DropDownList in ASP.NET. So I thought I would use JetBrains dotTrace to see if it is more efficient to call a control's DataBind() method or add new ListItems with your own code. The results were a bit surprising. I will post them later tonight after you all have a chance to guess. Which method do you think will be the most efficient, and why?

public void initList(DataTable dt)
{
    DropDownList1.DataSource = dt;
    DropDownList1.DataTextField = "empName";
    DropDownList1.DataValueField = "empNumber";
    DropDownList1.DataBind();
}

public void initList2(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList2.Items.Add(new ListItem(r["empName"].ToString(), r["empNumber"].ToString()));
    }
}

public void initList3(DataTable dt)
{
    foreach (DataRow r in dt.Rows)
    {
        DropDownList3.Items.Add(new ListItem(r[0].ToString(), r[1].ToString()));
    }
}
I didn't get any requirements for the signature
While academically interesting, it is commercially meaningless, for several reasons.

Firstly, taking a single, small sampling of data and then generating a theory from it is easy to punch holes through. Start with the fact that memory and processor utilization can easily impact any sampling that you might do. You would need to run the test several times to start normalizing the results; the more runs, the less the utilization noise takes effect.

The second problem is the sample size itself. Small sample sizes cause the overhead of the mechanism used to matter more than the work itself; larger sample sizes tend to reduce that impact. For example, in databinding the string names have to be mapped to the data source, which can be an expensive operation. Say it takes 10ms, but each actual insert thereafter takes 1ms. Unless your sample set is large enough to amortize the 10ms over time, it will dominate your experiment. You can argue that it should, but caching and processor pipelining are based on these same amortize-over-time concepts and they seem to be really helping out performance, so there must be something to it.

I would say that in a code review I might let you get away with the first or second code block, but the third block will send you back to programming school. The third block of code is unmaintainable; you are trading off maintainability for performance. The general rule is 90/10 or 80/20, and I doubt that this databinding scenario is in the top 10% of your performance issues. Remember that performance should be analyzed not by how much time something takes but by how often it impacts the program. Hence a little-used feature (like reporting) might run acceptably slowly provided the main UI is still running fast. So trying to do local optimization of this code block is a poor idea. If you really feel that performance is more important than maintainability then I would question why you are using classes and methods at all; a single main function with all the logic contained within it will always run faster than classes/methods. Performance concerns must always be gauged against maintainability and other factors.

Finally, I would tend to agree that your second code block would normally perform better. Data binding (as with most auto-generated features) isn't designed to run faster than hand-generated code but rather to be more maintainable and require less time to implement. Data binding requires 3 lines of code and is easy to read. Your second block takes more code to do the same job, which is the real trade-off.
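To illustrate the "run it several times" point, a rough Stopwatch-based sketch (not the original dotTrace setup; the run count is illustrative only):

// Average over repeated runs to damp memory/processor utilization noise.
var sw = new System.Diagnostics.Stopwatch();
const int runs = 50;
long totalMs = 0;
for (int i = 0; i < runs; i++)
{
    DropDownList1.Items.Clear();   // reset the control between runs
    sw.Reset();
    sw.Start();
    initList(dt);                  // the method under test, from the original post
    sw.Stop();
    totalMs += sw.ElapsedMilliseconds;
}
double averageMs = totalMs / (double)runs;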
-
It's hard to imagine a situation where 20,000 items in a drop down is anything other than bad user interface design.
Couldn't agree more. So many ways to prevent loading 20k+ items into a DDL. :wtf: