Max throughput Microsoft SQL Server?
-
hi... is there a max limit to the throughput of a Microsoft SQL Server cluster, in terms of how many requests (or *simple* queries) the cluster can take per minute? What about a single SQL Server instance, assuming you can scale up by throwing $$$ at more RAM/CPU/faster disks, etc.? Thanks
dev
-
No.
-
Yes, you can get better performance out of a single server by throwing money at it, but that's not the best question to ask. There is an entire industry built around server performance; it isn't going to be answered by a forum post.
Never underestimate the power of human stupidity RAH
-
devvvy wrote:
hi... is there a max limit to the throughput of a Microsoft SQL Server cluster, in terms of how many requests (or *simple* queries) the cluster can take per minute?
Everything in computing has limits, but I doubt that has anything to do with your question. Presumably you, or someone you know, thinks that X is 'better' than SQL Server without having done any real analysis. Here is an example of someone using SQL Server at scale: http://highscalability.com/blog/2014/7/21/stackoverflow-update-560m-pageviews-a-month-25-servers-and-i.html[^]

Myself, I was tasked with doing performance tests using the application I was working on. At the time the application server could handle 100+ TPS sustained, while I estimated that the entire US market for that business domain was only about 2,000 TPS. During that test I couldn't get SQL Server to show any real CPU/memory load, even on some pretty crappy equipment. Consequently my conclusion was that, for any conceivable reality, a single SQL Server instance would serve the needs of the company.

Now it is certainly possible that you have a business domain with very specific data needs based on the business itself, not hypothetical tech arguments. If so, then you need to do the following:
- Collect the actual business requirements that might lead to performance problems.
- Collect the actual market potential of the business. One might reasonably claim, at a top level, that owning the entire world market is the goal, but claiming astronomical numbers without any real-world basis is foolhardy. So is making up numbers without looking at markets at all.
- Do an analysis of likely transaction flows based on the first item (a rough sketch of that kind of arithmetic follows below).
- Identify possible bottlenecks.
- AFTER doing the above, look for solutions that will solve any bottlenecks at the data persistence layer.

Finally, if you do have bottlenecks at the data persistence layer, then you MUST expect that other architectural changes will be needed to deal with them. A specific type of data persistence server will NOT solve problems of this sort.
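To make the transaction-flow point concrete, here is a minimal back-of-envelope sketch (Python, with entirely made-up placeholder numbers; substitute figures from your own market analysis). It just turns user counts and actions per user into an estimated peak queries-per-second you can compare against what your database actually sustains under test:

```python
# Rough transaction-flow estimate. Every number below is a hypothetical
# placeholder -- replace them with figures from your own business analysis.

active_users = 50_000            # realistic market estimate, not "everyone on Earth"
actions_per_user_per_day = 40    # logins, searches, orders, etc.
db_queries_per_action = 5        # average queries each action generates
peak_to_average_ratio = 3        # traffic concentrates in busy hours

seconds_per_day = 24 * 60 * 60

average_qps = (active_users * actions_per_user_per_day * db_queries_per_action
               / seconds_per_day)
peak_qps = average_qps * peak_to_average_ratio

print(f"Average load:   {average_qps:,.0f} queries/sec")
print(f"Estimated peak: {peak_qps:,.0f} queries/sec")
```

If the peak comes out in the hundreds rather than the hundreds of thousands, the data persistence layer is unlikely to be where your real bottleneck shows up.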
-
Thanks. I just saw a tweet yesterday claiming MongoDB can take a few million hits over a few minutes. I just want to know, for example, what kind of hit rate Microsoft SQL Server can process on an average machine or even a workstation?
dev
-
There's such a thing as the Transaction Processing Performance Council[^]. They have many different lists of tests, each adjusted to get a different vendor on top, but they will still give you an idea of what to expect.
Wrong is evil and must be defeated. - Jeff Ello[^]
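If you want a number for your own hardware rather than a published benchmark, the simplest thing is to measure it. Below is a minimal sketch (Python with pyodbc, assuming the driver is installed; the connection string and query are hypothetical placeholders) that hammers a single connection with a trivial query for a fixed duration and reports queries per second. Real workloads with writes, contention and larger result sets behave very differently, so treat it only as a rough upper bound for this one query shape:

```python
# Minimal single-connection throughput probe for SQL Server.
# Assumes pyodbc is installed; the driver name and credentials below
# are placeholders -- replace them with your own.
import time

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;"
    "UID=sa;PWD=your_password_here"        # hypothetical credentials
)
DURATION_SECONDS = 10
QUERY = "SELECT 1"                          # a deliberately trivial query

def measure() -> None:
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cursor = conn.cursor()
    count = 0
    deadline = time.monotonic() + DURATION_SECONDS

    # Fire the query in a tight loop and count completed round trips.
    while time.monotonic() < deadline:
        cursor.execute(QUERY)
        cursor.fetchone()
        count += 1

    conn.close()
    print(f"{count} queries in {DURATION_SECONDS}s "
          f"= {count / DURATION_SECONDS:,.0f} queries/sec on one connection")

if __name__ == "__main__":
    measure()
```

On a workstation a loop like this mostly measures driver and network round-trip latency on one connection; getting anywhere near the multi-million-hits figures in that tweet takes many concurrent connections and batched requests, which is exactly why such headline numbers are hard to compare.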
-
devvvy wrote:
I just saw a tweet yesterday claiming MongoDB can take a few million hits over a few minutes
And I believe I saw an article where someone was getting something like 1 million TPS on some setup. But it's still pointless unless there is a real potential for that sort of traffic.