SQL Performance Question
-
I have a set of queries that pull data from a bunch of tables in a database and return totals for important data points for our company (i.e. total revenue for the day, the month, the quarter and so on). I have built a Windows "dashboard" to display these values and auto-refresh every 5 minutes. The problem is that if 10 or so people have this installed on their systems, the database gets hammered every 5 minutes, with all the queries pulling from our most important tables. There has got to be a better way to do this, and I figured the smart readers of CodeProject could give me a suggestion or advice. Thanks, JGA
-
When people learn about relational databases there is a lot of emphasis put on normalisation of the data model. However, like all optimisations, it really depends on the situation. Obviously you have a data model and it works reasonably well for most of the things that you do. However, a highly normalised data model is inefficient for reporting. What I suggest is to create a report table (or tables) that duplicates information from the main database, but flattened so that it holds data in the format that your SELECT statement is currently producing. For example, say you have a SELECT statement being kicked off for your "dashboard" like this:
SELECT A.C1, A.C2, B.C3, C.C4, C.C5, D.C6
FROM A
INNER JOIN B ON A.PKA = B.FKA
INNER JOIN C ON A.PKA = C.FKA
INNER JOIN D ON C.PKC = D.FKC
You create a single reporting table that contains C1..C6 in a flattened format, and a process that updates that table once every 5 minutes. All your "dashboard" users read from the report table, so your main database is hit only once per 5 minutes for the data, and the users query a fast table that is pre-formatted for their needs. Does this help?
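As a rough sketch of what that might look like (assuming SQL Server; the names DashboardTotals, RefreshDashboardTotals, Orders, Amount and OrderDate are made up for illustration, and the refresh would typically be scheduled as a SQL Server Agent job or similar):

-- Flattened reporting table holding exactly the figures the dashboard shows
CREATE TABLE DashboardTotals (
    SnapshotTime        DATETIME      NOT NULL,
    TotalRevenueDay     DECIMAL(18,2) NOT NULL,
    TotalRevenueMonth   DECIMAL(18,2) NOT NULL,
    TotalRevenueQuarter DECIMAL(18,2) NOT NULL
);
GO

-- Refresh procedure, scheduled to run every 5 minutes; this is the only
-- piece that touches the main transactional tables.
CREATE PROCEDURE RefreshDashboardTotals
AS
BEGIN
    TRUNCATE TABLE DashboardTotals;

    INSERT INTO DashboardTotals
        (SnapshotTime, TotalRevenueDay, TotalRevenueMonth, TotalRevenueQuarter)
    SELECT GETDATE(),
           SUM(CASE WHEN O.OrderDate >= CAST(GETDATE() AS DATE)
                    THEN O.Amount ELSE 0 END),
           SUM(CASE WHEN O.OrderDate >= DATEADD(DAY, 1, EOMONTH(GETDATE(), -1))
                    THEN O.Amount ELSE 0 END),
           SUM(CASE WHEN O.OrderDate >= DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()), 0)
                    THEN O.Amount ELSE 0 END)
    FROM Orders O;   -- placeholder for your real revenue tables/joins
END
GO

-- Each dashboard client then only ever runs this cheap query:
SELECT TOP 1 * FROM DashboardTotals ORDER BY SnapshotTime DESC;

The dashboard clients never join against the live tables at all; they only read the single pre-computed row, so it no longer matters how many of them refresh every 5 minutes.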