Scaling out transaction processing
-
I’m seeking ideas for an efficient technique to sequence transactions in a SQL Server table for consumption by a serviced-component app server running on several physical machines. Currently the app server selects the “TOP 1” transaction primary key from the table and inserts it into a logical locking table along with an expiration timestamp and the current machine name. The “TOP 1” selection also excludes transactions whose primary key is already in the logical locking table for another physical machine and whose expiration timestamp has not yet passed. The table locks taken on the logical locking table during lock inserts are limiting concurrency, but this is the only method I could think of to ensure each server processes a unique row in the table. Thanks
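As I read the description, the current approach looks roughly like the following sketch. The table and column names (`Transactions`, `LogicalLock`, `TxnID`, etc.) are invented for illustration; the original post does not give the actual schema.

```sql
BEGIN TRANSACTION;

DECLARE @TxnID int;

-- Pick the next transaction whose key is not already logically locked
-- by another machine with an unexpired lock.
SELECT TOP 1 @TxnID = t.TxnID
FROM dbo.Transactions AS t
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.LogicalLock AS l
    WHERE l.TxnID = t.TxnID
      AND l.ExpiresAt > GETUTCDATE()
)
ORDER BY t.TxnID;

-- This INSERT is the step the post describes as taking a table lock
-- on the logical locking table, serializing the app servers.
IF @TxnID IS NOT NULL
    INSERT INTO dbo.LogicalLock (TxnID, MachineName, ExpiresAt)
    VALUES (@TxnID, HOST_NAME(), DATEADD(MINUTE, 60, GETUTCDATE()));

COMMIT TRANSACTION;
```

The lock on `dbo.LogicalLock` is what guarantees two servers can't claim the same key, which is why relaxing it naively would reintroduce duplicate processing.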
-
Anubis333 wrote:
efficient technique to sequence transactions in a SQL Server table for consumption by a serviced-component app server running on several physical machines.
What if you have already painted yourself into a corner? What problem are you trying to solve? I could guess based on your post, but if wrong, it's a waste of time.
-
If it takes a server 60 minutes to process one transaction, I could increase throughput by running 20 transactions on 20 different servers at the same time. The problem I’m trying to solve is finding an efficient way to distribute the processing of independent transactions across multiple machines, such that the same transaction is never processed by more than one of the 20 servers. In short, the original question I asked is the problem I’m trying to solve.
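For the "claim each row exactly once, without a separate locking table" goal, one commonly cited SQL Server pattern is an atomic claim using the `UPDLOCK`, `ROWLOCK`, and `READPAST` hints. This is a hedged sketch, not from the thread, and the `Status`/`LockedBy`/`LockExpires` columns are assumptions about the schema:

```sql
-- READPAST skips rows another server has already row-locked, UPDLOCK+ROWLOCK
-- claims exactly one row atomically, and OUTPUT returns the claimed key,
-- so no separate logical-lock insert (and no table lock) is needed.
WITH next_txn AS (
    SELECT TOP 1 *
    FROM dbo.Transactions WITH (UPDLOCK, ROWLOCK, READPAST)
    WHERE Status = 'Pending'
    ORDER BY TxnID
)
UPDATE next_txn
SET Status      = 'Processing',
    LockedBy    = HOST_NAME(),
    LockExpires = DATEADD(MINUTE, 60, GETUTCDATE())
OUTPUT inserted.TxnID;
```

Each of the 20 servers can run this statement concurrently; blocked rows are simply skipped rather than waited on, so contention stays at the row level. A periodic sweep would still be needed to reset rows whose `LockExpires` has passed (e.g. a crashed server).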
-
Anubis333 wrote:
I'm trying to find an efficient way
Anubis333 wrote:
The table locks taken on the logical locking table during lock inserts are limiting concurrency
So there is an efficiency problem with your current solution?
Anubis333 wrote:
If it takes a server 60 minutes to process one transaction
It's difficult to believe that the time the database needs to take table locks could significantly slow down a 60-minute transaction. :confused:
-
:omg: The 60-minute figure was a hypothetical to help define what I'm trying to accomplish; it had nothing to do with my current solution. I wasn’t seeking critiques of my current solution, only ideas for a better or different method of achieving the same goal. As always, I’m getting responses on everything except the question at hand, which is: "I’m seeking ideas for an efficient technique to sequence transactions in a SQL Server table for consumption by a serviced-component app server running on several physical machines."