Check this out
-
Pretty cool. My only concern is how practical it is. Sure, the Nasdaq can afford it, but what about Joe Average? Memory may be cheaper than ever, but per MB it is still far more expensive than hard disk technology. Also, imagine if an in-memory database system lost power or crashed (OK, this is totally twisted and not the utterance of a productive citizen *looks over shoulder for parole officer* ;))... Everything, gone. Sure, power loss and crashes can harm hard disks too, but then a data recovery expert comes around, extracts the data, and puts it on a new disk. With memory, no chance. Though I am sure these guys have thought of it :) regards, Paul Watson Cape Town, South Africa e: paulmwatson@email.com w: vergen.org
The power-loss problem exists and is a very serious one. But the databases, I think, should be some kind of memory-mapped file, periodically written to disk. Since all or a large part of the database resides in memory (depending on memory availability), queries will be extremely fast. There is a chance that certain queries will have to fetch data from disk (when memory requirements exceed what is available), making them slower. I think there is a huge market if one of these products develops into a robust implementation with good recovery features in case of hardware failure. -- Thomas
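The memory-plus-disk idea Thomas describes can be sketched in a few lines of C++: serve every read from an in-memory map, and append every write to a log file so the memory state can be rebuilt after a crash. This is only an illustrative sketch (the class name `LoggedStore` and the file format are invented here; real engines use far more sophisticated checkpointing):

```cpp
#include <fstream>
#include <map>
#include <string>

class LoggedStore {
    std::map<std::string, std::string> mem_;  // all queries served from here
    std::ofstream log_;                       // append-only write-ahead log
public:
    explicit LoggedStore(const std::string& path)
        : log_(path, std::ios::app) {
        // Recovery: replay the log to rebuild the in-memory state on startup.
        std::ifstream replay(path);
        std::string key, value;
        while (replay >> key >> value) mem_[key] = value;
    }
    void put(const std::string& key, const std::string& value) {
        log_ << key << ' ' << value << '\n';  // durable on disk first...
        log_.flush();
        mem_[key] = value;                    // ...then visible in memory
    }
    bool get(const std::string& key, std::string* out) const {
        auto it = mem_.find(key);             // pure memory lookup, no disk I/O
        if (it == mem_.end()) return false;
        *out = it->second;
        return true;
    }
};
```

Reads never touch the disk, which is where the speed comes from; the log only grows on writes, and a production system would periodically compact it into a snapshot.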
-
Unless I am missing something, it looks to me like this is not the future of database technology but rather one implementation of present database technology. I worked on something like this quite a while ago, and it had been implemented long before I got into it. I cannot give any details because of the NDA I signed, but a detailed web search will show that there is nothing much revolutionary here.
The database technology is not new. It is just that having the whole or a large part of the database reside in memory is not a reality in mainstream database products. The challenge in making the database reside in memory is in the recovery features; the actual indexing/querying will not be very different and may even be the same. Now that memory has become cheaper, the research done by many people over many years is appearing in commercial products -- hence I revise my observation: these new products can be the future of commercial database products (not technology). -Thomas
-
Yup, very old tech. We developed systems like this for process control 20+ years ago. Power failure recovery really wasn't much of an issue. Tim Smith Descartes Systems Sciences, Inc.
We have a quote server that processes quotes from multiple exchanges and stores them in a database. We use a custom-built in-memory database. There are multiple servers reading the same data feeds, so power-failure recovery is not an issue there, but it led me to think: if we had a database server along the same lines that is
- transaction based
- recoverable from power failures/crashes
- SQL supported
- cluster supporting
it could replace the SQL server I am using now. It would decrease the latency for a customer order sent to the exchange from the current 20 to 30 ms to around 100 microseconds, letting one server handle more requests per second. Making the basic database was straightforward, but a robust implementation of the above four features would make it a real alternative to disk-based database servers. I am working on a general in-memory database that does not have these features (to publish on CP). -- Thomas
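The core of such a quote store is very simple: keep only the latest quote per symbol in a hash map, so each lookup is an O(1) memory access with no disk or IPC on the query path. A minimal sketch (the `Quote`/`QuoteBook` names are invented for illustration, not our actual server code):

```cpp
#include <string>
#include <unordered_map>

struct Quote {
    double bid;
    double ask;
    long volume;
};

// Latest quote per symbol. An unordered_map gives O(1) average lookup,
// which is what keeps per-query latency in the microsecond range.
class QuoteBook {
    std::unordered_map<std::string, Quote> quotes_;
public:
    void update(const std::string& symbol, const Quote& q) {
        quotes_[symbol] = q;  // feed handler overwrites the previous quote
    }
    const Quote* lookup(const std::string& symbol) const {
        auto it = quotes_.find(symbol);
        return it == quotes_.end() ? nullptr : &it->second;
    }
};
```

The hard part is everything around this map: transactions, crash recovery, SQL, and clustering, which is exactly the four-feature list above.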
-
The only feature you listed that we lacked was real transaction support. But we did do recovery, SQL, and full and partial redundancy with load balancing. Our DCUs (distributed computing units) and RTUs (remote telemetry units) were capable of providing subset databases to allow plant operations to continue when the primary computer systems failed. The RTUs were really fun because we were trying to do distributed databases over radio (slowwwwwww). Of course, the biggest nightmare in this is getting nodes to properly resync with the rest of the network when they come back online. As far as high-speed access, we didn't do relational; we did a variant on hierarchical. Our indexer was dynamic and provided nice on-demand indexing. Oh, it was also a very strange variant on an OODBMS. It didn't really allow methods to be invoked on records, but it fully supported multiple layers of get/put support for each field in a record, which greatly improved the performance of the database system. Tim Smith Descartes Systems Sciences, Inc.
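The "layered get/put per field" idea can be pictured as each field carrying a chain of transforms applied on write and on read (scaling, unit conversion, and so on), instead of whole-record methods. A hypothetical sketch of that shape (the `Field` class and its method names are invented here, not Tim's actual system):

```cpp
#include <functional>
#include <vector>

// One database field with stacked get/put layers. Each put runs the value
// through every put-layer before storing it; each get runs the stored value
// through every get-layer before returning it.
class Field {
    double raw_ = 0.0;
    std::vector<std::function<double(double)>> on_put_, on_get_;
public:
    void add_put_layer(std::function<double(double)> f) { on_put_.push_back(f); }
    void add_get_layer(std::function<double(double)> f) { on_get_.push_back(f); }
    void put(double v) {
        for (auto& f : on_put_) v = f(v);  // e.g. convert to storage units
        raw_ = v;
    }
    double get() const {
        double v = raw_;
        for (auto& f : on_get_) v = f(v);  // e.g. convert back for the caller
        return v;
    }
};
```

The appeal for performance is that conversions are attached once per field rather than paid through virtual dispatch on whole records.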
-
Sorry if I sound like I am trying to rain on your parade; it wasn't intended. I am actually a big fan of memory-resident databases. They are very common in industrial automation. Much work has been done with SQL Server to provide high-speed memory-resident capabilities (I think the original software came from South Africa and was purchased by Wonderware, but I might have my facts wrong). Tim Smith Descartes Systems Sciences, Inc.
-
That is fine. :) It actually helped me a lot to learn that this is not a new trend at all, just a new trend in mainstream commercial databases. The memory-resident features of SQL Server are certainly working: I tried putting a SQL Server database on a RAM drive, and guess what? There was less than a 10% increase in performance. I found that the database is more processor-intensive. But then I do not understand the millisecond responses to queries. I would have imagined that an in-memory database query should run in microseconds. I am not talking about very complex queries; I am talking about a single table queried on a field which is the primary index of that table. With memdb and TimesTen claiming microsecond query execution times, it seems they have done something different. Thanks for your comments. --Thomas
-
http://www.memdb.com/ I think this is the future of database technology. Another company, TimesTen (an HP subsidiary - www.timesten.com), already has a commercial product. It is used by the Primex auction system at Nasdaq. - Thomas
What's so amazing about this? Surely using something like: std::map<Key, Record> memDB; is already doing this? I use this all the time, until I identify that I'm going to have memory problems (anything over 1 MB or so, I'd look for a disk-based database). Regards, Ray
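For what it's worth, a std::map covers lookup by the primary key, but any query on a non-key column is a linear scan unless you maintain extra structures by hand; exactly the bookkeeping a database engine automates. A hypothetical sketch with a manual secondary index (the `Person`/`city_index` names are invented for illustration):

```cpp
#include <map>
#include <string>

struct Person {
    std::string name;
    std::string city;
};

// Primary "table": id -> record.
std::map<int, Person> table;
// Secondary index: city -> id, so "WHERE city = ?" avoids a full scan.
std::multimap<std::string, int> city_index;

void insert(int id, const Person& p) {
    table[id] = p;
    city_index.insert({p.city, id});  // every index must be kept in sync
}

int count_in_city(const std::string& city) {
    auto range = city_index.equal_range(city);
    int n = 0;
    for (auto it = range.first; it != range.second; ++it) ++n;
    return n;
}
```

Once you add concurrent writers, crash recovery, and ad hoc queries, the hand-rolled version stops being a few lines, which is where these products come in.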
-
Well, just my 2 cents. There are not many ways to speed up database systems:
1. Software optimisation:
   a. algorithms to optimise the code/efficiency
   b. use memory to the fullest, up to 100% in the case of memdb
2. Hardware optimisation:
   - faster HDs (with memory caches)
   - 100% memory HDs, already in use in web technologies (32 GB)
So, I don't think memdb will rule, because you can already run an Oracle or SQL Server application on a memory HD without touching the application's code. It only costs hardware. In the case of memdb, you have to buy hardware, you have to change the code, and the database system is quite recent (reliability? scalability?...). Guy LECOMTE
-
I tried putting a SQL Server database on a freeware RAM drive. It did not give me more than a 10% increase in query speeds; all the times were in the millisecond range. I tried this to improve the response of a database application that I am developing now. But these are database products that can be installed like SQL Server or Oracle and give query execution times in microseconds (tens of microseconds), or so they claim. This suggests that what they are doing is much more than putting a SQL database on a memory HDD. There is a lot of research material about these on the web. What I was trying to say is this: just as we can use SQL Server or Oracle out of the box, we now have database servers that provide 10 or even 100 times better query response times - and they install out of the box. They are new systems, and I do not expect people to jump all over them. TimesTen is obviously reliable and scalable (NASDAQ is using it), but it might also be very expensive. Memdb is new software and is not proven; I think it is just in its nascent stages and has huge market potential. I stumbled across these products when looking for high-performance databases. I have sent a query to memDb about their scalability, SQL support, and reliability features, and I will share any interesting stuff that they send me. - Thomas
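The RAM-drive result can be sanity-checked from the other direction: time raw in-memory lookups yourself. If those take microseconds while the full database round trip takes milliseconds, the cost is in the engine (SQL parsing, locking, inter-process communication), not in disk I/O, which is consistent with moving the data files to a RAM drive gaining only ~10%. A rough sketch (timings will vary by machine; this only measures the data-structure cost):

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

// Time n keyed lookups against a plain in-memory std::map and report the
// total in microseconds. Returns the number of successful lookups.
long timed_lookups(int n) {
    std::map<int, std::string> table;
    for (int i = 0; i < n; ++i) table[i] = "row";

    auto start = std::chrono::steady_clock::now();
    long hits = 0;
    for (int i = 0; i < n; ++i)
        if (table.find(i) != table.end()) ++hits;
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now() - start).count();

    std::printf("%ld lookups took %lld us total\n", hits,
                static_cast<long long>(us));
    return hits;
}
```

On typical hardware each lookup is well under a microsecond, so the millisecond-range SQL Server times are dominated by per-query engine overhead.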
-
Here I am talking about software that
- installs as a database server (just like MS SQL or Oracle)
- has SQL support
- improves query response time by using in-memory db technology
- can be used from any program written in any language (provided they support ODBC, OLEDB/ADO, etc. I sure hope they do :))
It is entirely different from std::map. - Thomas
-
Have you tried CodeBase from Sequiter Software? It is very fast, portable, comes with source code, and is not too expensive. http://www.sequiter.com/ If your queries aren't too complex, it is worth a try! regards Guy LECOMTE