Solid State Drives
-
In a Windows C++ environment, while writing to a log file I observe a periodic hiccup of approximately 16 milliseconds between writes. So I wrote a little function that writes 20,000 sequentially numbered messages in a while loop. Approximately every 3,000 to 4,000 messages I see a 16 millisecond delay. This is with no programmatic flushing; with flushing after each write, it happens after approximately every 2,000 messages. How can I get around this recurring hiccup? Would a solid state disk solve this problem, or at least dramatically reduce the hiccup time? Thanks, ak
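Here is roughly what the test function looks like (a simplified sketch, not the real code; the file name and the 10 ms reporting threshold are arbitrary):
<pre>
#include &lt;windows.h&gt;
#include &lt;cstdio&gt;

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&amp;freq);

    FILE* log = fopen("test.log", "w");   // placeholder file name
    if (!log) return 1;

    QueryPerformanceCounter(&amp;t0);
    for (int i = 0; i &lt; 20000; ++i)
    {
        fprintf(log, "message %05d\n", i);
        // fflush(log);                   // enable to flush after every write

        QueryPerformanceCounter(&amp;t1);
        double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
        if (ms &gt; 10.0)                    // report anything close to the 16 ms hiccup
            printf("hiccup of %.1f ms at message %d\n", ms, i);
        t0 = t1;
    }
    fclose(log);
    return 0;
}
</pre>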
-
I suggest you calculate the number of bytes moved between two hiccups; I bet it will be a multiple of the sector size (512 B), quite possibly the cluster size (which could be the next power-of-2 multiple of 512 B that exceeds the partition size divided by 64K). What is probably happening is that a new cluster needs to be allocated, causing an update of the FAT. A possible workaround is to preallocate the file, which implies you need to know and allocate the maximum size before you start writing the data. One way to do that is with a "memory-mapped file". AFAIK solid state disks are faster, especially when reading data and hopping around (their seek time is much better), so the same problem would exist there too, at a much smaller scale. :)
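For example, a bare-bones sketch of the memory-mapped approach could look like this (Win32, error handling omitted; the file name and size are just placeholders):
<pre>
#include &lt;windows.h&gt;
#include &lt;cstring&gt;

int main()
{
    const DWORD size = 4 * 1024 * 1024;    // placeholder: maximum log size, chosen up front

    HANDLE file = CreateFileA("app.log", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    // CreateFileMapping extends the file to 'size' bytes in one go,
    // so no cluster allocation happens while logging.
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, size, NULL);
    char* view = (char*)MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, size);

    // Writing a message is now just a copy into the mapped view.
    size_t offset = 0;
    const char msg[] = "message 00001\r\n";
    memcpy(view + offset, msg, sizeof(msg) - 1);
    offset += sizeof(msg) - 1;

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
</pre>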
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
-
Luc Pattyn wrote:
AFAIK solid state disks are faster, especially when reading data and hopping around (their seek time is much better). So the same problem would exist, at a much smaller scale.
You have a potentially much bigger problem with SSDs. You can't erase single cells in an SSD; instead it erases fairly large blocks of data (usually 128 kB). So if you change a few bytes in a file, the drive will write another block, mark the original block for deletion, and do the actual erase whenever it's idle. (In the worst case, changing a ~50-byte log line can mean rewriting a whole 128 kB block, a write amplification of more than 2,500x.) If you use an SSD for logging purposes, it might never be idle long enough to get the erasing done, and you would end up with a drive that has no free blocks despite being far from full. When that happens, you will get a hiccup of a totally different magnitude.
-
Interesting. So for continuous writes, the app should not flush, and one should have Windows cache the file in chunks that correspond to the block size of the SSD, avoiding almost all partial-block writes. Not sure that can be organized easily. :)
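From the application side, something like this might approximate it (the 128 kB erase-block size is an assumption, and the last partial block still has to be flushed at shutdown):
<pre>
#include &lt;cstdio&gt;
#include &lt;cstring&gt;

const size_t BLOCK = 128 * 1024;      // assumed SSD erase-block size
char buffer[BLOCK];
size_t used = 0;

// Accumulate log text and hand data to the OS only in whole blocks,
// so the drive sees full-block writes instead of many small ones.
void log_write(FILE* f, const char* msg, size_t len)
{
    while (len &gt; 0)
    {
        size_t n = BLOCK - used;
        if (len &lt; n) n = len;
        memcpy(buffer + used, msg, n);
        used += n; msg += n; len -= n;
        if (used == BLOCK)            // only ever issue full-block writes
        {
            fwrite(buffer, 1, BLOCK, f);
            used = 0;
        }
    }
}

int main()
{
    FILE* f = fopen("buffered.log", "wb");   // placeholder file name
    if (!f) return 1;
    char line[64];
    for (int i = 0; i &lt; 20000; ++i)
    {
        int len = sprintf(line, "message %05d\r\n", i);
        log_write(f, line, (size_t)len);
    }
    fwrite(buffer, 1, used, f);       // flush the final partial block once, at shutdown
    fclose(f);
    return 0;
}
</pre>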
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
-
Even in a degraded state (and forcing that is getting hard even for benchmarkers), a current-generation SSD will still outperform a mechanical drive.
3x12=36 2x12=24 1x12=12 0x12=18
-
Yes it will indeed. And in all normal use you won't notice it at all.
-
The original response talks about preallocating files. I'd like to try an experiment with this approach and see what the improvement is. Can anyone suggest some Visual Studio C++ code that will accomplish this:
1) preallocated file creation, where the file is reused with each run of the program;
2) code that opens the file;
3) code that sequentially writes ASCII data messages to the file, starting at the beginning.
Thanks
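Something along these lines is what I'm after, if it helps to make it concrete (a Win32 sketch with error checks omitted; the file name and the 2 MB preallocation size are placeholders):
<pre>
#include &lt;windows.h&gt;
#include &lt;cstdio&gt;

int main()
{
    const LONGLONG prealloc = 2 * 1024 * 1024;   // assumed maximum log size

    // 1) create the file (OPEN_ALWAYS reuses it on later runs) and
    //    extend it to its final size so the clusters are allocated now
    HANDLE h = CreateFileA("prealloc.log", GENERIC_WRITE, 0, NULL,
                           OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    LARGE_INTEGER size; size.QuadPart = prealloc;
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);                             // clusters are now allocated

    // 2) rewind to the beginning of the file
    LARGE_INTEGER zero = {};
    SetFilePointerEx(h, zero, NULL, FILE_BEGIN);

    // 3) write the numbered ASCII messages sequentially from the start
    //    (the unused tail of the file will read back as zeros)
    char msg[64];
    DWORD written;
    for (int i = 0; i &lt; 20000; ++i)
    {
        int len = sprintf(msg, "message %05d\r\n", i);
        WriteFile(h, msg, (DWORD)len, &amp;written, NULL);
    }
    CloseHandle(h);
    return 0;
}
</pre>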