Code Project forum: Hardware & Devices

Solid State Drives

Tags: question, c++, architecture, help, workspace
8 Posts, 4 Posters
Alan Kurlansky (#1)

In a Windows C++ environment, I observe a periodic hiccup of approximately 16 milliseconds between writes to a log file. To investigate, I wrote a small function that writes out 20,000 sequentially numbered messages in a while loop. Roughly every 3,000 to 4,000 messages I see a 16-millisecond delay. This is with no programmatic flushing; with a flush after each write, it happens after roughly every 2,000 messages. How can I get around this recurring hiccup? Would a solid-state drive solve the problem, or at least dramatically reduce the hiccup time? Thanks, ak

Luc Pattyn (#2)

I suggest you calculate the number of bytes written between two hiccups; I bet it will be a multiple of the sector size (512 B), possibly simply the cluster size (which could be the next power-of-2 multiple of 512 B that exceeds the partition size divided by 64K). What is probably happening is that a new cluster needs to be allocated, causing an update of the file allocation table (FAT). A possible workaround is to preallocate the file, which implies you need to know and allocate the maximum size before you start writing the data. One possible approach is to use a "memory-mapped file". AFAIK solid-state drives are faster, especially when reading data and hopping around (their seek time is much better), so the same problem would exist, at a much smaller scale. :)

      Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum

      Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.

      modified on Tuesday, January 11, 2011 7:41 PM

Jorgen Andersson (#3)

Luc Pattyn wrote:

AFAIK solid-state drives are faster, especially when reading data and hopping around (their seek time is much better). So the same problem would exist, at a much smaller scale.

You have a potentially much bigger problem with SSDs. You can't erase single cells in an SSD; instead it erases fairly large blocks of data (usually 128 KB). So if you change a few bytes in a file, the drive writes a new block, marks the original block for deletion, and does the actual erase whenever it is idle. If you use an SSD for logging purposes, it might never be idle long enough for the erasing to get done, and you would end up with a drive that has no free blocks despite being far from full. When this happens you will get a hiccup of a totally different magnitude.

        List of common misconceptions

Luc Pattyn (#4)

Interesting. So for continuous writes the app should not flush, and one should have Windows cache the file in chunks that correspond to the block size of the SSD, avoiding almost all partial-block writes. Not sure that can be organized easily. :)


Dan Neely (#5)

Even in a degraded state (and forcing one is getting hard even for benchmarkers), a current-generation SSD will still outperform a mechanical drive.

            3x12=36 2x12=24 1x12=12 0x12=18

Alan Kurlansky (#6)

The original response talks about preallocating files. I'd like to try an experiment with this approach and see what the improvement is. Can anyone suggest some VS C++ code that will accomplish this? 1) preallocated file creation, with the file reused on each run of the program; 2) code that opens the file; 3) code that sequentially writes ASCII data messages to the file, starting at the beginning. Thanks

Jorgen Andersson (#7)

Yes, it will indeed. And in normal use you won't notice it at all.


Alan Kurlansky (#8)

The original response talks about preallocating files. I'd like to try an experiment with this approach and see what the improvement is. Can anyone suggest some VS C++ code that will accomplish this? 1) preallocated file creation, with the file reused on each run of the program; 2) code that opens the file; 3) code that sequentially writes ASCII data messages to the file, starting at the beginning. Thanks
