Delay of 250 micro sec

C / C++ / MFC

#1  Shiva Prasad wrote:

Hi, I need to create a delay in steps of 250 microseconds, but the GetTickCount API counts ticks in milliseconds. Is there any other method to achieve this, perhaps using inline assembly? The only issue is that it shouldn't be processor specific/dependent. Can someone post a snippet for the same? Thanks.

#2  Chris Losinger wrote:

http://www.codeproject.com/system/simpletime.asp

Cleek | Image Toolkits | Thumbnail maker

#3  Rilhas wrote:

Hi, I don't think you will be able to do this in Windows, because the Windows timers don't really provide such high resolution. With multimedia timers you can get about 1 ms resolution at best; check out the timeBeginPeriod function.

If you are into developing hardware, you might solve your problem by building a microcontroller that connects to a serial port, and then using the received serial data as event triggers. With a serial data rate of 128000 bps and 10 bits per byte, you would get about 12800 bytes per second (78 us per byte). Nevertheless, there are no real guarantees of accurate timing, because Windows could always group several bytes together before generating an event to the application, especially if there are other processes running. (You could try a loopback serial cable to avoid developing hardware.)

What you could do, using assembly language or C, is take advantage of the fairly predictable execution rate of processors (after startup stabilization and cache warm-up) to estimate how many instructions execute within a given timer tick interval. For example, if 10 ms elapse between timer ticks, and during that time your processor executes 3,000,000 for() loop iterations, then it is a reasonable approximation that your processor will execute 300,000 iterations in 1 ms, or 75,000 in 250 us. So, to wait 250 us you would do "for(int i=0; i<75000; i++);". For this to be accurate, your program should not be interrupted by other processes in the system (so it should have a high execution priority), the for() loops you use to measure the execution rate should be very similar (ideally identical) to the for() loops used for waiting, and, of course, the execution time wasted between waits should be negligible.

One thing you must take into account when writing delays with this technique is that the execution speed should be measured for each processor, or, ideally, before any execution. For the loops to be similar, you could start a loop that you know will take about 1 s or so to execute, and when it finishes measure the time difference. For example:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        double t0, t1;
        t0 = ((double)clock()) / ((double)CLOCKS_PER_SEC);
        for (int i = 0; i < 500000000; i++)   /* calibration loop */
            ;
        t1 = ((double)clock()) / ((double)CLOCKS_PER_SEC);
        printf("t1-t0=%g\n", t1 - t0);
        return 0;
    }

On my computer, which is a Pentium 4 running at 3.2 GHz, this 500,000,000-count loop takes about 1.1 seconds. If I ran the same application on my old 386 at 25 MHz, I could expect it to take over 2 minutes, which could be unacceptable.
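To tie the calibration and the 250 us wait together, here is a minimal sketch of the busy-wait approach described in this post. It is only an illustration under the same assumptions (high process priority, single thread); the function names, the volatile sink variable, and the calibration count are mine, not from the post.

    /* Calibrated busy-wait sketch: measure how many loop iterations fit
       in a known interval, then reuse that rate for 250 us delays.
       Names and counts are illustrative only. */
    #include <stdio.h>
    #include <time.h>

    static volatile long sink;        /* keeps the compiler from removing the loops */
    static double loops_per_us;       /* calibrated loop iterations per microsecond */

    static void spin(long count)
    {
        for (long i = 0; i < count; i++)
            sink = i;
    }

    static void calibrate(void)
    {
        const long total = 100000000L;        /* long enough to span many clock ticks */
        clock_t t0 = clock();
        spin(total);
        clock_t t1 = clock();
        double elapsed_us = 1.0e6 * (double)(t1 - t0) / (double)CLOCKS_PER_SEC;
        loops_per_us = (double)total / elapsed_us;
    }

    static void delay_us(double us)
    {
        spin((long)(loops_per_us * us));
    }

    int main(void)
    {
        calibrate();
        printf("about %.0f loop iterations per microsecond\n", loops_per_us);
        delay_us(250.0);              /* roughly 250 us busy-wait */
        return 0;
    }

As noted above, a context switch in the middle of the wait stretches the delay, and the calibration has to be repeated on every machine (and ideally on every run), so this gives approximate 250 us steps at best.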
