Hi, I don't think you will be able to do this in Windows, because the Windows timers don't really provide such high resolution. With multimedia timers you can get about 1 ms resolution, at best. Check out the timeBeginPeriod function (there is a small usage sketch at the end of this answer).

If you are into developing hardware, you might solve your problem by building a microcontroller that connects to a serial port, and then use the received serial data as event triggers. With a serial data rate of 128000 bps and 10 bits per byte, you would get about 12800 bytes per second (78 us per byte). Nevertheless, there are no real guarantees of accurate timing, because Windows could always group several bytes together before generating an event to the application, especially if other processes are running. (You could try a loopback serial cable to avoid developing hardware.)

What you could do, using assembly language or C, is take advantage of the fairly predictable execution rate of processors (after startup stabilization and cache warm-up) to estimate how many loop iterations execute within a given timer tick interval. For example, if 10 ms elapse between timer ticks, and during that time your processor executes 3,000,000 for() loop iterations, then it is a reasonable approximation that it will execute 300,000 iterations in 1 ms, or 75,000 in 250 us. So, to wait 250 us you would do "for(int i=0; i<75000; i++);".

For this to be accurate, your program should not be interrupted by other processes in the system (so it should run at a high priority), and the for() loops you use to measure the execution rate should be very similar (ideally identical) to the for() loops used for waiting. And, of course, the execution time spent between waits should be negligible. One thing you must take into account when writing delays with this technique is that the execution speed should be measured on each processor, or, ideally, right before each run.

For the loops to be similar, you could start a loop that you know will take about 1 s to execute, and when it finishes, measure the elapsed time. For example:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        double t0 = (double)clock() / (double)CLOCKS_PER_SEC;
        for (int i = 0; i < 500000000; i++);   /* calibration loop */
        double t1 = (double)clock() / (double)CLOCKS_PER_SEC;
        printf("t1-t0 = %g s\n", t1 - t0);
        return 0;
    }

On my computer, a Pentium 4 running at 3.2 GHz, this 500,000,000-count loop takes about 1.1 seconds. If I ran the same application on my old 386 at 25 MHz, I could expect it to take over 2 minutes, which could be unacceptable.
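If you want to turn that measurement into a reusable delay, here is a rough sketch of a calibrated busy-wait built the same way. The calibrate/delay_us names are just illustrative, and real accuracy still depends on process priority, cache state, and the optimizer not removing the loops (hence the volatile variable):

    #include <stdio.h>
    #include <time.h>

    /* Illustrative only: calibrate the loop against clock(), then reuse the
       same loop shape to burn a requested number of microseconds. */
    static volatile long sink;      /* keeps the optimizer from deleting the loops */
    static double loops_per_us;

    static void calibrate(void)
    {
        const long n = 500000000L;
        double t0 = (double)clock() / (double)CLOCKS_PER_SEC;
        for (long i = 0; i < n; i++) sink = i;
        double t1 = (double)clock() / (double)CLOCKS_PER_SEC;
        loops_per_us = n / ((t1 - t0) * 1e6);
    }

    static void delay_us(double us)
    {
        long loops = (long)(us * loops_per_us);
        for (long i = 0; i < loops; i++) sink = i;   /* same loop shape as above */
    }

    int main(void)
    {
        calibrate();
        printf("approx. %.0f loops per microsecond\n", loops_per_us);
        delay_us(250.0);            /* busy-wait roughly 250 us */
        return 0;
    }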
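And for completeness, the timeBeginPeriod usage mentioned at the start might look something like this (a minimal sketch; it only gets you to roughly 1 ms, and you need to link against winmm.lib):

    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")

    int main(void)
    {
        /* Request 1 ms timer resolution for this process. */
        if (timeBeginPeriod(1) != TIMERR_NOERROR) {
            printf("1 ms resolution not supported\n");
            return 1;
        }

        Sleep(1);           /* now sleeps close to 1 ms instead of 10-15 ms */

        timeEndPeriod(1);   /* always pair with timeBeginPeriod */
        return 0;
    }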