Code Project › Home › General Programming › C / C++ / MFC

Funny timing results in threading project

Tags: c++, question, data-structures, tools, announcement
5 Posts · 3 Posters
Cyrilix wrote (#1):

I'm timing a threading utility that pushes a batch of work to a thread pool, executes it, and then measures how long the whole run took. The concept is really simple. However, when I run the tests in release mode with /O2 enabled, the run takes either 0.25 s or 0.33 s, alternating between the two at random.

Now, if I add a printf() statement right when the thread pool is shutting down, printing the number of work units each thread has completed, the result is always 0.25 s -- and by always, I mean 100%: it is never 0.33 s. I even tried replacing that printf() with one that simply prints "WTF?", and the result is the same (always 0.25 s, never 0.33 s). This makes me think that something about console operations is affecting my timing, but I have no clue what. For reference, I am using Visual C++ 2005, and QueryPerformanceCounter() / GetTickCount() for the timing (I've used both, and they both show this behaviour).

Anyone know what's going on?

EDIT: I forgot to mention one important thing. Here is how my timing works:

1. Initialize thread pool
2. Get count of start time
3. Queue and process work
4. Get count of end time
5. Release thread pool

Now we can truly appreciate the hilarity of these results: the printf() statement, inside the release of the thread pool, isn't even between the start and end counts! It's as if something I do in the future is changing the result of what is happening now.


carrivick wrote (#2):

Maybe you're overwriting your stack at some point? Put in some Sleep() calls to make the execution times longer, and see if the measured times follow the durations you set.


Lost User wrote (#3):

Hi,

I believe that when a thread has no synchronization objects associated with it, the OS kernel yields and pre-empts the thread in a semi-random way, based on a priority-driven, preemptive scheduling algorithm. However, once a thread has created a synchronization object, the kernel begins to use that object internally as a reference point for yielding processor time. So simply by creating a synchronization object within a thread, the quantum timeslices should indeed become more uniform. http://msdn2.microsoft.com/en-us/library/ms686364.aspx[^] Maybe when your application opens STD_INPUT_HANDLE or STD_OUTPUT_HANDLE, an internal synchronization object is being created, causing the uniform quantum slices. This is purely a guess.

It should also be noted that when using QueryPerformanceCounter() on a multiprocessor computer, to retrieve the most accurate timestamp you should set the thread's affinity mask to a single processor using SetThreadAffinityMask().

Best Wishes,
-Randor (David Delaune)


Cyrilix wrote (#4):

Thanks for the reply. Your explanation sounds plausible, although there doesn't seem to be a way for me to confirm it. I'm fairly sure this has nothing to do with multiprocessors and QueryPerformanceCounter(), as there is plenty of information suggesting the use of /usepmtimer in C:\boot.ini to fix incorrect timing results. I'm also unsure how well SetThreadAffinityMask() works for timing, as I don't believe it provides any guarantees, only a suggestion. In another application I was developing, calling SetThreadAffinityMask() before QPC still gave me timing results that were off on 32-bit Windows XP (64-bit Windows XP, however, had no such issues).


Cyrilix wrote (#5):

            I will give this a try. Thanks.
