Code Project
Benchmark deviations

Application Lifecycle
Tags: database, testing, beta-testing
6 Posts · 3 Posters · 5 Views
S Becker
#1

    Hello together, we want to run benchmark tests on our testing PC. We were surprised that the benchmark sometimes shows strong deviations (sometimes only 20% of the best benchmark value, across several Windows versions). I cannot explain what happens there. We even made an image of the PC's hard drive that we restore every time before we run the (two) benchmarks, yet we still see those deviations. The next time I do some testing, I will check that the following services are not running:

      • the indexing service
      • the defragmentation service
      • the antivirus service

    Do you have any ideas what else has to be stopped, changed, or run in addition to the above to make the benchmark more consistent? I hope this is the right forum for this.

    Regards Sascha
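    To put a number on "only 20% of the best value", one simple approach is to express each run as a fraction of the best run. A minimal sketch; the scores below are invented for illustration, not Sascha's actual data:

    ```python
    # Hypothetical benchmark scores from repeated runs of the same disk image
    # (higher is better); the numbers are made up for illustration.
    scores = [950, 980, 210, 940, 975]

    best = max(scores)
    for s in scores:
        # Report each run relative to the best observed run.
        print(f"score {s:4d} -> {s / best:.0%} of best")
    ```

    A run landing near 21% of the best value is exactly the kind of outlier described above; logging every run this way makes outliers visible instead of letting them hide in an average.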

      jschell
      #2

      S. Becker wrote:

      to make the benchmark more consistent.

      My guess would be that your benchmark is either just wrong or you do not understand what it is measuring. I should note as well that attempting to measure a PC/OS without understanding the real business use (generally an application) is an exercise that is doomed to fail.

        S Becker
        #3

        We ran the benchmark tests with two different benchmark tools, and both showed those strong deviations, so I do not think it is a problem of the benchmark tools. The way the benchmark is run is defined in a Word document. Even if we did not understand what is being measured: if the input is the same, the output should also be the same. My job now is to find out what went wrong, because if the (benchmark) PC does not always produce (more or less) the same result, how can our software be measured? Thank you.

        Regards Sascha

          Lost User
          #4

          S. Becker wrote:

          We did the benchmark tests with two different benchmark tools

          Maybe you should talk to the people who provided the tools. Benchmarking computers is a difficult activity at the best of times because there are so many variables to be taken into account, particularly with multi-tasking operating systems.

          One of these days I'm going to think of a really clever signature.

            jschell
            #5

            S. Becker wrote:

            how can our software be measured.

            By running your software and exposing it to various loads.
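            jschell's advice (run the software itself under realistic loads and measure that) can be sketched as a small timing harness. The `measure` helper and the toy workload below are hypothetical stand-ins, not anything from the thread:

            ```python
            import statistics
            import time

            def measure(workload, repeats=5):
                """Time a callable several times and summarize the spread."""
                samples = []
                for _ in range(repeats):
                    start = time.perf_counter()
                    workload()
                    samples.append(time.perf_counter() - start)
                return {
                    "median": statistics.median(samples),
                    "min": min(samples),
                    "max": max(samples),
                }

            # Example load: a CPU-bound loop standing in for real application work.
            stats = measure(lambda: sum(i * i for i in range(100_000)))
            print(stats)
            ```

            Reporting the median together with the min/max spread, rather than a single run, is what makes run-to-run deviations of the kind Sascha describes visible.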

              S Becker
              #6

              In the end it was a mix of a too-small sample for calculating the relative variance and some unnecessary processes (the upgrade service, for example). Now, with the bigger sample and those processes stopped, the results are more consistent. Thank you for your help.

              Regards Sascha
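              The fix Sascha describes can be illustrated with the usual measure of relative variance, the coefficient of variation (sample standard deviation divided by the mean). The numbers below are invented; they only show how a single slow outlier dominates a small sample and gets diluted in a larger one:

              ```python
              import statistics

              def relative_variation(samples):
                  """Coefficient of variation: sample stdev as a fraction of the mean."""
                  return statistics.stdev(samples) / statistics.mean(samples)

              small = [100, 70, 95]  # few runs: one slow outlier dominates
              large = small + [98, 101, 97, 99, 96, 100, 102]  # more runs dilute it

              print(f"small sample: {relative_variation(small):.1%}")
              print(f"large sample: {relative_variation(large):.1%}")
              ```

              With more runs, the relative variation drops, which matches the observation that the bigger sample made the results look more consistent.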
