Bignum timing

#1 Member 4194593

I am working on a bignum library and have been checking out some of the functions. I would like to know if the times I am seeing are good, bad, or indifferent.

To test big numbers, I used the RSA challenge file. The times are in seconds and hundredths. The initialize time covers setting high priority and allocating all of virtual memory. The read and split time is the time to read the file (one time) and tokenize it. The edit time is for deleting unneeded lines in the file. To get any difference in time between the different RSA instances, I had to loop 100,000 times (as you can see, the read/split time and the edit time are the default .01 second), so those times are listed as 100K*xxx; e.g., "RSA-2048 100K*DTB conversion time: 2.06" means 20.6 microseconds for any single loop.

A single RSA instance includes the following data, which is verified during the conversion (the first RSA instance is used as this example):

    RSA-576

    Status: Factored

    Decimal Digits: 174

    18819881292060796383869723946165043980716356337941
    73827007633564229888597152346654853190606065047430
    45317388011303396716199692321205734031879550656996
    221305168759307650257059

    Decimal Digit Sum: 785

The edit deletes the blank lines and the Status line. The initialize, read, and edit are only done one time.

The conversion time for each RSA instance includes validating that only decimal digits are present in the data, that the correct number of digits is present, and that the digit sum matches. I load the number as a radix 10 value, convert it to binary, and verify that the result has the correct number of bits. I save this binary value over the decimal digits so I can load it later without conversion. I also load this binary value to ensure correct operation with radix 1. Each RSA instance is processed 100,000 times and then the time is calculated.

The SQRT times include loading the binary value, taking the SQRT and getting its remainder, then squaring the SQRT, adding the remainder, and comparing the result with the RSA instance value. Again, each RSA instance is processed 100,000 times and then the time is calculated.

With all of the above operations as described, are the times at all reasonable, i.e. 20.6 microseconds for DTB conversion and validation of a 2048-bit binary number, and 63.6 microseconds to load, take the SQRT, and validate (SQRT**2 + remainder == semiprime) for the 2048-bit binary value? Dave.
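For a rough cross-check, here is a hypothetical GMP equivalent of the per-instance work described above, written in C. This is not Dave's MASM code: the loop count and the RSA-576 digits, digit count, digit sum, and bit count are taken from the post; everything else is assumption.

/* Hypothetical GMP re-creation of the per-instance work described
 * above -- not the original MASM code.  Build: cc bench.c -lgmp */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <gmp.h>

/* RSA-576: 174 decimal digits, digit sum 785, 576 bits (from the post) */
static const char *rsa576 =
    "18819881292060796383869723946165043980716356337941"
    "73827007633564229888597152346654853190606065047430"
    "45317388011303396716199692321205734031879550656996"
    "221305168759307650257059";

int main(void)
{
    enum { LOOPS = 100000 };
    mpz_t n, root, rem, check;
    mpz_inits(n, root, rem, check, NULL);

    /* DTB: validate digits, digit count, digit sum; convert; check bits */
    clock_t t0 = clock();
    for (int i = 0; i < LOOPS; i++) {
        size_t len = strlen(rsa576);
        unsigned sum = 0;
        for (size_t j = 0; j < len; j++) {
            if (rsa576[j] < '0' || rsa576[j] > '9') return 1;
            sum += (unsigned)(rsa576[j] - '0');
        }
        if (len != 174 || sum != 785) return 1;
        mpz_set_str(n, rsa576, 10);                /* radix-10 load    */
        if (mpz_sizeinbase(n, 2) != 576) return 1; /* bit-count check  */
    }
    double dtb = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* SQRT: root and remainder, then verify root^2 + rem == n */
    t0 = clock();
    for (int i = 0; i < LOOPS; i++) {
        mpz_sqrtrem(root, rem, n);
        mpz_mul(check, root, root);
        mpz_add(check, check, rem);
        if (mpz_cmp(check, n) != 0) return 1;
    }
    double sqt = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* seconds over 100K loops; seconds * 10 = microseconds per loop */
    printf("RSA-576 100K*DTB:  %.2f s (%.2f us/loop)\n", dtb, dtb * 10.0);
    printf("RSA-576 100K*SQRT: %.2f s (%.2f us/loop)\n", sqt, sqt * 10.0);

    mpz_clears(n, root, rem, check, NULL);
    return 0;
}

Numbers from a sketch like this are only a baseline (GMP is heavily optimized), but they give a quick way to judge whether 20.6 and 63.6 microseconds are in the right ballpark for 2048-bit operands.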

#2 PIEBALDconsult

You don't mean a big integer class? http://msdn.microsoft.com/en-us/library/system.numerics.biginteger(v=vs.110).aspx

#3 Member 4194593

        PIEBALDconsult wrote:

        You don't mean a big integer class?

No, I prefer to roll my own; my functions process only positive integers. I do not use C++ or .NET; I am a MASM (5, 6, 7, 8, 9) programmer. OBTW, I read the class documentation but could not find a SQRT function; did I miss it somewhere? If you are interested, I can email you the RSA.txt file (if you do not already have it) and you could implement and test the SQRT function and report the timings from the big integer class. My timings were taken on an HP Pavilion dv7-6c23cl (6 GB memory, quad-core AMD, 2.5 GHz), 32-bit assembly, console application. What I was looking for was anyone's WAG about how long the square roots of 576-bit to 2048-bit integers should take. Dave.
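For reference, .NET's BigInteger does not ship a square-root method, so a caller would have to roll one; the usual approach is a Newton iteration built on the class's divide and shift operations. A minimal sketch of that algorithm, written here in C with GMP types for concreteness (GMP itself already provides mpz_sqrtrem, so this is purely illustrative):

/* Newton's-method integer square root -- the standard workaround when
 * a big-integer class (e.g. .NET's BigInteger) has no Sqrt. */
#include <gmp.h>

/* root <- floor(sqrt(n)), for n >= 0 */
void isqrt(mpz_t root, const mpz_t n)
{
    mpz_t x, y;

    if (mpz_cmp_ui(n, 1) <= 0) {   /* 0 and 1 are their own roots */
        mpz_set(root, n);
        return;
    }
    mpz_inits(x, y, NULL);

    /* start above the root: 2^ceil(bits/2) >= sqrt(n) */
    mpz_set_ui(x, 0);
    mpz_setbit(x, (mpz_sizeinbase(n, 2) + 1) / 2);

    for (;;) {
        mpz_fdiv_q(y, n, x);        /* y = floor(n / x)         */
        mpz_add(y, y, x);           /* y = x + n/x              */
        mpz_fdiv_q_2exp(y, y, 1);   /* y = (x + n/x) / 2        */
        if (mpz_cmp(y, x) >= 0)     /* stopped decreasing: done */
            break;
        mpz_set(x, y);
    }
    mpz_set(root, x);               /* remainder would be n - x*x */
    mpz_clears(x, y, NULL);
}

From a guess one bit-length above the root, the loop converges in roughly log2(bit length) passes, each dominated by one big division, so a 2048-bit square root costs on the order of a dozen 2048-bit divisions.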

#4 Member 4194593

I have several more questions to ask about bignum processing, but since you were the only one to respond to my original question I thought I'd just ask you directly via email; however, your email seems to be blocked, so I will ask here directly.

When you select the low limit for a potential P value as 40% of the decimal digit count, what would that be? Let's take a simple example: 40% of a 15-digit number would be 6 digits. My assumption is that it would be 100,000, the lowest 6-digit number. If I wanted to ensure that this was an odd number, would that change to 99,999 or to 999,999? 99,999 is not a 6-digit number, but 999,999 is certainly greater than the minimum 6-digit number of 100,000. Which should be used as the limit to match what RSA would generate for encryption?

To calculate this in binary, consider that the largest 15-digit number, 999,999,999,999,999, is 0x3,8D7E,A4C6,7FFF (a 50-bit value). 40% of 50 bits is 20 bits. The lowest 20-bit number is 0x8,0000. If I wanted to ensure that this was an odd number, would that change to 0x7,FFFF or 0xF,FFFF? 0x7,FFFF is only a 19-bit number, but 0xF,FFFF is certainly greater than the minimum 20-bit number of 0x8,0000.

The upper limit of 45% is easier to figure out. Just create the maximum number for that digit count (or bit count): increment the count, create the number as the base raised to that power, then decrement the result by 1 (it will always be odd, either 99999... or FFFFF...). Dave.
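One way to pin the question down: under the reading that the low limit must keep its digit count, the smallest odd 6-digit number is 100,001 (100,000 + 1), neither 99,999 nor 999,999; likewise the smallest odd 20-bit value is 0x8,0001. A sketch of that interpretation in C with GMP follows; the rounding of 40%/45% to whole digit counts is my assumption, not something the post specifies.

/* One interpretation of the 40%/45% P-limits for a D-digit semiprime
 * (the rounding to whole digit counts below is an assumption):
 *   low  = smallest odd number with ~40% of D digits
 *   high = largest number with ~45% of D digits (999...9, always odd) */
#include <stdio.h>
#include <gmp.h>

void p_limits(mpz_t low, mpz_t high, unsigned D)
{
    unsigned lo_digits = (unsigned)(0.40 * D + 0.5);   /* 15 -> 6 */
    unsigned hi_digits = (unsigned)(0.45 * D + 0.5);   /* 15 -> 7 */

    /* smallest lo_digits-digit number is 10^(lo_digits-1); adding 1
     * makes it odd without losing a digit: 100,000 -> 100,001 */
    mpz_ui_pow_ui(low, 10, lo_digits - 1);
    mpz_add_ui(low, low, 1);

    /* largest hi_digits-digit number: 10^hi_digits - 1 = 999...9 */
    mpz_ui_pow_ui(high, 10, hi_digits);
    mpz_sub_ui(high, high, 1);
}

int main(void)
{
    mpz_t low, high;
    mpz_inits(low, high, NULL);
    p_limits(low, high, 15);
    gmp_printf("low = %Zd, high = %Zd\n", low, high); /* 100001 9999999 */
    mpz_clears(low, high, NULL);
    return 0;
}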

#5 PIEBALDconsult

            It seems you know more about it than I do.

#6 Member 4194593

Thank you for the reply. I guess I'll just select the smallest low number to guarantee that I get most of the created keys. I already plan to check that low limit, and if I find that the low limit value is above the solution value, I will just use it as a high limit and use the value 1 as the low limit. Dave.
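A literal sketch of that fallback, with hypothetical names ("solution" here is whatever target value the search range is checked against):

#include <gmp.h>

/* If the chosen low limit is already above the target value, reuse it
 * as the high limit and restart the search range at 1. */
void adjust_limits(mpz_t low, mpz_t high, const mpz_t solution)
{
    if (mpz_cmp(low, solution) > 0) {
        mpz_set(high, low);  /* old low limit becomes the high limit */
        mpz_set_ui(low, 1);  /* low limit drops to 1 */
    }
}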
