Code Project
An Algorithm's running time

The Lounge
question
15 Posts 7 Posters
Vijay Rajanna
#1

Hi, since I couldn't find a suitable technology forum on CP for my question, I felt the Lounge was the right place to post it. My question is this: I'm finding it really hard to understand what it literally means when "an algorithm's running time is logarithmic". How can this be explained in layman's terms? What I understand so far is: when the running time is linear, the running time increases proportionally with the number of inputs; when the running time is quadratic (i.e. n²), the running time grows with the square of the number of inputs. So, can someone explain the logarithmic behavior of an algorithm to me? All I know about logarithms is that they are an easy way to express huge numbers (exponents). Any links would be much appreciated. :) Regards, Vijay.

Maximilien
#2

It simply means that the running time grows logarithmically with the input size. Just imagine the curves on a 2D graph (X = data size, Y = theoretical running time). O(N) is a straight line; the running time is proportional to the data size. O(N²) is a curve that grows faster and faster as the data size increases. O(log N) is a logarithmic curve that keeps growing, but its growth rate decreases as the data size increases; i.e. if you increase the data size, the running time only increases by roughly the logarithm of that increase. The best example of an O(log N) algorithm is binary search: its worst case is O(log N).
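For illustration, a minimal binary search sketch in Python (an assumed example, not code from the thread): each step halves the remaining range of a sorted list, so it takes at most about log2(N) steps.

import math

def binary_search(sorted_items, target):
    # Return the index of target in sorted_items, or -1 if it is absent.
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2              # look at the middle element
        if sorted_items[mid] == target:
            print(f"found after {steps} steps; log2(N) is about {math.log2(len(sorted_items)):.1f}")
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                  # discard the lower half
        else:
            hi = mid - 1                  # discard the upper half
    return -1

# Doubling the data size adds only about one extra step in the worst case.
data = list(range(1_000_000))
binary_search(data, 999_999)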

Watched code never compiles.

mav northwind
#3

Hi! Even though the Lounge may not be the perfect place for this question, it's pretty easy to answer. With O(log n), your running time grows with the logarithm of the input size. E.g. if the algorithm takes 1 s for 10 inputs, it will take roughly 2 s for 100 inputs, 3 s for 1000 inputs, and so on.
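A quick numeric sketch of that pattern (Python; the "1 second per factor of 10" scaling is just the assumption from the example above):

import math

c = 1.0  # assumed: 1 second per factor-of-10 growth, as in the example
for n in (10, 100, 1_000, 1_000_000):
    print(f"n = {n:>9,}  predicted time ~ {c * math.log10(n):.0f} s")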

Regards, mav -- Black holes are the places where God divided by 0...

Chris Losinger
#4

• O(1): time to execute is independent of the size of the input
• O(n): time is directly proportional to the size of the input
• O(n²): time is proportional to the square of the size of the input
• O(log n): time is proportional to the log of the size of the input
• O(n!): time is proportional to the factorial of the size of the input (ouch!)

Finding an item in a sorted array using binary search is O(log n). (Or, exactly what Max said...)
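To make those growth rates concrete, a small sketch (Python) that just evaluates the functions themselves; these are operation counts, not measured timings of any particular algorithm:

import math

for n in (10, 100, 1_000):
    factorial = math.factorial(n) if n <= 10 else "astronomically large"
    print(f"n = {n:>5}: O(1) = 1, O(log n) ~ {math.log2(n):.0f}, "
          f"O(n) = {n}, O(n^2) = {n ** 2}, O(n!) = {factorial}")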

image processing toolkits | batch image processing

AspDotNetDev
#5

            There is an "Algorithms" forum, though I don't think this question is too inappropriate for the Lounge.

            Somebody in an online forum wrote:

            INTJs never really joke. They make a point. The joke is just a gift wrapper.

Vijay Rajanna
#6

Maximilien wrote:

O(log N) is a logarithmic curve that keeps growing, but its growth rate decreases as the data size increases; i.e. if you increase the data size, the running time only increases by roughly the logarithm of that increase.

I understand this, so what I can infer from it is: 1) as the number of inputs increases, the increase in running time is not significant; 2) since log(N) < N always, if an algorithm is logarithmic and takes 1 s for 1 input, then for 10 inputs the running time = log(10) = 1, and 1 < 10, which means that in logarithmic algorithms "the running time is always less than the number of inputs". Is my understanding right, or am I misinterpreting? :^)

AspDotNetDev
#7

There is an invisible constant multiplier and a constant overhead. So really it's Runtime = C * log(Inputs) + D. With a large enough input, the "D" constant becomes insignificant, and the "C" constant matters less than it would if the algorithm had a worse-performing growth function (e.g., N vs. log(N)). Look into Big O notation.
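A small sketch of that point (Python; the constants C = 5 and D = 50 are made up for illustration): even with a large multiplier and overhead, C * log2(n) + D is soon dwarfed by a plain n.

import math

C, D = 5.0, 50.0   # made-up constant multiplier and constant overhead

for n in (10, 1_000, 1_000_000):
    log_cost = C * math.log2(n) + D   # "expensive" logarithmic algorithm
    linear_cost = n                   # plain linear algorithm (C = 1, D = 0)
    print(f"n = {n:>9,}: C*log2(n)+D = {log_cost:>7.1f}   n = {linear_cost:>9,}")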

Somebody in an online forum wrote:

INTJs never really joke. They make a point. The joke is just a gift wrapper.

Vijay Rajanna
#8

Also, a common observation with all logarithmic algorithms is that we don't end up processing all of the input elements. E.g. binary search, searching for an element in a binary search tree, etc.
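A sketch of the second example (Python; a hypothetical, minimal binary search tree): a lookup follows a single root-to-leaf path, so in a reasonably balanced tree it touches only about log2(N) of the N nodes and never visits the rest.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Insert key into the BST rooted at root; return the (possibly new) root.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    # Walk one root-to-leaf path; all other nodes are never touched.
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in (50, 25, 75, 12, 37, 62, 88):   # insertion order keeps the tree balanced
    root = insert(root, k)
print(contains(root, 62), contains(root, 99))   # True False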

Lost User
#9

Vijay Sringeri wrote:

1 < 10, which means that in logarithmic algorithms "the running time is always less than the number of inputs".

Sort of right, sort of wrong. What it means is that you will not need to touch each of the inputs (time is really irrelevant).

Computers have been intelligent for a long time now. It just so happens that the program writers are about as effective as a room full of monkeys trying to crank out a copy of Hamlet.

Lost User
#10

When talking about Big O, we do not actually talk about time but about data points (in and out). So O(N) means each input must be touched, whereas O(log N) means we only have to touch about log N of the inputs to get our output. Time is relative: if it takes 10 s to 'touch' an input, then you can calculate how long it will take to compute the output for a dataset of 1 million.
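A quick worked version of that calculation (Python; the 10 s per 'touch' is just the figure from the post above): log2(1,000,000) is about 20, so an O(log N) algorithm needs roughly 20 touches versus 1,000,000 for an O(N) one.

import math

seconds_per_touch = 10                   # figure assumed in the post above
n = 1_000_000

touches_linear = n                       # O(N): touch every input
touches_log = math.ceil(math.log2(n))    # O(log N): about 20 touches

print(f"O(N):     {touches_linear * seconds_per_touch:,} s")   # 10,000,000 s
print(f"O(log N): {touches_log * seconds_per_touch:,} s")      # 200 s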

Computers have been intelligent for a long time now. It just so happens that the program writers are about as effective as a room full of monkeys trying to crank out a copy of Hamlet.

Vijay Rajanna
#11

Collin Jasnoch wrote:

What it means is that you will not need to touch each of the inputs (time is really irrelevant).

Exactly, this is what I was looking for :-), but nobody explains this when doing running-time analysis. Thanks.

Vijay Rajanna
#12

Collin Jasnoch wrote:

we do not actually talk about time but about data points (in and out).

Thanks Collin, this is actually the core of what one should understand. Basically, these notations indicate how many input items need to be processed to get the output. Thanks.

realJSOP
#13

It's like trying to eat a donut in a single bite. The more donut you get in your mouth, the longer it takes you to swallow it all. Mmmmmm.... donuts.

".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
"Why don't you tie a kerosene-soaked rag around your ankles so the ants won't climb up and eat your candy ass." - Dale Earnhardt, 1997

Maximilien
#14

Collin Jasnoch wrote:

whereas O(log N) means we only have to touch about log N of the inputs to get our output.

Only in the best-case scenario; in the worst case, it can be very different, and even worse.

Watched code never compiles.

Lost User
#15

Yes. You must touch log N inputs, and likely a few more... but not N, and certainly not N². The notation comes from looking at the behaviour as n goes to infinity. If F(n) = n^(-1/5) + log(n), then one can say it is of log(n) complexity, because as n gets larger and larger the n^(-1/5) term becomes negligible and the log(n) term outweighs it.
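A quick numerical check of that dominance claim (Python; using the F(n) = n^(-1/5) + log(n) form written above):

import math

def F(n):
    return n ** (-1 / 5) + math.log(n)

for n in (10, 1_000, 1_000_000):
    print(f"n = {n:>9,}: n^(-1/5) = {n ** (-1 / 5):.4f}, "
          f"log(n) = {math.log(n):.2f}, F(n) = {F(n):.2f}")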

Computers have been intelligent for a long time now. It just so happens that the program writers are about as effective as a room full of monkeys trying to crank out a copy of Hamlet.
