searching and order of algorithm [modified]
-
Alan, with O(n lg(n)) time to sort, doesn't that use up all of the time allotted (the overall time was to be O(n lg(n)))? Wouldn't a B+ tree sort speed that up somewhat and leave a little time for the combinatorial arithmetic? With a B+ tree (an n-ary tree) you eliminate many wrong values at each step instead of just two, as in a binary tree. Once the array of values was sorted, the B+ tree search would also find the correct value quicker than the O(n lg(n)) it would take using a binary search for the second value. Dave.
Big O notation is a measure of the execution time/complexity relative to the input size. Hence, O(n log(n)) means that the complexity, in whatever unit, is equal to C * n * log(n), where C is a constant. If you add an extra statement to your inner loop, you're increasing C without increasing the Big O complexity. We say it this way because all we're trying to measure is how much longer it would take if, say, we doubled the input size. For an O(n) algorithm, doubling the input will double the complexity. So in essence... O(n log(n)) + O(n log(n)) = O(n log(n))
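For concreteness, here is a minimal sketch of that doubling intuition (Python, purely illustrative; the constant C is made up):
<pre>
import math

C = 3.0  # some hidden per-operation constant; Big O deliberately ignores it

def predicted_cost(n):
    """Work predicted for an O(n log(n)) algorithm: C * n * log(n)."""
    return C * n * math.log2(n)

for n in (1_000, 2_000, 4_000, 8_000):
    # Ratio of cost(n) to cost(n/2): C cancels out, which is exactly why
    # Big O drops constants; doubling n slightly more than doubles the work.
    print(n, round(predicted_cost(n) / predicted_cost(n // 2), 2))
</pre>
The ratio comes out the same no matter what value C has, which is the whole point of dropping constants.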
Proud to have finally moved to the A-Ark. Which one are you in? Developer, Author (Guardians of Xen)
-
O(IC) ;)
Luc Pattyn
I only read code that is properly indented, and rendered in a non-proportional font; hint: use PRE tags in forum messages
-
:)
Proud to have finally moved to the A-Ark. Which one are you in? Developer, Author (Guardians of Xen)
-
I understand Big O, but in this case, after the sort, for each item, you do a binary search on the array looking for a value that when added to the initial value equals some X. Anything you do to speed up this process will help more than just removing some instruction from the inner loop. You have just added another (n (n log(n))) timeslice to the inner loop, not just some constant. Dave.
-
But you're not doing a binary search on each item. You're doing one sort and one binary search. Think of it this way... Say the sort and the next operation were each O(n)... Obviously they wouldn't be, but as an example. Then you're doing an O(n) followed by an O(n), or 2 * O(n). But in Big O, we eliminate constants, so it simplifies to just O(n)... If the two operations are of different complexity, we take the larger one... If it was O(n log(n)) followed by O(n), we'd say the Big O was O(n log n).
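A tiny sketch of the "followed by" point (Python; closest_pair_gap is just an invented example showing two sequential phases, not anything from this thread):
<pre>
def closest_pair_gap(values):
    """Sort, then one linear scan: O(n log(n)) followed by O(n).
    The costs add, so the total is still O(n log(n)).
    Assumes at least two values."""
    data = sorted(values)                       # phase 1: O(n log(n))
    return min(b - a                            # phase 2: O(n), one pass over adjacent pairs
               for a, b in zip(data, data[1:]))
</pre>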
Proud to have finally moved to the A-Ark. Which one are you in? Developer, Author (Guardians of Xen)
-
Ian Shlasko wrote:
You're doing one sort and one binary search.
But you are not doing that; you are doing the binary search n times, looking for a pair that adds up to the constant X. Dave.
-
1. Sort the array in ascending order. (This is O(n lg(n)).)
2. Use two integer indexes, i & j, that point to the array elements to add. The first one (i) starts at 0 (the index of the lowest number).
3. Do a binary search in the array to find the element (j) that is closest to x when added to the element at index 0. If array[i] + array[j] == x, we're done.
4. Loop: Advance i to the next element.
5. While array[i] + array[j] > x, decrease j. If array[i] + array[j] == x, we're done.
6. When array[i] + array[j] < x, go to Step 4. When i >= j, there's no solution and we're done.
I agree with your algo; however, I would rephrase it in a more symmetric way, making the O(n) part clearer:
1. Sort the array in ascending order
2. let int i=0 and int j=n-1
3. if i>=j then search is over
4. calculate sum=array[i]+array[j]
5. if sum<x, increment i and goto (3)
6. if sum>x, decrement j and goto (3)
7. since sum==x, solution found (either stop; or increment i, decrement j and goto 3)
(1) is O(n ln(n)); (2)-(7) is O(n), as i and j move towards each other in steps of 1; hence overall O(n ln(n)) :)
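For concreteness, a minimal Python sketch of steps (1)-(7) (the names array and x follow the thread; returning the pair, or None when there is none, is an assumption):
<pre>
def find_pair_summing_to(array, x):
    """(1) is O(n log(n)); (2)-(7) is O(n), since i and j only move
    toward each other one step at a time; overall O(n log(n))."""
    array = sorted(array)            # (1) sort ascending
    i, j = 0, len(array) - 1         # (2)
    while i < j:                     # (3) search is over when the indexes meet
        s = array[i] + array[j]      # (4)
        if s < x:                    # (5) sum too small: advance i
            i += 1
        elif s > x:                  # (6) sum too big: retreat j
            j -= 1
        else:                        # (7) array[i] + array[j] == x
            return array[i], array[j]
    return None                      # no two elements add up to x

print(find_pair_summing_to([8, 3, 5, 1, 9], 12))   # prints (3, 9)
</pre>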
Luc Pattyn
I only read code that is properly indented, and rendered in a non-proportional font; hint: use PRE tags in forum messages
-
Ian Shlasko wrote:
You're doing one sort and one binary search.
But you are not doing that; you are doing the binary search n times, looking for a pair that adds up to the constant X. Dave.
Sorry, mixed it up a bit, but I'm still right about the complexity. A binary search is O(log n)... and we'd be doing that n times... Hence, O(n log(n)) for all of the searches. But this is still done AFTER the sort, not inside it, so it adds to the sort instead of multiplying with it.
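A hedged sketch of that count (Python, using bisect; just one way to do a binary search per element, not a quote of anyone's actual code):
<pre>
import bisect

def find_pair_with_binary_search(values, x):
    """Sort once, then one binary search per element:
    O(n log(n)) + n * O(log n) = O(n log(n)); the two terms add."""
    data = sorted(values)                            # O(n log(n))
    for i, v in enumerate(data):                     # n iterations...
        k = bisect.bisect_left(data, x - v, i + 1)   # ...each an O(log n) search past index i
        if k < len(data) and data[k] == x - v:
            return v, data[k]
    return None
</pre>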
Proud to have finally moved to the A-Ark. Which one are you in? Developer, Author (Guardians of Xen)
-
You had it right the first time -- Only one binary search is done. It's done to find the upper bound of number pairs that could possibly add up to x.
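If it helps, that single bound-finding search (step 3 of the quoted algorithm) could look something like this with bisect (an illustrative reading; initial_upper_index is a made-up name):
<pre>
import bisect

def initial_upper_index(array, x):
    """One binary search on a sorted array: the largest index j with
    array[0] + array[j] <= x. Elements past j are too large to pair
    even with the smallest element, so they can't be part of any pair
    that adds up to x."""
    return bisect.bisect_right(array, x - array[0]) - 1
</pre>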