Algorithm Complexity
-
Bassam Abdul-Baki wrote:
Is anyone aware of such a method?
I generally use a profiler. If you really want a number, test it: run a million calls of a given operation, time it, and come up with a numerical scale for each operation based on the time it takes to call it. Now you have a good number.
_________________________ Asu no koto o ieba, tenjo de nezumi ga warau. Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
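A minimal sketch of that million-call approach in Python, using the standard timeit module (the particular operations timed here are just illustrative choices, not from the post):

```python
import timeit

N = 1_000_000  # one million calls of each operation

# Time N executions of each operation in isolation.
t_add = timeit.timeit('a + b', setup='a, b = 1.5, 2.5', number=N)
t_mul = timeit.timeit('a * b', setup='a, b = 1.5, 2.5', number=N)
t_sin = timeit.timeit('sin(a)', setup='from math import sin; a = 1.5', number=N)

# Numerical scale: express each cost relative to addition.
scale = {'+': 1.0, '*': t_mul / t_add, 'sin': t_sin / t_add}
```

The absolute times are machine-dependent; it is the ratios that give you the per-operation weights the original question asks about.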
Profilers help some, but they are not perfect. The easiest way to tell which of two algorithms is going to be faster, assuming you don't already have a profiler installed (and sometimes even then), is to write a program that calls both alternatives and records the time before the first, between the two, and at the end of its run. Have the test run a million iterations of each case, or maybe 10,000 if the program is taking too long. The wall-clock approach gets you a good rough estimate in all cases, and in a few cases it gives you better results. Memory caching often confuses profilers; only a true test on real data using wall-clock time will let you overcome these problems. You can spend a great deal of time getting your cache architecture described to the profiler, but you'll still have to run a wall-clock test to verify that your profile is configured correctly. If you haven't already done all this, maybe a wall-clock test is good enough.
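That wall-clock harness might look like this in Python (alternative_a and alternative_b are hypothetical stand-ins for whichever two implementations you are comparing):

```python
import time

def alternative_a(n):
    # e.g. sum of squares via a generator expression
    return sum(i * i for i in range(n))

def alternative_b(n):
    # the same computation via a list comprehension
    return sum([i * i for i in range(n)])

ITERATIONS = 10_000  # drop this if the run takes too long

start = time.perf_counter()           # time before
for _ in range(ITERATIONS):
    alternative_a(100)
middle = time.perf_counter()          # time between the alternatives
for _ in range(ITERATIONS):
    alternative_b(100)
end = time.perf_counter()             # time at the end of the run

time_a = middle - start
time_b = end - middle
```

Run it on the real data and the real machine; that is the whole point of the wall-clock approach.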
-
It depends how those functions are implemented. Note that they might well be single instructions on the processor (and most of them are: FCOS, FSIN, FSINCOS, FPTAN, FSQRT on x86), but the processor itself might implement them as a sequence of smaller operations. It matters a bit more on a device that doesn't implement floating point in hardware; for example, Pocket PCs have to use software FP. That said, because of the vast speed difference between the CPU and even the on-board caches, you can execute many, many floating-point operations in the time it takes to load from memory. If you need speed, look at your access pattern before the theoretical complexity of your algorithm.
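A rough Python illustration of the access-pattern point (the effect is far more dramatic in native code, where the cache hierarchy dominates; this sketch only shows that the two traversals do identical arithmetic and differ only in access order):

```python
import time

SIZE = 500
grid = [[1] * SIZE for _ in range(SIZE)]

start = time.perf_counter()
row_sum = sum(x for row in grid for x in row)  # sequential: follows the layout
t_row = time.perf_counter() - start

start = time.perf_counter()
col_sum = sum(grid[i][j] for j in range(SIZE)
              for i in range(SIZE))            # strided: jumps between rows
t_col = time.perf_counter() - start

# Identical arithmetic, identical result; only the access pattern differs.
```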
DoEvents: Generating unexpected recursion since 1991
-
Is there a known method for comparing different algorithms? What we're looking for is a way to assign weights to the different operators (+, -, *, /) and functions (sin, cos, tan, cot, sqrt) and see how they compare to each other. Is anyone aware of such a method? Thanks
"Religion is assurance in numbers." - Bassam Abdul-Baki Web - Blog - RSS - Math - LinkedIn - BM
A method for comparing different algorithms is "algorithm analysis". Basically it is a five-step process:

Step 1. Determine the input size of the algorithm, e.g. an array[20] or array[n].

Step 2. Identify a basic operation that is performed at each step of the algorithm. For example:

    for i = 0 to n-2 do
        for j = i+1 to n-1 do
            if A[i] == A[j] return sin(A[i]);
    return false;

The basic operation could be a comparison, an assignment, or an addition.

Step 3. Check whether the worst case and the best case are the same, i.e. whether the running-time complexity is the same. Time complexities are expressed as one of the following: Big-Oh, Big-Omega, or Big-Theta.

Step 4. Create a mathematical summation for the number of times the algorithm's basic operation is executed.

Step 5. Using standard formulas and rules of sum manipulation, either find a closed formula for the count or, at the very least, establish its order of growth.

In the example above the basic operation is the comparison. The worst case occurs when there are no equal elements, or when the last two elements are the only pair of equal elements, so compute for the worst case. You have two nested sums (note: lack of math notation): the outer from i=0 to n-2 and the inner from j=i+1 to n-1, with summand 1 because the basic operation executes once per iteration. The sum works out to n(n-1)/2: in the worst case the algorithm needs to compare all n(n-1)/2 distinct pairs of its n elements. This algorithm therefore has a running-time complexity of Big-Theta(n^2). So just give your boss a time complexity: Big-Oh, Big-Omega, or Big-Theta.
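The count from Step 5 can be checked empirically. A Python sketch of the post's duplicate-check example, instrumented to count comparisons (the function name and return convention are my own):

```python
import math

def find_duplicate(A):
    """Quadratic duplicate check from the post; also counts comparisons."""
    comparisons = 0
    n = len(A)
    for i in range(n - 1):           # i = 0 .. n-2
        for j in range(i + 1, n):    # j = i+1 .. n-1
            comparisons += 1
            if A[i] == A[j]:
                return math.sin(A[i]), comparisons
    return False, comparisons

# Worst case: all elements distinct, so every distinct pair is compared.
result, count = find_duplicate(list(range(10)))
# For n = 10, the count should equal n(n-1)/2 = 45.
```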
-
Hmm, not really possible. It is highly implementation dependent and hardware dependent. For example, tan can require a lot more calculation than sqrt, but depending on the mix of operations you could get vastly different results. Say your CPU has 2 FPUs and 4 integer units: the algorithm that runs best on that hardware could be vastly different from the one that runs best on a system with 4 FPUs and 2 integer units. The ability of the CPU to do more than one calculation at a time has caused typical complexity calculations to lose a lot of their meaning. With the right mix you might see an order-of-magnitude improvement in an algorithm, or one extra multiply needed for the next step might force each step of the algorithm to take 2 cycles rather than 1, for example. You could have an algorithm whose if-statement branches are really hard to predict; this would kill a P4, which has a really long instruction pipeline, where branching the wrong way can cost 10 times as much as a calculation.

You could get an idea of the number of multiplies, adds, branches, etc. needed, but how that translates to the hardware will change drastically. Even when you think you know what the algorithm you coded was, that doesn't mean the compiler compiled it that way: it could have inlined parts, moved operations around, etc. The only way you'd know for sure what instructions get sent to the CPU would be to code in assembly. But even then, many modern CPUs do things like branch prediction and out-of-order execution, so what actually ends up running on the hardware can be vastly different from what you coded.
-
If you are using C++ or another language that compiles to native code, then you could try another approach: instruction count. Essentially, you try different algorithms and see what assembler code the [optimizing] compiler comes up with; the more instructions, the more complexity. While using this to measure performance has its drawbacks, it's fine for complexity. Assuming a good compiler generates instructions effectively, if one algorithm takes triple the instructions of another, you know it is much more complex. That being said, the highest-performing search and sort algorithms are the most complex. Hope this technique helps. (Of course, we could have helped more if we knew what the purpose of the complexity assessment was: help with maintenance, memory requirements, or raw speed? They each call for different coding strategies. Complexity measurement is only a means to an end, so what's the end in mind?)
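In a managed language the analogous trick is to count the instructions the compiler emits. A Python sketch using the standard dis module (an analogy to inspecting assembler output, not something from the post; the two functions are hypothetical examples):

```python
import dis

def sum_loop(xs):
    """Explicit accumulation loop."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    """Same computation delegated to the builtin."""
    return sum(xs)

# Count emitted bytecode instructions as a rough proxy for complexity.
count_loop = len(list(dis.get_instructions(sum_loop)))
count_builtin = len(list(dis.get_instructions(sum_builtin)))
```

As the post warns, more instructions does not automatically mean slower; here it only means the loop version is the more complex piece of code.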
-
Off the top of my head, if you are only talking about +, -, *, /, sin, cos, tan, cot, and sqrt, maybe you could break it down as follows. Computers fundamentally only know how to add: they achieve other arithmetic operations, such as subtraction, by flipping bits to get the two's complement and then adding the result to the first number. Calling trigonometric functions like sin and cos, for example in C#, involves calling methods of the Math class (e.g. Math.Sin). So maybe you can say: for arithmetic operations, an average weight of 1 can be used, due to the relatively small number of instructions executed to accomplish the task. Calling a method of the Math class involves pushing the current stack frame and branching to the instruction address of the method, which may itself cause other functions to be called, whose end result is a number of arithmetic operations being executed. So say you give trigonometric function calls a weight of 10, because pushing the stack, executing the instructions and function calls, and then popping the stack to return to the caller is very expensive machine-cycle-wise.

For example, say you are calculating some kind of transform that involves both arithmetic and trigonometric functions, like "1 + Math.Sin(x)": you would assign a weight of 1 to the addition and add a weight of 10 for calling the trigonometric function sin. Of course, in the case of sin, cos, tan, and cot, to be more precise each function would have its own weight, since each causes a different number of instructions to be executed; sin and cos would be pretty close in instruction count, while cot would add a weight of one for the arithmetic operation of dividing one by the tangent. It all depends on how precise you want to be. You could drive yourself crazy trying to be exact with these metrics.

I personally like the simple route of just saying arithmetic operations have an average weight of one and trigonometric functions have an average weight of ten. Of course this is just me speaking extemporaneously; maybe somebody else can tell you a better way of doing this.
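That weighting scheme is easy to mechanize. A Python sketch (the weights 1 and 10 come from the post; the tokenizer and function name are my own simplifications):

```python
import re

# Weights from the post: arithmetic operators cost 1, trig calls cost 10.
WEIGHTS = {'+': 1, '-': 1, '*': 1, '/': 1,
           'sin': 10, 'cos': 10, 'tan': 10, 'cot': 10}

def weighted_cost(expr):
    """Estimate the cost of an expression string by summing token weights."""
    tokens = re.findall(r'[a-z]+|[+\-*/]', expr)
    return sum(WEIGHTS.get(tok, 0) for tok in tokens)

cost = weighted_cost("1 + sin(x)")
# '+' contributes 1 and 'sin' contributes 10, so cost == 11
```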
-
Hi, maybe there is a way in between: count the calls to non-trivial functions, like the trigonometric ones, in each algorithm, and then time some iterations of each of those calls alone so you can get an estimate of the weight of each function call. This way you might not get a result like "A has O(n^2) and B has O(3n^2)", but rather "A's complexity is significantly higher than B's" or "both algorithms are about equal in complexity". Maybe put some tech talk on top of it, noting that real-life complexity also depends on the platform used, etc. :-)
I think it will be quite instructive to run algorithms A and B with progressively increasing n. That way, you will be able to regress the relationships appropriately, which makes it possible to derive an empirical running time of the form a*n^b for each. taintransit
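A sketch of that regression in Python (a hypothetical example: rather than noisy timings, it counts the basic operations of a quadratic duplicate check at increasing n and fits the exponent b as the slope on a log-log scale):

```python
import math

def comparisons_for(n):
    """Worst-case comparison count of a quadratic duplicate check."""
    A = list(range(n))  # all elements distinct -> worst case
    count = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            count += 1
            if A[i] == A[j]:
                return count
    return count

sizes = [100, 200, 400, 800]
counts = [comparisons_for(n) for n in sizes]

# Slope of log(count) vs log(n) estimates the exponent b in a*n^b.
b = (math.log(counts[-1]) - math.log(counts[0])) / \
    (math.log(sizes[-1]) - math.log(sizes[0]))
# b comes out close to 2, confirming quadratic growth
```

With wall-clock times instead of operation counts the same fit works, but expect noise from caching and other processes, as earlier replies point out.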
-
Seems to me that the easiest way to compare the complexity of two different algorithms is to count the number of operations (with no regard to their weight) required to process an input of a million records or more (one record never works out...). This is the more academic approach, and it breaks down if the algorithm that seems less complex uses a subroutine that increases the actual complexity by a significant magnitude, but for academic purposes it's just fine. If you're looking for a more engineering approach, you could time the processing of the same million-record input with the different algorithms and compare the times. The same approach works for timing individual operations: though there will be some deviation from the other operations in the timing commands, the deviation should stay constant (provided the machine is not processing any other tasks in parallel, and the same machine is used for all testing).
-
Relative weights will vary by CPU architecture. Depending on your data set's size and linearity, cache sizes will also come into play. Your best bet is to compare actual runtimes, iterating enough that timer noise isn't an issue.
TimerA.Start()
for (int i = 0; i < bigNum; i++)
    DoAlgorithmA()
TimerA.Stop()

TimerB.Start()
for (int i = 0; i < bigNum; i++)
    DoAlgorithmB()
TimerB.Stop()

if (TimerA.Time > TimerB.Time)
    Print("A is slower");
else
    Print("B is slower");

-- If you view money as inherently evil, I view it as my duty to assist in making you more virtuous.
If you choose to test performance using this type of metric, use System.Diagnostics.Stopwatch rather than the timer. It is more accurate for such purposes; you can measure all the way down to clock ticks.
Life should NOT be a journey to the grave with the intention of arriving safely in an attractive and well preserved body, but rather to skid in sideways, burger in one hand, drink in the other, body thoroughly used up, totally worn out and screaming "WOO HOO......What a ride!"