Running out of Memory - Maths Check
-
I had some time to get an estimate for the accumulation of the precision-based error, and found that its upper limit is about machine_precision * 4^iterations, while the average error would be around machine_precision * 2^iterations. That means the maximum error approaches 1 after roughly 26 iterations for double values (53-bit mantissa), and the average error approaches 1 after roughly 52 iterations.

If you intend to store intermediate results, you might consider approaching this problem from the other side:

1. Partition your intervals according to your machine precision, e.g. 2^16 x 2^16 points.
2. For each point, store 1 if after one iteration the divergence criterion is met, or 0 if not. This is the iteration counter.
3. Repeat for each point with a value of 0:
   3.1 Convert the result after one iteration to the resolution of your point array (i.e. check which pixel is closest).
   3.2 Check the iteration counter for the point corresponding to that value: if it is not 0, store this value + 1. Else your counter is still 0.
   3.3 When you're done with all points that still have a counter of 0, start all over again, for at most as many times as appropriate to your initial resolution (e.g. no more than 16 times if you started at 2^16 x 2^16).

This is a reversion of the actual algorithm, deliberately adding in the error (in step 3.1) that you would otherwise get when calculating at a given resolution (in this case 16 bit).
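As a quick sanity check of the iteration thresholds quoted at the top of this post, here is a small sketch (mine, not from the original post) that simply solves machine_precision * 4^n = 1 and machine_precision * 2^n = 1 for IEEE-754 doubles:

```python
# Sanity check of the error-growth thresholds, assuming IEEE-754 doubles.
# eps * 4**n reaches 1 when n ~ 26; eps * 2**n reaches 1 when n ~ 52.
import math
import sys

eps = sys.float_info.epsilon          # 2**-52 for IEEE-754 doubles
worst_case = math.log(1.0 / eps, 4)   # solve eps * 4**n = 1 for n
average = math.log(1.0 / eps, 2)      # solve eps * 2**n = 1 for n
print(f"worst-case error reaches ~1 after about {worst_case:.0f} iterations")
print(f"average error reaches ~1 after about {average:.0f} iterations")
```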
-
From here: http://mrob.com/pub/muency/accuracy.html

The commonly-seen views of the Mandelbrot Set have been challenged by arguments based on techniques like error-term analysis (see Propagation of uncertainty). They show that if you want to get an accurate view of a lemniscate Z_N you need to use at least N digits in all your calculations. The result is that most views of the Mandelbrot Set that people see on computers are (in theory) completely inaccurate or even "fictitious". However, except for certain specific purposes (notably using iteration to trace an external ray), the problem is much smaller. The overwhelming amount of experimental evidence shows that ordinary Mandelbrot images (plotting e.g. the dwell for each point on a pixel grid) are indistinguishable from equivalent results produced by exact calculation. The images look the same to the human eye regardless of how many digits you use, as long as the number of digits is sufficient to distinguish the coordinates of the parameter value C. This is because the roundoff errors added by each step in the iteration tend to cancel out as they would if randomly distributed, rather than systematically biased in one certain direction. See Systematic error, "Systematic versus random error".

Not that this means the discussion about errors is without value, especially since the above explanation of why you might choose to ignore it relies on an assumed internal cancellation of errors through randomness. Indeed the above validates the discussion, but perhaps helps put into context how much effort one might want to put into higher accuracy vs., say, better functionality.

Mike
Interesting. I have to admit I was a bit doubtful of my own line of argumentation, since I've seen incredibly detailed pictures of the Mandelbrot Set as early as 25 years ago, and I doubt many of them (if any) were calculated using arbitrary precision. Nor did their smoothness indicate anything even close to the error levels that error propagation theory would suggest. Then again, I've seen some fixed-point 16-bit implementations that were clearly useless for anything but calculating the full picture (at a low resolution, preferably) - zooming in pretty quickly revealed the huge errors this limited accuracy produced.

In any case, you should make sure that when you zoom in to some interesting area, your precision is still some orders of magnitude finer than the distance between pixels, or else you'll get the same kind of artifacts I mentioned in the previous paragraph.

P.S.: I considered how to model the cancelling out: the error based on machine precision has a uniform distribution. Iterating this calculation is like adding independent variables (up to 4 times in one iteration step), resulting in a distribution that looks more like a normal distribution. The expected error will be 0 on average, but what is of greater interest is the likelihood that the error exceeds some value that makes the result unusable (an error about the size of 1, or greater). Unfortunately my knowledge of probability theory is somewhat rusty, but I suppose if you can determine that likelihood and it is on the order of no more than 1/(number of pixels), then you should still get good results for visualization purposes.
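To put a rough number on that likelihood empirically rather than analytically, here is a minimal sketch (my own, not from the thread): it computes the dwell for a random sample of C values in both single and double precision and reports how often the two disagree. The sample region, sample size, iteration cap and escape radius of 2 are arbitrary choices.

```python
# A rough empirical check of the error-cancellation argument: compute the
# dwell (escape iteration) for a random sample of points in complex64 and
# complex128 and count how often the two precisions disagree.
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 10_000
MAX_ITER = 1_000

# Random parameter values C in a rectangle around the set.
c = (rng.uniform(-2.0, 0.5, N_SAMPLES)
     + 1j * rng.uniform(-1.25, 1.25, N_SAMPLES))

def dwell(points, dtype):
    """Escape-time per point, iterating z -> z*z + c in the given dtype."""
    cs = points.astype(dtype)
    z = np.zeros_like(cs)
    counts = np.full(cs.shape, MAX_ITER, dtype=np.int32)
    active = np.ones(cs.shape, dtype=bool)
    for i in range(MAX_ITER):
        z[active] = z[active] * z[active] + cs[active]   # iterate only live points
        escaped = active & (np.abs(z) > 2.0)
        counts[escaped] = i
        active &= ~escaped
        if not active.any():
            break
    return counts

d32 = dwell(c, np.complex64)
d64 = dwell(c, np.complex128)
print(f"dwells that differ between precisions: {np.mean(d32 != d64):.3%}")
```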
-
So, a quick maths check. If I want to store the results of an iterative algorithm for every pixel in an image (fractals!), then I think it's a dumb thing to start with, but want to check before I even start bothering to write the class that would store the results.

Assume image size is 1000 x 1000, so 1 million pixels.
The algorithm is an iteration of
Z(n+1) = Z(n)^2 + Z(0)
up to a maximum of, say, 10,000 times.
Each iteration results in a complex number.
A complex number is a structure with 2 doubles (let's assume it occupies 128 bits for simplicity).
Theoretically this could result in 10,000 x 1,000,000 complex number instances.
This is 1 x 10^10 instances, at 128 bits each, which is 1.28 x 10^12 bits.
Which is:
1.6 x 10^11 bytes
1.56 x 10^8 K bytes
152,588 M bytes
149 G bytes

That would seem a dumb thing to do. But I'm assuming that my maths is correct and there aren't memory optimisations I know nothing about.

Mike
-
Try doing the computation 1 pixel at a time, and discarding the results from all the iterations except 0, n-1 and n, where n is the total number of iterations in the computation so far. This would reduce the memory used by complex numbers to just 48 bytes!

EDIT: Probably not 48 bytes, but 3 complex numbers need to be stored. This could be any amount, depending on the precision used.
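A minimal sketch of that per-pixel approach (my own illustration, not the poster's code), using the 1000 x 1000 grid and 10,000-iteration cap from the question, an assumed escape radius of 2 and an assumed plot region: nothing persists per pixel except one iteration count, so the stored output is about 4 MB instead of ~149 GB. It is unoptimized pure-Python looping, but it shows the memory shape of the idea.

```python
# A sketch of the per-pixel approach suggested above: iterate one pixel at a
# time, keep only Z0 (the pixel's C value) and the current Z, and store just
# the final iteration count. Grid size, iteration cap, escape radius and the
# plotted region are assumptions for illustration.
import numpy as np

WIDTH, HEIGHT, MAX_ITER = 1000, 1000, 10_000

def dwell_grid(x_min=-2.0, x_max=0.5, y_min=-1.25, y_max=1.25):
    # One int32 per pixel: about 4 MB for 1000 x 1000, instead of storing
    # every intermediate complex value (~149 GB as calculated above).
    counts = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for row in range(HEIGHT):
        for col in range(WIDTH):
            c = complex(x_min + (x_max - x_min) * col / (WIDTH - 1),
                        y_min + (y_max - y_min) * row / (HEIGHT - 1))
            z = c                      # Z0 = C, as in Z(n+1) = Z(n)^2 + Z(0)
            n = 0
            while n < MAX_ITER and (z.real * z.real + z.imag * z.imag) <= 4.0:
                z = z * z + c          # only the running Z (plus C) is kept
                n += 1
            counts[row, col] = n
    return counts
```

At any moment only a handful of complex values (Z0 and the running Z here) exist, and only transiently inside the inner loop; none of them needs to be stored per pixel.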