Code Project > General Programming > Algorithms
Running out of Memory - Maths Check

24 Posts, 9 Posters
M Badger wrote:

    The colouriser needs all the iterations for a given pixel (= a complex number), since there are n ways to colourise a fractal, some of which manipulate all the values of Zn achieved during an iterative cycle. However, the colouriser doesn't need all iterations for all pixels at the same time, so I am now sending all iterations for one pixel to the colouriser; it can then do whatever it needs, store its necessary data (number of iterations, escape angle, whatever), and then the cycle starts again for the next pixel. I think this is quite close to your suggestion? Mostly the colourisers store their data in List(Of T) objects, since they cope better than arrays with changing dimensions. Mike
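Mike's per-pixel streaming scheme can be sketched roughly as follows. This is a hypothetical Python illustration, not his code (his classes are VB.NET), and `iterate_pixel` / `colourise_pixel` are invented names: the orbit for one pixel is generated, consumed, and discarded before the next pixel starts.

```python
import cmath

def iterate_pixel(c, max_iter=1000, bailout=2.0):
    """Yield successive Zn of Z(n+1) = Zn^2 + Z0 until escape or max_iter."""
    z = complex(0.0, 0.0)
    for _ in range(max_iter):
        z = z * z + c
        yield z
        if abs(z) > bailout:
            return

def colourise_pixel(iterates):
    """Consume one pixel's full orbit; keep only this colouriser's summary data."""
    count, last = 0, complex(0.0, 0.0)
    for z in iterates:          # all iterations for ONE pixel, then discarded
        count += 1
        last = z
    escape_angle = cmath.phase(last) if count else 0.0
    return count, escape_angle  # e.g. iteration count and escape angle

count, angle = colourise_pixel(iterate_pixel(complex(2.0, 0.0)))
print(count)   # 2: the orbit 2, 6 escapes on the second iteration
```

Because the orbit is a generator, peak memory per pixel is one complex value plus whatever the colouriser chooses to keep, rather than the full 10,000-element history.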

    User 9085707 (#14) wrote:

    Mike-MadBadger wrote:

    I am now sending all iterations for one pixel to the colouriser, it can then do whatever it needs, store its necessary data (number of iterations, escape angle, whatever) and then the cycle starts again for the next pixel. I think this is quite close to your suggestion?

    Yes, I was thinking in terms of storing only what is absolutely necessary for each pixel. Good luck. Paul

Jeremy David Thomson wrote:

      You may want to check out www.FractalForums.com. The people who wrote Mandelbulb (Mandelbox?) hang out there and may be a source of programming know-how. I'm user panzerboy there; I've written a few plugins for FractalExtreme. I use that to make zoom movies, which I post on YouTube as CommandLineCowboy. FractalExtreme is somewhat limited in its colouring: it's strictly iteration-count indexing into a 228-element palette table. I often do multiple renders with different palettes and mappings, layering the results. If you check out my latest videos you'll see that you can still accomplish quite a lot with simple iteration-count colouring.

      Your 'average of all iterations' comment intrigues me. In FractalExtreme you can hold Shift and Ctrl and click on the initial Mandelbrot set to see the 'iteration paths'. Values in the M-set spiral inwards around a central point, which would be your average value. The values outside the set with high iteration counts buzz around a point before going outside the window. Again, that average would be the central point, only slightly altered by the last few iterations. So I'm thinking you may not need all the iterations to find the axis of the spirals.

      M Badger (#15) wrote:

      Jeremy David Thomson wrote:

      Your 'average of all iterations' comment intrigues me

      Then I was probably a little misleading. I was just trying to pick an example of why you might want anything other than the classic 'IterationCount / MaxIterations', which is what most folk doing this 'rite of passage' project would recognise. More practical examples might have been standard deviation or other more complex statistical analyses, many (all?) of which have been done by someone at some point, with varying degrees of success and aesthetic value.

      Jeremy David Thomson wrote:

      You may want to check out www.FractalForums.com

      That I have discovered, though I have only scratched the surface at this point. It seems one of the better (or at least more broad ranging) places to go, though there are literally millions of places to get little bits and pieces of useful information. Thanks for your time.

M Badger (original post) wrote:

        So, a quick maths check. If I want to store the results of an iterative algorithm for every pixel in an image (fractals!), then I think it's a dumb thing to start with, but I want to check before I even start bothering to write the class that would store the results.

        Assume the image size is 1000 x 1000, so 1 million pixels. The algorithm is an iteration of Zn+1 = Zn^2 + Z0, up to a maximum of say 10,000 times. Each iteration results in a complex number, and a complex number is a structure with 2 doubles (let's assume it occupies 128 bits for simplicity).

        Theoretically this could result in 10,000 x 1,000,000 complex number instances. That is 1 x 10^10 instances; at 128 bits each that's 1.28 x 10^12 bits, which is 1.6 x 10^11 bytes = 1.56 x 10^8 KB = 152,588 MB = 149 GB.

        That would seem a dumb thing to do. But I'm assuming that my maths is correct and there aren't memory optimisations I know nothing about. Mike
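The arithmetic above checks out; as a quick sanity check (Python used purely as a calculator here):

```python
pixels = 1000 * 1000               # 1000 x 1000 image
max_iter = 10_000
bytes_per_complex = 16             # two 64-bit doubles = 128 bits

instances = pixels * max_iter      # 1 x 10^10 complex instances
total_bytes = instances * bytes_per_complex
print(total_bytes)                   # 160000000000, i.e. 1.6 x 10^11 bytes
print(round(total_bytes / 1024**3))  # 149 (GB), matching the estimate above
```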

        User 3138470 (#16) wrote:

        Howdy. Hmm, is double-precision complex really needed? In your case, e.g., storing everything as ints could reduce the required space 4x. Also, maybe the whole image could be 7-zipped: you could allocate a 1M-int array each time and run 7zip through it. I assume a lot of compression could be achieved that way :-) Dunno though :-)


          Stefan_Lang (#17) wrote:

          Zipping wouldn't work, due to the very nature of fractals: they're all about chaos and unpredictability, whereas compression algorithms are all about detecting patterns and predictability in data sequences. The two don't mesh. Ever.

          P.S.: For the same reason, restricting yourself to lower precision won't work either. In fact, with a precision of only about 55 bits (which I think is the usual length of the mantissa in a double), doing more than about 500 iterations will yield no meaningful data. It may still look nice when visualized (which is usually the point of fractal programs), but you are effectively looking at rounding errors. I would go so far as to say you need an arbitrary-precision math library for more than 100 iterations.
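Stefan's sensitivity point can be seen without any exotic tooling: a perturbation far larger than double roundoff (1e-12 here, against roundoff near 1e-16, but the mechanism is the same) gets amplified to order 1 within a few dozen iterations at a chaotic parameter. A hedged sketch, with c = -1.99 chosen arbitrarily from the chaotic band near -2:

```python
# Iterate z^2 + c for two values of c differing by 1e-12 and watch them diverge.
c1 = -1.99
c2 = -1.99 + 1e-12
z1 = z2 = 0.0
max_gap = 0.0
for n in range(60):
    z1 = z1 * z1 + c1
    z2 = z2 * z2 + c2
    if n >= 50:                      # compare the orbits late in the run
        max_gap = max(max_gap, abs(z1 - z2))

# Both orbits stay bounded (within roughly [-2, 2]) yet end up decorrelated.
print(max_gap > 1e-4)   # True: the 1e-12 perturbation has grown by many orders
```

The same amplification acts on the rounding error introduced at every step of a fixed-precision computation, which is exactly why long iteration tails stop carrying meaningful digits.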


            Stefan_Lang (#18) wrote:

            I'm in a bit of a hurry right now, but I'd like to point out that for 10,000 iterations you need arbitrary-precision math! The rounding errors from each iteration grow exponentially, and so will the size you need to store each resulting value at sufficient precision. Unless, of course, you're fine with effectively looking at rounding errors after a few hundred iterations.


              M Badger (#19) wrote:

              Stefan_Lang wrote:

              for 10000 iterations you need arbitrary precision math

              Good point, well made. << Heads off to investigate >> To be fair, at this point it's more a proof of concept: can I design a set of classes that allow you to plug any fractal set into any colourising method (with any mapping technique to boot)? It's not intended to be a finished product; why would the world need another fractal generator? Thanks, Mike


                Stefan_Lang (#20) wrote:

                I had some time to get an estimate for the accumulation of the precision-based error, and found that its upper limit is about machine_precision*4^iterations, and the average error would be around machine_precision*2^iterations. That means the maximum error approaches 1 after about 28 iterations for double values (55-56 bit mantissa), and the average error approaches 1 after about 56 iterations.

                If you intend to store intermediate results, you might consider approaching this problem from the other side:

                1. Partition your intervals according to your machine precision, e.g. 2^16 x 2^16 points.
                2. For each point, store 1 if after one iteration the divergence criterion is met, or 0 if not. This is the iteration counter.
                3. Repeat for each point with a value of 0:
                   3.1 Convert the result after one iteration to the resolution of your point array (i.e. check which pixel is closest).
                   3.2 Check the iteration counter for the point corresponding to that value: if it is not 0, store this value+1. Else your counter is still 0.
                   3.3 When you're done with all points that still have a counter of 0, start all over again, for at most as many times as appropriate to your initial resolution (e.g. no more than 16 times if you started at 2^16 x 2^16).

                This is a reversion of the actual algorithm, deliberately adding in the error (in step 3.1) that you would otherwise get when calculating at a given resolution (in this case 16 bit).
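Stefan's 28/56 figures follow directly from setting machine_precision*4^n and machine_precision*2^n equal to 1 with his assumed 56-bit mantissa (IEEE 754 doubles actually carry 53 significant bits, which would give roughly 26 and 53). A quick check:

```python
import math

mant_bits = 56                     # Stefan's assumed mantissa length
eps = 2.0 ** -mant_bits            # the corresponding machine precision
n_worst = math.log(1 / eps, 4)     # solve eps * 4**n == 1
n_avg = math.log(1 / eps, 2)       # solve eps * 2**n == 1
print(round(n_worst), round(n_avg))   # 28 56
```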


                  M Badger (#21) wrote:

                  Thanks for spending time on this; I'm going to have to sit down with a large coffee (or 8) before I get my head around it. Separately, I found GNU MP and some .NET wrappers. When I get there I'll have a decent look at both options. Thanks again, Mike
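While evaluating GNU MP wrappers, one stopgap for prototyping is Python's standard-library `decimal`, which provides configurable-precision arithmetic out of the box. A hypothetical escape-time sketch (not Mike's code; `dwell` is an invented name) carrying the real and imaginary parts as Decimals:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100   # working precision in decimal digits; raise for deep zooms

def dwell(cr, ci, max_iter):
    """Escape-time count for c = cr + ci*i, iterating z^2 + c from z = 0."""
    zr = zi = Decimal(0)
    four = Decimal(4)
    for n in range(max_iter):
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        if zr * zr + zi * zi > four:   # |z| > 2: the orbit escapes
            return n + 1
    return max_iter                    # treated as inside the set

print(dwell(Decimal(1), Decimal(0), 100))   # 3: the orbit 1, 2, 5 escapes at step 3
```

Comparing |z|^2 against 4 avoids a square root, which `decimal` would otherwise have to compute at full precision on every iteration.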


                    M Badger (#22) wrote:

                    From here: http://mrob.com/pub/muency/accuracy.html[^]

                    "The commonly-seen views of the Mandelbrot Set have been challenged by arguments based on techniques like error-term analysis (see Propagation of uncertainty). They show that if you want to get an accurate view of a lemniscate ZN you need to use at least N digits in all your calculations. The result is that most views of the Mandelbrot Set that people see on computers are (in theory) completely inaccurate or even 'fictitious'. However, except for certain specific purposes (notably using iteration to trace an external ray), the problem is much smaller. The overwhelming amount of experimental evidence shows that ordinary Mandelbrot images (plotting e.g. the dwell for each point on a pixel grid) are indistinguishable from equivalent results produced by exact calculation. The images look the same to the human eye regardless of how many digits you use, as long as the number of digits is sufficient to distinguish the coordinates of the parameter value C. This is because the roundoff errors added by each step in the iteration tend to cancel out as they would if randomly distributed, rather than being systematically biased in one direction. See Systematic error, 'Systematic versus random error'."

                    Not that this means the discussion about errors is without value, especially since the explanation above for why you might choose to ignore them relies on an assumed internal cancellation of errors through randomness. Indeed it validates the discussion, but perhaps helps put into context how much effort one might want to put into higher accuracy vs., say, better functionality. Mike


                      Stefan_Lang (#23) wrote:

                      Interesting. I have to admit I was a bit doubtful of my own line of argumentation, since I've seen incredibly detailed pictures of the Mandelbrot Set as early as 25 years ago, and I doubt many of them (if any) were calculated using arbitrary precision. Nor did their smoothness indicate anything even close to the error levels that error propagation theory would suggest. Then again, I've seen some fixed-point 16-bit implementations that were clearly useless for anything but calculating the full picture (at a low resolution, preferably); zooming in pretty quickly revealed the huge errors this limited accuracy produced. In any case, you should make sure that when you zoom in to some interesting area, your accuracy is still some orders of magnitude above the distance between pixels, or else you'll get the same kind of artifacts I mentioned in the previous paragraph.

                      P.S.: I considered how to model the cancelling out: the systematic error based on machine precision has a uniform distribution. Iterating this calculation is like adding independent variables (up to 4 times in one iteration step), resulting in a distribution that looks more like a normal distribution. The expected error will be 0 on average, but of greater interest is the likelihood that the error exceeds some value that makes the result unusable (an error about the size of 1, or greater). Unfortunately my knowledge of probability theory is somewhat rusty, but I suppose if you can determine that likelihood and it is on the order of no more than 1/(number of pixels), then you should still get good results for visualization purposes.
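Stefan's cancellation model is easy to simulate: treat each step's rounding error as an independent uniform draw at the scale of double roundoff and accumulate. This only captures the additive random-walk part of his model, not the multiplicative amplification discussed earlier in the thread, and the numbers are purely illustrative:

```python
import random

random.seed(1)                 # fixed seed so the experiment is repeatable
eps = 2.0 ** -53               # roundoff scale for IEEE doubles
n = 10_000                     # number of accumulated rounding steps

walk = 0.0
for _ in range(n):
    walk += random.uniform(-eps / 2, eps / 2)

worst_case = n * eps / 2       # every error pushing the same way
# Random signs cancel: the walk typically sits near eps*sqrt(n), far below n*eps.
print(abs(walk) < worst_case / 10)   # True
```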


                        Edward Giles (#24) wrote:

                        Try doing the computation one pixel at a time, and discarding the results from all iterations except 0, n-1 and n, where n is the total number of iterations in the computation so far. This would reduce the memory used for complex numbers to just 48 bytes! EDIT: Probably not exactly 48 bytes; rather, only 3 complex numbers need to be stored, which could be any amount depending on the precision used.
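Edward's three-value scheme can be sketched as follows (hypothetical names; with 16-byte double-precision complexes, 3 of them give his 48-byte figure per pixel):

```python
def iterate_minimal(c, max_iter=1000, bailout=2.0):
    """Keep only z0, the previous iterate and the current one, whatever the depth."""
    z0 = complex(0.0, 0.0)
    z_prev, z = z0, z0
    for n in range(1, max_iter + 1):
        z_prev, z = z, z * z + c       # discard everything older than z_prev
        if abs(z) > bailout:
            return n, z0, z_prev, z    # escape count plus the three retained values
    return max_iter, z0, z_prev, z

n, z0, z_prev, z = iterate_minimal(complex(1.0, 0.0))
print(n, z_prev, z)   # 3 (2+0j) (5+0j): the orbit 1, 2, 5 escapes at step 3
```

Memory per pixel is constant in the iteration depth, though colourisers that need the full orbit (averages, standard deviations) would have to compute their statistics incrementally inside the loop instead.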
