Generating the Mandelbrot Set (scionofbytes.me)
90 points by scionofbytes on May 31, 2019 | 76 comments



It’s a right of passage to generate the Mandelbrot set. I have fond memories of being handed the Scientific American article introducing it. Apparently you can download a PDF version here: https://static.scientificamerican.com/sciam/assets/media/inl...

As a high school student I was fascinated and implemented it on any machine I had access to. I remember being bored in math class and coding it on my TI-81 calculator. That took a while to render in black and white but it did work!

Great memories. Thanks for posting.


On a side note, the actual saying is "rite of passage". "rite" as in ritual.


Programming in the built-in calculator language was worse than hand-coding assembly language, due to the tedious input method.

I killed the battery by setting the bailout to a ridiculously high value and forgetting about it overnight.

I still have the calculator, but the memory was cleared at one point. I have been tempted to recreate the Mandelbrot program to show my kids, but I can't bring myself to devote that much of my life to it again.


Me too! I think I used a TI-85 or 89 but yup! It was awesome learning how to make them after years of playing with them on Fractint, just zooming around.


I miss Fractint. I spent hours playing with that thing as a kid, with no idea about the math behind it (and no Numberphile to help explain it). So I just zoomed and plotted and got really familiar with which particular SVGA modes I could use and which I couldn't.

I still remember discovering Render To Disk and realizing I could save stuff out and open it up in Rainbow Paint... which has since been lost to the sands of the internet.


> As a high school student I was fascinated and implemented it on any machine I had access to. I remember being bored in math class and coding it on my TI-81 calculator.

In a recent contract I had very little to do for a few weeks and access to a ~9000 core server cluster and a workload-distribution framework...


Wait! Don't stop, tell us the rest of the story...


The story goes much the same way as the others, I suspect, only this one could draw high-res images very fast. I learned the maths behind a dozen or so fractal types and various colouring strategies, and made some very pretty pictures.

Unfortunately as it was on client time and equipment I couldn't take it with me and it lives on only as a piece of sample code attached to the client's distributed processing framework!

But damn - drawing 38400 x 21600 resolution sets in record time was fun :)


I remember it too. I got the algorithm from some programming magazine, without understanding the maths behind it. I went to the local bookstore in my little town to find books on the topic, but they didn't have any. I envy the kids today who have access to so many resources.


Luckily I had a subscription to Scientific American, so when I read that article I immediately wrote it up in the language at hand, Turbo Pascal, which would take overnight to generate a full-screen Mandelbrot image. That was about 1987 or so. It was excellent motivation for young, brash explorers who loved the unknown: the algorithm was just easy enough to get working without too much frustration, yet capable of generating infinitely, beautifully complex views into reality.


Thanks for posting the Sci Am article. I discovered Mandelbrot around 1994 and proceeded to program it in QB. Soon after, I found an old Sci Am issue sitting around, and it was the one with the Mandelbrot! We almost never bought Sci Am, so it was a very pleasant surprise.


> TI-81 calculator

I did one on my Casio FX-7000G. The fun of programming under real[1] constraints!

[1] Yes, yes, it wasn't punch cards or core memory or using a hammer to magnetise rocks...


A rite of passage - yes. Another object to render is the Lorenz attractor - not quite as cool, but interesting in a different way.


I keep reading that the Lorenz attractor can be modeled on an analogue computer, and I want to find an example of how, so I can port it to a synthesizer and listen to it.


Here is my collection of the Lorenz equations implemented in 140 characters of JavaScript: https://www.dwitter.net/h/lorenz My favorite is the flying butterfly with flapping wings.


It's defined as 3 differential equations. So you use some multipliers and summers to calculate those 3 equations and feed those into 3 integrators. The integrator outputs are what you plot on a scope or listen to. Use pots for the variable gains in the equations.

Ahhhhhh analog.
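
For the digital-first crowd, a minimal sketch of those three equations with naive Euler integration (the classic parameters sigma = 10, rho = 28, beta = 8/3 are assumed):

  # Lorenz system: dx/dt = s*(y-x), dy/dt = x*(r-z)-y, dz/dt = x*y-b*z
  s, r, b = 10.0, 28.0, 8.0 / 3.0
  x, y, z = 1.0, 1.0, 1.0      # any non-degenerate starting point works
  dt = 0.01
  points = []
  for _ in range(10000):
      dx = s * (y - x)
      dy = x * (r - z) - y
      dz = x * y - b * z
      x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
      points.append((x, z))    # plotting x against z gives the butterfly

Scale x and y into audio range and feed them to the left and right channels to "listen" to the attractor.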


I could have written every word of your comment and it would have been accurate haha.


I once wrote a PostScript program that generates the Mandelbrot set, tuned it so that it finishes in reasonable time, and sent it to some old laser printer. The next morning the machine finished churning numbers and spat out the familiar picture.

BTW: .ps (PostScript) is a fun little stack-based language, similar to JVM. The document is simply the result of executing the program. You can program it by hand, though I do not recommend anything large.


It's a very fun exercise when you are starting with PostScript... and indeed, unless you are familiar with Forth or another stack-based language, it's a big pain to write. A long time ago I wrote a very short "tutorial" [1] on my blog (which right now looks very bad due to stylesheet changes that didn't translate well). The code for the Mandelbrot set there is broken (due to Google Code dying), but I just moved my local copy to a gist [2]. And the code for the Christmas postcard (with Koch snowflakes and Gosper curves) is here [3]. It is commented, but I'm not sure how useful it is.

[1] https://www.mostlymaths.net/2008/12/quick-postscript-program...

[2] https://gist.github.com/rberenguel/9a5239034cafdeaabdf665444...

[3] https://gist.github.com/rberenguel/38f5106fee2a795be45d6a335...

Edit: having a blog with posts from 10 years ago you want to reference is painful... Broken links, grammar errors, unclear writing, typos...


It was really cool in the early 80s because the printer was often the most powerful computer in the office.

Multi-megabyte memories and fast RISC processors with FPUs were not uncommon.


I used to work in a proprietary concatenative/stack language that was used for the GUI front-end for medical software. I wrote a Mandelbrot in it that used zero variable names, just two functions with a recursive call. It was beautifully terse and dense. That sort of language is really satisfying for that sort of problem.


It's 'similar to [the] JVM' in that the word 'stack' pops up when either is described. In most substantial ways, they are quite dissimilar.


I managed to get an Apple LaserWriter II to render a Mandelbrot set very nicely at 300dpi in reasonable time by tracing the boundaries of a few select level sets: far fewer points to examine, so much faster.


If you want to use a browser to generate the Mandelbrot set I think the best option is a WebGL shader. I found this, but there should be plenty of variants out there as it's a classic example of a parallel problem that benefits immensely from using the GPU.

https://www.shadertoy.com/view/4df3Rn


As the comments on that page indicate, you have problems with float precision pretty quickly. It's hard to intuit just how fast you're shedding bits when you zoom in.

Some quick DDG-ing around suggests that there isn't a great solution for arbitrary-precision GPU numbers, and that trying to force it on to GPUs either loses all the speed advantages it has over CPUs or even just loses to CPUs. Corrections solicited.

(Remember... GPUs aren't "faster" than CPUs. They're different. There are things GPUs are much faster at than CPUs, but there are also plenty of tasks... indeed, most of the ones we actually perform... where CPUs are faster. You can get differentials of 100x-1000x in either direction. That's why we use CPUs.)
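
To make the bit-shedding concrete, here's a rough sketch (assuming a 1024-pixel-wide view of the usual ~3-unit-wide region) of where adjacent pixels stop being distinguishable in float32:

  import numpy as np

  width = 1024                               # assumed viewport width in pixels
  gap = float(np.spacing(np.float32(1.0)))   # smallest float32 step near 1.0
  for zoom in [1e2, 1e4, 1e5, 1e6]:
      pixel = 3.0 / (width * zoom)           # complex-plane distance per pixel
      status = "distinct" if pixel > gap else "pixels collapse"
      print(f"zoom {zoom:.0e}: pixel {pixel:.1e} vs gap {gap:.1e} -> {status}")

On those assumptions, single precision gives out at a zoom of only around 10^4 to 10^5.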


I know GPUs aren't necessarily faster than CPUs, but they are more parallel. The article's graphics didn't work in my browser so I haven't looked at the results other than the rendered images, but there's no arbitrary precision in the linked code. It's an attempt to speed up the rendering using threads, when the GPU would be faster.

I found this version [0] that uses perturbation theory to get deep zoom on the GPU. I haven't seen it before and haven't looked into it, but it seems neat.

[0] https://github.com/munrocket/deep-fractal


"they are more parallel"

They are more parallel for very specific things, most of which go flying out the window when you try to use arbitrary precision floats. GPUs are not parallel arrays of CPUs; they're parallel arrays of what are very stupid and unoptimized execution units from a CPU's point of view. (The "stupid and unoptimized" is to some extent a feature; you can control them deeply, unlike the high performance CPUs, which are to the point where they're not just second-guessing you, they're second-guessing their second guesses. But they don't have a lot of caches, they don't reorder things themselves, etc.)

From the linked project: "Using the perturbation method, we can calculate only one point with an arbitrary precision in CPU and restore the rest of the picture with float precision in GPU."

Now that's where the big wins really are. I Have A Dream (which will never happen) of writing a programming language that makes it easier to mix CPU and GPU execution in some sensible manner.
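
For the curious, a minimal sketch of that perturbation idea (mpmath assumed for the single high-precision reference orbit; the names here are mine, not the linked project's). Writing a nearby point as c0 + dc and its orbit as Z_n + dz_n, the deltas obey dz_{n+1} = 2*Z_n*dz_n + dz_n^2 + dc, which runs fine in ordinary doubles:

  from mpmath import mp, mpc

  mp.prec = 200  # high precision only for the one reference orbit

  def reference_orbit(c0, n):
      # iterate z -> z^2 + c0 at high precision, keep double-precision copies
      z, orbit = mpc(0), []
      for _ in range(n):
          orbit.append(complex(z))
          z = z * z + c0
      return orbit

  def perturbed_escape(orbit, dc):
      # escape count for c0 + dc, computed only with double-precision deltas
      dz = 0j
      for n, Z in enumerate(orbit):
          if abs(Z + dz) > 2:       # z_n = Z_n + dz_n has escaped
              return n
          dz = 2 * Z * dz + dz * dz + dc
      return len(orbit)

(A production version also has to handle "glitches" where the delta grows too large relative to the reference; this sketch skips that.)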


> They are more parallel for very specific things

And the Mandelbrot set is one of those things, at least when you're not doing arbitrary precision which the article isn't doing anyway.

> I Have A Dream (which will never happen) of writing a programming language that makes it easier to mix CPU and GPU execution in some sensible manner.

You need to take care so the synchronization isn't eating your performance wins though. I know you wrote "in a sensible manner", so you probably already thought of that. I don't know if another programming language is what the world needs, but there are libraries available to at least ease the pain.

I recently found "Vulkan compute for people". I haven't tried it yet; maybe you'll find it interesting.

https://github.com/Glavnokoman/vuh


"You need to take care so the synchronization isn't eating your performance wins though. I know you wrote "in a sensible manner", so you probably already thought of that."

Like I said, I haven't gotten too far into it, but one of the other ideas that keeps coming to mind is to have some sort of cost model in the language, because one of my other I Have A Dream elements of a new language is the ability to make assertions about blocks of code like "this code is inlined" or (in a GC'ed language) "this variable is stack allocated", or in the case of crypto code, "this code is optimized only to constant-time operations". These assertions wouldn't make the thing they are asserting happen, because experience over the decades shows that that doesn't work (witness the way that inline pragmas have gotten consistently weaker over the years to the point that I believe they're no-ops in many environments), but they'd be signs to the compiler to say back to the user that if the constraint is violated, here's a clear message explaining why.

The idea could be adapted to something like "this block of code only copies the data to the GPU once" or some variant of that claim (a given byte only copied once).

I think a similar thing could be used for some interesting network/cloud transparency, too, where the language permits wiring up function calls/messages/whatever transparently, but you can use labeling to make compile-time (or possibly start-up time) assertions about how expensive the result is going to be. I've tossed around the idea of making a function call not necessarily be the fundamental primitive, but maybe have something weaker than that be the cross-module default, like an Erlang message pass. One of the reasons networks are such a pain to work with in conventional programming languages is the mismatch between function calls and network operations; it would be interesting to expand on Erlang and see if we might be able to address that via weakening the function call across modules.

Anyhow, like I said, I'll never get to any of this, but I do sort of feel like there's a surprising amount of room for a new language out there that doesn't get explored very well because people keep remaking Python or something like D over and over again, just with slightly different spelling.


You probably already know of it, but you might find Halide interesting. It aims to separate computation strategy and dataflow, in a performant fashion.

Also, potentially pyCUDA and other things that google autocompletes when you type "pyCUDA vs".

None of them are exactly what you're talking about, of course.


This is one of those things I have in my list of things to try out one of these days. If I'm not mistaken, this is what Google use to implement their image algorithms in their Google Camera Android app. Marc Levoy is one of the co-authors of the Halide paper [0] and is now working at Google, so it's natural. Do you happen to know how widely used it is in industry?

[0] http://graphics.stanford.edu/papers/halide-cacm18/halide-cac...

https://halide-lang.org/


If you mean me (and I guess others aren't likely to come across this conversation):

I was very impressed by Halide but I'm not involved and have no idea how widely it is used, sorry. I could imagine that a lot of potential users enjoy tweaking GPU and SIMD code enough, or have enough confidence in their results, that they don't give it a good look.


It hits an iteration-count brick wall very fast; I'm unsure if it's an inherent limitation of the techniques in use or laziness about providing an adjustment.


> I Have A Dream (which will never happen) of writing a programming language that makes it easier to mix CPU and GPU execution in some sensible manner.

For a brief and beautiful moment there were CPUs designed in a way that they could be used as GPUs and yet be programmed as CPUs. Intel called them Larrabee at first, then Xeon Phi.


It should be possible to use fixed point math for a fractal. That should fit in the GPU well.

Obviously for more zoom, you're still going to need more bits (and therefore different codepaths for different zoom levels).

Fancy approximations can probably be done to look at computation results from neighbouring pixels too. The number of bits to store the difference from an intermediate pixel is far smaller.
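
As a toy illustration of the fixed-point idea (28 fractional bits assumed, which comfortably covers shallow zooms; more bits means wider integers and different codepaths, as noted above):

  FRAC = 28                        # fractional bits; values scaled by 2**FRAC

  def to_fix(v):
      return int(v * (1 << FRAC))

  def mandel_fixed(cx, cy, max_iter=256):
      # escape count for (cx, cy), integer arithmetic only
      zx = zy = 0
      four = to_fix(4.0)
      for n in range(max_iter):
          zx2 = (zx * zx) >> FRAC
          zy2 = (zy * zy) >> FRAC
          if zx2 + zy2 > four:     # |z|^2 > 4 means escape
              return n
          zy = ((2 * zx * zy) >> FRAC) + cy
          zx = zx2 - zy2 + cx
      return max_iter

  print(mandel_fixed(to_fix(-0.75), to_fix(0.1)))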


For a fun variation, check out this twist I made. I use Mandelbrot as a warp instead of visualizing the iterations. https://www.shadertoy.com/view/XsKfDc

If it doesn’t work in your browser because of the camera input, try replacing the iChannel0 input with one of the built-in videos. Firefox seems to work best for me to use the Camera input, and turning your head into MandelHeads is pretty hilarious...


Picky point:

> Let’s just say that for our purposes we can think of complex numbers existing on a line perpendicular to our regular number line. If you take a piece of graph paper and mark a line from left to right, and then another line that crosses that line at 90 degrees, from top to bottom, you can imagine that the horizontal line represents the real number line, and the vertical line represents the complex number line.

The vertical line is the imaginary number line. 2 is as much a complex number as 3i is (2 + 0i vs 0 + 3i).


The Mandelbrot set is bizarre, astoundingly so. E.g., since it is a fractal, it is astoundingly irregular.

Mandelbrot argued that shapes in nature are commonly fractals. So, we could go to a mountain range, find a lake, and then suspect that the surface is a 2D fractal.

Well, as bizarre as the Mandelbrot set is, actually it is a closed set in the usual topology for the plane. Then there is a theorem that there is a function from the plane to the line that is 0 on the Mandelbrot set, strictly positive otherwise, and infinitely differentiable. Similarly there is a function that is 0 on the boundary of that set, negative in the interior, and positive otherwise.

So, in general, with the Mandelbrot set just an example, these results hold for any closed set in the plane. And the result generalizes to dimension 1 and all finite dimensions greater than 2. So, a set is closed if and only if it is the level set of an infinitely differentiable function that is 0 on the boundary of the set, negative in the interior, and positive otherwise.
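
In symbols, the first of those claims (a Whitney-type result) reads: for any closed $C \subseteq \mathbb{R}^n$ there exists $f \in C^\infty(\mathbb{R}^n)$ with

$$f^{-1}(0) = C, \qquad f > 0 \ \text{on}\ \mathbb{R}^n \setminus C.$$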

This result can be used to show that in the constraint qualifications for the Karush-Kuhn-Tucker conditions, the (i) Zangwill and (ii) Kuhn-Tucker constraint qualifications are independent. For a while that fact was an outstanding unanswered question about constraint qualifications.

Somewhere in one of the papers of Arrow, Hurwicz, and Uzawa on mathematical economics there is a similar question about constraint qualifications asked but not answered that is easily answered with an example based on these results about infinitely differentiable functions and closed sets.

Back to the mountains: we can argue that the surface of the land, above the lake, at its boundary, and on its bottom, can be infinitely differentiable while the lake is the Mandelbrot set, or any closed set.

Bizarre stuff!


Here's a fun fact about the Mandelbrot set, which I will communicate in Python in the hope that you can have a little fun witnessing the results:

  def N(eps):
    # iterate z -> z^2 + c, counting steps until |z| reaches 2,
    # for c sitting just above the point -0.75 where the two
    # largest bulbs of the set touch
    c = -0.75 + eps*1j
    n = 0
    z = 0
    while abs(z) < 2:
      z = z*z + c
      n += 1
    return n

  [N(eps) for eps in [0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001]]


When you set c = -0.75 + k*i, you can obtain an arbitrarily good approximation of pi by multiplying k by the number of iterations p it takes for the iteration z -> z^2 + c (starting from z = 0) to diverge. The approximation increases in precision as k tends to 0.

There's a neat video explaining it here: https://www.youtube.com/watch?v=d0vY0CKYhPY


That was interesting. It continues for a little bit, but it soon falls apart:

    $ pypy3 man.py
    [33, 315, 3143, 31417, 314160, 3141593, 31415927, 7853981629]


It continues forever! See here for example: https://mathoverflow.net/questions/215187/is-there-a-referen...

I believe you are just running into the limits of double precision arithmetic; indeed log_2((10^8)^2) > 53.


Interestingly, 7.853981629 is close to pi*2.5


very cool!


What I like about generating this picture is that the color of every pixel is completely independent. If you have 2 cores, you can generate it twice as fast. If you have a million cores, you can generate it a million times faster.

Apparently I attempted this in Haskell 10 years ago: https://gist.github.com/jrockway/291074. I seem to recall that adding things to that GTK image builder was the slowest part by far. I bet things are better 10 years later though.
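
A minimal sketch of that embarrassingly parallel structure (an 800x600 view of the usual region is assumed; each row is an independent unit of work handed to a process pool):

  from multiprocessing import Pool

  W, H, MAX_ITER = 800, 600, 100

  def escape(c):
      z = 0j
      for n in range(MAX_ITER):
          z = z * z + c
          if abs(z) > 2:
              return n
      return MAX_ITER

  def row(y):
      # every pixel in the row is independent of every other pixel
      im = 1.25 - 2.5 * y / H
      return [escape(complex(-2.0 + 3.0 * x / W, im)) for x in range(W)]

  if __name__ == "__main__":
      with Pool() as pool:          # one worker per core by default
          pixels = pool.map(row, range(H))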


Take a look at Fractint (fractint.org) and all kinds of fractals it can generate.

I became obsessed with IFS (iterated function systems) a looooong time ago. Take a look at the collage theorem and try to make your own ;-)

Cheers


Oh, me too. I was obsessed with Fractint when I was a teenager - It's what got me into coding. Wrote some Pascal that morphed one IFS pattern into another & was soooo pleased with myself :-)

Some modern fractal things that I appreciate:

Example IFS generator: http://sirxemic.github.io/ifs-animator/

Mandelbulb 3D (amazing): http://www.mandelbulb.com

If anyone wants to start with something simple, try and write a Sierpinski Gasket generator. Very very simple, yet infinitely complicated.
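
One minimal way in is the "chaos game" (a sketch; plotting is left to whatever library you like):

  import random

  # start anywhere, repeatedly jump halfway toward a random triangle
  # vertex; the visited points trace out the Sierpinski gasket
  verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
  x, y = 0.3, 0.3
  points = []
  for _ in range(50000):
      vx, vy = random.choice(verts)
      x, y = (x + vx) / 2, (y + vy) / 2
      points.append((x, y))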


Fractint was one of a number of programs that were very popular for a period because they really showed off graphics cards that, for the first time on consumer hardware, could display reasonably high resolution graphics (EGA then VGA).


There's some fun to be had with the Java based jwildfire these days, http://www.jwildfire.org/, although arguably it's a bit more focussed on making things pretty.


I thought for a moment you were referring to FRACTRAN (https://en.wikipedia.org/wiki/FRACTRAN), which is a whole different kind of interesting.


Playing with Fractint and reading Chaos shaped my world view more than most things in my youth. It formed the basis for what passes for spirituality in me.


I have a Python fractal drawer here - https://github.com/starnose/fract

It can do a load of different types (Pickover biomorphs, Newton-Raphson approximations, burning ships, Mandel-drops, etc.). Always loved drawing fractals. One day I'll get around to implementing ray-marching and drawing the Mandelbulb!


Hah - s/he's complaining about "slow" being 2-7 seconds...

...I remember back when I was in high school making a generator for this in BASIC on an Apple IIe, running 16 colors in the "double high res mode" (needed the 80-column card for it - only machine that had it) - and having it run for hours just to generate one image.

Slow...sigh.


I have those memories as well, only it was a Commodore 64. I believe I found the program used to generate the set in Compute! magazine and had to type it manually. Good times.


same here - regular hi res took overnight sometimes.


Pardon the off-topic comment, but is anyone else finding the discussed web page to be sluggish to the point of unusability? (This is on Firefox 67, on Windows 7.) I'm not talking about page load time; once the page has finished loading:

- trying to scroll the page using arrow keys or Page Up/Down keys takes several seconds between key press and response

- when dragging the scroll bar manually, the new scrolled-in area is blank white for several seconds before finally filling in with content

- even right-click, or click-drag to select text, takes a few seconds to respond

I tried disabling javascript on the page, but it makes no difference, so it's not some runaway script. I'm guessing that maybe there are some very high resolution Mandelbrot images that are being scaled in the browser?


FF 67.0 on Win10 w/uBlock Origin, i3 7100/8 GB, snappy. Scrolling back up reloads/regens images, so there's a sub-second flash of empty in the image locations, but otherwise it's not misbehaving for me at all.


https://www.sqlite.org/lang_with.html (scroll down to the bottom to see a SQLite Mandelbrot)


It gives me warm fuzzies being surrounded by fellow Mandelbrot lovers on this thread. I've been enchanted by fractals ever since seeing IFS ferns that someone made in BASIC on a green monochrome screen.

I've wanted for a long time to get around to generating the Mandelbrot as a 3D height map (Fractint could do that from a static viewpoint), and make it explorable. Ideally where you can change size to explore it all...


That sounds like a cool idea. Do you mean the height would be based on color with the blackness being either a deep point or a high plateau?


Yeah, pretty much, though really the colors are based on how long it takes a point to "escape" the set, if it does, with the blackness being ones that never do (to whatever level of diligence you have time to compute "never"). I was thinking the center black bug would be a deep crevasse or the "floor", with mountains rising on either side, so you could wander around in the canyons around a giant flat lowland, surrounded by mountain ridges.


I always found implementing fractals very satisfying, so this was a good read. Shameless plug for my own subpar Julia set viewer (Julia set results "make up" the Mandelbrot set): http://thejamespaterson.com/scripts/julia/


Your website hurt my brain but I enjoyed the Julia Fractal generator.


I made a pull request with large performance improvements, and compatibility with all modern browsers: https://github.com/ScionOfBytes/smooth-mandelbrot/pull/1


I’ve got some Mandelbrot resources at the bottom of this Github page:

https://github.com/melling/ComputerGraphics


It took 4 Amiga 500s about 3 days to render the Mandelbrot set at 320x200 when I was in high school. In case anyone is plotting out the increase in computer power over the years.


This article reminded me of the good old days as well. I wrote some toy Mandelbrot renderers on Amiga. Mine didn't have an FPU so any trivial algorithm would choke itself with the floating point emulation code and would probably take days to finish. So, the first fix would be to rewrite it for integer math. (And, eventually, write a proper floating-point implementation at the time when I did obtain an Amiga with an FPU...)

The results were nothing close to real-time but the speed difference was astronomical. With a bit of tricking the compiler to produce good code you could get a screenful in a few minutes. There were good Mandelbrot zoomers for Amiga written in assembler that you could, maybe barely but still, call real-time renderers. Of course, I added panning and zooming in mine as well, even though the performance at the time was more akin to "real-long-time" than "real-time".

The number of optimisations available is mind-boggling, especially for real-time renderers, which only need to care about correctness in the final image -- which often won't be rendered at all because the user has moved elsewhere or zoomed deeper before getting there.

So, rendering as you iterate is one thing you can do: as soon as the image has enough detail, it's "finished" for all practical purposes and either the user moves around elsewhere or leaves the renderer to finish the final iterations with hopefully only minor updates on the actual screen.

Also, because the set is the (typically) black area, the details for the nice color shades are quicker to compute because those pixels run away sooner. On the other hand, the shaded areas are really not where the genuine details are, so you're left with a lot of ways to precalculate estimated values for these pixels when you're certain you're not too close to the set. A real-time renderer can also save work by only computing new pixels revealed by panning and reusing pixels that have only moved but are still on the screen, as in the sketch below. To some extent it's also possible to quickly extrapolate temporary pixel colors after panning, then try to lock down where there might be pixels that belong to the set.
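
A minimal sketch of that panning reuse (numpy assumed; compute_column is a hypothetical callback that renders one newly exposed column):

  import numpy as np

  def pan_right(img, dx, compute_column):
      # keep every pixel still on screen: shift the image left by dx
      img = np.roll(img, -dx, axis=1)
      # only the dx newly revealed columns on the right need real work
      for x in range(img.shape[1] - dx, img.shape[1]):
          img[:, x] = compute_column(x)
      return img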

Of course, the usual, more mechanical inner-loop optimisations apply as well. This becomes mandatory at deeper zoom levels where you raise the threshold for discarding pixels, and you actually have to run lots of iterations for nearly all pixels.

I just love these applications where a seemingly simple algorithm lends itself to an endless amount of performance optimisation. Especially with fractals, the limits are literally... unlimited.


You had to code it in assembly! I believe it took only a few minutes on my Amiga.


I wrote a version in False[1] (sadly lost to the sands of time) which was pretty nippy on my A5000.

[1] http://strlen.com/false-language/


That's surprising. BBC BASIC on a 2Mhz 6502 could render 128x256 (a square using fat pixels) in about 6 hours. I've seen a recent optimised 6502 assembly version, also on the BBC Micro, that runs in about a minute.


Did that with Apple II+. 32 bit floats helped it be less slow, but it wasn't fast by any measure. I also exploited some PAL-M artifacting to fake more colors (with bigger pixels, which also sped up things)


Also popular around that time in the 80s:

https://en.wikipedia.org/wiki/Bifurcation_diagram

And the book Chaos, by James Gleick

Oh, and The Cuckoo's Egg, by Clifford Stoll


Invariably, when learning a new language, making a Mandelbrot generator is my first project (after tutorials). Added benefit: I can compare benchmarks, since the Mandelbrot algorithm is CPU-heavy.


As I was reading this I just kept hearing in my head...

Take a point called Z in the complex plane,

let Z[1] be Z^2 + C...

let Z[2] be Z[1]^2 + C...

let Z[3] be Z[2]^2 + C

And so on.

https://www.youtube.com/watch?v=QrztrxV9OtQ


Mandelbrot is of the devil, Bobby Boucher! It is entirely too easy to lose a Saturday on YouTube just watching these videos.


Whilst we're on the subject, let's not forget ffmpeg:

     ffplay -f lavfi -i mandelbrot



