
Wonderful article on fractals and fractal zooming/rendering! I had never considered the inherent limitations and complications of maintaining accuracy when doing deep zooms. Some questions that came up for me while reading the article:

1. What are the fundamental limits on how deeply a fractal can be accurately zoomed? What's the best way to understand and map this limit mathematically?

2. Is it possible to renormalize a fractal (perhaps only "well behaved"/"clean" fractals like the Mandelbrot set) at an arbitrary zoom level by deriving a new formula for the fractal at that level? (Intuition says no; or maybe yes, but with additional complexities/limitations that just push the problem around. My experience with fractal math is limited.) I'll admit this is where I hit the limits of my own knowledge in the article, where it discusses this as normalizing the mantissa, with the drawback that each pixel must then be computed on the CPU.

3. If we assume there are fundamental mathematical limits on zoom, should we consider an alternative that looks perfect with no artifacts (even if not technically accurate) at arbitrarily deep zoom levels? Is it possible in principle for a mega-zoomed-in fractal to appear flawless, or is it provable that at some zoom level there is simply no way to render a coherent fractal, or even the appearance of one?

I always thought of fractals as a view into infinity from the 2D plane (indeed the term "fractal" is meant to convey a fractional, non-integer dimension, e.g., a curve in the plane whose dimension lies between 1 and 2). But I never considered our limits as sentient beings with physical computers: we will never be able to fully explore a fractal, so to us it is an infinity in idea only, not in reality.




> What are the fundamental limits on how deeply a fractal can be accurately zoomed?

This question is causing all sorts of confusion.

There is no fundamental limit on how much detail a fractal contains, but if you want to render it, there's always going to be a practical limit on how far it can accurately be zoomed.

Our current computers kinda struggle with hexadecuple precision floats (512-bit).
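
To make that practical limit concrete, here is a minimal sketch (plain Python with software decimals; my own illustration, not from the article, and the coordinates are just a frequently cited deep-zoom location): the number of digits you carry is what bounds how far you can zoom before neighbouring pixels become numerically indistinguishable.

    from decimal import Decimal, getcontext

    getcontext().prec = 100  # ~100 decimal digits: enough for zooms near 10^100

    def escape_time(cr, ci, max_iter=200):
        # Standard Mandelbrot iteration z <- z^2 + c, done in software
        # decimals; with plain 53-bit doubles this degrades past ~10^16 zoom.
        zr = zi = Decimal(0)
        for i in range(max_iter):
            zr, zi = zr*zr - zi*zi + cr, 2*zr*zi + ci
            if zr*zr + zi*zi > 4:
                return i
        return max_iter

    c = (Decimal("-0.743643887037151"), Decimal("0.131825904205330"))
    print(escape_time(*c))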


1. No limit. But you need to find an interesting point; for Mandelbrot, the information is encoded in the many digits of that (x, y) point. Otherwise you'll end up in flat space at some point while zooming.

2. Renormalization to do what? In the case of Mandelbrot you can take a neighboring point, build the Julia set for it, and get similar patterns in a more predictable way (see the sketch after this list).

3. You can compute the perfect version, but it takes more time; this article discusses optimizations and shortcuts.
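
To illustrate the Julia trick mentioned above (a hypothetical plain-Python sketch, not the article's code): fix c at a point near the Mandelbrot zoom target and iterate the same map; the resulting Julia set shows locally similar structure.

    def julia_escape(zr, zi, cr, ci, max_iter=100):
        # Same iteration z <- z^2 + c as Mandelbrot, but with c *fixed*
        # (Julia set) rather than varying per pixel (Mandelbrot set).
        for i in range(max_iter):
            zr, zi = zr*zr - zi*zi + cr, 2*zr*zi + ci
            if zr*zr + zi*zi > 4:
                return i
        return max_iter

    c = (-0.7436, 0.1318)  # neighbourhood of a popular zoom target
    for y in range(20):
        print("".join(" .:-=+*#%@"[julia_escape(-1.5 + 3*x/60, -1.0 + 2*y/20, *c) * 9 // 100]
                      for x in range(60)))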


1. There must be a limit; there are only around 10^80 atoms in our universe, so even a universe-sized supercomputer could not calculate an arbitrarily deep zoom that required 10^81 bits of precision. Right? (See the back-of-the-envelope sketch after this list.)

2. Renormalization just "moves the problem around" since you lose precision when you recalculate the image algorithm at a specific zoom level. This would create discrepancies as you zoom in and out.

3. You cannot, because of the fundamental limits on computing power. I think you cannot compute a mathematically accurate and perfect Mandelbrot set at an arbitrarily high zoom level, say 10^81, because we don't have enough compute or memory available for the required precision.
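
A quick back-of-the-envelope sketch (hypothetical Python, my own numbers): the precision needed grows with the logarithm of the zoom factor, so a 10^81 zoom takes only a few hundred bits; it is a zoom factor of roughly 2^(10^81) that would demand 10^81 bits.

    import math

    def bits_needed(zoom, pixels_across=1920):
        # Adjacent pixels differ by (window width) / pixels, so the
        # mantissa must have enough bits to resolve that difference.
        return math.ceil(math.log2(zoom * pixels_across))

    print(bits_needed(1e16))    # 65: already past the 53-bit double mantissa
    print(bits_needed(10**81))  # 280 bits: still easy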


1. You asked about the fundamental limits, not the practical limits. Obviously, in practice you're limited by how much memory you have and how long you're willing to let the computer run to draw the fractal.


1. Mandelbrot is infinite. The number pi is infinite too, and contains more information than the universe.

2. I don't know what you mean or are looking for with normalization, so I can't answer further.

3. It depends on what you mean by computing Mandelbrot. We are always making approximations for visualisation by humans; that's what we're talking about here. If you mean we will never discover more digits of pi than there are atoms in the universe, then yes, I agree, but that's a different problem.


Pi doesn't contain a lot of information since it can be computed with a reasonably small program. For numbers with high information content you want other examples like Chaitin's constant.
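
For concreteness, "a reasonably small program" can be taken literally (a sketch of Gibbons' unbounded spigot algorithm in Python; the function name is mine): about ten lines stream as many digits of pi as you like, which is why its Kolmogorov complexity is tiny.

    from itertools import islice

    def pi_digits():
        # Gibbons' unbounded spigot: streams the decimal digits of pi forever
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4*q + r - t < n*t:
                yield n
                q, r, t, k, n, l = 10*q, 10*(r - n*t), t, k, (10*(3*q + r))//t - 10*n, l
            else:
                q, r, t, k, n, l = q*k, (2*q + r)*l, t*l, k + 1, (q*(7*k + 2) + r*l)//(t*l), l + 2

    print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]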


> Pi doesn't contain a lot of information since it can be computed with a reasonably small program.

It can be described with a small program. But it contains more information than that. You can only compute finite approximations, but the quantity of information in pi is infinite.

The computation is fooling you because the digits of pi are not all equally significant. That is irrelevant to information theory.


No, it does not contain more information than its smallest representation. This is fundamental, and follows from many arguments, e.g., Shannon information, compression, Chaitin's work, Kolmogorov complexity, entropy, and more.

The phrase “infinite number of 0’s” does not contain infinite information. It contains at most what it took to describe it.


Descriptions are not all equally informative. "Infinite number of 0s" will let you instantly know the value of any part of the string that you might want to know.

The smallest representation of Chaitin's constant is "Ω". This matches the smallest representation of pi.


"Representation" has a formal definition in information theory that matches a small program that computes the number but does not match "pi" or "omega".


No, it doesn't. That's just the error of achieving extreme compression by not counting the information you included in the decompressor. You can think about an algorithm in the abstract, but this is not possible for a program.


You seem wholly confused about the concept of information. Have you had a course on information theory? If not, you should not argue against those who’ve learned it much better. Cover’s book “Elements of information theory” is a common text that would clear up all your confusion.

The “information” in a sequence of symbols is a measure of the “surprise” on obtaining the next symbol, and this is given a very precise mathematical definition, satisfying a few important properties. The resulting formula for many cases looks like the formula derived for entropy in statistical mechanics, so is often called symbol entropy (and leads down a lot of deep connections between information and reality, the whole “It from Bit” stuff…).

For a sequence to have infinite information, it must provide nonzero “surprise” for infinitely many symbols. Pi does not do this, since it has a finite specification. After the specification is given, there is zero more surprise. For a sequence to have infinite information, it cannot have a finite specification. End of story.
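
A tiny empirical illustration of that distinction (hypothetical Python, my own sketch): the per-symbol entropy of pi's digit stream looks near maximal, roughly log2(10) bits per digit, but that measures the statistics of the digits, not the information content of the number, which is bounded by its finite specification.

    from collections import Counter
    from math import log2

    def symbol_entropy(seq):
        # Empirical Shannon entropy: expected "surprise" = -sum(p * log2(p))
        counts = Counter(seq)
        n = len(seq)
        return -sum((c/n) * log2(c/n) for c in counts.values())

    digits = "14159265358979323846264338327950288419716939937510"
    print(symbol_entropy(digits))  # close to log2(10) ~ 3.32 bits/digit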

The specification has the information, since during the specification one could change symbols (getting a different generated sequence). But once the specification is finished, that is it. No more information exists.

Information content also does not care about computational efficiency, otherwise the information in a sequence would vary as technology changes, which would be a poor definition. You keep confusing these different topics.

Now, if you've never studied this topic properly, stop arguing things you don't understand with those who've learned them. It's foolish. If you'd studied information theory in depth, you wouldn't keep doubling down on this claim. We've given you enough places to learn the relevant topics.


Actually it does, you can look it up. It's naturally a bit more involved than what I use in a casual HN comment.


> could not calculate an arbitrarily deep zoom that required 10^81 bits of precision. Right?

I’m here to nitpick.

The number of bits is not strictly 1:1 with the number of particles. I would propose using the distances between particles to encode information.


... and how would you decode that information? Heisenberg sends his regards.

EDIT: ... and of course the point isn't that it's 1:1 wrt. bits and atoms, but I think the point was that there is obviously some maximum information density -- too much information in "one place" leads to a black hole.


Fun fact: the maximum amount of information you can store in a place is the entropy of a black hole, and it's proportional to the surface area, not the volume.
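
For reference, that is the Bekenstein-Hawking entropy (horizon area A, the other symbols the usual constants):

    % Bekenstein-Hawking entropy of a black hole with horizon area A
    S_{BH} = \frac{k_B \, c^3}{4 \, G \, \hbar} \, A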


Yeah, I forgot to mention that in my edit. The area relation throws up so many weird things about what information and space even is, etc.


10^81 zoom is easy: that only needs about log2(10^81) ≈ 270 bits of precision. You run out of bits at a zoom of 2^(10^81), i.e., 2 raised to a power of 1 followed by 81 zeros.


We can create enough compute and SRAM memory for a few hundred million dollars. If we apply science, there are virtually no limits within a few years.

See my other post in this discussion.


In the case of Mandelbrot, there is a self-similar renormalization process, so you can obtain such a formula. For the "fixed points" of the renormalization process, the formula is super simple; for other points, you might need more computations, but it's nevertheless an efficient method. There is a paper by Bartholdi where he explains this in terms of automata.


As for practical limits, if you do the arithmetic naively, then you'll generally need O(n) memory to capture a region of size 10^-n (or 2^-n, or any other base). It seems to be the exception rather than the rule when it's possible to use less than O(n) memory.

For instance, there's no known practical way to compute the 10^100th bit of sqrt(2), despite how simple the number is. (Or at least, a thorough search yielded nothing better than Newton's method and its variations, which must compute all the bits. It's even worse than π with its BBP formula.)
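
To see the "must compute all the bits" point concretely (plain-Python sketch; math.isqrt is the exact integer square root, a Newton-style iteration under the hood): extracting digit number n drags an n-digit intermediate along with it, hence the O(n) memory.

    from math import isqrt

    def sqrt2_digits(n):
        # The first ~n decimal digits of sqrt(2), as the integer
        # floor(sqrt(2) * 10^n) = isqrt(2 * 10^(2n)). The working value
        # itself has ~n digits, so memory is O(n); there is no known way
        # to reach just the n-th digit without the ones before it.
        return isqrt(2 * 10**(2 * n))

    print(sqrt2_digits(50))  # 14142135623730950488...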

Of course, there may be tricks with self-similarity that can speed up the computation, but I'd be very surprised if you could get past the O(n) memory requirement just to represent the coordinates.





