On the (Small) Number of Atoms in the Universe (norvig.com)
298 points by lispython on April 28, 2016 | 164 comments



I like how Ken Jennings dealt with the 'Go complexity' analogy:

"Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10^170 board positions in Go, and only 10^80 atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board."

http://www.slate.com/articles/technology/technology/2016/03/...
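
A quick sanity check of that arithmetic, as a throwaway Python sketch using the quote's rounded figures:

    atoms = 10**80          # atoms in the observable universe (rounded)
    positions = 10**170     # legal Go positions (rounded)

    # One universe of atoms' worth of parallel universes:
    total_atoms = atoms * atoms       # 10^160
    print(positions // total_atoms)   # 10^10: positions still win by ten-billion-fold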


Comparing combinations with numbers of items is unfair.

In Go, the number of items is the number of pieces, and it's very small.

In the universe, the number of combinations of positions of all the atoms is, well, wonderful.


I don't think anyone believes that Go is somehow more complex than the universe it is a subset of. The point is that enumerating all cases of Go is impossible and always will be, so more sophisticated analysis is required.


Indeed, enumerating all

208168199381979984699478633344862770286522453884530548425 639456820927419612738015378525648451698519643907259916015 628128546089888314427129715319317557736620397247064840935

positions in Go is impossible.


That's called "counting". Enumeration is iterative.


In your enumeration, what's the board look like at position 348277381979984699478633344862652779770286522453884530548425639456820927419612?


I can't say, because I didn't enumerate them. I only counted them. See

http://tromp.github.io/go/legal.html

for the method used, which is a form of dynamic programming.
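
For flavor, here is a brute-force sketch in Python (emphatically not the dynamic-programming method from the link, and the function names are mine) that counts legal positions on a tiny board by checking that every stone group touches an empty point:

    from itertools import product

    def has_liberty(board, n, start):
        """Flood-fill the group at `start`; True if it touches an empty point."""
        color = board[start]
        stack, seen = [start], {start}
        while stack:
            p = stack.pop()
            r, c = divmod(p, n)
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if 0 <= nr < n and 0 <= nc < n:
                    q = nr * n + nc
                    if board[q] == 0:
                        return True        # found a liberty
                    if board[q] == color and q not in seen:
                        seen.add(q)
                        stack.append(q)
        return False

    def count_legal(n):
        """Count colorings in {empty, black, white}^(n*n) where every
        group of stones is adjacent to at least one empty point."""
        return sum(
            all(board[p] == 0 or has_liberty(board, n, p) for p in range(n * n))
            for board in product((0, 1, 2), repeat=n * n)
        )

    print(count_legal(2))  # 57 of the 3^4 = 81 arrangements are legal

This only scales to about 3x3 before the 3^(n*n) enumeration explodes, which is exactly why the border-state counting in the paper is needed.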


Though if it is dynamic programming, then it should be possible for you to answer dogecoinbase's question using not much more computational power than you used to count them in the first place, right?

If you think of dynamic programming as counting the number of paths in a directed graph (in this case, from skimming the paper, the nodes correspond to border states), then given a path number, you can trace the path backwards through the graph, as long as you remember the number of paths ending in every vertex.


Yes, you could if you preserved all intermediate counts. But the graph I used has 362 layers each of which can have up to 363 billion nodes, so I had to recycle the space used for the counts (4TB per layer). Also, I didn't even compute with full counts; I reconstructed them using the Chinese Remainder Theorem from 9 separate modular counts. So, yes, it's possible, but highly impractical...


1/ Convert the number to base 3.

2/ Each digit represents an intersection on the goban, assuming the following mapping: 0 = no stone, 1 = black stone, 2 = white stone.
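
A minimal sketch of that decoding in Python (the symbols are my choice, and legality is not checked):

    def board_from_index(i, n=19):
        """Read i's base-3 digits, least significant first:
        0 = empty (.), 1 = black (X), 2 = white (O)."""
        cells = []
        for _ in range(n * n):
            i, d = divmod(i, 3)
            cells.append(".XO"[d])
        return "\n".join("".join(cells[r * n:(r + 1) * n]) for r in range(n))

    print(board_from_index(5, n=3))  # 5 = 2 + 1*3, so the first row is "OX."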


That would work for all positions regardless of legality, which are in 1-1 correspondence with {0,1,2}^(19*19).

The count above is for legal positions only, i.e. those where every connected group of stones is adjacent to an empty point.


Sametmax's point is that you're comparing a simple count (atoms) to a factorial (combinations of pieces). For example, it's hardly surprising that 6! is larger than 6 - factorials grow much faster than simple counts.


I _love_ the way you stated this. Thank you!


Compared with a googol our Universe has negligible atoms: 10^100 - 10^80 = ~10^100

Compared with a googolplex (10^(10^100)) the entire Everettian metaverse is negligible, as 10^(10^100) - (10^80)^2 * (average quarks per atom) * leptons (10^200) * dark multiplier (10^2) = ~1 googolplex

Has anyone ever used a googolplex for anything?

[For ~ read approximately]


Graham's number - a googolplex raised to the googolplexth power =~ Graham's number, which has been used in a proof. There have been larger numbers used in proofs but none yet as famous as Graham's number, about which Martin Gardner wrote a popular mathematics article. http://iteror.org/big/Source/Graham-Gardner/GrahamsNumber.ht...


Rayo's number is so big it is almost impossible to convey how big it is. It is certainly much much much bigger than Graham's number. (Unlike Graham's number, it appears not to serve any useful purpose beyond winning "who can name the biggest number?" games.)

https://en.wikipedia.org/wiki/Rayo%27s_number


Matthieu Walraet "used" a googolplex as a lower bound on the number of possible Go games, in http://matthieuw.github.io/go-games-number/GoGamesNumber.pdf

Does that count:-?


That is a staggering amount of possible Go games, no wonder tree search failed to improve without the Convnet pruning.

Makes me wonder if Deepmind could learn Go without first learning from the big dataset of expert games to train the convnet to prune the tree.

Which implies that Deepmind couldn't learn to play Go without first being taught by us (the expert games).

So AlphaGo learnt Go from us. It took a human brain to crack the problem of Go, and the AI learned from our solutions; it did not discover them itself - still a very great breakthrough.

Would Lee Sedol have won if he could use MCTS to assist his evaluation?

(Arguably the MCTS is a non-AI component of DeepMind & does not learn.)

MCTS = Monte Carlo Tree Search, where repeated random playouts evaluate moves by randomly sampling the tree of following (googolplex) possible moves.


The tree searched by MCTS is WAY smaller than a googolplex, since it excludes the eye-filling moves that allow games to go on past 400 or so moves.


In the OP article Norvig suggests 10^172 possible games and games average 200 moves.

The Walraet paper has 10^(10^103), i.e. a googolplex raised to the 1000th power - this is very large.

Monte Carlo methods are specifically for guesstimating intractably big searches.

MCTS is close to the human heuristic for evaluating Go positions - counting who currently controls what, weighted by the strength of each position (in Go, two 'eyes' make a group impervious to capture). Early on this is very subtle to discern - which makes Go a subtle art.

The human heuristic is complex; the MCTS random playout is overly simple but uses gigaflops of random sampling. MCTS was the basis of the best computer players (the new innovation is the convnets that prune the tree before MCTS).

The convnets are trained on a big database of expert games, which is why I wonder if the MCTS is the differentiating factor & whether Lee Sedol with an MCTS would beat the convnet & MCTS.

This is important because: does AlphaGo plan ?

If not, the implication is planning is a poorer heuristic compared with whatever AlphaGo actually does.

The convnet provably doesn't plan; it estimates what a human expert would do and performs the same whether playing a game or just predicting the next move from random configurations.

AI research has always held planning in high regard.


Actually, MCTS usually doesn't allow eye-filling. That's how it determines a game is over - all of the territory is "eyes". Otherwise, the playout would go on forever, or would stop at an arbitrary stopping point (board width * board length * 3 or something) which is not as accurate.


Don't conflate the "Observable Universe" with the actual Universe. We flat out don't know how big the actual Universe is. So, it could be 10^80, 10^800, or even A(10, 80)* Atoms.

*https://en.wikipedia.org/wiki/Ackermann_function


It could be infinite.


If that is the case, was the universe once finite and then went infinite during the early (big bang) expansion? I don't understand how something could have expanded if it was always infinite in size. I'm not even sure the concept of expansion even makes sense. What is infinite + 1? It's just infinite. It seems more like the expansion is a distribution of internal things.


You can see back as far as the Big Bang, approximately 14 billion years ago, so all matter in the visible Universe is within 14 billion light years of the Earth. However, the space the Universe occupies is only really finite if it's positively curved. If it's flat or negatively curved, the space it occupies is infinite, and if its density is constant, it must contain an infinite amount of matter even though we only see part of it. The Big Bang is a singularity - a point with infinite density. It's hard to get your head around intuitively, but it all makes sense mathematically. (The maths is pretty hard, and rarely covered at undergraduate level.)

The simplest model in rough agreement with observations is called the Einstein-de Sitter Model, and it's flat with a zero cosmological constant. See http://www.britannica.com/science/cosmology-astronomy/Relati...

More general models are covered here: https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtr...


Radius of the visible universe is actually 45.7 billion ly.



Off the cuff thought: the overall universe is infinite and not expanding, and it's only the visible universe that's expanding into that infinite space.

Now try to wrap your mind around this: someone that's one light year to the left is going to see a slightly different visible universe, also expanding, into the same infinite space. But if we look in their direction, we see the edge of our visible universe expanding into the void, but from their point of view looking in the same direction our edge is one light year short of their edge. So what's our edge expanding into?


I've always wondered: is there a "last" galaxy in any direction, such that for an observer in that galaxy, no further light or radiation can be detected from that direction? (outside that galaxy)

That must be a terrifying place to live in...


The typical way to picture the universe is like the surface of an expanding balloon, a 2-manifold which happens to be embedded in 3-space. If you picture galaxies as spots on the balloon, there's no "last" galaxy, they're all roughly equidistant from their neighbors. Analogously, our Euclidean universe is thought of (in terms of noncompact spatial dimensions) as an expanding 3-manifold. There's no last galaxy there either. However, if you include time, then at some point in the distant future there will be a last galaxy because old stars will all burn out or be sucked into black holes. And further in the future protons will decay, so there will be no baryonic matter left, plus black holes will eventually evaporate. Much sooner than that though, the continued expansion of spacetime means that galaxies will in time disappear over the expanding cosmological horizon, and future lifeforms will know nothing of the universe outside their own aging galaxies. See: https://en.wikipedia.org/wiki/Future_of_an_expanding_univers...


> If you picture galaxies as spots on the balloon, there's no "last" galaxy

What about looking "up" and "down?" i.e. into the inside of balloon or away from its surface?

I have a hard time wrapping my head around the balloon surface analogy, because galaxies seem to be in all directions of each other...


The analogy fails there because our 3-manifold space isn't necessarily "embedded" into a higher dimensional construct the way the surface of a sphere is embedded into our 3-dimensional universe. So there isn't necessarily a hyperdimensional up or down to consider.

Or, alternatively, if you think of time as a dimension, then we can call "up" the future and "down" the past. In that case, if you look down inside the past, you'll see that in that analogy the center of the universe corresponds with the big bang! And all the galaxies are equidistant from that point/moment in spacetime.


The surface of the balloon represents a 2d space, where you cannot look "up-down" in the 3rd dimension, just like in our 3d space you cannot look "up-down" in the 4th dimension.


I wonder if there would be a Restaurant at the End of the Universe


The last thing we can see with light in any direction is the background radiation. That's when light and matter separated.

That's why gravitational waves are such a big deal: they allow us to look further. (Not further than the limit imposed by the speed of light, though.)


I think they would have to be at the edge of their galaxy for this effect to take place...


What about looking up or down? If it's a flatter galaxy like our Milky Way spiral.

https://news.ycombinator.com/item?id=11594728


Obviously I have no evidence, but my suspicion is that it's like the Pac-Man world: it just continuously loops on itself. It's just really big...


Too late to edit, but I must add that I'm not necessarily asking if there's an "edge" of the Universe.


I don't see how our situation is any less terrifying haha


It's certainly less terrifying than having to look at an infinite abyss.


I really enjoy your belief that you're somehow not always looking at an infinite abyss :P


Math includes the idea of orders of infinity. There are infinitely many prime numbers; there are more positive integers, even more integers (positive and negative), even more rational numbers (A/B), and even more numbers (rational + irrational {e, Pi} etc)...


In what sense are there more rational numbers than prime numbers? They can be put into bijection with each other, so we generally think of them as being the same infinity. There are more real numbers, of course, by Cantor's diagonalization, so your basic point is true.


The set of all prime numbers is contained within the set of rational numbers, but there are rational numbers that are not within the set of prime numbers.

Cantor's diagonalization is simply demonstrating that same inequality by showing a number in set A is not in set B.

Just because you can map two infinities to each other does not mean they are of the same size. Consider: Limit(x->infinity) of (x - (x/2)); algebraically that's clearly Limit(x->infinity) of x/2, which is infinity.

PS: What makes Cantor's diagonalization interesting is you can repeat it recursively an infinite number of times. This is more obvious in base 2.


The existence of a bijection between two sets is what "same size" means in set theory. Yes, there are non-prime integers, but you can establish a bijection between the two, so their cardinalities are equal (both have a cardinality of aleph zero).

The reals, on the other hand, cannot be placed in a bijection with the natural numbers, and there are therefore "more" reals than naturals (i.e. there is an injection from the naturals to the reals, but not from the reals to the naturals -- any function from reals to naturals must have some pair x ≠ y with f(x) = f(y)).


I didn't go far into set theory but, informally, it seems to me that for any given natural number N, there will be N natural numbers less than or equal to N (obviously) and P prime numbers less than or equal to N, and P < N. Doesn't taking the limit as N goes to infinity show that there are fewer primes than there are natural numbers? Where am I going wrong?


It might be easier to think of it this way: Are there more natural numbers than there are even natural numbers?

The answer is no, as illustrated by the Hotel paradox[0], in which we have infinitely many rooms and want to accommodate a (possibly infinite) number of guests. To summarize: for any finite number of guests, you can always find an even-numbered room to correspond to that guest (assume the guests are numbered sequentially, then double their number and put them in the room that has that number). This creates a one-to-one correspondence, which means that the sets are the same size.

You might say, 'well, that only works because we're dealing with a finite number of guests. But we're talking about infinity'. There are a few different ways of answering that question. In my opinion, the easiest way to look at it is to remember that there is no such number as 'infinity' - when we say 'infinity', we're really trying to express the concept of growing without bound. So, the above strategy (double the person's number) works for any arbitrarily large group. At no point does it stop working, even as the group size grows larger and larger, so we can say that the two sets have the same size.

On the other hand, we have no such strategy for putting every irrational number in one-to-one correspondence with the counting numbers. That proof is a little harder, and the analogy with the hotel guests breaks down, unfortunately, so it's a bit tougher to explain.

[0] https://en.wikipedia.org/wiki/Hilbert's_paradox_of_the_Grand...
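
The room assignment from that summary, as a two-line sketch:

    # Guest n -> room 2n: every guest gets a distinct even-numbered room.
    rooms = {guest: 2 * guest for guest in range(10)}
    print(rooms)  # {0: 0, 1: 2, 2: 4, 3: 6, ...}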


> Where am I going wrong?

In short, infinities are complicated and intuition doesn't work well.

In slightly longer, you're talking about the difference of the rate of growth of two functions rather than actually about the cardinalities of the sets.

We define two sets as having the same cardinality when we can create a bijection between them. We can list the primes in order from smallest to largest and number them with the natural numbers. So we'll have 2 match up with 0, 3 with 1, 5 with 2, 7 with 3, etc. Every single prime number will correspond with a natural number AND every single natural number will correspond with a prime number, no exceptions. So they must be the same size. They are also the same size as the integers and the rational numbers but the set of real numbers is a bigger infinity.
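
That pairing is easy to generate; here's a small sketch (the helper name is mine):

    def prime_pairing(count):
        """Yield (natural number, corresponding prime) pairs."""
        primes, candidate = [], 2
        while len(primes) < count:
            if all(candidate % p for p in primes):   # trial division
                primes.append(candidate)
                yield len(primes) - 1, candidate
            candidate += 1

    print(list(prime_pairing(5)))  # [(0, 2), (1, 3), (2, 5), (3, 7), (4, 11)]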


You're confusing a classification system with size.

Is the set of Real Numbers larger, smaller, or the same size as the set of points in a finite 2d object? Can you set up a bijection in either direction?


Assuming you consider the dimensions/axes/whatever of the 2d space to be indexed by reals (which is conventional), then yes one can construct a bijection:

    - normalise x and y coordinates in the shape into the interval (0, 1)
    - interleave the bits of the normalised x and y coordinates
This gives a single real value in the interval (0, 1), which exists and is unique for every point in the space (so it is an injection), and it covers every real number in that interval (so it is a surjection).

This gives you a bijection between points in (a) 2-dimensional space and a segment of the real line (which, in turn, has a bijection with the whole real line if you want to specify that).

Once again, cardinality in set theory is based on injections and bijections. If there is an injection from X into Y, then Y is at least as big as X. If there is a bijection between them (i.e. injections in both directions), then they are the same size.

(Also, bijections are inherently bidirectional.)
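
A finite-precision illustration of the interleaving step (decimal digits rather than bits, names mine; the real bijection needs care with infinite expansions):

    def interleave(x, y, digits=8):
        """Merge the fractional digits of x and y in (0, 1) into one number."""
        xs = f"{x:.{digits}f}"[2:]   # digits after "0."
        ys = f"{y:.{digits}f}"[2:]
        return float("0." + "".join(a + b for a, b in zip(xs, ys)))

    print(interleave(0.12345678, 0.87654321))  # 0.1827364554637281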


You used a countable infinity to tile an uncountable one. The set of 0-to-1 line segments being a countable infinity: 0 to 1 maps to 1, 1 to 2 maps to 2, etc.


I believe you misread the text you're replying to. The bijection is between the points in the 2d space and the points in the line segment. Both of these are uncountably infinite.


>Cantor's diagonalization is simply demonstrating that same inequality by showing a number in set A is not in set B.

If that were true, why go to all the trouble? Just show 1/2, which is not a natural number, or sqrt(2), which is not a rational number.

Cantor's diagonalization is proving that no mapping exists between the natural numbers and the real numbers in [0, 1]; that no matter what mapping you (try to) come up with, there will be a number you would miss.

The primes and rationals have the same size (cardinality) as the natural numbers, namely countably infinite. See https://en.wikipedia.org/wiki/Countable_set#Formal_overview_...


So, you can show the set of Real numbers between 0 and 1 is larger than the set of Natural numbers.

There are an infinite number of points between 0 and 1 and an infinite number of points between 0 and 2. The distance between 0 and 2 is larger. The number of points between 0 and 1 is smaller than the number of points on the unit circle AND they are a different class of infinity.


"The distance between 0 and 2 is larger" is a question about the metric of the space, not size of sets. There are the same number of points in both sets, since f(x) = 2x is a bijection between them.

Try to make arguments from axioms and definitions rather than asserting things from intuition. Intuition is often a useful tool, but (1) it's not an argument, and (2) it's not very helpful once you step into the infinite realm. Incidentally, that's why I went for programming: it's like math, but with no infinity (unless you're using floats, but that's a much easier infinity).


You say same number, you mean same flavor. Either the 'number' is not in R and thus it's not a number or you end up with a host of contradictions.

But, I have had my fun poking people who don't really get set theory.


Best trolling on HN in weeks.


Where in math is the subset partial ordering used to describe one set as larger than another?

Diagonalization isn't showing that a number in set A isn't in set B - that's obviously true for reals and integers, but it's also true for rationals and integers. It's showing that there does not exist a mapping from B to A where there's an element in B for each element in A.

We're obviously not using the same definition of "size". I generally think in terms of cardinality, what are you thinking of?


Cardinality is the number of elements in a set.

If every element in set A is in set B, and there are elements in set B left over, B is larger because that's what larger means. {A,B} < {A,B,C}

There are countable and uncountable infinite sets. All countable sets have a bijection with N. However, there are more than two sizes of infinite sets. Real numbers < Imaginary numbers.


That's not what larger means, that's what (maps into a) strict subset means. In finite numbers, that's the same as larger (greater cardinality), but it's very much not the case for infinite numbers.

There are more than two sizes of infinite sets, but there are just as many real numbers as imaginary numbers for the same reason there's just as many integers as rational numbers.

Try reading this: https://en.wikipedia.org/wiki/Cardinality#Infinite_sets


We agree that {A,B,C} has a lower cardinality than {A,B}.

Now, feel free to try and map the set of Real numbers to the set of irrational numbers. ex: e + ei.


{A,B,C} is of greater cardinality than {A,B}. However, the argument that "a rule works for finite numbers, so it must work for infinite numbers" is clearly false.

For your mapping, see: http://math.stackexchange.com/questions/512397/is-there-a-si...


The distance between 0 and 1 is smaller than the distance between 0 and 2. The number of points between 0 and 1 is larger than the number of rational numbers.


> The distance between 0 and 1 is smaller than the distance between 0 and 2.

This is correct. However, the number of real-valued points between 0 and 1 is the same as the number of real-valued points between 0 and 2.

> The number of points between 0 and 1 is larger than the number of rational numbers.

This is also true because there are uncountably many real-valued points between 0 and 1 and countably many rational numbers.


1. e + ei is not real.

2. f(x) = x + sqrt(2) if there exists an integer k>=0 such that x - k * sqrt(2) is rational; f(x) = x otherwise.

This function maps all real numbers to irrational numbers, 1-to-1.


e is real, e + ei is irrational.


I believe you mean "complex". You can form a bijection between the reals and the complex numbers by interleaving the digits, as any Google search can tell you.


For every point on your mapping I define two points x + 1 and x + 1i. You can assign infinity to one of them but not both.


I'm afraid you have some basic confusion with mathematical concepts, including "mapping", "irrational", "complex", and "infinity".

So, after you define x+1 and x+i, now what? Also please bear in mind that "infinity" is neither real nor complex: if you have a well-defined mapping into complex numbers, then by definition, it never maps to infinity.

(Yes, there are some "functions" like y = 1/x that "maps to infinity", but it's simply mathematicians being lazy and abusing notations because everybody around them understands what's going on.)


Oh, I get it.

There is a basic contradiction in set theory, called Russell's paradox, and there are two ways around it. First is ignoring it, aka everything builds from itself, nothing can become recursive. Or Zermelo's something or other that basically removed membership and equality and hides in the corner crying.

As such infinity is generally assumed not to exist in R. And that causes all this "all infinite sets mappable to each other are equivalent size" crap. It's also why real mathematicians laugh at the set guys.

But, sorry, the way R was initially defined it included infinity, and you only get to put it into 1 place in your mapping. Or as a math professor said, what angle is the highest number in R mapping to?


Where did you get all this idea? You know enough terms, yet somehow you have an incorrect understanding of pretty much everything.

If you are interested, please read an actual math textbook. (Yes, they can be a giant time sink, but at least you'll learn the correct meanings of sets and functions.)


I have taken plenty of high level math classes for fun and easy A's; I even had a department head yell at me for not getting a PhD. So, I can speak the lingo.

But, the absurd results are not generalizable outside of their assumptions, sorry, axioms. Set theory being one of the most obvious cases.

It's sadly like a religion in many ways: follow enough false statements and you can prove anything. Yet, if you find a contradiction, they don't actually accept that at least one of their assumptions is false.


e is irrational, e+ei can't be irrational because it isn't real (all irrationals are reals).


Sorry, an imaginary number composed of two irrational numbers.


The cardinality of the set of prime numbers and of rationals are both aleph-null[1]. Same with the list of all integers, the list of all positive integers, and the list of all even integers. They all have the same cardinality, and that cardinality is aleph-null.

But please, continue to argue with mathematical definitions that have been established for over a century.

[1]: https://en.wikipedia.org/wiki/Aleph_number#Aleph-naught


Within a segment of numbers I understand how there are more rational numbers than integers, but I don't understand it in the context of infinity. How can there be more rational numbers than integers when in both cases there are infinite amounts? Are there mathematical operations or concepts that depend on this (in the context of infinity, not subsets)?


There are different kinds of infinity. The integers are countable, the real numbers aren't. You can prove that if you try to map each integer to some rational number, there will be some rational numbers that are not on that list --- there are more of them. See https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument.


> You can prove that if you try to map each integer to some rational number, there will be some rational numbers that are not on that list --- there are more of them.

Did you mean reals here? There _is_ a (bijection) mapping between integers and rationals.

https://en.wikipedia.org/wiki/Cantor_pairing_function#Cantor...


In mathematics, we lose the concept of how many and fall back to cardinality, which has a loose correlation with how many. So asking "are there the same number" of integers as rational numbers gets a little iffy until we make some definitions. We just say that we can create a bijection from integers to rationals. They each index the other, and for each thing in one, there is one and only one thing in the other. Does this mean there are the same "many"? Well loosely, and in the context of cardinality, yes. But things get weird because there are the same "many" of the whole as a subset (ie, there are as "many" even numbers as integers, as "many" positive numbers as positive and negative integers, etc).

But most people would disagree with you when you say "how can there be more rational numbers than integers..." because while we don't have a firm grasp of how many, we definitely would say that having the same cardinality means that there isn't some notion of "more".

I'm not sure what you mean by mathematical operations or concepts that depend on this.


Thank you for your answer.

> I'm not sure what you mean by mathematical operations or concepts that depend on this.

I suppose I meant to ask if there was any practical application of the concepts you described.


I don't know the answer to your question, but I can imagine how something can expand if it's infinite. Imagine an infinitely long rubber band in front of you. Now imagine grabbing it with two hands and pulling them away from each other, causing the rubber band to stretch. If you take a sharpie and paint dots on it representing planets or whatever else, they will pull away from each other as you stretch the rubber band.


Take the set of non-negative integers (0, 1, 2…). Double them, so you have (0, 2, 4…). You had an infinite list of numbers, all "compact" (no room for more integers in between them), and with a simple mathematical transform, you've given yourself space for a second equally-sized infinity of integers to fit neatly inside of them.



While this comparison highlights that yes, there are very many possible Go games, it's really apples to oranges.

The real comparison would be the number of pieces on a Go board (19x19 = 361) compared to the number of atoms in the universe. And then to compare the number of possible board positions in Go with the number of possible atom positions in the universe, and in this case I think the universe wins...


Especially considering all Go boards exist _within_ the universe!


Go is very complex, and the fact that DeepMind could tackle this complexity is a huge technical achievement. No minimax-based AI could have tackled such a large state space.

However, other problems have even larger state spaces. Imagine writing an AI which read project Euler problem descriptions (in English) and output working code (in some given programming language). Keep outputs limited to 100-line scripts, max 80 characters per line.

There's roughly 100 usable characters in ASCII, so the possible space of 100-line programs is roughly:

(10^2)^(80 * 100) = 10^16000.
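
Checking that figure takes one line of Python:

    from math import log10
    print(log10(100) * 80 * 100)  # 16000.0, i.e. ~10^16000 candidate programs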

You could simplify this by having the AI work with predefined tokens rather than individual characters, but it's still a vast amount of combinations. Then consider 1000-line or 10000-line programs, and you see how high a mountain AI still has to climb. Humans are able to "compress" this state space via conceptual reasoning, which is much more complex than the "pattern recognition" many deep learning researchers are chasing.

(See "Introduction to Objectivist Epistemology" for more on how humans think in concepts - I'm planning to write more at some point on how this book shows where the practical limits of AI lie).


> the total number of atoms in all those universes combined would be close

Close?? Wouldn't it still be roughly 10 billion times smaller...?


When you're dealing with numbers on the order of 10^80 to 10^170, I think you're entitled to call that "close".


The ratio is 10^90 which is not small. The subtractive difference rounds to 10^170. In either case, I think it's fair to say that 10^170 is unimaginably larger than 10^80.

But if differences become so large we cannot imagine the differences, then we could imagine there are no differences at all, so ... psychologically/subjectively there would be no difference?


The comparison in question is between 10^170 and 10^160 (= 10^80 * 10^80). So "just" a factor of 10^10.


Scott Aaronson's blog post on large numbers is also a very interesting read:

http://www.scottaaronson.com/writings/bignumbers.html


Ha, trip down memory lane:

> And in Go even an amateur human can still rout the world’s top-ranked computer programs


I think that the context matters. While his actual statement is now false, he was really talking about a "solution" to Go, i.e. an algorithm that can compete with any opponent (and back then, Go programs couldn't "even" beat humans). Google's algorithm is (probably) nowhere near a "solution" to Go, but an algorithm that can beat currently-living human opponents. I.e. it is quite likely that a rather simple algorithm would still beat Google's program, only that people's minds don't (or can't) employ that algorithm when they play.


Very enjoyable.

However, I think I found a mistake:

"For example, ‘5 tetrated to the 3’ means 5 raised to its own power 3 times, or 5^5^5"

(I am paraphrasing slightly here because the essay uses an image to show 5^5^5 in normal notation (http://www.scottaaronson.com/cgi-bin/mimetex.cgi?5^{5^5}))

However, shouldn't this be 5^5^5^5, if we're raising 5 to its own power three times?


Not quite - think of it this way:

5 x 3 = 5 + 5 + 5

5 ^ 3 = 5 x 5 x 5

5 t 3 = 5 ^ 5 ^ 5

Where t is tetration. Each one counts 3 fives.
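
In code form, a tiny sketch (tetration associates right-to-left):

    def tetrate(base, n):
        """Evaluate base ** base ** ... ** base with n copies of base."""
        result = 1
        for _ in range(n):
            result = base ** result
        return result

    print(tetrate(2, 3))  # 2 ** (2 ** 2) = 16
    print(tetrate(2, 4))  # 2 ** 16 = 65536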


Makes sense. Thanks.


Nope, see the description of tetration on Wikipedia.


Here's another great one - and ballpark calculations point to it being likely true:

"..the number of atoms in a grapefruit is about equal to the number of blueberries you would need to fill up the entire sphere of planet Earth." [https://capitolhillscience8.wordpress.com/2012/10/03/just-ho...]

Edit: well, except that the Earth is shaped more like an oblate spheroid [https://en.wikipedia.org/wiki/Figure_of_the_Earth]


Actually the Earth is nearly a perfect sphere, far more perfect than almost any sphere we interact with in day-to-day life. The equatorial bulge is 20 miles or so, which is approximately 0.33%. Technically an oblate spheroid, but barely.


There's a great Isaac Asimov essay on this topic: http://chem.tufts.edu/answersinscience/relativityofwrong.htm


Great find, thanks!


And for comparison, the variation in surface height relative to sea level is over 10 miles (6 miles down to the bottom of deep ocean trenches; 5 miles up to the top of mountains)


The earth is more round than a pool ball; it'd be a perfect sphere to any human eye at virtually any scale. The bulge is too small to notice.


whereas a grapefruit is a perfect sphere?


That actually makes the Earth seem small to me. I wouldn't have blinked if someone had told me the blueberries would fill up a sphere the size of the Solar System. Just shows how hard it is to visualize these numbers.


The earth is small. If you build a scale model of the solar system the size of a football field, with the sun at one end and Neptune at the other (Pluto has been laid off as a planet), then the sun will be about the size of a ping pong ball and the earth will be the size of a poppy seed (and it will be about ten feet from the sun). Jupiter is about the size of a pea at this scale. Alpha Centauri is about four miles away.

And not only do you live on a poppy seed, you live on a very thin layer on the surface of this poppy seed. Blow the poppy seed up to the size of a basketball and the habitable layer is about the thickness of a sheet of paper.


> Alpha Centauri is about four miles away.

Actually, around 500 miles.

Distance to Alpha Centauri: 4.37 light years = 276,364 astronomical units.

In your diagram, the Earth is 10 feet from the Sun. Multiply by 276,364 to get 2,763,640 feet, or 523 miles.

The scale jump from distances around the Solar System to the next closest star is mind boggling.


Wow, you're right. That'll teach me to try to do the math in my head.


Planets seem like a really big waste of mass in that sense. Computational dust in a cloud around a sun would be a more efficient allocation.


"The Integral Trees" by Larry Niven is a novel in which people live in a free-floating cloud of atmospheric gas that is gravitationally stable in a multi-star system. They live on enormous trees with canopies on each end, which are blown in opposite directions by the winds (so they look like the integral sign).


That really depends on your quality metric. If you care about computational speed (because there's only a finite amount of time before the heat death of the universe) then the speed of light starts to be a limiting factor, and concentrating all the computation in a small space makes sense.


But pretty great in surface area. Success is all about the metric you choose.


Ha! Relative to what goal, exactly, are planets a waste of mass?


To the goal of sustaining intelligent life. I'm being semi-facetious of course, but scifi authors and futurists have discussed dismantling the planets to build more mass-efficient structures for a long time.


I'm curious how the author found the link to this - I looked at Norvig's home page but could not find it, which made me wonder how many more goodies he's got up there that we don't know about!


https://www.google.co.uk/search?q=norvig.com&oq=norvig.com&a...

There's a few interesting things on there!


It's in the RSS feed.


Some search engines have a "link:" modifier that you might already be aware of. That can help you find paths leading to a particular page.


And it's even smaller if compared to Graham's number. Every time I try to imagine that one it feels like I'm going to a mental asylum.


Related only to large numbers, but is there some theory about generalizing and extending our usual mathematical operators +, ×, and ^ (power)?

+ applied N times becomes ×N

× applied N times becomes ^N

^ applied N times becomes ...?

etcetera

And would such a theory have any practical use?


Knuth's up-arrow notation is one of a few ways to address the addition->multiplication->exponentiation->tetration->... extensions: https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation

As for practicalities, the mathematician in me will let the scientists deal with that.
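
A recursive sketch of that ladder (the function name is mine; it blows up so fast that only tiny arguments terminate):

    def hyper(level, a, b):
        """level 1 = addition, 2 = multiplication, 3 = exponentiation,
        4 = tetration, and so on."""
        if level == 1:
            return a + b
        if b == 1:
            return a
        return hyper(level - 1, a, hyper(level, a, b - 1))

    print(hyper(2, 3, 4))  # 3 * 4 = 12
    print(hyper(3, 3, 3))  # 3 ** 3 = 27
    print(hyper(4, 2, 3))  # 2 tetrated to 3 = 2 ** (2 ** 2) = 16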


I like the description of Graham's number (by Graham) using the up-arrow notation https://youtu.be/GuigptwlVHo?t=31


> And would such a theory have any practical use?

Yes, surely. In computability theory you have the famous Ackermann function[1]. It is actually an operator extension, like you just described. It is important because it grows faster than exponentially, but is still computable (unlike e.g. the busy beaver function).

[1]: https://en.wikipedia.org/wiki/Ackermann_function
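
A direct transcription of the usual two-argument Ackermann-Peter definition, memoized so the small cases finish:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61; ackermann(4, 2) already has 19729 digits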


Given that Graham's Number is an (over-)extension of the Ackermann-function that means that it's still computable, correct?

Given the series leading up to Graham's Number, G = g_64 (g_1, g_2, ...), BB(n) will outgrow g_n, right? If so what's the smallest n such that BB(n) > g_n?


After a bit of research I found that there is a proof [0][1] that BB(12) > g_1. But that's still a ways from g_2, let alone g_64.

[0]: "A Lower Bound on Rado's Sigma Function for Binary Turing Machines" by Milton Green (1964)

[1]: https://en.wikipedia.org/wiki/Busy_beaver#Known_values_for_....


Graham's number comes from such an extension, and is about 64 such operations deeper. It's useful in the sense that it provides an upper bound for some proof that hasn't yet been proven for ALL numbers.

http://waitbutwhy.com/2014/11/1000000-grahams-number.html



If 12 megapixels can produce 10 to the power 86696638 images, and we came up with a way of enumerating those images, could we then build a function that, given any one of those images, returns the index of that image within reasonable time on current hardware? i.e. "you have just taken the 3999999987493th image"?


A friend of mine made a tool that did exactly this, but with Haikus instead. He had a dictionary of syllables, and then just iterated through the syllables (following the rules of a haiku). You can type in a haiku and find its index, or just iterate through the indices to see the (mostly nonsense) generated haikus.


Yes, but it wouldn't save any space. As a thought experiment, think of it this way:

How would we enumerate all these several gazillion image possibilities?

Well. Let's say number one is all black. Every pixel and every channel is zero in its value. And let's say the last image to be enumerated is all white: 255 for each pixel and each channel.

Every conceivable image is created in between these two ends. For example, image two is all black, but the last pixel has a value of 1 instead of 0 for its value channel.

Image 1840274917 has pixel 27581 slightly reddish.

Hey, wait a minute, you've just created an image format for describing the data within the image! The only space you're saving is that (given this format) you save space on darker images, because they're likely lower in the sequence.

But that's only because this specification demands that each image be the same exact size and can make assumptions based on that. A lossless format like PNG would be able to perform much better over a wider range of images. (Eg all white will be huge in our system, but cheap in PNG)
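
A sketch of exactly that scheme (function names mine): treat the raw channel bytes as one enormous base-256 integer, so image -> index and index -> image are both cheap:

    def image_to_index(channels):
        """channels: flat list of 0-255 values (R, G, B, R, G, B, ...)."""
        return int.from_bytes(bytes(channels), "big")

    def index_to_image(i, length):
        return list(i.to_bytes(length, "big"))

    px = [0, 0, 0, 255, 0, 0]               # two pixels: black, then red
    i = image_to_index(px)
    print(i)                                 # 16711680
    assert index_to_image(i, len(px)) == px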


This numberphile video discusses a similar concept:

The 'Everything' Formula - http://youtu.be/_s5RFgd59ao


It's the number represented by that file's full binary value (all the bytes concatenated together).


Not only could you enumerate all images, you can also enumerate all books. The Library of Babel does this, and you're looking at the particular page where this exact paragraph is written, for Hacker News by dackerman, no less.

https://libraryofbabel.info/bookmark.cgi?ci.pnvgmzgyrce29


The unintuitiveness of how many combinations you can get from such a small amount of items is why the birthday "paradox" is so interesting.

https://en.wikipedia.org/wiki/Birthday_problem
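
The standard computation, for the curious:

    def birthday_collision(n, days=365):
        """Probability that at least two of n people share a birthday."""
        p_all_distinct = 1.0
        for k in range(n):
            p_all_distinct *= (days - k) / days
        return 1 - p_all_distinct

    print(round(birthday_collision(23), 3))  # 0.507 -- better than even with 23 people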


There is a theory that states the number of atoms (well, electrons) in the universe is exactly 1.

https://en.wikipedia.org/wiki/One-electron_universe


But electrons interact with each other, don't they? How'd that work?


The electron goes forward and backward in time and interacts with itself. (In quantum field theory you can replace a positron with an electron going backwards in time). Regrettably the theory does not work, since it would predict the same number of electrons and positrons in the universe.


Feynman later proposed this interpretation of the positron as an electron moving backward in time in his 1949 paper "The Theory of Positrons".[2] Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past."[3]

This sounds like SciFi material.


The positrons might be hiding in the protons!


Physics is sufficiently weird that I can't tell if this is just spitballing or an actual theory I haven't heard of before.

Edit: holy crap, these people think nucleons are made of muons that are made of electrons and positrons: http://wlsprojects.com/structure-inside-proton.html . Solves the matter vs antimatter problem, I guess.


The explanation I gave was the one used by Wheeler in a famous phone call with Feynman.


It's the same electron interacting with itself at different times is what I think Wheeler would argue

Disclaimer: I am not a physicist, but I love this theory


The rules probably aren't the same as they are in Back to the Future.


I think he's comparing apples with oranges: maybe 10^80 is not so big, but the number of configurations of the 10^80 atoms is huge.


That is entirely the point of the article.


It would be cool if there were more talk about properties we have observed of these configurations/which are more probable at any given instance, and efficient ways of computing such.


I think that was his point.


Reading this made me think: how can you know the number of atoms in the universe (observable, smellable, touchable, whatever)? So I went and checked on Wikipedia, and of course it's just a guesstimate based on assumptions and hypotheses.

This again reminds me how science today is no different than religion. Of course there is nothing wrong with having the number based on assumptions, but take it out of the field of study and suddenly it is a fact :)

Like in this article and the discussion on HN, where it's just the number and its name, and the fact that this number is just somebody's wild guess is totally ignored. Same with Jesus: he exists and he loves you, and the fact that it was just somebody's idea is totally left out.


https://en.m.wikipedia.org/wiki/The_Library_of_Babel

This is a fun, thought-provoking story about a large combination of things.


Assuming space is quantized at the Planck scale, the number of atoms in the universe is tiny compared to the number of space points.

Assuming the many worlds interpretation of quantum physics is true, then the number of atoms includes all combinations of locations, momentum, etc., and the real number of atoms is vastly vastly greater than combinations of just about anything else you might imagine. (Except for combinations of configurations of quantized spacial points!)
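
A rough order-of-magnitude check of the first claim, using my own round numbers:

    from math import log10, pi

    planck_length = 1.6e-35                    # meters
    ly = 9.46e15                               # meters per light year
    r = 46e9 * ly                              # observable-universe radius, ~46 Gly
    volume = 4 / 3 * pi * r ** 3
    print(log10(volume / planck_length ** 3))  # ~185, dwarfing the ~80 for atoms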


I'd argue the natural scale for thinking about the number of combinations is the log scale - in other words, the entropy. Entropy, like the number of atoms, is an additive property of a system.

From this point of view this article is inappropriately comparing two scales. It's nothing more than saying "e^x >> x for big x".


If you believe in the axiom of choice (I don't) then you can imagine a process which has more degrees of freedom in it than anything at all.


It is an axiom; it is not something that you believe in. You assume it and you prove stuff with it, or you don't assume it and you prove stuff without it. If you are really good, you show which theorems require it.

You can believe or not in some physical reality to math (I happen to), and then the axioms do become somewhat more than just logic statements, but that is different.


> You can believe or not in some physical reality to math (I happen to)

I'm not so sure. Is there any evidence that the value of any physical quantity is an irrational number?

Math was originally inspired by physical reality, but I'm not sure it's so closely related that the concept of the Axiom of Choice being "true" even means anything.


You don't believe that the product of non-empty sets is non-empty?


You could always be an intuitionist/constructivist, and enjoy "choice" as a theorem. It seems completely reasonable to give up uncountable infinities.

http://arxiv.org/abs/math/0404335


The parallel you drew between such a day to day object and such an important matter is interesting. Do you know of another comparison like this?


Such a small place, this universe.

Interesting POV.


It actually boggles the mind that a 12-pixel image has more combinations than atoms in the universe (!!!).


Well, I think the example with the 12-pixel image is a bit misleading, because the picture focuses the reader's attention on those 12 pixels, while skipping over the fact that each pixel can have ~17 million colors. A more appropriate representation would be a cuboid made of 3x4x24 blocks.


As a rule of thumb, it looks like 59! is about the number of atoms in the universe.

59! =~ 1.38 × 10^80
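
Quick verification:

    from math import factorial, log10
    print(log10(factorial(59)))  # ~80.14, i.e. 59! is about 1.4e80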


but atoms of universe are infinite...guy is so wrong


You should look up what "observable" universe means.



