How to fit any dataset with a single parameter (arxiv.org)
217 points by tambourine_man on Sept 29, 2021 | 146 comments



This reminds me of a joke idea I read somewhere: You can encode the entire Encyclopaedia Britannica using a single mark on a simple stick!

Just encode the text as ASCII codes after the decimal point of a zero (0.656168... etc.). Then just mark that ratio of the stick's length and you're done...
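
A minimal Python sketch of the joke (the three-digit zero-padding per character is my own addition, so that decoding is unambiguous):

    def encode(text):
        # each character becomes its 3-digit ASCII code; the result is the
        # fractional part of a number in [0, 1), i.e. a mark on a unit stick
        return "0." + "".join(f"{ord(c):03d}" for c in text)

    def decode(mark):
        digits = mark[2:]  # strip the leading "0."
        return "".join(chr(int(digits[i:i+3])) for i in range(0, len(digits), 3))

    mark = encode("Britannica")
    print(mark)          # 0.066114105116097110110105099097
    print(decode(mark))  # Britannica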


My shot at the calculation for fun :-)

Stick encoding with graphite resolution (0.335 * 10^-9 meter) [1]: "Uti" (31 bits -> 3 UTF8 characters)

Stick encoding with Planck resolution (1.616255 * 10^-35 meter) [2]: "Utility ought " (115 bits -> 14 UTF8 characters)

Complete first sentence: "Utility ought to be the principal intention of every publication." [3]

It appears that this storage scheme may not be suited towards the safekeeping of literature.

[1] https://www.wolframalpha.com/input/?i=floor%28floor%28log_2%...

[2] https://www.wolframalpha.com/input/?i=floor%28floor%28log_2%...

[3] https://digital.nls.uk/encyclopaedia-britannica/archive/1441...
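
If you'd rather check the arithmetic without clicking through, the same floor(log2(length / resolution)) estimate as in [1] and [2], in a few lines of Python:

    import math

    STICK = 1.0  # meters

    for name, resolution in [("graphite", 0.335e-9), ("Planck", 1.616255e-35)]:
        bits = math.floor(math.log2(STICK / resolution))  # info in one mark
        print(f"{name}: {bits} bits -> {bits // 8} 8-bit characters")
    # graphite: 31 bits -> 3 8-bit characters
    # Planck: 115 bits -> 14 8-bit characters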


"Simply" scale up the stick to your desired minimum level of precision!


Stick scaling left as an exercise for the reader.


You might have to wait for the universe to expand a bit more.


Fortunately, waiting is easy :)


I would look more into the limitations of the read tech (how accurately can we measure the stick length and the mark location) rather than inherent limitations of the medium. Very few technologies reach the actual theoretical limitations of the materials they are made of.


Straightforward length measurement (by interferometry) has a resolution limited by the wavelength of light that can be generated with lasers, so up to the near ultraviolet, at a few hundred nanometers.

Resolution smaller than a wavelength can be achieved with vernier techniques (like in a caliper), but those require a pair of light sources with precise frequency/phase relationships between themselves, which are difficult to make at such high frequencies, so it is hard to improve the resolution much.

I have not looked to see if there has been any progress in recent years, but I would guess that a very approximate limit for the resolution of length measurement would be around 100 nm. So measuring the length of a 1-meter stick might provide up to log_2(10^7) bits, so about 23-24 bits.

Of course, any temperature fluctuation would change the length of the stick by much more than the resolution.

However that can be avoided by encoding the information not in the absolute length, but in the ratio between the lengths of 2 segments marked on the stick.

No matter what, it is possible to write many more bit symbols on any stick than it is possible to encode in the measurements of one or a few lengths marked on the stick.

That is because halving the size of a bit symbol doubles the quantity of information written on the stick, while halving the length corresponding to the measurement resolution provides just one extra bit of stored information.
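
A quick numerical illustration of that linear-versus-logarithmic gap (the symbol sizes are arbitrary):

    import math

    STICK = 1.0  # meters

    for size in [1e-3, 0.5e-3, 0.25e-3]:   # symbol size / measurement resolution
        written = STICK / size              # bits written as marks along the stick
        measured = math.log2(STICK / size)  # bits from measuring one length
        print(f"{size * 1e3:.2f} mm: {written:.0f} bits written, "
              f"{measured:.1f} bits measured")
    # halving the size doubles the first number but adds only 1 to the second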


> (115 bits -> 14 UTF8 characters)

Not exactly. If the 115 bits are the hash/retrieval key of the actual content, then that can be a lot of information. You just have to have a big enough DB.


I don't see this as a joke, but a radical and important point.

Reality, in being geometrical, is infinitely informationally dense (with a discrete conception of information).

This distinction between geometrical space and time, and discrete algorithmic computability is unbridgeable.

And hence there is an extremely firm footing on which to reject: AI, brain scanning readers, teleporters, etc. and most sci-fi computationalism.

Almost nothing can be simulated, as in, realised by a merely computational system.


I just wanted to add that reality being a non-discrete geometric thing is still an assumption - we don't know for sure that it isn't discrete and a lot of quantum stuff points more toward a discrete reality than a continuous one.

So assuming continuous space/time and discrete information, I'd agree -- but for all we know, space/time aren't continuous and just appear that way to us. We don't know for sure that it's discrete either, but at the least I'd say it's solid evidence that stuff like brain scanning is definitely in the realm of possibility.

This answer from the Physics StackExchange nicely covers how time/space could appear continuous to us even if they are in fact discrete at lower levels. Also some interesting discussion in the other answers https://physics.stackexchange.com/a/35676


I think for my purposes defining continuous = unmeasurably discrete produces the same results.

Ie., there is an irreducible geometrical continuity in the sense that no discontinuity can ever appear. The state density is maximal.

Via this route we reproduce the same point: computationalism/simulation-'ism' is then just the thesis that computers qua measurably discrete systems can realise dense, unmeasurably discrete systems.

This can be shown to be impossible with much the same argument: spatial and temporal geometrical properties obtain in virtue of dense discreteness, and fail to obtain at measurable levels.

The key property of continuity is its irreducibility to measurably discrete systems. That irreducibility isn't, however, limited to continuity.

Wolfram makes this point about the failures of reductionism in a perfectly discrete context, ie., that no CA can compute a CA whose complexity is greater than it can summarise.

I prefer to press a continuous angle: our best theories of all of reality are continuous and geometrical. That energy levels are discrete in bounded quantum systems has almost nothing to do with the indispensability of continuous mathematics in every known physical system -- including that very bounded wavefunction.


I agree that we can’t reject the hypothesis that reality is continuous (and, even if spacetime turns out to be discrete, it seems hard to imagine that the amplitudes of the wavefunction(s) have finitely many possible complex values, though I suppose we can’t rule it out)

I disagree that this necessarily implies any difficulty for the possibility of brain scanning and AI.

Just as things sampled faster than the Nyquist frequency (or twice it or whatever) of a, uh, band-limited thing can be perfectly recovered (I mean there's still discretization of the amplitudes, but I hear this can also be handled), I don't see why, uh, arbitrarily high frequency (in space and time) should be necessary in order to model the behavior of a brain to the point of long-term indistinguishability.

(That being said, I don’t particularly expect whole brain emulation to ever be achieved, I just don’t see “spacetime is continuous (or well approximated as continuous)” as being a strong argument for it being impossible.)

I’m not sure what you mean by computationalism.

If you mean the idea that the way the world works is computable in the abstract sense (not requiring any practical bounds on the computational resources needed), then the idea that the world is discrete and finite, merely with extremely fine grains, poses no issue for computation in that abstract sense (just make the imaginary computer even bigger).

If you mean like, an accurate simulation of the past of the world being run within the world, yeah that doesn’t work.


Argument:

1) Intelligence requires spatiotemporally acting on, and being acted on by, a spatiotemporally dynamic environment.

2) Dynamical spatiotemporal properties of the body enable (1)

3) These properties are continuous features of the body (e.g., organic plasticity)

4) continuous properties are irreducible to measurably discrete ones

5) computers are systems which we build that are measurably discrete

therefore,

C) computers cannot be intelligent


Simulation doesn't mean exact simulation. The value of a continuous property is noise after a certain decimal and can be disregarded for the purposes of simulation. I also believe that 1) is false.


If meat is magical, just build computers out of meat then.


Sure, I agree.

The issue is that the word "computer" means not "device we have made" but "universal Turing machine".

Ie., a computer is any system which realises a function from the Naturals to the Naturals.

Physics barely, if at all, has any use whatsoever for those functions. It is a very important point.

Computer scientists (ie., discrete mathematicians) are not the people who are even able to describe, engineer and build whatever is needed for an AI -- if, as I claim, continuous dynamical properties are needed.

(As, for example, they are needed by pretty much every system.)


Computer scientists do study computers that use real numbers; for example, it's known that such computers can solve all problems in #P in polynomial time. Many other areas of computer science also use plenty of real analysis.


It seems at point 3 that organic plasticity requires features smaller than a Planck length, is that right?


Whatever the length of the smallest discrete unit of spacetime, say, L -- then continuous spatial and temporal properties are those which have a state density O(1/L^2).

This may seem weird, and indeed, it's far less weird if you just say "continuous".

But here's an intuition: spatio-temporal continuity is "scale-free" in the sense that stuff happening at the sub-proton is affected by stuff happening at the galactic.

Thus reality has to be able to "zoom" from the sub-proton to the galactic.

In the case of organic plasticity, I do think that macroscopic effects which are whole-body distributed (including, e.g., thoughts) have to drive protein expression at the sub-sub-cellular level.

Consider simulating that with a low-state discrete computer: it is many orders of magnitude more data than a planet-sized computer could store and many more years than the lifetime of the universe (consider the number of molecules to store, and their interaction effects from whole-body down).

Running operations at anything in the nanoseconds makes this simulation impossible. It simply does need to be much closer to O(1/L^2).


I don’t think claims 1 or 4 are justified (where claim 4 is interpreted in the relevant sense), and I’m not sure quite what you mean by claim 2.


I think there are at least two things wrong with this take:

One is that I don't think it follows from the premise that the continuity of the physical world precludes AI, brain scanning, etc. Even if the physical world were continuous (likely not, see below), an arbitrary degree of approximation could be attained, in principle. At the very least I would not call the footing "firm".

The second is that the universe is very likely not continuous anyway. The Bekenstein bound[1] puts an upper limit on the number of bits of information a region of space may contain. If the ruler tickmark were either measured or localized to the precision required to encode the information, the information density would cause it to collapse into a black hole. This would happen once your measurement needs to be about as precise as a Planck length, which would allow you to encode about 115 bits of information with your tickmark.

(This in and of itself is independent of the fact that you would need to construct the ruler out of objects that the universe permits; your ruler tickmark would need to be made of and measured with discrete fundamental particles, which by their very nature are quantized.)

[1] https://en.wikipedia.org/wiki/Bekenstein_bound


Or use a larger stick. Every doubling of the stick length gives another bit of information. A stick of one light-year length would add about 53 bits.
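
Quick sanity check, taking one light-year as about 9.46e15 m:

    import math
    print(math.log2(9.46e15))  # ~53.07 extra bits over a 1-meter stick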


The main point I'm making is that the information density of the physical world is not unlimited as GP suggests.

But I think that example just shows how few bits you can really get out of the exercise!


In theory, pi has infinite digits. You could publish a book of a trillion digits of pi and barely scratch the surface: in fact, you would have published precisely 0.00000% of all digits of pi.

In practice, you "only" need ~42 digits of pi to draw a circle spanning the entire known universe (diameter of 8.8 * 10^26 m) and it will deviate from the ideal circle by less than the size of a proton (0.8 * 10^−15 m).

Having a theoretically infinite precision does not mean that it makes a measurable difference.
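
The 42-digit claim is easy to verify with mpmath (taking 8.8e26 m for the diameter and 0.8e-15 m for the proton radius, as above):

    from mpmath import mp, mpf

    mp.dps = 60
    d = mpf("8.8e26")                # diameter of the observable universe (m)
    proton = mpf("0.8e-15")          # proton radius (m)

    pi_42 = mpf(mp.nstr(mp.pi, 42))  # pi rounded to 42 significant digits
    error = abs(mp.pi - pi_42) * d   # error in the resulting circumference
    print(mp.nstr(error, 3))         # ~5.3e-16 m
    print(error < proton)            # True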


Every number has infinite digits, or can be made to have infinite digits for a given representation, but that's not the same as a number having an infinite amount of information. Pi represents a finite amount of information, as opposed to, say, a number like Chaitin's constant, which represents an infinite and irreducible amount of information:

https://en.wikipedia.org/wiki/Chaitin%27s_constant


You are right about information, but the comment you replied to is still right about precision.


I used to think there must be something 'special' to brains to distinguish us from computers. There isn't. Brains encode finite amounts of information (quantum mechanics seems to imply bounded local information). We are a huge information network ourselves -- that's what consciousness is (with some added bits like self-identity and various particular structures that dictate the character of our experience).

But that doesn't mean brains aren't special -- it means brains are special and computers are special. Even more: it seems to imply computers, AI, etc. can be as special as ourselves, sentient, and perhaps even more special in ways we haven't realized yet.

It's difficult to even imagine a physical theory with unbounded local information. It seems to open the possibility of crazy things like hypercomputation, which do not seem very well defined. (For example: at every time 1 - 1/n seconds (n > 1) from now, flip a switch ON/OFF. What state will the switch be in at t > 1 s? And at exactly t = 1 s?)

Note: while information and information flow are themselves bounded (hence no hypercomputation), I don't know of any obvious objections to continuous time. (I'm not sure the continuity of time has any profound implications.)


> that's what consciousness is

How do you go from computation to feeling a toothache?

It's like telling me a cucumber is really a Porsche (but worse).


Who says neural networks aren't in pain when their fitness (reward function) is subtracted from? Or happy or orgasmic when the function is bumped up?

Pain/pleasure seem to have an easy enough analogue in something like a reward function, but what really gets me is colors. I feel like, if honestly considered, the word "color" is all that's necessary to disprove materialism. Color is. How? Dunno.


Perceptions and feelings are real: you both experience them and could measure their signals in the brain, if you had the proper tech to do that. And filtered, processed perceptions exist as well. In a "pipeline" in some brain -- no matter if biological or artificial -- somewhere in the middle you have processed signals of sensor input that have a complex meaning with only some relation to the actual physical world "seen" by the sensors. Things like color and motion seem to be among those; for an intuitive understanding, it seems very close to the distill.pub analyses of what some specific middle-neurons in a convnet see.


> It's difficult to even imagine a physical theory with unbounded local information. It seems to open the possibility to crazy things like hypercomputation, which do not seem very well defined. (For example: every 1-1/n seconds, (n>1) from now, flip a switch ON/OFF. At what state will the switch be at t>1s? An at exactly t=1s?)

I agree with the latter, but the former? Classical Newtonian mechanics is easy enough to imagine.

What baffles me a bit is that quantum mechanics seems to be linear, but we seem to see chaos in the real world. (And exactly that (mathematical) chaos is also what the article exploits.)


Chaos arises in very simple (Newtonian) systems, e.g. the double pendulum[1].

1. https://en.m.wikipedia.org/wiki/Double_pendulum


Yes, chaos arises in Newtonian systems just fine. They are non-linear. But quantum mechanics is linear.

See eg https://physics.stackexchange.com/questions/33344/is-the-uni... for a discussion, and https://www.scottaaronson.com/papers/island.pdf for a longer treatment.


QM being linear doesn't make the universe linear. Schrodinger's equation doesn't work across measurements (or universes.) And if you try to weasel out of that by saying we will describe the multiverse instead, you have just created the unexplainable phenomena of being in a specific universe, with no way to accurately describe the process behind the random choice of the universe you reside in.


> unexplainable phenomena of being in a specific universe

One universe has the "observer" (a bunch of variables, really) in one state, another has it in a different state. Those variables encode some information about themselves, so indeed in different universes the different "versions" of the observer are different, and each perceives it as the universe randomly deciding to choose their version of existence. So what?

We're parts of the physical reality, so trying to describe physics from the point of view of an out-of-universe observer, while psychologically attractive, is ultimately futile: in QM, such an approach breaks very quickly.


Measurements (and 'universes') are part of interpretations of QM. Not sure they are properly part of QM. I see what you mean though.

I'm not sure whether eg the Born rule is enough by itself to explain how (non-linear) chaos arises from linear QM.


the issue isn't brains

no algorithm running on a cpu can move a muscle --

it is precisely that movement is a spatiotemporal property which means no turing machine can realise it

movement isn't a symbolic operation



I would go a step further and look at the research on medical devices for spinal-damage patients, which try to literally control the patients' existing muscles through a computer: the concept of the neuroprosthesis and https://en.wikipedia.org/wiki/Functional_electrical_stimulat... -- people working to design algorithms on a CPU to move muscles, as the parent post words it.


In this very narrow sense that you are defining, no brain can move a muscle either. All it can do is send signals. A computer can do the same.

If you are making a broader claim about the limits of hardware vs. wetware then you will need to clarify things a bit further.


This is such a strange claim to make, I can't even tell what you could have meant. We are surrounded by simplistic mechanical computations moving muscles, and have been for more than a hundred years, since the industrial revolution at least.


This is not just a false idea, but an obviously false one, contradicted by all the laws of physics. If you were right, then any finite volume would contain an infinite amount of information, which would mean it has infinite entropy, temperature, and energy.

Also, by the same logic you apply to space, you could say that time is infinitely divisible, so you could create a computer which finishes an infinite amount of steps in a finite amount of time.


There's an interesting paper[1] that argues that real numbers aren't physical, for the precise reasons you stated. That is, only the subset of real numbers that contain a finite amount of information (like constructive real numbers) are physically meaningful.

A consequence of this is that classical physics is not really deterministic: this is because, in general (ie. including chaotic systems), the evolution of a system depends on a set of initial conditions that are specified by full real numbers, impossible to measure with finite precision. So, the use of real numbers is hiding the indeterminism in the initial conditions, much like the function in this article encodes a dataset in a single parameter.

[1]: https://arxiv.org/abs/1803.06824


I think the precise definition of what exactly is physically meaningful is a bit more interesting, because pi is just as 'real' as 1 in some sense. You can't measure 1 m any more precisely than pi meters. A perfect square doesn't exist any more or less precisely than a perfect circle.

Basically, the universe can just as well be approximated in two different ways. One, it's all straight lines, and anything that looks like a circle/curve is actually a piece of a veeery many sided polygon if you look closely enough at it. Two, it's all curves of some curvature, and anything that looks like a straight line is actually a segment of a veeery large circle. The first perspective corresponds to taking the rational numbers as physical. The second one corresponds to taking pi as physical (and then it's unclear whether e is also physical, or which other transcendental numbers are, but 1 is definitely un-physical then).

Probably the constructive framework is the best way to express this concept, just starting from whatever constant we chose to define as the unit.


Note that π, like practically any constant or number normally found in mathematics, is constructive and contains a finite amount of information. What the author considers unphysical are the non-constructive reals: unfathomable numbers that can't be defined or written down in any finite number of words. It's hard to argue they play some role in our physical world: it would basically amount to metaphysics.

However, even if rarely used outside of proofs, they are the vast majority of the reals; in fact, there are only countably many constructive/computable numbers.


information here, ie log of a probability, is a continuous notion -- it is real-valued, not least because log is a transcendental fn --

i specifically said with a /discrete/ conception, geometry is infinitely dense (of discrete states)


In all physical theories, any finite system has a finite number of distinguishable states. So it is not infinitely informationally dense, especially when working with discrete bits of information.

Not to mention, the finer the distinctions between two states of a system, the more energy you need to distinguish them. So, the less impact these differences can have, unless the system is extraordinarily energetic (and even then, you end up in fundamental limits of energy per volume, like the Schwarzschild radius).

So again, there is no sense in which a finite part of the universe is infinitely dense.

Even worse for your argument, all currently known laws of physics use computable functions (the randomness in QM notwithstanding). So, by definition, all known laws of physics can be simulated by an ideal Turing machine (again, give or take some randomness in QM, depending on the interpretation you choose to believe in and on how you choose to simulate the QM system).


QM is even linear!


These are weird conclusions. Any attempt to measure “reality” gives some amount of uncertainty. The only way for this to lead to the relatively stable experience you perceive is if those small variations in measurement lead to relatively small differences in perception. In which case, you can truncate the resolution of your simulation and still get plausible results.

I assure you there are plenty of groups out there simulating systems that operate with similar densities (but lower volume) to the brain.


Even if space and time were continuous (which things like the Planck length would discredit), there are still discrete objects in that continuum.

Elementary particles, for example, are discrete. You could argue that they have continuous effects vis a vis the EM field and spatial positioning, but ensemble effects usually render that irrelevant at large enough scales.


I will note that even in QM, both space and time are considered continuous -- the Planck length is just the smallest measurable distance, but nothing in QM currently assumes that particles must be separated by an integer multiple of the Planck length (unlike spins, for example).

I believe there are some theories of quantum gravity that do rely on the idea that space-time is quantized in integer multiples of the Planck length, but these are far from definitive theories.

A much more relevant limit in terms of possible information density is Heisenberg's uncertainty principle, which essentially puts a limit on the maximum possible precision for any measurement.


How do you know that reality is 'being geometrical, is infinitely informationally dense'?

You might be interested in the Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound):

> In physics, the Bekenstein bound (named after Jacob Bekenstein) is an upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximal amount of information required to perfectly describe a given physical system down to the quantum level.[1] It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite. In computer science, this implies that there is a maximal information-processing rate (Bremermann's limit) for a physical system that has a finite size and energy, and that a Turing machine with finite physical dimensions and unbounded memory is not physically possible.

Lots of math works out well as a continuous approximation, e.g. the Navier-Stokes differential equations seem to describe fluids well on everyday scales. But we know very well that water is made of molecules, so we know that this particular continuous approximation will fail at small enough scales.

By the way, the closest thing we have come to for teleportation is cutting-and-pasting of quantum states. So no classical, digital computers involved there.


Others already noted how shaky your assertion "Reality, in being geometrical, is infinitely informationally dense" is.

Instead, let me throw out another extreme (but fun) view in the opposite direction: Finitism [0].

These guys not only reject the existence of the continuum; they reject all infinities altogether! In finitism, even discrete things exist only as finite objects (and if you further require them to be constructable, you get Ultrafinitism [1]).

So no infinite universe, no "set of all natural numbers", no "limits" and other ideals over infinite domains. Screw Platonism. Hello Wittgenstein (and Wolfram).

I don't know how far that theory can be taken in a practical sense -- most of the body of science is built on Platonism [2] -- but I have to say finitism does appeal to my CS heart and my earthly experience.

[0] https://en.wikipedia.org/wiki/Finitism

[1] https://en.wikipedia.org/wiki/Ultrafinitism

[2] https://en.wikipedia.org/wiki/Primitive_recursive_arithmetic


My view is indeed the opposite. It seems trivial to me that reality has actual infinities as seen from a discrete pov.

A trivial example: you can partition the environment into an infinite number of objects. And which partition scheme you choose is, in some sense, arbitrary.

Eg., "object: the edge of the glass", "composite: edges of glasses on the table", etc.

Reality admits an infinite number of such schemes, and also forecloses an infinite number (e.g., if "pen" = pen, then "paper" != pen).

I don't think one can meaningfully speak "of reality", i.e., provide a discrete linguistic/propositional account, that avoids these infinities.

I also think there is no meta-scheme, so one cannot even order (in terms of fundamentality) which scheme is 'the really real' one.

It's my view that the reason for this issue is that cognition is discrete but reality continuous. Since discrete aggregations aren't enough, likewise "aggregative models of atoms" aren't enough for chemistry.

An aqueous solution, just like a society, is much more than merely the sum of the properties of its members. When we partition the world with a discrete scheme, we introduce "emergent properties" which are only the "leftovers from our reductive failure".

The only problem infinity poses is to being realised by a discrete sequential process. That can never be actually infinite. But everything else can!


> And hence there is an extemely firm footing on which to reject: AI

This is like saying it's impossible to build a water pump without solving the quantum mechanical interactions that govern water flow.


Reality isn't quite infinitely dense though; it's hard to imagine how you could encode data geometrically over a distance smaller than the Planck length, for example. Reality just has a very, very fine resolution.


A quick Google shows 6.25e+34 Planck lengths in a meter. So, it seems impossible to measure and make the mark.

Edit: as a fraction of appended ASCII codes.


And if the mark were instead the removal of a single atom in a 40 cm length, the resolution would be even poorer... able to encode a single 30-bit number, by my very rough calculation.


I wouldn't say that the distinction is "unbridgeable". Keep in mind that we send packets over plenty of "geometrical" and seemingly infinitely dense analog channels all the time, like electricity over a copper wire or EM waves over the air.

The "mark on a stick" channel has a capacity like any other channel. If you're sending just one symbol, you could easily calculate the information capacity given a desired probability of bit-error.

Assuming you can put the mark in exactly the right spot, you can model the "noise" as a distribution over the values that the reader will measure. If you model this as `mark + zero-mean normal distribution` with a known variance, then your stick is just an AWGN channel.
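
As a rough illustration, using the Gaussian-input capacity formula with made-up numbers (a mark placed uniformly on a 1 m stick, read with 1 mm of zero-mean Gaussian noise):

    import math

    def awgn_capacity(signal_power, noise_power):
        # Shannon capacity per use of an AWGN channel, in bits
        return 0.5 * math.log2(1 + signal_power / noise_power)

    mark_variance = 1.0 ** 2 / 12   # variance of a uniform position on 1 m
    noise_variance = (1e-3) ** 2    # 1 mm standard deviation of read noise
    print(f"{awgn_capacity(mark_variance, noise_variance):.1f} bits per reading")
    # ~8.2 bits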


The gentleman is not suggesting that quantized information cannot be fully represented in a continuous function, but the opposite. That the continuous functions of life (his muscle example) and consciousness (AI example) cannot be simulated in any meaningful way by a discrete nomenclature or algorithm. The idea is impossible by definition and can be dismissed immediately.


As an aside, this idea that reality is "geometrical" leads to all kinds of fascinating paradoxes[1].

[1] https://arxiv.org/abs/1609.01421


Depends how you define simulation. There's no reason to assume that you need, or even that it's desirable, to simulate more than "almost nothing" in order to simulate what you'd want to simulate - even a whole universe can be simulated with ease if your required fidelity is low enough.

So, if you want to simulate every possible interaction in realtime, sure. But you can increase overall capabilities by making sacrifices along several axes: duration, speed, precision, persistence of changes from a generative baseline and lots more.

In other words, it depends entirely on your purpose for simulation.


How do you know reality is not computational?


It's computational again when you allow the computation to be precise enough for any practical purpose, which is not infinitely precise.


The existence of an analog reality is quite problematic, because it opens a can of worms -- e.g. hypercomputation becomes possible.


To your last point: "All models are wrong, but some are useful."


Indeed! I term it the 'Cardinality Barrier'.

It is gratifying to know that the mathematician Georg Cantor demolished AI a hundred-odd years before any engineer had thought seriously of it.


> Cantor demolished AI

Citation needed. Btw nothing in this thread has anything to do with AI. Perhaps you are using a very unusual definition of AI that programmers don't use? Or maybe I just fell for satire. It's really hard to tell on this site.


Something that I love about analog cameras and film: the "resolution" of film is infinite. Infinitely informationally dense as you say.


That's not really true in any sense. Film has grain size, and optics have a diffraction limit.


Planck length and the speed of light define the limits.

It's also a resolution of Zeno's Paradox.


I thought Zeno's classic paradoxes were pretty well dealt with by limits already?


I don’t think we can confidently say that space is discrete at the Planck scale.


Only if real numbers are a "real" thing.


> Reality, in being geometrical,

Prove it.


What you are saying is P != NP


Huh? What does that have to do with anything here?


>infinitely informationally dense

That can't possibly be true, because then there would be no point to space and time. If a single location could hold an infinite amount of information, then the rest of reality would be redundant.

>hence there is an extemely firm footing on which to reject [the Matrix]

This seems to be the opposite of the conclusion that your premise implies. I'm the one that doesn't believe in matryoshka simulated universes, but infinite information density is what would make it possible, no?

If we lived in a non-discrete universe, why would computation be unable to exploit it?


Spacetime is something that appears to you, the observer, and only to you. There is no universal shared perception of anything or any event, and therefore no common "reality" to try and suss out.

It is quite likely (rather obvious, really) that what we call reality is continuous in design, infinite in abstractions, relationships, and descriptions, yet at the same time cohesive in a single entity.

Those types of systems are not representable in computers.


>therefore no common "reality" to try and suss out.

You or I may be wrong about any specific aspect of reality, or fail to be aware of it.

However, we can be sure there is such a thing.

"Reality is that which, when you stop believing in it, doesn't go away." (Philip K. Dick)

It's kind of like the concept of agnosticism.


And of course in a footnote of the documentation for this encoding "Sufficiently precise measurement of this mark left as an exercise to the reader."


You reminded me of https://en.wikipedia.org/wiki/Gödel_numbering

> Each letter of the message is represented in order by the natural order of prime numbers—that is, the first letter is represented by the base 2, the second by the base 3, the third by the base 5, then by 7, 11, 13, 17, etc. The identity of the letter occupying that position in the message is given by the exponent, simply: the exponent 1 meaning that the letter in that position is an A, the exponent 2 meaning that it is a B, 3 a C, 4 a D, up to 26 as the exponent for a Z. The message as a whole is then rendered as the product of all the bases and exponents. Examples. The word 'cab' can thus be represented as 2^3 x 3^1 x 5^2, or 600.

Excerpt From: Frederik Pohl. “Starburst.”
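
The scheme is easy to play with in Python (sympy's prime and factorint do the heavy lifting):

    from sympy import prime, factorint

    def encode(word):
        n = 1
        for i, c in enumerate(word.lower()):
            n *= prime(i + 1) ** (ord(c) - ord('a') + 1)  # a=1 ... z=26
        return n

    def decode(n):
        exps = sorted(factorint(n).items())  # [(prime, exponent), ...]
        return "".join(chr(ord('a') + e - 1) for _, e in exps)

    print(encode("cab"))  # 600 = 2^3 * 3^1 * 5^2
    print(decode(600))    # cab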


That is from the novel "Hard-Boiled Wonderland and the End of the World" by Haruki Murakami[0].

[0]: https://everything2.com/title/Encyclopedia+on+a+toothpick


Martin Gardner illustrated this principle in "Paradoxes to Puzzle and Delight" back in 1982, and I'm sure he wasn't the first to think of the concept.


You jest, but this is almost exactly how arithmetic coding works.

If you arrange your symbols and contexts carefully, you can even use this as a technique for progressive or lossy compression -- i.e. the more accurately you specify the ratio, the higher fidelity your result.
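
A toy version of the idea, with a fixed two-symbol model and exact fractions (real arithmetic coders renormalize with integer arithmetic instead of carrying one giant ratio):

    from fractions import Fraction

    PROBS = {"a": Fraction(3, 4), "b": Fraction(1, 4)}  # assumed static model

    def encode(msg):
        low, width = Fraction(0), Fraction(1)
        for s in msg:  # narrow [low, low + width) to the sub-interval for s
            offset = Fraction(0)
            for sym, p in PROBS.items():
                if sym == s:
                    low, width = low + offset * width, p * width
                    break
                offset += p
        return low + width / 2  # any point inside the final interval works

    def decode(x, n):
        out, low, width = [], Fraction(0), Fraction(1)
        for _ in range(n):
            offset = Fraction(0)
            for sym, p in PROBS.items():
                if x < low + (offset + p) * width:
                    out.append(sym)
                    low, width = low + offset * width, p * width
                    break
                offset += p
        return "".join(out)

    x = encode("abba")
    print(x, decode(x, 4))  # 369/512 abba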


One theoretical objection to this idea is that distances measured have an accuracy limit of Planck length (1.616e-35 meters). So if your number needs more precision then "just mark" step can't be done.



You can do it more easily than that -- just make a stick of length .65168 meters. (Same idea, just the "full 1 meter stick" is defined elsewhere, not by the length of the stick.)


That’s way harder, because you have to cut a stick to write and have a meter ruler to read.


I remember reading about this as a kid in one of Martin Gardner's books, either "aha!" or "aha! Gotcha"


how big of a stick are we talking about here?


This reminds me of the "worst way to ask a user for a telephone number" UI, after one UI forced the user to select each digit in a 0-9 dropdown.

The specific one I'm thinking of just spat out a scrollable numeric string of pi, and made the user scroll until they reached the digits of pi that matched their phone number.

My (7 digit) phone number occurs after digit position 9_000_000, and occurs 21 times in the first 200_000_000 digits.
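
If you want to try this with your own number (8675309 below is a made-up one), mpmath will generate the digits -- though even 100k digits takes a moment:

    from mpmath import mp

    mp.dps = 100_010                      # 100k digits plus a few guard digits
    digits = mp.nstr(mp.pi, 100_000)[2:]  # digits of pi after the "3."

    number = "8675309"                    # made-up 7-digit phone number
    print(digits.find(number))            # first index, or -1 if not in range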


I think it is still not proven that Pi contains all possible strings.

https://math.stackexchange.com/questions/216343/does-pi-cont...


But it contains all possible phone numbers.

(Assuming that phone numbers are less than, say, 100 digits.)


I wonder if you gave away your phone number there? Or how many 7 digit numbers match those criteria?


There can't be a better way to get a date with a number theory nerd.


7 digits is barely a phone number in most of the United States anymore; most places outside of rural areas require 10-digit dialing and hide it behind carrier dial plans. I mean in the way a rural person can dial one, two, or three digits because all land lines in an area have the same area code, prefix, and possibly additional digits in common.


My phone number is already "out" in terms of being on the internet, and I can change it pretty easily if it became an ACTUAL security issue.

But it would be an interesting exercise to do, I guess.


Doubt it. OP effectively gave us 5+4 bits of entropy, whereas a 7-digit phone number probably has 23 bits.



Fun read. But if we allow ourselves as much precision as we need, we don't even need a parameter. Any constant that is a normal number should suffice. Such constants already contain every possible sequence of digits you could muster -- i.e., they already contain every possible dataset.

EDIT: I replaced "transcendental" with "normal" after reading Scarblac's comment below: https://news.ycombinator.com/item?id=28699622 -- many important transcendental numbers, including π (Pi), are thought to be (but have not been proven to be!) normal.


It's not true that any transcendental number would work. A transcendental number is a number which isn't the root of any polynomial with rational coefficients. This property doesn't imply that it contains all number sequences.

It hasn't even been proven that pi contains all number sequences.

See https://math.stackexchange.com/questions/216343/does-pi-cont... for more details.


You're right. I edited my comment. Thank you for pointing it out!


I don't believe just being transcendental is sufficient for that property.

E.g. I can't imagine that pi with all the '1' digits replaced by '2' isn't transcendental, but it clearly doesn't contain every sequence of digits.

I think you mean normal numbers.


You're right: normal, not transcendental.

IIRC, most real numbers are normal, and many important transcendental numbers are thought to be (but have not been proven to be) normal.

I edited my comment. Thank you for the correction!


> most real numbers are normal

Which means that most real numbers violate all present and future copyrights.


Right, but any such constant doesn't "encode" this information. To use a normal constant to encode information, you need to encode the location of the substring of interest. Generally, specifying the location of the substring of interest requires as many bits as the substring itself (unless there's an intimate relationship between the number and the substrings of interest). So arguably it's the location that encodes the data; the number is irrelevant (and why not encode the data directly into this location?).
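
You can see the pointer-size problem empirically with a random digit stream standing in for a normal number: the first occurrence of an n-digit target typically sits around index 10^n, so writing the index down costs about as many digits as the target itself.

    import math, random

    random.seed(0)
    stream = "".join(random.choice("0123456789") for _ in range(2_000_000))

    for target in ["7", "42", "123", "4817"]:
        pos = stream.find(target)  # virtually certain to occur in 2M digits
        print(f"{target}: index {pos} "
              f"(~{int(math.log10(pos + 2)) + 1} digits to write down)")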


This actually isn't known to be true afaict: https://www.askamathematician.com/2009/11/since-pi-is-infini...


You're right. I edited my comment. Thank you for pointing it out!


You could use the digits of pi as an end-of-file marker, which would indeed make it transcendental. ;-)


First, this is a fun implementation and I love it.

Second, you could just as easily embed an infinite-size dataset into an infinitely long binary string and say that you've reproduced your dataset with a "single" parameter! That's sort of what this is doing, with some extra steps.


This reminds me of the fun thought experiment of using /dev/random as storage. Given an infinite amount of time you'll find every file you need.


The index might need more bits than the file.


Considerably more in all probability :)


Infinite storage, horrendous seek time.


Your second point is what I first assumed this paper was doing before I read it.


Up until ARG_MAX is reached, or you run out of RAM that is... https://serverfault.com/a/844666


Now I see why the elephant. From the references:

> [2] “I’m not very impressed with what you’ve been doing.” As recounted by the famous physicist Freeman Dyson himself, this is how Nobel laureate Enrico Fermi started their 1953 meeting. “Well, what do you think of the numerical agreement?”, Dyson countered. To which Fermi replied, “You know, Johnny von Neumann always used to say: With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. So I don’t find the numerical agreement impressive either.”


The phrase “single parameter” is doing a lot of work in this statement.


intentionally:

> In addition to casting doubts on the validity of parameter-counting methods and highlighting the importance of complexity bounds based on Occam’s razor such as minimum description length (that trade off goodness-of-fit with expressive power), we hope that fα may also serve as entertainment for curious data scientists


>The purpose of this paper is to show that all the samples of any arbitrary dataset X can be reproduced by a simple differentiable equation: f_α(x) = sin²(2^(xτ) arcsin √α)

So... use a PRNG?
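
More or less. For the curious, here's a minimal sketch of the construction with mpmath (my own version, indexing samples from 0 where the paper indexes from 1; alpha needs roughly tau bits of precision per sample):

    from mpmath import mp, mpf, asin, sin, sqrt, pi

    def encode(ys, tau):
        # pack samples ys (each in [0, 1)) into a single alpha in [0, 1]
        mp.prec = tau * len(ys) + 64                 # working precision in bits
        bits = ""
        for y in ys:
            w = asin(sqrt(mpf(y))) / pi              # w in [0, 1/2]
            bits += format(int(w * 2**tau), f"0{tau}b")  # first tau bits of w
        z0 = mpf(int(bits, 2)) / mpf(2) ** len(bits)
        return sin(pi * z0) ** 2                     # alpha = sin^2(pi * z0)

    def f_alpha(alpha, x, tau):
        return sin(mpf(2) ** (tau * x) * asin(sqrt(alpha))) ** 2

    ys = [0.25, 0.9, 0.5, 0.13]
    tau = 30
    alpha = encode(ys, tau)
    print([float(f_alpha(alpha, x, tau)) for x in range(len(ys))])
    # ~[0.25, 0.9, 0.5, 0.13]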


Doesn't this exclude negative numbers?


This is beautiful. The real numbers are uncountable, so it seems intuitive that you could approximate any other space with them... basically a hash function. But this one has an inverse! It's so cool that someone actually implemented this.


You don't need uncountability. Any nontrivial interval of rational numbers contains every finite string that exists -- in fact, infinitely many copies of each!


It's not about uncountability, but simply that their space is infinite. The same would work with (unbounded) integers.


Too easy. Now fit any dataset using a substring of digits from the binary expansion of pi


I've thought about this for a while. I didn't go to the trouble of actually inventing the function, but just the fact that it ought to be possible in principle is sufficient to change your perspective on some things. Most notably, parameter count based information criteria.

AIC, BIC, et al. need to be reformulated for each model in which they're used, and not all parameters can be treated as fungible.


Prior work: "One parameter is always enough", Piantadosi (2018)

https://www.google.com/search?q=one%20parameter%20piantadosi

Haven't read the posted article but sounds like the same idea and motivation


It is the first reference in this article. So I guess they have built on it.


Hypothetically you could achieve something similar using this[1]. Note the word "hypothetically".

[1] https://github.com/cakenggt/Library-Of-Pybel


It reminds me of the pifs [1], which stores data in π

- [1] πfs - the data-free filesystem

https://github.com/philipl/pifs


I love this as a verifiable implementation of a mathematically trivial, but easy to forget point.

So I'm sad that, when I tried to recreate the elephant scatter plot, I wasn't able to. Has anyone found exact parameters that work for tau and alpha?

I just wish they'd given a complete list of the decimal places for those animals. It would've been something to plot them yourself.


In the paper you'll find a link to a GitHub repo that should make all the results reproducible.


And here is the relevant notebook for animal shapes: https://github.com/Ranlot/single-parameter-fit/blob/a065af75...


Epic, thank you:)


Let me guess before reading the article: an infinite-precision floating point number?


The question being, of course: what happens when you lerp between two such "single parameters"?


How to have fun with mpmath :)


Would this be a good compression tool in any instances?


This sounds like it could explain many ML models ;)


It’s actually possible in theory for part of a neural net to simply encode the training data and use it to regurgitate responses. In practice, more efficient parts of the network should win in the end.


Has nobody in this thread heard of https://en.m.wikipedia.org/wiki/Cross-validation_(statistics...

Every ML tutorial starts with splitting your data into a training set and a validation set.


very silly. :)


this made me smile. :)

oh, uncountable infinities.


more digits than I care for



