π in Other Universes (azeemba.com)
339 points by azeemba on Oct 29, 2023 | 110 comments



> Mathematics can be seen as a logic game. You start with a set of assumptions and you come up with all the logical conclusions you can from that. Then, if someone else finds a situation that fits those assumptions, they can benefit from the pre-discovered logical conclusions. This means that if some conclusions require fewer assumptions, then those conclusions are more generally applicable

This is a really, really nice expression of something my mind's been hovering around for a while.


This is also a part of why I am somewhat fascinated by the idea and the state of Lean4 and mathlib in Lean4. People put more and more formally verified proofs into mathlib, which in turn makes formally proving further theorems in mathlib easier.

If you start with nothing (like in the numbers game), simple proofs are a lot of ... just effort, because you have to specify a lot of rewrites and overall work. In mathlib, however, systems like simp (the simplification system) or linarith ("There is a solution by linear arithmetic") seem to do a lot of heavy, repetitive lifting by now.

It's a really interesting snowball effect. Sadly, everything I understand is most likely already in there, so I doubt I could contribute meaningfully, haha.
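
To make that concrete, here is a minimal sketch of the kind of goal those tactics close automatically (my own toy example, assuming a recent Lean 4 + mathlib; the specific goals are just illustrative):

    import Mathlib.Tactic

    -- simp rewrites with mathlib's library of registered simplification lemmas
    example (n : ℕ) (xs : List ℕ) : (xs ++ [n]).length = xs.length + 1 := by
      simp

    -- linarith searches for a proof by linear arithmetic from the hypotheses
    example (x y : ℝ) (h1 : x < 3) (h2 : y ≤ 2) : x + y < 5 := by
      linarith

Without mathlib behind them, each of these would need the individual rewrite steps spelled out by hand.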


That's very interesting. I'm no mathematician, but I should have a play around with it.

> Sadly, everything I understand is most likely already in there, so I doubt I could contribute meaningfully, haha.

I wouldn't be so sure - and even if so then remember there's enormous benefit to improving tooling around a system. If you want to be involved somehow, better devx, tutorials, output, packaging, error messages all make a big difference to end users.

Edit -

As another thought, is there benefit in going through papers and translating that work into lean4? I'm not really familiar enough with it, but if so, that may:

1. Find issues in current work, like Tao did in his own work

2. Add to a reusable body of work


Former computational mathematics major

You absolutely can contribute meaningfully

The maths world is incomprehensibly broad and deep, even if you just take the Erdős approach and go for interesting but shallow problems.


It blows my mind to think of mathematics/logic almost like a huge cellular automaton. "Axioms" don't necessarily correspond to "truth"; to me they're arbitrary constraints that can give rise to complexity. And sometimes the resulting systems can be useful.


All the puzzles of cosmology might actually turn out to be obvious if we had a few different fundamental theorems. But because we hit on some that almost work, and then built upon them a whole edifice of mathematics that is internally consistent and almost fits the universe, we keep beating on it, not realizing that backing up a little and then driving forward again at a slightly different angle might yield a simpler, even more consistent and explanatory system.


Most mathematics has no application to science whatsoever. It's a huge parts bin which scientists delve into when they build their models. And then much of the work is in trying to shoehorn the mathematics into being tractable.

Mathematics is also not provably internally consistent. This was famously shown by Gödel [1].

[1] https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theor...


Most mathematics originates from trying to solve physical or engineering problems. Typically physicists have been on the forefront of mathematical research - this has only really changed significantly in the last few decades.

Also, mathematics as practiced is internally consistent. It is incomplete, though. That is how it stays clear of Gödel's result. Basically, Gödel's results showed that no matter how much we strive, there will always be propositions which might be true, but which we will not be able to prove are true. Unless of course we start using methods that sometimes prove false propositions, which we have not done.


> Most mathematics originates from trying to solve physical or engineering problems.

Has this been true since the early 20th century? I have no feel for what constitutes "most" in the vast corpus of pure mathematics, so am not challenging your claim but rather am curious.


You're right, that might actually be wrong.

However, the claim I was actually thinking of, which I think is right, is that the maths used in the physical revolutions of the turn of the century (SR, QM, GR, and probably QFT, QED, and QCD as well) was invented by physicists or by mathematicians working with physicists for the express purpose of developing these theories, not the other way around.

Also, the basis of mathematics and the first few thousand years were indeed motivated by these kinds of concerns.


I wouldn't agree - consider the hyperbolic transforms used to describe space time "bending" wrt relativity:

https://en.wikipedia.org/wiki/History_of_Lorentz_transformat...

    In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group.
Mathematicians were following up on "what happens when you discard one of Euclid's axioms" and discovering there was an entire world of consistent hyperbolic geometry and more.

Some time later:

    In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames.
If you read mathematics histories it's a common complaint that it's nigh on impossible to discover something new and esoteric that doesn't soon end up with a military application; the ongoing search for interesting but useless mathematics is akin to the search for the fountain of youth.

It is the case (IIRC) that quaternions arose directly from Hamilton's search for a better way to describe mechanical motions in three-dimensional space - i.e. created to be useful from the outset.


I think there's a lot of fascinating mathematical "dualism" in how many of those were developed at the same time, together, by both "practical" mathematicians (such as physicists) and "theoretical" mathematicians. It can feel easy to argue that because the practical mathematicians had an easily defined "need" (hypothesis/experiment) they were the "leaders", and the arrow flowed from them to the theoretical mathematicians working with them, but there's just as much evidence in some of those cases that the theoretical mathematicians were already doing the theory building on their own and had a "need" to find practical use cases/outlets. In some cases we know the theoretical mathematician sought out the physicist to try to find ways to test a theory and was really the one building the hypotheses. In other cases, though both are generally credited with "deep" collaboration after the fact, we know they never really worked together, did all of their work in parallel, and likely would each have arrived at much the same results even if they had never crossed paths. Newton and Leibniz famously never corresponded until after both had published their own takes on the fundamental principles of The Calculus. Alonzo Church had already developed the Lambda Calculus before corresponding with Alan Turing on the fundamentals of computing, and Turing couldn't even share most of his practical work because it was still a state secret (and there was an ocean's distance in their correspondence anyway).

I think as often as not the "arrows" in the diagram point both directions at the same time: the practical needed the theorist to explain the patterns they were seeing and the theorist needed the practical to take the simple beautiful thing they were working on and make it practical and find the edge cases and complications.

That sort of "dualism" seems an interesting pattern in math.


Yes, I agree to some extent with the restricted claim, which only (slowly) started to break down in the 17th century in the west.

A lot of Indian mathematics was rather abstract going back to Vedic times, but since they didn't develop the concept of proof, it sadly had little impact on other mathematical practice (except as inspiration to Persian and Arab scholars) other than the famous cases of zero and positional notation. The mathematical documents I've seen from that practice have been in the form of essays.

I know little of Chinese or Mesoamerican mathematics and wonder where they were on this axis. It seems pretty likely that maths started in support of astronomy/planting predictions in the cultures I know of so likely also for East Asia and the Americas, but whither thence did it go?


Is there no use for some kind of branch of mathematics that can prove false things?

You would think that with how much math there is, there would be a whole field of working with uncertain proofs. I have no idea what for, but then again I'm not a math guy.


This has happened, and is happening all the time. Many groundbreaking theories in physics can be framed this way. The problem is that "slightly different angle" is a huge space, so scientists throw a lot of theories at it and see what sticks.


In other words, you're claiming that we may have built-in biases, invisible to us, which cause us to mistake certain premises for conclusions, and now we've got definitions of something like what a "unit" is, or what identity means, that work well enough to solidly discourage further investigation.

yet if we just tried, oh, making the unit circle a unit... ellipse... all of the epiphenomenal complexity that comes from remediating the pervasively accumulated 0.01% error in that fundamental assumption would instantly vanish.


This reminds me of "The Road Not Taken" by Harry Turtledove. Awesome story!


I love this idea.

You might enjoy Stephen Wolfram's writing- it's exactly what you're talking about


Axioms are not wrong if you can derive some math from them.

They may not correspond to anything in our world at first, but then we usually discover something that does.


The point wasn't that they're wrong, but instead that they are arbitrary. You could create a mathematical system with entirely different axioms than what we explore typically, and it would only be different in how usefully it maps onto real world concepts.


Mathematics is humanity's longest running, largest-scoped, most complicated game.

It also happens to be useful, and you can dive into a lot of philosophy about that which is all very interesting. The utility itself is a large thing on its own. But I think of that utility as something separate from the game itself. The game is just a game. You can do whatever you want with it. If you want to convert your cookbook to hexadecimal just for fun, you can. The fact that it is (broadly speaking) useless, that it will produce no new knowledge, and if anything negative utility in general, doesn't mean you can't do it.

That's the game.

You can also try to play the game to prove the Twin Prime conjecture. That's a much harder level.

This game is scalable to all ages and skill levels, has the best level variety, and can be done with anything from just your personal noggin, to a pencil & paper, to the largest computing cluster in the world. Technically all other games you play are a subset of this game; that may not always be a useful way to think of it, but it is technically true. And while there are a few rules, generally, nobody can tell you how to play it. You want to color pretty pictures? The game has lots of ways of doing that. You want to smash atoms together? The game can help with that. You want to simply count to the highest number you possibly can? Go for it. It's a very popular play with the younger players, but anyone can do it.


The same principle applies to programming. Functions that know less about their arguments are more generally applicable.
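
A rough illustration of that in code (my own sketch; the names are made up for the example): a function that only assumes "iterable" works on far more inputs than one that insists on a list.

    from typing import Iterable

    def total_of_list(xs: list) -> float:
        # Assumes a concrete list: callers holding tuples, generators or
        # streaming sources have to convert/copy first.
        return sum(xs)

    def total(xs: Iterable[float]) -> float:
        # Only assumes "can be iterated": fewer assumptions, so it applies
        # to lists, tuples, generators, files, and types written later.
        return sum(xs)

    print(total([1.0, 2.0, 3.0]))            # a list
    print(total(x * x for x in range(4)))    # a generator, no list needed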


A good example of that is how the axiom of choice impacts measure/probability theory.

It implies the existence of some sets that cannot be Lebesgue measured (the Lebesgue measure is a generalization of width, volume, etc. to arbitrary sets, and also a generalization of probability to arbitrary sets)... but it's not possible to present a single explicit example of those non-measurable sets, only to prove that they exist.

And it's possible to construct an alternative theory with the axiom of determinacy instead, in which every subset of R is Lebesgue measurable.

* https://en.wikipedia.org/wiki/Axiom_of_choice
* https://en.wikipedia.org/wiki/Axiom_of_determinacy
* https://en.wikipedia.org/wiki/Lebesgue_measure


Note that even if another universe has a different π when it comes to geometry they are still going to also have an important constant that has the same value as our π.

E.g., the zeros of the function defined by the series x - x^3/3! + x^5/5! - x^7/7! + ... are nπ where n is an integer and π is our π. Another place our pi will come up is in the exponential function. It's periodic with period 2πi.
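
One way to see the first claim numerically (my own sketch, not from the comment): truncate the series and bisect for its first positive zero, which lands on 3.14159...

    from math import factorial

    def sine_series(x, terms=20):
        # x - x^3/3! + x^5/5! - ..., truncated after `terms` terms
        return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
                   for k in range(terms))

    lo, hi = 3.0, 4.0          # the series changes sign between 3 and 4
    for _ in range(60):
        mid = (lo + hi) / 2
        if sine_series(lo) * sine_series(mid) <= 0:
            hi = mid
        else:
            lo = mid
    print(lo)                  # ~3.141592653589793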


Right. Also (just a few more concrete examples):

• the sum of the series 4(1 - 1/3 + 1/5 - 1/7 + …) will still be our π: https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80

• the sum of the series (1 + 1/4 + 1/9 + 1/16 + 1/25 + …) will still be π²/6: https://en.wikipedia.org/wiki/Basel_problem

• (therefore) the probability that two numbers chosen uniformly at random from [1…N] are relatively prime will still approach 6/π² as N grows large

• the product 2(4/3)(16/15)(36/35)(64/63)(100/99)… will still be our π: https://en.wikipedia.org/wiki/Wallis_product

• the value of (n!/(√n (n/e)^n))²/2 as n grows large will still (very slowly) approach π: https://en.wikipedia.org/wiki/Stirling%27s_approximation (e.g. https://www.wolframalpha.com/input?i2d=true&i=N%5C%2891%29Di... )

and so on, for most of the non-geometry results listed: https://en.wikipedia.org/w/index.php?title=List_of_formulae_...
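
Not from the comment, but a quick numerical sanity check of a few of those (finite truncations, so they only get close):

    from math import pi, prod

    leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(1_000_000))
    basel   = 6 * sum(1 / n ** 2 for n in range(1, 1_000_000))
    wallis  = 2 * prod(4 * k * k / (4 * k * k - 1) for k in range(1, 1_000_000))

    # leibniz and wallis approach pi directly; basel approaches pi^2
    print(leibniz, basel ** 0.5, wallis, pi)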


In a change from the normal refrain of 'there's an XKCD about that' - in this case there is a Saturday Morning Breakfast Cereal (SMBC) comic about it: https://www.smbc-comics.com/comic/pi-2?ref=refind

For those unwilling to click-through, it essentially posits an alternate history where infinite series were explored by mathematicians before geometry, so rather than being surprised that the 'circle constant' is found in many infinite series, we would instead be surprised that the 'infinite series constant' is found in the geometry of a circle.


Pi is the scaling factor of the diameter of a circle to its circumference; there's an infinite set of such scaling factors: one for each ellipse (the circle is a special case). I wonder which/what sort of infinite series arise from/for the generalized elliptic scaling factors?


I rather suspect that the generalised elliptic scaling factor is a continuous function, so the answer may be a bit boring. For any infinite series with a finite sum I would be able to give you an ellipse (indeed, probably an infinite number of ellipses) whose scaling factor is a rational multiple of the sum.


• The probability that a random walk with n steps returns to the origin is 1 - π/log n + O(1/log n)^2: https://twitter.com/thomasahle/status/1719140649952571672


> Another place our pi will come up is in the exponential function. It's periodic with period 2πi.

Isn't it the opposite? As I understand, we (European civilization humans) historically _define_ our complex exponential function to have a period of 2πi to match the period of our previously defined sin and cos functions.

We could have defined it to have another period — for example, if we define "360° angle" to be equal to 1 instead of 2*Pi, and define sin0=0, sin0.25=1, sin0.5=0, sin0.75=-1, sin1=0, we'd also define periodicity of e^ix to be 1.

Update: same idea as for why we use base-ten numbers. The only reason is that we have ten fingers on two hands, and historically we've been using base-ten numbers for the past few hundred years. But there's no reason to expect that "aliens" would also have ten digits.


> As I understand, we (European civilization humans) historically _define_ our complex exponential function to have a period of 2πi to match the period of our previously defined sin and cos functions. We could have defined it to have another period — for example, if we define "360° angle" to be equal to 1 instead of 2*Pi, and define sin0=0, sin0.25=1, sin0.5=0, sin0.75=-1, sin1=0, we'd also define periodicity of e^ix to be 1.

No, it doesn't work in degrees.

The definition of e isn't that arbitrary.

2π is the unique period which satisfies the definition of e using derivatives and the extension of real number algebraic laws to complex numbers. This shows up as a real world physical measurement, which I describe below.

The (natural) exponential function eˣ is defined as the unique function which equals its own derivative and satisfies e⁰ = 1 (like other exponentials). The value of e comes from this.

Combine that with the definition i² = -1 and using basic rules of algebra which are observed on real numbers with exponentials and derivatives (such as (xª)ᵇ = xªᵇ) and you find the function eˣ must be periodic with period 2πi.

This comes from sin(x) and cos(x) and their derivatives. The derivative of sin(x) is cos(x), and of cos(x) it is -sin(x), but only if sin(x) and cos(x) are defined in the usual math way with period 2π.

Those sin/cos derivatives and that little negative sign are enough to make them components of the unique solution to the derivative definition of eˣ applied to a complex argument, and thereby fix its period in the complex plane and prove Euler's famous identity (without needing the Taylor expansion).

That in turn has a more physical basis. The functions A·sin(x+B), with constants A and B, are exactly the functions whose second derivative is the function itself negated.

Physically, it means an object whose acceleration is proportional to its displacement from a fixed position and in the opposite direction will oscillate with a period of exactly 2π seconds, if the acceleration is -1m/s² per 1m displacement.

This setup is called a harmonic oscillator.

In this way, 2π arises (and is measurable!) from physical properties of time, force and inertia, of things moving in straight lines.

No circles required.
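
A rough numerical illustration of that last claim (my own sketch, crude symplectic-Euler integration, so only approximate): integrate acceleration = -displacement and time one full oscillation; it comes out to about 6.283 seconds, i.e. 2π.

    dt = 1e-5
    x, v, t = 1.0, 0.0, 0.0
    crossings = []

    while len(crossings) < 2:
        v += -x * dt              # acceleration is -1 per unit of displacement
        x_new = x + v * dt
        t += dt
        if x > 0 >= x_new:        # downward zero crossing of the displacement
            crossings.append(t)
        x = x_new

    print(crossings[1] - crossings[0])   # ~6.28318, one full period = 2*pi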


I agree with you that e is not arbitrary. I say that the period of 2π for e^ix is arbitrary, because we've arbitrarily defined periods of sin and cos as 2π.

If we defined a function sin to take not an angle in radians, but in degrees (with a period of 360.0), and used that definition of sin in our math, then our complex e^ix would have a period of exactly 360, and the entire complex math would still work — for example, Euler's formula below would still hold:

    e^ix = cos x + i sin x
And people in comments would rave about how magical the number 360 is, and how its magic properties were discovered by the Romans two thousand years ago.


You seem to think that the 2pi is injected into the definition of e^ix somewhere, but actually it's the other way round, 2pi comes out as a theorem. I'll give the rough outline.

exp(x) for complex x is simply defined to be the infinite sum from k = 0 to infinity of x^k/k!. That is, exp(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 ...

(BTW, the motivation for this definition is that exp'(x) = exp(x), which shouldn't be too hard to see because it's already a Taylor series.)

Purely from this you can prove that exp(ix) with real x is periodic with period 6.28...

It just so happens that this number is also the circumference of the unit circle.
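
For anyone who wants to poke at that numerically, a small sketch (my own, using the partial sums of that same series): find the first positive t where exp(it) comes back to its starting value 1.

    from math import factorial

    def exp_series(z, terms=60):
        # partial sum of 1 + z + z^2/2! + z^3/3! + ...
        return sum(z ** k / factorial(k) for k in range(terms))

    # bisect on the imaginary part between t = 6 and t = 7
    lo, hi = 6.0, 7.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if exp_series(1j * lo).imag * exp_series(1j * mid).imag <= 0:
            hi = mid
        else:
            lo = mid
    print(lo, exp_series(1j * lo))   # ~6.283185..., and the value is back to ~(1+0j)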


I had never spotted that before, each term of the series is the integral of the previous. That is pleasing!

The Pi thing now feels less of a coincidence than the fact that exp is a power. That probably falls out of expanding the polynomials, but it is so ingrained as taken for granted that it is wondrous when you think about it.


Hey, you're right! So Pi is special :)


I guess radians are “magic” in that sin and cos can be defined by infinite series that look nice (and feel canonical). You have to manipulate those series to get 360 or even just revolutions.


As codeflo showed in a sibling comment, actually I am wrong, and Pi's special magic also comes from the Taylor series expansion. So it turns out that Pi is the real magic number rather than just our arbitrary choice!


Where in the definition of the complex exponential function is π used? IIRC it's: the exponential function is defined to be its own derivative (note 1), i is the square root of -1, and exp(ix) is observed to have a period of 2π. There aren't any arbitrary choices in there that could be said to be defined in such a way that π results.

note 1: exp(x) can alternatively be defined by the exponential series, but that series does contain arbitrary numbers that could be said to be selected in such a way that π results.


As I understand, "complex exponential" function f(x) = e^ix must satisfy only two equalities:

    f(0) = 1
    f'(x) = i f(x)
So any function that satisfies these equalities can work as a "complex exponential" function which we denote as e^ix.

So we can define a function with a period of 1, and use it everywhere — then "2π" vanishes from most equations, and the complex math still works and all equalities hold.


The only function that satisfies those two equalities is e^ix which has period 2π.


Hey, you're actually right! My bad


Consider simultaneous functional equations:

f(x) = dg(x)/dx

g(x) = df(x)/dx

The only linearly independent solutions are sin(x+c) and cos(x+c), with x in radians, periodic with period 2π.


You forgot a minus before one of the equations, otherwise you get sinh and cosh.


Yep, silly me.


Yep, you’re right, my bad.


The fact that pi is irrational may point at something fundamental missing in our knowledge system.

It appears that we cannot precisely measure circle length/area in units of radius and vice versa. Basically, the unity as such does not exist in our knowledge, nor can we truly comprehend infinity.

Perhaps, unity and infinity are just our abstractions for something else.


The fact that π is irrational has absolutely nothing to do with physically measuring circles or with infinity. We know the value of π exactly.


> ...We know the value of π exactly.

Perhaps you could share that exact value with the rest of humanity. And I mean the number value, not the nominal value.


What do you mean by “number value”? There are many different ways to calculate π, the most famous (but converging very slowly) being

π = 4 atan(1) = 4 (1 - 1/3 + 1/5 - 1/7 + …)


There are many ways to obtain an inexact value of pi.

Being an irrational number, pi has no finite number of digits (e.g. in decimal form or any other integer base) that represents its value exactly. Nor can its value be expressed as a ratio of integers.

Likewise, in your representation the ellipsis hides away the infinity.


π has a finite number of digits in base π. Why arbitrarily limit yourself to integer bases?

If you can compute a number to any desired precision, then you know its exact value.

And this still has nothing to do with measurement of the physical world. We cannot measure anything exactly.


I think it's better to say that π is the same number everywhere (3.14...), but in another universe you don't use π in the formula for the circumference of a circle.

* Manhattan (L_1): C = 8 R

...

* Euclidean (L_2): C = 2π R

...

* Maximal Distance (L_infinity): C = 8 R
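
A rough numerical version of that list (my own sketch, not from the comment): sample points around the unit "circle" of each norm, measure the perimeter with that same norm, and divide by the diameter 2R.

    import math

    def p_norm(x, y, p):
        if p == math.inf:
            return max(abs(x), abs(y))
        return (abs(x) ** p + abs(y) ** p) ** (1 / p)

    def circle_pi(p, n=100_000):
        pts = []
        for k in range(n + 1):
            t = 2 * math.pi * k / n
            c, s = math.cos(t), math.sin(t)
            r = p_norm(c, s, p)
            pts.append((c / r, s / r))          # point on the unit p-circle
        perim = sum(p_norm(x2 - x1, y2 - y1, p)
                    for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
        return perim / 2                        # circumference / (2 * radius)

    for p in (1, 2, 3, math.inf):
        print(p, circle_pi(p))
    # p = 1 and p = inf give ~4 (C = 8R), p = 2 gives ~3.14159,
    # other values of p land in between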


Would their π re-emerge if unit distance, i.e. the distance between 2 and 3, or 5 and 6, were defined by their metric? Sort of like a change of base in number systems.


One thing this doesn't touch on is that there are multiple meaningful definitions of pi-like constants for the p-norm unit circle that don't necessarily agree with each other for p != 2. Defining pi as the area of the unit circle gives an entirely different set of values satisfying some wonderful properties - in particular, that definition of pi turns out to be the periodicity constant for an (arguably) natural set of trigonometric functions for the p-circle. Furthermore, pi(p) = 2 Beta(1/p,1/p)/p...

However, this (circumference/arc-length based) definition of pi does have a fascinating property for conjugate p,q: pi(p) = pi(q)

"Squigonometry: The Study of Imperfect Circles" is a very fun reference for this sort of stuff.


I wonder whether not being a Hilbert space has any awkward implications for geometry. I guess we have to chuck out the Polarization identity, which probably has implications for parallelograms, though I'm not sure quite what. anyway, thanks for the rec!


Well, there isn't a meaningful inner product, so how can you speak of parallelograms? The geometries are definitely weird! Once you leave p=2 and break the rotational symmetry around the origin, the only isometries in your geometry are signed permutation matrices - so geometry "over here" looks different from "over there". Angles aren't really meaningful, I guess.

The other interesting thing is that duality kicks in (or maybe becomes non-trivial, since it's always there) and derivatives naturally start to live in a different space. If you take the particularly natural definitions of general cos_p and sin_p I alluded to, you get a nice parameterization of the unit p-circle as (cos_p(t), sin_p(t)) - but if you differentiate this wrt t, the resulting tangent vectors don't lie on the p-circle. Instead, they form a parameterization for the q-circle!


* pi = 3.14159… appears in analysis and by extension statistics, independent of geometry. So aliens in these other universes would know this value, they’d just have a different constant for circles. Since they wouldn’t use Greek letters anyway, we’d have to translate, and it would be a bit silly to equate their 3.757… with “pi” instead of their 3.14159…

* Personal aside: Of course, whether 3.14… (pi), 6.28… (2pi) or even 0.785… (pi/4) should be the fundamental constant is debatable, and aliens might have different ideas about that.

* The article introduces the concept of metrics to explain that there could be different circle constants in other universes. But arbitrary metrics don’t necessarily have linear scaling or translation invariance. You need stronger assumptions than a metric to meaningfully define a circle constant at all, like a normed vector space. AFAICT, all of the given examples are in fact normed vector spaces, not just metric spaces.


I don't find the first point surprising. (Our) pi is the one tied to the only metric where the unit circle is perfectly continuous, differentiable, etc.

The 2-norm is very special for many reasons I won't enumerate... and it seems apropos that its corresponding constant (pi)... for relating a distance from a point (wlog 0,0) to the result of integrating a constant around the path those points occupy/form/consist in... would itself tend to be found more than others.

Perhaps this is simply because without that continuity and differentiability everywhere of the corresponding path generated by the metric's unit circle, many other pieces would fall like dominoes.

There is something uniquely central about a concise relation between a point, a distance, and a path.


I wonder about the first point. As you explain in another comment the value of pi, 3.14159, can be derived from number theory alone but magically it plays a huge role in shaping the physical world we know.

Would a different universe have a different number theory or is number theory something that is True regardless of the universe? What would an alternate number theory even look like?


https://tauday.com/tau-manifesto#table-quadratic_forms

(Not to sound all Buzzfeed-y, but Table 3 makes a lot of sense)


Yes, and they actually keep using 2pi over and over in their examples.


This person is not a sailor. Sailing orthogonal to the wind, a "beam reach", is the fastest point of sail due to the lift of the sail.


I knew someone would make this comment. I love HN for this kind of pedantry when it's specific, accurate and doesn't dismiss the entire article for one inaccurate analogy.


Also, "broad reach" would like a word with lfnoise. (It's complicated.)

https://physics.stackexchange.com/questions/186515/why-is-a-...


The polar diagram shown there is what should replace the ellipse in TFA. It's far more complicated than a simple geometric shape since it has to account for such practicalities as sail inventory.


I also knew that someone was going to comment that the beam reach was not necessarily the fastest.


I didn't know anything about sailing, but your one comment made me search up point of sail and now you've opened my eyes to something that was a mystery to me for all my life -- how sailboats can "course made good" against the wind. Thank you.. this stuff is amazing, and sailing is an incredible science!


What's interesting is that if you manage to exceed the hull speed doing that you'll end up surfing on your own bow wave!


A beam reach isn't necessarily the fastest point of sail. It depends on the boat, the efficiency (lift/drag ratio) of the sail, and the efficiency of the centreboard/keel (again, lift/drag ratio), but a reach of some kind is likely to be the fastest - it just won't be exactly perpendicular to the true wind direction. It'll also vary with the wind speed, wave height, weight distribution, etc.


What does the "circle" look like with correct assumptions?


All of these assume your background metric is Euclidean.

If your background 2D metric is a projection of a warped 3D space, you can make π as big as you want by tugging on the centre of the circle.


There is no concept of the "background metric" here. Both the radius and the circumference are measured in the defined metric itself.

Any metric that "pulls on the origin" compared to Euclidean distance will have to do the mapping in a continuous way. This will basically result in both the radius and circumference being expanded in that metric.

Matter of fact, I linked an article that proves that for _all_ metrics, the value of π is always between 3 and 4 (inclusive). Unfortunately the article might have gotten the hug of death so here is an alternative link: https://www.researchgate.net/publication/353330827_Extremal_...


How is circumference defined?

And I can think of a counterexample on a sphere, just using Euclidean distance on the surface. Consider a circle with centre at North Pole and radius being the distance from the North Pole to a point on the equator. For this circle it is easy to find out that pi=2


Thanks for your example! I have been thinking about it.

Your observation is correct, and the surface of the sphere is a metric space. The ratio of radius to circumference is not constant with that metric, though, so I feel like something should disqualify it. But I am not sure what.

So I think your observation shows that we need a stronger constraint than just being a metric. Other commenters have hinted that you need a normed vector space but I am not sure if that's sufficient.


Hmm. And if you keep increasing the radius, pi will shrink all the way to 0.
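
Concretely: on a sphere of radius R, a circle of geodesic radius r has circumference 2πR·sin(r/R), so the "local pi" is π·sin(r/R)/(r/R). A small sketch of that (my own, just plugging in numbers):

    from math import pi, sin

    def sphere_pi(r, R=1.0):
        # circumference / (2 * geodesic radius) for a circle drawn on a sphere
        return pi * R * sin(r / R) / r

    for frac in (0.001, 0.25, 0.5, 0.9999):
        r = frac * pi      # radius as a fraction of the pole-to-pole distance pi*R
        print(frac, sphere_pi(r))
    # ~3.14159 for tiny circles, 2.0 at the equator (frac = 0.5),
    # and heading to 0 as the circle closes in on the far pole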


It's not the background metric, but the space geometry is assumed Euclidean - in non-Euclidean geometry the ratio of the circumference of a circle to its diameter is not a constant; it depends on that diameter (so you simply cannot define 'pi' in that case).


Well.... One still obtains pi for the ratio of circumference vs diameter in the limit that the diameter goes to zero.


Relatively completely off topic, but everything I understand about pi has come from 3D GIF models I never saw in school. They should be a core part of the learning curve, much closer to the start of it than 3B1B.


Those GIFs really do make it super simple. I learned it the same way. The unit circle made absolutely no sense to me, and appeared as yet another dogmatic, arbitrary "rule" shoved down my throat in school. Had they made an attempt to make it intuitive by showing one single GIF, it'd have all come together for me much quicker.

Math is far more elegant than public school allows it to appear.

https://raypatrick.xyz/blog/2023/10/27/were-you-mathematical...


When I was a kid I liked to muse about relationships like these. Being a kid, I imagined that there might have been a god that created the universe, and imagined that they were a bored kid like me, perhaps making it as a school assignment.

So what if the god had turned the pi or e knobs to a rational number (presumably in a god’s universe knobs can be turned to precise irrational values). Would it have made our lives easier or harder (probably easier…?). Or what about the apparent size of earth/moon/sun when viewed from earth? It’s a great clue, but perhaps we would have known more about astronomy if that coincidence had not existed? (We would have missed out on that fabulous Connie Willis story though).

Maybe all those weird cosmological QM oddities and (literally obscure) imbalances needing mysterious dark matter are just due to bugs in a kid’s rushed assignment and actually don’t make sense?

But the irrationals…they led to the most musing.


IF I've correctly assessed the zeitgeist of HN postings

THEN it follows Terence Tao's Introduction to Measure Theory must be a bullet.

https://news.ycombinator.com/item?id=38064211

But seriously, who's going to read|skim a free 260+ page tract on measure theory?

https://en.wikipedia.org/wiki/Measure_(mathematics)


You don't just read/skim Tao's lecture notes. I used them to teach myself measure theory to skip some prerequisites at university, and they were _hard_. Every other page is a list of exercises. I doubt you would learn much if you didn't take time to solve them. But they are hard exercises.


> who's going to read|skim a free 260+ page tract on measure theory?

Why is that so hard to believe? People read 260 page books all the time.

I'm not going to read this one, but only because it's not my area of interest. I'm busy reading 100+ page books on other subjects.


Take that as a tongue in cheek comment - I've read such things with close attention, I was studying measure theory back in the 1980s when I first met the author of this work here in Australia.

There is a subset of people on HN that do read and enjoy mathematical texts; they appear outnumbered by a larger group that seems to post and comment on anything Terence Tao without being that deep in the actual math, which is fine, but it has struck me as an HN trend of late.


There's this fun space made of p-adic numbers upon which you can define a simple distance, and then circles have mind bending properties like the diameter (max edge to edge distance) and radius (distance from edge to center) being equal to each other.

Quirky stuff happens to disc area and perimeter as well, and open discs are also closed. The equivalent of Pi there is nuts.

Sadly I can't recall the details (it was a 2000-ish exercise on my maths course).

https://en.wikipedia.org/wiki/P-adic_number#Topological_prop...
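
A toy version of that distance, in case it helps jog memories (my own sketch: sampling plain integers with the 3-adic absolute value, |x| = 3^(-v) where 3^v is the largest power of 3 dividing x). In such an ultrametric space the radius and the diameter of a disc really do coincide:

    def v3(n):
        # 3-adic valuation: exponent of the largest power of 3 dividing n
        v = 0
        while n % 3 == 0:
            n //= 3
            v += 1
        return v

    def dist(a, b):
        return 0.0 if a == b else 3.0 ** (-v3(a - b))

    # the closed "disc" of radius 1/3 around 0, sampled over small integers
    ball = [n for n in range(-200, 201) if dist(n, 0) <= 1 / 3]
    radius = max(dist(n, 0) for n in ball)                      # 1/3
    diameter = max(dist(a, b) for a in ball for b in ball)      # also 1/3
    print(radius, diameter)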


The boat analogy seems particularly poor.

a) Comparing a sailboat on a windy day to a sail boat on an [implied] non-windy day? Surely the boat with no wind wouldn't even have a circle.

b) I'm no boatologist, but if the wind is X knots, then the boat can travel downwind at a rate of X knots, but contrary to what the article states, the boat would be able to travel cross-winds at some multiple of X. So you would get something resembling an oval, but in the opposite orientation as depicted.

Also, it's worth pointing out that it's perfectly possible for a boat to travel "into" the wind via "tacking and jibing"


OMG. After all this time, you're telling me the drafters of the Indiana Pi Bill [0] could have been right all along?

It would mean that Indiana happened to be in a different Universe at the time, where:

  d=1/(2 √3) ∑( n=1…6 )∣∣x sin(3πn )+y cos(3πn )∣∣ [1]
Well, who's to say otherwise?

[0] https://en.wikipedia.org/wiki/Indiana_Pi_Bill
[1] Poor man's representation of the same equation in the article.


Maybe worth pointing out that there are countless other weird Universes where "Pi" retains its standard value.

This is the domain of differential geometry, where the relation of circumference and radius holds only in the limit of the infinitesimally small.

By all accounts our own Universe is of such a deformed-in-the-large but Euclidean-in-the-small variety. At least as far as we understand geometry in the quantum realm.


This largely depends on how one defines pi. I believe that the concept of R^n (Euclidean space) exists even in entirely different physical spaces. This is because Euclidean space represents a universally recognized idea of simple space in terms of curvature. For instance, in any world, the concept of '0' represents simplicity. In this context, pi will always remain constant.


I noticed that all the "circles" for alternative metrics are aligned with the coordinate system. For example, the one for the Manhattan distance has its corners on the coordinate axes.

What if we added an additional condition that a distance metric should not change when the orientation of the coordinate system is changed? Could we still have different values for the pi constant then?


Is that true for the hexagon, or just very close?


Not sure about your universe, but here on earth, pi is 2. The length of the equator is 4 times the distance from the pole. (Approx.)


Why stop there? Take the circle with its center at one pole, its radius running an entire meridian, and its perimeter making an infinitesimally tight loop around the other pole. That exhibits a pi that's zero.


You can have any Pi_Earth you like where 0 < Pi_Earth < Pi_Euclidean


Sorry, but you're wrong.

Pi is 3. (more accurately, 3.2). https://cs.uwaterloo.ca/~alopez-o/math-faq/mathtext/node18.h...


A flat earth would have pi at about 3.14159 though.


Based on the attitude of its advocates, a flat earth would lack science and mathematics entirely, so pi would be undefined?


You're making the assumption that most flat earth advocates aren't trolls who actually know science very well.


I must be missing something. In the example of using a sailboat with constant wind and distance, wouldn't sailing against the wind (let's call it any constant oppositional force) cause us to get a circle, just shifted from the origin? Not an ellipse?


There are a number of complications. Two of them are that firstly, when the wind direction is within about 45 degrees against the direction you want to go, you have to tack, and secondly, a reasonably efficient sailboat is fastest when it is on a reach, with the wind coming from the side.

https://physics.stackexchange.com/questions/186515/why-is-a-...


I think you're right. Just apply a transform equal to the speed and direction of the wind.


A p-norm of 3 makes quite a funky "retro" rounded-corner style for avatars. It also shows what the "opposite" of rounded corners might look like. In CSS you can only go as far as a circle.


This is the sort of thing that makes me want to learn VR development.


A circle is a pillar for a 3 dimensional universe in a 2 dimensional universe. So I guess every dimensional jump has one and the binary one is the origin constant?


The hexagonal metric at the end uses pi in its definition - is this our value of pi, or the value of 3 that that metric provides?


My belief, which could be wrong, is that if we change π and the distance metric, i.e. the definition of unit distance between 1 and 2, 4 and 5, 10 and 11, to be their unit distance, all the equations involving numbers and pi would come out the same, e.g. the Basel problem etc.


The area of the circle in Manhattan distance comes out to 2 million, but pi * r^2 is 4 million. What am I doing wrong?


Oh, I was measuring the sides in Euclidean length. In Manhattan length they're 2000 each, so area is 4 million.


Excellent article, both informative and accessible, and the interactive visualizations are lovely.


Didn't 3blue1brown have a video on exactly this?



