I always find these kinds of explorations fascinating. Number theory is so mysterious. I liked these two sentences from the article:
> “Mathematics is not just about proving theorems — it’s about a way to interact with reality, maybe.”
I like this one because, amid the current trend of pursuing theorem proving in AI purely through formal systems, people rarely mention this.
And this one:
> Just what will emerge from those explorations is hard to foretell. “That’s the problem with originality,” Granville said. But “he’s definitely got something pretty cool.”
When has that ever been a "problem" with originality? Haha, but I understand what he means.
How do mathematicians come to focus on seemingly arbitrary questions:
> another asks whether there are infinitely many pairs of primes that differ by only 2, such as 11 and 13
Is it that many questions were successfully proved or disproved, and so we're left with some that seem arbitrary? Or is there something special about the particular questions mathematicians focus on that a layperson has no hope of appreciating?
My best guess is that questions like the one above may not have any immediate utility, but could at any time (even hundreds of years later) become vital to solving some important problem that generates huge value through its applications.
A lot of mathematical problems that originally seemed unimportant, like "how hard is it to factor big prime numbers?" or "what kind of mathematical structure describes composed rotations of a sphere?", turned out to be way more important than they seemed. The former is at the root of a substantial amount of cryptography (either directly, e.g. in RSA, or indirectly, e.g. in elliptic curve cryptography), and the latter turns out to form the foundation of quantum mechanics; neither application was remotely in the minds of the people who originally considered either problem.
Seemingly simple but practically intractable problems like the Collatz conjecture, twin primes, etc. tell us that there's something we don't know about relatively simple structures, and the hope is that by studying those simple structures we might discover something new that is more generally applicable. The interconnections in math tend to form an extremely dense graph, so these long-unsolved problems tend to have lots of established connections to other questions. For example, the Collatz conjecture would tell us something about computability theory: the next Busy Beaver value beyond the ones we know turns out to depend on Turing machines that encode Collatz-like problems (alongside other very similar problems).
Mathematicians focus on esoteric stuff too; it usually just doesn't make it into the popular press. "Is the fifth homotopy group of the 19-dimensional Pulasky-Robinson manifold of type III infinite?" is the sort of question that comes up all the time but never makes it onto the front page of HN. Questions like "what's the smallest uncomputable Busy Beaver number" or "is the twin prime conjecture true" are the ones you'll hear about, because they're the ones where the question makes some sense to a layperson.
Nitpick: big prime numbers are of course trivial to factor. Big composite numbers are much harder. Checking whether an unknown number is prime or composite is nontrivial but fairly fast.
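To make the nitpick concrete, here is a minimal Miller-Rabin sketch (the function name and parameters are my own illustration): probabilistic primality testing is fast even for huge numbers, while factoring a hard composite of similar size is infeasible.

```python
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: fast probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation is cheap even for huge n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness: n is definitely composite
    return True  # no witness found: n is prime with high probability

# checking primality of a ~180-digit number takes milliseconds;
# factoring a hard composite of that size is currently infeasible
print(is_probably_prime(2**607 - 1))  # True (a known Mersenne prime)
```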
It's not arbitrary at all! We know that primes themselves get rarer and rarer (the density of primes below N is ~1/log N), so it is natural to ask whether the gaps between them must also necessarily grow and, in general, how they are spaced.
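As a quick empirical illustration of that rarefaction (cutoffs and window size are my own choices, primes via sympy), the average gap between consecutive primes near N tracks log N:

```python
import math
from sympy import primerange

# average gap between consecutive primes in a window above N, vs. log(N)
for N in (10**4, 10**6, 10**8):
    ps = list(primerange(N, N + 10**5))
    gaps = [q - p for p, q in zip(ps, ps[1:])]
    print(f"N = 10^{round(math.log10(N))}: "
          f"avg gap {sum(gaps) / len(gaps):.1f}, log N = {math.log(N):.1f}")
```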
We know the primes are rich in arithmetic progressions (and in fact, any set with positive “upper density” in the primes is also rich in arithmetic progressions).
So we do know that somewhere there are 100,000,000,000 primes equally spaced from one another (and likewise for any length you want), which is neat.
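That's the Green-Tao theorem: the primes contain arbitrarily long arithmetic progressions. Nobody can exhibit 10^11 equally spaced primes explicitly, but a brute-force sketch (the function and search bounds are my own illustration) finds short ones quickly:

```python
from sympy import isprime

def find_prime_ap(length, max_start=1000, max_step=1000):
    """Brute-force search for `length` primes in arithmetic progression."""
    for start in range(2, max_start):
        if not isprime(start):
            continue
        # for three or more odd primes the common difference must be even
        for step in range(2, max_step, 2):
            if all(isprime(start + i * step) for i in range(length)):
                return [start + i * step for i in range(length)]
    return None

print(find_prime_ap(6))  # [7, 37, 67, 97, 127, 157], common difference 30
```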
There's something quite interesting about the problems in number theory especially. The questions and relationships sometimes don't seem useful at all and are later proven to be incredibly useful; number theory is the prime example of this. I believe there's a G. H. Hardy quote somewhere about number theory being obviously useless. I could only find it from one secondary source, although it does track with his views expressed in A Mathematician's Apology[1]: "The theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics."
You can find relationships between ideas or topics that are seemingly unrelated. For instance, even perfect numbers and Mersenne primes have a 1:1 mapping, so the two questions are logically equivalent: a proof that either set is finite or infinite immediately settles the same question for the other (the correspondence is checked numerically in the sketch below).
There's little to no intuitive relationship between these ideas, but the fact that they're linked is somewhat humbling - a fun quirk in the fabric of the universe, if you will.
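That 1:1 mapping is the Euclid-Euler theorem: N = 2^(p-1) * (2^p - 1) is an even perfect number exactly when 2^p - 1 is a Mersenne prime. A minimal sketch verifying the small cases (the naive divisor sum is my own illustration):

```python
from sympy import isprime

def sigma(n):
    """Sum of all divisors of n (naive, fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Euclid-Euler: N = 2^(p-1) * (2^p - 1) is an even perfect number
# exactly when the Mersenne number 2^p - 1 is prime.
for p in range(2, 12):
    m = 2**p - 1
    if isprime(m):
        n = 2**(p - 1) * m
        assert sigma(n) == 2 * n  # perfect: proper divisors sum to n itself
        print(f"p={p}: 2^{p}-1 = {m} is prime -> {n} is perfect")
```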
> No one has yet found any war-like purpose to be served by the theory of numbers or relativity or quantum mechanics, and it seems very unlikely that anybody will do so for many years.
Less than 100 years later, we stand waiting for nuclear bombs guided by GPS to be launched when the authorization cryptographic certificate is verified.
My bad. You’re right. It was group theory and cryptanalysis. Number theory comes in later in the 1970s for public key cryptography (1976 publicly, early 1970s at GCHQ). So the military work on it really started in the late 1960s.
Number theory appears to be at the core of computability, via Matiyasevich's theorem, which links the behavior of every Turing machine to the solutions (or lack thereof) of Diophantine equations. It’s not surprising to me that number theory constantly ends up being more useful than suspected since in many ways the study of these equations is equivalent to the study of computation.
With primes, I think a general question is to what extent they are "random". Clearly they are defined by a fixed rule, so they are not actually random, but they have a certain density, certain statistics, etc. So we can try to narrow down in what sense they are like random numbers and in what sense they are not. Do they produce similar "clustering" (or lack thereof) as numbers generated at random according to the same density? I see the twin primes question fitting into such a theme. Happy to learn if a number theorist has more insight.
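That idea has a classical name, the Cramér random model: pretend each integer n is independently "prime" with probability 1/log n. A sketch comparing twin pairs in one random draw against the real primes (the bound and framing are my own choices):

```python
import math
import random
from sympy import primerange

N = 1_000_000

primes = set(primerange(2, N))
real_twins = sum(1 for p in primes if p + 2 in primes)

# Cramér model: keep each n >= 3 independently with probability 1/log(n)
model = {n for n in range(3, N) if random.random() < 1 / math.log(n)}
model_twins = sum(1 for n in model if n + 2 in model)

print(f"real twin pairs below {N}:  {real_twins}")
print(f"model twin pairs (one run): {model_twins}")
# the model gets the order of magnitude right but misses the constant,
# because it ignores divisibility: e.g. real primes > 2 are never adjacent
```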
I'm also curious if Number Theory folks acknowledge and/or have conventional language to describe "same-ness" between 2 or more of these goofy prime number phenomena. (I guess what I'm describing is just a reduction. This made more sense before I started typing.)
The answer is historical evolution. To you, the problem may appear arbitrary, just as physicists studying some "obscure phenomenon" like the photoelectric effect may have seemed arbitrary to outsiders. But (far, far) behind the scenes there is a long and winding history of questions, answers and refinements.
Knowing that history illuminates the context and importance of problems like the above; but it makes for a long, taxing and sometimes boring read for the unmotivated, unlike the sexy "quantum blockchain intelligence" blurbs or "grand unification of mathematics" silliness in pure math. So, few popularizations care to trace the historical trail from the basics to the state of the art.
I think if you are fascinated by primes and look at lists of primes, you would eventually notice that twin primes keep happening, but get further and further apart. The first published conjecture that there are infinitely many was Polignac's in 1849, but I'm sure someone wondered before that.
If you are fascinated by primes, then you just want to know the answer, independent of any application.
The problems mathematicians tend to be interested in are those that have a short description in a formal language yet whose proofs are likely not very compressible in that language. I.e., you can easily get the index of any statement in the formal language of PA, but finding the index of either the statement or its negation in an enumeration of the theorems of PA is significantly more difficult. It's equivalent to the halting problem in a general sense, because the statement and its negation may both be independent of PA, in which case the enumeration program never halts.
What does this mean? These problems are like the Busy Beaver analogue for formal mathematical systems, which implies that if you solve one, you in theory solve entire classes of lower-complexity problems simultaneously (similar to how knowing BB(5) settles the halting problem for all programs below a certain complexity).
It's really only the applied mathematicians who tend to care much at all about utility. If you ask a bunch of theoretical mathematicians why they're working on what they're working on, you'll probably get many saying that it's fun or interesting, some saying that they needed to publish something, and very few answers related to utility at all.
That is a large part of it. Maybe solving this problem leads us to answers to other problems.
If you could prove the twin prime conjecture, that means you have figured out something about prime numbers. If you found a pattern or formula for figuring out the distances between twin primes, that means you're able to identify certain primes.
And maybe that formula becomes the basis for a formula for the distance between primes in general. And once you have that formula, you have a simple way to test for primes.
And I'm not saying that is happening or will happen or is even possible, but that would be a reason to study proofs of the twin prime conjecture.
Your question already garnered a ton of answers, but I still want to try: the distribution of primes has a deep connection to the Riemann Zeta function. The RZF and its hypothesis make up one of the Millennium Problems, but beyond (or maybe because of) that, the RZF connects number theory and complex analysis (and probably more, such as group theory, but I don't want to risk saying anything incorrect) at a very deep level.
So picking off little morsels of the bigger problem is kind of the first steps towards climbing this metaphorical mountain.
It's the classic Feynman quote (it was about physics, but applies to mathematics): it's like sex, it may give us some practical results but that's not why we do it ;)
If you wonder why mathematicians are so obsessed with prime numbers, you can think of them as one of the most fundamental mathematical patterns, arising from the interplay of addition and multiplication, that we are just barely skimming the surface of understanding.
You could almost equally ask why physicists are so obsessed with elementary particles.
there is actually something rather special about the questions mathematicians focus on, but the jargon and technical detail get in the way. ultimately, the reason prime numbers are so fascinating is that they turn up in weird places, with strange properties, and connect wildly different branches of mathematics together.
the Riemann Hypothesis is an example of this. it starts from a pair of observations:
1. the sum of the logarithms of the prime numbers less than n approximates the graph of y = x to absurd precision. there's no obvious reason why this should be true, but the result just kind of falls out if you attempt to estimate the sum, even in really crude ways. (a quick numeric check follows after this list.)
2. the sum of the reciprocals of the squares was calculated by Euler centuries ago. the way he solved the problem was to connect two different ways of calculating sin(x): the Taylor series, and a new idea he developed which you may have tried in a calculus class, namely writing sin(x) as an infinite product over its roots. one result that falls out of comparing the infinite sum with the infinite product is that you can factor sums like sum(1/n^2) over all the integers into a product over the reciprocals of the primes.
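here's the first observation checked numerically (my own choice of cutoffs, using sympy for the primes); watch the ratio creep toward 1:

```python
import math
from sympy import primerange

# chebyshev's theta: the sum of log(p) over primes p < n, divided by n.
# observation (1) says this sum behaves like the line y = x
for n in (10**3, 10**4, 10**5, 10**6):
    theta = sum(math.log(p) for p in primerange(2, n))
    print(f"n = 10^{round(math.log10(n))}: theta(n)/n = {theta / n:.4f}")
```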
this second fact leads very directly to a proof of the first through some clever algebra, but it's far more general. in fact, we not only get an exact value for sum(1/n^2), but exact values for all sums of the form sum(1/n^s) where s is a positive even integer (odd powers remain an unsolved problem to this very day), and these sums are hiding in the Taylor expansion of sin(x). so somehow we've taken a seemingly arbitrary question about a certain kind of sum over the integers, connected it to a product over the primes, and tied both to an analytic function, sin(x). already bizarre.
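here's that sum/product bridge in miniature (the cutoff is an arbitrary choice of mine): a sum over all the integers and a product over just the primes chase the same value, pi^2/6.

```python
import math
from sympy import primerange

N = 100_000
basel_sum = sum(1 / n**2 for n in range(1, N + 1))

# euler's product: multiplying 1 / (1 - p^(-2)) over the primes
# rebuilds sum(1/n^2), because every n factors uniquely into primes
euler_prod = 1.0
for p in primerange(2, N):
    euler_prod *= 1 / (1 - p**-2)

print(f"pi^2/6           = {math.pi**2 / 6:.10f}")
print(f"sum over n <= {N} = {basel_sum:.10f}")
print(f"prod over p < {N} = {euler_prod:.10f}")
```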
but it gets even stranger. if you let s vary as a complex number, and not just a real number > 1, this very same approach assigns values to the sums for negative powers as well. remember, we're calculating sum(1/n^s): how can we get any result but infinity for s = -2, i.e. sum(n^2)? more perplexing still, the value you can calculate from algebra and calculus over the complex numbers, following Euler's method and Riemann's extension of it, is 0 for every negative even power. this is also where the notion that the sum of the integers is -1/12 comes from. this is the Riemann Zeta function.
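you can poke at that analytic continuation directly; a tiny sketch with mpmath (the library choice is mine):

```python
from mpmath import zeta

print(zeta(2))   # pi^2/6 ~ 1.6449..., where the series actually converges
print(zeta(-1))  # -1/12: the infamous "sum of all the integers"
print(zeta(-2))  # 0: a trivial zero, as at every negative even integer
print(zeta(-4))  # 0
```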
so now we've connected a seemingly arbitrary sum over the integers to an analytic function over the complex numbers via the prime numbers. and stranger still, if we connect this back to the first point, we discover that this Zeta function tells us the order of the error term on the estimate for sum(log p, p < x), the sum that looks like the graph of y = x. if every nontrivial zero of the Zeta function lies on the line Re(s) = 1/2, we get an exact formula for that sum with an error term of essentially O(x^(1/2)), up to logarithmic factors, and we can even calculate the constant factors. this pins down all the prime numbers: it in fact gives you a formula to calculate them, not as a recursive function, but directly. all because some asshole in the 1600s posed a problem no one could answer until Euler: what's sum(1/n^2)?
so this is why prime numbers fascinate mathematicians. nothing we understand about them is obvious. but being able to answer seemingly arbitrary questions about them unlocks whole fields of mathematics. and new techniques in completely unrelated fields of mathematics suddenly answer deep questions about the structure of the natural numbers.
if this topic interests you, I highly recommend this series of videos from zetamath: https://www.youtube.com/watch?v=oVaSA_b938U&list=PLbaA3qJlbE... -- he walks you through the basic algebra and calculus that lead to all of these conclusions. I can't recommend this channel enough.
I have an MSc in math but haven't managed to grasp modular forms, which is needed for the proof.
If anybody has a learning path or a primer recommendation for modular forms (assuming formal math education), I'd be very interested in reading more about it.
That said, I doubt the subject can be made accessible to everybody.
As a first pass, check out the pop history book called Fearless Symmetry by Ash and Gross.
Next, and perhaps I shouldn't be suggesting it, Serre's A Course in Arithmetic. Despite its reputation for terseness, it is a great book by one of history's great mathematicians and worth every sweat and tear spent over its short number of pages.
Yet another way is to watch the lectures by Richard Borcherds (who won the Fields Medal for the moonshine conjecture) on YouTube.
Finally, since this is Hacker News, check out William Stein's Modular Forms: A Computational Approach (PDF online).