No one is focused on those; they're much more focused on rote problems.
You might find them used to accelerate research math by helping mathematicians with lemmas, checking for errors, and formalizing proofs. That seems realistic in the next couple of years.
There are some AI researchers, like Christian Szegedy, who predict that AI will be a "superhuman mathematician" capable of solving problems like the Riemann hypothesis by the end of 2026. I don't take it very seriously, but that kind of prognostication is definitely out there.
Link to this prediction? Szegedy's famous old prediction was IMO gold by 2026, and that one is basically confirmed, right? Personally, I think 2027/2028 is a break-even bet for a superhuman mathematician.
Side point: there is no existing AI that can prove, for example, the Poincaré conjecture, even though it has already been proved. The details of the proof are far too dense for any present chatbot like ChatGPT to handle, and nothing like AlphaProof can do it either, since the scope of the proof is well beyond the reach of Lean or any other formal theorem-proving environment.
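To give a sense of the gap: the statements formal environments handle routinely today look more like the toy lemma below (a sketch in Lean 4 assuming Mathlib is available; the statement and tactic choices are illustrative, not taken from any particular formalization).

```lean
-- A toy lemma of the kind formal provers dispatch routinely today:
-- the sum of two even naturals is even. (Lean 4 with Mathlib;
-- `obtain` and `omega` are standard tactics there.)
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k := by
  obtain ⟨a, ha⟩ := hm    -- unpack the witness for m
  obtain ⟨b, hb⟩ := hn    -- unpack the witness for n
  exact ⟨a + b, by omega⟩ -- linear arithmetic closes m + n = 2 * (a + b)
```

Perelman's argument, by contrast, runs through hundreds of pages of geometric analysis that no current formal library comes close to covering.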
What does this even mean? Surely an existing AI could regurgitate all of Perelman's arXiv papers if we trained it to do that. Are you trying to make the case that the AI doesn't understand the proof it's giving? Because then I think there's no clear goal line.
You don't even need AI to regurgitate Perelman's papers; you can do that in three lines of Python.
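Something like this sketch, say, which fetches and prints the abstract page of Perelman's first Ricci flow preprint (the arXiv identifier math/0211159 is real; the snippet assumes network access and is just the quip made literal):

```python
import urllib.request  # standard library

# "Regurgitate" Perelman verbatim: fetch and print the arXiv page for math/0211159.
print(urllib.request.urlopen("https://arxiv.org/abs/math/0211159").read().decode("utf-8", errors="replace"))
```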
What I meant is that there's no AI you can ask to explain the details of Perelman's proof. For example, if there's a lemma or a delicate point in the proof that you don't understand, you can't ask an AI to clarify it.