Of course AI draws on past experiences to build new things.
This is also what humans do.
There is no invisible well of "creativity" that humans draw from that AI can't.
In fact, in areas that AI has "solved", such as chess, human experts describe AI as extremely "creative" beyond their understanding.
"Creative" is just a word for connecting existing concepts in ways that seem novel. AI does this when it solves a new problem.
AI clearly has the potential to make better connections, much faster than humans, in pretty much all domains.
There's only one piece "missing" that is already technically possible, but expensive: giving an LLM the ability to "learn".
In the near future, all creative work will be produced by AI. This will likely come before all work is produced by AI, since creative work typically demands less precision.
Humans can seek out new experiences optimized for what they want to learn, which is fundamentally different.
Creativity is not just connecting existing concepts. The pinnacle of creativity is inventing new concepts ex nihilo, which is something that every human does through self-directed interaction with the environment.
Instead of giving a trite dismissal, give a substantiated rebuttal. I don't see any metaphysical claim in my comment. It's objectively true that humans will seek out or create experiences that allow them to gain better understanding, and it's also objectively true that humans can create entirely original concepts, so long as you agree that there is such a thing. If you don't think that exists, then your entire comment is pointless, because then you're arguing by hidden definition, which I'd rather assume you aren't.
You asserted that "creativity" is a process by which the human brain is able to create something from nothing.
This is a ridiculous claim that doesn't in any way comport with any reasonable understanding of reality.
Why would it deserve anything more than a "trite dismissal"?
Anyhow, the thing is, every unique human insight is predicated on connecting patterns in available data, using prior experiences as a rubric.
There's no reason to believe an LLM couldn't easily be superior at this process if you asked it to model a very, very, very smart person and the model was sufficiently advanced to do so.
Creativity is a process by which humans can create concepts, some of which can be created without being a combination of previous concepts, yes.
It's fundamentally untrue that human insight is merely a product of available data; the entire field of mathematics is a great example of concepts that have a substantially original source and aren't derived from any data.
The idea that all humans can do is observe patterns in data and interpolate is patently ridiculous, and is readily contradicted by the vast swathes of concepts created through reason alone that far predated their observation in any data.
For example, from which naturally occurring, exogenous dataset did humans discover prime numbers? None: we invented numbers from the concept of an object, which we also invented; we invented multiplication, division, and addition; and from those we invented the concept of primality (which, despite relying on other concepts, is not a mere amalgamation and requires substantial creation to come up with). We didn't need a dataset to infer primes from.
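To make that concrete, here's a minimal Python sketch (my own illustration, not anything from the discussion above) of primality built entirely out of the invented concepts of number, multiplication, and division:

```python
def is_prime(n: int) -> bool:
    """Primality, constructed from the invented concepts of number,
    multiplication, and division: n is prime iff n > 1 and no smaller
    number greater than 1 divides it evenly."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:  # checking divisors up to sqrt(n) suffices
        if n % divisor == 0:  # "divides evenly" is itself a defined concept
            return False
        divisor += 1
    return True

# No exogenous dataset is consulted; the concept comes first, instances follow.
print([n for n in range(2, 30) if is_prime(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```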
Note that while NNs can come up with functions and might in theory internally come up with some form of primality test, LLMs wouldn't be able to organically manipulate and explain the concept of primality if it was completely absent from their datasets, even if it turned out to be useful to model something, because they don't have access to the process by which they arrive at their results. Humans do when working at the semantic level.
Note that even if you take a hard-line Platonist approach instead, replace "create" with "discover from the world of forms through the mind's eye", and you get the same argument.
You're asserting a type of "of the gaps" argument, the "gaps" being the machinations of intellect that produce ideas.
Just because you personally can't explicitly draw the line between ideas and their progenitors doesn't mean causality breaks down and the line doesn't exist.
Every aspect of your reasoning is flawed: you're not starting from a rational place and going in a reasonable direction, and you're making all kinds of critical errors while using uncertainty as a smokescreen.
I've already given you a concrete example disproving your metaphysical woo: in chess and other "solved" domains too complex for humans to fully conceptualize, AI is capable of producing ideas that humans find "creative".
This is because "creativity" means "problem-solved a novel problem based on the rubric of prior problems".
It's a form of vector-based learning. Humans just happen to be far more "general" than a chess AI, but in its own vector-space, a chess AI is vastly more creative than a human, and can produce vastly more complex novel solutions.
If you accept that a branch-and-bound chess AI can be creative, do you think that a brute-force chess AI, given sufficient time, can be creative? If you do, we simply don't have a definition of creativity in common, and that's where the discussion should be headed; if you don't, then I don't understand the point of your example.
My position on the subject is very simple: chess players are assigning meaning and intentions to the chess solution that the machine doesn't understand. A branch-and-bound search isn't AI, even when you use a neural network to skip a few depths. Just because a human finds the result creative doesn't mean the process actually was.
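To be concrete about what I mean by branch and bound, here's a minimal sketch, assuming a toy take-away game in place of chess (a full engine won't fit in a comment): alpha-beta search just enumerates moves and prunes lines that provably can't change the outcome.

```python
# Toy game: players alternately remove 1-3 stones; taking the last stone wins.
# The search below is pure mechanism -- enumerate, score, prune. Swap the
# terminal scoring for a neural-network evaluation and you get the modern
# engine recipe, but no "idea" is formed anywhere in the process.

def alphabeta(stones: int, alpha: int, beta: int, maximizing: bool) -> int:
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if maximizing:
        value = -2
        for take in (1, 2, 3):
            if take <= stones:
                value = max(value, alphabeta(stones - take, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:  # bound: the opponent would never allow this line
                    break
        return value
    else:
        value = 2
        for take in (1, 2, 3):
            if take <= stones:
                value = min(value, alphabeta(stones - take, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:  # bound: we would never allow this line
                    break
        return value

print(alphabeta(10, -2, 2, True))  # 1: the player to move can force a win
```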
Humans fundamentally don't and can't use simple vector learning with real-world data as the input vector, because humans use metacognition to come up with representations and models even before any data for which they are useful exists.
Again, there's no metaphysics here. The most abstract tool I'm using is that of a concept/idea, and it's not metaphysical at all; it can be understood as a tangible mental pattern. Unlike LLMs, we can create and understand patterns without needing to have seen them anywhere. It's a fundamental difference.