
IMO it seems almost epistemologically impossible that LLMs following anything even resembling the current techniques will ever be able to comfortably outperform humans at genuinely creative endeavours, because they, almost by definition, cannot be "exceptional".

If you think about how an LLM works, it's effectively going "given a certain input, what is the statistically average output that I should provide, given my training corpus".
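
To make that concrete, here is a minimal toy sketch of the next-token sampling step (the prompt and token scores are invented, purely for illustration):

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick the next token from a toy logit dict, softmax-style."""
        # Softmax with temperature: lower temperature concentrates
        # probability mass on the highest-scoring continuation.
        m = max(logits.values())
        weights = {tok: math.exp((score - m) / temperature)
                   for tok, score in logits.items()}
        total = sum(weights.values())
        probs = {tok: w / total for tok, w in weights.items()}
        # Draw one token in proportion to its probability.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Hypothetical scores a model might assign after the prompt "The sky is"
    toy_logits = {"blue": 5.0, "clear": 3.0, "falling": 1.0, "made of cheese": -1.0}
    print(sample_next_token(toy_logits, temperature=0.7))

Common continuations dominate the draw; a rare, "exceptional" completion only comes up occasionally, and lowering the temperature pushes the output even harder toward the most probable choice.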

The thing is, humans are remarkably shit at understanding just how exceptional someone needs to be to be genuinely creative in a way that most humans would consider "artistic"... You're talking 1 in 1000 people AT best.

This creates a kind of devil's bargain for LLMs where you have to start trading training set size for training set quality, because there's a remarkably small amount of genuinely GREAT quality content to feed these things.

I DO believe that the current field of LLM/LXMs will get much better at a lot of stuff, and my god, anyone below the top 10-15% of their particular field is going to be in a LOT of trouble. But unless you can train models SOLELY on the output of exceptionally high-performing people (and I fundamentally believe there is simply not enough such content in existence to do that), the models almost by definition will not be able to outperform those high-performing people.

Will they be able to do the intellectual work of the average person? Yeah, absolutely. Will they be able to do it 100-1000x faster than any human (no matter how exceptional)? Yeah, probably. But I don't believe they'll be able to do it better than the truly exceptional people.




I’m not sure. The bestsellers lists are full of average-or-slightly-above-average wordsmiths with a good idea, the time and stamina to write a novel and risk it failing, someone who was willing to take a chance on them, and a bit of luck. The majority of human creative output is not exceptional.

A decent LLM can just keep going. Time and stamina are effectively unlimited, and an LLM can just keep rolling its 100 dice until they all come up sixes.

Or an author can just input their ideas and have an LLM do the boring bit of actually putting the words on the paper.


I get your point, but using the best-sellers list as your proof isn't exactly a slam-dunk.

What's that saying? "Nobody ever went broke underestimating the taste of the average person."


I’m just saying, the vast majority of human creative endeavours are not exceptional. The bar for AI is not Tolkien or Dickens, it’s Grisham and Clancy.


IMO the problem facing us is not that computers will directly outperform people on the quality of what they produce, but that they will be used to generate an enormous quantity of inferior crap that is just good enough that filtering it out is impossible.

Not replacement, but ecosystem collapse.


We have already trashed the internet, and really human communication in general, with SEO blogspam, dragged even lower by influencers desperately scrambling for their two minutes of attention. I could actually see quality on average rising, since churning out higher-quality content will now be even easier than producing the word salad I have been wading through for at least the last 15 years.

I am not saying it's not a sad state of affairs. I am just saying we have been there for a while and the floor might be raised, a bit at least.


Yes, LLMs are probably inherently limited, but the AI field in general is not necessarily limited, and possibly has the potential to be more genuinely creative than even most exceptional creative humans.


I loosely suspect that too many people are jumping into LLMs, and I assume real research is being strangled. But to be honest, all of the practical alternatives I have seen, such as Mr Goertzel's work, are so painfully complex that very few can really get into them.



