Yeah, it's just the fact that you pasted in an AI answer, regardless of how on point it is. I don't think people want this site to turn into an AI chat session.
I didn't downvote, I'm just saying why I think you were downvoted.
That's reasonable. I cut back the text. On the other hand I'm hoping downvoters have read enough to see that the AI-generated comment (and your response) are completely on-topic in this thread.
I use LLMs as tools to learn about things I don't know, and they work quite well in that domain.
But so far I haven't found that it helps advance my understanding of topics I'm an expert in.
I'm sure this will improve over time. But for now, I like that there are forums like HN where I may stumble upon an actual expert saying something insightful.
I think that the value of such forums will be diminished once they get flooded with AI generated texts.
Of course the AI's comment was not insightful. How could it be? It's autocomplete.
That was the point. If you back up to the comment I was responding to, you can see the claim was: "maybe people are doing the same thing LLMs are doing". Yet, for whatever reason, many users seemed to be able to pick out the LLM comment pretty easily. If I were to guess, I might say those users did not find the LLM output to be human-quality.
That was exactly the topic under discussion. Some folks seem to have expressed their agreement by downvoting. Ok.
I think human brains are a combination of many things. Some part of what we do looks quite a lot like autocomplete over our previous knowledge.
Other parts of what we do look more like a search through the space of possibilities.
And then we act and collaborate and test the ideas that stand against scrutiny.
All of that is in principle doable by machines. The things we currently have and call LLMs seem to mostly address the autocomplete part, though they are beginning to be augmented with various extensions that let them take baby steps on other fronts. Will they still be called large language models once they have so many other mechanisms beyond mere token prediction?
We don't care what LLMs have to say. Whether you cut back some of it or not, it's a low-effort waste of space on the page.
This is a forum for humans.
You regurgitating something you had no hand in producing, which we could prompt for ourselves, provides no value here. We could all spam LLM slop in the replies if we wanted to, but that would make this site worthless.
:)