Hacker News

> LLM uses delve more, delve appears in training data more, LLM uses delve more...

Some day we may view this as the beginnings of machine culture.
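The quoted loop can be sketched as a toy simulation. Every number here is invented for illustration: each "generation" retrains on a corpus that mixes human text with the previous model's output, and the model is assumed to over-produce a pet word like "delve" by a fixed factor.

```python
# Toy model of the "delve" feedback loop: a word the model slightly
# over-produces gains training-data share with each generation, because
# model output is mixed back into the next generation's corpus.
# The amplification and mix factors are assumptions, not measured values.

def next_share(share, amplification=1.5, mix=0.5):
    """Corpus share of a word after one model generation.

    The new corpus blends human text (share unchanged) with model
    output, where the model over-produces the word by `amplification`.
    """
    model_share = min(1.0, share * amplification)
    return (1 - mix) * share + mix * model_share

share = 0.001  # starting frequency of the word in the corpus
history = [share]
for _ in range(10):
    share = next_share(share)
    history.append(share)

# the word's corpus share drifts steadily upward across generations
print(history[0], history[-1])
```

Under these made-up parameters the share compounds by 25% per generation, which is the "LLM uses delve more, delve appears in training data more" spiral in miniature.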




Oh no, it's been here for quite a while. Our culture is already heavily glued to the machine. The way we express ourselves, the language we use, even our very self-conception originates increasingly in online spaces.

Have you ever seen someone use their smartphone? They're not "here," they are "there." Forming themselves in cyberspace -- or being formed, by the machine.


I think they meant culture in the sense of knowledge that gets passed down from one generation to the next. Not a human culture of using machines, but a machine culture of using human languages.


Consider that the algorithm cannot evolve without human interaction. That's my point: it's a symbiote to us. If you consider "weights in the Instagram recommendation algorithm" to be "the machine", then what we're talking about here has been happening for a long time now, across many generations, with each entity influencing the other.

I don't think we'll have true machine culture until we have fully autonomous agents in the wild, interacting with the world independently on their own terms. Right now the substrate is text, which comes from a human mind -- it does not arise naturally from nothing. So the machine is a symbiote for now, until we solve some difficult robotics problems.


Hm, it's probably true that recommendation algorithms do something similar already, training on "human likes" that were influenced by the previous generation. But "human language" is a richer medium to carry information.

I don't think you need to be independent or autonomous to develop a culture. And a lot of human culture was passed down over generations without understanding why it worked. We just imitate the behaviour and rituals from our most successful ancestors or role models.

If new LLMs can access the past generation's knowledge of how to please human evaluators, they will use it. It's not a deliberate decision by an "agent", it's just the best text source to copy from. This is a new feedback loop between generations of assistants, and it bypasses whatever the human designer had in mind. Phrases like "it is always best to ask an expert" will pop up just because you tuned the LLM to sound like a helpful assistant, and that's what helpful assistants sound like in the training data. You'd have to actively steer the new generation away from using their ancestral knowledge.

I guess it comes down to what your definition of "culture" is. There is no targeted teaching of the next generation, for example - but is this a requirement? I agree that talking about "machine culture" right now sounds like a stretch, but now I wonder what pieces are actually missing.


Yep I was going for more "the machines have their own culture increasingly independent from ours."


chat is this real?



