Hacker News

Interacting with an LLM (especially one running locally) can do something a therapist cannot--provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself, doing the best it can, without expecting anything in return.





The word "can" is doing a lot of work here. The idea that any of the current "open weights" LLMs are outside the capitalist framework stretches the bounds of credulity. Choose the least capitalist of: OpenAI, Google, Meta, Anthropic, DeepSeek, Alibaba.

You trust Anthropic that much?


I said the interaction exists outside of any financial transaction.

Many dogs are produced by profit motive, but their owners can have interactions with the dog that are not about profit.


Dogs aren't RLHF'd and fine-tuned to enforce behaviors designed by companies.

With respect, I think you should probably re-examine the meaning of the words you use here. You use words in ways that don't match their established definitions.

It would meet the objective definition if you replaced 'capitalist' with 'socialist', which may be what you meant, but that's merely an observation on my part, not what you actually said.

The entire paragraph is quite contradictory and lacks truth; by extension it is entirely unclear what you mean, and it appears that you are confused when you use words and make statements that can't meet their definitions.

You may want to clarify what you mean.

For it to be 'capitalist' true to its definition, you would need to be able to achieve profit with it in purchasing power; but the outcomes of the entire business lifecycle resulting from this, taken as a whole, instead destroy that ability for everyone.

The companies involved didn't start on their merits seeking profit; they were funded by non-reserve debt issuance or money-printing, which is the state picking winners and losers.

If they were capitalist they wouldn't have released model weights to the public. The only reason you would free a resource like that is if your goal were something not profit-driven (i.e., contagion toward chaos to justify control, or, succinctly, totalism).


Wow, Kettle meet Pot! Truly hilarious, "no true capitalist".

   The entire paragraph is quite contradictory, and lacks truth, and by extension
   it is entirely unclear what you mean, and it appears like you are confused
   when you use words and make statements that can't meet their definition.
Please, either stop using that low parameter LLM, go for the standup gig, or get treatment.

> Please, either stop using that low parameter LLM, go for the standup gig, or get treatment.

This is not an acceptable comment on Hacker News, and along with other comments in this thread, you've broken several guidelines. Please take a moment to read the guidelines and make an effort to use the site as intended in future.

https://news.ycombinator.com/newsguidelines.html


I think you are right. On one hand, we have human beings with their own emotions, and those emotions can negatively impact other people's emotions.

On the other hand, we have a probabilistic, non-deterministic model, which can give five different pieces of advice if you ask five times.

So who do you trust? Until the determinism of LLMs improves and we can debug/fix them while keeping their behavior deterministic across new fixes, I would rely on human therapists.
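The "five different answers to the same question" behavior comes from sampling the next token from a probability distribution rather than always taking the most likely one. A toy sketch of the difference (the token names and logit values here are made up for illustration; real LLMs work over huge vocabularies, but the sampling mechanics are the same):

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by temperature; lower temperature sharpens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(tokens, logits, temperature, rng):
    # Temperature 0 means greedy decoding: always pick the highest-logit token.
    if temperature == 0:
        return tokens[logits.index(max(logits))]
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates for some prompt, with made-up logits.
tokens = ["rest", "exercise", "journal", "meditate", "socialize"]
logits = [2.0, 1.5, 1.0, 0.5, 0.1]

# Greedy decoding: the same "advice" on every run, regardless of seed.
greedy = {pick_token(tokens, logits, 0, random.Random(i)) for i in range(5)}

# Sampled decoding at temperature 1.0: the answer varies run to run.
sampled = [pick_token(tokens, logits, 1.0, random.Random(i)) for i in range(5)]
```

Greedy decoding is reproducible but repetitive; sampling gives varied, more natural-sounding output at the cost of run-to-run determinism, which is the trade-off the comment above is pointing at.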


How is it possible for a statistical model calculated primarily from the market outputs of a capitalist society to provide an interaction outside of the capitalist framework? That's like claiming to have a mirror that does not reflect your flaws.

If I understand what they're saying, the interactions you have with the model are not driven by "maximising eyeballs/time/purchases/etc". You get to role-play inside a context window, and if it goes in a direction you don't like, you reset and start over. But during those interactions, you control whatever happens, not some third party that may have ulterior motives.

> the model is not driven by "maximising eyeballs/time/purchases/etc".

Do you have access to all the training data and the reinforcement learning they went through? All the system prompts?

I find it impossible for a company seeking profit not to build its AI to maximize what it wants.

Interact with a model that's not tuned and you'll see the stark difference.

The fact of the matter is that we have no idea what we're interacting with inside that role-play session.


The same way an interaction with a purebred dog can be. The dog may have come from a capitalist system (dogs are bred for money, unfortunately), but your personal interactions with the dog are not about money.

I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"--just a transaction--a worthwhile transaction, but still a transaction.


It’s also very common for people to get therapy at free or minimal cost (<$50) when utilizing insurance. Long term relationships (off and on) are also quite common. Whether or not the therapist takes insurance is a choice, and it’s true that they almost always make more by requiring cash payment instead.

The dog's intelligence and personality were bred long before our capitalist system existed, unlike whatever nonsense an LLM is trying to sell you.

I think operating under the assumption that AI is an entity being itself, and comparing it to dogs, is not really accurate. Entities (not in the legal sense, but the general one) are beings, living beings capable of emotion, thought, and will, are they not? Whether dogs are that could be up for debate (I think they are, personally), but language models just are not. The very notion that they could be any kind of entity is directly tied to the valuations of the companies that created them; it is part of the hype and the capitalist system, and I, again personally, don't think anyone could ever turn that into something that somehow ends up against capitalism just because the AI can't directly want something in return from you.

I understand the sentiment and the distrust of the mental health care apparatus: it is expensive, it is tied to capitalism, and it depends on trusting someone who is being paid to influence your life in a very personal way. But it's still better than trusting your well-being to the judgment of a conversational simulation that is incapable of judgment, incapable of knowing you and observing you (not just what is written, but how you physically react to situations or to the retelling, like tapping your foot or disengaging), and incapable of understanding nuance.

Most people would be better served talking to friends (or doing their best to make friends they can trust if they don't have any), and I would argue that people supporting people who are struggling is one way of truly opposing capitalism.

Feel free to substitute in whatever word you think matches my intent best then. You seem to understand my intent well enough--I'm not interested in discussing the definition of individual words though.


