> Is it your intent to convince them that they don't need privacy?
Quite the opposite, actually. My intent is to shed light on the fact that sharing information with OpenAI is not private, and that you should not give OpenAI information you wouldn't even share with people you trust.
> Quite the opposite, actually. My intent is to shed light on the fact that sharing information with OpenAI is not private, and that you should not give OpenAI information you wouldn't even share with people you trust.
I'm not OP, but I think you're missing the point.
Privacy and trust aren't really a 1D gradient; if anything, they're planar or even spatial.
Personally, I'd be more willing to trust OpenAI with certain conversations, because the blowback if a conversation leaves OpenAI's control is different from the blowback if that same conversation leaves my best friend's control. The same premise underlies how patients can choose who to disclose their own health matters to, or who their providers may disclose them to.
It's the same reason someone may be willing to post a relationship situation to r/relationship_advice yet not talk about the same thing with family and friends.
> It's the same reason someone may be willing to post a relationship situation to r/relationship_advice yet not talk about the same thing with family and friends.
I ask that you consider the people who use Reddit and the people who run Reddit independently. The people who use Reddit are not in a position of power over someone who asks for advice. The people who run Reddit, on the other hand, are in a position to emotionally manipulate the person who asked for advice. They can show you emotionally manipulative posts to keep your attention for longer. They can promote your post among people who are likely to respond in ways that keep you coming back.
OpenAI is in a similar position of power. That's why you shouldn't trust the people at either of those companies with your private thoughts.
You're assuming power comes with a guarantee of use. OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model (I'm using security language here, but I doubt most people go so far as to threat model these interactions; it's mostly intuitive at this point).
Your family has limited power in the grand scheme of things, but the likelihood that they will leverage what power you give them over you is much higher.
The IRS has vast power and is likely to use it against you, which is why tax fraud is usually a bad idea.
> OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model
I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replaced OpenAI with something like Facebook or YouTube in your head.
Just because they are using their power on many individuals doesn't mean they are not also using it against you.
> I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replaced OpenAI with something like Facebook or YouTube in your head.
> Just because they are using their power on many individuals doesn't mean they are not also using it against you.
Yeah, but at this point you're identifying individual risks and grasping at straws to justify manipulating* everyone's threat model. You can use that as your own justification, but everyone manages their own tolerance for different categories of risk differently.
*Also, given that the dictionary definition of manipulation is "to control or play upon by artful, unfair, or insidious means especially to one's own advantage," I think saying that "OpenAI is doing individualized manipulation against everyone who uses their systems" is an overreach that requires strong evidence. It's one thing for companies to use dark UX patterns to encourage product spend, but I don't believe (from what I know) that OpenAI can yet intake the necessary data, both from past prompt history and from other sites, to do the personalized, individualized manipulation across future prompts and responses that you're suggesting they're likely doing.
Considering your latest comment, I'm not sure this discussion is receiving the good faith it deserves anymore. We can part ways; that's fine.