Hacker News

This is probably correct, but I'd prefer that my family didn't read the conversations I've had; even if I'm not saying anything too private, it feels too intrusive (it'd be a bit like reading my inner thoughts).



It's interesting that you're so trusting of strangers knowing your inner thoughts (OpenAI) but not your family


How could I look my wife in the eye, or expect my kids to grow up happy, if they knew their dad doesn't know how to use a regex to detect emojis in a string?
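For what it's worth, one rough sketch of that regex (in Python; the Unicode ranges below cover only a few common emoji blocks and are an assumption, not the full Unicode emoji data):

```python
import re

# Match a few common emoji blocks. Full coverage would need the complete
# Unicode emoji data files; treat these ranges as illustrative.
EMOJI_RE = re.compile(
    "["
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "\U00002700-\U000027BF"  # dingbats
    "]"
)

def contains_emoji(s: str) -> bool:
    """Return True if the string contains at least one emoji from the ranges above."""
    return EMOJI_RE.search(s) is not None
```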


I hope there is more going on behind those eyes than not being a regex expert


I don't want my family to know I spent 3 hours chatting about the Holy Roman Empire.


What would change if they knew?


Why the questions? It is no one else's business why they want that level of privacy. Is it your intent to convince them that they don't need privacy?


> Is it your intent to convince them that they don't need privacy?

Quite the opposite actually. My intent is to shed light on the fact that sharing information with OpenAI is not private. And you should not do that with information that you wouldn't even share with people you trust.


> Quite the opposite actually. My intent is to shed light on the fact that sharing information with OpenAI is not private. And you should not do that with information that you wouldn't even share with people you trust.

I'm not OP, but I think you're missing the point.

Privacy and trust aren't really a 1D gradient; they're probably planar or even spatial, if anything.

Personally I'd be more willing to trust OpenAI with certain conversations because the blowback if it leaves their control is different than if I have that same conversation with my best friend and it leaves my best friend's control. The same premise underlies how patients can choose who to disclose their own health matters to, or choose who their providers can disclose to.

Same reason behind why someone may be willing to post a relationship situation to r/relationship_advice and yet not talk about the same thing with family and friends.


> Same reason behind why someone may be willing to post a relationship situation to r/relationship_advice and yet not talk about the same thing with family and friends.

I ask that you consider the people who use Reddit and the people who run Reddit independently. The people who use Reddit are not in a position of power over someone who asks for advice. The people who run Reddit on the other hand, are in a position of power to be able to emotionally manipulate the person who asked for advice. They can show you emotionally manipulative posts to keep your attention for longer. They can promote your post among people who are likely to respond in ways that keep you coming back.

OpenAI has a similar position of power. That's why you shouldn't trust people at either of those companies with your private thoughts.


You're assuming power comes with a guarantee of its use. OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model (I'm using security language here, but I doubt most people go so far as to formally threat model these interactions; it's mostly intuitive at this point).

Your family has limited power in the grand scheme of things, but the likelihood that they may leverage what power you give them over you is much higher.

The IRS has vast power and is likely to use it against you, which is why tax fraud is usually a bad idea.

Hence "planar" rather than linear.


> OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model

I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replace OpenAI with something like Facebook or Youtube in your head.

Just because they are using their power on many individuals doesn't mean that they are not using their power against you too.


> I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replace OpenAI with something like Facebook or Youtube in your head.

> Just because they are using their power on many individuals doesn't mean that they are not using their power against you too.

Yeah but at this point you're identifying individual risks and grasping at straws to justify manipulating* everyone's threat model. You can use that as your own justification, but everyone manages their own personal tolerance for different categories of risks differently.

*Also, considering the published definition of manipulation is "to control or play upon by artful, unfair, or insidious means especially to one's own advantage," I think saying that "OpenAI is doing individualized manipulation against everyone who uses their systems" is an overreach that requires strong evidence. It's one thing if companies use dark UX patterns to encourage product spend, but I don't believe (from what I know) that OpenAI is at a point where they can intake the necessary data both from past prompt history and from other sites to do the personalized, individualized manipulation across future prompts and responses that you're suggesting they're likely doing.

Considering your latest comment, I'm not sure this discussion is receiving the good faith it deserves anymore. We can part ways, it's fine.


Too much discussion about the Holy Roman Empire over dinner? People talk to get things off their mind sometimes, not to pursue a conversation indefinitely.


My point was not that they should talk about the Holy Roman Empire with their family, but that they shouldn't share information with strangers that they wouldn't share with their family.

If you don't want your family to know something, you shouldn't tell it to OpenAI either.


> If you don't want your family to know something, you shouldn't tell it to OpenAI either.

Yeah, I think this is an over-reduction of personal privacy models, but can you tell me why you believe this?


The reason you wouldn't say something to someone is because you are afraid of the power that you give people along with that knowledge.

Your family is in a position of power, which is why it can be scary to share information with them. People at OpenAI are also in a position of power, but people who use their services seem to forget that, since they're talking to them through a computer that responds automatically.


Converging threads here: https://news.ycombinator.com/item?id=38956734

tldr: power (or if you want, impact) is the linear dimension, likelihood adds a second dimension to the plane of trust.


In practice, likelihood correlates directly with power. Perhaps there's even causation (power corrupts?)


Some people are more responsible than others.

For example, one's spouse typically has a lot of power, but hopefully a low likelihood in practice.


I need data on that. I haven't seen that in practice.


This is silly. It's not like OpenAI is going to find your family's contact info, then personally contact them and show them what you've been talking about with ChatGPT. It's just like another post here comparing this to writing a post on /r/relationshipadvice with very personal relationship details: the family members are extremely unlikely to ever see that post, the post is under a pseudonym (and probably a throwaway too), and the likelihood that someone is going to figure out the identity of the poster and seek out their family members to show the post to them is astronomically small.


They would know that it was neither holy, nor Roman, nor an empire. Discuss.


Is that truly interesting? OP does not have to care about what the AI thinks of him. OP does not have to care about accidentally offending or hurting the AI either. OP does not have to care about whether the AI finds him silly or whatever.

Normal humans care about all of those with their families.


AI is a tool controlled by people. In this case, people who are not OP.


So? That doesn't invalidate the point of the comment you replied to.

To give another example: The cashier at the supermarket knows when I'm buying condoms, but that doesn't mean I want to tell my parents.

And neither would I want to know as a parent, when or whether my kids order bondage gear on Amazon.

It's not just about my information going to other people, but also keeping certain information of other people from reaching me.


Fine then. You don't want your family to find out about your love for the Roman Empire. But you are a programmer, yes? So make an app that's just a wrapper for the ChatGPT API you're paying for and distribute it to your family's phones. They'll use your OpenAI API key, and each will have their own little ChatGPT 4 to query. Have fun.
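A minimal sketch of such a wrapper, assuming a key in the OPENAI_API_KEY environment variable and the standard Chat Completions endpoint (the model name is illustrative):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON body for a single-turn chat completion request."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    """Send a prompt using your own API key, so each family member gets
    their own session billed to your account rather than an OpenAI login."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```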



