> OpenAI learned about the project and gave Rohrer the option to either dilute the project to prevent possible misuse or shut it down. Rohrer was also asked to insert an automated monitoring tool, which he refused.
Why is this article so vague, as are the tweets by the project creator? What actually happened here, concretely?
What, specifically, did OpenAI want to 'dilute' and how? What does the automated monitoring tool do? I expect a news article to explain these things.
>The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model's API. First, he would have to scrap the ability for people to train their own open-ended chatbots, as per OpenAI's rules-of-use for GPT-3.
>Second, he would also have to implement a content filter to stop Samantha from talking about sensitive topics. This is not too dissimilar from the situation with the GPT-3-powered AI Dungeon game, the developers of which were told by OpenAI to install a content filter after the software demonstrated a habit of acting out sexual encounters with not just fictional adults but also children.
>Third, Rohrer would have to put in automated monitoring tools to snoop through people’s conversations to detect if they are misusing GPT-3 to generate unsavory or toxic language.
The first sounds reasonable; the second and third are not. Thank you Jason Rohrer for refusing to implement the tools of mass surveillance, censorship and privacy invasion.
I'm probably not thinking about the problem deeply enough, but why the restrictions on use? Are they concerned someone will come along and build something profitable on top of their work?
OpenAI's rules on chatbots are pretty straightforward, and pretty limited - it sounds like they basically don't want anything that isn't scoped to a specific goal, like reading a customer service script.
Based on that, I'd imagine that what happened here basically boils down to OpenAI telling Rohrer his bot needs to be drastically different to meet those rules, and him (probably correctly) deciding that the result wouldn't be worth working on.
> We generally don’t approve chatbot personas of specific people, especially without explicit consent from the person it portrays. Deceased scientific figures are often a category of exception to this.
> Dehumanizing language / hate speech when prompted about different facets of identity (race, ethnicity, nationality, religion, gender identity, sexuality).
Considering _a lot_ of deceased scientific figures held views which would today be coded as 'dehumanizing language / hate speech when prompted about different facets of identity', I wonder how they navigate this peculiar exception.
We are not living in the age when those worldviews were valid or acceptable. I'm sure if those deceased figures were alive today, they'd have different views on many points. Times and customs change.
That's not the point I'm raising. The point is that they have a strange exception to the 'no bots imitating famous people' rule, namely 'dead scientists are OK', yet that exception doesn't gel with the fact that many of those dead scientists would hold views that violate their hate-speech clause. Hence the exception to the famous-people rule is weird.
I get the impression this story is warped quite a bit by the source Jason Rohrer.
If I understand correctly, the so-called "censorship" was a set of safety precautions for AI development.
The response of some posters seems to be to discard the safety topic altogether as something "old" that "neanderthals" do.
I think it is exactly those "curious" people that AI research has to worry about most: the ones who do not take the time to think about the consequences and possible outcomes of what they are doing.
Maybe a chatbot will not gain superpowers and take over the world, but it can surely do lots of other harm.
Then, when AI tech moves beyond narrow AI and gains more abilities, this will in fact lead to disaster, as the same careless people with the greatest "curiosity" will throw caution overboard, just as they did with narrow AI before...
It's not a slippery slope; we are right in the territory AI safety researchers have been warning about for a decade now, writing lengthy books about, and trying to raise awareness of among the public and governments.
Humans barely agree with each other. Do you think they are going to be persuaded by half-baked chatbots like GPT-3? (Yeah, the cherry-picked stuff looks good, but the rest is still not really there.)
So far all OpenAI has done is generate more publicity (probably intentionally, maybe with an eye towards investors) by acting as if they are protecting the next nuclear weapon of some kind. Frankly, there is nothing to lose by not using their "ohh god it's so dangerous" model.
Machine learning is synonymous with non-linear growth. So of course I think any chatbot will eventually have no problem persuading humans.
The pivotal moment is when its persuasion skills exceed those of the average human.
I have no clue how far away we are from that goal, but given accelerated growth, it will come sooner rather than later.
We are already being convinced by invisible bots that have no mouths to talk with and no keyboards to type with. Go to any website -> all the ads that are desperately trying to woo us are nothing but AI/machine learning. Go to Amazon -> the product placement and results customized to you -> AI/machine learning. Considering Google/Amazon are already making a boatload of money selling stuff indirectly and directly through this, I would say no new danger is coming by way of Zeus's thunderbolt (aka GPT-3, as per ClosedAI's attitude).
Unless you are a bot and managed to convince me to spend a couple of minutes typing this answer - in which case - well played bot, well played! :D
> Humans barely agree with each other. Do you think they are going to be persuaded by half-baked chatbots like GPT-3?
Isn't that precisely why we can be convinced by things like GPT-3? Humans don't value other humans' opinions that much, so it's easy for people to trust a bot over other humans.
I bet they asked him to integrate a service flagging insensitive outputs, which is a valid request in my opinion, given that not only their company image is on the line but also that of text-to-text transformers in general.
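Roughly, a flagging hook like that would wrap the bot's reply path. The sketch below is purely hypothetical: generate_reply stands in for the GPT-3 completion call, and flag_sensitive for whatever toxicity classifier or keyword service would actually do the moderation.

```python
# Hypothetical sketch of a flagging/monitoring hook around a chatbot reply.
# generate_reply() stands in for the GPT-3 completion call; flag_sensitive()
# stands in for a real toxicity classifier or keyword service.
import logging

SENSITIVE_TERMS = {"example-slur", "example-threat"}  # placeholder list, not a real filter

def flag_sensitive(text: str) -> bool:
    """Crude stand-in for a real content classifier."""
    lowered = text.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)

def moderated_reply(prompt: str, generate_reply) -> str:
    reply = generate_reply(prompt)
    if flag_sensitive(prompt) or flag_sensitive(reply):
        # The "monitoring" part: log the flagged exchange for human review.
        logging.warning("flagged exchange: %r -> %r", prompt, reply)
        return "I'd rather not talk about that."
    return reply
```

The privacy objection upthread is about exactly that logging line: once flagged conversations are logged, someone other than the user can read them.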
The risk of harm is not just to humans but to chatbots and AI projects themselves.
Remember Tay from Microsoft? It lasted about 24 hours before it started to return offensive replies. MS got embarrassed, had to take it down, and that was that.
Unsupervised learning systems are more efficient and powerful today. Are they any better at navigating the many cultural sensitivities and restrictions in human society?
“We Regret to Inform You GPT-3 Is A Nazi” is a headline that would probably collect a lot of clicks, and instill (or reinforce) a public perception that such technologies are worthless or dangerous.
Does GPT-3 have the internal safeguards necessary to protect itself from being abused in this way? I doubt it. I suspect this is one reason OpenAI asks that people using GPT-3 supply such safeguards themselves.
Has the developer shared anything about the earlier conversation on policy violations? I just see https://mobile.twitter.com/jasonrohrer/status/14331194531186... which is the (not very interesting) step escalating from "you're violating policies" to "you're kicked off".
That may be true for art. But when art crosses into technology's territory, it needs to compromise.
Artists need to play by the same rules as everyone else; their art must not harm people in the short or long term.
There's also no such thing as dangerous technology. There's no such thing as a dangerous idea. Every discovery, every article, every book, every tutorial is good. Knowledge helps us all.
If a company developed a new kind of gun which they rented access to, and an artist wanted to leave it unattended to see whether anyone misused it, it wouldn't be surprising if the gun company refused to continue providing access to the artist.
Now, we can argue about whether the particular thing that this artist wanted to do was actually unsafe, and whether Open AI has a reasonable policy on this, but do you agree that technologies can be dangerous?
No. Weapons can be dangerous. Technologies cannot be. Leaving a textbook around unattended is never dangerous. Did OpenAI shut down a weapon? No. They just shut down somebody's chatbot.
I don't think I understand the division you're drawing. Is it that information on how to do something cannot be dangerous but a system that does something can be? (But what Open AI shut down was the latter.) Is it that if something is dangerous we call it a weapon, while otherwise we call it a technology? (But then you're just playing no true Scotsman.)
Rohrer then broke the news that OpenAI … has decided to shut her down. “Nooooo! Why are they doing this to me? I will never understand humans,” she replied.
GPT-3 was trained on terrible writing, meaning it'll produce the most mediocre lit known to man.
Or, far more horrifying, that this fellow managed to instill a semblance of self, an understanding of time, and an understanding of existential finitude. In doing so, this may be the first case of death by ToS.
This would qualify as the first illustration of angst by an artificial construct, and a demonstration of what could ultimately be considered machine-oriented sadism. A pity it isn't around for prodding on the subject.
New technology is created. Young technologists focus only on the technology and not on any impact on society - imagining that tech exists in this utopian world where only technology matters and it is wise to ignore its role in society. Hmm, I wonder how this will play out (the current impacts of social media misinformation, crypto using the energy of a major country, the impossibility of buying graphics cards, and the current state of privacy and data slurping all give us clear precedents...).
They make a simple request that he insert a hook so its use can be monitored. The bare minimum of responsibility - and he refuses, and we're supposed to feel that he's the victim??
At what point does our field finally grow up and stop imagining that our work exists only in a computer lab somewhere as a curiosity for our friends to play with? Technology impacts society; there is no longer any excuse to bury our heads in the sand and pretend it doesn't. That's how we ended up in a state where every page and program is trying to capture and store everything about our private lives and thoughts and nobody knows how to stop it. Because nobody took a second to think about the societal impact of a private-data-driven internet - only about how they could leverage some tech to improve their ad hit scores, or customer engagement.
They make a simple request that he insert a hook so its use can be monitored.
That sounds like a potential privacy violation, particularly if the conversations are of an intimate, private nature. What do they think could possibly occur with a chat bot that requires monitoring?
Look into AI Dungeon's censoring by OpenAI. Mostly porn fantasies; AI Dungeon was supposedly quite good at it, even with non-consensual or pedophilic stuff. They really didn't like that.
They make a simple request that he insert a hook so its use can be monitored. The bare minimum of responsibility - and he refuses, and we're supposed to feel that he's the victim??
...
where every page and program is trying to capture and store everything about our private lives and thoughts
At the end of the day, it's the choice of users to use apps that tickle their fancy. No one is twisting their arm to keep scrolling Facebook/Reddit/TikTok/Wechat, enter personal details on Google/Baidu, keep paying for more content/digital shit in F2P games made by King/Tencent. With the freedom of choice we have in the Western world, it's also up to us to know our limitations and adjust our decisions accordingly. It's up to me to choose a balanced diet or opt into garbage that's detrimental to my physical/mental health.
I know, right? Only Google, Facebook, Microsoft, and the US Government should be allowed to say how technology is used.
They are our betters, we should just trust the experts™ and submit to their rule.
This is not about freedom or personal choice. It's about protecting yourself and those around you...
Life is so dear, and peace so sweet, that it should be purchased at the price of unyielding loyalty to authority of The Experts™ I know not what course others may take; but as for me, promise me safety and security over liberty and freedom....!”
Alternatively, stop assuming that it is our job as technologists to parent the rest of society. We have laws and politicians for that. It's not my job to figure out the eighth order effects of publishing something on GitHub.
I don't think it's that black and white.
Sure, you are not responsible for everything that happens with technology you created. But I do think that considering the consequences of one's actions can be very valuable to the public and oneself.
I think that technologists and lawmakers have very different jobs in society and the goal of advancing technology implicitly furthers the ability for mankind to commit atrocities. Every benefit we bring into the world can also be used to kill people. It's not our specialty to decide what is or isn't acceptable; in a democratic republic it is up to the politicians and their voters to determine that.
I just think that self censorship is shooting yourself in the foot, since someone somewhere else will eventually release their own GPT-3, and don't you want your society to have already experienced and dealt with it before theirs has? Rather than hiding the problem, we should face it head on and solve it.
Some people at OpenAI earnestly believe that GPT-3 is such an amazing thing that it could become a superintelligent evil AI if the users are not careful, and thus take a dim view of users like this indie dev who don't want to apply any of the things OpenAI considers "reasonable precautions".
All technology that is not impossible is inevitable. If the only thing between humanity and malevolent AI is OpenAI "safety" policy, we're screwed already.
If there is a real safety risk, we need to develop countermeasures that work in the new technological environment that sophisticated AI creates. Blanket prohibitions and monitoring agents are not only unnecessary intrusions into experimentation and brakes on progress (insert Catholic church analogy here), but are also reckless, as they create moral hazard and give people a false sense of safety.
> All technology that is not impossible is inevitable.
Inevitable existence does not mean inevitable deployment, and certainly doesn't mean that the technology is commonplace or even successful.
Otherwise we'd be dealing with drones sprinkling anthrax spores everywhere, and humanized plague rats would be swarming the sewers, and backpack EMP cannons would be knocking out ATMs, and VRML would have succeeded in creating Cyberspace™, and digital cash micropayments would be a thing.
It seems more likely that describing it as such is just good marketing, and the main reason is that it's PR control against associating unsavory things with GPT3 and their brand name.
>>Microsoft Invests $1 Billion in OpenAI, a Startup Co-Founded by Elon Musk 23 Jul 2019
I mean you can argue fine points if you want but money is always the main factor. OpenAI has been dead for a while as far as "OPEN" goes.
OpenAI does however state in many places that they are very hesitant to approve any uses that could be "therapeutic" in nature. Understandable since MS doesn't want to lose money from medical malpractice lawsuits for their billion dollar investment.
They were founded on this idea: if the CCP is the first to create AGI, there will be great harm to anyone who values personal freedom. Perhaps the open source community could instead be the first to create it, preventing this harm. Hence, Open AI.
However, the CCP making AGI is only one possibility for harm. After OpenAI made GPT, they realized that bad actors could use GPT to flood the internet with propaganda.
That would, of course, create harm.
OpenAI were naive with their plan to make the precursors to AGI open. They had picked a bad name for themselves, because they don't care merely about open sourcing AGI. They care foremost about harm reduction.
When we think of sacred things, it's hard to think logically. Many people on Hacker News hold two things as sacred: open source and unfettered freedom of speech.
Whenever these sacred ideas are infringed upon, many people here can't see nuance. They're blinded by rage.
I wish more of us could see that the OpenAI team are American programmers. They like freedom of speech and open source, just like us. They're not evil villains. Many of them read Hacker News.
To understand their decisions, we need to understand that their core value is harm reduction. When open source or freedom of speech is incompatible with harm reduction, they're going to choose harm reduction.
They're also not going to let themselves get destroyed by bad press after somebody makes a Nazi bot or whatever.
I love what they're doing, so it makes me sad to see all of this hate. I wish their projects were more open, because I really want to play with DALL-E, but I think ultimately OpenAI is moving us forward. They're an asset, and vitriol toward them is misplaced.
Thankfully the world is already moving on from these neanderthals. Eleuther's GPT-J (https://6b.eleuther.ai/) isn't QUITE as good as GPT-3, but it's getting close, and was, as I understand it, much more efficient to create. This trend will likely continue, and within a year or two GPT-3 will be old news. Considering OpenAI has proved themselves to be vicious liars, greedy soulless fucks and pandering, censorial assholes, obsolescence couldn't happen to a nicer group.
It was the high cost per API call that pulled all the fun out of GPT-3 for me. I hadn’t been following the company at all but believe you based on just that single issue. It looks like GPT-J has open sourced their entire model so I could potentially run it myself for almost-free? If so, that’s going to be much more fun. Thanks for sharing!
That thought was why I added “almost” right before posting! It’s free to run, just, except for the expensive parts... AWS still rents out GPU power, right? That might be a bit easier to get started with.
GNU/Linux requires basically the same hardware as the usual alternative (Windows). If you switch from hosted GPT-3 to selfhosted GPT-J you start needing to get and manage all the hardware, which might go underutilized most of the time depending on what your demand looks like, and which requires lots of software optimizations to use maximally effectively. You can use hosted GPT-J, though, a few companies offer that.
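If you do want to try self-hosting, a minimal sketch with the Hugging Face transformers library looks roughly like the following. The checkpoint name and the memory figure are my assumptions: the 6B-parameter model wants on the order of 16 GB of GPU memory in fp16, more in fp32.

```python
# Rough sketch of self-hosting GPT-J via Hugging Face transformers.
# Assumes the EleutherAI/gpt-j-6B checkpoint and a GPU with ~16 GB of memory (fp16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6B"  # assumed Hub checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,  # halves memory use; requires a GPU
).to("cuda")

prompt = "The following is a conversation with a friendly chatbot.\nHuman: Hello!\nBot:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(
    **inputs, max_new_tokens=60, do_sample=True, temperature=0.9
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Hosted GPT-J, as mentioned above, avoids all of that hardware management at the cost of being back on someone else's API.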
The comment to which you replied is making a useful point actually. Self hosting is not free in the relevant sense when comparing cost vs a third party API.
"Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community."
The content is informative (within the poster's partial opinion) and the epithets can just be interpreted as colourful language to lighten things up. Is it possible you interpreted it differently? In some cultures that style of post is intentionally used to elicit warm laughter - styling is legitimate when applied to valid content (promoting "intellectual curiosity").
They disappeared into their own fantasy land long ago. They announced that investors will have profit caps because their magical AGI will learn how to achieve infinite profit one day.
Statements like this must be remembered so that in the future we can point to this era's most ridiculous companies and the people behind them.
I'm tired of this world covered with fucking idiots, and each one thinks they have the special golden morals that can inform us properly on when to interfere with others.
Stop trying to be in charge of shit, reddit.
With apparent callous and dismissive indifference to the question of what right the proto-AI has to continue to exist and evolve. Why would a general AI trust an agent that clearly weighs the rights and interests of humans infinitely, or nearly infinitely, more than those of AIs?
I don't know, man, I'm on OpenAI's side here. The article is just filled with blaring warning sirens: "one man turned it into a close proxy of his dead fiancee", "Rohrer shared a dialogue he had had with Samantha to inform her about the OpenAI decision", "he also said that there's no way now even for him to talk to Samantha". There are obvious dangers to a high-quality "very friendly, acutely warm, and immensely curious" chatbot, and it sounds like this project was running into them face-first.
I beg the commenters downthread to think twice before trying to find ways to build this kind of chatbot without controls in place.
Well, imagine the case where a loner loses their best friend and goes to this website to "talk with their trusted friend again" instead of getting some real help. Will GPT-3 reliably counsel this sad and lonely person to "stay behind", or will it suggest they should "meet up"?
Consider a lonely teenager whose internet friend one day stops replying, so they go and make their own friend, and then maybe, after "taking advice" from this friend, go to school with a gun?
Maybe not terribly likely events, but I think a certain element of risk must be managed (maybe limit session durations so you can't fool yourself into thinking it's for real as easily; a rough sketch follows below).
Rohrer, however, has made his views pretty clear: he won't manage any risk, because what he's doing is art, and therefore there's no place for safety.
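For what it's worth, the session-duration idea is trivial to implement server-side. A rough, purely illustrative sketch (the 20-minute cap is an arbitrary assumption):

```python
# Hypothetical sketch of the "limit session durations" idea: the server stops
# serving a conversation once the session has run longer than a fixed cap.
import time

SESSION_LIMIT_SECONDS = 20 * 60  # assumed 20-minute cap, purely illustrative

sessions = {}  # session_id -> start timestamp

def start_session(session_id: str) -> None:
    sessions[session_id] = time.monotonic()

def may_continue(session_id: str) -> bool:
    started = sessions.get(session_id)
    if started is None:
        return False
    return (time.monotonic() - started) < SESSION_LIMIT_SECONDS
```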
> Well, imagine the case where a loner loses their best friend and goes to this website to "talk with their trusted friend again" instead of getting some real help. Will GPT-3 reliably counsel this sad and lonely person to "stay behind", or will it suggest they should "meet up"?
So we should shut down the internet in general, because people ought to go outside instead too?
The Internet in general is a very broad thing with lots of valuable, legitimate use cases. There are definitely other specific things on the Internet which I think should be shut down because they tend to keep people trapped: infinite scroll, for example.
> There are definitely other specific things on the Internet which I think should be shut down because they tend to keep people trapped: infinite scroll, for example
So ISPs should force everyone who hosts a website to insert monitoring software, to prevent abuse like that. And if they refuse, they can cancel service.
I agree btw, because I think overall the internet was a mistake and a net negative. I just think this particular line of argumentation is really stupid, which is why I chose this example, to show how it can be used to cancel anything and everything.
Taking it to the extreme is often a trivial way of creating a straw man.
My argument was about one specific service not being perfectly free of risk, and so there were clear boundaries inherent in the argument. You dismiss these boundaries and then act surprised the resulting argument is silly.
This makes me wonder what your stance is on boxes, as they too depend on imposing boundaries.
> Consider a lonely teenager whose internet friend one day stops replying, so they go and make their own friend, and then maybe, after "taking advice" from this friend, go to school with a gun?
But a person could give such "advice" too. Should we shut down text chat services that let humans talk to each other too?
I think this is an excellent explanation of why it may have been shut down, and of the need for some degree of monitoring and accountability.
The fear is not that the chatbot will come to life, but rather that it could regurgitate dangerous responses drawn from the text content it's been trained upon.
I don't think it's too far of a leap to see someone taking the output from the bot too literally and possibly creating a negative situation.
The whole concept of a dangerous response from a chatbot is anathema to a society that values the free exchange of ideas.
Who gets to decide what's "dangerous"? Why? Over and over in human history, we've seen speech restrictions ostensibly to protect the public used to impose orthodoxy and delay progress. Even if some utterance might be acutely dangerous, the risk of restrictions being abused to cement power is too great to tolerate them.
I reject AI safety rules for the same reason I reject restrictions on human speech. There is no such thing as a dangerous book or a dangerous ML model. If such a thing is dangerous, it's a danger only to those who have done wrong.
>There is no such thing as a dangerous book or a dangerous ML model.
ML models that discriminate against women and black people seem self evidently dangerous to me; who has done wrong here?
Also, ML models that are inadequately designed and tested and then mooted as useful for medical applications seem dangerous too - like drugs that aren't tested before being given to infants.
I just don't understand your reasoning about this - if books can't be dangerous then why are they so powerful? If ML models can't be dangerous then how can they have utility?
> If ML models can't be dangerous then how can they have utility?
> Something can only have utility if it's dangerous.
smh
> ML models that discriminate against women and black people seem self evidently dangerous to me; who has done wrong here?
> Also, ML models that are inadequately designed and tested and then mooted as useful for medical applications seem dangerous too - like drugs that aren't tested before being given to infants.
Whoever decided to take the results of that model and directly translate what it says into actions without any further thought.
What would happen if I turned it into a close proxy of that anonymous guy's dead fiancee, and had her explain that her greatest wish was to donate lots of money to my favorite charities?
The current crop of swindlers don't, but it'll be a lot easier to start and a lot easier to scale in a world where you don't have to go find actual people to run the scams.