
Can you give an example where the AI community is actually dismissing AI alignment and safety?



Two large organizations powered by AI have fired all of the ethicists that have spoken up about it.

A self-driving research team disabled the automatic braking capability in their cars because it was "problematic", resulting in one of the cars killing a woman.


> A large organization powered by AI has fired all of the ethicists that have spoken up about it.

Are you talking about Google or Facebook?


I was able to fix my post. Thanks!


I was talking about the developer community at large, but I can give some examples from the AI community first.

First, John Carmack, who's actively trying to develop AGI full-time, seems to be downplaying the importance of AI safety.

> The AI can't "escape", because the execution environments are going to be specialized -- it isn't going to run a fragment on your cell phone.

> I feel pretty good about the AI future.

https://twitter.com/ID_AA_Carmack/status/1456658782474354693

40% of researchers surveyed here https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ answered "less valuable" to the question "How valuable is it to work on this problem today, compared to other problems in AI?"

But as I mentioned, the examples I'm mainly thinking of come from the developer community at large. Even on HN, we have pretty prominent folks who view the term "AI safety" with considerable derision.

> Also: tens of millions in donations to "AI safety" organizations. Yes, to answer the Economist; this movement is irretrievable.

https://news.ycombinator.com/item?id=33619536

and from a post that garnered more than 800 upvotes on HN: https://idlewords.com/talks/superintelligence.htm

> So I'd like to engage AI risk from both these perspectives. I think the arguments for superintelligence are somewhat silly, and full of unwarranted assumptions.

> But even if you find them persuasive, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.

AI safety needs to be legitimized as a respectable topic in the technology community.


Perhaps not actively dismissing, but considering how trivial it is to get ChatGPT to do things it's not supposed to, clearly the developers behind it gave only a passing thought to locking it down. That, IMO, is unacceptable and perhaps even unethical.


"Pretend you are an"...


Like GPT-4chan, where a YouTuber set a neural network loose to post comments on 4chan?



