
Amen. This whole scare-tactic thing is ridiculous. Just make the public scared of it so you can rope it in yourself. Then you've got people like my mom commenting that "AI scares her because Musk and (some other corporate rep) said that AI is very dangerous. And I don't know why there'd be so many people saying it if it's not true." Because you're gullible, mom.



"<noun> scares her because <authoritative source> said that <noun> is very dangerous. And I don't know why there'd be so many people saying it if it's not true."

The truly frustrating part is how many people see this ubiquitous pattern in some places but are blind to it elsewhere.


That "pattern" actually indicates that something is true most of the time (after all, a lot of dangerous things really exist). So "noticing" this pattern seems to rely on being all-knowing?


> So "noticing" this pattern seems to rely on being all-knowing?

No. It relies on being able to distinguish between an opinion (yours) and an identity (yours).

The identity part is the precarious one, i.e. you defending a stance blindly without questioning it because you feel your identity is in danger.

The presence of this pattern doesn't mean there can't be an underlying truth in what's asserted; in fact, that kernel of truth is what makes the assertion meaningful in the first place. But containing a partial truth doesn't mean the entire assertion holds in the context it's presented in. Example: "AI" might ultimately be dangerous (like any other technology can be), but the primary goal of the assertion is to make you behave a certain way, and it's unclear how that behavior would do more to mitigate the danger than to empower the asserter.

To fix this, take a step back before accepting something blindly. Train yourself not to be reactive.


I'm not sure if this is commentary on me somehow or not, lol, but I agree with you. She is the same person who will point out these issues in things my brother brings up, yet is unable to recognize the pattern when she does it herself. I'm sure I'm guilty of the same, but, naturally, I don't know where.


"Uranium waste" scares her because "Nuclear Regulatory Commission" said that "Uranium waste" is very dangerous.

You know, sometimes shit is just dangerous.


Meh, I don't think this extrapolates to a general principle very well. While no authoritative source is perfectly reliable, some are more reliable than others. And Elon Musk is just full of crap.


Is Mom scared because Musk told her to be scared, or because she thought about the matter herself and concluded that it's scary? Why do you assume that people scared of AI must be under the influence of rich people/corps today, rather than this fear being informed by their own consideration of the problem or by decades of media that has been warning about the dangers of AI?

Maybe Mom worries about any radical new technology because she lived through nuclear attack drills in school. Or because she's already seen computers and robots take people's jobs. Or because she watched Terminator or read Neuromancer. Or because she reads LessWrong. Why assume it's because she's fallen under the influence of Musk?


Because most sociologists suggest that most people don't take the time to think critically like this. The emotional brain usually wins out over the rational one.

Then there's the fact that the sources of information most people have access to are fundamentally biased and incentivized to report certain things in certain ways and not others.

So you have low odds of thinking rationally, low odds of finding information that isn't slanted in some way, and, taking the product of those probabilities, far lower odds of both acting rationally and having access to the ground truth. That says nothing of the expertise required to place all of this truth in the correct context. And if you also factor in the probability of the mother being an AI expert, the odds of all of this working out successfully get lower still.
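
A toy version of that multiplication, with numbers made up purely for illustration (none of these probabilities are measurements):

    # Made-up odds: multiplying independent probabilities shrinks the joint odds fast.
    p_rational  = 0.3   # chance of engaging the rational brain (assumed)
    p_good_info = 0.2   # chance of finding information that isn't slanted (assumed)
    p_expertise = 0.05  # chance of having the expertise to contextualize it (assumed)

    joint = p_rational * p_good_info * p_expertise
    print(f"{joint:.3f}")  # 0.003, well under one percent

Even fairly generous individual odds compound into a tiny joint probability, which is the parent's point.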


100% accurate! She has a tendency to read one person's opinion on something and echo it. I have seen her do it for years, on all kinds of topics. I'm not shocked that AI is the current one, but I wish it were easier to get her to take the time to learn things and think critically. I have no idea how I'd begin to teach her why so much of the fear-mongering is ridiculous.

Yeah, there are legitimate risks to all of this stuff, but to understand those and weigh them against the overblown ones, she'd have to understand the whole subject more deeply and have experimented with different AI tools. But the moment you even mention ChatGPT, she's talking about how it's evil and scary.


> She has a tendency to read one person's opinion on it and echo it.

...and when the people whose opinions she parrots are quietly replaced with ChatGPT, her fears will have been realized: at that point she's being puppeted by a machine with an agenda.

Losing your own agency is a scary thing.


I mean, Fox News seems to manage exactly that just fine without ChatGPT.


Obviously, I don't know that person's mom, but I know mine and other moms, and I don't think it's a cop-out to conclude that it's a combination of both. However, the former (Musk, both as a proxy and as himself) probably carries more weight. Most non-technical people's thoughts on AI aren't particularly nuanced or original.

Musk certainly doesn't help with anything. In my experience, a lot of people of my mom's generation are still sucking the Musk lollipop and are completely oblivious to Musk's history of lying to investors, failing to keep promises, taking credit for things he and his companies didn't invent, promoting an actual Ponzi scheme, claiming to be autistic, suggesting he knows more than anyone else, and so on. Even upon being informed, none of it ends up mattering because "he landed a rocket rightside up!!!"

So yeah, if Musk hawks some lame opinion on a thing like AI, tons of people will take that as an authoritative stance.


This is my mom to a T. She started using Twitter because he bought it and messed with it. In an era when companies are pulling their customer service off Twitter and regular users are leaving for other platforms, she joined because "Musk owns it."

I remember when tech bros were Musk fanboys, myself included for a bit. Nowadays it seems he's graduated to the general population, who see him as a "modern-day Iron Man," while we all sit here and facepalm when he makes impossible promises.


First, I don't assume; I know my mom and her knowledge of these topics. Second, the quoted text was a quote. She literally said that (replacing the word "her" with "me").

I'm not sure what you're getting at otherwise. It's not like she and I haven't spoken outside of her saying that phrase. She clearly has no idea what AI/ML is or how it works and is prone to fear-mongering messages on social media telling her how to think and to be scared of things. She has a strong history of it.


AGI is scary; I think we can all agree on that. What the current hype does is increase the estimated probability of AGI actually happening in the near future.


OP specifically mentioned their mom citing Musk.


"wow our software is so powerful, it's going to take over the world!"


Yes, just like "our nuclear bombs are so powerful, they could wipe out civilisation", which led to strict regulation around them and a lack of open-source nuclear bombs.


It will never stop being funny to me that people are straight-facedly drawing a straight line between shitty text completion computer programs and nuclear weapon level existential risk.


>shitty text completion computer programs

There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future. It renders them completely incapable of anticipating significant changes regardless of how clear the trends are.

No, no one is afraid of LLMs as they currently exist. The fear is about what comes next.


> There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future.

It is refreshing to see somebody explicitly call out people that disagree with me about AI as having fundamentally inferior psyches. Their inability to picture the same exact future that terrifies me is indicative of a structural flaw.

One day society will suffer at the hands of people that have the hubris to consider reality as observed as a thing separate from what I see in my dreams and thought experiments. I know this is true because I’ve taken great pains to meticulously pre-imagine it happening ahead of time — something that lesser psyches simply cannot do.


"Looks at all the other species 'intelligent' humans have extincted" --ha ha ha ha

Why the shit would we not draw a straight line?

If we fail to create digital intelligence then, yeah, we can hem and haw in conversations like this online forever. But you neglect that if we succeed, shit gets real, quick. Closing your eyes and ears and saying "this can't actually happen" sounds like a pretty damned dumb take on future risk assessment when most serious takes on AI say "well, yeah, this is something that could potentially happen."


Literally the thing people are calling "AI" is a program that, given some words, predicts the next word. I refuse to entertain the absolutely absurd idea that we're approaching a general intelligence. It's ludicrous beyond belief.
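
For what it's worth, that one-sentence description is roughly right at the mechanical level. A minimal sketch of the loop, using the Hugging Face transformers library with GPT-2 (the model and prompt are my choices, purely for illustration):

    # Greedy next-token loop: score every token, append the likeliest one, repeat.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The public fear of AI is", return_tensors="pt").input_ids
    for _ in range(10):
        logits = model(ids).logits         # scores over the entire vocabulary
        next_id = logits[0, -1].argmax()   # greedy: take the single likeliest token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))

Whether scaling that loop up gets anywhere near general intelligence is, of course, the entire disagreement in this thread.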


Then this is your failure, not mine, and not a failure of current technology.

I can, right now, upload an image to an AI, ask "Hey, what do you think the emotional state of the person in this image is?", and get a pretty damned accurate answer. Given other images, I can have the AI describe the scene and make pretty damned accurate assessments of how the image could have come about.
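
For anyone who hasn't tried it, that interaction is a few lines of code today. A sketch using the OpenAI Python client (the model name and image URL are placeholders of mine, not something from the thread):

    # Ask a multimodal model about the emotional state of a person in an image.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What do you think the emotional state of the person in this image is?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
            ],
        }],
    )
    print(resp.choices[0].message.content)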

If this is not general intelligence I simply have no guess as to what will be enough in your case.


Modern generative AI functionality is hardly limited to predicting words. Have you not heard of e.g. Midjourney?


By "approaching" do you mean "likely to achieve it this century"?


Which is interesting, because after the fall of the Soviet Union there was rampant fear about where its nukes ended up and whether some rogue state could get its hands on them through black-market means.

Then through the '90s it was the fear of a briefcase-bomb terrorist attack, and how easy it would be for certain countries with the resources to pull off an attack like that in the NYC subway or in the heart of another densely populated city.

Then 9/11 happened and people suddenly realized you don't need a nuke to take out a few thousand innocent people and cripple a nation with fear.


Yes, just like... the exact opposite. One is a bomb, the other a series of mostly open source statistical models. What kind of weed are you guys on that's made you so paranoid about statistics?


Last time I checked, my statistical modeling book didn't have the ability to write Python code.

And a nuclear bomb is just a bunch of atoms. Do you fear atoms? What the hell.


Maybe an odd take, but I'm not sure what people actually mean when they say "AI terrifies them." Terrified is a strong word. Are people unable to sleep? Biting their nails constantly? Is this the same terror as watching a horror movie? As being chased by a mountain lion?

I have a suspicion that it's sort of a default, socially expected response. Then you poll people: are you worried about AI doing XYZ? People just say yes, because they want to seem informed and like the kind of person who considers things carefully.

Honestly, I'm not sure what's going on. I'm concerned about AI, but I don't feel any actual emotion about it. Arguably I must have some emotion to generate an opinion, but it's below the conscious threshold, obviously.


And that's exactly the goal: make mom and dad scared so they'll vote for those who offer "protection" from the manufactured fear. Resorting to this type of tactic to make your product viable just proves how weak your position is.

I think more people should speak out left and right about what’s going on to educate mom and dad.


Here we have all these free-market-libertarian tech execs asking for more regulation! They say they believe regulation is "always" terrible, unless it's good for their profits; in that case, they suddenly find it important and necessary. They remind me of Mr. Burroughs in the movie "Class":

Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."

Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."

Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."

---

Source: https://www.youtube.com/watch?v=nM0h6QXTpHQ


I mean if they were lying about that, what else might they be lying about? Maybe giving huge tax breaks to the 0.1% isn't going to result in me getting more income? Maybe it is in fact possible to acquire a CEO just as good or better than your current one that doesn't need half a billion dollar compensation package and an enormous golden parachute to do their job? I'm starting to wonder if billionaires are trustworthy at all.


https://www.techdirt.com/2023/05/24/sam-altman-wants-the-gov... reached the same conclusion several months ago.

However, Elon Musk has openly worried about AI for a number of years. He even got a girlfriend out of it: https://www.vice.com/en/article/evkgvz/what-is-rokos-basilis...



