
I'm not here to defend the particular social movement that grew around these guys, even though they are right about some things, like catastrophic risks tied to new tech (which certain academics are quick to dismiss).

And I get the point about it not being rigorous in the academic sense. Sure, no argument there.

The point I would like to make is that dismissing thinkers due to their writing style, personality issues, oddities, etc., can lead to very problematic outcomes. Many top thinkers in history had childish qualities and were not well-rounded, fully functional "normal" human beings. "Normals" are too busy leading productive lives to write thousands of pages on vaguely identified future risks. The kinds of blind spots that these odd thinkers can uncover in the rest of society can be very useful in improving everyone's lives. The guy who pointed out the toxicity of lead had to endure the same type of ridicule from a society that was busy applying a layer of lead paint to every flat surface, including their own faces. Same with many others pointing out practical risks that society is currently blind to. Off the top of my head, other famous weirdos with odd beliefs, yet huge contributions to science/tech: Faraday, Tesla.




I am not dismissing his writings based on his personality, but based on their lack of rigor. In fact, I am not dismissing them at all as examples of their genre. But they are not philosophy. Philosophy requires a certain form of questioning and doubt that those writings lack. Regardless of what genre they belong to, they also lack originality, which is something Tesla and Faraday had in abundance. If they uncover anything, it is the very interesting political foundations of a group of people, some of whom hold a lot of power in today's society.

Also, I do not doubt for one second their ability to improve some people's lives. If they didn't, he wouldn't have a following.


> something Sheldon Cooper would write

One might view this as a style/personality jab rather than an issue with rigor.

All I'm saying is Sheldon Cooper types have moved things forward for the rest of society more often than people realize.

Also, it would help to hear why you're so dismissive of the underlying arguments. One of their core ideas is that technology (specifically AI) could surpass human-level intelligence and pose a threat to civilization. This is not a vacuous statement or a lunatic's fantasy; we have seen other technologies develop to the point where they pose an existential risk (e.g., nuclear weapons). If anything, the dismissal of AI-related risks reveals a very clear blind spot / lack of rigor in most people's reasoning: "Because it hasn't happened yet, it must not be a problem we should worry about."


> All I'm saying is Sheldon Cooper types have moved things forward for the rest of society more often than people realize.

Rarely outside of their narrow field of expertise. I have no doubt that a real-life Sheldon Cooper would have made great contributions to physics. Philosophy, however, would not be his strong suit.

> Also, it would help to hear why you're so dismissive of the underlying arguments.

I haven't dismissed the conclusions, just said I disagree with many of them. I have dismissed their classification as philosophy because they don't follow the method. It is not a matter of style, just as following deduction rules and proven theorems through a careful, precise process is not a style of mathematics but is mathematics. Eliezer Yudkowsky makes some intriguing arguments, but they are political more than philosophical in their method.

> If anything, the dismissal of AI-related risks reveals a very clear blind spot / lack of rigor in most people's reasoning: "Because it hasn't happened yet, it must not be a problem we should worry about."

Indeed, that is not why people who think rigorously about the subject dismiss the threats the singularitarians warn of. Also note that many people warn of AI risks all the time (forget true AI, people are warning against current uses of machine learning); it's just that those are not the same risks the singularitarians are alarmed about.

In any event, very good arguments against singularitarian alarmism can be found in abundance, and that is not the topic of this discussion.


> I haven't dismissed the conclusions, just said I disagree with many of them.

Earlier: > If they uncover anything, it is the very interesting political foundations of a group of people

That sounds like a classic dismissal to me.

--

Anyway, I don't want to get bogged down in a silly semantic argument. Who cares if it's real philosophy? I tried reading Heidegger when I was young and naive. If that's real philosophy, I'll pass, thank you.

If you have substantial, rigorous arguments against the existential risks posed by AI-based tech, I'd love to hear them; otherwise, what you're saying sounds a lot more like an emotional and political reaction (presumably from the left end of the political spectrum).



