
The worry I have is that the net value will become great enough that we’ll simply ignore the flaws, and probabilistic good-enough tools will become the new normal. Consider how many ads the average person wades through to scroll an Insta feed for hours - “we’ve” accepted a degraded experience in order to access some new technology that benefits us in some way. To paraphrase comedian Mark Normand: “Capitalism!”



Scary thought, difficult to unthink.

I'm afraid you might be right.

We've accepted a lot of crap lately just to get what we think we want; convenience is a killer.


Indeed, even if I were to minimise what LLMs can do, they are still delivering value in a way that "targeted advertising" very obviously isn't.


They're both short-sighted attempts at extracting profit while ignoring all negative consequences.


To an extent I agree; I think that's been true of all tech since the plough, fire, and the axle.

But I would otherwise say that most (though not all*) AI researchers seem to be deeply concerned about the set of all potential negative consequences, including mutually incompatible outcomes where we don't know which one we're even heading towards yet.

* And not just Yann LeCun — though, given his position, it would still be pretty bad even if it were just him dismissing the possibility of anything going wrong.



