Hacker News

Artificial general intelligence seemed way cooler before the current LLM wave. Now it just seems dangerous. What's the point? What's the goal here? Edit: genuinely curious about examples of applications, which are absent from the replies so far.



For a bunch of computer scientists to keep their curious minds entertained and another bunch of already rich individuals to get richer.


Whoa, so we discard all his previous achievements because you're jealous you don't have any money?


Weird jump to make from my comment. Carmack is obviously in the former group.


Alright, I read too fast and the thread soured my mood. Sorry!


Interested in your logic: what did you like about pre-LLM AGI? The "maximize the utility function at any cost" feature? The single-minded focus on beating people at games?

It's quite terrifying how, now that we've chosen an apparently very easy path to baking our preferences and quirks into intelligent systems, people have become very "responsible" and concerned for the survival of the human race, parroting alarmist rhetoric that predates not only LLMs but even the early RL successes of DeepMind, and citing the vague shower thoughts of Bostrom and similar non-technical ilk. Say what you want about LLMs, but there's zero credible reason to perceive them as a riskier approach!


LLMs have proven to be a very different path from what most humans assumed for decades artificial intelligence would manifest as.

They're not the rigid, logic- and rule-bound systems that struggle with human emotions. By all accounts, GPT-4 is as competent at emotional matters as it is at anything else.

I suppose there's something unsettling about building Super Intelligence in humanity's image.


It’s entirely rule-bound. All it does is draw tokens from a statistical distribution. What people mostly don’t like to contemplate is that they too are entirely rule-bound: Brains do nothing but follow the laws of physics, proceeding from one state to the next on the basis of these rules alone.
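The "draw tokens from a statistical distribution" step the parent describes can be sketched concretely. Below is a minimal, illustrative version of temperature-scaled softmax sampling over next-token logits (a generic sketch of how decoding commonly works, not any particular model's implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Draw one token id from the distribution implied by `logits`.

    Softmax with temperature, then a weighted random draw -- the
    entirely rule-bound step the comment refers to.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for token_id, p in enumerate(probs):
        cum += p
        if r < cum:
            return token_id
    return len(probs) - 1  # guard against floating-point rounding
```

Lowering the temperature concentrates probability on the highest logit, so sampling becomes nearly deterministic; raising it flattens the distribution.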


To reduce costs long term and make more money at the expense of other people.


Since when is doing math operations on a bunch of text dangerous?


Perhaps because 'the pen is mightier than the sword'.

It's the same reason people like Jeff Bezos spent $250 million a decade ago on the Washington Post.

And Musk spent $44 billion on Twitter.

Words matter - words can change the world.


Windows is adding ChatGPT integration that allows it to run commands on your computer, things like "open application X" or "maximize this window". It still needs the user to press a button for each action it takes, though.

Sure, harmless now, but imagine you let one of these LLMs actually use your browser, logged in as you, and it does something you didn't intend. We are fast getting to the point where you can ask an LLM to check your emails and answer them for you, order food from Uber Eats, or use some internal company or government system that controls a huge number of different variables.

Funnily enough, it seems human error (through bad or misguided LLM prompts) is going to become much more common as LLMs do more things for us. You would kind of expect the opposite from automated computer systems, yet here we are.
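The "press a button for each action" safeguard described above is a human-in-the-loop gate: the model proposes actions, but nothing runs without explicit approval. A minimal sketch (all names are illustrative, not Windows' actual API):

```python
def run_with_confirmation(actions, execute, confirm=input):
    """Execute each model-proposed action only after user approval.

    `actions` is a list of (description, payload) pairs proposed by a
    model; `execute` performs one payload; `confirm` asks the user via
    a y/n prompt. Declined actions are recorded as None, not run.
    """
    results = []
    for description, payload in actions:
        answer = confirm(f"Allow action '{description}'? [y/n] ")
        if answer.strip().lower() == "y":
            results.append(execute(payload))
        else:
            results.append(None)  # skipped: user declined
    return results
```

The risk the parent comments describe is exactly what happens when `confirm` is removed and the loop runs unattended against a logged-in browser session.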


Make numbers go up. Like always...


I take it you don't have any friend or relative who is dying of an illness; otherwise you would have found a goal for AGI all by yourself.


I don't see the connection. Can you elaborate?


In the short term, AGI means that everybody can get a personal doctor. In the longer term, AGI will help medical research.





