Artificial general intelligence seemed way cooler before the current LLM wave. Now it just seems dangerous. What's the point? What's the goal here? Edit: genuinely curious about examples of applications, which are absent from the replies so far.
Interested in your reasoning: what did you like about pre-LLM AGI? The "maximize utility function at any cost" feature? The single-minded focus on beating people at games?
It's quite terrifying how, now that we've chosen an apparently very easy path to bake our preferences and quirks into intelligent systems, people have become very "responsible" and concerned for the survival of the human race, parroting alarmist rhetoric that predates not only LLMs but even early DeepMind's RL successes, and citing the vague shower thoughts of Bostrom and similar non-technical folks. Say what you want about LLMs, but there's zero credible reason to see them as the riskier approach!
LLMs have turned out to be a very different path from the one most people assumed, for decades, artificial intelligence would take.
They're not the rigid, logic/rule-bound systems that struggle with human emotions. By all accounts, GPT-4 is as emotionally competent as it is at anything else.
I suppose there's something unsettling about building superintelligence in humanity's image.
It’s entirely rule-bound. All it does is draw tokens from a statistical distribution. What people mostly don’t like to contemplate is that they too are entirely rule-bound: Brains do nothing but follow the laws of physics, proceeding from one state to the next on the basis of these rules alone.
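To make "draw tokens from a statistical distribution" concrete, here's a toy sketch of the mechanism. The vocabulary, logits, and temperature are all made up for illustration; a real model scores tens of thousands of tokens with scores it learned from data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up vocabulary and model scores, just to illustrate the mechanism.
    vocab = ["the", "cat", "sat", "on", "mat"]
    logits = np.array([2.0, 1.0, 0.5, 0.2, 0.1])
    temperature = 0.8

    # Softmax with temperature turns scores into a probability distribution...
    probs = np.exp(logits / temperature)
    probs /= probs.sum()

    # ...and generation is just: draw the next token from that distribution.
    print(rng.choice(vocab, p=probs))

That's the whole "rule" the comment above is pointing at, applied over and over, one token at a time.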
Windows is adding a ChatGPT integration that lets it run commands on your computer, things like "open application X" or "maximise this window". For now, it still requires the user to press a button to confirm each action.
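In spirit, the safeguard is something like this sketch. The action whitelist and the propose_actions helper are hypothetical stand-ins, not the actual Windows API:

    import subprocess

    # Hypothetical whitelist mapping assistant actions to commands.
    ALLOWED = {
        "open_calculator": ["calc.exe"],
        "open_notepad": ["notepad.exe"],
    }

    def propose_actions():
        # Stand-in for whatever actions the assistant suggests.
        return ["open_calculator", "format_disk"]

    for action in propose_actions():
        cmd = ALLOWED.get(action)
        if cmd is None:
            print(f"refusing unknown action: {action}")
            continue
        # The human-in-the-loop gate: every action needs an explicit yes.
        if input(f"run {action}? [y/N] ").strip().lower() == "y":
            subprocess.run(cmd)

The point is that the model only proposes; nothing executes without a click per action.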
Sure, that's harmless now, but imagine you let one of these LLMs actually use your browser, logged in as you, and it does something you didn't intend. We're fast approaching the point where you can ask an LLM to check your email and answer it for you, order food from Uber Eats, or operate some internal company or government system that controls a huge number of different variables.
Funnily enough, it seems human error (through bad or misguided LLM prompts) is going to become much more common as LLMs do more things for us. You'd kind of expect the opposite from automated computer systems, yet here we are.