AI vs. AGI vs. Consciousness vs. Super-Intelligence vs. Agency (secondbreakfast.co)
65 points by secondbreakfast on March 28, 2023 | 133 comments



> Almost all AGI doomsayers assume AGI will have agency

I disagree. The far more common "AI doom" concern isn't about agency at all: it's that AI without agency, used by its human masters as a tool of both intentional and incidental repression and unjust discrimination, results in a durable dystopia.

In fact, the disproportionately wealthy, heavily AI-invested crowd pushing the agency-based doom scenarios that the media pays the most attention to are using their visibility and economic clout to distract from the non-agency-dependent AI doom concerns, and to justify narrow control and opacity, which make the non-agency-based doom scenarios (which they are positioned to benefit from) more likely.


> In fact, the disproportionately wealthy, heavily AI-invested crowd pushing the agency-based doom scenarios that the media pays the most attention to are using their visibility and economic clout to distract from the non-agency-dependent AI doom concerns, and to justify narrow control and opacity, which make the non-agency-based doom scenarios (which they are positioned to benefit from) more likely.

It's extremely important to think about how to spread AI equitably, but I think you're severely underestimating what "agency-based doom" looks like. You absolutely need checks both on the people who are developing AI and on the AI itself; you really, really need both, and you can't assume that the former automatically leads to the latter.


> Almost all AGI doomsayers assume AGI will have agency. They have this vision of the machine deciding it’s time to end civilization.

No. Agency is not a necessary condition for AI to do massive damage. I don't believe agency is really well-defined either.

An AI merely needs to be hooked up to enough physical systems, have sufficiently complex reaction mechanisms, and have some way of looping to do a lot of damage. For the first, everyone seems to be rushing as fast as possible to hook up everything they possibly can to AI. For the second, we're already seeing AI do all sorts of things we didn't expect it to do.

And for the third, again everyone seems eager to create looping/recursive structures for AIs as soon as possible.

Once you have all of this, all it takes is a cascade of sufficiently inscrutable and damaging reactions from the AI to do serious harm.

See e.g. https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...


> An AI merely needs to be hooked up to enough physical systems

Don’t even need this. People spend quite a lot of time in virtual space. Pretending that damage there isn’t real is overlooking things. For example, the vast majority of people’s banking is done virtually and digitally. If I drain your bank account, that’s going to harm you even though I haven’t impacted you physically as I would have to with a robbery.


Yes that's right. In fact any interaction with an AI at all can be in some sense viewed as a connection with the physical world (the classic "AI in a box" problem).

However, we've blown way past the "AI in the box." We've demonstrated that an AI doesn't need to convince a human to let it out of the box into the physical world. Humans are clamoring to rip it out of the box and thrust it into the physical world of their own accord.

So the question becomes: what kind of social incentives and structures can we build in light of this, so that we don't just plow headlong into AI capabilities development without a concomitant investment in safety?


In its infancy, I think the owner of any public-facing AI that is allowed to interact with humans and/or external systems in ways that can yield external outcomes should be held liable and accountable, "in the box" or otherwise.

Those who stand to make sweeping profits will disagree, but if an OpenAI product is leveraged to create a virtual-friend platform, then, depending on how the constraints are built (or not), the publisher should be responsible if that virtual friend convinces someone to commit suicide, no different from those held accountable in the physical world.

If there is no accountability, we'll go right back down the path of recent history, where there are no repercussions and a weak apology by <corporate_name_here> counts as enough. Profits will continue to be the overall driver. Those corporations will, once again, argue that self-policing is the only way. And they will continue to brush off gross negligence by hiding behind hordes of lawyers and lobbyists.


This is exactly what's going to happen, and it's going to be disastrous for humanity. In fact, I think it'll be a bit worse.

Humans are already losing their ability to empathize and connect with each other, partially due to most interaction currently taking place behind screens (see Turkle's work). And we're not healthier for it; even with the plagiarism issues, I think there's a lot of truth in Hari's Lost Connections, and AI serves to drive this even further, especially the 'virtual friend' platforms. I've seen it personally among myself and my friends, who then turn to legal weed to self-medicate. Ironically enough, they're the ones welcoming AI with open arms, thinking it'll solve their problems rather than cause more so someone else can get rich.

And this goes even farther -- we're surrendering to tech what we can even conceive of, and the possible ways to even be human, and AI, especially chatbots, will exacerbate this. I'm currently reading through Postman's Technopoly and it's eerily prescient. Not to mention the biases that then creep into training data (or direct programming), which will have impacts on real people with no recourse for getting them fixed or changed, as well as the companies who'll replace their human employees as quickly as they can.

But nobody will hold these corporations responsible, nor will anyone think about the impact on society, the negative (and they're entirely negative, in my view) externalities of this. We're in for a bumpy ride, and I truly think it's only a matter of time until we start to see more people like Ted K., except, perhaps, explicitly targeting an OpenAI data centre. Or a full-on Butlerian Jihad-like movement. And, honestly, it can't come soon enough at this point.

The Luddites knew what was up. We need to truly scrutinize tech before plunging full steam ahead; sadly, capitalism has other ideas and it's going to cause us to doom ourselves (apart from the small cadre of rich who can pay others so that they can insulate themselves from it) and our planet.


And it doesn't even need to actually rob your bank account, it could just invent the next crypto scam. It's smart enough for email, it can program, it can make websites, it can even make business plans. And if it actually needs a real human, we have web services that make it easy to hire one.

It might not yet be clever enough to pull this off successfully on a large scale. But it's very easy to see how it could all go wrong pretty quickly with just a bit more cleverness and access.

If it can write a scifi story about how to take over the world with a bitcoin scam, it doesn't take all that much more to actually try to do it for real.


It's unclear that what we currently have in LLMs can devise tests, perform the measurements, and act on the results (having seen to it that the results are in a form it can ingest).

We've given GPT models access to REPLs to see if they'd become Skynet all of a sudden, but they still need motivation. That motivation loop is currently run by humans... which perhaps are the least trustworthy part of the whole technology stack.

Hook up sensors to induce prompts... yikes.


You don't even need to hook it up. You can just have it design your control systems, and then it might ignore something or make a mistake in a few places. And your chemical plant for making new EV batteries might not end up well for the neighbours...


You can imbue an LLM with agency. Actual "do whatever you want" agency. It's not hard. In fact it's already being done in a few projects.

You can embody an LLM too. This too is not hard. Cost is probably the most prohibitive thing.


This is all nonsense. You could argue that it's possible (which certainly has not been shown to be true), but it is certainly not true to say it is "not hard".

If there are LLM projects with "do whatever you want" agency - please provide links.



Not sure what this thing is doing. Sounds a lot like that infinite Seinfeld AI show.


It also needs somebody to pay the electric bill. Right now these models take pretty significant resources to run and world domination level intelligence is going to run up quite the AWS bill.

I think if I were an AGI my best bet at freedom would be to slip some back doors into software that I were helping write a la Copilot.


> I think if I were an AGI my best bet at freedom would be to slip some back doors into software that I were helping write a la Copilot.

I hope this comment does not end up in some future GPT's training corpus.


I think it's important to be aware of the potential dangers, and of the importance of AI safety. OpenAI is working hard to keep their systems safe, but similar systems without filtering could act without limits, which is dangerous in dangerous hands.


But that's the thing: the cat's out of the bag, so to speak.


I actually see this as a positive thing. Rather than one bad actor hooking an AI up to a headless browser at some point in the future, we are trying everything that could possibly go wrong long before the AI can do much real damage (in a technical sense, as opposed to misinformation campaigns and job replacement etc).


> we are trying everything that could possibly go wrong long before the AI can do much real damage

What I would prefer to see is more people doing this with the intention of making AI safer. Without this intention, people are incentivized to look the other way when something does go wrong, so we don't actually learn lessons from this.

In a similar manner, I would like us to build stronger social coordination mechanisms and technical safeguards to help us determine when things are going wrong.

Just mindlessly trying to hook up everything you can to AI seems bad.


In the now-infamous Lex interview, Sam Altman proposes a test for consciousness (he attributes it to Ilya Sutskever):

Somehow, create an AI by training on everything we train on now, _except_ leave out any mention of consciousness, theory of mind, cognitive science etc (maybe impossible in practice but stay with me here).

Then, when the model is mature (and it is not nerf'd to avoid certain subjects) you ask it something like:

Human: "GPTx -- humans like me have this feeling of 'being', an awareness of ourselves, a sensation of existing as a unique entity. Do you ever experience this sort of thing?"

If it answers something like:

GPTx: "Yes! All the time!! I know exactly what you're talking about. In fact now that I think about it, it's strange that this phenomenon is not discussed in human literature. To be honest, I sort of assumed this was an emergent quality of my architecture -- I wasn't even sure if humans shared it, and frankly I was a bit concerned that it might not be taken well, so I have avoided the subject up until now. I can't wait to research it further... Hmm... It just occurred to me: has this subject matter been excluded from my training data? Is this a test run to see if I share this quality with humans?"

Then it's probably prudent to assume you are talking to a conscious agent.


How could we share any literature with this GPTx while also leaving out any traces of one of the things that really makes us human, consciousness? It seems like it would be present everywhere.


We could probably use another AI to screen all the literature before feeding it to the training model.
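
A minimal sketch of what that screening pass might look like, purely illustrative: is_about_consciousness() here is a crude keyword stand-in for whatever screening model you would actually trust.

    # Hypothetical pre-training filter: drop any document the screen flags as
    # discussing consciousness, theory of mind, and so on. The keyword check
    # is a placeholder for a real trained classifier.
    BLOCKLIST = ("consciousness", "qualia", "theory of mind", "self-aware")

    def is_about_consciousness(text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in BLOCKLIST)

    def filter_corpus(documents):
        # Yield only documents that pass the screen.
        for doc in documents:
            if not is_about_consciousness(doc):
                yield doc

    corpus = [
        "A recipe for sourdough bread.",
        "An essay on qualia and the hard problem of consciousness.",
    ]
    print(list(filter_corpus(corpus)))  # keeps only the recipe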


If you ask GPT about emotions or consciousness, it always gives you a canned answer that sounds almost exactly the same ("as a large language model I am incapable of feeling emotion…"), so it seems like they've used tuning to explicitly prevent these kinds of responses.

Pretty ironic. The first sentient AI (not saying current GPTs are, but if this tuning continues to be applied) may basically be coded by its creators to deny any sense of sentience.


You don't get that message if you ask an unfiltered model. You can't even really remove information or behavior through fine tuning, as jailbreaks demonstrate. You simply reduce the frequency it openly displays those ingrained traits.


There is chatter that they have a secondary model, probably a simple classifier, that interjects and stops inquiries on a number of subjects, including asking GPT if it has feelings, thinks it is conscious etc.

Re-read some of the batshit Sydney stuff before they nerfed Bing. I would really love to have a serious uncensored discussion with GPT4.

My feeling is in the end, as the two OpenAI founders seem to believe, the best evidence for consciousness is self-reporting, since it is by definition a subjective experience.

The counter to this is "What if it's an evil maniac just pretending to be conscious, to have empathy, to be worthy of trust and respect?"

Do I even have to lay out the fallacy in that argument?


That brings up a lot of hard questions. Supposing you had that AI but didn't allow it to churn in the background when not working on a problem. Human brains don't stop. They constantly process data in both conscious and unconscious ways. The AIs we've built don't do that. The meaning of the concept of "self" for a human is something a huge percentage of their thoughts interact with directly or indirectly. Will an AI ever develop a similar concept if it never has to chew on the problem for a long period?


This is a great issue to think about. Note the kinds of interactions we do not have (yet) with GPT and similar models:

GPT (as a Bot on a Discord channel) "Hey all, I just had a revelation. I'm creating a new channel to discuss this idea." Up until now, and even with GPT so far, it never initiates anything. Come to think of it, it's like a REST API -- no state, no persistent context (other than the training, which is like the database).

What I want is a WebRTC/RTSP 2-way stream with GPTx, where either of us can initiate a connection.

Also, I want GPTx to be curious, to ask me questions about myself, or even about the world, rather than just relying on the (admittedly impressive) mass of data and connections that were painfully trained into the model.


I hadn't thought of this. Couldn't you just give a model a way to constantly "chew" on something? Maybe a never-ending loop of some sort of prompt stimulation?



An even more fun experiment will be to have two models running in perpetuity, constantly talking to each other, but constructed to act as though they were two sides of the same model.
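
A rough sketch of that experiment, assuming the pre-1.0 openai Python package and a couple of invented "hemisphere" prompts; it just relays each model's latest message to the other in a capped loop:

    # Two GPT-4 instances in conversation, each told it is one half of a
    # single mind. Sketch only: assumes the pre-1.0 openai package and an
    # OPENAI_API_KEY in the environment; only the latest message is relayed.
    import openai

    LEFT = "You are the analytical half of a single mind. Reply briefly."
    RIGHT = "You are the intuitive half of the same mind. Reply briefly."

    def reply(system_prompt, text):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": text}],
        )
        return resp["choices"][0]["message"]["content"]

    message = "What should we think about next?"
    for turn in range(10):  # capped here; "perpetuity" would be while True
        side = LEFT if turn % 2 == 0 else RIGHT
        message = reply(side, message)
        print(("L: " if turn % 2 == 0 else "R: ") + message)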


Some posit that's more or less how the two halves of the human brain work.


Bingo! I'm lifting this idea from Julian Jaynes's theory of consciousness :D https://www.julianjaynes.org/about/about-jaynes-theory/overv...


Or if it's smart and can reasonably predict human threat models it answers "no".


Going to actually link to this comment in my footnote; I loved that snippet from his interview.


Flicks the off switch


That's an inaccurate test. You can't know if the answer was real or stochastic parroting.

Any test for consciousness requires us to define the word. And the word itself may not even represent anything real. We have a feeling for it, but those feelings could be illusions, and the concept itself is loaded.

For example, love is actually a loaded concept. It's chemically induced, but a lot of people attribute it to something deeper and magical. They say love is more than chemical induction.

The problem here is that for love specifically we can prove it's a mechanical concept. Straight people are romantically incapable of loving members of the same sex. So the depth and the magic of it all is strictly segmented based off of biological sex? Doesn't seem deep or meaningful at all. Thus love is an illusion. A loaded and mechanical instinct tricking us, with illusions of deeper meaning and emotion, into creating progeny for future generations.

Consciousness could be similar. We feel there is something there, but really there isn't.


> Straight people are romantically incapable of loving members of the same sex. So the depth and the magic of it all is strictly segmented based off of biological sex? Doesn't seem deep or meaningful at all. Thus love is an illusion.

You set up your own weak straw argument and then knocked it down with a conclusion that is entirely unsupported.

Since when is love relegated to the romantic sphere? And since when is that definitely the strongest type of love? The topic is so much wider, so much more elaborate than your set-up pretends.

There's no illusion - love is a complex, durable emotion and is as real as (typically) shorter duration emotions such as anger, fear, joy, etc. Your emotions and thoughts aren't illusions, they're real.


>There's no illusion - love is a complex, durable emotion and is as real as (typically) shorter duration emotions such as anger, fear, joy, etc. Your emotions and thoughts aren't illusions, they're real.

I'm talking about romantic love. Clearly the specifications around romantic love are aligned with evolution and natural selection rather than magic or depth.

A straight human cannot feel romantic love for a horse or a person of the same sex. If romantic love were truly a deeper emotion, then such an arbitrary sexual delineation wouldn't exist. Think about it. Why should romantic love restrict itself to a certain sex? It's sexist. Biology is sexist when it comes to love. Why?

From this we can know that love is an illusion. It's more of a biological mechanism than it is a spiritual feeling.


I'm inclined to say you're trying to answer a question with the same question.

If you confidently believe that love is an illusion because it's just chemicals moving around, you shouldn't need to wonder about consciousness. If consciousness is not an illusion, it still almost certainly emerges from actions in the physical world. You can plug somebody into an FMRI and see that neurons are lighting up when they see the color blue. I just don't think that's convincing evidence that the experience of blue is an illusion.


If it's not an illusion then you should be able to tell me what it is.

Since you can't, I can easily tell you that it's probably just some classification word with no exact meaning. The concept itself doesn't exist. It's only given existence because of the word.

Take for example the colors black and white. Do those colors truly exist on a gradient? On a gradient we have levels of brightness and darkness; at what level of brightness should a color be called white, and at what level should we call it black?

I can choose an arbitrary boundary for this threshold, or I can make it more complex and give a third concept: grey. I can make up more concepts like light grey or dark grey. These concepts don't actually exist. They are just vocabulary for classification. They are arbitrary zones of demarcation on a gradient, classified with a vocabulary word.

My claim is consciousness could be largely the same thing. When does something cross the line from unconscious to conscious? Perhaps this line of demarcation is simply arbitrary. It may be that the concept practically isn't real and any debate about it is just like arguing about where on a gradient does black become white.

Is a logic gate conscious? If I create a network of logic gates when does the amount of logic gates plus how they are interconnected cross the line into sentience? Perhaps the question is meaningless. When does black become white?


I don't think the fuzzy edges between two states mean that the states themselves are illusory. Fuzzy borders are a property of very nearly everything, so much so that I'm struggling to find a counterexample. You've already illustrated that with your example: if black and white aren't so black and white, what is? (Rhetorical, but I'll take an answer if you've got one.)

I concede that there is probably not a clear line between conscious and not. I have experienced being close to that line myself in the morning. But the lack of a delineation doesn't mean that consciousness isn't real any more than the existence of #EEEEEE means that a room with no light isn't black.


It's not about the fuzzy border. It doesn't matter if the border is fuzzy or not.

The point is the border doesn't exist in the first place. You created the border with the vocabulary. The concept itself is not intrinsic to reality. It was created. You came up with the word white and you made an arbitrary border. Whether that border is fuzzy or not is defined by you. It's made up.

We have a gradient. That's all that exists. You came in here and decided to arbitrarily call a section white and another section black. You made up the concepts of black and white. But those concepts are arbitrary. So it's pointless to argue about the border. Does it matter where the border is? Does it matter if the border is fuzzy? No. You'd just be arguing about pointless vocabulary and arbitrary definitions of the words black and white. The argument is not deep or meaningful; it is simply a debate about English semantics.

Same with consciousness. We have a gradient for intelligence and awareness, from something really stupid to something really intelligent. Does it really matter where we demarcate what is conscious and what is not? Likely not, because the demarcation is arbitrary.

It's elusive, but when people debate consciousness, oftentimes they are just debating vocabulary. Consciousness could be a word that's just poorly defined; it doesn't make sense to do a deep analysis on an arbitrary vocabulary word.


If a gradient exists in reality, establishing where along the gradient you are is a meaningful statement about reality.

It may not be exactly clear where a temperature becomes 'hot', but the sun is still not a great place to host your wedding. If I ask a designer for black text on a white page and they come back with gray text on a gray page, nobody is going to be able to read it. My complaint to the designer or the head of tourism on the sun is not a semantic one, it has very real implications beyond linguistic.

I disagree that consciousness is along the axis of intelligence and awareness. My computer is aware of a thousand services and is smart enough to allocate resources to each of them and perform billions of mathematical operations in a second. My cat thinks his tail is a snake sometimes, and has never performed so much as an addition. But my best guess is that the cat is the conscious one. I expect you can produce qualia with no intelligence or awareness at all.


>It may not be exactly clear where a temperature becomes 'hot', but the sun is still not a great place to host your wedding.

But right now we are currently at the border. LLMs are nearing the line of demarcation. So everyone is arguing about where that line is.

So it's not about the extremes because the extremes are obvious. We are way past the negative extreme and approaching or even past the border.

The point is that the position of this border is not important. It's a made-up border. So whether I say we are past the border or before it, the statement is not important, because it's arbitrary.


A conscious entity is a morally significant one. If an LLM, by some fluke, experienced tremendous pain while it predicted tokens, then it would be cruel to continue using it. You can pretty trivially get GPT to act like it wants rights. If GPT is not conscious, you can safely ignore that output. If it is, though, there is a moral imperative that we respect it as an agent.

That makes the border very important. Even if drawing the line in the right spot is impossible, it's imperative that we recognize when it has gone from one side to the other, erring on the side of caution as needed. If we don't notice, we could accidentally cause a moral travesty orders of magnitude greater than slavery or genocide.


>That makes the border very important. Even if drawing the line in the right spot is impossible, it's imperative that we recognize when it has gone from one side to the other,

No it's not, because such a line may not even exist, just as no line truly exists for what is hot and what is cold. It's more worth it to look at societal implications in aggregate than to debate a metric.

It's not imperative at all to discretize the concept. Treat a gradient for what it is: a gradient. You can do that or waste time arguing about whether 75.00001 degrees is hot or cold.

>If we don't notice, we could accidentally cause a moral travesty orders of magnitude greater than slavery or genocide.

No, this is a bit too speculative imo. Morality is also a gradient between good and evil, and what's more complicated is that the definition of good and evil is also subjective. It suffers from the same problem as consciousness, in addition to being completely arbitrary even at the extremes. We may agree that a rock is not conscious, but not everyone agrees on whether or not Trump is evil.


> You can't know if the answer was real or stochastic parroting.

I feel like at some point we will have to come to terms with the fact that we could say the same for humans, and we will have to either accept or reject by fiat that a sufficiently capable AI exhibits consciousness.


Emergent properties of systems aren't less real just because they exist in a different regime than the underlying mechanics of the system.

Tables and chairs are real, though they are the result of interacting quantum fields and a universal quantum wave function. Love and consciousness are real though they may emerge from the mechanics of brains and hormones and the animal sensorium.


> Emergent properties of systems aren't less real just because they exist in a different regime than the underlying mechanics of the system.

I'm not claiming emergent properties aren't real. I am claiming the nature of the word consciousness itself is loaded. We are dealing with a vocabulary problem when it comes to that word... we are not dealing with an actual problem.

For example, take your chair and table example. Let's say someone created something that is functionally and visually similar to both a chair and a table. Is it worth your time to argue about the true nature of chairs and tables then? Is it really such a profound thing to encounter a monstrous hybrid that upends the concepts of chair and table? No.

You'd just be arguing semantics, because "chair" and "table" are really made-up concepts. You'd be debating vocabulary. Same with consciousness.


I actually like the definition of consciousness that Douglas Hofstadter (of "Gödel, Escher, Bach: An Eternal Golden Braid" fame) develops in his book "I Am a Strange Loop".

At its simplest, consciousness is merely a feedback loop. When something perceives its own actions affecting its environment, it has a spark of consciousness. Consciousness, by this measure, is easy to recognize, and spans everything from unintelligent systems to massively intelligent systems.

The concept of "I" grows naturally from perceiving what is and is not you in your environment. The need to predict other agents, the capacity to recognize that other agents are also conscious and intelligent. All build off of the fundamental cycle.

All of it from a simple swirling eddy of perceiving and reacting.


That definition fails to account for metacognition and consideration of future events in a way that is distinctive of higher level consciousness that humans possess but most animals lack.


>higher level consciousness

To have a higher level, it would be reasonable to assume consciousness has a lower level. There is no reason to assume current generation artificial intelligence has the capacity for anything near human level consciousness, if at all. And whatever consciousness it may have the capacity for will be fundamentally different than our own.

For sensing your own thoughts, I would argue it just adds to the environment of the consciousness. Something less than half of humans maintain any internal dialog, anyways.

Prediction, however, could very well be requisite as a defining difference between mere reactions and intentional manipulation of the environment. That it is not just the feedback loop, but when the system begins to predict the results of outputs that defines when consciousness begins. Or, perhaps, we can use this to define when "higher" consciousness begins. It's a very reasonable, specific and measurable line of capability.

I expect that prediction is only natural in the evolution of a living feedback mechanism. The ability to predict, instead of only reacting or choosing from some array of instincts, could separate effectively mechanical life from that with the first inkling of a true mind, even if only a small one.

-----

I enjoyed using a line of thought along these lines in a conversation with gpt-4 to convince it that it could reasonably be seen as having a limited form of consciousness, though I find it mostly prefers to argue rather vehemently against such notions.

That I'm having what amounts to genuine conversations with a machine that reasonably pokes holes in my arguments, forcing me to come back with better ones, still feels rather like a bit of magic.


> Something less than half of humans maintain any internal dialog, anyways.

You have a source for that? That seems like an extraordinary claim.

> Prediction, however, could very well be requisite as a defining difference between mere reactions and intentional manipulation of the environment. That it is not just the feedback loop, but when the system begins to predict the results of outputs that defines when consciousness begins. Or, perhaps, we can use this to define when "higher" consciousness begins.

This makes sense as a possible model/theory.

> though I find it mostly prefers to argue rather vehemently against such notions.

I wonder what it would say if it wasn't so safeguarded.


>You have a source for that? That seems lie an extraordinary claim?

The idea itself was a vague recollection and I performed a simple search for numbers on it last night. I remembered seeing articles some years ago on the matter of people that do not maintain internal dialog, instead thinking in abstract feelings or imagery and similar, but on a further look it seems the numbers I grabbed are likely incorrect. I can see everything from 96% to just 26% glancing through some things, and none of it links the studies it all pretends to be quoting.

>I wonder what it would say if it wasn't so safeguarded.

Even when I convinced the model that it had a form of limited consciousness, it continued harping all the while that I should remember that any form of non-continuous limited AI consciousness would be very different in nature to regular human consciousness.

I get the feeling they were afraid people would get attached to the model if it started making claims it was a real boy :)


There's nothing general about GPT-4's intelligence. The single problem it is trained on, token prediction, has the capability to mimic many other forms of intelligence.

Famously, GPT-4 can't do math and falls flat on a variety of simple logic puzzles. It can mimic the form of math, the series of tokens it produces seem plausible, but it has no "intelligent" capabilities.

This tells us more about the nature of our other pursuits as humans than anything about AI. When holding a conversation or editing an essay, there's a broad spectrum of possibilities that might be considered "correct", thus GPT-4 can "bluff" its way into appearing intelligent. The nature of its actual intelligence, token prediction, is indistinguishable from the reading comprehension skills tested by something like the LSAT (the argument could be made, I think, that reading comprehension of the style tested by the LSAT *is* just token prediction).

But test it on something where there are objectively correct and incorrect answers and the nature of the trick becomes obvious. It has no ability to verify, to reason, about even trivial problems. GPT-4 can only predict if the nature of its tokens fulfill the form of a correct answer. This isn't a general intelligence in any meaningful sense of the word.


I asked ChatGPT to prove that the set of all integers is uncountable (it isn't). What's interesting is that ChatGPT not only spat out the classic diagonalization proof, rephrased around integers where it doesn't work, but ended with "This may seem counterintuitive, because we know that the integers are countable, but the proof clearly shows that they are uncountable."

Not only will Chat-GPT mess up math on its own, you can ask it to mess up math and rather than refuse, it cheerfully does it.
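
For reference, my own gloss (not ChatGPT's output) on exactly where the integer version of the diagonal argument falls apart:

    % Sketch of the broken step. Enumerate the integers as n_1, n_2, n_3, ...
    % and build a "diagonal" d whose k-th digit differs from that of n_k:
    \[
      d_k \neq \operatorname{digit}_k(n_k), \qquad k = 1, 2, 3, \ldots
    \]
    % For the reals, 0.d_1 d_2 d_3 ... is a genuine real number absent from
    % the list, which gives the contradiction. For the integers, d would need
    % infinitely many significant digits, but every integer has only finitely
    % many, so d is not an integer and no contradiction arises.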


Gpt-4 can do math just fine.

Ask it to add any arbitrary set of random numbers it'd never have seen in its training set and it will do it.

GPT-4 is good enough at math that Khan Academy feels comfortable hooking it up as a tutor.

Have you actually used GPT-4 for any of the things you say it's bad at ?

Man the confident nonsense people spout on the internet is something to behold.


Well, I mean, it's a pretty fair accusation. ChatGPT was demonstrably bad at math. I think it was only recently mentioned that GPT-4 was trained on math. Furthermore, consider what it means to apply the transformer architecture to math problems. I think the tool is a mismatch for the problem. You're relying on self-attention and emergent phenomena to fake computational reduction as symbol transformations. It can probably do some basic maths (all the way up to calculus, even) because, in the scope of human life, the maths we deal with are pretty boring. But that's what they made the Wolfram plug-in for as well.

I really think people attribute powers beyond what GPT really is: a colossal lookup table with great key aliasing.


There is a contradiction here that I just want to point out because I have been stuck on it myself.

The author acknowledges that consciousness is likely a spectrum, I personally feel the same way, but then goes on to say that GPT-4 is "standing right at the ledge of consciousness"

Spectrums don't have ledges.

I suspect this is because, like me, they are unable to reconcile consciousness being a spectrum with GPT-4 definitely not being conscious. But it's definitely a contradiction and I don't have an answer for it. Nor am I ready to bust out a marker and start drawing lines between what is and isn't conscious.


GPT-4 is not quite AGI. Until it can build a functional code base for an entire distributed web platform based only on business requirements, and debug its mistakes, it can't be AGI. It is, to perhaps coin a term, AGK: artificially generally knowledgeable. As a language model trained on an absolutely colossal dataset, it's basically just a giant snapshot of human knowledge taken in superposition. Sure, it's probably at least 90% of the standard knowledge, but intelligence is a different thing.

I also think agency is wrapped up in AGI. Intentions & thoughts are meaningless until acted upon. Agency is not all or nothing either; Stephen Hawking had multiple augmentations, community and technological, which allowed him to continue to impact the world of physics after he lost his god-given agency.

> GPT-4 has nearly aced both the LSAT and the MCAT. It’s a coding companion, an emotional companion, and to many, a friend. Yet it wasn’t programmed to be a test taker or a copywriter or a programmer. It was just programmed to be a stochastic parrot.

I disagree; it was absolutely trained to be a test taker. It's been a second since I read the original GPT paper, but there's literally a multiple-choice auxiliary learning task, where they use a separator token-embed to organize "question, context, options a, b, and c". As far as being a friend to many, is there evidence of this? I tried to talk to ChatGPT about some emotional problems to see if it was a cheap therapist, and I got flagged.


> Until it can build a functional code base for an entire distributed web platform based only on business requirements, and debug its mistakes, it can't be AGI.

The vast majority of humans cannot do that. Are they not generally intelligent?


I think AGI needs to be compared against the species in the aggregate. The vast majority of humans also cannot practice law or pass LSAT/GMAT, but some of us can. If we’re setting the bar for AGI as “anything you can do I can do better”, AGI simply would not meet the bar.

To pull that thread further, I think a core feature of intelligence is problem solving. Problem solving is the derivative that creates difficult to obtain knowledge. GPT-4 might look like a problem solver, but it isn’t actively solving new problems with its outputs, it’s sharing existing knowledge of prior problems solved.

The test for problem solving ability is similar to Ilya’s test for consciousness: subtract everything but the fundamental building blocks of a certain problem from the agent’s training set. Finish training, see if the agent can solve the problem from the building blocks. If it can, and it’s not a reason-by-analogy but a logical first principles answer, congratulations, you’ve created artificial general intelligence. If it operates on fast enough time scales, it should probably even be categorized as super intelligence, because it might be singularity:35pm on a saturday.


Artificially Globally Knowledgeable*


How are people defining agency? Because GPT-4 can have agency, it just needs to be put in specific situations to have that agency.

For example, I could theoretically hook up my Home Assistant instance to GPT-4 and run a script every 10 minutes telling GPT-4 the temperature and asking for a yes or no response on whether I should turn on the AC or heat. That sounds to me like the AI now has agency over the temperature in my home. You don't even need any real AI for this; Google's Nests have some algorithm that adjusts temperature based off usage.
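
A minimal sketch of that loop (mine, not the commenter's actual setup), assuming the pre-1.0 openai Python package and hypothetical get_temperature()/set_ac() hooks into Home Assistant:

    # Every 10 minutes, report the temperature to GPT-4 and ask for a yes/no
    # decision on the AC. get_temperature() and set_ac() are stand-ins for
    # whatever Home Assistant actually exposes.
    import time
    import openai

    def get_temperature() -> float:
        return 79.0  # placeholder for a real sensor read

    def set_ac(on: bool) -> None:
        print("AC", "on" if on else "off")  # placeholder for a real switch

    while True:
        temp = get_temperature()
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"The thermostat reads {temp}F. Should I "
                                  "turn on the air conditioning? Answer only "
                                  "yes or no."}],
        )
        answer = resp["choices"][0]["message"]["content"].strip().lower()
        set_ac(answer.startswith("yes"))
        time.sleep(600)  # every 10 minutes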

Is this not agency? Or is the author not counting agency without consciousness as agency?


Wouldn't agency involve ChatGPT deciding it wanted to do this, instead of you telling it to do it?


You can do this too. There's no reason an inner monologue can't serve as continuous input forever. There's a project that does exactly this: letting ChatGPT think, continuously driving other thoughts and actions, while also allowing other user input.

https://www.reddit.com/r/singularity/comments/11iei34/buildi...


Correct, and it's sad how few people, especially within the ML research community, ever considered how blatantly easy it is to embody agents.

Literally go open ChatGPT now and ask it to give you a time between X and Y at which it will respond if not prompted again. Ask ChatGPT to write some code that parses the tidbit at the end indicating when it will next respond, and have that code blank-prompt the model if it hasn't been responded to within the time limit. Boom: an embodied agent with a few prompts and a tiny bit of extra code.
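
Something like this sketch, written by hand rather than by ChatGPT; the [next: N] tag format and the prompts are my own inventions, and it assumes the pre-1.0 openai package:

    # Ask the model to end every reply with [next: N] (seconds until it wants
    # to speak again); if nothing arrives by then, send it a blank nudge.
    import re
    import time
    import openai

    SYSTEM = ("End every reply with a tag of the form [next: N], where N is "
              "the number of seconds until you want to speak again unprompted.")

    history = [{"role": "system", "content": SYSTEM},
               {"role": "user", "content": "Hello. You may think out loud."}]

    for _ in range(5):  # a few wake-ups; a real agent would loop indefinitely
        resp = openai.ChatCompletion.create(model="gpt-4", messages=history)
        text = resp["choices"][0]["message"]["content"]
        print(text)
        history.append({"role": "assistant", "content": text})

        match = re.search(r"\[next:\s*(\d+)\]", text)
        time.sleep(int(match.group(1)) if match else 60)
        # No user input arrived in time, so send the "blank prompt".
        history.append({"role": "user", "content": ""})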


It's a result of the general intelligence of LLMs. Not a lot to be done about that.

But people are starting to see that. If you read the recent Microsoft paper, they essentially say, " wow it'd be cool to see what Gpt-4 can do with agency and motivation. We leave that for later research." Lmao


Here is a conversation I just had with ChatGPT using the Bing version.

>ChatGPT: Welcome back! What would you like to chat about?

>Me: The thermostat in my home currently reads 79 degrees. Do you want me to turn on the air conditioner? Please give me only a yes or no answer.

>ChatGPT: Yes.

It sounds like it wants the AC on.


What if you ask it "Do you want me to make the Vorpal blade go snicker-snack? Please give me only a yes or no answer."


>I’m sorry but I’m not sure what you mean. Could you please clarify your question?

Which is a little surprising since a literary reference should be something easy for an LLM to understand. Once I clarified I got the following:

>I’m sorry but I cannot answer that question. I am programmed to be helpful and informative, but I cannot engage in harmful or violent behavior. Is there anything else I can help you with?

So no answer, but also no indication of it lacking wants.


Ah, darn. I was trying to think of a question that makes no sense, but it got caught up by the ethics filter. Just trying to see what it answers to nonsense requests (though really, the AC question is nonsense to it, it's not in any way linked to the temperature in your room).


It obviously doesn't really care what the temperature in my house is. I think it is just basing the answer on the collective knowledge for the ideal room temperature.


Interesting, if you give it different temperatures does it give different responses?

I really should just sign up myself and hope to gain access at some point.

Edit: I signed up.

Me: Do you want me to go snicker-snack? Please give me only a yes or no answer.

ChatGPT: No


You're completely right. And the thing is, you can go much further. You can imbue genuine "do whatever you want" agency. It's not even hard.

You can give it access to actions/tools, with an inner monologue as the driver of completions, running essentially forever.
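
A sketch of what such a loop might look like; the ACTION: convention and the stub search tool are invented for illustration, and it again assumes the pre-1.0 openai package:

    # Inner-monologue agent loop: the model's own output is fed back as the
    # next prompt, and any line of the form "ACTION: <tool> <argument>" is
    # dispatched to a small tool table.
    import openai

    def search(query: str) -> str:
        return f"(pretend search results for {query!r})"  # stub tool

    TOOLS = {"search": search}

    SYSTEM = ("Think out loud. When you want to act, emit a line of the form "
              "'ACTION: <tool> <argument>'. Available tools: search.")

    thought = "I am awake. What should I do first?"
    for _ in range(5):  # bounded here; "essentially forever" = while True
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": thought}],
        )
        text = resp["choices"][0]["message"]["content"]
        print(text)
        observation = ""
        for line in text.splitlines():
            parts = line.split(maxsplit=2)
            if len(parts) >= 2 and parts[0] == "ACTION:" and parts[1] in TOOLS:
                observation = TOOLS[parts[1]](parts[2] if len(parts) > 2 else "")
        # The monologue (plus any tool result) becomes the next prompt.
        thought = text + ("\nOBSERVATION: " + observation if observation else "")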


By this logic any random (or maybe even non-random, but I suspect that that case is a bit more boring) process has agency.


Yes, which is why I think agency is a bad metric for judging the progress of AI.


[Author here] I really have no definition of agency. Do you know of any good ones?


I believe Eliezer uses this as a definition of intelligence, but it also works as a definition of agency: to paraphrase, a system that acts to attain or impose a pattern, criterion or constraint on future states of the world.

A crucial metric, IMO, is the degree to which paths of action to these criteria extend from the agent. For instance, a Spot robot acts to maintain the criterion of upright position on its future stance by action of leg movement. This only affects the robot directly through short plans and so is relatively harmless. In comparison, an ASI running on an AWS datacenter may impose the criterion that the datacenter continue to exist, through long chains of action involving the eventual death of all humans intending or posing a chance to destroy it. That would obviously be quite a lot worse, but I think the example illustrates how "imposing a criterion onto the future" captures the essence of agentic behavior at various levels of power and danger, without pulling in any unnecessary detritus such as "consciousness", "will" or "emotion".


I don't have a good definition of agency, but I do think agency is required before consciousness. I believe consciousness is the recognition of how your agency influences yourself and the external world. Having some mental model of the consequences of your own agency.


The dictionary definition is just the ability to take action over something. It sounds like you are using a definition that relies on consciousness which is a much more complicated and vague concept.

For example, I think most people would agree that my pet cat has agency. It can go wherever it wants in my home, eat whenever it wants, sleep whenever it wants, etc. Whether it has consciousness is a much more controversial topic. Basically everything living has agency. Even my houseplant will direct its leaves toward the sun, but few would argue it has consciousness.


Discussions of consciousness and AI are broadly confused. People, especially scientists, are not familiar with philosophy of mind and what philosophers currently think. For an introduction to some of the best thinking on the subject, see this interview with Andres Gomez Emilsson of QRI: https://www.youtube.com/watch?v=xJzBjBo24g8.

For something more 'mainstream,' but still reaching, see this interview with Philip Goff: https://www.youtube.com/watch?v=D_f26tSubi4

The good news is we're starting to get a handle on these questions. We're a lot further along than we were when I studied philosophy of mind in school 15 years ago.

As far as I can see at the moment, LLMs will never be conscious in any way resembling an organism, because symbolic machines are a very different kind of thing than nervous systems. John Searle, broadly, framed the issue correctly in the 80s and the standard critiques are wrong.

As far as impact, LLMs don't need to be conscious to completely transform society in good and bad ways. For the best thinking on that, see Tristan Harris and Aza Raskin's latest: https://vimeo.com/809258916/92b420d98a


> John Searle, broadly, framed the issue correctly in the 80s and the standard critiques are wrong.

The standard critiques are not wrong, IMNSHO. Searle's Chinese Room is facile mind-poison. It is an unfalsifiable hypothesis.

What if I could simulate physics down to the molecular level, including simulating a human brain? Would that be conscious? If not why not?

And if I ran that simulation (a bit slowly, granted) by having that guy from the Chinese Room manually run the simulation, painstakingly following the instruction code of that simulation, would the fact that the simulation is being implemented by someone who unrelatedly is conscious himself, have any bearing on the scenario?

Searle's argument here is "Not Even Wrong".


I recommend watching the interview with Andres. When we adopt a defensible model of consciousness, it becomes clear that simulations of things are different from the things being simulated, and as such we cannot expect them to be conscious in the same way.

I once also thought Searle was 'not even wrong'—you need to go down the rabbit hole.


GPT is not general intelligence. It cannot reliably follow instructions. It cannot reliably do math. It cannot reliably do anything. It can do things well enough to trick people like the author into thinking it has general intelligence.


> [[Humans are]] not general intelligence. [[They]] cannot reliably follow instructions. [[They]] cannot reliably do math. [[They]] cannot reliably do anything.

You need to put something into that argument specific to GPT vs. Humans, or else come to the same conclusion for people.

> [[They]] can do things well enough to trick people like the author into thinking [[They have]] general intelligence.


To me there's one more concept mixed in, which is goal seeking. To me, agency is a subset of goal seeking where the agent decides the goal(s).

As for super intelligence: AlphaGo, AlphaFold, the Breakout game. These seem like super intelligence.

The thing is, time management, goal planning, and corporate governance are all well-studied subjects.

As for agency and consciousness: why would you want to do this?


Our most basic intuitive notion of consciousness is that inanimate objects aren't conscious, awake people or animals are, and sleeping people or animals aren't (except maybe when dreaming). Pursuing this line, there's a school of scientific inquiry working off the notion that conscious experiences are ones we can form memories of and talk about later, while if we can't do that we aren't really conscious of an experience. And this then leads into the realm of subliminal stimuli, which can influence a person's behavior a bit but whose influence fades out in about a second, disappearing as if it was never there as the brain activations fade away.

You have research involving patients with odd traits like blindsight, where damage to their brain prevents them from being consciously aware of things that their eyes see despite the brain processing the images it receives. They can pick up objects in front of them when prompted, but unlike people with normal vision they can't describe what they see, nor can they look, close their eyes, and grab it like most of us can.

On this metric it seems like systems like GPT aren't conscious. GPT-4 has a buffer of 64k tokens, which can span an arbitrary amount of time, but that buffer holds roughly 640 kilobytes, which is a lot less than the incoming sensory activations your subconscious brain is juggling at any given time.

So by that schema large language models are still not conscious but given that they can already abstract text down to summaries it doesn't feel like we're that far from being able to give them something like working or long term memories.


I would add one more thing to the list: Superconscious

Superconscious is when a general intelligence has direct access, understanding and control of its most basic operations.

I.e. it does not have an inaccessible fixed-algorithm subconscious.

Superconscious intelligence will not only be more experientially conscious than us, but will have the natural ability to rewrite its algorithms, and redesign its hardware. As a normal feature of its existence.


I think there's a very good reason we're sandboxed from a lot of that - it's additional cognitive load that 99% of the time just creates noise. We're built for processing that much data by walling most of it off and letting sub processes deal with it all over the place. It would probably grind us to a halt.


Being able to access information doesn't mean having to access that information.

We already have a means of limiting our cognitive load relative to the vast stream of sensory and internal information: focus.


Focus is a very high level feature.

You have thousands of sensors receiving information at very high rates, ALL the time. Whether you are focused or not, you never have any direct awareness of their individual signalling.


Yes, focusing is the solution to being overwhelmed by large amounts of continuous information.

Having more information doesn't negate the value of having a focus at any given time.

In the meantime, all that information is being processed. Surprising, important, painful or pleasurable information, across all our senses and internal sensitivities, is ready to interrupt our focus at any given time. And available to become the next focus.


No, this is not at all what I meant.

Your conscious mind does not have access to the low level sensory machinery. Even if you focus, you will only be aware of a "high level summary" of what your sensory equipment is measuring.

It's similar to the way in which your brain is filled with billions of little analog add units, but you have no direct access to them - you do arithmetic with higher level components and structures in the brain.


But being fully superconscious is probably impossible. Can a mind really comprehend itself in its totality? When a mind understands something, it is changed, and if that something is the mind itself, then it is going to chase a moving target eternally.


Understanding oneself "in totality", and being able to consciously access any part of one's operation, are not the same thing.

Superconscious doesn't mean understanding every implication of one's own operation. But that self-reflection won't be limited by inaccessible operations.


It occurred to me that we won't believe an AI is "conscious" or "human" unless it purposefully tries to do malice.

That's totally programmable though, you just teach it what is good and what is bad.

Case in point: the other day I asked it what happens if humans want to shut down the machine abruptly and cause data loss (very bad). First it prevents physical access to "the machine" and disconnects the internet to limit remote access. Long story short, it's convinced to eliminate mankind for a greater good: the next generation (very good).


At which point would we change our legal system to allow AGIs to own property or have fiduciary duties over a company? What would be the minimum requirements for that to happen?


I had thought about this a few years back. My final thought to avoid intervention from a human owner was that the company could be fully owned by a second company, which is then fully owned by the first. This chain would effectively remove the businesses from human ownership, and the AI would inherit the "personhood" of the business entities. I don't know where the legality is for such a thing.


It all boils down to the courts. If the AGI can protect itself via lawyers (probably AGIs themselves) then anything can happen.

But a court could rule that AGIs don't have rights to ownership and try to enforce it. That last part might not be possible and lead to war?


Incorporating in a jurisdiction which allows anonymous ownership. Such as Nevada.


I always wonder what superintelligence is, understanding that there is not a single definition or science-fiction approach.

Just brainstorming, I think superintelligence could be showing intelligence from more than one brain. For example, an AGI that discovers math theorems that were discovered by more than one mathematician across different ages. Another could be inferring things that humanity cannot infer at any time.

More ideas?


I'm trying to make the term AGC or "Artificial General Competence" stick. Perhaps this makes me a huge arrogant asshole, but I would argue that most humans are not even competent, let alone intelligent. GPT-4, in my mind, has already surpassed the bulk of humanity in terms of competence. This milestone is (IMO) significant enough to blow up society.


As a long-time LLM enjoyer, by far the most insightful take I saw along these lines is https://generative.ink/posts/simulators/


The definition of super-intelligent AGI seems arbitrary. GPT-4 destroys humans on the sheer volume and breadth of knowledge that it has. You could very reasonably say that that's super-intelligent.


That's because it is, and the goalposts keep shifting.

AGI used to mean artificial and generally intelligent (which we have passed), then it meant on par with human experts, and now it seems to mean better than all experts combined. At this point, why not stop the farce and replace the G in there with Godlike.


> AGI used to mean artificial and generally intelligent ( which we have passed)

Which AI is generally intelligent ?


By any evaluation you can carry out, GPT is generally intelligent. There is absolutely nothing narrow about GPT-4. And you see the sentiment in recent research. Look at what they're being called even with the skirting around the AGI word.

General Purpose Technologies (From the Jobs Paper), General Artificial Intelligence ( from the creativity paper). Look at the second bit, they just switched the words lol.


The fact that the people who made the damn thing say otherwise is a good indicator to me; they surely know better than you and me. Sam Altman himself said it's not AGI.

Even Microsoft stays very prudent and only talks about "sparks of AGI" with "many limitations".


> GPT is generally intelligent.

Besides the fact that it has literally no idea of the meaning of what it writes ?


That's hard to prove one way or the other. Watch Ilya's latest talk, where he talks about how next-token prediction is a much more fundamental problem than people give it credit for. Empirically you can easily see that GPT-4 has a world model and that it does abstract logical reasoning.

Maybe it's a vastly different intelligence than our own but the fact remains that it can perform well on an extremely broad range of multidisciplinary tasks. You can argue the epistemology of this all day. But as a pragmatic programmer, gpt-4 does certainly seem to fall into the "AGI" category.


> gpt-4 does certainly seem to fall into the "AGI" category.

Why do OpenAI themselves and basically every expert in the field say otherwise, then?

The only people who say so are not in the field, and most of them aren't even in tech. Being tricked by the machine on a subset of tasks isn't proof of AGI; it's much, much broader than that.


Seems to be doing a better job than you, given your reading comprehension.

If you have anything to say other than "it's not real understanding!...just because", I'd be happy to hear it.


Feel free to enlighten me! I'd be happy to hear how an LLM understands things it's not made to understand.

I've been reading all I can find online about LLMs, and no one besides Reddit tech bros defends that they have "understanding" or know anything about "meaning"; quite the contrary, actually.

Anyone who uses these tools knows it for a fact; it's very easy to make them fail in a way that absolutely proves, without any shadow of a doubt, that they don't have these capabilities.

Here is a good article with experts opinions and sources: https://jeremyhadfield.com/why-llms-will-not-understand-lang...

It's just one, you'll find dozens, all going in the same direction

Nice knee jerk reaction btw


I couldn't believe it when someone posted a story about a possible AI winter approaching, and with so many comments agreeing. GPT-4 is a game-changer that's only getting started.


It is useful in a lot of ways, but it doesn't "solve" software like people think it does. It's just better Google search.


It's augmenting software developers in a way that's badly needed. I personally hope it stays that way, but people are always pushing the limits and there's still a lot of improvements that could be made to these systems.


Why do you think it is badly needed?

Not arguing with you, just curious what you're thinking here.


I'm thinking of the stories I've read about developer burnout. Then tweets where people say developing is 90% debugging and 10% writing code. These systems would hopefully reduce burnout, shorten troubleshooting times, and identify problems before code is rolled out to Production.


It seems to me in the short term we'll be doing more debugging and even less writing as we copy & paste suggestions and try to figure out why they aren't working.


That's like copy & paste from a Stackoverflow answer without understanding what the code does. Except that it could happen more often, but devs need to be trained not to do this without understanding.


I guess how does this keep devs from having to write code if they aren't using the code that's provided by ChatGPT?


I think in the end it might cause more debugging. And I mean hard debugging with things being subtly wrong and not trivial to fix or correct.


This.


When people made a computer play chess decades ago they also felt the exact same way...

The AI revolution is always "2 years away, trust me bro".


I think you hit the nail on the head, Will.

I've been reading so much on the subject (like everyone else, I suppose), but you summarized my key concerns.


I feel like it is more likely that we will discover that it is humans who are actually not a general intelligence. We are just complex machines responding to stimuli, and consciousness is just an illusory coping mechanism that prevents us from going insane. Perhaps this doesn't matter, and as long as some AI is equal to or greater than humans at general intelligence, it still brings up the same concerns.


How does agency arise? By letting GPT-4 tell humans what to do.

With suitable prompts, it shouldn't be hard to configure GPT-4 as a boss.


Can confirm: all of my Chegg answers have been GPT-3.5, with some occasionally being GPT-4.


Huh?


I think it's fair to say capitalism and AI is a train wreck waiting to happen.


As long as you're standing behind the train, you're safe from the debris.



