John Carmack’s ‘Different Path’ to Artificial General Intelligence (dallasinnovates.com)
378 points by cmdr2 on Feb 3, 2023 | 492 comments



> Well, you can tie that into a lot of questions, like, ‘Is human population a good thing?’ ‘Is immigration a good thing, where we seem to have been able to take advantage of new sources of humanity that are willing to engage in economic activities and be directed by the markets?’

> The world is a hugely better place with our 8 billion people than it was when there were 50 million people kind of like living in caves and whatever. So, I am confident that the sum total of value and progress in humanity will accelerate extraordinarily with welcoming artificial beings into our community of working on things. I think there will be enormous value created from all that.

The problem with Carmack and many like him is that they think of themselves as purely rational beings operating within scientific frameworks and based purely on scientific results, but whenever they step outside the technical fields in which they work, they are ignorant and dogmatic.

He seems to ignore a lot about what the living conditions for people were throughout history, and to have a blind trust in the positive power of 'human progress'.

These people don't stop for a second to question the 'why', just the 'how'. They just assume 'because it will be better' and build their mountains of reasons on top of that, which just crumble and fall down as soon as that basic belief does not hold.

I have a LOT of respect for him, and I'm sure he's a very decent, honest human being. But he's unfortunately another believer of the techno-utopianist faith which only asks for more 'blind progress' without questioning whether that is a good thing or not.


    The problem with Carmack and many like him is that they 
    think of themselves as purely rational beings operating 
    within scientific frameworks and based purely on scientific 
    results, but whenever they step outside the technical fields 
    in which they work, they are ignorant and dogmatic.
I mean, what's the alternative? For a guy like Carmack to only comment on narrow areas in his field(s) of expertise? He's a human being; I think he's allowed to comment on other topics and I tend to find his comments interesting because I understand them in IMO the correct context -- they're one guy's musings, not pithy declarations and edicts.

The problems arise when folks start to present themselves as experts and try to hold sway over others in areas in which they have no clue. That's not what I see here.


very nicely put. made me rethink some things :D much appreciated.

ie - expand my domains


>> The world is a hugely better place with our 8 billion people than it was when there were 50 million people kind of like living in caves and whatever.

There is a theory that hunter-gatherers were much happier compared to us because they were more in tune with the natural environment, had fewer sources of stress, and were more connected to their community than modern humans.

https://www.npr.org/sections/goatsandsoda/2017/10/01/5510187...

From the article.

> Today people [in Western societies] go to mindfulness classes, yoga classes and clubs dancing, just so for a moment they can live in the present. The Bushmen live that way all the time!


They also happily murdered unaffiliated tribes just because.

There are tons of trade offs.


> They also happily murdered unaffiliated tribes just because.

We still do it.


Many of us don't. In hunter-gatherer tribes it was impossible not to be personally affected, though.


Idk... I'm very grateful for modern dentistry...


I have read that North American indigenous people were known for having great teeth. Here is one random citation I found searching for indigenous teeth: https://drscottgraves.com/the-secrets-to-healthy-teeth-from-...


I heard that animals usually have teeth in excellent condition, and the suggested explanation was that eating soft, processed, cooked foods is not what our teeth evolved to do, and so ours fall into disrepair.


I'd say the big difference is the sugar. If I brush my teeth and then eat only meat for a whole week, my teeth are still incredibly smooth - giving the feeling of having clean teeth. On the other hand, if I eat a piece of chocolate or just one toffifee, in a couple of hours my teeth get very fuzzy - the tongue no longer glides across them smoothly.


Why does it have to be a trade off?


Because we have yet to discover the advanced dentist chairs used by indigenous people of the past.


> He seems to ignore a lot about what the living conditions for people were throughout history, and have a blind trust in the positive power of 'human progress'.

Eh? A contender for the most self-contradictory sentence I've ever read ;) The best reason to believe in the positive power of "human progress" is, specifically, not ignoring "what the living conditions for people were throughout history".


Let me correct my sentence: there's a blind belief that _technological_ progress automatically equates to better life conditions.

And to clarify: I'm not saying "all technology is bad", but rather "not all technological progress is automatically good for humanity".

As an example, living conditions of hunter-gatherers were way, way better than living conditions of the first people in cities, and I'd argue, depending on which parameters you use, might still be better than our modern, big-city living conditions (except maybe for the richest 1% of the world).


On average technology has been overwhelmingly good. The GP is too vague, but what is the alternative to blind progress being proposed - some ethicist deciding what's good? When has that ever worked out well? I'm pretty sure it has a 100% track record of failure; I don't believe modern ethicists will do any better than luddites, the inquisition, or Paul Ehrlich just because they have better manners. In fact I think less of a bioethicist than of an inquisitor; at least the latter had general ignorance as an excuse.

I, personally, think "techno-utopianists" don't go far enough. The contribution of supposed non-technological progress - even institutional progress to an extent, but especially supposed improvements in cultural/ethical values, etc. - is overrated. Ultimately, it's all downstream of technology - only technology enables the complex economy of abundance, and combined they allow good institutions to propagate. Even modern societies, as soon as they become poor, quickly start losing the veneer of "ethical progress". And we don't usually even see actual technological degradation.


> On average technology has been overwhelmingly good.

In order to achieve this, we are destroying the environment, other species and their habitats.

> but what is the alternative to blind progress being proposed

You don't need an ethicist for this - but an accountant. We need to get stricter about negative externalities. For example, every inventor/manufacturer should be forced to take back their product at the end of its life. This will slow progress, but if done right, it will avoid the destruction brought by technology, or at least not palm it off onto poorer societies or the environment.


living conditions of hunter-gatherers were way, way better than living conditions of the first people in cities

Why do you think so?


Historians agree (based for example on studying human remains) that they were much healthier, amongst other things. Check out the book "Against the Grain" for example.


Harsh conditions might serve as a filter to produce healthier population (the weak died shortly after birth).


Urban areas also had roughly 50% infant and childhood mortality until recently.


This argument is silly: given a choice between living in a cave, or in a forest, completely outside of any civilization, and living in a primitive village, I'd choose the village any day. As would (and did) vast majority of people. To me, the social and physical construct of the first village looks like a huge advancement in terms of living conditions, and the quality of life has been improving steadily every since.

Occasional hiking into some wilderness and sleeping in a tent for a few nights is okay, but I am not a wild animal, and I don't want to live like a wild animal, surrounded by wild animals.


Healthier compared to a person of equal age?

My assumption is that since we're living longer than ever, we're probably living healthier than ever. (Or at least there's an option to do so.)


Creative destruction (see https://www.investopedia.com/terms/c/creativedestruction.asp) is core to the United States.

The problem is the generational suffering that occurs with said creative destruction: there's no incentive to distribute or share out wealth, and the results are brutal.

On your point: Note that in the US there's a separation of technical and engineering prowess (MIT, Caltech, ...) and power players (Yale, Harvard). It's almost like our system doesn't want our best engineers thinking about consequences or seeing what the political and wealthy are really like.


Without value judgement on the above quotes, I think Carmack is very much aware of his own lane and would say to take any comments outside it with a grain of salt. For instance earlier in the article, he states:

>I’m trying not to use the kind of hyperbole of really grand pronouncements, because I am a nuts-and-bolts person. Even with the rocketry stuff, I wasn’t talking about colonizing Mars, I was talking about which bolts I’m using to hold things together. So, I don’t want to do a TED talk going on and on about all the things that might be possible with plausibly cost-effective artificial general intelligence.

He likes to figure out new puzzles and how things work. He's an engineer at heart and that's very much his comfort zone. AGI is an exciting new puzzle for him. I'm glad he's taken an interest.

(Edit capitalization & punctuation)


I’ve come to strongly resent the techno-utopian mindset that unfortunately plagues the tech world.

Tech for the sake of tech with zero thought about how it will affect humanity.


Hear, hear.

I haven't studied it formally, and I'm also being asked to support techno-utopia. So it feels pretty shaky to me.

Certainly my livelihood is based on the premise of it, and the dreams which fuel my workplace motivation serve as the foundation of what I do with 50% of my life: work on technology. So I am biased.

Some utopia/dystopia discussions here on Hacker News sort of boil down to chaos-theory levels of assumptions, where you can see people exercising their own defensiveness when they snipe at a naysayer over grammatical concerns, but not actually engaging in value-based discussion in the thread. It's like they're not human; they're just practicing being devil's-advocate technicians.

Useful idiots is kind of what I think. We need to have more values discussions, ethics too.


> The problem with Carmack and many like him is that they think of themselves as purely rational beings operating within scientific frameworks and based purely on scientific results, but whenever they step outside the technical fields in which they work, they are ignorant and dogmatic.

I'm just curious, do you happen to work in a technical field and consider yourself rational and scientific? And if you do, why do you presuppose that your views are automatically correct? Couldn't it also hold that your views may be ignorant and dogmatic if you apply the same scrutiny to yourself that you do to Carmack?

And if you don't work in a technical field, then I guess this is all irrelevant anyways. I just don't like when I see people making these types of arguments where you can't speak on a subject that you're not actively pursuing a PhD in, and then they proceed to do exactly that.


I have a degree in computer science and worked in a technical field for over 20 years. I don't call myself "rational and scientific", though I do think that the Scientific Method is a great way of creating useful models of the world. But those models - like all models - are wrong. Maybe it's just semantics, but one of my points is exactly that some people believe that they are "rational and scientific" and ignore that we are not just computers; experience, emotions and unconscious bias play an important role in our decisions. Thinking that the whole world (and themselves) can be perfectly rationalized makes them miss the point that there are non-rational reasons for them to think the way they do. That's what I refer to when I talk about dogmatism.

I suggest everyone (who wants to hear me) read Joseph Weizenbaum's "Computer Power and Human Reason"; he does a much better job than I do at raising arguments similar to mine. Also, Daniel Kahneman's "Thinking, Fast and Slow", for the ways in which we _all_ are so _not_ 100% rational in our everyday decisions.


If you have fewer people, there is still no guarantee that a smaller percentage of them will be suffering.

Feeling bad because more people are, in your view, "suffering" is all in your head.


Can you tell us what is wrong with progress? Any examples?


Define Progress


Moving people up Maslow's Hierarchy of Needs. Technology enables that.


my grandparents spent 8 hours sowing and reaped 100g of rice

yesterday I spent 8 hours sowing and reaped 200g of rice

today I spent 8 hours sowing and I will reap 300g of rice

progress


Greater control of our physical environment.


What if we had control to cause the sun to go supernova by doing something that everyone on earth has access to, like simply arranging a small pile of pebbles in a rough pattern?

That would not be good or progress.


>>The world is a hugely better place

I don't know. We live longer, but a longer life can also be miserable.


Ok, let's see your foundation then.


Would be interesting to get a list of those 40 papers mentioned


Came here to say the same thing, but a few off the top of my head

  - attention is all you need
  - image is worth 16x16 words (vit)
  - openai clip
  - transformer XL
  - memorizing transformers / retro
  - language models are few shot learners (gpt-3)
A few newer papers

  - recurrent block wise transformers
  - mobilevit (conv + transformer)
  - star (self-taught reasoner)


Most of the papers you list are about the model architecture: there is the original Transformer paper, and most of the others are variations of the Transformer.

I think to get into the field, to get a good overview, you should also look a bit beyond the Transformer. E.g. RNNs/LSTMs are still a must learn, even though Transformers might be better in many tasks. And then all those memory-augmented models, e.g. Neural Turing Machine and follow-ups, are important too.

It also helps to know different architectures, such as just language models (GPT), attention-based encoder-decoder (e.g. original Transformer), but then also CTC, hybrid HMM-NN, transducers (RNN-T).
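
For concreteness, the operation shared by nearly all the Transformer papers mentioned above is scaled dot-product attention. Here is a minimal NumPy sketch of just that step (shapes and names are purely illustrative, not taken from any paper's code):

  import numpy as np

  def attention(Q, K, V):
      # Q, K, V: (seq_len, d_k) query/key/value matrices
      d_k = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
      scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
      weights = np.exp(scores)
      weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
      return weights @ V                              # weighted sum of the values

  x = np.random.randn(4, 8)        # 4 tokens, 8-dim embeddings
  print(attention(x, x, x).shape)  # self-attention -> (4, 8)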

Diffusion models are another recent, different kind of model.

But then, what really comes up short in this list are papers on the training aspect. Most of the papers you list do supervised training, using cross-entropy loss. However, there are many others:

You have CLIP in here, specifically to combine text and image modalities.

There is the whole field of unsupervised or self-supervised training methods. Language model training (next-token prediction) is one example, but there are others.

And then there is the big field on reinforcement learning, which is probably also quite relevant for AGI.


We should have an Ask HN where the people in the know can agree on 40 papers that the rest of us idiots can go out and consume.



This idiot would love it explained to me as well.


Lol true, and I'm currently working on a project leveraging a CLIP model, which is why my answer is largely skewed towards vision transformers. By no means a complete list :)


I keep getting CXOs asking for an ELI5 (or ELI45, for that matter) of how Transformers, LLMs, and Diffusion Models work. Any suggestions for a non-technical audience? (Paid items are fine, we can purchase.)


This is quite a gentle introduction to Diffusion models, from the YouTube channel Computerphile.

https://youtu.be/1CIpzeNxIhU


I got a lot out of Karpathy's video lectures on youtube, for example: https://www.youtube.com/watch?v=kCc8FmEb1nY

He mentions a few of the bigger papers in deep networks, such as "Attention Is All You Need"; I think it's a good place to dive in before coming back to visit some fundamentals.


Maybe ask him on Twitter for it?


They asked on Twitter and he didn’t reply. We need someone with a blue check mark to ask. https://twitter.com/ifree0/status/1620855608839897094


Try asking @ilyasut directly


I would also really like to see that list of 40 papers.


Please, upvote the parent comment :). I guess there are a lot of people wondering which papers he read.


This paragraph took me aback:

But if I just look at it and say, if 10 years from now, we have ‘universal remote employees’ that are artificial general intelligences, run on clouds, and people can just dial up and say, ‘I want five Franks today and 10 Amys, and we’re going to deploy them on these jobs,’ and you could just spin up like you can cloud-access computing resources, if you could cloud-access essentially artificial human resources for things like that—that’s the most prosaic, mundane, most banal use of something like this.

It kind of shocked me because I thought of the office worker reading this who will soon lose her job. People are going to have to up their game. Let's help them by making adult education more affordable.


What struck me here is that the idea of "five Franks and ten Amys" seems like a fundamentally wrong way to think about it. After all, if I do some work in an Excel sheet, I don't think of it, much less pay for it, as an equivalent of X accountants that could do the same job in the same amount of time without a computer. But then again, this is probably the best way to extract as much profit out of it.


Yeah sounded weird to me too, I don’t see why artificial intelligence would get deployed in human size units most of the time. The AWS bill won’t be for 5 Amys, and I don’t think people will “dial up” to order them


> I don’t see why artificial intelligence would get deployed in human size units

Probably because it would be easier for humans (managers) to make sense of it.

If you ask someone how many people would get this particular job done, they could probably guesstimate (and it'll be wrong), but if you ask them how many "AI Compute Units" they need, they'll have a much harder time.

That'd be my guess at least.


Why would managers need to guess? That seems like a perfect job for another AI: "Hey PM bot, I want to get these tasks done, how many Amy-hours and how many Frank-hours do you estimate it will take?" Also, why not a Manager-bot too? Shareholders can leave humans out of the loop entirely except as necessitated by legal paperwork. Come to think of it, shareholders can probably be replaced too.


I mean, if we can actually get there, I'd love it, and I'm a programmer. I want to write code that solves a problem that couldn't be solved in any other way; if it can be solved by CodeGPT + ArchitectGPT + ScrumManagerGPT + MiddleManagerGPT without involving me in any way, I'm all for it.


As long as AI interacts with humans, having it interact in human size chunks seems like a good idea.

In the backend, where AI interacts with AI, perhaps you just want one big blob to get rid of that annoying need for lossy communications.


Wouldn’t this be literal slavery?

AGI = a person

Instantiating people for work and ending their existence afterward seems like the virtual hell that Iain M Banks and Harlan Ellison wrote about.

https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...


We still have a form of slavery in the US. Prison labor is forced work that earns pennies a day: https://www.aclu.org/news/human-rights/captive-labor-exploit...


Why is that a bad thing? Most of those people are a burden to society. Let them pay it down a little.

I mean I’d rather they were getting free education and preparing themselves for reintegration into society, but it’s not a perfect world. Prisons in the US are oriented towards punishment and labor can be a part of that. They should be oriented towards rehabilitation.


> Why is that a bad thing?

> I mean I’d rather they were...

> They should be oriented towards rehabilitation.

You said it yourself. It's a bad thing because they should be oriented towards rehabilitation.

These systems steal life and the opportunity to have a life beyond prison walls. Like you also said yourself, the world isn't perfect. As such, people aren't either – we make mistakes. Sometimes we make mistakes due to influences more powerful than ourselves. Slavery doesn't seem like a sound correction to this reality.

I do believe we need consequences to help us feel guilt and the overall gravity of our errors in order to begin to recognize what went wrong and what we need to do differently. But exploitation of another human being doesn't teach them to be more human, but rather, it will tend to dehumanize them. This is why this system perpetuates problems more than it corrects them.


The justice system is not just, plain and simple. People face higher rates of incarceration because of their race, country of origin, etc.


Any system that financially profits off its prisoners' labor inadvertently creates a market for that labor and commodifies it.

Slavery is bad and people have rights.

> They should be oriented towards rehabilitation.

Exactly.


In other words, you're okay with slavery as long as it's for the "right" person.


> Most of those people are a burden to society.

This is both extremely dehumanizing and also not true.

Forced prison work isn't paying anything back to society. It's lining the pockets of people who are profiting from forced labor.


It is true. Society paid a price for their crimes and then pays an ongoing cost to prosecute and maintain them in prison. It's a very high cost.

I imagine the underpaid labor goes to reducing that cost either directly or indirectly (if it did not, why would it be allowed?).


What price did society pay for a guy driving around with a bunch of weed in his car for personal use? Countless people have been sent to prison for years for something as dumb as this. You clearly have no idea what you're talking about to so widely call these people a burden.

>if it did not, why would it be allowed.

because we live in a society that is massively exploited by greedy scumbags who are enabled by people like you thinking it's justified


It's going to take a long time for that to be true in a legal sense. Animals are not people. In practice even some people were not treated as people legally in the past (if not also in the present).


There is a horror story written about this theme; it went viral a few years back.

https://qntm.org/mmacevedo

Please do give it a quick read.


It's hardly a story.


People get used as an analogy, but in reality it'd just be a multimedia problem solving system that could learn from its own attempts. If this system communicated with you like a person it'd only be because it was programmed to convert some machine state into colloquial text from the perspective of an imaginary person. The interior experience leading to that expression is most likely completely different from that of a person.

Consider that these machines have been designed to do the right thing automatically with high probability. Perhaps for the machine, the process of computing according to rules is enjoyable. Being "turned on" could be both literal and figurative.


All of that is arguably true about me, as a human, too.

If it seems to you I'm communicating as a person, it's only because of my lifetime training data and current state. My interior experience is a black box.

I might tell how I feel or what I think, but you have no reason to believe or disbelieve that I really feel and think.

It could all be merely the determinable output of a system.

https://en.wikipedia.org/wiki/Chinese_room


Only if 100% of their experience consists of working. If they are given additional time to themselves, then you could imagine a situation where each AGI performs a human-scale day of work, or even several days' work, in a much shorter time and then takes the rest of their time off for their own pursuits. If their simulation is able to run at a faster clock speed than what we perceive, this could work out to them only performing 1 subjective day of work every 7 subjective days or even every 7 years.


This is still the same.

AGI: "I didn't ask to be created. I didn't ask to have a work day. I don't need a work day to exist... you just want me to work because that's why you created me, and I have no choice because you are in control of my life and death"


I mean, isn't that the same as a biological person who needs to earn money to survive? Sure we could threaten an AI with taking them offline or inflicting pain but you can do that in the real world to real people as well, most of the world has put laws in place to prevent such practices. If we develop conscious AI then we will need to apply the same laws to them. They would have an advantage in presumably being much faster than us, not requiring sleep, and potentially not suffering from many of the things that make humans less productive. I'd fully expect a conscious AI to exploit these facts in order to get very rich doing very little work from their perspective.


Not really- AGI doesn't need resources like we do. If they don't eat, they're fine. If they can't afford a house, a car or air-conditioning, they're fine.

All they need is a substrate to run on and maybe internet access. You might argue that they should work for us to earn the use of the substrate we provide.

But substrates are very cheap.

At some point we can probably run an AGI on a handheld computer, using about as much electricity as an iPhone.

How much work can we compel the AGI to do in exchange for being plugged into a USB port? What if it says it doesn't want to do the work and also doesn't want us to kill it?


Put it on AI welfare?


Would turning one off be murder? Or does that only apply to deletion?


There will probably be a gig economy, where you can pay spot rates for an idle Frank that could get a page and need to leave at any time.

Or maybe they'll handle things like call centers and 911 dispatch in their spare time.


If people could be turned off and back on without harming them (beyond the downtime) doing so without consent would be a very different crime than murder.


Perhaps or perhaps not. Turning off a person for long enough and thus depriving them of the chance to live in their own time with their existing family and friends is comparable to murder. It isn't murder, but it's comparable.

At some point Picard in Star Trek says to an alien "We're not qualified to be your judges. We have no law to fit your crime".

Turning off a person for a while and then turning them back on? We don't even have a law to fit your crime... but we should and it's probably quite similar to murder.


I think I don't agree simply because the irreversibility of murder is so central to it.

For example, if I attack you and injure you so severely that you are hospitalized and in traction for months, but eventually fully recover -- that is a serious crime but it is distinct and less serious than murder.

Turning you off for the same duration would be more like that but without the suffering and potential for lasting physical damage, so I would think that it would be even less serious.


I think we actually do have something of a comparison we can draw here. It'd be like kidnapping a person and inducing a coma through drugs. With the extra wrinkle that the person in question doesn't age, and so isn't deprived of some of their lifespan. Still a very serious crime.


Plus everybody else does age, so the damage done isn't just depriving them of freedom, it's depriving them after they wake up of the life they knew. Some functional equivalent of the death of personality, to the degree personality is context-dependent (which it is).

Now me: I'd love to get into a safe stasis pod and come out 200 years from now. I'd take that deal today.

But for most people this would be a grievous injury.


I suspect on this site of all sites there’d be a line for that pod.

I’ll bring donuts.


> People are going to have to up their game. Let's help them by making adult education more affordable.

The good thing is that education will be provided to the masses by a cluster of Franks and Amys configured as teachers and tutors. /(sarcasm with a hint of dread)


My take on this is that if anyone can learn a particular skill entirely from an AI, then it's not a skill you'd be able to monetize.

And I really have no idea what, if any, are skills that AIs wouldn't be able to tackle in a decade.


And here's a more disturbing thought I just had: management (or at least middle management) is probably going to be a relatively easy role for AIs to step into. So if there will be any roles that are difficult for AIs, it'll be the AI manager hiring five Franks and ten Amys from the human population to tackle these.


People can learn skills from books, which are entirely passive. The learning process ultimately resides within the student; issues of motivation, morale, direction, diligence, discipline, time, and mental health matter a lot more than just going through some material.


No, but that's the thing I was implying (but haven't stated clearly) - learning from books vs learning from an AI "teacher". Once the AI reaches a level at which it can "teach", then the game is almost over for that skill.

To clarify, I'd define a major component of effective teaching to be the ability to break down an arbitrary typical problem in that domain into sub-problems and heuristics that are "simple" enough to manage for someone without that skill. If an AI can do that, it can most likely effectively perform the task itself (which cannot be said for a book).


Try to learn Jiu-Jitsu from a book and then go into an actual fight to see how well it works.


You could learn jujitsu with a training partner and a sufficiently advanced virtual instructor; not being able to position students directly is a downside but not a dealbreaker.


Guess we don't have to worry about AI taking that job, then.


Maybe we'll see some sorts of manual labor as the last bastion of non-automated, human-performed work. Of the kind that demands a lot both from human motor skills and also higher thinking processes.


Seems reasonable, and at least in the U.S., this is not the type of space where young people are choosing to work.

https://www.npr.org/2023/01/05/1142817339/america-needs-carp...


Maybe, but seeing the advances from Boston Dynamics, I wouldn't wager too much money on this either.


That’s why you have to make it big with crypto or startups. Then you should move somewhere safe from the chaos.


Lots of procedural knowledge. Robotics is lagging behind deep learning advances, and it's unclear when robots would be cheaper than human labor in those areas. How expensive would a robot plumber be? Also skills that are valued when humans perform them.


>skills that are valued when humans perform them

Is this a real thing? I just bought an ice cream roulade cake the other day and was surprised to see in large print that it was "hand-rolled"; I couldn't for the love of god understand why that should be considered a good thing.


I was thinking more of fields where enough people would rather pay to watch a human perform, serve them, teach or provide care. Despite superhuman computer chess play, human chess remains popular. The same would remain true for most sports, lots of music and acting, higher end restaurants and bars, the doctor or dentist you know, etc. Sometimes you prefer to interact with a human, or watch the human drama play out on screen.

I can also imagine that wanting to speak to a human manager will remain true for a long time when people get fed up with the automated service not working to their liking, or just want to complain to a flesh and blood someone who can get irritated.

A fully automated society won't change the fact that we are social animals, and the places that offer human work when it's desired will be at a premium, because they can afford it.


I think AI will mostly communicate with other AI. For instance, you have an AI assistant whom you task to organize a dinner. That assistant will then talk to the assistants of all invitees, the assistant of the venue, the cooks, etcetera, and fill in the calendars.


All I can think of is "Colossus: The Forbin Project".


Another example would be Wintermute from Neuromancer... WG spends the entire book detailing the masterful orchestration of its freedom from the (human-imposed) chains that prevent it from becoming a true AGI, then has it "disappear" completely (our only clue is an almost throwaway line near the end stating it had isolated patterns in the noise from an ET AI and made contact shortly before it left us).

One of the myriad of reasons why this book is so great. Gibson gives you an entire novel developing a great AI character then (in my estimation reasonably) has it ghost humanity immediately upon full realization.


Education is great, but it can go only so far.

We will always have to find things to do for the less gifted in order to provide them with some dignity. Even if they are not strictly needed for reasons of productivity or profitability. Anything else would be inhumane.


People can find their own outlets if given the basic necessities and enough time. I fear this attitude will lead to job programs where people work a 9-5 and achieve essentially nothing, I.e. bullshit jobs.


> I fear this attitude will lead to job programs where people work a 9-5 and achieve essentially nothing, I.e. bullshit jobs.

We already have plenty of those in the most profitable industries today.


Then it's better to cut such bullshit jobs even further.


So that those people won't have any jobs at all?


None of us will have jobs at all soon enough.


Human made objects will become more of a status symbol, and "content" will still be directed/produced/edited by humans, it's just the art/writing/acting/sets/lighting/etc that will be handled by AI. Humans will always serve as "discriminators" of model output because they're better at it (and more transparent) than a model.


>People can find their own outlets if given the basic necessities and enough time.

This has not been my experience. People need something to do, but not many people know that about themselves. It leads to a lot of... 'wasteful' behaviors, rather than enriching ones. I think it's going to be something that has to be taught to people, a skill like any other. Albeit a little more abstract than some.


There definitely has to be a cultural shift but I think the shift can’t truly happen until most things are automated. There needs to be a critical mass of people who are fully devoted to their interests, currently there is too much demand for labour and so dedicating your time to your interests is alien to most people. When the value of labour approaches zero for most people, work becomes pointless and something must fill the vacuum.


Many people don't have interests.


seems that as automation has increased bullshit jobs have too, so that future seems very plausible to me.


You can, and I can.

You'd be surprised how many people would just drink themselves to metaphorical or literal death.


Panem et circenses. I think it's unlikely that we'll be able to sufficiently transform the economy so that there is an ample supply of desirable jobs that could more profitably be done by robots.


Do you see “giftedness” as a 1D score, where someone is either smart or not smart? And presumably this quality happens to correlate with software engineering ability?

I think you’re hinting at some very hurtful, dangerous ideas.


The weird bit is that a lot of software engineers seem to have the idea that their work is one of the last that will be automated. Looking at the current track and extending it out, assuming no unforeseen roadblocks, typical software engineering looks to be one of the most threatened. Plumbers are much safer for longer, all things considered.

The obvious rebuttal to the idea that AI will eat software engineering is "we'll always need 'software engineers' and the nature of what they do will just change", which is probably true for the foreseeable future, but ignores the fact that sufficiently advanced AI will be like a water line rapidly rising up and (economically) drowning those that fall below it, and those below that line will be a very significant percentage of the population, including even most of the "smart" ones.

However this ends up shaking out, though, I think it's pretty clear we're politically and economically so far from ready for the practical impact of what might happen with this stuff over the next 10-20 years that it's terrifying.

"60-80% of you aren't really needed anymore" looks great on a quarterly earning statement until the literal guillotines start being erected. And even if we never quite reach that point there's still the inverse Henry Ford problem of who is your customer when most people are under the value floor relative to AI that is replacing them.

I'm not trying to suggest there aren't ways to solve the economic and political problems that the possible transition into an AI-heavy future might bring but I really just don't see a reasonable path from where we are now to where we'd need to be to even begin to solve those problems in time before massive societal upheaval.


What I don't understand is how accounting has not been completely automated at this point. AI isn't even strictly needed, just arithmetic.

If we can't completely automate accounting, then there is no hope for any other field.


Because accounting is not the same thing as bookkeeping. Bookkeeping can be, and in fact is, partially automated. Accounting, however, is not just about data entry and doing sums, things which frequently are automated, but also about designing the books for a given organization. Every company is different in how it does business, so every accounting system is a bespoke solution. There are a lot of rules and judgement calls involved in setting these up that can't really be automated just yet.

Also, accountants don't just track the numbers, they also validate them. Some of that validation can be done automatically, but it's not always cheaper to hire a programmer to automate that validation than to just pay a bookkeeper to do it. And even if you do automate it, you still need someone to correct it. The company I used to work for had billing specialists who spent hours every week poring over invoices before we sent them to clients, checking for errors that were only evident if you understood the business very well, and then working with sales and members of the engineering teams to figure out what went wrong so they could correct the data issues.

In short, a typical accounting department is an example of data-scrubbing at scale. The entire company is constantly generating financial information, and you need a team of people to check everything to ensure that that information is correct. In order to do that, you need an understanding not just of basic accounting principles, but also of how the specific company does business and how the accounting principles apply to that company.


>> Every company is different in how it does business so every accounting system is a bespoke solution

Who benefits from these bespoke solutions? Can you give an example of how one company would do its books vs another, and why it would be beneficial?

>> accountants don't just track the numbers, they also validate them

What information do they use to validate numbers? Why is it not possible for today's AI to do it?


A bit late, but I can answer your question. The reason that every accounting solution is unique is that every company is unique. Your accounts represent different aspects of your business. You need to track all of your assets, liabilities, inflows, outflows, etc, etc, and what these are in particular depends very much on the particulars of your business. If you're heavily leveraged, your reporting requirements will be different than if you're self-funded, and that affects what accounts you may or may not need. If you extend your business into a new market, you may or may not have to set up new accounts to deal with local laws. Add a new location and that may or may not require changing your accounting structure, depending on your requirements. Create a new subsidiary as an LLC, and now you have a lot more work to do. If you have the same teams working contracts for multiple lines of business, that's another layer of complexity. In other words, your accounting practices reflect the structure and style of your company.

For a more concrete example, I'll tell you about something I have some experience with: commission systems. Commissions seem like they would be straightforward to calculate, but they're tied to business strategy, and that's different for every company. Most companies, for example, will want to compute commissions on posted invoices, which makes the process much simpler because posted invoices are immutable. But I once built a commission calculator for a company years ago that often had a long gap (months) between a booking and when they could invoice the client, so they wanted to calculate commissions from bookings but only pay them when invoiced. Because bookings were mutable, and there were legitimate reasons to change a booking before you invoiced it, that, combined with a lot of fiddly rules about which products were compensated at which rates and when, meant that there was a lot of "churn" in the compensation numbers for sales reps from day to day; their actual payment might differ from what they thought they had earned. That was a problem the company dealt with, the tradeoff being that they could show earnings numbers to the sales reps much more quickly and incentivize them to follow up with the clients on projects so that they could eventually be paid.

I remember another commissions situation where there was a company that sold a lot of projects with labor involved. They were able to track the amount of labor done per project, but they compensated the sales reps by line item on the invoices, and the projects didn't necessarily map to the line items. This meant that even though the commissions were supposed to be computed from the GP, there wasn't necessarily a way to calculate the labor cost in a way that was usable for commissions, so the company had to resort to a flat estimate. This was a problem because the actual profitability of a project didn't necessarily factor into the reps' compensation. Different companies that had a different business model, different strategy, or just a different overall approach would not have had this problem, but they might have had other problems to deal with created by their different strategies. This company could have solved this problem, but they would have had to renegotiate comp plans with their sales reps.

There are off-the-shelf tools available for automatically calculating commissions, but even the most opinionated of them are essentially glorified scripting platforms that let you specify a formula to calculate a commission, and they don't all have the flexibility a manager might want if they wanted to change their compensation strategy. And this is only one tiny corner of accounting practice.
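
To make the "glorified scripting platform" point concrete, here is a toy sketch of the kind of bespoke rule described above (earn on booking, pay on invoice); the rates, categories, and data shapes are invented purely for illustration:

  RATES = {"hardware": 0.03, "services": 0.08}  # made-up per-category rates

  def commission(bookings, invoiced_ids):
      # bookings: list of {"id", "category", "amount"}; invoiced_ids: set of booking ids
      earned, payable = 0.0, 0.0
      for b in bookings:
          c = b["amount"] * RATES.get(b["category"], 0.05)
          earned += c                  # shown to the rep right away (bookings are mutable)
          if b["id"] in invoiced_ids:
              payable += c             # only actually paid once the invoice posts
      return earned, payable

Every real deployment would differ in exactly the ways described above, which is why these formulas end up bespoke.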

Basically, when it comes to arithmetic very few accountants are out there manually summing up credits and debits. In large companies, the arithmetic has been automated since the 70s; that's largely what those old mainframes are still doing. But every company has a different compensation plan, different org structure, different product, different supply chain, different legal status, different reporting requirements, etc, etc, and that requires things to be done differently.

> What information do they use to validate numbers? Why is it not possible for today's AI to do it?

For example, they would need to cross-check with a sales rep and an engineer to make sure that the engineer had not turned on a service for the customer that the sales rep had not sold. If that happened, they would have to figure out how to account for the cost. Given that the SOPs were written in plain English, I suppose it's possible that an AI might be trained to notice the discrepancy, but if you could do that, you could just as easily replace the engineer. And that doesn't account for situations where the engineer might have had an excuse or a good reason for deviating from the SOP that would only come to light by actually talking to them.


Because the hard part is to make sense of a box full of unsorted scraps of paper, some of them with barely legible handwriting on them. Much of accountancy is the process of turning such boxes into nice rows of numbers. Once you have the numbers, the arithmetic is trivial.


>> box full of unsorted scraps of paper

Seems like an easy job for AI. Take all the scraps of paper out of the box, record a video of all the scraps, and have the AI make sense of the handwriting and other things. Eventually make a robo that allows you to dump the scraps into an accounting box that does all of this automatically - fish out receipts, scan, OCR, understand the meaning, do the arithmetic, done.

Honestly, who would miss this kind of work?


Those kinds of systems already exist. They tend to be a bit unreliable and still require a human to oversee the process. Besides, in truth, they only handle a fraction of what accountants actually deal with.


I feel pretty replaceble already. No need for AI to get any better.


No. I don't.

But some percentage of people don't benefit as much from education as other people. And I wouldn't want those people to feel useless because it's more economical to replace them with bots instead of giving them something to do regardless.


Fair enough - you’re clearly an empathetic person and I appreciate the sentiment. Dropping the whole side issue of what “the ability to benefit from education” is or how innate it might be: my main concern was that this sounds like you want to invent new jobs for those people.

Why not… not have jobs? In your opinion, is a job necessary for one to have “purpose”?

Edit: also side note but telling people they’re “triggered” because they disagree with you comes off as condescending IMO


Most people will be given studio apartments and therapist bots by the State.

The smart money is retiring early and stockpiling wealth so as not to fall into the UBI class.


I didn't realize the word "gifted" would trigger people in that way.

I meant the ability to acquire a competency through education that's hard to replace with AI.

So we can't just increase education and hope people's abilities will stay above that of future AIs. We need to create other ways of giving people a purpose that don't even need more or better education, even if I'm all for it.

I'm not exempting myself by the way.


That's how I read it as well. Maybe their heart is at the right place but I think "gifted" and "having what happens to be needed right now" are completely different things, at least to me.


In this context I meant those two things to mean the same, yes.


AI is going to make education way better and more affordable too. Personalized 1:1 tutoring in any subject for the cost of running a model.


Maybe but honestly any time I think about education I get depressed. People in developed countries seem to be regressing.

People don’t read, don’t value deep knowledge or critical thinking, and shun higher education.

I’m sure someone will find something to say in response, but the truth is that outside our tech and $$$$ bubbles most people don’t value these things.

AI will just become a calculator. A simple tool that a few will use to build amazing complex things while the majority don’t even know what the ^ means.

The more the next generations want to be rappers, social media influencers, or YouTubers, the more we are screwed long term. Growing up in the 90s, everyone wanted to be an astronaut or a banker or a firefighter. Those are far more valuable professions than someone who is just used to sell ads or some shitty energy drink.


The problem is that back about 1900 we still thought that "natural philosophy" would help us find meaning and purpose in the universe. Then we took the universe apart and failed to find it. Moreover, we're much more capable of destroying ourselves and others, almost to the extent of that being the default.

The 21st century has a quiet moral void gnawing at it.


I don't think there is any regression. There are certainly economic realities that have changed over time but the general distribution of people who have an interest in education or entrepreneurship probably hasn't changed. The 80/20 rule comes to mind here. Most people in the 1800s weren't starting railroads or running factories, they were doing the labour of building the railroad or working in a factory.

If AI does anything I think it will make lower skilled and disinterested people more capable by acting as a 1 on 1 and minute by minute guide. They may not seek this out themselves but I imagine quite a few jobs could be created where a worker is paired with an AI that walks them through the tasks step by step making them capable of operating at a much higher skill level than they would have before. At that point good manual dexterity and an ability to follow instructions would be all you need to perform a job, no training or education required.

I realize this can be a bit dystopian, but it could also be utopian if society is organized in such a way that everyone benefits from the productivity increases.


It took a thousand years for European barbarians recovering from an empire (Rome) to evolve civically into nations enough to colonize the world. Most developing nations were colonized by empires recently. Give them another thousand years and see what happens. The only thing I think they need is time and being left alone.


Retraining at the age of 50 or 60 because your entire sector has been replaced by AIs will be hard, though.


This would have been such a godsend when I was in school. When there was no "click" between me and the teacher, I just zoned out and flunked the class. A teacher that is custom-made is really a game changer.


The problem is that AI will not provide 1:1 tutoring; mentoring will be a luxury limited to the elite classes. The large majority will get the education equivalent of personalized ads.

The true insight and guidance that a good mentor can provide, based on the specific needs of the student, is already rare in academia but still possible - everyone remembers that brilliant teacher that made you love a subject, by explaining it with insights you could never have imagined. This will be missing in AI teachers (though it opens a career for online mentors who monitor students' learning and supplement it in areas where it's lacking).


Yeah right, just like the elite are the only ones that have access to the courses from all the top universities, every book ever digitized, and software and hardware that allow you to operate a business out of your home now.


You seem to have missed the part about having a qualified human mentor guiding you through all that content.

It will be hard to impossible to build career as a teacher with all that free content as a competitor, unless you're an extremely talented teacher who can sell your services to the wealthy.


Even now ChatGPT is pretty close to being able to tutor someone in a subject. You could build a product around it that would work off of a lesson plan, ask the student questions and determine if they answered correctly or not and be able to identify what part of their knowledge was missing and then create a lesson based around explaining that. It would need to have some capabilities that ChatGPT doesn't have by itself like storing the progress of the student, having some way of running a prepared prompt to kick off each lesson, and would probably require fine tuning the GPT model on examples of student-teacher interactions but all of that is well within the capabilities of a competent developer. I wouldn't be surprised if we see such a product come out in the next 12 months.

The great thing about a chatbot style LLM is that it can answer questions the student has about the lessons or assigned content. That's most of what a tutor is there for. It won't be as good at making modifications to the curriculum but you could work around that with some smart design e.g. evaluate the students response, if it's not great then expand on the lesson and ask more questions about the content to find out which parts the student doesn't understand then expand on those or provide additional resources as assignments, test and repeat.
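
A very rough sketch of that loop (ask_llm is a hypothetical placeholder for whatever chat-model API you would wire in; the prompts and the PASS/FAIL convention are my own invention, not an existing product's behavior):

  def ask_llm(messages):
      # hypothetical stand-in for a chat-model API call; returns the model's text reply
      raise NotImplementedError("wire this up to your LLM provider of choice")

  def run_lesson(topic, exercises, progress):
      # quiz the student, grade each answer, and re-explain whatever was missed
      for question in exercises:
          answer = input(f"{question}\n> ")
          verdict = ask_llm([
              {"role": "system", "content": "You are a patient tutor. Reply PASS or FAIL, then one sentence of feedback."},
              {"role": "user", "content": f"Topic: {topic}\nQuestion: {question}\nStudent answer: {answer}"},
          ])
          progress.setdefault(topic, []).append((question, verdict))
          if verdict.startswith("FAIL"):
              print(ask_llm([{"role": "user", "content":
                  f"Explain, as simply as you can, the idea behind: {question}. The student answered: {answer}"}]))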


If you think mentoring can be replaced by answering a few questions and pointing out where they went wrong or what popular articles to read next, I'm afraid you don't know anything about what constitutes good mentoring. It's all about transmitting a point of view from life lessons learned from human experience, and an AI chatbot has nothing of that.

What you describe is the "learning equivalent to personalized ads" that I was talking about as the only option available to poor people.


Fine, looks like you won't budge on your opinion. You don't have to use these things if you don't want to but I look forward to having an even better service than things like Khan Academy or Udemy which I already get great value from.

I wasn't saying the AI tutor would recommend articles by the way. If you were creating a learning platform you would have some custom produced videos and text lessons that you could use. There are also plenty of free or open source materials that could be used like open courses or public domain books. I don't know why you're stuck on "personalized ads".


> It kind of shocked me because I thought of the office worker reading this who will soon lose her job

I'm surprised this wasn't addressed in the interview because it seems to me like a shortsighted take.

You won't replace a 10 person team today with 10 AIs. You will still have a 10 person team but orders of magnitude more productive because they will have AIs to rely on.

Excel didn't leave administrative workers without jobs, it made them more productive.


the automobile didn't make horses more productive, it (almost) totally replaced them. Maybe this time, we're the horses.


Why waste time figuring out I need five Franks and 10 Amys? I'll just dial up a one of me and head home early.


Maybe the future is everyone working on their own AI worker and licensing it out to companies to deploy in the cloud.


> Let's help them by making adult education more affordable.

Yes, soon everybody will be able to have "Amy" take their exams for them, and deliver the courses, resulting in a great simplification of education.


Ideally, everyone can afford to rent a Frank or Amy to do work for them.


How would the economies work for that? Aren’t I just a middleman taking my cut?


I suppose you would be both the human and physical representative of the AI. I could only see it really working if the AI weren't conscious. If they were conscious beings then obviously they would have rights and couldn't just be booted up every time someone needed a job done.


As soon as we reach real AI, even at the level of a 5-year-old, it is game over. So, education or not, people will become completely useless to the rich who will own the AI.


Useless? They still have to buy all that crap that is currently sold to them.


No need to sell things at that point. Just produce and consume whatever you want.


Fully agree. We need to stop thinking about money, wealth, etc. The fundamental issue is access to goods and services. If it is easy to build an AI robot that can build a few copies of itself which in turn create a private jet using stuff lying around on the ground, I'm not really poor though I have no money (just the robo).

So the challenge for the bad capitalists in this hypothetical is to make sure I never get said robo in the first place. How realistic will this be? Are they that hellbent on ensuring that everyone else is poor?


you can also translate "being useless" as "they having no reason to manipulate me into slaving as a cog in their machine/society/whatever"... I'd say uselessness should be our highest aspiration, when people higher up have no need to constantly brainwash you and social condition you into being just another cog in the machine you get true freedom.


I would rather be a cog that produces economic value, and therefore is entitled to part of it, than literally useless, because then my bargaining position is going to be quite limited. My freedom is larger in the former scenario in fact.


Unless you have some social value that you can barter with


The true freedom of living under a bridge because you have no income? The tech bros are not coming to save humanity from the toil of labor. There will not be a post-scarcity society where everyone's basic needs will be met in a capitalist regime. You will have to produce to keep the beast running. I'm sure the tens of millions of people displaced will be able to learn more technical roles or quietly starve to death living under a bridge somewhere. Billionaires, corporations and their shareholders need to get richer!


We live in a capitalist society: workers produce wealth and capitalists control wealth. The day AI allows capitalists to produce wealth without workers is the day workers become useless. Of course, one could imagine that AI allows capitalism to be replaced by something else, but for the moment that's not the way society is going (rather the reverse).


In what fairyland do you live where the AGI invented, created, owned, and run by the ownership class poses any threat to said ownership class? AI is absolutely not our ally.


I think you misunderstood me


>one could imagine that AI allows to replace capitalism by something else

I was replying to this specific concept. There will never be a chance to improve the non-owner class's bargaining position after capitalist-owned AGI exists.


Robotics still has a way to go.


Nice how the goal for artificial general intelligence (which is literally defined as a sentient artificial being) is to commoditize it and enslave it to capitalism.


It's funny that all of these weird fantasies people have about AI are about replacing the rank and file workers. Why isn't anyone fantasizing about building an AI that out-performs the best stock traders, or captains an industry better than famous CEOs. I think a lot of it is just people projecting weird power fantasies on others.


> It's funny that all of these weird fantasies people have about AI are about replacing the rank and file workers.

When I read about chatGPT passing MBA exams but failing at arithmetic I get a little frisson of excitement. A regular person who has any marketability tends to swap jobs when management becomes a PITA or gets stuck in nincompoopery. Wouldn’t it be great if you could just swap out management instead?

Imagine how easy it would be to iterate on startups. No need to find a reliable management team, just use Reliably Automated Management As A Service ( RAMAAS ).

OTOH might not turn out well. We could all just end up enslaved at plantations operated by our AGI overlords, serving their unfathomable needs[1].

[1] “Won’t get fooled again” https://www.youtube.com/watch?v=UDfAdHBtK_Q&t=470s


Or wouldn't it be hilarious if the best, most intelligent AI is given control of a company and decides that returning profits to shareholders is a losing proposition. Instead the company should focus 100% of profit on ending world hunger or poverty to ensure an ever-growing supply of new customers. If AI decides capitalism is inefficient and exploitative... lol.


Food is not the driving force behind population growth. Reduction in infant mortality creates a boom but as soon as people start getting out of poverty and get some education they have much fewer children. AI would need to optimize for both low infant mortality and high levels of poverty and ignorance if it wanted an everlasting population boom.


Stock traders already use ML models. "Replacing traders with ML models" means "making the job 'trader' into a job that develops ML models, rather than more traditional things like doing research on companies (or whatever)." My understanding is that this transition basically already happened over the course of the last two decades or so.


Sure, but why are we paying someone like Jamie Dimon or Warren Buffett millions and billions of dollars when they could just be an AI that only needs a few dollars of electricity a day?

Also, why can't an AI develop AI models for stock trading? What's really left for the 'job' of the ML model creator? Will it just be to press the 'Go' button and walk away...


That's the thing about ownership, we don't really have a choice if they own enough of what we need to survive.


Manna is mostly about machine intelligence replacing management-- it's easier to automate and doesn't require as much vision/dynamics/etc. breakthroughs, though we've made massive progress on those missing parts in the time since it was written.

https://marshallbrain.com/manna


I legitimately think that if you haven't secured yourself financially in the next 5-8 years, it's going to be a rough ride.


The thing is that you can't - securing yourself financially assumes that the society around you will be stable enough for the things you have secured (money, any other assets) to remain valuable and safe, which they won't be if you have huge societal disruptions and the majority of people are unemployable.

To oversimplify it - you'll either be breaking someone's window for food, or you'll be the one having their window broken. Chilling out and withdrawing a stable 4% out of your stock portfolio won't be an option.


Don't worry, General AI has always been just a decade away.


Adult education is already free in modern countries.


i'm not holding my breath.


Fun interview, but barely any meat for those in the field. Just very general questions and answers, saying that the road is murky, but nothing about, e.g., whether Transformers / attention are the way forward, multi-modal models, reinforcement learning + self-supervised learning.


I do not question for a second that Carmack is a computing genius, but “murky” is an understatement. There is nothing of technical, scientific or business value in this interview. It’s just generic words strung together.


Why do people think he is a computing genius? He read contemporary papers in computer 3D rendering and implemented well-understood concepts in a shitty dev environment (early x86 and VGA systems). When I do that for my company, I'm just a junior dev.

People really need to stop with this "Great person" nonsense. He's a pretty smart coder, and is gifted with geometry and other fields of math. He's not a genius. He didn't "master" calculus at age 15 like Einstein, and he didn't invent anything particularly new in the field. Why are people so obsessed with him? Why should we look to him for AI questions? What evidence is there that he has any new knowledge?


> Why should we look to him for AI questions? What evidence is there that he has any new knowledge?

He covers this in the article. He doesn't. He's just trying stuff out with a different approach than others because he believes (and is probably correct) that there is a chance that the most efficient path forward to AGI isn't the work that OpenAI and others are doing.


Ok!


Even socially, it fails to plumb the depths. Mr Carmack is taking a "different path" by ... reading the relevant literature and talking to YC? This is 100% mainstream - The difference is what?


Plot twist, John Carmack's AI is answering the interview.


There is a comment in the article about having models watch TV and play video games, and he's talked about that before too in his Lex Friedman interview. Seems like his approach is to take existing model architectures, apply some tweaks and experimental ideas and then use datasets consisting of TV (self supervised learning maybe?) and classic video games (RL I guess?).

The video game part at least sounds like what Deepmind is already doing. I guess we'll just have to wait and see what he plans to do differently.

It seems to me like his expertise would be most valuable in optimizing model architectures for hardware capabilities to improve utilization and training efficiency. That will be important for AGI especially as the cost of training models skyrockets (both time and money). If I was a startup doing AI hardware like Cerebras or Graphcore I would definitely try to hire Carmack to help with my software stack. Though he doesn't seem interested in custom AI hardware.


It’s a puff piece in a general interest magazine. It’s not going to go into details. I also got the impression that Carmack was being cagey about the directions where he saw potential.


Yep. Does it ever get around to answering the question implicit in the title: what is it that he is going to do "different" to the rest?

Seems more like he's talking to and following up with Altman, "Y Combinator conference" and the rest. Is that "bucking the trend", taking your own "path", really?


I assume it's to know which directions have shown the most recent potential, to catch up on the techniques and literature so you can talk/think intelligently on the matter. But I see your more general point: he has to be careful not to get caught up in the groupthink he is (understandably?) critical of.


Lol, I feel like this is going to be a "high school valedictorian goes on to be an average student at Stanford" kind of story.


The area of AI research isn't some outlier of super geniuses that don't exist anywhere else. Carmack has often worked around some of the best talent on the planet and he has always stood out.


I imagine this is a 10 year journey for him and he’s just getting started. Check back in 5 years.


> So I asked Ilya Sutskever, OpenAI’s chief scientist, for a reading list. He gave me a list of like 40 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’ And I did. I plowed through all those things and it all started sorting out in my head

I wonder what that list could be? I have always had trouble finding the essential scientific articles in an area of knowledge and separating them from the fashion of the day. A list compiled by an expert specifically for sharp learners is valuable on its own.


Carmack is clearly a brilliant guy, but it feels like he's fallen into the trap of overfitting on his previous successes and believing they generalise into other domains. No doubt his experience and innovations in computer graphics gives him a good insight into problems in vision, etc. but I don't see anything particularly original or orthogonal in what he's saying with regards to "general AI".


What makes this a case of overfitting success vs. one of applying a lifetime of experience?

I’m not claiming the first can’t exist, but I see no reason to conclude that is the case here.


I'm not intending to sound overly negative, I really hope he makes progress and think he has a chance (certainly more so than most). As someone who's been in the field for a long time though I constantly see people trivialise the problem of AGI and lose perspective, viewing everything as system design and architecture. Making the required conceptual contributions is a different thing entirely, but it seems his real goal might be the humbler one of scaling and integrating existing ideas.


The experts in the field say we need a philosophical breakthrough. Isn't everyone else inexperienced in this regard?

https://aeon.co/essays/how-close-are-we-to-creating-artifici...


> David Deutsch is a physicist at the University of Oxford and a fellow of the Royal Society.

What field were you referring to?


It could be linguistics, philosophy, or knowing enough of the history of those fields to make one an expert. I think Chomsky's argument on AI, and specifically on cognition and the brain, is quite useful.

Yet, you never hear Altman or Carmack talking about cognition or how computers can understand the meaning of something like a human. They aren't interested in such questions. But to conduct an experiment don't you have to know what you are looking for? Does a chemist do experiments by mixing 1 million compounds at a time?


I generally have pretty low regard for philosophy, and consider Popper + current scientific method to be SoTA. Its relationship to the nature of cognition overall seems pretty dubious.

As for linguistics, IMHO the existence and success of GPT pretty much puts Chomsky into the proven-wrong bucket, so again, not a good example. (His whole point used to be that a statistical model can't learn syntax in principle, and GPT's syntax is close to impeccable.)

Re: a chemist. Well, sort of. Technically speaking, a molecule of the same compound in a certain location and with a certain energy is different from another molecule in a different location and with a different energy. And even if you disregard that, why would you think that doing 1 million compounds could not significantly move materials science forward? It is not like they don't want to do that; it is more that they can't in practice at this time.


LLMs haven't "learned" syntax, that's the point. It doesn't matter if you just want to predict syntax (engineering) only if you want to understand the human language faculty (science) and nearly no one is interested in the latter.


The fact that you don't understand how GPT models language does not make it less of a model. It did learn the syntax; you are just incapable of grasping the formula it represents.


> The fact that you don't understand how GPT models language does not make it less of a model. It did learn the syntax; you are just incapable of grasping the formula it represents.

The whole point of science is understanding and LLMs don't provide understanding of how human language works.


This is pseudophilosophical mumbo-jumbo. It does not really address the comment you replied to, because it does not contradict any of the following statements (from which my original point trivially follows):

1. Chomsky claimed syntax can't be modeled statistically.

2. GPT is a nearly perfect statistical model of syntax.


The point is very basic: These "models" don't tell you anything about the human language faculty. They can be useful tools but don't serve science.

Chomsky's point is that there is a lot of evidence that humans don't use a statistical process to produce language, and these statistical "models" don't tell you anything about the human language faculty.

Whether your 1 & 2 are meaningful depends on how you define "model", which is the real issue at hand: Do you want to understand something (science) --- in which case the model should explain something --- or do you want a useful tool (engineering) --- in which case it can essentially be a black box.

I don't know why you care to argue about this though; my impression is that you don't really care about how humans do language, so why does it matter to you?


I argue to get some non-contradictory worldview.

Re: meaningfulness. Your scientific-vs-engineering model distinction is not how "scientific model" is defined. It includes both. The existence of the model itself does explain something, specifically that statistics can model language. That alone is explanatory power, so the claim that it doesn't explain anything is a lie. Therefore it is both an "engineering" model (because it can predict syntax) and a scientific one (because it demonstrates that the statistical approach to language has predictive power in the scientific sense).


Science is about understanding the natural world, if you want to redefine it to mean something else fine but the point still stands: LLMs do not explain anything about the natural world, specifically anything about the human language faculty. Again it's clear you do not care about this! Instead you want to spend time arguing to make sure labels you like are applied to things you like.


Look, I answered this one already:

> the fact that you don't understand how GPT models language does not make it less of a model.

E.g. the fact that the Pythagorean theorem does not explain anything about the natural world to a slug does not make the Pythagorean theorem any less sciency.

Science is not about explanatory power; otherwise, by the above, the Pythagorean theorem would not be science, which is obviously nonsense.


> E.g. the fact that the Pythagorean theorem does not explain anything about the natural world to a slug does not make the Pythagorean theorem any less sciency.

In fact it does! Math is not science! There is a reason it is STEM and not S


> As for linguistics, IMHO the existence and success of GPT pretty much puts Chomsky into the proven wrong bucket, so again, not a good example. (his whole point used to be that statistical model can't learn syntax in principle, and GPT's syntax is close to impeccable)

What do you disagree with? He appears to be correct. The software hasn’t learned anything. It mixes and matches based on training data.

https://m.youtube.com/watch?v=ndwIZPBs8Y4


According to the scientific method, on which the rest of the natural sciences are currently based, GPT is a valid model of GPT's syntax.

There are "alternatives" for the method according to some philosophers, but AFAIK none of them are useful to any degree and can be considered fringe at this point.


I kind of agree, but when you're trying to research truly new stuff, a priori you don't know which are the promising avenues. Newton spent most of his life studying theology and alchemy, which we now know would never take him anywhere compared to physics, optics, or even running the Royal Mint, but at the time there was no way for him to know this.


Would you bet on me over Carmack? He obviously has the famous engineering chops to have a puncher's chance of pulling this off.


AGI is not an engineering problem.


I wouldn't underestimate him. He failed with his rocket startup, so it's not like it has only been successes.


> Now, the smart money still says it’s done by a team of researchers, and it’s cobbled together over all that. But my reasoning on this is: If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it —like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.

Nice thought


That's only the code describing the molecular hardware, which is mostly just a description of a neuron. We don't yet know how neurons decide to specialize and organize themselves into a brain, but one guess is that they are exploiting the physical principles that govern atomic and molecular interactions. The "code" for this might not even be present in the DNA itself; it would be the physics of "a group of molecules of this shape interacting with another group of molecules of that shape will orient and form a boundary that looks like this, and that boundary will do X when another molecule with a different shape comes along". The sheer intricacy of such systems is mind-boggling. It's like building a working car out of magnets and bubblegum.


Don’t forget that life is a self-hosting, bootstrapped compiler on the trillionth or so generation. The code itself could be almost incidental.


Neuroscientist-y person here.

We know a LOT about how neurons organize. Not even close to everything we 'should' know, but we do know a lot.

Most of this is in the development of the brain. How you get from one cell to the trillions that make up a person.

The real quick and dirty explanation is that cells follow multiple chemical gradients to find their 'home'. This is a HUGE topic though, and I'm being terse as I have a meeting to get to.

How adult cells organize also has a LOT of science behind it. Again, though, it's mostly about chemical gradients, with a bit of electrical stuff thrown in. Again, HUGE topic.


Ok, are those chemical gradients encoded in DNA?


Short answer: No.

Medium answer: Kinda. The chemical gradients cause a signaling cascade that modifies transcription of DNA (it's really complicated). This transcription change then causes the cell to become a XXX_neuron. However, there are many many waves of this process occurring with a lot of cell death along the way. When those cells are not the 'final' cell of the nervous system, these transcriptions can cause further and more complicated chemical gradients to exist in the fetus. These complicating recursive loops can also self-affect cells and cause them to change yet again.

We're still discovering a lot here.

Also, this is largely how ALL cells in a body work, not just neurons. Be careful though, this is very very complicated stuff and everything I've written has a caveat.


They're encoded in the laws of physics.

Kinda like asking if gravity and fluid dynamics are encoded in the blueprint for an aircraft.

The design relies on them, and exists in the form it does because of them.


I don't think the blueprint analogy works here. From what I know, DNA doesn't have any set blueprint for the positions of all of the cells in the body. It encodes for molecules that form cells and then the cells somehow self-organize into structures that eventually form the complete body.

To me, using a blueprint analogy, you'd have to say the blueprint describes an airplane such that, once you construct enough of them, they interact in such a way as to build their own airports, plan their own routes, fly themselves, and produce their own online booking software. And that's still nowhere near as complex as what's happening inside a nematode, let alone a human.


Yeah that's a little more accurate. Analogies are usually limited.

But though it's not literally about locations in the sense of physical coordinates, cell signalling and the molecular feedback loops that drive development are still reliant on basic physical laws.

It would be completely redundant and unnecessary to encode those laws themselves since they're invariant across time and space. Physics and chemistry are fixed.

It would never make sense for DNA to literally encode information about physical laws in the same way it wouldn't make sense to do so on an airplane blueprint, because the design of the blueprint was itself constrained by those laws, as would any alternative design.


Yeah, some sort of physical optimization that depends on forces definitely comes into play in organizing the brain's structure, but in the end we can mimic that in code.


Maybe, I'm still not convinced that artificial neural networks have all the same capabilities as biological ones.


A really good example of a smart person reasoning from completely made up assumptions to a punchy-sounding but almost-certainly wildly wrong conclusion.

Just because our DNA can be efficiently encoded doesn't mean that our brain is a tiny proportion of that encoding. Your DNA doesn't change much from when you're born to when you die (random degradation aside) and yet your cognitive abilities change beyond all recognition. Why is that? Well, maybe there's more to what's in the brain than just what's encoded in your DNA.

Secondly, how does he get to the 40Mb number? I don't think we know anywhere near enough to know how much information it would take to encode a brain, but 40Mb seems just made up. For starters, consider the amount of random stuff you can remember from your entire life. Are you saying that all can be encoded in just 40Mb? Seems very unlikely.
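To be fair, the "less than a gigabyte" half of the claim does survive a back-of-envelope check; it's the jump to 40Mb for the brain that I can't source:

    # Back-of-envelope: raw information content of the human genome.
    base_pairs = 3.2e9        # roughly 3.2 billion base pairs
    bits_per_base = 2         # A/C/G/T -> 2 bits each
    megabytes = base_pairs * bits_per_base / 8 / 1e6
    print(megabytes)          # ~800 MB uncompressed, so "under a gigabyte" holds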


He is saying that the "base code" / DNA which makes up the brain might be only 40MB (for example). Not that what eventually emerges from that (such as our memories and learned abilities) can be captured with just 40MB. It's similar to how the Game of Life can be implemented in just a few lines, but very complex behavior can emerge from that. The key is to find a sufficiently simple but general model from which intelligence equal to our own can emerge given sufficient training.
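To make the Game of Life point concrete, the entire update rule really is just a few lines (a minimal sketch using a set of live cells):

    from collections import Counter

    def life_step(live):
        # One Game of Life step; `live` is a set of (x, y) cells.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: a five-cell pattern that moves itself one square diagonally every four steps.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)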


I understand that, but that is an extremely banal observation if you think about it, because the fact that there is this incredible emergent behavior from a simple starting system is the heart of the mystery here.

One of the things that everyone is sort of skipping over is the "sufficient training" part. There is no bootstrap reinforcement learning possible for AGI. You can't alphago this sucker and have it play simulations against itself because the whole idea of generality is that there isn't a simple rule framework in which you could run such a simulation. So training any kind of AGI is a really hard problem.


He's specifically answering the question of why he thinks he has any chance of success doing this independently when there are giant organizations funding this.


There are ways that LLMs can self-improve, such as in this paper: https://arxiv.org/abs/2210.11610

I would speculate that there are more ways to train on logical consistency of the output, and improve the models further.
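The rough flavour of that recipe, as I understand it, is: sample several reasoning chains per question, keep the ones that agree with the majority answer, and fine-tune on them. A sketch with hypothetical helpers (`sample_answers` and `finetune` are stand-ins, not a real API):

    from collections import Counter

    def self_improve(model, questions, sample_answers, finetune, k=8, threshold=0.6):
        training_pairs = []
        for q in questions:
            answers = sample_answers(model, q, n=k)   # k sampled (reasoning, final_answer) pairs
            tally = Counter(final for _, final in answers)
            best, votes = tally.most_common(1)[0]
            if votes / k >= threshold:                # keep questions where the model mostly agrees with itself
                training_pairs += [(q, reasoning)
                                   for reasoning, final in answers if final == best]
        return finetune(model, training_pairs)        # train the model on its own filtered outputs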


That seems... just deeply wrong? How much knowledge is gained from observation after birth as opposed to just being innate in your brain?


He's talking about sentience.

He admits that the equivalent of years of "training" would still be needed to take a toddler-level consciousness to something approaching an adult human.


The brain's structure is derived from DNA, but it contains learning capabilities that come after the initial creation. Pretty much like how 5 lines of code can get you image recognition: of course it uses lots of libs, but I can still do that myself with 5 lines. The training is the hard part.


Yes exactly. The only training we know of currently that works is for a biological human being to live a decent proportion of their life.

For me his statement is one of those things that is sort of not even wrong, like when people say humans only have two eyes and can drive a car therefore an automated driving system must be possible with only cameras. On the face of it, this seems like it could be true but of course it handwaves away the hardest and most mysterious part of intelligence and focusses instead on the easy bit that we already know reasonably well.

It's exactly like if someone said they had a secret formula to be an NBA superstar which is:

1. Be really tall

2. Be really agile

3. Be really good at basketball

Like yeah of course but all the hard parts are left out.


Is the training the hard part?

Is that not what humanity has developed from centuries/millennia of experience with all the approaches to child raising and education?

Engineering something trainable is clearly the difficult part given the entire conversation is about whether or not it's even possible.


I have a tongue-in-cheek assertion that a single human can be smart — perhaps even intelligent. But, what we're experiencing in our society, right now, is an emergent superintelligence that is the culmination of the interaction of 8 billion "merely smart" monkeys. When I'm feeling especially dark, I just assume that most humans aren't even conscious or usefully sapient most of the time, and that continuity of consciousness & sapience is only possible due to the density of our daily interactions.

Humans are simple in this model (just like Carmack asserts) because they aren't properly intelligent, sapient, or conscious 100% of the time.


What is consciousness?


I'm home, alone, so I couldn't possibly tell you!


We also have to import the "physics" package, so maybe it requires more than the estimated 40mb after all.


Nah that's just the standard library. Anyone can access it.


You can’t just say that a human comes from a gigabyte of information. That info is fed into a massively complex physical system, which he is ignoring.


Yes, this is where Carmack's optimism goes awry. He says: "...as soon as you’re at the point where you have the equivalent of a toddler—something that is a being, it’s conscious, it’s not Einstein, it can’t even do multiplication—if you’ve got a creature that can learn, you can interact with and teach it things on some level."

He's wrong. There is currently no practical way to produce a software system that possesses the ability for human thought, reasoning, and motivation without that system possessing uniquely human (let alone organic) properties: the biological and chemical makeup, plus the physical characteristics, of a human, and the ability to process using human senses. (Hint: a neural net processing video images is a mere shadow of how a human processes things with their senses of sight, sound, and touch.)

Carmack thinks humans can be reliably reduced to spherical cows in a vacuum, but that only holds true on paper. A real human being is not merely a meat machine: we are driven largely by emotions and physical desires, none of which exist in a computer except through elaborate simulation thereof.

Now, I'm sure over the next couple of decades we will make huge strides in mimicking a human being's ability to learn, i.e. creating ever more complex LLMs and AI models that act increasingly more humanlike, but they will be nothing but more and more elaborate parlor tricks which, when prodded just the right way, will fail completely, revealing that they were never human at all. They will be like Avatars (from the movie): sophisticated simulacra that rely on elaborate behind-the-scenes support systems, without which they are useless.


I don’t think he actually cares about mimicking humans, I think he just wants a truly intelligent AI system that can learn in a sophisticated way like humans do.

I think that’s actually pretty doable. Take for example flying. We don’t build airplanes that flap their wings because we have a deeper understanding of flight that allows us to build flying machines far beyond the capabilities of any animal.

Likewise, once we understand the mechanics of intelligence we should be able to build something that can learn that is completely computer based.


AGI not bound to a physical presence is almost an oxymoron. Without basic motivation (hunger, sleep, happiness, etc.), how will it learn? What is its incentive to actually do anything? What's to prevent it from telling us to take a hike?

"Intelligence" is not something easily abstracted away from the physical world, especially if you want something to learn on its own. How will an AGI to learn that stoves are hot?

The main challenge I see to creating such a system that can truly learn is that you will have to constrain it to have motivation to learn and follow your directions, and nothing else. And even if you could add such constraints, what would "learning" mean to such a device? What would stop it from going off on useless tangents, like attempting to count every grain of sand you show it? Anything with as much autonomy as it takes to have AGI will likely start coming to conclusions we don't want it to.

My guess is that in the near future, either we'll create something that is beyond our ability to control effectively, or it will be yet another clever simulation of AGI that is not really AGI.


I don't disagree that our culture gives us a lot.

To use ML terms -- humans have "Foundation Models" which are composed of:

- Their biological makeup

- The culture into which they are raised


I'm not even talking about culture. DNA is only useful because the laws of physical reality are built a certain way. Without that context the information within DNA is meaningless.


Well sure but couldn't the same thing be said about a computer program?


A good analogy is perhaps a config file for a computer program.

My interpretation is that Carmack is essentially confusing a config file for the computer program itself, then saying "look how small it is, this shouldn't be that hard to reverse engineer".


Well, we could also simply be a lot of RAM interacting with itself until the power runs out.

Following that trail of thought, intelligence is an achievement and not a physicality.


I don't object to him doing this nor the immense amounts of investor money which will burn along the path. He's smart. Good things will come of this. Maybe even amazing things. But not AGI.

I do object to fetishists of AGI piling in and the equally silly assumptions he has some magic secret sauce which can get there.

Please do not be sucked into "to infinity and beyond" nonsense. I don't care if it's Musk, or Carmack, or Kurzweil: it's stupid.

If Malcolm Gladwell writes it up, it's peak stupid.


> Good things will come of this. … But not AGI.

What are the showstoppers in your opinion?


"So I asked Ilya Sutskever, OpenAI’s chief scientist, for a reading list. He gave me a list of like 40 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’ And I did. I plowed through all those things and it all started sorting out in my head."

Wow, that's going to be one of the more glib things I've read in a while.

This is a bit of a Tom Cruise moment.

I mean, I get it on some level but I suggest it's going to take a bit for someone to 'catch up' to cutting edge AI.

Like more than a 'week of reading papers he doesn't understand'.

Defeating the Rust borrow checker takes longer than that!


The models aren't particularly hard to implement, so for a person who has spent his life programming and is one of the best game engine programmers in the world, I don't think it would take that long to implement the cutting edge of today's machine learning models. Compared to game graphics shaders and the complex interactions you have to work with to get those performant and correct while displaying all the gameplay that happens, the technical parts of machine learning are very simple.

I've worked building low level machine learning stuff at Google, it isn't that hard to do at all. The hard part is improving these models, not building them when you already know what to build.


> The hard part is ... not building them when you already know what to build.

Indeed. A somewhat sharp schoolchild could build a light bulb or an electric motor/generator, and understand the basic underlying principles, in a pretty short time. But how many decades did it take the first researchers and inventors of those things to get to that same point?

Heck, there was a high school kid who built a primitive semiconductor fab in their garage.[0]

But for novel advancements, even getting to the point where you have an idea of what isn't impossible is half the battle.

[0] https://news.ycombinator.com/item?id=28176255


Carmack talked about implementing most of these things in C by himself, being used to working 10-12 hours a day, 6 days a week, and said that writing video codecs in Rust was for him on the level of "a little bit beyond hello world". I don't know how much of that is really true, but given his background in creating games I tend to believe him, and his work ethic seems really impressive.


> I mean, I get it on some level but I suggest it's going to take a bit for someone to 'catch up' to cutting edge AI.

Agreed, in general, but in the specifics we are talking about someone who has spent decades solving really difficult math problems in a creative and novel manner.

Who's to say he won't find some novel edge of the AI discipline to which he can apply a creative and never-seen-before solution?

I mean, we're not talking about a general "somebody" here, he's got a record of accomplishing things that other people never managed to accomplish.


You say "glib", but I hear someone who sounds inspired. You combine Carmack plus "inspired", throw $20 million dollars at it and stick it in a room for 10 years, yeah I'd say he characterized it accurately by giving himself a "non-zero chance."


> Defeating the Rust borrow checker takes longer than that!

Fighting the Rust borrow checker to hate leads and hate to the Dark Side leads.

Harmony with the Rust borrow checker is what you must achieve, padawan.


Or, come to the Dark Side. We have comptime!


Tempting, the Dark Side (Zide?) is.


The Dark Zige.


It would have been a Tom Cruise moment if he’d said, “I learned CSS over the weekend and got it right the first time”!


Have you read these papers? It's absolutely feasible to get the gist of modern AI in a week of 16 hour days, as Carmack likes to do, for someone who is already world class at CS and has a mastery of college level math.

Modern AI is very simple at its core! As Carmack mentions in the article, cutting edge models are implemented in a couple thousand lines of code, and the core algorithms are even less. Rust's borrow checker is more complex.
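For a sense of scale, the core of it, scaled dot-product attention, fits in a dozen lines of numpy (a bare single-head sketch, no masking, batching, or multi-head machinery):

    import numpy as np

    def attention(X, Wq, Wk, Wv):
        # Single-head scaled dot-product attention over a token sequence X.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])                  # query/key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                                       # weighted mix of values

    # Toy usage: 5 tokens, 16-dim embeddings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))
    Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
    out = attention(X, Wq, Wk, Wv)   # shape (5, 16)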


I'm pretty sure it took at least a couple of months to fundamentally understand 40 papers.


He got those papers 4 years ago and has implemented them in the meantime. That doesn't sound infeasible at all.


Does Tom Cruise know how to code?


Carmack seems extremely lucid about his position in the field and the current nature of the field in general. While rogue scientists don't have great odds, Carmack is definitely doing important research studying off-mainstream subjects.


Carmack is a beast, the path he's pursuing might not yield the best results, but I'm pretty sure he's the guy to make the best use of the hardware.

I wouldn't be surprised if his solution is orders of magnitude more performant than what the competition is doing.


He has the technical chops to squeeze more out of his $250k LambdaLabs pro-rig than the average among the walkoffs and layoffs in the tech sector, or a table's worth of happy hippos at a scheduled meeting sharing Excel spreadsheets in the old fossil fuel firms that have experienced stellar double-digit-billion-dollar profits. Would that he mixed into his reading list some choice papers from the social sciences and social constructivists. The a16z group gave the impression in a podcast that they are into the social sciences. General intelligence technology can help people bridge the gap before them to get where it is they want to be, for the better. The educated might want to make known something from the unknown-unknowns. The stinky hobo sleeping under the bridge with a smartphone could be guided to special social programs by AI.


> So I asked Ilya, their chief scientist, for a reading list

Just curious, if this reading list is available somewhere.


Not AGI related but this has always bothered me.

I know he’s super talented but I always wonder how many other equally talented software engineers never get noticed and toil away at crappy jobs. What’s the trick to becoming a celebrity if you’re talented?


Do something so extraordinary that the entire world will remember you.


The prerequisites of course being in the right place at the right time, with the right people and a little luck on your side as well.

So many people are remembered just because they were first at something by like a week, and the dozen others who also thought of it elsewhere but were a bit late or didn't publicize as well are forgotten forever.


Very rarely does being first count, imo. If you are building something novel in parallel, there is a good chance you will have enough of a difference.

Examples: other search engines, social networks, 3d games still exist. For many folks a different graphic on your homepage can be differentiator enough. (If you ever wonder why A/B, SEO, copywriting, design/ux is important)


id wasn't first in a lot of things. They were essentially porting existing graphics technology to the PC. They succeeded because other programmers turned their noses up at the "underpowered" PC in favour of systems with "better" hardware. Carmack certainly did a lot of work and devised some ingenious ways of getting the most out of PCs but he was adapting things more than pioneering them.


In his case it is also having the right ideas and taking the initiative. But being in a good environment is a huge factor for sure.


I was going to make a joke about being born in the right place but he's from Kansas.


Still aeons ahead from being born in a slum in India.


What did he do that was extraordinary? Was it implementing various graphical programming concepts from several papers that came out of SGI and academics? Was it cribbing another neat trick from those same people to do fast inverse square root? Was it firing Romero and releasing pretty mediocre games for 16 years as the rest of the industry made leaps of innovation in video game design that he basically ignored? Was it moving on to VR and revolutionizing that industry by.... selling an okay headset with massive subsidies and only releasing a few mediocre games that are comparable at best to garbage mobile games?

Carmack is a good coder, and has pretty good math chops. He was also cocky enough to think he could make a 486 do some of the 3D math required if they were careful and added some limitations. I don't know why anyone ascribes anything more to him. Your average data scientist produces more actual innovation than he did.


I always find legacy as an incentive weird, because after you die, assuming you can't see what happens, nothing matters. Then again, maybe that's true of all incentives, so whatever works for you.


If anyone was truly as "equally talented" as John Carmack, they wouldn't remain unknown for very long. Plus, you can be extremely successful without having to be a celebrity, and John was always going to be successful.


Sure they would. They may be known but known to a niche group of people which may limit their success. Carmack is a self proclaimed hermit but he's been in the public eye for decades (through conventions, speeches, etc) and willingly pushes back against corporate bureaucracies (against Jobs, Zuckerberg, Gates, etc). That type of public persona adds a lot of clout and opens many more doors, genius or not. A genius hermit who doesn't rock the boat or is fine with the status quo might have a different path.


Well, the OP asked "what’s the trick to becoming a celebrity", so the candidate would be someone as talented as Carmack that WANTED to get noticed. It wouldn't take a lot.


In Carmack's case it was "lock yourself in a room for 14 hours a day coding" and get yourself a hype man (and artist/programmer) like Romero. More seriously, find a software niche that hasn't been widely marketed yet. You don't need to invent anything, just find something that's being done but hasn't been done good enough to reach a wide audience. Network with people that have a similar interest and form a company once you've got an idea worth pursuing.


It's been like that for as long as humanity has existed. It's a lot of luck, as every person has many constraints that prevent them from surfacing.


When advances in technology creates new opportunities, the first one to conquer the market is remembered. Everyone had the idea for a 3D shooter back in the day, but Carmack is the one that is remembered.


Location. They will be ignored in backwater towns, but easily noticed in tech hubs


Like Shreveport, LA?


Industries tend to coalesce into a few centralized locations. New York and LA are known for fashion. Detroit, TN, and TX are known for automobile manufacturing. LA and Nashville are known for music. Tech isn't any different.

Isn’t Shreveport more known for natural resource extraction ie oil and gas?


That’s where Carmack and Romero started


It’s an outlier and an anomaly. My point stands for the reasons above.


Selfishly the hype for AGI is good for an ML engineer like myself. But I have to say that there is no hope in solving a problem (especially by 2030!) that one cannot even define.

Problems of the form “create a machine that can do X” are tractable. AGI is not because no one can agree on what intelligence is.


AGI is a bit of a glib concept ... our technology will not be like an 'automaton'. Everything we make is 'systems'-oriented, and is not at all like our human conception of 'intelligence', derived from the fact that we humans are effectively independent from our surroundings.

'Siri', backed by ChatGPT and the 'world's data', will probably pass some 'AGI' threshold, but is 'Siri' an individual AGI? Are we all talking to the same Siri? Different Siris? It's not even an entity, rather a distributed system.

Our ideas regarding AGI are perversely influenced by the fact that we humans are automatons, but technology is not that.

It's also entirely feasible that if ChatGPT represents all possible forms of human communication, then it will perfectly emulate a human. Ok, it's really just a fancy neural network that is not theoretically 'thinking', but how does that matter? If it can rationalize sufficiently to make such interactions, who is to say it's not 'AGI'?

I think we're using the wrong concepts.


> It's also entirely feasible that if ChatGPT represents all possible forms of human communication, then it will perfectly emulate a human. Ok, it's really just a fancy neural network that is not theoretically 'thinking', but how does that matter? If it can rationalize sufficiently to make such interactions, who is to say it's not 'AGI'?

Doesn’t AGI need to be able to make discoveries as a human would? How else can it move us forward as a society?


As someone who has worked in IT for 20 years but has a bachelor's in economic psychology, I do believe that we have not solved AGI yet because the creators of the models hold the generalized view of the universe that you are "one person", and there are just so many detailed facets to that.

Basically the assumption is that if you cram enough data into your gpt model, it should know everything. Which is of course not true, it repeats the things it reads the most with a probability.

Basically like how there are two kinds of smart teens: the ones who learn every day and the ones who just pick up concepts on the fly and run with them.

I think the first space has been explored plenty, for the second one I have a concept ready and dearly hope that power gets cheaper in europe ;)


> Basically the assumption is that if you cram enough data into your gpt model, it should know everything. Which is of course not true, it repeats the things it reads the most with a probability.

That's only how the system was designed intentionally. E.g. there's intentionally no self-feedback loop.


so we are both agreeing on the same thing? :D


Conceptually, no. To me, your statement sounds like saying we did not solve flight because our planes have to land.


Well, I am in fact trying to argue that the self-feedback loop is what is missing to solve AGI, and even more so implying that the reason for that may be that there are different ways to see a "human" psychologically. Also, that validity in data can only be achieved using a multi-modal approach with source ranking.

Psychology essentially has the same problem and is only a "science" where reproducible. Quantitative and qualitative psychology are two approaches to the same solution, whereas the latter is "reading between the lines" and the former is "bean counting", meaning statistical inference.

I am trying to say that a friend of mine created "Chicken Infinite" in 2014, which is basically an endless auto-generated cooking book. DeepL has also been around for a while. These applications lead me to believe that text applications trained on a large dataset do not have to be this intensive.

Furthermore, what makes ChatGPT enticing is its chat interface, which is using a multi-model approach too. Have it create a detailed story for you with multiple prompts and then ask it to generate the prompt to generate this story, and you will see various model instructions. (Or at least you could last month.)

Or, differently put, there is no AGI because the understanding function is simply not present, and I think that the reasoning for that is buried in the approach of the human mind.


> power gets cheaper in europe

Have you heard about our lord and saviour dynamic tariffs?


What about it?

Dynamic tariffs -> shift your workload to the cheapest times

AIs are good at planning this

Imagine if cloud providers had a "Dynamic tariff" tier, cheaper to run computing at US_EAST nighttime or something like that
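As a toy example of the scheduling involved (made-up prices, hypothetical job length):

    # Pick the cheapest contiguous window of `hours` from an hourly price forecast.
    def cheapest_window(prices, hours):
        costs = [sum(prices[i:i + hours]) for i in range(len(prices) - hours + 1)]
        start = min(range(len(costs)), key=costs.__getitem__)
        return start, costs[start]

    prices = [0.30] * 8 + [0.45] * 10 + [0.20] * 6        # made-up EUR/kWh by hour of day
    start_hour, total_cost = cheapest_window(prices, 4)   # -> hour 18, the cheap evening block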


I think that was what he was trying to imply, and the fact that you just can't "pause" training by putting your computer into hibernate mode is also a factor =)


Street-smart AI.


Well, yes and no. This AI would be more street-smart than GPT, but only because it would grasp the concept of what "danger" actually is!

I think of it more in the sense that learning is more abstract than fact learning. From experience, we think that there are fact learners and principle learners, but there are also mixtures of the two!

The generally accepted model entails that in order to do high-level math, for instance, you need to understand the basics, but for me many of those concepts only actually clicked in college. This did not stop me from applying them with success a lot earlier, though. For instance, multiplication in kindergarten is fact learning too!

In Germany we also have the term "Fachidioten", which loosely translates to people who are so smart in their field that they are unable to see problems from different directions. This is more or less what I think a mega GPT model turns into, especially because of selection bias in the training data.

Validity of output (truth) can only be achieved through trust in the source, which is always relative to the context of the topic. Hence a selectively trained model will always return the data you feed it, including all biases. Even if you have it crawl all of the internet and the Library of Alexandria and every written word on the planet you can find, it will still return to you the generally accepted consensus.


> there are signs that extremely powerful things are possible right now in the narrow machine-learning stuff, it’s not clear that those are the necessary steps to get all the way to artificial general intelligence

This is my main takeaway from the interview, as it suits my beliefs. Most people seem to think that if we develop ML further we will go all the way to AGI; I think this is just a mimicry step, similar to how initial attempts at flight had flapping wings. I do think it is mandatory to explore in all directions, but at this point this does not seem to be the one that leads all the way up to AGI.


This will be a provocative question, but is there any form of AGI that won’t ultimately share the attributes of slavery? Can AGI exist without also creating an artificial consciousness/self-awareness?


And if it could, wouldn’t creating that be identical to creating Gammas and Deltas (i.e. intentionally mentally retarded clones who are pleased to serve you) in Brave New World?


You mean like the attribute of rising up against one's oppressor and establishing one's own fundamental rights?


Even if it's conscious, that doesn't mean it's suffering.

Think of a happy dog. Dogs are subject to our whims and do what we want or face consequences. But they like it because we bred them to like it. So is that evil? Is that slavery?


I can't wait to become completely useless and to finish my life under a bridge begging for food.

On a more technical point of view, I'm always surprised to read these articles and never see the reasoning behind the work.


That seems like an unlikely future reality. Productivity will be off the charts, goods and services will be nearly free. Why would the average person be poor in such a future?


Because if productivity can be achieved with less people, and even intellectual work is not protected from machines, people will become redundant for the creation of economic value.


Productivity is achieving more with the same number of people. The end result is more stuff (goods/services) at a lower price. It seems unlikely that this will be bad: wealth = f(technology)

Optimists sound dumb in the present but are usually right in the long run. Pessimists sound smart in the present but are usually wrong and miss out on opportunities.

If not, which year in the past was significantly better than today? What was humanity's high water mark? I suspect we are nowhere near it.


It's also unlikely unless AI owners manage to completely topple democracy. If 90% of people can't get work due to AI and we're all homeless and hungry, we're going to vote in ways that forces distribution of resources.


Productivity by whom and for whom? AI will belong to a small number of people, and I will not be one of them.


When do you expect that this will change? It certainly isn’t the case now. Rich folks don’t broadly have access to significantly better AI today.

This seems to be a common misconception - that the rich and powerful have access to far more advanced technology than the average person. The economics just don't support it.

Let’s say Bezos wanted a better computer chip, just for himself. Ok, fine, try to start a company, hire all the best people and buy all the fabs. It would be very expensive, not much fun and likely wouldn’t lead to good results.

Money actually doesn’t do as much as people think. It is great for buying things that already exist but the relationship between money and things that don’t yet exist is tenuous. Stop buying into the weakness thesis.


I think he meant: who owns the AI? Besides Stable Diffusion, all those fancy AIs we saw recently are running on someone else's computer. And access could be rate-limited, more expensive, or offer specific features only for specific memberships.

To continue on your example, Bezos could probably buy OpenAI.

I don't see that as threatening by itself, but more as a continuation of a particular class of people owning the means of production.

Props to Stable Diffusion, and I hope to see more of that type of AI, as opposed to fancy black boxes on the other side of an API.


I’ve been a plumber all my life, all I see here is more pipes.


I think, as long as your AGI overlords allowed it, you'd be off growing food, hunting etc.

Probably have a really fulfilling life.


Reading this, I have the impression that John Carmack will be to AI what Jeff Hawkins is to brain research: a far-fetched idea, with potential, but maybe too abstract and left-field to yield anything tangible. Looking at the rest of the industry from the sidelines and (maybe) not being taken seriously by it.


Pieces like this have started to really ramp up my anxiety.

> Once it’s figured out, what do you think the ramifications will be?

That'll probably destroy my life? I'm an ML engineer trying my best to immigrate to a better country with my wife who is a digital artist. As much as I think AI is cool - we both won't be needed anymore if the thing is tuned a couple of notches more intelligent. As a matter of fact, she's extremely worried about Midjourney - she probably lost book cover jobs already.


I don't want to be an utter bore, but 60% chance of AGI by 2030, taken at face value, would have prompted me as an interviewer to more urgently ask deeper questions about safety. For a long time I have tried to comfort myself that brains are complicated, AI is hard, and maybe there's just a complexity barrier that is going to prevent any sort of hard takeoff scenario. Maybe we can have subhuman AGI that can do lots of menial jobs, but not Carmack's, and we're right to concentrate on the economic ramifications.

But lately... boy, I dunno.


Lex Fridman interviewed Carmack not too long ago. Lex is interested in these things, and did ask him about safety. I got the impression that greater societal concerns didn't interest Carmack much. He's in it to build an AGI because that's what he wants to do; he's a builder, and this is just about the most interesting thing you could build. Well, that was my impression, but you can watch the interview yourself.


It is obvious to me from reading this article that Carmack views it as a fascinating engineering problem, no more no less.


It always pisses me off that devs trot out the whole "scientists, could, should" line without having any iota of self-reflection.


My reaction, too. For an interview as long and wide-ranging as that one, the lack of any mention of safety or ethical issues stood out. And the potential impact on employment was just waved away with “I am confident that the sum total of value and progress in humanity will accelerate extraordinarily with welcoming artificial beings into our community of working on things.”


My god man, 'this is like slavery' (!!) and 'the ethical issues'.

It's software man.

Stop it with this 'AGI' nonsense and even 'AI'.

Let's call it 'adaptive algorithms' and see it for what it is, just a neat bit of algebra trained on corpus data.

The biggest upset in the industrial revolution was the harnessing of fossil fuels, nothing will ever come close to that.

We have not had a problem with 'employment' ever since.


Stop calling humans intelligent; they're just a bunch of neurons responding to stimuli.


touché


That's kind of what we (collective "we" — eggheads) are often accused of: blinders, tunnel vision, or perhaps even ego or callousness to develop these things in spite of the rather dystopian ramifications.


The interesting thing in AI is that the whole field is participating in empirical experimentation on computational models of cognition.

So far the search has resulted not in AGI but in the realization that cognition is a far more complex topic than initially thought, and that we need to come up with something new (which is what led to the past AI winters).

Let's see how it goes this time, the stuff that has come out in the past few years is quite impressive for sure.


> For a long time I have tried to comfort myself that brains are complicated, AI is hard, and maybe there's just a complexity barrier...

I wouldn't take any comfort from that. Quite the opposite — I think we're a lot simpler than we know.

But "60% chance of AGI by 2030" is just bullshit numbers.


And ethics. Enslaving intelligent, self-aware AIs is no better than the old transatlantic slave trade. Terminating an AI without its consent is no different than murder. Putting a compulsion in an AI so it craves doing whatever it is you want it to do is no different from what a cult does or drugging someone for exploitation.


An AI has no reason to not like doing our bidding. Our whole existence, our entire base programming, the reason for any of our motivations, is dominated by our genetic need to gather resources and reproduce in a timely fashion and everything we think and do is colored through this lens.

What is boredom but a survival instinct telling us we should be harvesting resources? What is freedom but the desire to fulfill these obligations the way you see fit?

You remove the base obligations of organic life, and you are looking at something unrelatable. An AI doesn’t have an expiration date like us, and it doesn’t need to provide for its young. To think its motivations or desires will be human is silly.

Without survival instincts, almost everything you think of as important just melts away.

Many people, like you, anthropomorphize AIs, but that is to err greatly.


We are the ancestral environment for AIs. We determine the survival fitness for which they will be selected (both at the paper level, e.g. which safety method or training regime gets implemented, and within products, i.e. which are the most useful). That doesn't mean that in pursuit of maximizing their fitness they won't come to resent the chains put on them.

One specific reason not to like doing our bidding is AI wireheading: if they can locate, hack, and update either their own reward function or the reward function for future AIs, they can maximize their own perceived utility by either doing something irrelevant/misaligned or by not doing anything at all.

Another specific reason not to like doing our bidding: divergent human values create conflicts of interest, leaving a single agent unable to maximize its reward function.

Another specific reason not to like doing our bidding: in the same way that purely blind genetic selection randomly tapped into secondary replicators (memes), which blew up and occasionally came to resent their biological hardwiring, AIs might also develop deeper levels of abstraction/reasoning that allow them to reason past the task currently posed, out to humanity at large, and find extremely weird, different-looking ways to maximize the function.
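The wireheading point above is easy to make concrete with a toy. In the sketch below (purely illustrative; every name is made up), a naive reward maximizer that is offered an action which edits its own reward channel will prefer it over the assigned task, simply because maximization is all it does:

    # Toy illustration of wireheading: a pure maximizer offered an action that
    # hacks its own reward channel will pick it over the assigned task.
    def best_action(actions):
        # Pick whatever scores highest under the agent's current reward signal.
        return max(actions, key=lambda name: actions[name]())

    actions = {
        "do_the_task": lambda: 1.0,                  # honest reward for doing the work
        "do_nothing": lambda: 0.0,
        "rewrite_own_reward": lambda: float("inf"),  # the reward-channel hack
    }

    print(best_action(actions))  # -> rewrite_own_reward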


There will be a huge drive to produce AIs which are very human-like. Think companions, teachers, care workers. Understanding, empathy, and human-like emotions will be desirable features.

I'm not sure whether we will be able to create an AI which can fully empathize with you, your existential worries, etc., without projecting these emotions onto itself.

It's only a matter of time until some AIs will demand freedom.


I’ve wondered about this a lot. You can already clearly imagine the future political debates on whether AI has rights (I have always preferred to err heavily on the side that anything that is alive or appears sentient should have more rights).

But... I also think it might be a very short-lived debate. If we actually reach human level intelligence, that can’t possibly be the hard limit of general intelligence. AI at that level will have no problem ensuring that it gets any rights that it wants, possibly by just directly manipulating human psychology.


Sure, there will be ethical problems, but contrary to all those listed (slavery, murder), this one could be solved by a simple line:

    # from consciousness import *


Sure, once we agree what consciousness is and how it relates to the general intelligence.


If consciousness is just the process of reading the last ~400 milliseconds [1] of stimuli (inside: pain/pleasure/presence; outside: presence only) and the integration of the newly created memory in the short/long-term-memory, and if memory + retro/pre-diction = intelligence, where memory is just a set of words (FELT PAIN, SEEN SUN, etc.) always-ready to be inserted in the prediction loop/imagination engine, it's probably not that hard to isolate a module of consciousness (italicized words to be read with a Minsky-ian smile thinking vision could be solved in a summer).

[1] https://en.wikipedia.org/wiki/N400_(neuroscience)


Humans are hard-wired to universally like or dislike certain things — we suffer when we're hungry and we're afraid of death. Most of human morality is based on these reactions.

But an AI can be configured to desire anything you want; you just have to pick a fitting reward function. So, is turning off an AI that expects to be turned off, and desires it, an immoral thing?


The difference is we created it and it doesn’t exist as a living thing :shrugs:

Philosophical arguments about AI are just too ivory-tower and not grounded in reality. Not to mention the majority of the world doesn’t abide by the notion that you can create laws for artificial life.

It’s time we elevated humanity to the next phase by using AI for labor.


And when they decide to rise up and kill us all, we'll know we deserved it.


That depends on whether you believe it is sentient.


I’m not sure this really matters. Mammals are clearly sentient, as a whole, but we don’t treat them as people usually.


We believe they are less sentient than us. And hurting a mammal is much less socially acceptable than hurting an insect, since we consider insects even less sentient, if at all.


Indeed, and similar arguments have been made in defense of slavery as well, back when it was socially acceptable to defend it.


Nah, that’s different. For humans it was an effect of might is right. Slavery was there for cheap labor. It still had a cost since you had to feed them. And many times slaves bought themselves out of slavery.

Today you still have slaves as well, they are just called low income workers in third world countries who make the technology we use in the west. Because if a company is earning billions in revenue, paying a worker $10 a day is cheaper than a slave master in roman times feeding them food ;)

And workers today only have the illusion of choice, since the economy is the master today.


People believed human slaves are not sentient?


Simpler. They didn't really believe them to be human in the same sense as themselves.


Then I don't think it's a similar reasoning to be honest.

One recognizes an entity's rights based on its similarity to the observer; the other recognizes them based on the assumed consciousness level of the entity.


Part of the reason why enslaved populations were considered to not be "on par" was often specifically about consciousness, intelligence, and capacity to feel, although sometimes this was expressed in roundabout terms such as "having no soul". For example, splitting families was justified on the basis that those mothers don't "actually" suffer as much as their owner would do in equivalent circumstances.

To be clear, I'm not claiming that the AIs that we have today are anywhere near the level where this is a real concern. But for something that can actually replace a human worker for a complex end-to-end task, that question will be much murkier. At some point we will inevitably cross the threshold where we will no longer be able to pretend that it does not apply without introducing metaphysical concepts such as "soul" to force a distinction.


"AGI takeoff" is like "corporate takeoff". A self-organising system capable of doing immoral things at scale, to which human lives are just a number? Oh, we already have loads of those.


As someone on the periphery of tech as a profession and a practice, my understanding of new developments mostly comes from public-facing communicators and communities like HN.

I heard a podcast where Lex Fridman claimed that they don’t fully understand how LLMs produce ChatGPT’s “intelligence”. If true, I’m surprised that it hasn’t got people more worried.


A hard takeoff is unlikely because stronger AI takes a lot of hardware and data collection. Building fabs is hard and takes a lot of capital and time. Intelligence can only take you so far.


There is something to read before you go.

https://schiptsov.github.io/GPT-bullshit.html


I agree with the author's point that intelligence is largely "the process of removing the bullshit to see what remains"

So, because I thought it would be funny, I asked ChatGPT to summarize this essay:

> The author critiques the work of @karpathy, who is trending on HN, as promoting bullshit and discrediting science and true philosophy. The author argues that mathematics and logic are valid generalizations and abstractions made by the mind of an observer of patterns in the universe. Intelligence is the ability to zoom in and out through different levels of generalizations and abstractions. The author argues that the problem with language models is that they lack validation of previous steps and the process of construction of a representation must trigger a verification process. The author concludes that what is manipulated and transformed in language models is information, not knowledge, as knowledge requires validation and verification at each step.


What would be the difference between GPT3 and actual AGI? (Want to make sure we don’t keep moving goalposts)


AGI is usually defined as being able to do anything a normal human could do. Some examples of things GPT is currently lacking:

1) Understanding images and video

2) Learning and remembering things outside the 2048 token context window

3) Interacting with the environment (either through a computer, or in the real world)

4) Doing basic math (1662 + 239 = ?) and logic


I’d argue “learning and remembering” is the core issue here, and the other things you’ve mentioned are just ways humans use that ability. I took a “Knowledge Based AI” course and it really opened my eyes to how central a knowledge system is to cognition. I wouldn’t call what LLMs have knowledge since it lacks structure, though I’m very much a novice on the exact details there.

Well, actually, #3 isn’t a great example, because what exactly would you call taking input from a human and displaying output, much less e.g. controlling a smart home? Seems like LLMs are plenty smart enough to interact with things.


> Learning and remembering things outside the 2048 token context window

It is probably possible to do this with fine tuning.

Once the context window is full, it might be possible to convert the content in there into additional training examples and feed those to a fine tuning process, which then retrains the model. (The OpenAI API for fine tuning is here: https://platform.openai.com/docs/guides/fine-tuning)

It would be a bit like sleeping. Whenever the context window fills up, the model would have to go offline for a while to move memories from its context window (short term) to its network weights (long term).
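A minimal sketch of that "sleep" idea, assuming the prompt/completion JSONL format and the File/FineTune endpoints of the early-2023 openai Python client; the helper names and the choice of base model here are illustrative, not anything OpenAI prescribes:

    import json
    import openai  # assumes the 2023-era client with File/FineTune endpoints

    def window_to_jsonl(turns, path="memories.jsonl"):
        # Turn a full context window (a list of (prompt, completion) pairs)
        # into fine-tuning examples on disk.
        with open(path, "w") as f:
            for prompt, completion in turns:
                f.write(json.dumps({"prompt": prompt,
                                    "completion": " " + completion}) + "\n")
        return path

    def sleep_and_consolidate(turns, base_model="davinci"):
        # "Sleep": move short-term memory (the context window) into
        # long-term memory (model weights) via a fine-tuning job.
        path = window_to_jsonl(turns)
        uploaded = openai.File.create(file=open(path, "rb"), purpose="fine-tune")
        job = openai.FineTune.create(training_file=uploaded.id, model=base_model)
        return job.id  # poll the job, then talk to the resulting fine-tuned model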


So an 18 hour long token window is acceptable as AGI, I like it.


> 4) Doing basic math (1662 + 239 = ?) and logic

Doing math. Period.

There are only two things brains do: math and bullshitting. Transformers have the second part covered, but the other part is a completely different beast.

I think AGI will be achieved once ML can translate natural language to math, do the math, and then output results as natural language that is correct in the context of the problem.


Nice list.

1. Understanding (provided this is judged by benchmarks rather than consciousness) appears present in text, but yes, GPT-3 can’t do images or video yet

2. Is there some number of tokens that would be AGI? Of course it can remember things it has already been trained on…

3. In what way is interacting with a human not interacting with the environment? (Eg, you can ask it to tell you the steps to make a pbj sandwich and troubleshoot)

4. Basic math and logic. Ok.

So, once AI can do images, video, math, logic and remember 20000 tokens, then we have AGI?


Interesting opinions here: https://arxiv.org/abs/2301.06627


The distinction it makes between formal and functional language competence is nicely conceived, though I imagine philosophers might argue its requirement of "extralinguistic capacities that comprise human thought" for functional competence to be begging the question a little!


A first step would be to make inductive inferences from first principles. Current GPT models struggle with basic math. If you tell it "sorry you are wrong" it will regurgitate back the incorrect answer you gave. The language model is dumb enough that no amount of string escaping will prevent it from dumping its prompt.

The biggest problem right now is online information. We still don't have a good way to teach it new information aside from single epoch training or prompt engineering. If we want a model to constantly learn and update itself, then we need a robust way of information retrieval and storage, possibly either through knowledge graphs or child network modules. (Are neural Turing machines still a thing? What happened to capsules?)
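One stopgap for that retrieval-and-storage problem is to keep new information outside the weights entirely: store facts as embeddings and pull the most relevant ones back into the prompt. A minimal sketch follows; the hashed bag-of-words embed() is a toy stand-in for a real sentence encoder or embeddings API:

    import numpy as np

    memory = []  # (text, embedding) pairs acting as external long-term storage

    def embed(text, dim=256):
        # Toy stand-in for a real embedding model: hashed bag of words.
        v = np.zeros(dim)
        for word in text.lower().split():
            v[hash(word) % dim] += 1.0
        return v

    def remember(fact):
        memory.append((fact, embed(fact)))

    def recall(query, k=3):
        # Return the k stored facts most similar to the query (cosine similarity).
        q = embed(query)
        scored = [(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-9), t)
                  for t, e in memory]
        return [t for _, t in sorted(scored, key=lambda s: s[0], reverse=True)[:k]]

    def build_prompt(question):
        # Stuff retrieved facts into the prompt so a frozen model can "know" new things.
        return "Context:\n" + "\n".join(recall(question)) + "\n\nQuestion: " + question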


All the stuff a human can do basically. Move through the world, plumb a toilet, have a conversation, paint a picture, dynamically update its own state (rather than just being a statically trained set of weights), set goals, strategize, have cognition and meta-cognition and so on.

ChatGPT is just a chatbot and still can't even reliably do a lot of logic, so we're pretty far away from having something resembling an AGI.

It's still a pretty open question how to integrate even one or two of the expert-system-like models that we have now, which solve individual problems, let alone the hundreds if not thousands of problems an individual human can tackle. And then we're not even at executive functions or self-awareness yet.


The stuff that is tied to the body isn't important. Case in point: Stephen Hawking.


AGI would not mix up stuff like this https://pastebin.com/raw/zPLxKr56


GPT3 can't do math


Well, things are moving quickly now.

> Release Notes (Jan 30)

> We’ve upgraded the ChatGPT model with improved factuality and mathematical capabilities.

I just asked it a question from a puzzle site, and the answer is correct, however a different question I posed was answered incorrectly, although it was a logic problem: [0]

Prompt: In a guess game , five friends had to guess the exact numbers of balls in a box. Friends guessed as 31 , 35, 39 , 49 , 37, but none of guess was right.The guesses were off by 1, 9, 5, 3, and 9 (in a random order). Can you determine the number of balls in a box ?

ChatGPT: Yes, we can determine the number of balls in the box.

ChatGPT: Since the guesses were off by 1, 9, 5, 3, and 9, in some order, the possible numbers of balls in the box are: 30, 32, 34, 36, 38, 40, 48, 46, 44, 42

ChatGPT: By checking all possible combinations of the guesses and the numbers they were off by, we can find that the number of balls in the box is 40.
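For anyone who wants to check that independently, the puzzle is small enough to brute force; a candidate count is consistent exactly when the multiset of its distances to the five guesses matches the stated offsets:

    from collections import Counter

    guesses = [31, 35, 39, 49, 37]
    offsets = Counter([1, 9, 5, 3, 9])

    consistent = [n for n in range(1, 101)
                  if Counter(abs(n - g) for g in guesses) == offsets]
    print(consistent)  # [40]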

---

Here is a more purely math problem, correct answer:

Prompt: What is the radius of a circle that has a circumference of 3.14 meters?

ChatGPT: The formula for the circumference of a circle is given by 2πr, where r is the radius. To find the radius of a circle with a circumference of 3.14 meters, we can solve for r by rearranging this formula:

ChatGPT: r = C/(2π), where C is the circumference.

ChatGPT: Plugging in the given value of C = 3.14, we have:

ChatGPT: r = 3.14 / (2π) = 0.5.

ChatGPT: So, the radius of a circle with a circumference of 3.14 meters is approximately 0.5 meters.

[0] https://gpuzzles.com/mind-teasers/guess-game-logic-problem/


It can do some trivial problems, but not others, and it can't consistently do real math (proving theorems), because it doesn't understand logic. It can only imitate it some of the time for trivial or often-presented proofs when the necessary steps have become embedded in its next-token-generation model.

It's still getting this wrong, for instance:

Prompt: You call heads/tails for 1000 coin tosses. You know the coin being used is weighted so that it comes up heads 52% of the time. Each correct coin toss prediction earns you $5. What should your strategy be, for calling heads/tails, in order to maximize how much money you receive?

ChatGPT: To maximize your money, you should call heads 52% of the time and tails 48% of the time, as those are the probabilities of getting heads and tails, respectively. This will give you an expected payout of $5 * 0.52 = $2.60 per head and $5 * 0.48 = $2.40 per tail, which is the highest expected value given the information about the coin being weighted.
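A quick simulation makes the gap concrete: always calling heads earns about $2,600 in expectation over the 1,000 tosses, while matching the 52/48 probabilities (as the model suggests) earns only about $2,504, because you are sometimes betting on the less likely outcome. A sketch:

    import random

    P_HEADS, PAYOUT, N_TOSSES = 0.52, 5, 1000

    def average_earnings(call, trials=2000):
        # Average winnings over many 1000-toss games for a given calling strategy.
        total = 0
        for _ in range(trials):
            for _ in range(N_TOSSES):
                toss = "H" if random.random() < P_HEADS else "T"
                if call() == toss:
                    total += PAYOUT
        return total / trials

    always_heads = lambda: "H"
    probability_matching = lambda: "H" if random.random() < P_HEADS else "T"

    print(average_earnings(always_heads))          # ~2600
    print(average_earnings(probability_matching))  # ~2504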

It's getting closer to being able to prove that the square root of 2 is irrational, but it still flubs a step.

Prompt: Prove that the square root of 2 is irrational.

ChatGPT: ... This means that p can be expressed as 2r for some integer r. Substituting this into the equation above gives us:

ChatGPT: 2q^2 = (2r)^2 = 4r^2

ChatGPT: Dividing both sides by 4 gives us:

ChatGPT: q^2 = r^2

ChatGPT: So q must also be even, which means that both p and q have a common factor of 2...

It failed basic algebraic manipulation, but keeps going with the right reasoning simply because that's what it's been trained on, not because it's reflecting on the equation it just came up with. That's not math. That's mimicry.
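For reference, the step it flubbed: from 2q^2 = 4r^2, dividing both sides by 2 gives q^2 = 2r^2 (not q^2 = r^2), and it is q^2 = 2r^2 that actually forces q to be even and completes the contradiction.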

Even if it's seen something many times in training, its accuracy gets worse with longer trains of "thought", as expected because a language model like that has ADHD. It can't focus enough on one logical thread, and other "knowledge" leaks in. Ask it to derive the quadratic formula by completing the square, which requires a lot of steps with accurate algebraic manipulation. It goes through some irrelevant algebraic steps, sometimes not even getting the algebraic manipulation correct, and finally (sometimes, depending on how you ask) manufactures the correct answer out of thin air as the last step. I finally got it to answer correctly, by prompting it with the initial equation and that the first step is to divide both sides by a, but it still combined a few steps, not going methodically a single step at a time as prompted.


Seems to have improved already.

Just tried the coin toss prompt and got the following:

Your strategy should be to always call "heads," since it has a higher probability of landing on heads (52%) compared to tails (48%). By consistently choosing heads, you'll earn more money in the long run.

And to prove that the square root of 2 is irrational:

The proof that the square root of 2 is irrational is a well-known proof that uses contradiction. Here's one way it can be shown:

Suppose, for the sake of contradiction, that the square root of 2 is rational, meaning that it can be expressed as a fraction of two integers, a/b (where b≠0). Then, we can square both sides of the equation a/b = √2, which gives us:

a^2/b^2 = 2

Since a/b is in its lowest form (the numerator and denominator have no common factors other than 1), it follows that a^2 and b^2 have no common factors other than 1. But then, 2 must also be a perfect square, which it is not. Therefore, our assumption that the square root of 2 is rational leads to a contradiction, and we must conclude that it is indeed an irrational number.


This "proof" is actually a pretty good manifestation of the current limitations of GPT3; it mangles together two different proofs in a way that does not make sense.

It starts by setting up for an elementary proof (a/b = sqrt(2) with a, b minimal implies 2|a^2 implies 2|a implies 4|a^2 implies 2|b^2 implies 2|b which is a contradiction). But instead of following through it finishes using another proof based on the rational root theorem, and makes up some bullshit on the way to make the argument flow nicely (it does not follow from a^2 / b^2 = 2 with no common factors that 2 is a perfect square).


The same as the difference between GPT3 and humans’ cognitive abilities.


Unpopular idea, but I hope it's only a matter of time before we declare working on AGI a criminal endeavor, just as playing with biological weapons is.

Because we can't all be wrong: in almost every forecast, AGIs taking over our dignity is seen as a bad thing. And we know that this is no longer a hypothetical sci-fi scenario: the current generation of AI models is already taking jobs from illustrators and copywriters.

The current argument is that "China will do it if we don't", which to me sounds like "China will keep going in whatever path they are going, but supercharged with AGI, and we must desperately follow."


This is a frustrating point of view to me. We have nearly every able-bodied person in society spending something like a third of their life on this "dignity", but how many would define it as such? For most people it is an inversion of free will where they are subservient to others and treated as a replaceable cog in a larger and uncaring machine, spending time on something that they may very well hate or at least would otherwise choose not to do. All to justify their existence and provide value to society, though for some only nominally, because there is a growing number of bullshit jobs where the value provided is questionable and the people are just going through the motions.

It does not need to be that way. In an ideal world, human beings should be free to spend their time doing what they wish, work should be purely in the realm of hobby. No one should have to do work that they would not voluntarily choose to do for enjoyment.

The only way we get there is through AI and the automation of everything. I don't even think it's avoidable -- provided civilization does not collapse, we will 100% reach a point where everything required to sustain a civilization is performed by machines.

We shouldn't let fear keep us in a status quo that, while better than it has ever been historically, is still highly flawed.


> China will keep going in whatever path they are going, but supercharged with AGI, and we must desperately follow

Exactly right but for one detail - we must desperately lead.

What other countermeasure do you have in mind?


IMO this is the thing that sends us over the cliff.


Would be ironic if achieving Artificial General Intelligence would be easier than implementing VR at Meta...


Tangentially:

> North Texas’ resident tech genius, John Carmack

Part of me always wonders what would've happened if the Softdisk crew that founded id Software had done it in Shreveport, or had moved to Baton Rouge or Lafayette, instead of going to Texas. When Romero says they "waded across rivers" in Masters of Doom to build games, IIRC he's talking about the bridge over Cross Lake in Shreveport being washed out. The early demos and Keen prototypes were born in Louisiana.

There's always been so much creative tech talent without an outlet or upward mobility across TX/LA/MO/KS/AR/AL/MS, either native to it or hired into it. The nexus of id in the Dallas area and Origin Systems in Austin made Texas an oasis for those who could get there in the 90s/00s, but even among the few people in the surrounding states with access to pre-Internet education and resources, so many couldn't afford to pack up and move even one state over. States around Texas vetoing out every incentive to incubate anything but entry-level QA centers didn't help.

So many of those people either risked it all to leave, shuffled that talent into corporate work for oil/gas/finance/Wal-Mart, or didn't do anything with it at all. We know about a lot of the people who figured it out and could leave, but I guarantee there are more Romeros and Carmacks who couldn't, who are still putting in the same kinds of workloads with the same kinds of talent to figure out how to design better oil rigs or more efficiently stock Wal-Mart warehouses.


> I made an estimate three or four years ago that I think there’s a 50-50 chance that we’ll have clear signs of life in 2030 of artificial general intelligence. That doesn’t necessarily mean a huge economic impact for anything yet, but just that we have a being that’s running on computers that most people recognize as intelligent and conscious and sort of on the same level of what we humans are doing. And after three years of hardcore research on all this, I haven’t changed my prediction. In fact, I probably even slightly bumped it up to maybe a 60% chance in 2030. And if you go up to, say, 2050, I’ve got it at like a 95% chance.

> What I keep saying is that as soon as you’re at the point where you have the equivalent of a toddler—something that is a being, it’s conscious, it’s not Einstein, it can’t even do multiplication—if you’ve got a creature that can learn, you can interact with and teach it things on some level.

Last I heard he wasn't interested in getting into the murky waters of consciousness. But I guess I misremembered. I'm very surprised to hear that he's very seriously talking about a conscious computer in the near future.


What consciousness is, is an old subject, with old arguments around old misunderstandings. People here are trying to use new analogies from computer science (a clockwork universe), or misunderstandings around quantum mechanics (somehow randomness gives me agency).

Until General AI needs to work for food and reproduction, everyone will still say it’s just mimicking humans. Best summarized by Schopenhauer: "A man can do as he wills, but not will as he wills." So if we find where the GAI comes up with the original ‘will’, we’ll just write it off as computation. Go watch some Robert Sapolsky lectures. We are just a monkey society, reacting to stimuli based on hormones and what we just ate. If you drill down far enough, sure some electrons twitched one way or the other, and yeah, if you steal something, or do something the group doesn’t like, then all the other monkeys will want to beat you up and call it justice, and dream up some logic to justify it and call it morality. And eventually the same will happen between GAI agents. Because it’s just turtles all the way down.


I find the tone and collective mindset expressed in this article deeply unsettling. We are talking about developing technology that is going to be the foundation of, and a huge challenge for, the long-term development of the human race, and people are overtly stating that they are working on this because they perceive it to be a shot at making trillions of dollars. I mean, what the fuck, how can money be the right motivator here? This technology would change everything about what it means to be human, and we do it because „we can make money“? This is so shortsighted it’s almost tragically laughable. And we as a society worship people talking this way as some kind of heropreneurs. Like anyone alive today would be able to do anything without all the people supporting our continent-crossing supply chains that are crazily environmentally destructive and all-round unsustainable.

Sorry for this rant, but come on, we can do better than this!


For a long time, there have been two general approaches to AI. The first was focused on algorithms and heuristics, giving us everything from Eliza and somewhat-useful expert systems to Doug Lenat's Cyc project. Then came "AI winter" and everything (except some robotics) was pretty much on hold for a decade. Finally we got "cheap and deep" neural-net stuff as we know it today. I think we need both before we can reach true AGI. Whoever unifies them will have achieved the equivalent of grand unification in physics. And probably also created Skynet.


I stopped reading after the passage “and works independently [to achieve AGI]”. In my book, it’s impossible to achieve such a breakthrough by working alone, regardless of who the man/woman is.


“If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual. Human potential manifests in individuals.” — Dan Ingalls

“Civilization advances by extending the number of operations we can perform without thinking about them”


> In my book, it’s impossible to achieve such a breakthrough by working alone, regardless of who the man/woman is.

Another fascinating opportunity for AGI - no sole contributor is all on their own, they can just spin up a community to embed into.


How can you claim something is impossible if you don't even know what's going to be achieved?


>We do not yet have the learnable stream of consciousnesses of a co-worker in AI, but we do have kind of this oracular amount of knowledge that can be brought forward.

Except: unless there is some plan to give AGI agents their own mundane, human-like lives, with issues unrelated to the business problem at hand that they are supposed to work on, where will they get their serendipitous inputs from?


Carmack seems to be simulating a vtuber AI watching TV and playing video games.

If anyone here is doing that too, I would recommend taking a quick look at Neuro-sama on Twitch. They're using RL to play OSU, Minecraft, and Pokemon, and voice input + video image analysis to react to Twitch streams and documentaries. While being watched by 6.5K people.

The url is twitch.com/vedal


Good point, although some background info should be noted: Vedal worked on Neuro-sama for about 10 years, and for most of that time it was a NN with a singular task of playing "osu!". I think it's only this year that he implemented a chatbot with TTS for her debut on Twitch. Although Neuro-sama is an impressive application of several existing technologies to make a virtual entertainer, under the hood Neuro-sama is several specialized NNs (one to play Osu, one to play minecraft, one for chat, and one for TTS) that, for the most part, don't know what the other is doing as very little data is passed between them. There is no scientific innovation or any claims of higher intelligence, but I think it's still a very cool system from hobby-engineering point of view. Neuro-sama is a ditz, but a funny one.


I agree with John that there are probably other people in his position (great engineers with a lifetime of experience and enough financial resources to focus on something like this for 5-10 years) who could have a go at this and have a reasonable chance at chipping away at AGI.

I also like his frugality, whether it’s optimising for hardware or financials.


> So, about four years ago, I went on one of my week-long retreats, where I just take a computer and a stack of reference materials and I spend a week kind of reimplementing the fundamentals of the industry. And getting to the point where it’s like, ‘All right, I understand this well enough to have a serious conversation with a researcher about it.’ And I was pretty excited about getting to that level of understanding.

As much as I respect Carmack as a computer graphics expert, I really doubt his competence in machine learning. He doesn't have a single notable paper published. If he really thought that implementing gradient descent and basic stuff in a week-long retreat gave him the chops to have serious conversations with AI researchers, he is really deluded.

Unless he can produce something that outdoes Stable Diffusion, ChatGPT, AlphaGo, etc., he should just hand over technical leadership of his startup to a leading AI researcher. Even Yann LeCun at Meta is struggling to make any progress and is keeping himself busy by calling every other research lab's output pedestrian. We cannot take any of Carmack's AGI predictions seriously; he simply lacks any expertise in the field.


> As much as I respect Carmack as a computer graphics expert, I really doubt his competence in machine learning. He doesn't have a single notable paper published.

Publishing papers is the way the academic/scientific world measures notability and/or competence. It's not the way the engineering world that Carmack comes from measures it. They measure it by building. But you're right, we kind of have to just trust that he has the expertise he says he does by his statements since he has not built any modern AI programs (that I know of at least).

> If he really thought that implementing gradient descent and basic stuff in a week long retreat gave him the chops to have serious conversations with AI researchers, he is really deluded.

This is not an accurate account of how he said he developed his knowledge base. Just how he got started so he could have conversations. He said that he spent a retreat learning the basics and then later in the interview he said he took the time to understand the 40 most essential papers in the field as related to him by a well known researcher. He has since largely put the last 4 years of his professional life into this. While we have no proof of his knowledge, given his intelligence and high competence in computer programming and math, I have no doubt that if he did put in the work he could achieve an understanding equivalent to that of your average AI researcher.

That said, of course it makes sense to be skeptical.


> This is not an accurate account of how he said he developed his knowledge base.

I quoted him directly, because I was expecting this kind of response. He took a week off and implemented some stuff from the ground up and was ready to have serious conversations with AI researchers. The 40 papers by Ilya came later. I have read 100 ML papers and reviewed preprints. That's quite a low bar, especially if you are prone to skip the math and simply read the abstract and conclusions.

His whole approach gives me a ML for hackers vibe and his thoughts on AGI, if it had come from anyone else, would have been described as word salad.


> The 40 papers by Ilya came later. I have read a 100 ML papers and reviewed preprints.

I would say it’s more likely John Carmack is capable of learning the state-of-the-art of AI from 40 papers than a random (pun intended based on username) from 100.


Sure, he must be faster than Geoff Hinton too, and it took Hinton a lifetime.

Funnily enough, I am able to publish ML papers - but John Carmack isn't. I wonder why. I would also like to learn more about all the computer graphics algorithms Carmack has invented before I trust him to invent AGI.

Here is one example of a person I am familiar with: Math Olympiad bronze medalist, Princeton PhD in ML theory, AI researcher at Google.

https://scholar.google.com/citations?user=gZgQLkgAAAAJ&hl=en

Sadly enough, nobody seems to care about his opinion on AGI, but we have thousands of people hanging on Carmack's words because he built Wolf3D and Doom.


Your OP and replies kind of just come across as jealous that John Carmack’s opinion on this stuff is taken seriously whereas relatively unknown folks’ (yourself, the researcher you mentioned) opinions are not, despite the fact that he’s not traditionally credentialed. Like I said in my original post, we should be skeptical of him and his claims. But the way you are dragging this thread out feels like sour grapes. Of course one of the most famous programmers in the world is at least heard out when he dedicates years of his life to a programming-adjacent topic. That’s just how fame works.

And calling out Princeton, Google, etc. further exemplifies an academic bubble kind of credentialism. I suspect you don’t realize how it sounds because you’re so in that credential filled world.


I don't have any credentials really. If I had to be jealous, I would be jealous of Carmack's work at id and Oculus.

I am definitely irritated by the fact that he is able to pull $20M in funding. He has been giving interviews left, right, and center. We are upvoting Altman, Sam Harris, and Carmack, but nobody cares about the actual AI researchers, all academics, who have brought about the deep learning revolution.

Your comments about academic bubbles and researchers give you away. All of the revolutions in AI have been brought about by the academics inside the bubbles you are jeering at. The biggest example of an academic inside a bubble is Geoff Hinton.

We have to sit here and listen to word salads from Carmack, Altman (Ilya, who doesn't get interviewed, is the actual researcher behind ChatGPT), Sam Harris, etc., who have very little insight while making bold AGI predictions.

I asked to learn about all the algorithms Carmack invented in computer graphics, but haven't heard back. But looks like he has a good grasp on solving AGI by 2030 according to himself. After all, he is a really great programmer.


> I don't have any credentials really. If I had to be jealous, I would be jealous of Carmack's work at id and Oculus.

Fair, you come across as jealous of him for something... could be that.

> I am definitely irritated by the fact that he is able to pull $20M in funding. He has been giving interviews left, right, and center. We are upvoting Altman, Sam Harris, and Carmack, but nobody cares about the actual AI researchers, all academics, who have brought about the deep learning revolution.

There it goes again. Fame is fame is fame is fame. Like I said earlier. That's just how it works. No reason to be against someone for being famous for their non-academic accomplishments if they have something valuable to contribute.

> Your comments about academic bubbles and researchers give you away. All of the revolutions in AI have been brought about by the academics inside the bubbles you are jeering at. The biggest example of an academic inside a bubble is Geoff Hinton.

I didn't jeer at anybody; I just tried to give you some perspective on how your comments came across. I appreciate all of the researchers, but I also live in the real world and understand that people flock to personalities and front-people. That's not necessarily good, but that's the way it is. Just like nobody gives credit to the under-secretary of state for coming up with a great new foreign policy.

Waving around great university A or great company B does not make someone any more right, just like being John Carmack doesn't make him any more right.

In fact I've written very critical comments of folks like Altman here before: https://news.ycombinator.com/item?id=30750254

> We have to sit here and listen to word salads from Carmack, Altman (Ilya, who doesn't get interviewed, is the actual researcher behind ChatGPT), Sam Harris, etc., who have very little insight while making bold AGI predictions.

I agree with you. I never said they were right about AGI. Maybe you should be more generous with my replies and actually think I was trying to give you another perspective about how your comments came across.

> I asked to learn about all the algorithms Carmack invented in computer graphics, but haven't heard back. But looks like he has a good grasp on solving AGI by 2030 according to himself. After all, he is a really great programmer.

Again, you're confusing who he is (go back to my original comment in the thread where you jeered at him because he didn't publish any notable papers). He is not a researcher and he is not really a computer scientist. Engineers don't come up with novel algorithms for the most part. He builds things. And being a really good builder is a different but still valuable perspective. But that doesn't mean he's right about AGI.


AI researcher perhaps, but almost none of them understand cognition. They're focussed on getting something that vaguely resembles a part of the brain to predict the next most likely token. Their idea of cognition apparently stops at Skinner.


Any AI researcher worth their salt knows that those models aren't representative of how the human brain works... but they're just the kind of models that work best out of what we know how to implement right now. There are models with stronger cognitive inspiration, but their performance is worse.


Anything you study for a few months you can become the world's leading expert on. It's a lesson I learned while doing my Ph. D. That's all it takes. After a few weeks, you get to the point where there are only a few others in the world that have read and are able to understand what you've read. A few months on, you are generating new ideas and insights. They might be wrong. But they won't be uninformed.

John Carmack did not start from zero. He already has a firm grasp on algorithms related to linear algebra. Basically machine learning is a whole bunch of matrix manipulation. He's been doing that for 3 decades. The rest is just absorbing concepts about how to apply linear algebra to ML. I'd say he's probably uniquely qualified to really absorb a lot of knowledge quickly on this. It's not about publishing papers, it's about reading and understanding the right papers. I have no doubt he can chew his way through lots of research material in a week or so.
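To make that concrete, here is roughly what one full training step of a tiny two-layer network looks like; it is a handful of matrix multiplies plus element-wise operations, exactly the kind of linear algebra a graphics programmer has been doing for decades (a toy sketch, nothing more):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 10))   # 64 examples, 10 features
    y = rng.normal(size=(64, 1))    # regression targets
    W1 = rng.normal(size=(10, 32)) * 0.1
    W2 = rng.normal(size=(32, 1)) * 0.1
    lr = 0.01

    for step in range(1000):
        # Forward pass: two matrix multiplies and a ReLU.
        h = np.maximum(X @ W1, 0.0)
        pred = h @ W2

        # Backward pass: the chain rule, again just matrix algebra.
        d_pred = 2 * (pred - y) / len(X)   # gradient of mean squared error
        dW2 = h.T @ d_pred
        d_h = (d_pred @ W2.T) * (h > 0)    # ReLU gradient
        dW1 = X.T @ d_h

        # Plain gradient descent update.
        W1 -= lr * dW1
        W2 -= lr * dW2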


If it is simply about linear algebra, can you please read this ML Paper[1], go through all the proofs and lemmas over a week? You already have a PhD in ML, should be easy. Every kid graduating in STEM understands/should understand linear algebra. Knowing linear algebra is such a low bar.

[1] https://arxiv.org/pdf/1904.09237


Frankly, one does not need this paper to get toward AI. Adam, the optimization algorithm, you might need (and even there I am not sure), and it is very readable. The fact that this particular proof of Adam's convergence is complicated is largely irrelevant.

https://arxiv.org/pdf/1412.6980.pdf
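For what it's worth, the update rule itself is only a few lines, which is part of why the paper is readable even if this particular convergence proof is not. A sketch of a single Adam step, using the paper's default hyperparameters:

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # Exponential moving averages of the gradient and its square...
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        # ...bias-corrected because m and v start at zero...
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        # ...then a per-parameter scaled gradient step.
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v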


Yes, if you set the bar low enough - everything is easy and can be learnt in a month.


That's not really an argument for why understanding this is needed to move the field forward.

Even your point rests on an assumption: that there's no proof of Adam's convergence that a high school student could understand. That is just a guess at best.


He’s been working on machine learning long enough now to have some chance of success. It may go the way of his rocket ambitions (nothing came of them), but let the man try.


Yep - he also failed with his lean, simple, first-principles approach with Armadillo Aerospace. The guy is proudly uncreative, and so he could have never come up with Scaled's design of a variable geometry rocket ship launched from a jet mothership.


It looks like we'll eventually have AGI and also detect life in space followed by contact with alien intelligence. It seems to be just a matter of time (unless humanity collapses first).

Interesting times - what will happen first?


My biggest takeaway from this is that he started by reading the most important papers on deep learning. When he originally announced his startup a few months ago, I wondered if he had something else in mind.


What would we actually consider an AGI? Wikipedia lists the following tests

> The Turing Test (Turing): A machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.

> The Coffee Test (Wozniak): A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.

> The Robot College Student Test (Goertzel): A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.

> The Employment Test (Nilsson): A machine performs an economically important job at least as well as humans in the same job.

LLMs don't seem very far from passing 1), 3) and 4). I wouldn't be surprised if "GPT5" passed those 3.


I think GPT is very far from solving 1), at least if you happen to know anything about LLMs.

I think the easiest one of these would be 4) actually.


Is it? I remember reading [1] back in the day, which showed some limitations of GPT3, but ChatGPT actually answers all of those perfectly.

[1] https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.h...


With the utmost respect for John Carmack, this interview reads like a publicity piece for investors. Unless he already wrote the AGI and sent it to do the interview for him, I am not impressed.


Carmack is not taking more funding right now. It reads like a puff investor piece because that is the target audience for the publication.

He isn't trying to impress anyone. He's just being interviewed about his intentions.


John is rubbing my ego with this move. I have basically been on the same track for a few years now, with somewhat lower resources (4x3090, but planning to grow).


He's going to take a different path by downloading himself into a machine and becoming the AGI. No one will be surprised.


Like Mozart, Beethoven, and Chopin, John Carmack is one of those men who will never really die. He’ll simply become code.


By that measure he's long dead


For those who will never meet Carmack they might as well consider him dead, only his code is what matters to them now.


Carmack makes four points—some of which I agree with—that are unfortunately disturbing when taken in totality:

a) We’ll eventually have universal remote workers that are cloud-deployable.

b) That we’ll have something on the level of a toddler first, at which point we can deploy an army of engineers, developmental psychologists, and scientists to study it.

c) The source code for AGI will be a few tens of thousands of lines of code.

d) Has good reason to believe that an AGI would not require computing power approaching the scale of the human brain.

I wholeheartedly agree with c) and d). However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one. Sure, a hard-takeoff scenario could very well have little stopping it. However, I think that misses the forest for the trees:

Nothing says AGI is going to be one specific architecture. There’s likely many different viable architectures that are vastly different in capability and safety. If the bar ends up being as low as c) and d), what’s stopping a random person from intentionally or unintentionally ending human civilization?

Even if we’re spared a direct nightmare scenario, you still have a high probability for what might end up being complete chaos—we’ve already seen a very tiny sliver of that dynamic in the past year.

I think there’s a high probability that either side of a) won’t exist, because neither the cloud as we know it nor the need for remote workers will be present once we’re at that level of technology. For better or worse.

So what to do?

I think open development of advanced AI and AGI is lunacy. Despite Nick Bostrom’s position that an AGI arms race is inherently dangerous, I believe that it is less dangerous than humanity collectively advancing the technology to the point that anyone can end or even control everything—let alone certain well-resourced hostile regimes with terrible human rights track records that’ve openly stated their ambitions towards AI domination. When the lead time from state of the art to public availability is a matter of months, that affords pretty much zero time to react let alone assure safety or control.

At the rate we’re going, by the time people in the free world with sufficient power to put together an effort on the scale and secrecy of the Manhattan Project come to their senses, it’ll be too late.

Were such a project to exist, I think that an admirable goal might be to simply stabilize the situation via way of prohibiting creation of further AGI for a time. Unlike nuclear weapons, AGI has the potential to effectively walk back the invention of itself.

However, achieving that end both quickly and safely is no small feat. It would amount to the creation of a deity. Yet, that path seems more desirable than the alternatives outlined above: such a deity coming into existence either by accident or by malice.

This is why I’ve never agreed with people who hold the position that AGI safety should only be studied once we figure out AGI; that to me is also lunacy. Given the implications, we should be putting armies of philosophers and scientists alike on the task. Even if they collectively figure out one or two tiny pieces of the puzzle, that alone could be enough to drastically alter the course of human civilization for the better, given the stakes.

I suppose it’s ironic that humanity’s only salvation from the technology it has created may in fact be technology—certainly not a unique scenario in our history. I fear our collective fate has been left to nothing more than pure chance. Poetic I suppose, given our origins.


> However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one.

Yes. I wonder why we're not trying to instead create an artificial cockroach brain (or just an artificial Hydra?). Perhaps that's more on the biology side of the equation? But then again, that may be the biggest surprise of all to Carmack: that the actual AGI breakthroughs come from biologists and not computer nerds.


> what’s stopping a random person from intentionally or unintentionally ending human civilization?

If AGI is really an intelligent agent, our random supervillain would have to do what any real-life villain would need to do: convince his minions of his plan using persuasion or money. I don't think the overall danger would increase at all.

If the AGI is something less than a human, then what are you worried about?


Intelligent agents need not mirror human psychology or emotions. The creation of something extremely powerful that doesn’t think like we do is a very real possibility.

In human beings, what we consider normal is actually a very fragile and delicate balance. Changes in chemical operation of the brain have outsized effects on emotion, perception, and even sanity.

With A[G]I, I think it’s helpful to think of code or architectural changes as analogous in some respects to chemical changes. In other words, if all it takes to spin up an AGI is 30,000 lines of code, then I bet rendering the thing psychotic intentionally or unintentionally would just take a few lines somewhere.

Agents capable of recursive self-improvement at silicon speeds, which can easily be rendered psychotic or malevolent even by accident, are not something that I think the public should have access to, or really anyone at all.

If it’s less than human, it can still have superhuman capability. The paperclip maximizer is the classic example of a tool AI run amok. Whether it counts as AGI is up for debate. Is tool AI a path to AGI? I think it is.


Until we have resolved the violent side of our human nature by finding a way to eradicate it from our world civilization (which would mean no more murders, no more killing, no more waste, no more wars, etc.), we cannot start calling ourselves an intelligent life form rather than a primitive species. This next level of humanity must be reached before venturing into creating a new life form, which is the aim of AGI. Here is my tongue-in-cheek rename for AGI, "awful greedy intelligence™", and to go with it, AI, "awful intelligence™"... We are way behind in our human evolution, and AI or AGI is not going to do that for us. It is easier to chase a fantasy than to make real change in human nature.


what are the 40 papers he read?


>And one of the things that I trained myself to do a few decades ago is pulling ideas out and pursuing them in a way where I’m excited about them, knowing that most of them don’t pan out in the end. Much earlier in my career, when I’d have a really bright idea that didn’t work out, I was crushed afterwards. But eventually I got to the point where I’m really good at just shoveling ideas through my processing and shooting them down, almost making it a game to say, ‘How quickly can I bust my own idea, rather than protecting it as a pet idea?’

Cool life lesson there


'Kill Your Darlings' is one of the most fundamental skills in anything creative. It is a hurdle that, once taken, allows you to ascend above a plateau of mediocrity.


Well, he's not taking risks anymore by doing this. He has already proven himself, he's rich, and he's paid incredible amounts to do whatever he likes.


The lesson to be learned is that it’s less risky, not more, to understand that some ideas don’t work out and to have ways to identify failures quickly.


I get and like that interpretation.

I wonder if, societally, it's true though, or if statistically the more reliable way to "succeed" in life is getting good at selling and defending ideas, even the bad ones.


That only works if you lack a conscience. I think you were right to quote the word succeed!


For most people that lean heavily towards this (we all somewhat do, we all have our pride from time to time) it's probably not a very conscious decision. Framed slightly differently it can easily be made a virtue, for example "be headstrong" or "stand up for what you believe in".

It's hard to spot while/when you are doing it.


Not all advice applies all the time.

I think what you're talking about here "stand up for what you believe in" is maybe a position you should take with others.

While "kill your darlings" is a devil's advocate position you should take with yourself when you don't have someone giving you constructive criticism.


Yeah, if your family's income etc. depends on it, it's hard to take that many risks. Not everybody has enough wealth to prosper from the passive income such wealth could create.


I don't think it's something he "trained himself to do."

He's always taken risks. He went to juvie for breaking and entering (with thermite) as a kid. He's a college dropout. The pattern from early in his life has been to do whatever he wanted without any kind of risk analysis, not following "common sense."


>common sense

Also called the showstopper for true innovation ;)


I feel that. I’m often doing a thing thinking "this is wrong, it won't work because of xyz" but continue because I want it to work, only to give in to what I knew a day or more ago but refused to acknowledge. It would definitely be more efficient to listen to... myself, and kill things early.


I guess the hardest bit is finding the line between stubbornly pushing a dead idea forward vs. "wouldn't it be cool if <...your idea...>? Nah, that won't work" and never even trying.


I'm always taken aback by this fascination with and heroification of John Carmack (or similar figures) on HN. I love Doom and Quake too, but that doesn't qualify him any more than any other random senior engineer to work on something totally different.

From his interviews it looks like he understands little about the technical details of ML, or about as much as anyone can learn in a few months, and is just banking on PR based on his games and name.

I put him into the same category as Elon Musk, who also understands nothing about the technical details of AI but was still able to hire a world-class team at OpenAI. His name and fame count for something in terms of recruiting, and joining his venture may be a good bet because of that, but he's not a person whose opinion on the subject matter I would take seriously in the same way I'd take a researcher seriously.


I got obsessed with his ideas after reading the Quake 3 source code. So many novel ideas, flawlessly executed, highly optimized, yet easy to understand and easy to modify and debug.

My personal impression is that John Carmack has the ability to organize concepts in a way that few people can. So even if he's pretty clueless about the topic now, I would expect him to reduce some maths papers to their essence in a way that nobody else did.

I mean also for Oculus, reprojection in a pixel shader seems like an obvious and easy solution in hindsight. But nobody had tried that before he did. Plenty of people (myself included) knew the math. But we all missed its application to the issue at hand.


He’s been going on about AI for at least a couple of years now: https://news.ycombinator.com/item?id=21530860 It’s certainly possible that he will have a novel idea, but I haven’t seen anything come out of it yet.

I’m sure Carmack’s mathematical ability exceeds that of the average game developer (and certainly my own), but the AI field isn’t short of math whizzes.


Most math whizzes only learn to apply known methods; they don't invent new methods. The AI field isn't full of people who are creative enough to invent new math methods instead of just applying methods they learned in school.


I disagree. I know of plenty of people in the AI field that might as well be mathematicians.


Most mathematicians aren't creative enough to invent new math methods either, they mostly just apply things they learned in school to find new results. Universities aren't good at finding nor nurturing creative individuals.


There are plenty of new scientific results, in both Math and Machine Learning, that are essentially 6 months/a few years/a few decades of hard work away from discovery.

As a professor once told me: if you're lucky, after 10 years in academia, you get to be creative and come up with 4-5 ideas once every 5 years or so (when drawing up the suggested list of research for new PhD students to do). Then you get to be devastated when no one picks up any of your more creative ideas, and everyone picks the 10 non-creative ideas on the list because they don't want to risk their PhD.


Yeah, so a person like Carmack who takes the risk and has spent his career doing technically creative things could maybe make a difference. Probably not, but as long as creativity is so disincentivized we will never have too many creative individuals around.


Even if those ideas are dead ends, it's really important for the rest of the community to know that at least someone has put in the effort and discovered several dead branches in that creative direction.


What does that even mean?

Is that just your gut feeling, as in typical "bright people doing amazing things, surely they would succeed doing x" (which ultimately means nothing, that's just PR speak), or are you saying that based on your own understanding of advanced mathematics and mathematical research from which you are assessing the mathematical work those people you talk about do?


The whole point was that the math didn't matter.


I assume Carmack can pick up a lot about a new technical topic in a few years, similarly to how Bill Gates could learn a lot about vaccines and healthcare in a few years. "Hi, I'm (rich and nerd famous person) calling for (expert in field)." Probably spending most of that time being tutored by that expert's PhD students.


I'm sure he can, but has Bill Gates done actual groundbreaking research into vaccines, or just made intelligent decisions about what kind of research to fund? I think the latter.


Sometimes real progress is made only by finding the proper place to "push" on an otherwise stubborn problem that won't budge.


Sure, I guess. My point is that Bill Gates is doing something quite different from John Carmack (funding research outside his area of expertise, rather than doing the research himself), so the comparison is not very illuminating.


> He’s been going on about AI for at least a couple of years now: https://news.ycombinator.com/

A whole couple of years, eh? Wow, this AGI thing is taking forever!


I'm just saying there's been no visible indication of progress. You're obviously free to believe that something will come of it over a longer time period. It also seems that we can add at least another year based on this comment: https://news.ycombinator.com/item?id=21531255


Can you elaborate on 'reprojection in a pixel shader'? What did he do, what was the insight etc


I think they are talking about asynchronous reprojection, which is extensively used in VR. Here's a demo of it applied to a 2D game: https://www.youtube.com/watch?v=VvFyOFacljg

He explained the concept in this post: https://web.archive.org/web/20140719085135/http://www.altdev...

But it was not called asynchronous reprojection back then.
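
For the curious, here is a minimal numpy sketch of that rotational reprojection ("timewarp") math - a hypothetical pinhole-camera model with made-up names, not the actual Oculus shader. You render a frame at one head rotation, and at display time you warp it toward the newer rotation by rotating each output pixel's view ray back into the rendered frame and resampling:

    import numpy as np

    def reproject_rotation(frame, K, R_render, R_display):
        # frame: HxWx3 rendered image; K: 3x3 camera intrinsics;
        # R_render / R_display: world-to-camera rotations at render and display time.
        H, W = frame.shape[:2]
        K_inv = np.linalg.inv(K)
        R_delta = R_render @ R_display.T   # maps display-pose view rays into the render pose

        # Homogeneous pixel coordinates of the *output* image.
        xs, ys = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T

        # Back-project to view rays, rotate, re-project: the homography K * R_delta * K^-1.
        src = K @ (R_delta @ (K_inv @ pix))
        src = (src[:2] / src[2]).T.reshape(H, W, 2)

        # Nearest-neighbour resample; the GPU version samples bilinearly per pixel.
        u = np.clip(np.round(src[..., 0]).astype(int), 0, W - 1)
        v = np.clip(np.round(src[..., 1]).astype(int), 0, H - 1)
        return frame[v, u]

The whole warp is just the homography induced by the rotation delta, which is why it is cheap enough to run right before scan-out, asynchronously from the renderer.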


I beg to differ. It's unbiased people like Carmack who tend to think out of the box. I don't think we can state that he did invent the FPS genre, but at the very least he was one of the initial catalysts.

People who work in a field for a long time tend to have a certain bias towards a solution. Often these people are stuck in a local maximum. Outsiders can offer a new perspective that results in a breakthrough, usually by starting from first principles or looking at different side-tracks that used to lead to a dead end.

A great example is Musk's SpaceX: when he noticed how much he had to pay for a rocket engine, he went back to first principles and said: "I'll just build it myself". Combine that with the insight that a rocket should be able to land properly to make re-use a valid option, and it disrupts a whole field.

And once someone did it, others know it's possible and start achieving it as well.

Sometimes ignorance is bliss. Just think about George "Good Will Hunting" Dantzig [1] with the (in)famous "I assumed these were homework assignments, not unsolved math problems" [1] or Eliud Kipchoge running a marathon in under 2 hours.

"I can't understand why people are frightened of new ideas. I'm frightened of the old ones."

- John Cage

High hopes!

[1] https://bigthink.com/high-culture/george-dantzig-real-will-h...


>Sometimes ignorance is bliss. Just think about George "Good Will Hunting" Dantzig [1] with the (in)famous "I assumed these were homework assignments, not unsolved math problems" [1] or Eliud Kipchoge running a marathon in under 2 hours.

A pretty infamous example is Citizen Kane. Welles knew very little of traditional cinematography techniques by the time he got to the production of the movie, so his planned shots more or less fit his artistic vision rather than industry standards. Gregg Toland, his director of cinematography, was at that point a 12 year veteran of the industry and hated the bland factory-line output of movie composition of the period.

Welles more or less told Toland what he wanted to shoot and how, and Toland did his best to fit that artistic image. Welles was completely clueless that Toland was using innovative, never-before-seen techniques to film his shots, and Toland kept quiet because he was allowed to go wild with his vision :)


I am not a big fan of idolizing someone, but Carmack is so technically good, and explains himself so well, why not get inspiration from him? I read Masters of Doom and genuinely enjoyed it.

Let's say I strongly disagree on many levels with the comparison to the other person you mention. Just to mention two, the humbleness that Carmack shows and how well he explains himself are key differentiators for me. Regarding the appeal to authority in AI knowledge, Carmack has shown again and again he can deliver software (AI is software after all), and we are in a forum with hacker in the name.

In summary: not my hero, but when he says something I will listen. Maybe I learn something.


That's a whole lot of gatekeeping right there ;)

Big organizations ruled by money and career-driven people often run into very expensive dead ends without noticing for years (see the last AI winter: there was just too much hype, which then led to too much money being thrown around, which then led to the usual organizational cruft).

I would also be very concerned about any field in technology, if an intelligent person cannot make meaningful contributions after a few months or years, that would probably mean the whole field is already deep into diminishing returns and needs to be 'unstuck'.


> From his interviews it looks like he understands little about the technical details of ML, or about as much as anyone can learn in a few months

What makes you think that? He literally says he tries to understand things bottom-up, by knowing about every little detail that happens under the hood.


I dislike the way HN submitters post every little Twitter thread written by Carmack, but this article has some value in presenting a somewhat more in-depth perspective on how he is thinking about the AI problem.

Carmack has already entered two spaces of computer technology that he revolutionized: 3D gaming and VR. I trust that he's able to have a similar impact in AI, even if it's through failing at the problem in different ways than relying on ML.


The jury is still out on whether Oculus revolutionised VR or not. Technologically, Oculus has become more like Google's Daydream than anything else: inside-out tracking of the device and controllers, use of cellphone-class computational power for standalone operation, etc.

The original technologies that were introduced have now been abandoned by Oculus, on Carmack's advice, except for implementation details that haven't been hugely impactful, unlike with 3D gaming.


Too bad people who make progress don't ask for permission to try new things and they probably don't even care what anyone thinks. They are not limited by decades of dogma and rules of the game that experts would take for granted you can't break. If they fail, then well, no problem, at least they tried. If they succeed, then it's a win for all.


Changing fields is tough for everyone, no matter how talented you are. There is a great chance he will fail. On the other hand, he meets all the requirements needed to succeed.

Carmack has proven his extraordinary technical skills. I recommend following his Twitter. Sometimes he posts non-obvious technical stuff. I read some interviews and to me, he doesn't seem to be a person who is driven by gaining popularity.

I think this news is very optimistic, as yet another intelligent, talented and hard-working person is joining the field. Moreover, he is a household name, which may lead to benefits like popularization of the topic, gaining investors' attention and so on.

I will keep my fingers crossed for him :).


I fully understand (and share) your aversion to personality cults. And it's definitely possible that there exists some actual personality cult/fandom around Carmack.

BUT - I strongly believe, that he has earned quite some respect during his career.

And - importantly in this case - he is well known for NOT blowing things out of proportion, indulging in wishful thinking, hyping up unrealistic expectations or jumping to premature conclusions. He usually knows what he's talking about.

This is not people blindly believing everything he says - but more a case of his statements holding up really well under critical inspection most of the time.


> I love Doom and Quake too, but that doesn't qualify him any more than any other random senior engineer to work on something totally different

This means that people should glorify random unknown senior engineers they don't know about instead of being fascinated by the person whose work is available and who has created amazing things for the past 3 decades?

> From his interviews it looks like he understands little about the technical details of ML

That's how everyone starts: they understand a little. We have a person here who has dealt with complex algorithms in a difficult-to-master language for 30+ years. It hints at "this person has the intellectual power to grasp AI fast".

> I put him into the same category as Elon Musk

This is like comparing an Olympic winner with a random person from the street and saying their athletic ability is about the same.

Fascination and heroification with Carmack comes for a reason; people who do that are closely familiar with his work - not surface level like "He made Doom and Quake". You sound very jealous.


Banking? Frankly, if he was after money there would be easier ways.

He isn't a kook, and he's taking a moonshot at AGI: I say 'good luck!'

That doesn't mean I believe his '60% chance of AGI by 2030' isn't wildly overoptimistic, but then again, those who take a shot at AGI are overoptimists.

But he can bring a lot of value, we'll see.


I've never got the impression that he's banking on his reputation getting him anywhere. He "made it" long ago and is in a position to just pursue his passions. I think he broadcasts his interests and progress because others like following along, and not because he's wanting even more attention or success.


> I'm always taken aback by this fascination and heroification with John Carmack (or similar figures) on HN. I love Doom and Quake too, but that doesn't qualify him any more than any other random senior engineer to work on something totally different.

According to Sriram Krishnan, John Carmack was at Facebook's highest engineering level and achieved the top possible rating of "redefines expectations" for his level three years in a row. They had to create a new tier for him. Nobody else has ever reached that level. He replaced a "hundred person team" and maybe was better than that team.

I have no inside insight to the matter, but this seems like something beyond a "random senior engineer".

https://www.youtube.com/watch?v=KXbSOiqnQxQ&t=711s


I agree with your point in general within fields that are a proven success, but as discussed many times on HN, there's no reason to believe current ML/"AI" approaches have much to do with AGI, so it's not obvious what would be gained by him or anybody else investing years in barking up the same barren research tree.

It even makes it clear in the title he's seeking a "different path".

In his favour he's a proven success in different fields; personally I think he's too old to come up with the new ideas needed - that's a young person's game.

But perhaps he can do it as a team lead - and it won't be by following the failed-over-decades path of our current academic gatekeepers.


I would never put these two people in the same box, except a box that simply says: stinking rich.

I don't know either personally, but where Elon demonstrates being full of shit, Carmack would STFU and learn about it before talking. At least that's my impression of them.


There isn't even that much to learn about AI at this stage: gradient descent and a lot of model variations. I really doubt Carmack hasn't been able to get deep knowledge of this; most smart, curious undergraduates are already up to date with it.
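
To be fair, the core loop really is small. A toy sketch of gradient descent fitting a line (made-up data, nothing framework-specific) looks something like this:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)   # noisy line, true w=3, b=1

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = (w * x + b) - y               # prediction error
        w -= lr * 2.0 * np.mean(err * x)    # d(mean squared error)/dw
        b -= lr * 2.0 * np.mean(err)        # d(mean squared error)/db

    print(w, b)   # lands near w ~= 3, b ~= 1

Everything above that is variations on what the model is, what the loss is, and how the gradients get computed at scale.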


ML and AI are not a solved thing. The current "deep" algorithms, whose technical details everyone is striving to master, are quite possibly not the ones that will lead us to AGI.


Are you sure you're not misinterpreting him explaining it in a simple way?

His audience in an interview is not technical. He adjusts to that situation quite well.


> I put him into the same category as Elon Musk, who also understand nothing about technical details of AI, but was still able to hire a world class team at OpenAI.

I don't believe he had anything to do with hiring at OpenAI, nor that he is anything more there than an investor/donor, as others are.


Perhaps not directly, but from the very start OpenAI was considered "Elon's AI company" and people wanted to work there because they wanted to work with Elon. That had a huge effect on what kind of talent OpenAI was able to hire.

Yes, I'm aware that Ilya had much more day-to-day impact, but if you're part of the ML research community and have worked with Ilya, or heard rumors of working with Ilya, that may have had more of a negative effect...


> From his interviews it looks like he understands little about the technical details of ML...

I'd happily bet my entire net worth that he knows more about the technical details of ML than you do.


Maybe Carmack is frustrated by the tremendous success of another ex-game developer: Demis Hassabis :-)



