Hacker News | useruser125524's comments

Carmack is absolutely not the norm from a brain chemistry perspective. His level of obsessiveness is not naturally achievable for the vast majority of individuals. That whole post is very short-sighted and solipsistic.


Having a hyperactive brain often requires unhealthy coping mechanisms and eccentric fuel sources. In his case, way too much processed food, soda, etc. There's no bottomless reservoir. Some people prioritize their health; other people prioritize other things, at the expense of their personal health or longevity.


He's pretty fit - I think he's just way smarter than average and able to pair that with extreme focus (rare). It comes across in Masters of Doom.


Personally I'm pretty distrustful of documentaries. To be entertaining, they usually have to do some amount of narrative manipulation or propagandizing. There are many out there that open a small window into someone's life to make them look extremely evil or heroic while the reality is much more mundane. Some even manipulate interviews with credible people to prove the earth is flat or other nonsense. I love the book Blood, Sweat, and Pixels and find it very entertaining and inspiring, but I don't trust that it's a nuanced, objective look at the 5 stories it covers or that the characters within are as perfect irl as they come off.


Reasonable; his intelligence comes across in his tech talks too - similar to Andrej Karpathy in clarity and bandwidth.

There’s also a lot of other commentary on his capabilities outside of that one account, so it seems likely to be true imo.


Didn't that book mention how, at the time, he used to live on pizza and cola?


Ultimately we're just a collection of atoms obeying the laws of physics, but reducing away all the complexity that entails doesn't really accomplish anything.


This is my frustration, put really well. People talk about how humans are also just a form of LLM (not that the above comment did exactly that). That might even be true, but simplifying things to that degree doesn't help us actually discuss what is going on. The original comment is, as far as I know, correct: while I don't have a PhD, some of my undergraduate work was in control theory and ML, and it works really well. The underlying methods we used came from NASA in the late 70s. Surely there is something more to the field by now?


What frustrates me is when people say "neural networks cannot show intelligence because they are just a succession of linear layers and methods that have existed since the 70s". I don't understand this argument, or what it has to do with intelligence.


I’m not saying they can’t show intelligence. I am saying that the techniques are very old, and throwing more hardware at them seems to have done wonders - it’s been surprising to see how far it’s come. I don’t think simplifying human intelligence down to "we are LLMs" helps the conversation, and neither does glossing over the fact that these techniques, while clearly improved, are fairly old. From an engineering standpoint it leaves me wondering whether there aren’t more elegant solutions. Nature actually is fairly cool; it doesn’t help the convo to downplay wetware, I think.


You can look at very small creatures on earth and think "surely there is more to a human than to this ant". And sure, there is, but also, there isn't. Just as basic life evolves into more complex life, why shouldn't the underlying methods from NASA in the late 70s evolve into ChatGPT and eventually AGI?

Simplifying things to this degree can show how you get from A to B.


We can see how an ant is a more primitive form of the same kind of thing that a human is, and though they're not on the same evolutionary path, we can see how humans evolved from the most primitive forms of life, because we know what the endpoint looks like.

We don't know what 70s AI and ChatGPT will evolve into. That's why everyone keeps debating and prognosticating about it, because nobody actually knows. But whatever people mean by AGI, we don’t know if it will or can evolve from the AI platforms that have come before.

We do know that the thing that evolved into ChatGPT is entirely different from primitive cellular life, so there's no reason to believe that ChatGPT will keep evolving into some kind of humanistic, self-aware intelligence.


I’m not the only one who thinks something is missing, though. That simplification does work to show how you get from A to B, but I don’t think it’s analogous when we don’t really understand how we got to human intelligence. None of us were there. All I am saying is that simplifying even mammalian intelligence down to "just an LLM" doesn’t explain much to me. Maybe old 70s math from NASA does evolve into a mechanical intelligence? I just personally think that isn’t the case. Humans run on a lot less power overall (though biological necessities like bathrooms and food are real), and we can learn from far less text than a machine and infer from experience. That might not convince you, or you might have your own ideas about what intelligence is.


That's how you read it because that's what they said. Not sure why the original reply felt the need to add "stupid", since "So people love it" makes the same point. Mister Gotcha is probably replying implicitly to the addition of "stupid" to make a point. Except now that they've planted their stakes in the ground and are getting defensive, let the pedantic internet slap-fight commence.

