
What evidence for "AI models the human brain" do you want? Isn't a neural network pretty clearly a simplified model of the working of the human brain? What is there to prove?



Neural networks are not a model of the working of the human brain. They are based on an extremely simplified approximation of how neurons connect and function (which while conceptually similar is a terrible predictive model for biological neurons) and are connected together in ways that have absolutely zero resemblance to how complex nervous systems look in real animals. The burden of proof here is absolutely on showing how LLMs can model the human brain.
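
To make "extremely simplified" concrete: the basic unit in these networks is roughly the sketch below (a toy Python example of my own; the numbers are made up).

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # The standard artificial "neuron": a weighted sum of inputs passed
    # through a fixed nonlinearity. No spikes, no neurotransmitters,
    # no dendritic geometry -- just arithmetic.
    activation = np.dot(weights, inputs) + bias
    return max(0.0, activation)  # ReLU nonlinearity

# Illustrative values only.
print(artificial_neuron(np.array([0.5, -1.2, 3.0]),
                        np.array([0.1, 0.4, -0.2]),
                        bias=0.05))
```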


> They are based on an extremely simplified approximation of how neurons connect and function (which while conceptually similar is a terrible predictive model for biological neurons) and are connected together in ways that have absolutely zero resemblance to how complex nervous systems look in real animals.

Well then you already think it’s a model. Being a simplified approximation makes it a model.

Just as I said in another comment, an SIR model also models the infection behavior of COVID in humans, even though it is extremely simplified and doesn't even look at individuals. It's basically just a set of differential equations that produce a curve that looks like infection numbers. But that is exactly what makes it a model. It's a simplified abstraction.
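
For anyone who hasn't seen one, an SIR model really is just three coupled differential equations. A minimal sketch (my own toy version; the parameter values are invented for illustration):

```python
def simulate_sir(beta=0.3, gamma=0.1, i0=0.001, days=160, dt=1.0):
    # Susceptible -> Infected -> Recovered, as fractions of the population,
    # integrated with naive Euler steps:
    #   dS/dt = -beta * S * I
    #   dI/dt =  beta * S * I - gamma * I
    #   dR/dt =  gamma * I
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# The infected fraction traces the familiar rise-and-fall epidemic curve.
peak_infected = max(i for _, i, _ in simulate_sir())
print(f"peak infected fraction: {peak_infected:.3f}")
```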


Neural networks are much closer to modeling a brain than other approaches to AI, e.g. symbolic reasoning. There will always be differences (it's a machine, not meat), but it's fair to say the approach is at least "brain-like".

Your position sounds like a No True Scotsman fallacy.


Sorry if it came across as non-falsifiable, that was not the intent.

Neural networks do not directly encode high-level reasoning and logic, yes. But on the spectrum of "does this model the actual functioning of an animal/human brain", they lack both a first-order model of how biological neurons and neural chemistry behave, and anything like the multiple levels of structural specialization present in nervous systems. That's the basis for my argument.


That's true, but we also don't know that the multiple levels of structural specialization are necessary to produce "approximately human" intelligence.

Let's say two alien beings landed on earth today and want you to settle a bet. They both look weird in different ways but they seem to talk alike. One of them says "I'm intelligent, that other frood is fake. His brain doesn't have hypersynaptic gibblators!" The other says "No, I'm the intelligent one, the other frood's brain doesn't have floozium subnarblots!"

Who cares? Intelligence is that which acts intelligent. That's the point of the Turing test, and why I think it's still relevant.


I think we are arguing on different tracks, probably due to a difference in understanding of ‘model’.

There are arguments to be made, including the Turing test, for some sort of intelligence and potential equivalence for LLMs. I am probably more skeptical than most here that current technology is approaching human intelligence, and I believe the Turing test is in many ways a weak test. But for me that is a different, more complex discussion, one I would not be so dismissive of.

I was originally responding to the claim “isn’t a neural network a simplified model of the working of the human brain”. A claim I interpreted to mean that NNs are system models of the brain. Emphasis on “model of the working of”, as opposed to “model of the output of”.


AI fanatics claiming to know SHIT THAT TOP-TIER NEUROSCIENCE HAS NO FIRM IDEA OF.

We know there are neurons and that electrical signals pass between them. That's it. Literally everything else about your claim is bogus nonsense.


One clear piece of evidence would be ruling out "AI models the corvid brain" or "AI models the cephalopod brain" which might narrow it toward the human brain.

That it's functionally impossible to do either leads me to believe that "it models some form of intelligence" is about the best we can prove.


I don’t understand the standard of modeling you seem to assume.

Modeling a human brain, a cephalopod brain and a corvid brain aren’t even mutually exclusive if your model is abstract enough.

When I say "a neural network models a human brain", I'm talking about the high-level concept, not the specific structure of a human brain compared to other brains. You could also say that it models a dog's brain if you wanted to. It's just the general working principle that is kind of similar. Does that not count as a model to you?

Edit: Here's a simple example: I would say that a simple SIR model "models COVID infection in humans". But that doesn't mean it can't also model pig flu in pigs. It's a very abstract model, so it applies to a lot of situations, just like a neural network basically models the brain of every reasonably advanced animal.


I think a lot of people don't abstract their brain model when they say "models a human brain", or they'd say "models biological intelligence", etc. Specifically, I don't think there are any human traits in LLMs other than having mostly been trained on human outputs. They see tokens and predict tokens; very different sensorium from humans. There aren't any specific corvid or cephalopod traits either afaik.

Biological brains don't use gradient descent and don't seem to use 1-hot encoding/decoding for the most part.
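
For concreteness, here is roughly what those two mechanisms look like in code (a toy sketch of my own; the vocabulary and numbers are made up):

```python
import numpy as np

# One-hot encoding: each token becomes a sparse indicator vector.
vocab = ["the", "cat", "sat"]
token = "cat"
one_hot = np.array([1.0 if w == token else 0.0 for w in vocab])

# Gradient descent: nudge weights against the gradient of a loss.
# A single linear "neuron" with a squared-error loss, for illustration.
weights = np.zeros(len(vocab))
target = 1.0
learning_rate = 0.1

prediction = weights @ one_hot                    # forward pass
gradient = 2.0 * (prediction - target) * one_hot  # d(loss)/d(weights)
weights -= learning_rate * gradient               # the update step

print(weights)  # only the weight attached to "cat" has moved
```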


Pointing out differences doesn't mean it's not a model; that's what makes it a model and not a replica. Saying "a neural network is a model of the human brain" doesn't imply that it's a direct simulation of the structure and scale of a human brain, it just means that the neural network is based on a simplification of how neurons in a brain work. That's the entire claim.


How about "hallucinations"? They are exactly what students produce during exams when they don't really know the subject: plausible-sounding but internally incoherent sentences.


> Modeling a human brain, a cephalopod brain and a corvid brain aren’t even mutually exclusive

Take an existing implementation, ChatGPT4 or whatever: is it closer to the brain of a rat or of Albert Einstein?

If you are not sure, then, well, it just has some sort of intelligence, not a 'model of a human brain'.

I would wager it’s closer to a rat.

Also, the phrase implies that we understand the difference between a human brain and the brain of an elephant. For some reason humans are more capable, and it's not just size. At the moment we don't understand why.


It's probably closest to modeling a housefly, but that doesn't mean it's not also a model of a human brain. Being a model doesn't require that it exactly captures every aspect and scale; it means that it tries to approximate the working principle. Just like an SIR model doesn't really model how an infection in an organism works, but it still models the infection behavior of COVID between humans.


IMHO, it's clear that no human can learn like AI. AI has already outperformed humans by a huge margin in some areas, while its performance is laughable in other areas.

Also, it's obvious that our brain is not built like AI models.

There are similarities between humans and current AI models, but there are also huge differences, which don't allow us to easily map one onto the other.


Of course, that’s what makes it a model, not a digital twin of a human brain. But nobody claimed that, so having a roughly similar working principle is enough for me to call it a model of a brain.


No, it's not a model of a human brain, just like a bicycle is not a model of human legs. It's artificial intelligence. There are similarities, but we cannot use current AI models to study the human brain: they're useless for that job.

We can create a model of the human brain (an Artificial Brain) using a bunch of AI models, of course, but that hasn't been done yet.


Well, it's a model just like a Mindstorms NXT (this guy: https://www.lego.com/cdn/product-assets/product.img.pri/8547... ) is a model of human bipedal walking. It's very simplified and basic, but the basic idea is there.

A model doesn't have to be useful for studying the thing you're modeling; you can also just be interested in the output because it's simpler than using the original thing. Modeling a human brain with a neural net and using it is simpler than directly simulating a human brain, so that's what we do. Not being useful for studying the human brain doesn't mean it's not a model.


It's a model of a humanoid robot. It's not a model of a human.


A model of a model of X is also a model of X. It absolutely is a very simple model of human walking. You're just using "model" in a very narrow sense that excludes many things that are commonly called models.


Models are used as orders-of-magnitude-cheaper substitutes for the real thing in learning, predicting, and so on, which is known as "modelling".

AI, in its current state, is not good enough to serve as a substitute for a human, or a human brain, but it is good enough to serve as a substitute for human-level intelligence. At this point, we are able to model the brain of a fly.

It looks like you are conflating scheme, model, and similarity.

A model needs a map to transfer knowledge in both directions: from the real thing to the model, and from the model back to the real thing, while in a scheme, knowledge is transferred in one direction: from the real thing to the scheme. The toy humanoid robot is just a schematic representation of a real human.

Moreover, similar things are not models of each other. Apes are not models of humans and vice versa.


It just depends on your definition of a model, but to me, a neural network is modeled after how a human brain works. If apes were man-made, I would also count them as a model of a human.


What's wrong with my definition of a model?

We know how to produce apes.


Well, apes know how to produce apes, we mostly just give them some privacy.


We can clone animals and grow them in a lab.



