I'm questioning whether we actually understand ourselves, or even whether most of us actually "understand" most of the time.
For instance, children learning language often use the correct words long before they understand them. And children without early exposure to language (and key emotional concepts) end up profoundly messed up and dysfunctional (bad training set?).
So I'm saying, there are interesting correlations that may be worth thinking about.
Example: Aluminum acts differently than brass, and the two are fundamentally different materials.
But both work harden, and both conduct electricity, among other properties they share.
If you assume that work hardening in aluminum alloys has absolutely nothing to do with work hardening in brass just because the two metals are different (even though both are metals, and both respond the same way to the same treatment), you're going to have a very difficult time understanding what's going on in either, eh?
And if you don't ask why electrical conductivity is present in both, yet different between them, you'd be missing out on some really interesting fundamentals about electricity, no?
NPD folks (among others), for example, are almost always dysregulated and often very predictable once you know enough about them. They often act irrationally and against their own long-term interests, and refuse to learn certain things - mainly about themselves, and sometimes much of anything at all. They can often be modeled as the 'weak AI' of the Chinese Room thought experiment [https://en.wikipedia.org/wiki/Chinese_room].
Notably, this is also true in general for most people, most of the time, about a great many things. There are plenty of examples if you want. We often put names on these behaviors when they're maladaptive: incompetence, stupidity, insanity/hallucinations, criminal behavior, etc.
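If it helps to make the Chinese Room framing concrete, here's a deliberately silly sketch. Every name and "rule" in it is invented purely for illustration: a rule-follower that produces convincing replies by lookup, and a black-box test that judges only inputs and outputs - exactly the kind of success criteria I mean further down.

    # Toy Chinese Room: a rule-follower that produces plausible replies
    # by lookup, with no model of meaning anywhere in the system.
    RULEBOOK = {
        "how are you": "Fine, thanks. How are you?",
        "what is 2 + 2": "4",
        "do you understand me": "Of course I understand you.",
    }

    def room_reply(message: str) -> str:
        # 'Understanding' never enters into it; we just follow the book.
        key = message.strip().lower().rstrip("?!.")
        return RULEBOOK.get(key, "Interesting. Tell me more.")

    def passes_io_test(reply_fn, cases) -> bool:
        # Black-box success criterion: only inputs and outputs are visible.
        return all(reply_fn(q) == expected for q, expected in cases)

    cases = [("Do you understand me?", "Of course I understand you.")]
    print(passes_io_test(room_reply, cases))  # True, yet nothing was 'understood'

Judged purely on input/output behavior, "passes the test" and "understands" are indistinguishable here - for the lookup table, and arguably, often, for us.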
So I'd posit that, from a Chinese Room perspective, most people, most of the time, aren't 'Strong AI' either, any more than any (current) LLM is, or frankly than any LLM (or other model on its own) is likely to be.
And notably, if this weren't true, disinformation, propaganda, and manipulation wouldn't be so provably effective.
At least not if we look at the actual input/output values and set real success criteria.
Though people have evolved processes that work to convince everyone else of the opposite, just as an LLM can be trained to do.
That process in humans (based on brain-scan studies) is clearly separate from the process that actually decides what to do; it doesn't even start until well after the underlying decision has been made. So treating the two as the same thing will consistently lead to serious problems in predicting behavior.
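A toy illustration of what I mean - a pure caricature, not neuroscience, and every name in it is made up: one process decides based on hidden state, a separate process generates the story afterward, and only the hidden state predicts the next decision.

    # Caricature of decide-then-explain: the explainer starts after the
    # decision and never sees the state that actually drove it.
    import random

    def decide(options, cravings):
        # The 'deciding' process: driven by hidden internal state.
        return max(options, key=lambda o: cravings.get(o, 0.0) + random.random() * 0.05)

    def explain(choice):
        # The 'explaining' process: runs afterward, sees only the outcome,
        # and produces a socially plausible story regardless of the cause.
        return f"I chose {choice} because it was clearly the sensible option."

    cravings = {"salad": 0.2, "cake": 0.9}        # hidden state
    choice = decide(["salad", "cake"], cravings)  # decision happens first
    print(choice, "-", explain(choice))           # narrative comes later
    # Predicting tomorrow's choice from the story fails;
    # predicting it from the hidden cravings mostly works.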
None of this means there's a single variable or piece of data somewhere in a human that can be changed and, voila, a different human.
Though I'd love to hear an argument that that isn't exactly what we're attempting to do with psychoactive drugs - albeit with a very poor understanding of the language the codebase is written in, little ability to read or edit the actual source code (let alone the 'live binary'), and all of it in a spaghetti codebase of unprecedented scale.
All in a system that can only be live-patched, and where everyone gets VERY angry if it crashes, especially if it can't be restarted.
Also with what appears to be a complicated, interleaved set of essentially differently trained models interacting with each other in real time on the same hardware.
I'm not saying humans and LLMs are the same.
Perhaps you'd care to explain how I'm all wrong?