It seems to me predicting things in general is a pretty good way to bootstrap intelligence. If you are competing for resources, predicting how to avoid danger and catch food is about the most basic way to reinforce good behavior.
This would devolve into a semantic debate over what is meant by intelligence.
There is a well-known phenomenon called the AI effect: when something works, we start calling it something else, not AI. Heuristics and complex reasoning trees were once called AI. Fuzzy logic with control systems was once called AI. Clustering was once called AI. And so on…
This certainly has one root in human or carbon-based-life chauvinism, but I think there’s something essential happening too. With each innovation we see its limits, and that causes us to go back and realize that what we colloquially call intelligence was more than we thought it was.
Intelligence predicts, but is prediction intelligence?
Again, here by intelligence I mean what complex living organisms and humans do.
I still believe there are things going on here not modeled by any CS system and not well understood. Not magic, just not solved yet. We are reverse-engineering billions of years of evolution, folks. We won’t figure it all out in a few decades.
Demonstrably, humans do think; and arguably, early life would have gone down a path of simple prediction (in the form of stimulus -> response). And demonstrably, evolution did lead to human-level intelligence.
So I don’t think there needs to be a semantic debate over where in the process intelligence started. Early responses to stimuli are a form of prediction, but not one that requires thinking.
There can be much disagreement over whether prediction is at the core of intelligence, or whether optimizing the ability to predict leads to intelligence. But from the established facts, it is the case that higher forms of life were bootstrapped from lower ones, and that our biochemistry does have reward functions. Successfully triggering those rewards will generally hinge on making successful predictions. Take from that what you will.
Prediction is a huge part of what intelligence does. I was questioning “prediction maximalism.”
Intelligence is also very good at pattern recognition. Did people once argue for pattern recognition maximalism?
Biological (including human) intelligence is clearly multi-modal and I strongly believe there are aspects that are barely understood if at all.
The history of CS and AI is a history of us learning how to make machines that are unbelievably good at some useful but strictly bounded subset of what intelligence can do: logic, math, pattern recognition, and now prediction.
I think we may still be far from general intelligence and I’m not even sure we can define the problem.
I’ve heard clips of hot water being poured vs cold water, and if you heard the examples, you would probably guess right too.
Time of day seems almost easy. Are there animal noises? Those won’t sound the same all day. And traffic too. Even things like the sound of wind may generally be different in the morning vs at night.
This is not to suggest the researchers aren’t leaking data or cherry-picking examples; it seems probable they are doing one or the other. But it is to say that a model trained on a particular intersection, given a sample from that intersection, could probably predict the time of day reasonably well.
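To make that concrete, here is a rough sketch of how such a model might be trained (Python; librosa and scikit-learn assumed, file names and labels purely hypothetical):

    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def clip_features(path):
        # Average MFCCs over the clip: a crude summary of its overall sonic "texture".
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    # Hypothetical clips recorded at one intersection, labeled by hour of day.
    paths = ["corner_0700.wav", "corner_1200.wav", "corner_2300.wav"]
    hours = [7, 12, 23]

    X = np.stack([clip_features(p) for p in paths])
    model = RandomForestClassifier().fit(X, hours)
    print(model.predict([clip_features("corner_unknown.wav")]))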
Weather patterns have a daily/seasonal rhythm… the strength and direction of the wind will have some distribution that is different at different times of the day. Temperature and humidity as well, like the other poster said.
I like the definition proposed in the post: all possible mazes should be equally likely. Without that requirement, a function could return the same maze each time and still pass the criteria above.
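As a toy illustration (Python, the maze set is purely hypothetical): a degenerate generator can pass any check applied to a single output, while obviously failing the equal-likelihood requirement.

    import random
    from collections import Counter

    # Pretend there are only four possible valid mazes.
    ALL_MAZES = ["maze_a", "maze_b", "maze_c", "maze_d"]

    def constant_gen():
        # Returns a valid maze every time, so a per-maze validity check always passes...
        return "maze_a"

    def uniform_gen():
        # ...but only this one satisfies "all possible mazes are equally likely".
        return random.choice(ALL_MAZES)

    print(Counter(constant_gen() for _ in range(10_000)))  # everything piles onto maze_a
    print(Counter(uniform_gen() for _ in range(10_000)))   # roughly 2,500 of each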
My male friend once left the room with two girls I didn’t know very well; one of whom he was dating. We were all around 20 years old.
In the ten minutes he was gone, they looked up his sign and his girlfriend decided they were not compatible and literally dumped him when he got back. I guess with the lens of time, maybe there was more going on, but at the time it seemed like this was solely based on the advice of the astrology website. I was flabbergasted and have had a very negative opinion on astrology ever since.
You make the bubble after you are already moving in the direction you want to go. The bubble will keep moving. What else would it do? Out of all the questions, this is not a hard one…
Which raises the spooky question: is it the ship causing the bubble (space-time) to move, or vice versa? Typically space-time tells objects how to move, not the reverse. It would be disconcerting to discover FTL but find that the journey was fated from the beginning.
If you have to spend the entire combined annual energy output of all of humanity getting up to speed, why convert the equivalent of 1% of the mass of the sun into energy to create and maintain a warp bubble?
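Back-of-envelope, just to show the scale gap (the figures are my assumptions: roughly 6e20 J for humanity’s annual energy output and 2e30 kg for the sun’s mass):

    # E = m * c^2 for 1% of a solar mass vs. humanity's annual energy output.
    annual_human_energy = 6e20                    # joules, rough estimate
    bubble_energy = 0.01 * 2e30 * (3e8) ** 2      # ~1.8e45 joules
    print(bubble_energy / annual_human_energy)    # ~3e24: the bubble dwarfs the boost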
But in this case the testing is given different amounts of time.
The thing you think works gets less testing time than the thing you aren't so sure works.
Thus the thing you think works is more likely to pass, just because you are subjecting it to fewer tests.
Your bias (whether you think the thing works) is having an effect on the outcome.
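A quick sketch of the effect (Python; the 10% per-test catch rate is an arbitrary assumption):

    # If each independent test has a 10% chance of exposing a flaw,
    # the pass probability depends heavily on how many tests you run.
    p_catch = 0.10
    for n_tests in (3, 10):
        print(n_tests, "tests -> pass probability", round((1 - p_catch) ** n_tests, 2))
    # 3 tests -> 0.73, 10 tests -> 0.35: same flawed thing, fewer tests, better odds.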
Good testing, as with good judging, should involve zero preconceptions.
Yes, it could be that the judge has a good eye for how long a topic will take, but leaving less time for the facts to come out necessarily means the facts are less likely to come out.
According to spec, when someone uses OAuth to try to log into an existing account for the first time, you must require the user to log in through their normal method and then prompt them to link the new login method to their account.
However, the identity provider cannot force you to do that, and there are many examples of apps which do not follow this part of the spec.
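A minimal sketch of the flow I mean (Python; the account store, names, and return values are all hypothetical, not anything mandated verbatim by a spec):

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        email: str
        has_password: bool = True
        linked_identities: set = field(default_factory=set)  # {(provider, subject)}

    # Hypothetical in-memory store standing in for a real user database.
    ACCOUNTS = {"alice@example.com": Account(email="alice@example.com")}

    def handle_oauth_login(provider: str, subject: str, email: str) -> str:
        account = ACCOUNTS.get(email)
        if account is None:
            # No existing account: create one tied to this OAuth identity.
            ACCOUNTS[email] = Account(email=email, has_password=False,
                                      linked_identities={(provider, subject)})
            return "logged_in_new_account"
        if (provider, subject) in account.linked_identities:
            return "logged_in_via_existing_link"
        # Existing account that has never seen this identity: do NOT silently merge.
        # Require the normal login method first, then prompt the user to link.
        return "require_normal_login_then_prompt_link"

    print(handle_oauth_login("example-idp", "sub-123", "alice@example.com"))
    # -> "require_normal_login_then_prompt_link"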
I could have sworn I have seen this in the past, but I am not sure exactly where. Thinking about it, it probably would have been part of OIDC and not directly addressed by OAuth... maybe someone can find it for me, or maybe I misspoke when I said it was part of the spec.
I've checked the OAuth 2.0 Security BCP, the 2.1 draft, and OIDC, and none of them seemed to cover that. Perhaps it could be under ongoing discussion on the 2.1 mailing list? I only checked their GitHub issues and found nothing relevant.
Are you questioning whether physics is computable? Even if it is not fully computable, we must be able to approximate it quite well.
Suppose we scan not just the person but a local environment around them, and simulate the whole box.
The update that occurs as the person sits in the room involves them considering their own existence. Maybe they create art about it. If the simulation is to produce accurate results, they will need to feel alive to themselves.
We agree we can simulate an explosion and get accurate results; if we can’t get an accurate simulation of a person, why not?
I disagree, on the grounds that computers are able to simulate many different things; by that logic, it would be just as easy to say that the universe cannot have curiosity; that the universe is just things playing out according to some preset rules.
But of course, people exist within the universe, and while our brains do function according to those rules (likely all rules that can be expressed with math and statistics), we do have curiosity. You can look at the universe at that level of abstraction and you will not find subjective experience or choice, yet the “platform” of the universe demonstrably allows for them.
When I see arguments like yours and the parent’s, I cannot help but think the arguments would seem to apply just as well to an accurate simulation of the universe, which shows the argument must be flawed. You are a file in the universe, loaded into memory and being run in parallel with my file. If you believe physics can be expressed through math and that humans have subjective experience, then the right kind of simulation can also have these things. Of course, any simulation can be represented digitally and saved to disk.
> it would be just as easy to say that the universe cannot have curiosity; that the universe is just things playing out according to some preset rules.
Behavior/dynamics depends on structure, and structure is scale dependent.
At the scale of galaxies and stars, the universe has a relatively simple structure, with relatively boring dynamics mostly governed by gravity and nuclear fusion.
When you zoom down to the level of dynamics on a planet full of life, or of a living entity such as a human, then things get a lot more interesting since at those scales there is far more structure causing a much greater variety of dynamics.
It's the structure of an LLM vs the structure of a human brain that makes the difference.
But I am saying that is the wrong comparison. The LLM doesn’t need to implement a human brain directly. It needs to implement a sophisticated enough simulation of people that the simulation itself contains “people” who believe in themselves.
I don’t know that LLMs do that, but they are excellent function approximators, and the laws of physics that allow my brain to be conscious can also be considered some function to approximate. If an LLM can approximate that function well enough, then simulated humans would truly believe in their own existence, as I do.
And it isn’t really important whether or not consciousness is part of the simulation, if the end result is that the simulator is capable of emulating people to a greater extent.
If you wanted to build a behavioral simulation of a human, and knew how to do it, then what advantage would there be to trying to get an LLM to emulate this simulator?!
Better just code up your simulator to run efficiently on a supercomputer!
The LLM teaches itself the rules of the simulation by getting so good at predicting what happens next.
Presumably, running a human simulation by brute-forcing physics at a scale large enough to represent a human is completely infeasible, but we can maybe conceive how it would work. LLMs are an impressive engine for predicting “next” that is actually computationally feasible.
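For what it’s worth, “learning to predict what happens next” is mechanically just this (a PyTorch sketch; the tiny model here is a toy stand-in, not a real transformer):

    import torch
    import torch.nn.functional as F

    vocab_size, seq_len, batch = 100, 16, 4
    tokens = torch.randint(0, vocab_size, (batch, seq_len))   # stand-in for real text
    model = torch.nn.Sequential(                              # toy stand-in for an LLM
        torch.nn.Embedding(vocab_size, 32),
        torch.nn.Linear(32, vocab_size),
    )

    logits = model(tokens[:, :-1])                 # predict token t+1 from token t
    loss = F.cross_entropy(
        logits.reshape(-1, vocab_size),
        tokens[:, 1:].reshape(-1),                 # the "what happens next" targets
    )
    loss.backward()                                # minimizing this is the whole objective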