The worst part is that we don't know what we don't know. That sounds trite, but in fact the scientific method is about generating a consensus among "rational, educated, intelligent people."
That doesn't mean it's correct. It doesn't even mean it's objective. The best you can get is a consensus among a subset of humans that certain things happen because of certain other things, and that certain models can predict some of these things with limited accuracy.
This turns out to be useful for human experiences, as far as it goes. But we literally can't imagine what connections we're not aware of, what formalisms and models we can't create because our brains are too limited by their evolutionary wiring, and what experiences we're not having for the same reason.
You could argue that these invisible, imperceptible things can't affect us, by definition. But we don't know that's true. There could be an entire universe of influences and abstractions we're not aware of.
And there probably is. Realistically, what are the odds that our not very large or clever brains really do have the potential to understand the entire universe?
What we think of as science is more like the gap between the smartest 1% and the rest of the population. Science is a good way to make those 1% insights sticky and useful to everyone else.
But it's highly presumptuous to assume that human cognition has no limits and that the universe fits comfortably inside our brains.
> Realistically, what are the odds that our not very large or clever brains really do have the potential to understand the entire universe?
My belief on this is not entirely rational, of course, but it seems to me that there's probably a sort of Turing-completeness for intelligence/understanding, where as soon as a mind starts being able to understand abstraction, given enough time and resources, it can probably understand the entire universe.
It would also seem presumptuous to say that brainfuck is as powerful as every other Turing-complete programming language that exists, and yet we know it to be true. The fundamental reason we can prove that Turing-complete languages are equivalent to each other is that each can simulate any other, so we can build the same abstractions in all of them; intuitively it feels like a similar principle holds for human intelligence.
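To make that concrete, the standard equivalence proof is mutual simulation: write an interpreter for one language in the other. A minimal sketch of that move, hosting brainfuck in Python (my choice of host language for illustration; tape grows on demand, cells wrap at 256):

```python
# A brainfuck interpreter in Python: one Turing-complete language
# simulating another, which is the core step in equivalence proofs.

def run_brainfuck(program: str, input_bytes: bytes = b"") -> bytes:
    # Pre-compute matching bracket positions for the loop constructs.
    jumps, stack = {}, []
    for pos, ch in enumerate(program):
        if ch == "[":
            stack.append(pos)
        elif ch == "]":
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start

    tape, cell, pc = [0], 0, 0
    in_iter = iter(input_bytes)
    out = bytearray()
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            cell += 1
            if cell == len(tape):
                tape.append(0)          # grow the tape on demand
        elif ch == "<":
            cell -= 1                   # assumes well-formed programs
        elif ch == "+":
            tape[cell] = (tape[cell] + 1) % 256
        elif ch == "-":
            tape[cell] = (tape[cell] - 1) % 256
        elif ch == ".":
            out.append(tape[cell])
        elif ch == ",":
            tape[cell] = next(in_iter, 0)   # EOF reads as 0
        elif ch == "[" and tape[cell] == 0:
            pc = jumps[pc]              # skip the loop body
        elif ch == "]" and tape[cell] != 0:
            pc = jumps[pc]              # jump back to the loop start
        pc += 1
    return bytes(out)

# The classic "Hello World!" brainfuck program, run by the Python host.
hello = (
    "++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
    ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."
)
print(run_brainfuck(hello).decode())    # prints: Hello World!
```

A few dozen lines of Python suffice to host all of brainfuck, and brainfuck could in principle host Python in return; that mutual hosting is exactly what makes the two equally powerful. The intuition above is that minds capable of abstraction might bootstrap understanding the same way.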