No, but it'll become a hobby or artistic pursuit, just like running, playing chess, or blacksmithing. That said, I personally think it's going to take longer than 30 years.
I knew someone who reached the final stage of one of these science fairs. The project was done at a lab at an Ivy League university over a couple of summers; a relative was a senior scientist at the lab and guided them every step of the way. Not to discount what these kids are doing, but the reality is that these science fairs have largely become a contest about how well your family is connected to science-fair-friendly research facilities and how good your presentation skills are. I mean, do we really think 17-year-olds are out there doing human trials on novel cancer therapies? I'm sure there are some projects that are genuinely conceived and carried out by the students themselves, but looking at a lot of this PhD-level research supposedly done as the after-school projects of high school kids, I can't help but think the whole thing has become a bit of a farce.
OK, I actually went and read the article by Allen, and I think LessWrong's paraphrasing is a bit sloppy. Allen's actual argument is that because all Royal Navy captains (not just those on half pay) were not permanent employees, they had the right to refuse any commission offered to them. They could then reject any commands that seemed too dangerous or unprofitable. This would lead to adverse selection if there were no prize money (for capturing or sinking enemy ships, etc.) and they only received fixed wages. There would be little incentive for captains to take the unfavorable commissions. But the RN did have a very generous prize money system, and the prize money was often much greater than the regular wages. This meant that even if a commission looked pretty bad, there was still the possibility of getting rich, whereas there was no such opportunity at all for a captain sitting on shore. So captains rarely rejected commissions. It's basically akin to how startups usually offer more stock options and a chance to get rich to offset the higher risk and worse work/life balance compared to BigCos.
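To make the incentive concrete, here's a toy expected-value sketch - every number below is invented for illustration, not taken from Allen:

    # Toy numbers, entirely made up, to illustrate the incentive.
    full_pay  = 100      # fixed wages for taking the commission
    half_pay  = 50       # pay while sitting ashore without a command
    prize     = 10_000   # prize money for one valuable capture
    p_capture = 0.05     # chance of such a capture, even on a poor station

    ev_accept = full_pay + p_capture * prize   # 100 + 500 = 600
    ev_refuse = half_pay                       # 50

    # Even an unattractive commission beats staying ashore, because
    # only serving captains hold a ticket in the prize lottery.
    print(ev_accept > ev_refuse)   # True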
> There would be little incentive for captains to take the unfavorable commissions. But the RN did have a very...
I suspect the RN was quite aware of the incentives facing captains, and could think a few moves ahead. I also suspect captains understood their offers of commission weren't random - if you turned down a less desirable one, you'd need friends in really high places for the RN to offer you anything better.
Yes, it was old growth southern live oak, which is harder and denser than the oak the British used in their warships. Hence the Constitution's apt nickname of "Old Ironsides".
And the more general version, “Humanity progresses one funeral at a time.” Which is why the hyper-longevity people are basically trying to freeze all human progress.
Or Effective Altruism's longtermism, which effectively keeps everyone poor now. Interestingly, Guillaume Verdon (e/acc) is friends with Bryan Johnson and seems to be pro-longevity.
Take that moral position and extend the calculation to cover all potential generations of humans to come, and the weight given to maximising your utility now becomes vanishingly small.
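A toy version of that calculation (assuming the strictest form of longtermism, where every generation gets equal moral weight):

    # Split moral weight equally across N generations; the present
    # generation's share shrinks toward zero as N grows without bound.
    for n in (10, 1_000, 1_000_000):
        print(f"present generation's weight among {n} generations: {1 / n:.6f}")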
> somebody figured out how to make a computer do something
Well, I would argue that in most deterministic AI systems the thinking was all done by the AI researchers and then encoded for the computer. That’s why historically it’s been easy to say, “No, the machine isn’t doing any thinking, but only applying thinking that’s embedded within.” I think that line of argument becomes less obvious when you have learning systems where the behavior is training dependent. It’s still fairly safe to argue that the best LLMs today are not yet thinking, at least not in a way a human does. But in another generation or two? It will become much harder to deny.
In many ways LLMs are a regression compared to what came before. They solve a huge class of problems quickly and cheaply, but they also have severe limitations that older methods didn't have: a rule-based system gives deterministic, explainable answers, while an LLM can confidently hallucinate.
So no, it's not the linear progress arc of a sci-fi story.
> It’s still fairly safe to argue that the best LLMs today are not ... thinking
I agree completely.
> But in another generation or two? It will become much harder to deny.
Unless there is something ... categorically different about what an LLM does and in a generation or two we can articulate what that is (30 years of looking at something makes it easier to understand ... sometimes).
> It’s still fairly safe to argue that the best LLMs today are not yet thinking, at least not in a way a human does. But in another generation or two? It will become much harder to deny.
Current LLMs have a hard division between training and inference time; human brains don't - we train as we infer (although we probably do a mix of online/offline training: you build new connections while awake, then pruning and consolidation happen while you sleep). I think softening the training-vs-inference division is a necessary (but possibly not sufficient) condition for closing the artificial-vs-human intelligence gap. But that softening is going to require completely different architectures from current LLMs, and I don't think anyone has much of an idea what those new architectures will look like, or how long it will take for them to arrive.
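A minimal sketch of that division, with a toy PyTorch model standing in for an LLM (the online variant is hypothetical - it illustrates the "train as you infer" idea, not how any production system works):

    import torch
    from torch import nn

    model = nn.Linear(8, 8)  # toy stand-in for an LLM
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(x, y):
        # Offline training: today, the only place weights change.
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    @torch.no_grad()
    def infer_frozen(x):
        # Current LLM-style deployment: weights are frozen.
        return model(x)

    def infer_online(x, y_feedback):
        # Hypothetical regime: every interaction also nudges
        # the weights, the way brains mix the two modes.
        out = infer_frozen(x)
        train_step(x, y_feedback)
        return out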
Resolution: Behaving as expected. Won't fix.