I would answer, but I'd rather not do so publicly, because of HN's privacy policy (nonexistent) and handling of personal data (abysmal), and also because it's impossible to delete your comments.
> The productivity studies on software engineers directly don't show much of a productivity gain, certainly nowhere near the 10x the frontier labs would like to claim.
Which studies are you talking about? The last major study that I saw (that gained a lot of attention) was published half a year ago, and the study itself was conducted on developers using AI tools in 2024.
The technology has improved so rapidly that this study is now close-to-meaningless.
"The technology has improved so rapidly that this study is now close-to-meaningless."
You could have said that at any time in the last 3 years, but the data has never shown it to be true. Is there data showing that the current-gen models are so much better than the last-gen models that the existing productivity data should be ignored? I don't think the coding benchmarks show a step change in capabilities; it's generally dev vibes rather than a large change in measurements.
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
This is the entire crux of your argument. If it's false, then everything else you wrote is wrong - because all that the consumer of the book cares about is the quality of the output.
I'd be pretty surprised if you couldn't get a tutorial exactly as good as you want, if you're willing to write a prompt that's a bit better than just "I want to build a ray tracer". I'd be even more surprised if LLMs won't be able to do this in 6 months. And that's not even considering the benefits of using an LLM (something unclear in the tutorial? Ask and it's answered).
Indeed. The top-level comment is pretty much wishful thinking. At this point, if you tell a frontier LLM to explain things bottom up, with motivation and background, you usually get something that’s better than 95% of the openly available material written by human domain experts.
Of course, if you just look at top posts on forums like this one, you might get the impression that humans are still way ahead, but that’s only because you’re looking at the best of the best of the best stuff, made by the very best humans. As far as teaching goes, the vast majority of humans are already obsolete.
> ...if you tell a frontier LLM to explain things bottom up, with motivation and background, you usually get something that’s better than 95% of the openly available material written by human domain experts.
That's an extraordinary claim. Are there examples of this?
What about letting customers actually try the products and figure out for themselves what it does and whether that's useful to them?
I don't understand this mindset that because someone stuck the label "AI" on it, consumers are suddenly unable to think for themselves. AI has been used as a marketing label for decades, yet only now is it taking off like crazy. The word hasn't changed; what it's actually capable of doing has.
But, to be fair, that wasn't the kind of critique it was talking about. If your critique of guns is moral, strategic, etc., then yes, you can make it without actually trying out guns. If your critique is that guns physically don't work, that they don't actually do the thing they are claimed to do, then some hands-on testing would quickly dispel that notion.
The article is talking about those kinds of critiques, ones of the "AI doesn't work" variety, not "AI is harmful".
I don't know any engineers, reporters, or public community voices who claim GenAI is bad because "AI doesn't work; I tried ChatGPT in 2022 and it was dumb." So it's a critique of a fictional movement that doesn't exist, rather than an attempt at critiquing an actual movement.
I'm not redefining anything; that's the definition of "novel" in science. Otherwise, this comment would be "novel" too, because I bet you won't find it anywhere on Google, but no one would call it novel.
Show me these novel problems that were solved by LLMs; name more than three.
You're seriously insisting that the definition of novel in science only includes things that thousands of people have worked on for decades and haven't solved?
One example is the set of Erdős problems (see problem 124).
But also, LLMs have solved Olympiad problems; see the results of IMO 2025. You can say that these are not interesting or challenging problems, but in the context of the original discussion, I don't think you can dispute that they are "novel". This is what the original comment said:
> Not so much amazing as bewildering that certain results are possible in spite of a lack of thinking etc. I find it highly counterintuitive that simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.
I think in this context, it's clear that IMO problems are "novel" - they are applying knowledge in some way to solve something that isn't in-distribution. It is surprising that this is possible without "true understanding"; or, alternatively, LLMs do have understanding, whatever that means, which is also surprising.