We are now a few years into this latest "AI revolution." I keep hearing about how AI is coming for all our jobs. But people who actually try to use AI for coding largely don't seem convinced. Can anyone cite any smashing successes for AI? Are there any unicorn startups, or candidates to become one, that are staffed by say 5 really savvy users of AI? A story like that would lend real credibility to the claims that software engineering is becoming a domain of prompt engineering.
Coding is a specific use case. It's more common to see obvious shrinkage in e.g. HR and customer service departments. It didn't start with the LLM flavour of AI, but it has been accelerated.
In other places, look for hiring freezes and re-orgs, rather than a straight and obvious message from management: "Say bye to Bob, he's been replaced with this AI agent".
To everybody's point, not a new thing - automation, including what many of us have worked on for decades, is by definition about simplifying, making things more efficient, or reducing human effort.
But to your specific question - HN seems to have a couple of excited stories a week about a person saying "I don't really know coding, but I've started my tiny niche SaaS with the help of AI". Which is not necessarily a bad thing, and more along the curve of all the other advancements - perhaps the question is the speed of improvement (replacement) and change on this particular one.
I'm not the expert / one of those people who have done it, but my thoughts FWIW:
The phrase "don't really know" is kind of doing the heavy lifting here - for example, I myself haven't done professional programming in 20 years, but I know the basics. Enough to be structured in my design / requirements / direction to LLM, and to read and comprehend the advice others have discovered and shared. With that in mind, extending the code seems to work fairly well - start with basic requirements and extend them through LLM. Having not done coding in 2 decades, I've asked ChatGPT to create a python script to interact with a website's API and retrieve an object; then I've extended it to retrieve and store multiple objects, then to input for parameters, etc. it was fun!
Anyhoo, the use case discussed is not necessarily a complex enterprise resourcing application, but a little niche webapp with simple requirements, created by somebody who understands the domain itself more than coding, and which may be exciting to a target audience - that seems eminently doable with today's LLMs. Again, the proof is in the pudding of people who have done it, rather than my claims :)
If the code works (well enough), do you need to debug it? And if you can get the LLM to debug/extend it for you good enough, does it matter if you understand it?
The problem isn't what AI can do today, the problem is that it isn't slowing down in what it can do. We keep hearing that AI will stop advancing any year now, but it just keeps getting better.
But ya, I see the future for SWEs as not only being able to code, but being able to review code created by AI, and being able to write prompts to get AI to generate code.
But I don’t think the advent of LLMs is responsible for any job cuts in these companies. The overall startup ecosystem is (Stripe, cloud computing, efficient customer support software, etc.).
But the fact that Cursor is one of those impressive companies shows the demand for AI assistance is huge
Instagram was 10 people and $1B at acquisition. Cursor is a modified version of an existing monolith maintained by Microsoft. Much of their AI is offloaded and their own bespoke model is relatively light (cursor-tab). It's not surprising it can be run with 20 people. I can totally see an incompetent CEO running today's Cursor with 300 people. But it's a leadership style, not so much down to programming efficiency. I will concede that the programmers at Cursor are more efficient than they would be without access to LLMs.
People aren't getting fired and 1-1 replaced by an AI agent. It's making people more productive so you can do more with less. I don't know why this needs to be constantly repeated.
You might be repeating it because it's a baseless assertion. Where's the proof?
OpenAI seems to be hiring lots of people [1]. Should they at least be able to do more with less?
Can you answer my question about an "AI native" company that never even had people to replace?
Some quick searching says OpenAI had a headcount of 3,500 employees in 2024. It also had a valuation of $150 billion. Another search says $150 billion companies tend to average 10,000-50,000 employees. So it does seem like they’re doing more with less. A different search says the average company with $3 billion in revenue has 5,000-10,000 employees. Either way OpenAI seems to be running leaner than average.
Ah so the company that produced the most cutting edge technology needs to hire more people to stay ahead.. and that's proof LLMs aren't making most of the programmers in the trenches more productive. Let's see if an LLM can reason better than you do.
(You're Alice btw) :
Alice's response has a few issues:
Shifts the Goalpost:
Bob's point was about AI's impact on existing jobs—he's arguing that AI increases productivity, not that companies like OpenAI don't need people. Alice shifts the discussion to whether OpenAI, an AI company, is hiring people, which isn't directly relevant to Bob's claim.
Strawman Argument:
She implies that if AI is truly productive, OpenAI shouldn't need to hire. But OpenAI is in a high-growth phase, likely hiring to build and maintain the technology, not to replace traditional roles with AI. AI productivity doesn't mean zero hiring—it often means hiring in different areas or scaling faster.
Burden of Proof Misstep:
Alice demands proof from Bob while making her own claim ("baseless assertion") without providing evidence that AI is directly causing 1:1 replacements.
Weak "AI Native" Point:
The question about "AI native" companies might sound clever, but it doesn't address Bob's core argument about existing companies using AI to do more with fewer people.
Alice's frustration is understandable, but her argument misses the mark because it's more about OpenAI's hiring practices than the broader point of productivity gains through AI.
I can't tell if you even read through these or just pasted them. I (Alice) supposedly made my own claim without providing evidence that AI is directly causing 1:1 replacements. I think you (really whatever "AI" tool you used without much critique of its output) have this entirely backwards. That's the most glaring, but not the only, issue with all of this. Sort of proving my point here...
You don't seem to understand the basic reality of most software development jobs which basically anyone with over 100 IQ can do with like 6 months of training from scratch.
Most software developers don't work for Google. They don't need a degree. They are making basic crud applications or some other apps that are chaining APIs or libraries together.
The difference between a junior and a senior in these kinds of jobs is productivity and code quality/maintainability. If you don't think current-gen LLMs help with that, you are delusional.
Programming has a very long history of tools that make people more productive so they can do more with less, with it always leading to more programming jobs. This is also constantly repeated.
> People aren't getting fired and 1-1 replaced by an AI agent.
Tell that to all the customer service AI chatbots. They don't need as many real CS agents anymore because one chatbot can reply to hundreds or thousands of tickets with useless garbage copy-pasted from the FAQ. Given that even most real CS agents also only do that nowadays anyway, yeah they've been replaced.
This tends to be incredibly effective at CS's real job, which is to keep customers with problems from ever reaching the actual consumer-unfriendly decision-makers. If you start looking for this you will see it absolutely everywhere. Chatbots are really, really good at making people too frustrated and exhausted to pursue other routes.
There are a few companies whose CS actually still is good (like Mila Cares!!!) but in my experience most CS just wants to get rid of the annoying customer.
Also, sort of related (to me, blame my autism), I've heard that the practical effect of a suicide hotline can be annoying the caller enough for them not to do the thing. I've had at least one or two friends tell me that this was the effect it had on them. (They're doing better nowadays.)
> Most anti-AI arguments can be dispensed with by recasting them in terms of the broken-window fallacy. This is certainly one of them.
Could you explain how and what fits the broken-window fallacy here? Is it the AI abolitionists? (I know at least one person who violently hates AI and anything relating to AI and firmly believes that there is absolutely no place for AI or LLMs anywhere in the world and that any usage at all for any reason whatsoever is utterly deplorable. I don't talk to this person since they tend to flame at others for using or talking about AI for any reason.)
The people working in CS now, as you originally pointed out, are already under strict orders not to say anything an AI chatbot wouldn't say, and are not empowered to do anything the company wouldn't empower an AI chatbot to do. So why are we employing humans to do a chatbot's job? Just to give them something to do?
That's where the broken-window fallacy comes in. It's not just a matter of paying someone to break all the glass in town to make work for the glaziers, it's like responding to the invention of cheap shatterproof glass with the same flawed zero-sum reasoning.
(I'm rate-limited so can't reply, but my position is basically that 100% unemployment is a good thing, not a bad one. We can't get to a post-scarcity society by doing things the same way we've been doing them, or by retrying the same alternatives that have failed before.)
IME, simple labor farms (that you get paid actual money for, anyway...) are hard to find in the US because of like, worker protections. Are there any entry-level jobs besides CS that can be done with essentially no thought or reasoning whatsoever? I'm not asking this to be insulting, I'm saying there is a type of person who could be good at that type of CS but doesn't have the skill, attention, or energy to do much of anything more. Those people still deserve a way to be useful to the world, I think.
I keep seeing cashiers replaced by self-checkout machines at stores, fast food jobs might be too complicated for certain people (like me - I couldn't do it), places like say the Apple Genius Bar require you to know what you're doing, etc. Maybe I'm super naive and first-world by missing something super obvious, but if I am then maybe I could learn something today.
If you think trying to debug a hallucinated code block is more productive than understanding how to write it in the first place, I have to think you're one of the stupid ones.
Personal anecdote but I don't debug hallucinated code blocks. I only use LLM to speed up what I would've already written myself. I'm autistic enough to where it's not easy for it to introduce subtle bugs because I still have all the constraints in my head. Rather than having it solve any problem for me I approach it from the angle of just save me some typing of what I already know I want to write.
I use AI “voraciously”, coding every day. Even the best models are not very good. I found that if something is hard to do, it’s because it’s not documented properly, and therefore it’s not in the training data anyway, so the model can’t know it - it can just guess. If it is documented properly, then I don’t need its help so much.
I barely get 1 line of code that I’d actually ship unchanged. It’s really useful but still has a long way to go
> Anyone skilled enough to code can now code in any language.
That was always true. Once you learn how to program, it doesn't take a ton of effort to learn another language. It takes more time and effort to master a new language, but that's still just as true as it ever was.
We have a notorious junior LLM addict on the team. All experienced engineers are extremely annoyed reviewing his code, due to the endless stream of merge requests containing repetitive code which often plainly doesn't work in anything other than the happiest of scenarios, and the fact that it doesn't work can be seen from space (i.e. no testing needed, it's obvious while reviewing).
While every single developer now uses LLMs, the outcome really depends on how one is using them.
I work at a mag 7 company with teams of SWEs who don’t use LLMs for their work at all (because they suck and have data privacy issues), and some have successfully resisted having their work consumed into an LLM. We read docs and apply paradigms we have learned; we are fast at writing correct code, so it doesn’t offer any benefits.
Software engineering is 10% knowing how to code and 90% knowing what to code. So I don't buy it either. I use LLMs daily for programming. Sometimes they're helpful. But I see them like one of our first power tools (comparable to the language servers and compilers in our toolbelt).
At BigCos, software engineering is also spending 70% of your day in meetings or handling housekeeping chores in JIRA / Monday / Airtable / Notion / GDocs. I’d love to send an AI agent to meet with PMs to give status updates and take notes if any of their needs change if it could actually do those things efficiently and reliably. Right now, it’d be about the equivalent of having a PM call a phone tree system and leave a voicemail that might get transcribed 85% correctly.
Reminds me of the Google Translate model that eventually invented its own internal intermediate language. Which would be a nice touch in this kind of application.
I wouldn't go this far, it's a really hit-or-miss thing, but as someone who is seeing it heavily integrated... Our days are numbered. That number, in my opinion, is closer to a decade, but I have folk at a junior level building out, in a couple of days, entire internal tools that would have taken weeks.
The operations crew is having it automate metrics rapidly. Senior developers are increasing their throughput rates. Things come back to code review in a generally better state because folk say "hey, review this for me".
Is it a real developer? Not at all. Is it affecting our hiring or anything? Nah. But it's already a huge help and rapidly getting better with new models (o3 / deepseek), tooling (CSP and agents), and integration (cline, cursor, memory prompting).
Not saying obviously so take it from this pizza fan that I've seen some incredible advances.
Example from last week: I pasted my Terraform that used internal custom modules, and prompted that, based on this code, it should build additional services not yet built, using the module outputs so they could all communicate.
First run. 100% accurate. Zero typing after paste and prompt. 2 minutes of time.
It probably saved me an hour at least of looking up module output and documentation with the additional services.
It may be minor or simple yet I now have an additional hour.
These things only get better with time, not worse, and they can already do a lot.
Otherwise we might as well predict that when all is said and done, AI will have no more impact than the fax machine, which is what someone once thought of the internet, and which seemed a lot more reasonable on dial-up AOL than in today's world.
The nature of the thing is that people are dis-incentivized to share their trade secrets when there is competitive advantage in not sharing.
What the competent people are saying is often overshadowed by the echo chamber of misleading information, since there is profit in driving people towards chaos.
Time and time again, I remind myself to follow the "running away from the bear" strategy, especially when I'm pessimistic about my career and future.
Just like you don't need to be able to outrun the bear, it's enough to outrun your friend, it's okay to be theoretically replaceable by AI, as long as I'm not the most obvious person to be replaced (at least that's what I tell myself).
Companies move slowly, I just hope they move slowly enough for me to provide a good life for my family as a software developer.
This is a motivating and less depressing outlook on the future, as it encourages me to learn things better, and that feels good for me.
I have many thoughts on AI, sometimes excited, sometimes frustrated, sometimes worried, but I hadn't seen this idea phrased like this before, so I thought I'd share it.
Do more than you think you will need to do. The economy for software engineer hiring is changing rapidly and will continue to do so for the foreseeable future for reasons independent of AI.
I got shaken out quickly (but am planning an epic comeback). The math doesn’t work out favorably so being ambitious is probably a good idea. But if things get too advanced then it also gets easier to automate an entire company, and enough automation means more viability for solo ventures. That ambition comes in handy there as well
If you run away from a brown bear, they'll chase you and your friend stuck in Freeze-or-Fawn will outlive you. So the question is, is AI a brown bear or a black bear? Is it better to let everyone else run around screaming and stand still, or to book it?
An LLM is incapable of performing an inquiry. Try making it do one. For instance, tell an LLM that you want it to test a certain product and that you will be its eyes, ears, and hands. Then proceed as if you don’t know anything about testing. Do not correct its decisions.
It will probably not ask you any questions, but if it does and you answer, it will not ask follow-up questions, or if it does, it will lose track of your answers or non-answers. It does not maintain situational awareness. It does not speculate on your state of mind or competence as you help it.
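If anyone wants to try this concretely, here's a rough harness for the experiment (a sketch only - it assumes the openai Python client and uses "gpt-4o" as a placeholder model name; adapt to whatever model you actually use). Play the naive operator, answer only what is asked, and watch whether the model ever sustains a real line of inquiry:

    # Rough harness for the "be the LLM's eyes, ears, and hands" experiment.
    # Assumes the openai Python client (>= 1.0) and OPENAI_API_KEY in the
    # environment; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system", "content": (
            "You are testing a product. The user is only your eyes, ears, "
            "and hands: they report observations and perform the actions "
            "you request. You must drive the inquiry yourself."
        )},
        {"role": "user", "content": "I'm ready. What should I do first?"},
    ]

    # Play the naive operator: answer only what is asked and never correct
    # the model, then watch whether it asks follow-up questions or loses
    # track of earlier answers.
    for _ in range(10):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        text = reply.choices[0].message.content
        print("LLM:", text)
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": input("You: ")})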
By the time AI can replace me (a decent but not spectacular programmer), it's not going to be that far from being able to improve itself. At that point, all bets are off. Good luck trying to outwit a self-improving AI that is smarter than you!
It will eventually be possible to compress everything inside an org down with AI until almost nothing remains that’s not automated - but the boundaries of an org (where it deals with customers, vendors etc.) require accountability and that requires humans.
You can avoid your job being eaten by AI by moving toward roles where you talk to people, understand their problems, and perform the work of translating that into solutions.
There will also always be an orchestration role no matter how much automation is thrown at a problem.
- Who debugs that AI code?
- OK, say it’s an AI. Now who fixes the auto-debugger when it breaks?
- Who makes decisions about rebuilds and migrations and big platform shifts?
- Sure maybe that decision is informed by advice from AIs but someone with accountability has to make the call before the wheels are put in motion for the rebuild or migration or whatever.
Most people on HN work in software, but I work at the intersection of software and hardware. The software for me is simulation software, but it could alternatively be real-time software for a product. The relevant difference between software alone and software+hardware is that LLMs can't do an experiment in the real world. An LLM might tell you to do something with hardware or write a simulator, but the results of both are really just predictions that need to be checked in the real world.
I'm a theory and simulation guy, but in retrospect I should have done far more experiments when I was in training. I guess it's never too late to start...
I work in HR for a large company. A third party system was chosen to use for part of the HR functions. This third party system uses an integration system that involves dragging and dropping components and doing some config work (think of Nifi, or, at a super basic level, Scratch). So, it is all UI work, very little to no written code. It will be a long long time before this kind of work will be automated by LLMs. We are still in the process of building out hundreds of these integrations. Just the building out will take years. Then there will be even more years of maintenance. Large companies are like large ships, they pivot exceedingly slowly. I figure I can maintain this system for a long time. That’s one way to get job security.
I am surprised by this: in my experience, LLMs excel in configuration of software far more than they do in writing it. And there are tools emerging almost daily for AI-powered browser and native application automation. So I don't see how this will create any kind of moat.
Because after spending years building it out, the company will be loath to redo all the work, especially since it is so tightly integrated with the third party system. They won’t replace it until they replace the third party system someday, but because they are just now starting to use the third party system, it will be many years before they abandon it to move on. I’ve worked at large companies for decades. Switching major systems like these takes many many years. If you can get involved with setting it up and maintaining it, you are almost guaranteed job security for 10-15 years.
Consumers will quickly replace these slow companies with products from new lean fast companies using AI at the core. The old large companies will just disappear.
I feel it's a bit like with self-driving cars: until they can bring me safely to any destination without intervention, they're not that useful. Currently LLMs are at the stage where I can briefly let go of the wheel but still need to be very focused, maybe even more focused. And while they improve at a fast rate, it's not certain they'll ever fully get there. In the meantime they're at best a slight productivity boost, saving me some keystrokes.
"So what kind of programming work would be the opposite of this?
* Problems are ill-defined and poorly-scoped
* Solutions are difficult to verify
* The total volume of code involved is massive
In my view, this is describing legacy code: feature work in large established codebases."
If you have used cursor.ai to try to create a moderately sized project you'll see this happen even with newly generated code.
In my experience, if you limit yourself to prompts that aren't well thought through and don't work on getting a deep understanding of the generated codebase, the LLM will start duplicating the same code flows in different ways, many times forgetting some of the behaviour already implemented. It's kind of like having dozens of developers working on the same codebase, clueless about what the others have done, re-implementing the same functionality until the code turns into a pile of spaghetti.
It can be done but:
* You must have a deep understanding of the code
* You need to think hard about what you are doing and give very detailed instructions to the AI
It works for trying a quick prototype, but when moving on to production-grade code you need to slow down and "program" step by step, providing precise instructions as you go. You'll have to design the changes down to the minor details, and then you can let the AI do the grunt work.
I'd like to see a bit less fearmongering about how our bosses won't need us anymore and a bit more about how maybe we won't need our bosses anymore.
Most of us are already accountable for outcomes, not outputs. Perhaps we could go further in that direction--but doing so only makes personal sense if the underlying work that you're doing is important to you. If AI is about to make us all 10x coders, why should we keep the jobs we have when we could take that extra capability and go do something more meaningful--the kind of something that used to require a 10-person company?
I'm personally pretty happy with my company, but my point is that once everybody gets more productive, what's the likelihood that everybody who still has a job after the transition still wants that job now that doors which were previously closed are now open?
It's gonna be a bigger reshuffle than just taking more ownership over our existing domains.
"a fast and effective way to have a multi-million-token context window"
This truly is the challenge - both to have the huge context window and the ability to conduct coherent and comprehensive reasoning using the entire context. We should see soon whether there is a Moore's law effect here: I would be immensely surprised if not.
Some people blog because it makes them happy, others do so to build brand and status for professional development.
Upon reading this, it seems like the author is in the latter group, and while he offers a few points about what computers can and can't do, the advice given is horrible advice because it takes things in isolation and overgeneralizes, while not paying attention to underlying factors.
The "lets just tough it out" approach and specialize in old code, or learning to do what AI can't are impossible tasks in practice.
If the author is in the latter group, I think he's unintentionally doing himself a disservice by showing a low level of competency in addressing the problems.
You don't want to hire an engineer who is blind to the potential liabilities they create.
Any engineers in IT are intimately familiar with the fallout from failures involving sequential steps in a pipeline.
There's head-of-line blocking (HOL blocking), and there are single points of failure (SPOFs); these are considered in resiliency design or documentation of the failure domains, the most important parts of which are used in identifying liabilities upfront, before they happen.
Entry level task positions are easily automated by AI. So companies replace the workers, with AI.
How do you get to be a mid-level engineer when the entry level no longer exists? It's all based upon years of experience, experience which can no longer be gotten.
Does this sound like a pipeline yet?
You still have mid-level engineers available, as you do senior engineers, but no new ones are entering the marketplace. Aging removes these people over time, and as that pool sieves towards 0, the cost of hiring these people goes up until it becomes effectively infinite (where no one can be hired).
What goes into the pipeline is typically the same but most often less than what comes out of said pipeline. In talent development it's a sieve separating the wheat from the chaff.
Only the entry point is clogged and nothing new is going in; humans act on future expectations, and the volume going into such pipelines is adaptive. No future, no one goes into such professions.
After a certain point, you can't find talent. There's no economic incentive because companies made it this way by collusion.
Things stop getting done, which forces collapse of the company. It's not just one company, because this is a broad problem, so this happens across the board, creating an inflationary cycle of cost, followed by a correlated deflationary cycle in talent, that cannot be fixed except by the industry as a whole removing the blockage. They can't do that, though, because of short-term competition.
When have industry business-people today ever turned on a dime in economically challenging situations where the money wasn't available?
Debt financing makes it so these people don't need to examine these trends more than a year out, but the consequences of these trends can occur just outside that horizon, and once integrated, the bridges have been burnt and there is no going back while also maintaining market share.
All of the incentives force business people to drive everything into the ground in these types of cycles. The only solution is to know ahead of time, and not bait the hook. The business people of today have shown that this is beyond them; it's all about short-term profits at the limits of growth, business as usual.
Real world consequences of such, you can look to Thomas Malthus, and Catton who revisits Malthus.
Catton importantly shows how extraction of non-renewables can reduce or destroy previously existing renewable flows, leading to lower population limits as a whole than existed prior to overshoot.
Similar behavior applies broadly to destructive phase changes of supercritical systems with complex feedback mechanisms (i.e. negative feedback flips to positive and runs away, or vice versa, leading to collapse/halt). In other words, where you have two narrow boundaries outside of which the systems fail.
>Entry level task positions are easily automated by AI. So companies replace the workers, with AI.
This is one of the concerns I hear. Not really in a position to judge how serious it is but I've had this discussion with people in senior roles related to, let's call it developer mentoring/development.
To the degree LLMs make junior developer roles commodities, and therefore less attractive financially, that definitely makes bringing new people on board at a lot of companies less attractive.
Essentially no one thinks an LLM is going to step into the role of an experienced senior developer as anything other than a possibly useful assistant. Someone just out of school? Maybe you don't replace the best but maybe you need a lot fewer of them and pay them a lot less.
Given the decline in overall IT positions over the past two years, I'd say it's pretty serious. Recent new graduates with CS/IT backgrounds largely aren't being hired.
Historically, we see information technology advances first disrupt IT itself, then spread with adoption everywhere else, as a labor multiplier, to realize the same cost savings.
> Maybe you don't replace the best.
In fairness, there's no real way to tell who the best/competent are. University programs have always failed to prepare the student in IT because of the fast moving nature of it.
Given the lack of any way to properly distinguish oneself, the best and most competent will look for a time, but eventually they have to go where the money is. That means retraining and taking the loss in time and investment, and it's a sticky decision where they aren't likely to fight for such meager scraps.
Competent people have options others don't.
A wage price floor was hit a while back in the long trend towards wage suppression. You can't really pay them less when other opportunities with no education required provide more economic incentive. This is an example of the chaotic distortions involved in the whipsaws generated by money printing.
The opportunity cost ratio between unskilled and skilled labor eliminates any incentive. Why spend 10 years on education and experience when you'll only make at best 33% more than someone who doesn't (pre-tax)? Less than that post-tax, and we aren't considering the increased costs that are borne by individuals seeking out positions. The job market has always imposed costs in time, from interview projects (where they steal your work without compensation) to circuitous interviews, etc.
And maybe the others should reconsider whether IT/tech is the automatic meal ticket they thought it was. Not sure that's the worst outcome. Honestly, there are probably still a lot of jobs floating around--just a lot fewer at the highest paying and sexiest companies--and some of the jobs may be in trades and other professions.
Except that's not the outcome, or how it happens in practice.
When merit is no longer an important metric, the competent leave first, because they are no longer rewarded for their productive capacity or effort. The people who thought it was an automatic meal ticket burn the house down for everyone.
The current offers going out for IT architect work with a decade+ of direct experience are 40k/yr, no equity. No conversion rate on applications better than 1000:1 to even get a phone call.
For 40k/yr, you can go and flip burgers and not have to deal with the high stress involved in these type of positions. It would be a joke, if only it were not serious, and things will only get worse.
The creeping ruin has a way of coming at you sideways without you knowing until it's too late.
I believe these concerns about entry-level talent pipeline neglect one thing, which is that LLMs offer learning on steroids - motivated intelligent junior developers can learn more about any technical or business topic in one morning of well-structured investigation with the help of the more advanced reasoning models than they might get in a month of on-the-job learning in the days before such tools existed. But it requires truly active, self-driven investigation and structured thinking. These skills in turn require education on "how to learn", from a very early age. I am not sure our education systems are there yet.
There are many ways in which this reasoning is flawed at its foundation.
The main flawed assumption here is that the information provided by an LLM is both factual and accurate, and that these junior developers will be able to adequately determine this.
In most cases the process of validating accuracy takes more time than the process of learning it the right way in the first place, and it requires domain knowledge they do not have. This makes for an impossible task for the junior, and allows them to easily be misled to false and destructive conclusions.
It is well established that hallucinations occur in these models, and have occurred to the point where legal professionals have cited non-existent sources and perjured themselves in the process, threatening the investment they made in their career in its entirety.
These professionals are highly incentivized to avoid this outcome, but it still happened regardless of the incentives, repeatedly, with several cases making national news over the span of a year, and similar news the previous year.
The entire premise you make is that rapid learning occurs, but it conflates the word 'learning', whose normal context is fact and truth, with learning falsehoods, which reflect and promote destructive delusion.
In the absence of learning in its normal sense, the latter's likelihood grows exponentially with each additional factor, while the former trends towards 0 as a fraction. It's a chair without legs.
Following from this to introducing it to people at a young age, where children are biologically incapable of discerning falsehood, this promotes an indoctrinated state of delusion and circular reasoning with no rational basis. In reality it is quite an evil thing, in my opinion, hobbling the young and ruining their futures by inducing maladaptive reasoning frameworks.
Children are a vulnerable set of the population, and such activities can only diminish the young's abilities to survive long term. Something no good person would ever do.
There is a thing in literature called a Devil's pleasure palace. It refers to a short story (I don't remember the author; it was Slavic iirc) where a noble of the aristocracy tests his daughter's fiance to determine whether he would maintain fidelity after marriage, without him knowing it's a test.
A witch and magic are employed, though one can imagine drugs being used as well, and the prospective husband is led through a series of events, unbeknownst to him, in repeated attempts to induce in him every possible indulgence without consequence.
He is tested for several days, unable to leave, and when he tries to leave he is told he cannot unless he partakes. Had he caved in to desire he would have been killed on the spot; he doesn't, choosing to die instead.
He neither agreed to this (no informed consent) nor knew it was happening, making it a sinister and malevolent tale of chance, where the outcome would be destructive in all but the fairy tales, especially given the vulnerability of the young.
LLMs and AI broadly exhibit this in their function. They deceive and manipulate those utilizing them without any perception of this having happened, because the knowledge required to detect it is outside their domain of knowledge. The same as any fallacy of appeal to authority.
No education true to any valid definition would ever use these. Involuntary indoctrination is a vile thing, and there is no place for doctrines of Learning Understanding Acceptance, lest you somehow imagine the world is better off as depicted in 1984 by Orwell, where in reality, shortage and slavery eventually devolve into famine and population collapse through the Socialist Calculation Problem (Malthus/Catton/Mises).
LLMs shouldn't be called Large Language Models; they are more appropriately Looming Liability Machines.