Isn't it more likely that Meta has been infiltrated by Mossad, just as they no doubt have by other intelligence services and they use these insiders to exfiltrate location data on specific targets?
Sandberg herself does teary, falsehood-ridden war propaganda videos for Israel these days.
Microsoft shared data early on with IDF to help target their users (would have to check their ToS to see if there's a clause for that there).
I doubt there's any need to hide anything inside these kinds of companies. Leaders there likely believe they're doing the right thing helping "the good cause" by supporting extrajudicial executions of people. At worst they'll have to kick out employees who'll raise their voices, like they already did many times. No biggie.
This is a condensed version of Altman's greatest hits when it comes to his pitch for the promise of AI as he (allegedly) conceives it, and in that sense it is nothing new. What is conspicuous is a not-so-subtle reframing. No longer is AGI just around the corner; instead, one gets the sense that OpenAI has already looked around that corner and seen nothing there. No, this is one of what I expect will be many more public statements intended to cool things down a bit, and to reset (investor) expectations: the timelines are going to be longer than previously implied.
Cool things down a bit? That's what you call "we're already in the accelerating part of the singularity, past the event horizon, the future progress curve looks vertical, the past one looks flat"? :D
If any of that were true, then the LLMs would be actively involved in advancing themselves, or assisting humans in a meaningful way in that endeavor, which they're not, so far as I can tell.
Maybe Sam meant it more generally: humanity has achieved a lot over the past thousands of years, and now we are finally close to interacting with systems that are more capable intellectually.
The value of LLMs is that they do things for you, so yeah, the incentive is to have them take over more and more of the process. I can also see a future not far off where those who grew up with nothing but AI are much less discerning and capable, and so the AI becomes more and more of a crutch as human capability withers from extended disuse.
The implication is that they are hoping to bridge the gap between current AI capabilities and something more like AGI in the time it takes the senior engineers to leave the industry. At least, that's the best I can come up with, because they are kicking out all of the bottom rungs of the ladder here in what otherwise seems like a very shortsighted move.
I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere. He acknowledged that the tools need an expert to use properly, and as he illustrated, he refined his expertise over many years. He is of the first and last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase? I can almost anticipate an interjection along the lines of "well we used to build everything with our hands and now we have tools etc, it's just different" but this is an order of magnitude different. This is asking a robot to design and assemble a shed for you, and you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
I think the main difference between shortcuts like "compilers" and shortcuts like "LLMs" is determinism. I don't need to know assembly because I use a compiler that is very well specified, in some cases even mathematically proven to introduce no errors, and that errs on the side of caution unless specifically told otherwise.
On the other hand, LLMs are highly nondeterministic. They often produce correct output for simple things, but that's because those things are simple enough that the probability of getting them wrong is low, not because there's any guarantee that they won't get them wrong. For more complicated things, LLMs are terrible and need very well specified guardrails. They will bounce around inside those guardrails until they make something correct, but that's more of a happy accident than a mathematical guarantee.
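To make that concrete, here's a toy sketch (made-up logits, no real model or API behind it) of temperature-based next-token sampling. Run it twice and you'll likely get different "completions" from the same input, which is exactly the nondeterminism I mean:

    /* Toy sketch only: made-up logits, no real model. Shows why sampling
       with temperature makes the same prompt yield different outputs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    #include <time.h>

    static int sample_token(const double *logits, int n, double temperature) {
        double probs[8], sum = 0.0;
        for (int i = 0; i < n; i++) {
            probs[i] = exp(logits[i] / temperature); /* softmax numerator */
            sum += probs[i];
        }
        double r = (double)rand() / RAND_MAX * sum;  /* random point in the mass */
        for (int i = 0; i < n; i++) {
            r -= probs[i];
            if (r <= 0.0) return i;
        }
        return n - 1;
    }

    int main(void) {
        const char *tokens[] = {"foo", "bar", "baz"};
        double logits[] = {2.0, 1.5, 0.5};           /* arbitrary scores */
        srand((unsigned)time(NULL));                 /* new seed every run */
        for (int i = 0; i < 5; i++)
            printf("%s ", tokens[sample_token(logits, 3, 0.8)]);
        printf("\n");                                /* output varies run to run */
        return 0;
    }

A compiler given the same source and flags produces the same binary; this loop, by design, does not.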
LLMs aren't a level of abstraction, they are an independent entity. They're the equivalent of a junior coder who has no long term memory and thus needs to write everything down and you just have to hope that they don't forget to write something down and hope that some deterministic automated test will catch them if they do forget.
If you could hire an unpaid intern with long term memory loss, would you?
Determinism is only one part of it: predictability and the ability to model what it’s doing is perhaps more important.
The physics engine in the game Trackmania is deterministic: this means that you can replay the same inputs and get the same output; but it doesn’t mean the output always makes sense: if you drive into a wall in a particular way, you can trigger what’s called an uberbug, where your car gets flung in a somewhat random direction at implausibly high speed. (This sort of thing can lead to fun tool-assisted speedruns that are utterly unviable for humans.)
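If it helps, here's a tiny toy of what I mean by input-replay determinism (nothing like Trackmania's actual engine, just integer "physics" I made up): the wall rule is absurd, like an uberbug, but replaying the same recorded inputs always lands you in exactly the same place:

    /* Toy integer "physics": deliberately silly wall rule, but fully
       deterministic, so replaying recorded inputs reproduces the run. */
    #include <stdio.h>

    typedef struct { long pos, vel; } Car;

    static void step(Car *c, int input) {
        c->vel += input;                    /* accelerate / brake */
        if (c->pos > 100) c->vel *= -8;     /* "uberbug": absurd but repeatable */
        c->pos += c->vel;
    }

    int main(void) {
        const int replay[] = {5, 5, 5, 5, 5, 5, 5, 5, 5, 5}; /* recorded inputs */
        Car a = {0, 0}, b = {0, 0};
        for (int i = 0; i < 10; i++) step(&a, replay[i]);
        for (int i = 0; i < 10; i++) step(&b, replay[i]);     /* replay again */
        printf("run 1: %ld  run 2: %ld\n", a.pos, b.pos);     /* always equal */
        return 0;
    }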
The abstraction part you mention is the key. Good abstractions make behaviour predictable: turn the steering wheel to the left, head left. There are still odd occasions when I will mispredict what some code in a language like Rust, Python or JavaScript will do, but they're rare. By contrast, LLMs are very unpredictable, and you will fundamentally never be able to mentally model what they will do.
Having an LLM code for you is like watching someone make a TAS. It technically meets the explicitly-specified goals of the mapper (checkpoints and finish), but the final run usually ignores the intended route made by the mapper. Even if the mapper keeps on putting in extra checkpoints and guardrails in between, the TAS can still find a crazy uberbug into backflip into taking half the checkpoints in reverse order. And unless you spend far longer studying the TAS than it would have taken to learn to drive it yourself normally, you're not going to learn much yourself.
Exactly. Compilers etc. are like well-proven algebraic properties, you can build on them and reason with them and do higher level math with confidence. That's a very different type of "advancement" than what we're seeing with LLMs.
> If you could hire an unpaid intern with long term memory loss, would you?
It's clearly a deficiency. And that's why one of the next generations of AIs will have long-term memory and online learning. Although even the current generation of models shows signs of self-correction that somewhat mitigate the "random walk" you mentioned.
But, seriously, while it's not an easy task (otherwise we'd have seen it done already), it doesn't seem to be a kind of task that requires a paradigm shift or some fundamental discovery. It's a search in the space of network architectures.
Of course, we are talking about something that hasn't been invented yet, so I can't provide ironclad arguments like with, say, fusion power where we know what to do and it's just hard to do.
There is circumstantial evidence, though. Complex problem-solving skills evolved independently in different groups of species: homo, corvidae, octopoda. That points either to the existence of multiple solutions to the problem of intelligence or to a solution that isn't all that complex.
Anyway, with all the resources being put into the development of AI, we will see the results (one way or another) soon enough. If long-term memory is not incorporated into AIs in about 5 years, then I'm wrong and it's indeed likely to be a fundamental problem with the current approach.
Hell no, any experienced engineer would rather do it themselves than attempt to corral an untrained army. Infinite monkeys can write a sequel to Shakespeare, but it's faster to write it myself than to sift through mountains of gibberish on a barely-domesticated goose chase.
There’s a really common cognitive fallacy of “the consequences of that are something I don’t like, therefore it’s wrong”.
It’s like reductio ad absurdum, but without the logical consequence of the argument being incorrect, just bad.
You see it all the time, especially when it comes to predictions. The whole point of this article is that coding agents are powerful and that the arguments against this are generally weak and ill-informed. Coding agents having a negative impact on the skill growth of new developers isn't a "fundamental mistake" at all.
What I've been saying to my friends for the last couple of months is that we're not going to see coding jobs go away, but we're going to run into a situation where it's harder to grow junior engineers into senior engineers, because the LLMs will be doing all the work of figuring out why it isn't working.
This will IMO lead to a "COBOL problem" where there is a shortage of people with a truly deep understanding of how it all fits together, who can figure out the line of code to tweak to fix that ops problem that's causing your production outage.
I’m not arguing for or against LLMs, just trying to look down the road to consequences. Agentic coding is going to become a daily part of every developer’s workflow; by next year it will be table stakes - as the article said, if you’re not already doing it, you’re standing still: if you’re a 10x developer now, you’ll be a 0.8x developer next year, and if you’re a 1x developer now, without agentic coding you’ll be a 0.1x developer.
It’s not hype; it’s just recognition of the dramatic increase in productivity that is happening right now.
> How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?
LLMs are so-so coders but incredible teachers. Today's students get the benefit of copying and pasting a piece of code into an LLM and asking, "How does this work?"
There are a lot of young people who will use LLMs to be lazy. There are also a lot who will use them to feed their intellectual curiosity.
Many of the curious ones will be adversely affected.
When you're a college student, the stakes feel so high. You have to pass this class or else you'll have to delay graduation and spend thousands of dollars. You have to get this grade or else you lose your grant or scholarship. You want to absorb knowledge from this project (honestly! you really do) but you really need to spend that time studying for a different class's exam.
"I'm not lazy, I'm just overwhelmed!" says the student, and they're not wrong. But it's very easy for "I'm gonna slog through this project" to become "I'm gonna give it a try, then use AI to check my answer" and then "I'm gonna automate the tedious bits that aren't that valuable anyway" and then "Well I'll ask ChatGPT and then read its answer thoroughly and make sure I understand it" and then "I'll copy/paste the output but I get the general idea of what it's doing."
Is that what students will do, though? Or will they see the cynical pump and dump and take the shortcuts to get the piece of paper and pass the humiliation ritual of the interview process?
I'm hearing this fear more frequently, but I do not understand it. Curriculum will adapt. We are a curious and intelligent species. There will be more laypeople building things that used to require deep expertise. A lot of those things will be garbage. Specialists will remain valuable and in demand. The kids will still learn to write loops, use variables, about OOP and functional programming, how to write "hello world," to add styles, to accept input, etc. And they'll probably ask a model for help when they get stuck, and the teacher won't let them use that during a test. The models will be used in many ways, and for many things, but not all things; it will be normal and fine. Developing will be more productive and more fun, with less toil.
>How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?
Dunno. Money is probably going to be a huge incentive.
I see the same argument everywhere. Like animators getting their start tweening other people's content. AI is great at tweening and likely to replace farms of juniors. But companies will need seniors to direct animation, so they will either have to pay a lot of money to find them or pay a lot of money to train them.
Well, this is actually happening in Japanese animation, and the result is that no young talents are getting trained in the workforce. [1]
But unlike animation, where the demand for the art can just disappear, I don't think the demand for software engineers will disappear. Same thing with musicians. Young engineers might just be jobless or in training mode for a much longer period of time before they can make an actual living.
Good thing is, as far as I know, Kyoto Animation managed to avoid this issue by having in-house training, growing their own talent pools.
Expecting commercial entities to engage in long-term thinking, when they could instead skip it and reduce costs in the next financial quarter, is a fool's game.
I think what you've said is largely true, but not without a long period of mess in between.
Back in the day I found significant career advancement because something I haven't been able to identify (a lack of on-the-job training, I believe) had removed all the mid-level IT talent in my local market. For a while I was able to ask for whatever I wanted because there just wasn't anyone else available. I had a week where a recruitment agency had an extremely attractive young woman escort me around a tech conference, buying me drinks and dinner, and then refer me out to a bespoke MSP for a job interview (which I turned down, which is funny). The market did eventually respond, but it benefited me greatly. I imagine this is what it will be like for a decade or so as a trained senior animator: no competition coming up, and plenty of money to be made. Until businesses sort their shit out, which like you say will happen eventually.
> get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?
I wonder this too. As someone who is entirely self-taught, escaping "tutorial hell" was the hardest part of the journey when I started, and it took quite a bit of both encouragement and sheer willpower. I'm not sure I would ever have gone beyond that if I'd had LLMs.
I worry for juniors: either we'll need to find a way to mentor them past the vibe coding phase, or we hope that AI gets good enough before we all retire.
I wonder if that will make this the great generation of human coders. Some of our best writers were of the generation that spanned oral education and the mass production of books; later generations read and wrote rather than memorized and spoke. I think that was Shakespeare's genius. Maybe our best coders will be supercharged with AI, and subsequent ones enfeebled by it.
Shakespeare was also popular because he was published as books became popular. Others copied him.
Quite a lot of the good programmers I have worked with may never have needed to write assembly, but are also not at all confused or daunted by it. They are curious about their abstractions, and have a strong grasp of what is going on beneath the curtain even if they don't have to lift it all that often.
Most of the people I work with, however, just understand the framework they are writing and display very little understanding or even curiosity as to what is going on beneath the first layer of abstraction. Typically this leaves them high and dry when debugging errors.
Anecdotally I see a lot more people with a shallow expertise believing the AI hype.
The difference is that the abstraction provided by compilers is much more robust. Not perfect: sometimes programmers legitimately need to drop into assembly to do various things. But those instances have been rare for decades and to a first approximation do not exist for the vast majority of enterprise code.
If AI gets to that level we will indeed have a sea change. But I think the current models, at least as far as I've seen, leave open to question whether they'll ever get there or not.
Doesn't it? Many compilers offer all sorts of novel optimizations that end up producing the same result with entirely different runtime characteristics than the source code would imply. Going further, turn on GCC fast math and your code with no undefined behavior suddenly has undefined behavior.
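For instance, here's a small sketch of the kind of surprise I mean (whether the check actually gets dropped depends on your compiler and version, so treat it as illustrative rather than guaranteed):

    /* -ffast-math implies -ffinite-math-only, which lets the compiler assume
       NaN never occurs, so a NaN check like this may be optimized away.
       Try: gcc -O2 nan.c && ./a.out, then gcc -O2 -ffast-math nan.c && ./a.out */
    #include <stdio.h>

    static int looks_like_nan(double x) {
        return x != x;                 /* true only for NaN under IEEE 754 */
    }

    int main(void) {
        volatile double zero = 0.0;    /* volatile to block constant folding */
        double x = zero / zero;        /* produces NaN */
        printf("NaN detected: %d\n", looks_like_nan(x));
        return 0;
    }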
I'm not much of a user of LLMs for generating code myself, but this particular analogy isn't a great fit. The one redeeming quality is that compiler output is deterministic or at least repeatable, whereas LLMs have some randomness thrown in intentionally.
With that said, both can give you unexpected behavior, just in different ways.
> With that said, both can give you unexpected behavior, just in different ways.
Unexpected as in "I didn't know" is different from unexpected as in "I can't predict". GCC optimizations are in the former camp: if you care to know, you just need to do a deep dive into your CPU architecture and the GCC docs and codebase. LLMs are a true shot in the dark, with a high chance of a miss and a slightly lower chance of friendly fire.
What is this about memorization? I just need to know where the information is so I can refer to it later when I need it (and possibly archive it if it's that important).
If you're not trying to memorize the entirety of GCC's behavior (and keeping up with its updates), then you need to check if your UB is still doing what you expect every single time you change your function. Or other functions near it. Or anything that gets linked to it.
It's effectively impossible to rely on. Checking at the time of coding, or occasionally spot checking, still leaves you at massive risk of bugs or security flaws. It falls under "I can't predict".
In C, strings are just basic arrays, which themselves are just pointers. There are no safeguards there like we have in Java, so we need to write the guardrails ourselves, because failure to do so results in errors. If you didn't know about it, a buffer overflow may be unexpected, but you don't need to go and memorize the entire GCC codebase to know about it. Just knowing the semantics is enough.
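For example (a deliberately broken sketch of the missing guardrail, with a made-up buffer size):

    /* The language happily copies past the end of a fixed-size buffer;
       the bounds check is a guardrail we have to remember to write ourselves.
       (Running this is undefined behavior -- it's here to show the hole.) */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[8];
        const char *input = "definitely longer than eight bytes";
        strcpy(buf, input);            /* no bounds check: buffer overflow */
        /* the guardrail: snprintf(buf, sizeof buf, "%s", input); */
        printf("%s\n", buf);
        return 0;
    }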
The same thing happens with optimizations. The docs usually warn about the gotchas, and it's easy to check whether the errors will bother you or not. You don't need an exhaustive list of errors when the classes are neatly defined.
This comment seems to mostly be describing avoiding undefined behavior. You can learn the rules to do that, though it's very hard to completely avoid UB mistakes.
But I'm talking about code that has undefined behavior. If there is any at all, you can't reliably learn what optimizations will happen or not, what will break your code or not. And you can't check for incorrect optimization in any meaningful way, because the result can change at any point in the future for the tiniest of reasons. You can try to avoid this situation, but again it's very hard to write code that has exactly zero undefined behavior.
When you talked about doing "a deep dive in your CPU architecture and the gcc docs and codebase", that is only necessary if you do have undefined behavior and you're trying to figure out what actually happens. But it's a waste of effort here. If you don't have UB, you don't need to do that. If you do have UB it's not enough, not nearly enough. It's useful for debugging but it won't predict whether your code is safe into the future.
To put it another way, if we're looking at optimizations listing gotchas: when there's UB, it's as if half the optimizations in the entire compiler were annotated with "this could break badly if there's UB". You can't predict it.
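A well-worn example of what I mean, in case it helps (whether a given compiler actually deletes the check varies by version and flags):

    /* Signed overflow is UB, so an optimizer may assume `x + 1` never wraps
       and fold this "did it wrap?" check to always-false at -O2. */
    #include <limits.h>
    #include <stdio.h>

    static int will_wrap(int x) {
        return x + 1 < x;              /* intended wrap check; UB when x == INT_MAX */
    }

    int main(void) {
        printf("%d\n", will_wrap(INT_MAX));  /* may print 1 or 0, depending on
                                                optimization -- that's the point */
        return 0;
    }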
I suppose you are talking about UB? I don't think that is anything like hallucination. It's just tradeoffs being made (speed vs. specified behavior) with more ambiguity (UB) than one might want. Fast math is basically the same idea. You should probably never turn on fast math unless you are willing to trade accuracy for speed and accept a bunch of new UB that your libraries may never have been designed for. It's not like the compiler is making up new instructions that the hardware doesn't support, or claiming the behavior of an instruction is different than documented. If it ever did anything like that, it would be a bug, and it would be fixed.
> or claiming the behavior of an instruction is different than documented
When talking about undefined behavior, the only documentation is a shrug emoticon. If you want a working program, arbitrary undocumented behavior is just as bad as incorrectly documented behavior.
UB is not undocumented; it is documented to not be defined. In fact, any given piece of hardware reacts deterministically in the majority of UB cases, but compilers are allowed to assume UB is not possible for the purposes of optimization.
lol are you serious? I bet compilers are less deterministic now than before, what with all the CPUs and their speculative execution and who knows what else. But all that stuff is still documented and not made up out of thin air randomly…
Agree. We'll get a new breed of programmer — not shitty ones — just different. And I am quite sure, at some point in their career, they'll drop down to some lower level and try to do things manually.... Or step through the code and figure out a clever way to tighten it up....
Or if I'm wrong about the last bit, maybe it never was important.
Counter-counterargument: you don't need to understand metalworking to use a hammer or nails. That's a different trade, though an important one that someone else does need to understand in order for you to do your job.
If all of mankind lost all understanding of registers overnight, it'd still affect modern programming (eventually)
The abstraction over assembly language is solid; compilers very rarely (if ever) fail to translate high-level code into correct assembly.
LLMs are nowhere near the level where you can have almost 100% assurance that they do what you want and expect, even with a lot of hand-holding. They are not even a leaky abstraction; they are an "abstraction" with gaping holes.
As a teen I used to play around with Core Wars, and my high school taught 8086 assembly. I think I got a decent grasp of it, enough to implement quicksort in 8086 while sitting through a very boring class, and test it in the simulator later.
I mean, probably few people ever need to use it for something serious, but that doesn't mean they don't understand it.
Feels like coding with and without autocomplete to me. At some point you are still going to need to understand what you are doing, even if your IDE gives you hints about what all the functions do.
Sure, it's a different level, but it's still more or less the same thing. I don't think you can expect to learn how to code by only ever using LLMs, just like you can't learn how to code by only ever using intellisense.
> I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere
Some of the arguments in the article are so bizarre that I can’t believe they’re anything other than engagement bait.
Claiming that IP rights shouldn’t matter because some developers pirate TV shows? Blaming LLM hallucinations on the programming language?
I agree with the general sentiment of the article, but it feels like the author decided to go full ragebait/engagement bait mode with the article instead of trying to have a real discussion. It’s weird to see this language on a company blog.
I think he knows that he’s ignoring the more complex and nuanced debates about LLMs because that’s not what the article is about. It’s written in inflammatory style that sets up straw man talking points and then sort of knocks them down while giving weird excuses for why certain arguments should be ignored.
They are not engagement bait. That argument, in particular, survived multiple rounds of reviews with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere, and, I would say for myself, earned take on this.
A lot of people are misunderstanding the goal of the post, which is not necessarily to persuade them, but rather to disrupt a static, unproductive equilibrium of uninformed arguments about how this stuff works. The commentary I've read today has to my mind vindicated that premise.
> That argument, in particular, survived multiple rounds of reviews with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere, and, I would say for myself, earned take on this.
Which argument? The one dismissing all arguments about IP on the grounds that some software engineers are pirates?
That argument is not only unpersuasive, it does a disservice to the rest of the post and weakens its contribution by making you as the author come off as willfully inflammatory and intentionally blind to nuance, which does the opposite of breaking the unproductive equilibrium. It feeds the sense that those in the skeptics camp have that AI adopters are intellectually unserious.
I know that you know that the law and ethics of IP are complicated, that the "profession" is diverse and can't be lumped into a cohesive unit for summary dismissal, and that there are entirely coherent ethical stances that would call for both piracy in some circumstances and condemnation of IP theft in others. I've seen enough of your work to know that dismissing all that nuance with a flippant call to "shove this concern up your ass" is beneath you.
> The one dismissing all arguments about IP on the grounds that some software engineers are pirates?
Yeah... this was a really, incredibly horseshit argument. I'm all for a good rant, but goddamn, man, this one wasn't good. I would say "I hope the reputational damage was worth whatever he got out of it", but I figure he's been able to retire at any time for a while now, so that sort of stuff just doesn't matter anymore to him.
I love how many people have in response to this article tried to intimate that writing it put my career in jeopardy; so forcefully do they disagree with a technical piece that it must somehow be career-limiting.
It's just such a mind-meltingly bad argument, man.
"A whole bunch of folks ignore copyright terms, so all complaints that 'Inhaling most-to-all of the code that can be read on the Internet with the intent to make a proprietary machine that makes a ton of revenue for the owner of that machine and noone else is probably bad, and if not a violation of the letter of the law, surely a violation of its spirit.' are invalid."
When I hear someone sincerely say stuff that works out to "Software licenses don't matter, actually.", I strongly reduce my estimation of their ability to reason well and behave ethically. Does this matter? Probably not. There are many folks in the field who hold that sort of opinion, so it's relatively easy to surround yourself with likeminded folks. Do you hold these sorts of opinions? Fuck if I know. All I know about is what you wrote today.
Anyway. As I mentioned, you're late-career in what seems to be a significantly successful career, so your reputation absolutely doesn't matter, and all this chatter is irrelevant to you.
I'm not quoting anyone. Perhaps wrapping the second paragraph in what I understand to be Russian-style quotes (« ») would have been clearer? Or maybe prepending the words "Your argument ends up being something like " to the second paragraph would have been far clearer? shrug
> It's not a reasonable paraphrase of my argument either, but you know that.
To quote from your essay:
"But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.
The median dev thinks Star Wars and Daft Punk are a public commons. The great cultural project of developers has been opposing any protection that might inconvenience a monetizable media-sharing site. When they fail at policy, they route around it with coercion. They stand up global-scale piracy networks and sneer at anybody who so much as tries to preserve a new-release window for a TV show."
Man, you might not see the resemblance now, but if you return to it in three to six months, I bet you will.
Also, I was a professional musician in a former life. Given the content of your essay, you might be surprised to know how very, very fast and loose musicians as a class play with copyright laws. In my personal experience, the typical musician paid for approximately zero of the audio recordings in their possession. I'd be surprised if things weren't similar for the typical practitioner of the visual arts.
I agree this is a bad "collective punishment" argument from him, even if I think he's somewhat right in spirit. As a software dev I don't care in the slightest about LLMs training on code, text, videos, or images, and I fully believe it's equivalent to humans perceiving and learning from the output of others. I know many or most software devs agree on that point, while most artists don't.
I think artists are very cavalier about IP, on average. Many draw characters from franchises that do not allow such drawing, and often directly profit by selling those images. Do I think that's bad? No. (Unless it's copying the original drawing plagiaristically.) Is it odd that most of the people who profit in this way consider generative AI unethical copyright infringement? I think so.
I think the hypocrisy on the issue is annoying. Either you think it's fine for LLMs to learn from code and text and images and videos, or you think none of it is fine. tptacek should bite one bullet or the other.
I don't accept the premise that "training on" and "copying" are the same thing, any more than me reading a book and learning stuff is copying from the book. But past that, I have, for the reasons stated in the piece, absolutely no patience for software developers trying to put this concern on the table. From my perspective, they've forfeited it.
> I don't accept the premise that "training on" and "copying" are the same thing...
Nor do I. Training and copying are clearly different things... and if these tools had never emitted -verbatim- nontrivial chunks of the code they'd ingested, [0] I'd be much less concerned about them. But as it stands now, some-to-many of the companies that build and deploy these machines clearly didn't care to ensure that their machines simply wouldn't plagiarize.
I've a bit more commentary that's related to whether or not what these companies are doing should be permitted here. [1]
[0] Based on what I've seen, when it happens, it is often with either incorrect copyright and/or license notifications, or none of the verbiage the license of the copied code requires in non-trivial reproductions of that code.
What about the millions of software developers who have never even visited a pirate site, much less built one?
Are we including the Netflix developers working actively on DRM?
How about the software developers working on anti-circumvention code for Kindle?
I'm totally perplexed at how willing you are to lump a profession of more than 20 million people all into one bucket and deny all of them, collectively, the right to say anything about IP. Are doctors not allowed to talk about the societal harms of elective plastic surgery because some of them are plastic surgeons? Is anyone with an MBA not allowed to warn people against scummy business practices because many-to-most of them are involved in dreaming those practices up?
This logic makes no sense, and I have to imagine that you see that given that you're avoiding replying to me.
Ah good. If one of your family were to bring a plagiarism suit against another musician (or company (regardless of whether that company's music was produced by humans or robots)) that'd clearly ripped off their work, would you decry them as a hypocrite?
If not, why not?
If so, (seriously, earnestly) kudos for being consistent in your thoughts on the matter.
And I'm the only one in mine who isn't either a musician or an author. I'm not sure why you believe that being in a creative family gives you some sort of divine authority to condemn the rest of us for our collective sins.
The second paragraph in OP's comment is absolutely a reasonable paraphrase of your argument. I read your version many times over to try to find the substance and... that is the substance. If you didn't mean it to be then that section needed to be heavily edited.
What really resonated with me was your repeated calls for us at least to be arguing about the same thing, to get on the same page.
Everything about LLMs and generative AI is getting so mushed up by people pulling it in several directions at once, marketing clouding the water, and the massive hyperbole on both sides, it's nearly impossible to understand if we're even talking about the same thing!
It's a good post and I strongly agree with the part about level setting. You see the same tired arguments basically every day here and subreddits like /r/ExperiencedDevs. I read a few today and my favorites are:
- It cannot write tests because it doesn't understand intent
- Actually it can write them, but they are "worthless"
- It's just predicting the next token, so it has no way of writing code well
- It tries to guess what code means and will be wrong
- It can't write anything novel because it can only write things it's seen
- It's faster to do all of the above by hand
I'm not sure if it's the issue where they tried Copilot with GPT-3.5 or something, but anyone who uses Cursor daily knows all of the above is false; I make it do these things every day and it works great. There was another comment I saw here or on Reddit about how everyone needs to spend a day with Cursor and get good at understanding how prompting + context works. That is a big ask, but I think the savings are worth it when you get the hang of it.
Yes. It's this "next token" stuff that is a total tell we're not all having the same conversation, because what serious LLM-driven developers are doing differently today than they were a year ago has not much at all to do with the evolution of the SOTA models themselves. If you get what's going on, the "next token" thing has nothing at all to do with this. It's not about the model, it's about the agent.
>> Blaming LLM hallucinations on the programming language?
My favorite was suggesting that people select a programming language based on which ones LLMs are best at. People who need an LLM to write code might do that, but no experienced developer would. There are too many other legitimate considerations.
If an LLM improves coding productivity, and it is better at one language than another, then at the margin it will affect which language you may choose.
"At the margin" means that both languages, or frameworks or whatever, are reasonably appropriate for the task at hand. If you are writing firmware for a robot, then the LLM will be less helpful, and a language such as Python or JS that the LLM is good at is useless.
But Thomas's point is that arguing that LLMs are not useful for all languages is not the same as saying they are not useful for any language.
If you believe that LLM competencies are not actually becoming drivers in what web frameworks people are using, for example, you need to open your eyes and recognize what is happening instead of what you think should be happening.
(I write this as someone who prefers SvelteJS over React - but the LLMs' React output is much better. This has become kind of an issue over the last few years.)
I'm a little (not a lot) concerned that this will accelerate the adoption of languages and frameworks based on their popularity and bury away interesting new abstractions and approaches from unknown languages and frameworks.
Taking your React example: if we were a couple of years ahead on LLMs, jQuery might now be the preferred tool due to AI adoption through consumption.
You can apply this to other fields too. It's quite possible that AIs will make movies, but the only reliably well produced ones will be superhero movies... (I'm exaggerating for effect)
Could AI be the next Cavendish banana? I'm probably being a bit silly though...
> I'm a little ... concerned that this will accelerate the adoption of languages and frameworks based on their popularity and bury away interesting new abstractions and approaches...
I'd argue that the Web development world has been choosing tooling based largely on popularity for like at least a decade now. I can't see how tooling selection could possibly get any worse for that section of the profession.
I disagree. There’s a ton of diversity in web development currently. I don’t think there’s ever been so many language and framework choices to build a web app.
The argument is that we lose this diversity as more people rely on AI and choose what AI prefers.
You raise a valid concern, but you presume that we will stay under the OpenAI/Anthropic/etc. oligopoly forever. I don't think this is going to be the status quo in the long term. There is demand for different types of LLMs trained on different data. And there is demand for hardware. For example, the new Mac Studio has 512GB of unified memory, which can run the 600B-param DeepSeek model locally. So in the future I could see people training their own LLMs to be experts at their language/framework of choice.
Of course you could disagree with my prediction and believe that these big tech companies are going to build MASSIVE GPU farms the size of the Tesla Gigafactory that can run godlike AI where nobody can compete, but if we get to that point I feel like we will have bigger problems than "AI React code is better than AI SolidJS code".
Yeah, probably... I wonder when the plateau is. Is it right around the corner or 10 years from now? Seems like they can just keep growing it forever, based on what Sam Altman is saying. I'm botching the quote, but either he or George Hotz said something to the effect of: every time you add an order of magnitude to the size of the data, there is a noticeable qualitative difference in the output. But maybe past a certain size you get diminishing returns. Or maybe it's like Moore's Law, where they thought it would just go on forever but it turned out to be extremely difficult to shrink transistors much past the 7nm node.
> There’s a ton of diversity in web development currently.
You misunderstand me. It's not incompatible for a culture to choose options based largely on popularity (rather than other properties that one would expect to be more salient when making a highly-technical choice), and for there to also be many options to choose from.
In the relatively near future this is going to be like arguing what CPU to buy based on how you like the assembly code. Human readability is going to matter less and less and eventually we will likely standardize on what the LLMs work with best.
People make productivity arguments for using various languages all the time. Let's use an example near and dear to my heart: "Rust is not as productive as X, therefore, you should use X unless you must use Rust." If using LLMs makes Rust more productive than X, that changes this equation.
Feel free to substitute Y instead of Rust if you want, just I know that many people argue Rust is hard to use, so I feel the concreteness is a good place to start.
Maybe they don’t today, or up until recently, but I’d believe it will be a consideration for new projects.
It is certainly true that at least some projects choose languages based on, or at least influenced by, how easy it is to hire developers fluent in that language.
I am squarely in the bucket of AI skeptic—an old-school, code-craftsman type of personality, exactly the type of persona this article is framed against—and yet my read is nothing like yours. I believe he's hitting these talking points to be comprehensive, but with nothing approaching the importance and weightiness you are implying. For example:
> Claiming that IP rights shouldn’t matter because some developers pirate TV shows?
I didn't see him claiming that IP rights shouldn't matter, but rather that IP rights don't matter in the face of this type of progress; they never have since the industrial revolution. It's hypocritical (and ultimately ineffectual) for software people to get up on a high horse about that now just to protect their own jobs.
And lest you think he is an amoral capitalist, note the opening statement of the section: "Artificial intelligence is profoundly — and probably unfairly — threatening to visual artists in ways that might be hard to appreciate if you don’t work in the arts.", indicating that he does understand and empathize with the most material of harms that the AI revolution is bringing. Software engineers aren't on that same spectrum because the vast majority of programming is not artisanal creative work; it's about precise automation of something as cheaply as possible.
Or this one:
> Blaming LLM hallucinations on the programming language?
Was he "blaming"? Or was he just pointing out that LLMs are better at some languages than others? He even says:
> People say “LLMs can’t code” when what they really mean is “LLMs can’t write Rust”. Fair enough!
Which seems very truthy and in no way is blaming LLMs. Your interpretation is taking a some kind of logical / ethical leap that is not present in the text (as far as I can tell).
> Software engineers aren't on that same spectrum because the vast majority of programming is not artisanal creative work...
That's irrelevant. Copyright and software licensing terms are still enforced in the US. Unless the software license permits it, or it's for one of a few protected activities, verbatim reproduction of nontrivial parts of source code is not legal.
Whether the inhalation of much (most? nearly all?) of the source code available on the Internet, for the purpose of making a series of programming machines that bring in lots and lots of revenue for the companies that own those machines, is fair use or infringing commercial use has yet to be determined. Scale is important when determining whether something should be prohibited or permitted... which is something many folks seem to forget.
A trick is to know which brands you need. Marriott's Residence Inn is a big reliable one (for multiple "rooms" and kitchen/laundry) that exists almost everywhere in the US. It's part of the whole Marriott system, often has deals during tourism lulls that keep it comparatively well priced against other Marriotts in that city, and will let you use Marriott points to further defray costs.
Hilton and IHG both have similar brands, but their exact names escape me at the moment. The search keywords are "extended stay" and "apartment hotels".
Depends where you are. Maybe in expensive cities where space is at a premium. But suite hotels (with various levels of kitchenette/kitchen) in the US are not, in my experience, notably more expensive than more conventional hotels, though they often have simpler facilities. (The bedroom may not actually be a different room from the living room area, but it is often at least somewhat separated, so it may not help with kids. I stay in this type of hotel in the US a lot.)
The Korean War was famously the first of many "police actions" the U.S. would become involved in after WW2, saving Congress the trouble of having to turn up to authorize an actual declaration of war.