It's very 'batteries-included', for one thing - when a novice wants to code, I recommend Zed to them because it'll just handle and manage LSPs for a variety of languages. Meanwhile with VSCode, step 1 of installing and using it for e.g. Rust is to go and install a random extension (and the VSCode store, whilst sorted by popularity, can be intimidating / confusing for a novice, who might install a random/scammy extension). The 'recommended extensions' thing helps, but it's still subpar.
It has some other niceties – I love how, when you Cmd+Shift+F to search across the project, you get a multi-buffer [1] - I often use that for larger manual refactors across a ton of places in my codebase.
But honestly... as others have said, speed is just _such_ a strong feature for my taste - it makes a world of difference compared to VSCode, because in VSC I'll be typing vim commands, the editor will fail to keep up and it'll do the wrong thing - whereas in Zed it's fast enough that I never really run into stalls.
The biggest problem with VSC for me is that sometimes undo history is completely broken with Vim mode. If you don't commit frequently, it is very easy to mess up the project and lose all your work if you undo anything.
Having everything be an extension is the double-edged sword of VS Code. Zed is great for the ecosystem and I use it as an alternate editor for quick text editing, but I don't foresee it replacing VS Code as my IDE. Once you've configured VS Code to your liking with devcontainers, and extensions declared by the config file, it becomes excellent.
I wish they wouldn't use JS to demonstrate the AI's coding abilities - the internet is full of JS code and at this point I expect them to be good at it.
Show me examples in complex (for lack of a better word) languages to impress me.
I recently used OpenAI models to generate OCaml code, and it was eye opening how much even reasoning models are still just copy and paste machines.
The code was full of syntax errors, and they clearly lacked a basic understanding of what functions are in the stdlib vs those from popular (in OCaml terms) libraries.
Maybe GPT-5 is the great leap and I'll have to eat my words, but this experience really made me more pessimistic about AI's potential and the future of programming in general.
I'm hoping that in 10 years niche languages are still a thing, and the world doesn't converge toward writing everything in JS just because AIs make it easier to work with.
> I wish they wouldn't use JS to demonstrate the AI's coding abilities - the internet is full of JS code and at this point I expect them to be good at it. Show me examples in complex (for lack of a better word) languages to impress me.
Agreed. The models break down even on code that isn't all that complex, if it's not web/JavaScript. I was playing with Gemini CLI the other day and had it try to make a simple Avalonia GUI app in C#/.NET; it kept going around in circles and couldn't even get a basic starter project to build, so I can imagine how much it'd struggle with OCaml or other more "obscure" languages.
This makes the tech even less useful where it'd be most helpful - on internal, legacy codebases, enterprisey stuff, stacks that don't have numerous examples on github to train from.
> on internal, legacy codebases, enterprisey stuff
Or anything that breaks the norm really.
I recently wrote something where I updated a variable using atomic primitives. Because it was inside a hot path, I read the value without using atomics, as it was okay for the value to be stale.
I handed it the code because I had a question about something unrelated and it wouldn't stop changing this piece of code to use atomic reads.
Even when I prompted it not to change the code or explained why this was fine it wouldn't stop.
FWIW (and this depends on the language, obviously), formal memory models typically do forbid races between atomic and non-atomic accesses to the same memory location.
While what you were doing may have been fine given your context, if you're targeting e.g. standard C++, you really shouldn't be doing it (it's UB). You can usually get the same result with relaxed atomic load/store.
(As far as AI is concerned, I do agree that the model should just have followed your direction though.)
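To make the relaxed-load suggestion concrete, here's a rough sketch in TypeScript terms, since this thread spans several languages (JS's Atomics.load is actually sequentially consistent, stronger than a C++ relaxed load, but the shape of the fix is the same; in standard C++ you'd use memory_order_relaxed):

```ts
// Sketch of the pattern under discussion, using the JS Atomics API on a
// SharedArrayBuffer. Unlike in C++, a plain read of shared memory in JS
// is racy but not undefined behavior; still, the atomic variant costs
// essentially nothing on mainstream hardware.

const shared = new Int32Array(new SharedArrayBuffer(4));

// Writer side (e.g. in a worker thread): publish updates atomically.
function publish(value: number): void {
  Atomics.store(shared, 0, value);
}

// Hot-path reader where a stale value is acceptable.
function readPlain(): number {
  return shared[0]; // plain read: what the commenter above did
}

function readAtomic(): number {
  return Atomics.load(shared, 0); // the race-free equivalent
}

publish(42);
console.log(readPlain(), readAtomic()); // 42 42
```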
Yes, for me it is and it was even before this experience.
But, you know, there's a growing crowd that believes AI is almost at AGI level and that they'll vibe code their way to a Fortune 100 company.
Maybe I spend too much time rage baiting myself reading X threads and that's why I feel the need to emphasize that AI isn't what they make it out to be.
The snake game they showcased - if you ask Qwen3-coder-30b to generate a snake game in JS, it generates the exact same layout, the exact same two buttons below, and the exact same text under the two buttons. It just regurgitates its training data.
I used ChatGPT to convert an old piece of OCaml code of mine to Rust and while it didn't really work—and I didn't expect it to—it seemed a very reasonable starting point to actually do the rest of the work manually.
Honestly, why would anyone find this information useful? Creating a brand-new greenfield project is a terrible test, because literally anything it outputs looks good as long as it works along the happy path. Coding with LLMs falls apart in situations where complex reasoning is required - situations such as debugging issues in a service where there's either no framework in use, or the framework has been significantly modified to better suit the authors' needs.
Yeah, I guess it's just the easiest thing to generate and evaluate.
A more useful demonstration like making large meaningful changes to a large complicated codebase would be much harder to evaluate since you need to be familiar with the existing system to evaluate the quality of the transformation.
Would be kinda cool to instead see diffs of nontrivial patches to the Ruby on Rails codebase or something.
> Honestly, why would anyone find this information useful?
This seems to impress the mgmt types a lot, e.g. "I made a WHOLE APP!", when basically most of this is frameworks and tech that had crappy bootstrapping to begin with (React and JS are rife with this, in spite of their popularity).
Rust memory management is... profoundly not manual?
Case in point: I use Rust/WASM in all of my web apps to great effect, and memory is never a consideration. In Rust you pretty much never think about freeing memory.
On top of that, when objects are moved across to be owned by JS, FinalizationRegistry is able to clean them up pretty much perfectly, so they're GC-ed as normal.
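For anyone curious, the glue looks roughly like this - a hedged sketch, not wasm-bindgen's actual generated code, and `wasmFree` is a made-up stand-in for the real WASM export:

```ts
// When a Rust object is handed across to JS, the JS wrapper registers
// itself with a FinalizationRegistry so the Rust-side allocation is
// freed once the wrapper is garbage collected.

function wasmFree(ptr: number): void {
  // Hypothetical stand-in: imagine this calls the WASM module's
  // exported deallocator for the Rust object at `ptr`.
}

const registry = new FinalizationRegistry<number>((ptr) => {
  wasmFree(ptr); // runs some time after the wrapper becomes unreachable
});

class RustObject {
  constructor(private ptr: number) {
    // `this` doubles as the unregister token so an explicit free()
    // can cancel the finalizer and avoid a double free.
    registry.register(this, ptr, this);
  }

  free(): void {
    registry.unregister(this);
    wasmFree(this.ptr);
  }
}
```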
Wrangling the borrow checker seems pretty manual at times. And I don’t know why you’d bother with a persnickety compile-time GC when JS’s GC isn’t a top issue for front-end development.
There is no need for the concept of ownership of memory in JavaScript. So you are wasting time on a concept that doesn't matter in languages with a real GC. Dealing with ownership = manual memory management.
This used to not be true: once upon a time, Internet Explorer kept memory separate for DOM nodes and JavaScript objects, so it was very easy to leak memory by keeping reference cycles between the two.
Now, with all the desire for WASM to have DOM access I wonder if we'll end up finding ourselves back in that position again.
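For anyone who never hit it, the classic leak looked roughly like this (a from-memory sketch of the pattern, not IE-specific code):

```ts
// A JS closure capturing a DOM node, attached back onto that node:
// JS object -> DOM node -> JS object. Modern engines collect cycles
// spanning both heaps; old IE's two separate collectors could not.

const el = document.createElement("div");

el.onclick = () => {
  el.textContent = "clicked"; // closure holds `el`, `el` holds closure
};

// In old IE this pair leaked until page unload unless you manually
// broke the cycle, e.g. `el.onclick = null` when tearing down.
```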
You can still have ownership issues and leaks even with a GC, if an object is reachable from a root: e.g. object A is in a cache and references object B, which references objects C, D, E, F, G..., which will now never get collected.
If A owns B, then that is as expected, but if A merely references B, it should hold a WeakRef.
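In TypeScript terms, the distinction looks something like this (class names hypothetical):

```ts
class B {} // imagine C, D, E, F, G... hanging off of this

// A owns B: B staying alive as long as A does is the expected behavior.
class OwningA {
  constructor(public b: B) {}
}

// A merely references B: a WeakRef lets the GC collect B independently.
class ReferencingA {
  private bRef: WeakRef<B>;
  constructor(b: B) {
    this.bRef = new WeakRef(b);
  }
  get b(): B | undefined {
    return this.bRef.deref(); // undefined once B has been collected
  }
}

const cache = new Map<string, ReferencingA>();
cache.set("key", new ReferencingA(new B()));
// The cache entry stays reachable from a root, but nothing strongly
// holds the B anymore, so the GC is free to reclaim its whole subgraph.
```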
It's kinda exhausting to use TypeScript and run into situations where the type system is more of a suggestion than a rule. Passing around values [1] that have a type annotation but aren't the type they're annotated as is... in many ways worse than not typing them in the first place.
[1]: not even deserialized ones - ones that only moved within the language!
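A contrived sketch of what I mean - an `any` sneaking past the checker, with no serialization boundary involved:

```ts
interface User {
  name: string;
}

// `any` disables checking at this assignment, so the wrong shape
// slips through with a perfectly confident annotation.
const raw: any = { fullName: "Ada" };
const user: User = raw;

// Compiles cleanly, then throws at runtime:
// TypeError: Cannot read properties of undefined (reading 'toUpperCase')
console.log(user.name.toUpperCase());
```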
We can dream bigger: when music, images, video and 3D assets are far easier to create, we can treat them as primitives.
We can use these to create entire virtual worlds, games, software that incorporates these, and to incorporate creativity and media into infinitely more situations in real life.
We can create massive installations that are not a single image but an endless video with endless music, and then our hand turns to stabilizing and styling and aestheticizing those exactly in line with our (the artist's) preferences.
Romanticizing the idea that picking at a guitar is somehow 'more creative' than using a DAW to create incredibly complex and layered and beautiful music is the same thing that's happening here, even if the primitives seem 'scarier' and 'bigger'.
Plus, there are many situations in life that would be made infinitely more human by the introduction of our collective work in designing our aesthetic and putting it into the world, and encoding it into models. Installations and physical spaces can absolutely be more beautiful if we can produce more, taking the aesthetic(s) that we've built so far and making them dynamic to spaces.
Also for learning: as a young person learning to draw and sing and play music and so many other things, I would have tremendously appreciated the ability to generate and follow subtle, personalized generation - to take a photo of a scene in front of me and have the AI first sketch it loosely so that I can copy it, then escalate and escalate until I can do something bigger.
Asking it about a marginally more complex tech topic and getting an excellent answer in ~4 seconds, reasoning for 1.1 seconds...
I am _very_ curious to see what GPT-5 turns out to be, because unless they're running on custom silicon / accelerators, even if it's very smart, it seems hard to justify not using these open models on Groq/Cerebras for a _huge_ fraction of use-cases.
https://github.com/zed-industries/zed/blob/main/crates%2Fage...
https://www.npmjs.com/package/@zed-industries/claude-code-ac...
Even though that doesn't seem to be mentioned on the site yet.