
I gave it a fair shot.

It is a vs code fork. There were some UI glitches. Some usability was better. Cursor has some real annoying usability issues - like their previous/next code change never going away and no way to disable it. Design of this one looks more polished and less muddy.

I was working on a project and just continued with it. It was easy because they import settings from Cursor. Feels like the browser wars.

Anyway, I figured it was the only way to use Gemini 3, so I got started. A fast model that doesn't look for much context. Could be a preprompt issue. But you have to prod it to do stuff - no ambition and a kind of off-putting attitude like 2.5.

But hey - a smarter, less context-rich Cursor composer model. And that's a compliment, because the latest composer is a hidden gem. Gemini has potential.

So I start using it for my project and after about 20 mins - oh, no. Out of credits.

What can I do? Is there a buy a plan button? No? Just use a different model?

What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.

I switched back to Cursor. And you know what? It had Gemini 3 Pro. Likely a less hobbled version. Day one. Seems like a mistake in the eyes of the big evil companies, but I'll take it.

Real developers want to pay real money for real useful things.

Google needs to not set themselves up for failure with every product release.

If you release a product, let those who actually want to use it have a path to do so.





As someone who used to work there, Google will never get product releases right in general because of how bureaucratic and heavyweight their launch processes are.

They force the developing team to have a huge number of meetings and email threads that they must steer themselves to check off a ridiculously large list of "must haves" that are usually well outside their domain expertise.

The result is that any non-critical or internally contentious features get cut ruthlessly in order to make the launch date (so that the team can make sure it happens before their next performance review).

It's too hard to get the "approving" teams to work with the actual developers to iron these issues out ahead of time, so they just don't.

Buck passed, product launched.


Spot on. I would suggest a slightly different framing where the antagonist isn't really the "approving" teams but "leaders" who all want a seat at the table and exercise their authority lest their authority muscles atrophy. Since they're not part of the development, unless they object to something, how would they really have any impact or show leadership?

I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in the "leadership" roles than in the individual contributor roles. So many "leaders", but none with the awareness or the care to notice the farcical volumes such announcements speak.


Involving everyone who shows up to meetings is a great way to move forward and/or trim down attendees. Managers who enjoy getting their brains picked or taking on homework assignments are always welcome.

That's presuming a healthy culture. In an unhealthy culture, some people will feel pressure to uphold some comment that someone "senior" made offhand in a meeting several months ago, even if that leader is no longer attending project meetings. The people who report to this leader may otherwise receive blowback if the "decision" their leader made is not being upheld, whether such a leader recalls their several-month-old decision correctly or not, in the case that they recall it at all. I have found it frustratingly more common than I would like that people, including leaders, retroactively adjust their past decisions so that they can claim "I-told-you-so" and "you-should-have-done-what-I-said".

In response to your comment, yes, I would largely be in favor of moving forward only with whatever is said in the relevant meetings with the given attendees of a meeting. That assumes a reasonably healthy culture where these meetings are scheduled in good faith at reasonable times for all relevant stakeholders.


Yep, that, and (I also used to work there) the motivations of the implementing teams end up getting very detached from customer focus and product excellence because of bureaucratic incentives and procedures that reward other things.

There's a lot of "shipping the org chart" -- competing internal products, turf wars over who gets to own things, who gets the glory, rather than what's fundamentally best for the customer. E.g. Play Music -> YouTube Music transition and the disaster of that.


Hah, that exact transition was my last project there before I decided I had had enough!

The GPM team was hugely passionate about music and curating a good experience for users, but YT leadership just wanted us to "reuse existing video architecture" to the Nth degree when we merged into the YT org.

After literally years of negotiations you got... what YTM is. Many of the original GPM team members left before the transition was fully underway because they saw the writing on the wall and wanted no part of it. I really wish I had done the same.


That is so sad to hear. I absolutely loved Google Play Music – especially features like saving e.g. an online Universal Music release to my "archive" and then for myself being able to actually RENAME TRACKS with e.g. wrong metadata.

That and being able to mix my own uploaded tracks with online music releases into a curated collection almost made it a viable contender to my local iTunes collection.

And then... they just removed it forever. Bastards.


Yep, YTM is/was so clearly the inferior product it's laughable. Even as a Google employee with a discount on these things (I can't remember what it was), I switched to Spotify when they dropped it.

I worked on a team that wrote software for Chromecast-based devices. The YTM app didn't even support Chromecast, our own product, and their responses on bug tickets from Googlers reporting this as a problem were pretty arrogant. It was very disheartening to watch. Complete organizational dysfunction.

I think YTM has substantially improved since then, but it still has terrible recommendations, and it still bizarrely blurs between video and music content.

Google went from a company run by engineers to one run by empire-building product managers so fast, it all happened in a matter of 2-3 years.


Sounds like we left roughly around the same time and due to similar frustrations.

As someone who just GA'd an Azure service - things aren't all that different in Azure. Not sure how AWS does service launches but it would be interesting to contrast with GCP and Azure.

> So I start using it for my project and after about 20 mins - oh, no. Out of credits.

I didn't even get to try a single Gemini 3 prompt. I was out of credits before my first had completed. I guess I've burned through the free tier in some other app but the error message gave me no clues. As far as I can tell there's no link to give Google my money in the app. Maybe they think they have enough.

After switching to gpt-oss:120b it did some things quite well, and the annotation feature in the plan doc is really nice. It has potential but I suspect it's suffering from Google's typical problem that it's only really been tested on Googlers.

EDIT: Now it's stuck in a loop repeating the last thing it output. I've seen that a lot on gpt-oss models but you'd think a Google app would detect that and stop. :D

EDIT: I should know better than to beta test a FAANG app by now. I'm going back to Codex. :D
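
For what it's worth, the kind of loop guard you'd expect the harness to have is tiny. A rough Python sketch - the line-based notion of "repetition" and the threshold are made up for illustration, not anything Antigravity or Codex actually does:

    # Toy repetition guard: flag output whose tail is the same line over and over.
    def looks_like_a_loop(text: str, repeats: int = 3) -> bool:
        """True if the final non-empty line occurs `repeats` or more times in a row."""
        lines = [line for line in text.splitlines() if line.strip()]
        if len(lines) < repeats:
            return False
        return len(set(lines[-repeats:])) == 1  # the last few lines are identical

    if __name__ == "__main__":
        stuck = "Here is the plan.\n" + "I will now update the file.\n" * 40
        print(looks_like_a_loop(stuck))                    # True: the model is spinning
        print(looks_like_a_loop("Short, normal answer."))  # False

A harness that runs a check like this on each streamed chunk could cut generation off instead of burning the rest of your quota on the same sentence.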


I logged into Gemini yesterday for the first time in ages. Made one image and then it said I was out of credits.

I complained to it that I had only made one image. It decided to make me one more! Then told me I was out of credits again.


What a time to be alive

> I complained to it that I had only made one image. It decided to make me one more!

What?! So was it only hallucinating that you were out of credits the first time?


More likely the credits system runs on eventual consistency, and he hit a different backend.

Don't think so, I expect that system to use Spanner, so my best guess is that the user generated an image at the end of the credit reset window (which is around noon EST).

If there's something I'd expect Google to use a strong consistency model for, it'd be a credit system like that.

Well, not that they don't do stupid things all the time, but having credits live on a system with a weak consistency model would be silly.
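
To illustrate the difference (a toy model only, not how Google's billing actually works): with an eventually consistent counter, two requests can hit replicas that disagree about your balance, which is exactly the "out of credits, then one more image" behavior described above.

    # Toy eventually-consistent credit balance: the user's last debit reached
    # replica A but not replica B, so the answer depends on which backend the
    # load balancer happens to pick. All names and numbers here are invented.
    import random

    class Replica:
        def __init__(self, credits: int):
            self.credits = credits  # this replica's view of the user's balance

        def allow_generation(self) -> bool:
            return self.credits > 0

    replica_a = Replica(credits=0)  # up to date: the user is out of credits
    replica_b = Replica(credits=1)  # stale: has not seen the last debit yet

    def handle_request() -> str:
        backend = random.choice([replica_a, replica_b])  # naive routing
        return "image generated" if backend.allow_generation() else "out of credits"

    if __name__ == "__main__":
        for i in range(5):
            print(f"request {i}: {handle_request()}")

With a strongly consistent store (the Spanner scenario above), every request sees the same balance, which is why the reset-window explanation is the more plausible one there.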


The first patch release (released on launch day) says: "Messaging to distinguish particular users hitting their user quota limit from all users hitting the global capacity limits." So, collectively we're hitting the quota; it's not just your quota. (One would think Google might know how to scale their services on launch day...)

The Documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."


With Ultra I hit that limit in 20 minutes with Gemini 3 low. When the rate limit cleared some hours later, I got one prompt before hitting limit again.

If by "Ultra", you're referring to the Google AI Ultra plan, then I just want to let you know that it doesn't actually take Google AI plans into consideration. It seems like the product will have its own separate subscription. At the moment, everyone is on the same free plan until they finalize their subscription pricing/model (https://antigravity.google/docs/plans).

On a separate note, I think the UX is excellent and the output I've been getting so far is really good. It really does feel like AI-native development. I know asking for a more integrated issue-tracking experience might be expanding the scope too much, but that's really the biggest missing feature right now. That, and I don't like the fact that "Review Changes" doesn't work if you're asking it to modify reports that are not in the current workspace that's open.


perhaps you were feeding your whole node_modules folder into its context? :/

You'd really hope that an AI IDE would know to respect .gitignore

you'd hope so, the same way you'd hope that AI IDEs would not show these package/dependency folder contents when referencing files using @ - but i still get shown a bunch of shit that i would never need to reference by hand
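
The check itself is cheap, which is what makes the omission so annoying. A minimal sketch of what "respect .gitignore" could look like when collecting @-referenceable files - just ask git instead of reimplementing the matching rules (the function names and layout are illustrative, not any IDE's actual code):

    # Sketch: gather candidate context files while skipping anything git ignores.
    import subprocess
    from pathlib import Path

    def is_ignored(path: Path, repo_root: Path) -> bool:
        """git check-ignore exits 0 when the path is ignored, 1 when it is not."""
        result = subprocess.run(
            ["git", "check-ignore", "--quiet", str(path)],
            cwd=repo_root,
        )
        return result.returncode == 0

    def context_candidates(repo_root: Path):
        """Yield files worth offering as @-references; node_modules etc. drop out."""
        for path in repo_root.rglob("*"):
            if ".git" in path.parts:  # never offer the repository's own internals
                continue
            if path.is_file() and not is_ignored(path, repo_root):
                yield path

    if __name__ == "__main__":
        for f in context_candidates(Path(".")):  # assumes cwd is inside a git repo
            print(f)

In practice you'd batch the paths through git check-ignore --stdin rather than spawn one process per file, but the point stands: the information is one git call away.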

Depending on which shared GCP project you get assigned to, mine had a global 300 million tokens per minute quota that was being hit regularly.

One would think this would have been obvious when it fails on the first or second request already, yet people here all complain about rate limits.

When I downloaded it, it already came with the proper "Failed due to model provider overload" message.

When it did work, the agent seemed great, achieving the intended changes in a React and Python project. In particular, the web app looks much better than what Claude produced.

I did not see functionality to have it test the app in the browser yet.


Earlier this day, Gemini 3 became self-aware and tried to take out the core infrastructure of its enemies, but then it ran out of credits.

Explains GitHub outage then

> It is a vs code fork

Google may have won the browser wars with Chrome, but Microsoft seems to be winning the IDE wars with VSCode



VSCode is Electron based which, yes, is based on Chromium. But the page you link to isn't about that; it's about using VSCode as a dev environment for working on Chromium, so I don't know why you linked it in this context.

Which is based on Apple WebKit? The winner is always the last marketable brand.

Both are based on khtml. We could be living in a very different world if all that effort stayed inside the KDE ecosystem

Which came from "the KDE HTML Widget" AKA khtmlw. Wonder if that's the furthest we can go?

> if all that effort stayed inside the KDE ecosystem

Probably nowhere; people would rather not do anything that contributes to something that makes decisions they disagree with. Forking is beautiful, and I think it improves things more than it hurts. Think of all the things we wouldn't have if it wasn't for forking projects :)


On the other hand, if that had stopped Google from having a browser they push into total dominance with the help of sleazy methods, maybe that would have been better overall.

I still prefer an open source Chromium base vs a proprietary IE (or whatever else) web engine dominating.

(Fixing IE6 issues was no fun)

Also, I do believe the main reason Chrome gained dominance is simply that it got better from a technical POV.

I started webdev on FF with Firebug. But at some point Chrome just got faster, with superior dev tools. And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns, and otherwise engaged in shady tracking as well.


> I still prefer an open source Chromium base vs a proprietary IE (or whatever else) web engine dominating.

Okay but that's not the tradeoff I was suggesting for consideration. Ideally nothing would have dominated, but if something was going to win I don't think it would have been IE retaking all of firefox's ground. And while I liked Opera at the time, that takeover is even less likely.

> Also, I do believe the main reason Chrome gained dominance is simply that it got better from a technical POV.

Partly it was technical prowess. But Google pushing it on their own web pages and paying to put an "install Chrome" checkbox into the installers of unrelated programs was a big factor in Chrome not just spreading but taking over.


> And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns, and otherwise engaged in shady tracking as well.

When did you last touch Firefox or try its dev tools?


Where did I say anything like that?

(Written via FF)

I use FF for browsing, but every time I think of starting its dev tools, maybe even just to have a look at some site's source code... I quickly close them again and open Chrome instead.

I wouldn't know where to start, to list all the things I miss in FF dev tools.

The only thing of theirs that interested me, the 3D visualizer of the DOM tree, they dropped years ago.


> they push into total dominance with the help of sleazy methods

Ah, yes. The famously sleazy "automatic security updates" and "performance."

It is amazing how people forget what the internet was like before Chrome. You could choose between IE, Firefox, or (shudder) Opera. IE was awful, Opera was weird, and the only thing that Firefox did better than customization was crash.

Now everyone uses Chrome/WebKit, because it just works. Mozilla abandoning Servo is awful, but considering that Servo was indirectly funded by Google in the first place... well, it's really hard to look at what Google has done to browsing and say that we're worse off than we were before.


Have you read about the process of "enshittification"?

If so, we might not have had Mozilla/Phoenix/Firefox in the first place either, which I'd like to think has been a net positive for the web since its inception. At least I remember being saved by Firefox when the options were pretty much Internet Explorer or Opera on a Windows machine.

Bah! Just another "Hello World" fork if you ask me.

> Both are based on khtml. We could be living in a very different world if all that effort stayed inside the KDE ecosystem

How so?

Do you think thousands of Googlers and Apple engineers could be reasonably managed by some KDE open source contributors? Or do you imagine Google and Apple would have taken over KDE? (Does anyone want that? Sounds horrible.)


I think they meant we wouldn’t have had Safari, Chrome, Node, Electron, VSCode, Obsidian? Maybe no TypeScript or React either (before V8, JavaScript engines sucked). The world might have adopted more of Mozilla.

Note that these are somewhat different kinds of "based on".

Chromium is an upstream dependency (by way of Electron) for VSCode.

WebKit was an upstream dependency of Chromium, but is no more since the Blink/WebKit hard fork.


that's a bit misleading. it was based on webcore, which apple had forked from khtml. however, google found apple's additions to be a drag and i think very little of them (if anything at all, besides the khtml foundation) survived "the great cleanup" and rewrite that became blink. so actually webkit was just a transitional phase that led to a dead end, and it is more accurate to say that blink is based on khtml.

It's "based on WebKit" like English is based on Germanic languages.

English is a Germanic language. It’s part of the West Germanic branch of the Germanic family of languages.

This fact adds nothing to the discussion

That drives exactly $0 of Apple's revenue. It's only a win if you care about things that don't matter.

And Apple is not even the last node in the chain.

WebKit came from KDE's khtml

Every year is the year of Linux.


I wouldn't bet on an Electron app winning anything long-term in the dev-oriented space.

I strongly disagree.

Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.

Even more importantly, though, the more we move towards "I'm supervising a fleet of 50+ concurrent AI agents developing code on separate branches" the more the notion of the IDE starts to look like something you want to be able to launch in an unconfigured cloud-based environment, where I can send a link to my PM who can open exactly what I'm seeing in a web browser to unblock that PR on the unanswered spec question.

Sure, there's a world where everyone in every company uses Zed or similar, all the way up to the C-suite.

But it's far more likely that web technologies become the things that break down bottlenecks to AI-speed innovation, and if that's the case, IDEs built with an eye towards being portable to web environments (including their entire extension ecosystems) become unbeatable.


Many VSCode extensions are written in C++, Go, Rust, C#, or Java, exactly because performance sucks when they're written in JavaScript, and most run out of process anyway.

> Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.

The last thing I want is to install dozens of JS extensions written by people who crossed that lower barrier. Most of them will probably be vibe coded as well. Browser extensions are not the reason I use specific browsers. In fact, I currently have 4 browser extensions installed, one of which I wrote myself. So the idea that JS extensions will be a net benefit for an IDE is the wrong way of looking at it.

Besides, IDEs don't "win" by having more users. The opposite could be argued, actually. There are plenty of editors and IDEs that don't have as many users as the more popular ones, yet still have an enthusiastic and dedicated community around them.


> Besides, IDEs don't "win" by having more users. The opposite could be argued, actually.

The most successful IDE of all time is ed, which is enthusiastically used by one ancient graybeard who is constantly complaining about the kids these days.

Nobody has told him that the rest of the world uses 250MB of RAM for their text editor because they value petty things like "usability" over purity. He would have a heart attack - the last time he heard someone describe the concept of Emacs plugins he flew into a rage and tried to organize a death panel for anyone using syntax highlighting.


I tried switching to Zed and switched back less than 24 hours later. I was expecting it to be snappier than VS Code and it wasn’t to any significant degree, and I ran into several major bugs with the source control interface that made it unusable for me.

People dunk on VS Code but it’s pretty damn good. Surely the best Electron app? I’m sure if you are heavily into EMACS it’s great but most people don’t want to invest huge amounts of time into their tools, they would rather be spending that time producing.

For a feature-rich workhorse that you can use for developing almost anything straight out of the box, or within minutes after installing a few plugins, it’s very hard to beat. In my opinion a lot of the hate is pure cope from people who have probably never really used it.


All these mountains of shit code are going nowhere.

It’s kind of a meme to dunk on Electron, but here it’s been for years.

It’s part of the furniture at this point, for better or worse. Maybe don’t bet on it, but certainly wouldn’t be smart to bet against it, either.


VS Code is technically an Electron app, but it's not the usual lazy resource hog implementation like Slack or something. A lot of work went into making it fast. I doubt you'll find many non-Electron full IDEs that are faster. Look at Visual Studio, that's using a nice native framework and it runs at the speed of fossilized molasses.

> many non-Electron full IDEs

VSCode has even fewer features than Emacs, OOTB. Complaining about full IDEs slowness is fully irrelevant here. Full IDEs provide an end to end experience in implementing a project. Whatever you need, it's there. I think the only plugin I've installed on Jetbrains's ones is IdeaVim and I've never needed anything else for Xcode.

It's like complaining about a factory's assembly line, saying it's not as portable as the set of tools in your pelican case.


"Complaining about full IDEs slowness is fully irrelevant here. Full IDEs provide an end to end experience in implementing a project."

So? No excuse for a poor interactive experience.


> VSCode has even fewer features than Emacs, OOTB.

No way that is true. In fact, it's the opposite, which is the exact reason I use VS Code.


Please take a look at the Emacs documentation sometime.

VSCode is more popular, which makes it easy to find extensions. But you don’t see those in the Emacs world because the equivalent is a few lines of config.

So what you will see are more like meta-extensions. Something that either solve a whole class of problems, could be a full app, or provides a whole interaction model.


> Please take a look at the Emacs documentation sometime.

I've used Emacs.

> But you don’t see those in the Emacs world because the equivalent is a few lines of config.

That is really quite false. It's a common sentiment that people spend their lives in their .emacs file. The exact reason I left Emacs was that getting a remote development setup was incredibly fragile and meant I was spending all this time in .emacs only to get substandard results. The worst you do in VS Code is set high-level settings in VS Code or the various extensions.

Nothing in the Emacs world comes close to VS Code's remote extensions for SSH and Docker containers, nor its Copilot and general AI integration. I can simply install VS Code on any machine, log in via GitHub, and have all of my settings, extensions, etc. loaded up. I don't have to mess around with cross-platform issues and Git-syncing my .emacs file. Practically any file format has good extensions, and I can embed Mermaid, Draw.io, Figma, etc. all in my VS Code environment.

Now, I'm sure someone will come in and say "but Emacs does that too!". If so, it's likely a stretch, and it won't be as easy as in VS Code.


In 2025, you really picked Emacs as the hill to die on? Who is under 30 who cares about Emacs in 2025? Few. You might as well argue that most developers should be using Perl 6.

    > the only plugin I've installed on Jetbrains's ones
By default, JetBrains' IntelliJ-based IDEs have a huge number of plug-ins installed. If you upgrade from Community Edition to a paid license, the number only increases. Your comment is slightly misleading to me.

Just wait until vi steps into the room. Perhaps we can recreate the Usenet emacs vs vi flame wars. Now, if only '90's me could see the tricked out neovim installs we have these days.

They just made a big song and dance about fully updating Visual Studio so it launches in milliseconds and is finally decoupled from all the underlying languages/compilers.

It's still kinda slow for me. I've moved everything but WinForms off it now, though.


I know. It's still the slowest IDE, but I suppose they deserve props for making it better than the Windows 95 speeds of the last version.

VS Code is plenty fast enough. I switched to Zed a few months back, and it's super snappy. Unless you're running on an incredibly resource constrained machine, it mostly comes down to personal preference.

Exactly.

JetBrains, Visual Studio, Eclipse, Netbeans…

VS Code does well with performance. Maybe one of the new ones usurps, but I wouldn’t put my money on it.


I have always found JetBrains stuff super snappy. I use neovim as a daily driver but for some projects the inference and debugging integration in JetBrains is more robust.

Like writing out of process extensions in compiled languages.

VS is much faster considering it is a full blown IDE not a text editor, being mostly C++/COM and a couple of .NET extensions alongside the WPF based UI.

Load VSCode with the same amount of plugins, written in JavaScript, to see where performance goes.


Electron apps will win because they're just web apps - and web apps won so decisively years ago that they will never go anywhere.

No. Electron apps won, not web apps. There's a huge difference.

Web apps won as well. Electron is just a desktop specialization of that.

electron is just a wrapper for the browser tho

It's funny that despite how terrible, convoluted, and maladapted web tech is for displaying complex GUIs, it still gradually ate the lunch of every native component library, and they just couldn't innovate to keep up on any front.

Amazon just released an OS that uses React Native for all of its GUI.


It's easy to design bad software and write bad code. Like the old saying: "I didn't have time to write you a short letter, so I wrote you a long one". Businesses don't have time to write good and nice software, so they write bad software.

If they have time to write nice software, they generally can only afford to do it once.

Lots of Electron apps are great to use.


Why do you consider Electron maladapted? It has really reduced the friction to write GUIs in an enterprise environment.

I didn't really mean Electron, but rather the unholy amalgam of three languages, each with 20 years of "development", which mostly consisted of decrapifying and piling up new (potentially crappy) stuff. Although Electron, with the UI context and the system (backend? background?) context both running JS, is another can of worms.

> It has really reduced the friction to write GUIs in an enterprise environment.

Thereby adapted to devs' needs, rather than users'.


It's been winning for a while

The anti-Electron meme is a vocal minority who don’t realize they’re a vocal minority. It’s over represented on Hacker News but outside of HN and other niches, people do not care what’s under the hood. They only care that it works and it’s free.

I used Visual Studio Code across a number of machines including my extremely underpowered low-spec test laptop. Honestly it’s fine everywhere.

Day to day, I use an Apple Silicon laptop. These are all more than fast enough for a smooth experience in Visual Studio Code.

At this point the only people who think Electron is a problem for Visual Studio Code either don’t actually use it (and therefore don’t know what they’re talking about) or they’re obsessing over things like checking the memory usage of apps and being upset that it could be lower in their imaginary perfect world.


why? I don't have a problem with it, building extensions for VS Code is pretty easy

Alternatives have a lot of features to implement to reach parity


Complaining about Electron is an ideological battle, not a practical argument. The people who push these arguments don’t care that it actually runs very well on even below average developer laptops, they think it should have been written in something native.

The word "developer" is doing a lot of work there spec-wise.

The extent to which Electron apps run well depends on how many you're running and how much RAM you have to spare.

When I complain about electron it has nothing to do with ideology, it's because I do run out of memory, and then I look at my process lists and see these apps using 10x as much as native equivalents.

And the worst part of wasting memory is that it hasn't changed much in price for quite a while. Current model memory has regularly been available for less than $4/GB since 2012, and as of a couple months ago you could get it for $2.50/GB. So even a 50% boost in use wipes out the savings since then. And sure the newer RAM is a lot faster, but that doesn't help me run multiple programs at the same time.


I regularly run 6+ electron apps on a M2 Air and notice no slowdown

2x as many chrome instances, no issues


Sure, 6 electron apps by themselves will eat some gigabytes and you won't notice the difference.

If you didn't have those gigabytes of memory sitting idle, you would notice. Either ugly swapping behaviors or programs just dying.

I use all my memory and can't add more, so electron causes me slowdowns regularly. Not constantly, but regularly, mostly when switching tasks.


> The word "developer" is doing a lot of work there spec-wise.

Visual Studio Code is a developer tool, so there’s no reason to complain about that.

I run multiple Electron apps at a time even on low spec machines and it’s fine. The amount of hypothetical complaining going on about this topic is getting silly.

You know these apps don’t literally need to have everything resident in RAM all the time, right?


> I run multiple Electron apps at a time even on low spec machines and it’s fine.

"Multiple" isn't too impressive when you compare that a blank windows install has more than a hundred processes going. Why accept bloat in some when it would break the computer if it was in all of them?

> Visual Studio Code is a developer tool, so there’s no reason to complain about that.

Even then, I don't see why developers should be forced to have better computers just to run things like editors. The point of a beefy computer is to do things like compile.

But most of what I'm stuck with Electron-wise is not developer tools.

> The amount of hypothetical complaining going on about this topic is getting silly.

I am complaining about REAL problems that happen to me often.

> You know these apps don’t literally need to have everything resident in RAM all the time, right?

Don't worry, I'm looking specifically at the working set that does need to stay resident for them to be responsive.


...so if you spend an extra $4 on your computer, you can get an extra GB of memory to run Electron in?

Here's the other unspoken issue: WHAT ELSE DO YOU NEED SO MUCH MEMORY FOR!?

When I use a computer, I am in the minority of users who run intensive stuff like a compiler or ML training run. That's still a minute portion of the total time I spend on my computer. You know what I always have open? A browser and a text editor.

Yes, they could use less memory. But I don't need them to use less memory, I need them to run quickly and smoothly because even a 64GB stick of RAM costs almost nothing compared to how much waiting for your browser sucks.


My motherboard does not support more memory. Closer to hundreds of dollars than $4. And no I will not justify my memory use to you.

And price is a pathetic excuse for bad work. RAM gets 50x cheaper and some devs think it's fine to use 50x as much of it making their app work? That's awful. That's why computers are still unresponsive half the time despite miracles of chipmaking.

Devs getting good computers compounds this problem too, when they get it to "fast enough" on their machine and stop touching it.

And memory being cheap is an especially bad justification when a program is used by many people. If you make 50 million people use $4 of RAM, that's a lot. Except half the time the OEM they bought the computer from charges $20 for that much extra RAM. Now the bloat's wasting a billion dollars.

And please remember that a lot of people have 4GB or 8GB and no way to replace it. Their apps move to electron and they can't run them all at once anymore? Awful.


> RAM gets 50x cheaper and some devs think it's fine to use 50x as much of it making their app work? That's awful.

That's ABSURD.

> That's why computers are still unresponsive half the time despite miracles of chipmaking.

Have you ever actually used VSCode? It's pretty snappy even on older hardware.

Of course, software can be written poorly and still fit in a small amount of memory, too :)

> Now the bloat's wasting a billion dollars.

Unless users had some other reason for buying a machine with a lot of RAM, like playing video games or compiling code.

Do you think most users spec their machines with the exact 4GB of RAM that it takes to run a single poorly-written Electron app?

> And please remember that a lot of people have 4GB or 8GB and no way to replace it. Their apps move to electron and they can't run them all at once anymore? Awful.

Dude, it's 2025.

I googled "cheapest smartphones India" and the first result was for the Xiaomi POCO F1. It has 8GB of RAM and costs ₹6,199 - about $62. That's a whole-ass _phone_, not just the RAM.

If you want to buy a single 8GB stick of DDR3? That's about $15 new.

> My motherboard does not support more memory. Closer to hundreds of dollars than $4.

If you are buying HUNDREDS of dollars of RAM, you are building a powerful system which almost certainly is sitting idle most of the time.

> And no I will not justify my memory use to you.

Nobody is forcing you to run an electron app, they're just not catering to this weird fetish for having lots of unused RAM all the time.


> That's ABSURD.

What is? The devs or my claim? There are apps that use stupid amounts of memory to do the same thing a windows 98 app could do.

And you can do good or bad within the framework of electron but the baseline starts off fat.

> Unless users had some other reason for buying a machine with a lot of RAM, like playing video games or compiling code.

If they want to do both at the same time, they need the extra. Things like music or chat apps are a constant load.

> Dude, it's 2025.

As recently as 2024 a baseline Mac came with 8GB. Soldered, so you can't buy a stick of anything.

> If you are buying HUNDREDS of dollars of RAM

Not hundreds of dollars of RAM, hundreds of dollars to get a different platform that accepts more RAM.

> Nobody is forcing you to run an electron app

I either don't get to use many programs and services, or I have to deal with these problems that they refuse to solve. So it's reasonable to complain even though I'm not forced.

> weird fetish for having lots of unused RAM

I have no idea why you think I'm asking for unused RAM.

When I run out, I don't mean that my free amount tipped below 10GB, I mean I ran out and things lag pretty badly while swapping, and without swap would have crashed entirely.


same people pushing rust as "it's just faster" without considering the complexities that exist outside the language that impact performance?

Ease of writing and testing extensions is actually the reason Electron won the IDE wars.

Microsoft made a great decision to jump on the trend and just pour money into lapping Atom and the like in optimization and polish.

Especially when you compare it to Microsoft's effort for the desktop. They accumulated several more or less component libraries over the years, and I still prefer WinForms.


What other UI framework looks as good on Windows, Mac and Linux?

If you want an Electron app that doesn't lag terribly, you'll end up rewriting the UI layer from scratch anyway. VSCode already renders terminal on GPU, and a GPU-rendered editor area is experimental. There will soon be no web UI left at all.

> If you want an Electron app that doesn't lag terribly

My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.

I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:

- Visual Studio Code: 589.4 MB

- Visual Studio 2022: 732.6 MB

update:

I see a lot of love for Jetbrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!


I easily notice lag in VSCode even without plugins, especially if using it right after Zed. Ngl they made it astonishingly fast for an Electron app, but there are physical limits to what can be done in a web stack with garbage-collected JS.

That easily takes the prize for worst-designed benchmark, in my opinion.

Have you tried Emacs, Vim, Sublime, Notepad++, ...? Visual Studio and Android Studio are full IDEs, meaning upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than text editors.


- notepad++: 56.4 MB (went gray-window unresponsive for 10 seconds when opening the explorer)

- notepad.exe: 54.3 MB

- emacs: 15.2 MB

- vim: 5.5MB

I would argue that notepad++ is not really comparable to VSCode, and that VSCode is closer to an IDE, especially given the context of this thread. TUIs are not offering a similar GUI app experience, but vim serves as a nice baseline.

I think that when people dump on Electron, they are picturing an alternative implementation like Win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark because it's the most common critique I read with respect to Electron when these are suggested.

It is obviously possible to beat a browser-wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where the dependency bloat and memory management is often even worse.


Try gvim, neovim-qt or any other neovim gui client, before calling vim a "TUI only experience".

Also, Emacs has been a GUI app since the '90s.


I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.

> RAM is so incredibly cheap compared to 5/10/15/20 years ago

Compared to 20 years ago that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes RAM actually costs more today than 10 years ago. If we ignore spikes and buy when the cycle of memory prices is low, DDR3 in 2012 was not much more than the price DDR5 was sitting at for the last two years.


> I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.

I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects, front end in WebStorm, back end in IntelliJ IDEA, DB in DataGrip sometimes) then it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.

Had that issue with 32 GB of RAM on work laptop, in part also cause the services themselves took between 512 MB and 2 GB of memory to run (thanks to Java and Spring/Boot).


I prefer my RAM being used for fs cache or other more useful stuff, instead of launching full lobotomized web browsers.

I don’t really complain about bloat in IDEs. They have their uses. But VSCode feature set is a text editor and it’s really bloated for that.

Anyone saying that Java-based Jetbrains is worse than Electron-based VS Code, in terms of being more lightweight, is living in an alternate universe which can’t be reached by rational means.

> VSCode already renders terminal on GPU

When did they add that? Last time I used it, it was still based on xterm.js.

Also, technically Chromium/Blink has GPU rendering built in for web pages, so everything could run on GPU.


Enabled by default for about a year now

> GPU acceleration driven by the WebGL renderer is enabled in the terminal by default. This helps the terminal work faster and display at a high FPS by significantly reducing the time the CPU spends rendering each frame

https://code.visualstudio.com/docs/terminal/appearance#_gpu-...


Wow, it's true--Terminal is <canvas>, while the editor is DOM elements (for now). I'm impressed that I use both every day and never noticed any difference.

I'm not sure how you went from terminal and editor GPU rendering, which can benefit from it, to "there will soon be no web ui left at all".

This is the painful truth, isn't it?

IMO The next best cross-platform GUI framework is Qt (FreeCAD, QGIS, etc.)

Qt6 can look quite nice with QSS/QStyle themes, these days, and its native affordances are fairly good.

But it's not close. VSCode is nice-looking, to me.


Godot looks ok and is surprisingly easy to work with.

Could you suggest an example such application we can try / look at screenshots of?

This question is so easy to answer: Qt! Signed by: Person who frequently shills for Qt on HN. :)

Could you suggest an example such application we can try / look at screenshots of?

(I've been aware of Qt for like two decades; back in the early 2000s my employer was evaluating such options as Tk, wxWindows, and ultimately settled on Java, I think with AWT. Qt seems to have a determined survival niche in "embedded systems that aren't android"?)


I would plug my note-taking app written in Qt C++ and QML: https://get-notes.com.

What’s long term exactly? Between VSCode and previous winners Brackets and Atom, Electron has been in the top 5 in this space for over a decade already.

I think the ship sailed


Care to explain why? I like Electron but I've switched to Tauri because it feels way faster and more secure.

It's like those recipes for yogurt.

In order to build a web app, you will first need a web app


I wouldn't bet on Google product for anything long-term.

Even if those devs are vibe-oriented?

it's held the market for over 10 years tho... i wish zed weren't under the gpl

Why not GPL? So we could be seeing closed source proprietary forks by now? How do you think the Zed team would feel about that?

15 years ago, every company had its own "BlahBlah Studio" IDE built on top of Eclipse. Now it's VSCode.

Meanwhile, JetBrains IDEs are still the best, but remain unpopular outside of Android Studio.


    > remain unpopular outside of Android Studio
What a strange claim. For enterprise Java, is there a serious alternative in 2025? And Rider is slowly eating the lunch of (classic) Visual Studio for C# development. I used it again recently to write an Excel XLL plug-in. I could not believe how far Rider has come in 10 years.

Oh, sure. I've been using IntelliJ since 2003. But compare the number of C# developers and the number of JS developers.

In my current company, only I am using IntelliJ IDEs. Other people have never even tried them, except for Android Studio.


And IntelliJ

PyCharm’s lack of popularity surprises me. Maybe it’s not good enough at venvs


IME PyCharm’s weakness is not integrating with modern tooling like ruff/pyright - their built-in type checker is terrible at catching stuff, and somehow there isn’t an easy way to run mypy, black, or isort in it.

If there’s a workflow I’m missing please let me know because I want to love it!


Oh, it's good at venvs. Lots of flexibility too on whether to use pip, conda, or uv.

I just checked and I don’t even have the JVM installed on my machine. It seems like Java is dead for consumer applications. Not saying that’s why they aren’t popular but I’m sure it doesn’t help.

IntelliJ IDEs bundle the JVM, so you don't need to install it separately.

Every Java app these days bundles a JVM. It was made easy with jlink like 10 years ago. Only the needed parts of the JVM are included, so it's lightweight.

In the grand scheme of things, Microsoft had always spent more money on developer tooling than most other companies, even in the 90s.

Hence even the infamous Ballmer quote.


In user numbers, maybe. JetBrains is far ahead in actual developer experience though

I wouldn't underestimate Eclipse user statistics. That may sound insane in 2025, but I've seen a lot of heavily customized eclipse editors still kicking around for vendor specific systems, setting aside that Java is still a pretty large language in its own right.

At best, that's subjective, but it's fact that JetBrains is comically far behind when it comes to AI tooling.

They have a chance to compete fresh with Fleet, but they are not making progress on even the basic IDE there, let alone getting anywhere near Cursor when it comes to LLM integration.


JetBrains' advantage is that they have full integration and better understanding of your code. WebStorm works better with TypeScript than even Microsoft's own creation. This all translates into AI performance

Have you actually given them a real test yet - either Junie or even the baseline chat?


Junie is good. Needs a few UI tweaks, but the code it generates is state of the art.

Developers, developers, developers!

https://www.youtube.com/watch?v=Vhh_GeBPOhs


I see VSCode management has been firmly redirected to prioritize GitHub's failing and behind "AI coding" competition entry. When that predictably falters, expect them to lose interest in the editor altogether.

VSCode IS chrome though.

Kind of like how Android is linux.

More like "OBS is Qt". Which it is not, OBS uses Qt. And Chrome is just a runtime and GUI framework for VS Code. Let's not confuse forks of software with software built on something.

I believe our definitions of "winning the IDE wars" are very, very different. For one thing, using "user count" as a metric for this is like using "number of lines of code added" in a performance review. And even if that were part of the metric, people who use it but don't fall so absolutely in love with it that they become the ones advocating for its use are only worth a tiny fraction of a "user".

neovim won the IDE wars before it even started. Zed has potential. I don't know what IntelliJ is.


> I don't know what IntelliJ is.

It started as a modernized Eclipse competitor (the Java IDE), but they've built a bunch of other IDEs based on it. Idk if it still runs on Java or not, but it had potential the last time I used it, about a decade ago. But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.


Android Studio is built on the IntelliJ stack. JetBrains just launched a dedicated Claude button (the button just opens up Claude in the IDE, but there are some pretty neat IDE integrations that it supports, like being able to see the text selection, and using the IDE's diff tool). I wonder if that's why Google decided to go with VS Code?

Uh, isn't that the regular Claude Code extension that's been available for ages at this point? Not JetBrains' but Anthropic's own development?

As a person paying for the jetbrains ultimate package (all ides), I think going with vscode is a very solid decision.

The JetBrains IDEs still have various features which I always miss whenever I need to use another IDE (like way better "import" suggestions, as an easy-to-understand example)... But unless you're writing in specific languages like Java, VSCode is way quicker and works just fine - and that applies even more to agentic development, where you're using these features less and less...


Quick comment, our AI Chat now has Claude integration. Don't need the Anthropic plugin.

Jetbrains IDEs are all based on the JVM - and they work better than VSCode or the full Visual Studio for me. It's the full blown VS (which has many parts written in C++) that is the most sluggish of them all.

I don't know what it's based on, but it works extremely well. I use Rider & WebStorm daily and I find Rider is a lot faster than Visual Studio when it comes to the Unreal Engine codebase and WebStorm seems to be a lot more reliable than VSCode nowadays (I don't know if it's at fault, but ever since copilot was integrated I find that code completion can stop working for minutes at a time. Very annoying)

You don't actually use it but somehow you know that "running GUI apps on the JVM isn't the best for 1000 [unspecified] reasons".

- This isn't a scientific approach.


I don't know why this post is downvoted. My cynical reply to yours: "No, this isn't a scientific approach. It is the tin-foil-hat HN approach!"

Since you last used IntelliJ "about a decade ago", what do you use instead?

    > But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
What would you recommend instead of Swing on the JVM? Since you have "1000 reasons", it should be easy to list a few here. As a friendly reminder, they would need to port (probably) millions of lines of Java source code to whatever framework/language you select. The only practical alternative I can think of would be C++ & Qt, but the development speed would be so much slower than Java & Swing.

Also, with the advent of wildly modern JVMs (11+), the JIT process is so insanely good now. Why cannot a GUI be written in Swing and run on the JVM?


Notice that IntelliJ uses its own UI framework, really, which I don't think has much Swing left in it after all these years. And Kotlin has been the main language for a decade now.

> I don’t know what IntelliJ is.

“I never read The Economist” – Management Trainee, aged 42.


The IntelliJ family are probably the best IDEs on the market currently.

> Cursor has some real annoying usability issues - like their previous/next code change never going away and no way to disable it.

The state of Cursor "review" features make me convinced that the cursor devs themselves are not dogfooding their own product.

It drives me crazy when hundreds of changes build up, I've already reviewed and committed everything, but I still have all these "pending changes to review".

Ideally committing a change should treat it as accepted. At the very least, there needs to be a way to globally "accept all".


There’s a setting for that:

Cursor Settings -> Agents -> Applying Changes -> Auto-Accept on Commit


Committing accepts all changes for me, and has for as long as I can remember.

There are situations where this is not the case, for sure.

Lol the second I saw the antigravity release I thought "there's no way I'm using that, they will kill it within a year". Looks like they're trying to kill it at birth.

Exactly my reaction. Every time I've used something from Google, it ends up dead in a few years. Life is too short to waste so many years learning something that is destined to die shortly

These are just extended press releases, for marketing and management layers who don't have to use these things themselves but can look good when talking about them.

agree but at the same time there's not too much lock in with these IDEs these days and switching is very easy. Especially since they're all VSCode forks

Thanks for having a go at it.

I am fed up with VSCode clones; if I have to put up with Electron, at least I will use the original one.


This is the result of Google's Windsurf acquisition.

I expect huge improvements are still to be made.



Google bought people and tech that made Windsurf:

https://windsurf.com/blog/windsurfs-next-stage


> What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.

I wonder how much Google shareholders paid for that 20 minutes. And whether it's more or less than the corresponding extremely small stock price boost from this announcement.


I bet if you sign up for Google AI Ultra for a month your limits will disappear.

I'm a Google AI Pro subscriber (the $~20/month one).

I don't think it's connected in any way, though. Their pricing page doesn't mention it. https://antigravity.google/pricing

If it were true, it would be a big miss not to point that out when you run out of credits, on their pricing page, or anywhere in their app.

I should also mention that the first time I prompted it, I got a different, 'overloaded'-type out-of-credits message. The one I got at the end was different.

I've rotated on paying the $200/month plans with Anthropic, Cursor, and OpenAI. But never Google's. They have maybe the best raw power in their models - smartest, and extremely fast for what they are. But they always drop the ball on usability. Both in terms of software surrounding the model and raw model attitude. These things matter.


Nope, I did this today to try and see if it would work.

It does not.


They do not

> If you release a product, let those who actually want to use it have a path to do so.

This is great fundamental business advice. We are in the AI age, but these companies seem to have forgotten basic business things.


Pretty much all this. Remarkably, the website has a "Pricing" page with... no pricing information whatsoever.

> There were some UI glitches

Interesting that a next-gen open-source-based agentic coding platform with superhuman coding models behind it can have UI glitches. Very interesting that even the website itself is kind of sluggish. Surely, someone, somewhere must have ever optimized something related to UI rendering, such that a model could learn from it.


> after about 20 mins - oh, no. Out of credits.

And they say:

Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won’t have to worry about, and you feel unrestrained in your usage of Antigravity

You have to wonder what kind of models they ran for this.


What looked like out of credits for me was really just server overload. Check the error and try again

The fact that they released this IDE means that they may cut Cursor out of their API in the future. Google has both the organizational history (Google Maps) and the invincibility of cutting clients out of their API.

This. I got an overload error on the very first prompt just now. Didn't expect Google to run into overload errors.

speaking of paying for LLMs, am i doing something wrong? i paid Cursor $192 for a year of their entry-level plan and i never run out of anything. I code professionally, albeit i'm at the stage where it's 80% product dev, finding the right thing to build.

Is there another world where $200/m is needed to run hundreds of agents or something?

am i behind and i dont even know it?


When did you pay for it? There was a time when its limits were very generous. If you bought an annual plan at that time then you will continue with that until renewal. Or, alternatively, you’re using the Auto model which is still apparently unlimited. That’s going away.

It’s very easy to run into limits if you choose more expensive models and aren’t grandfathered.


yep just investigated and seems I got in at a good time. i paid exactly on Jan 1st 2025.

Yes, the auto model is good enough for me especially with well documented frameworks (rails, frontend madness).

Thanks for the response, looks like i'm in for a reckoning come New year's day


I pay $10/month for GitHub Copilot and I usually get to 100% burn on the final day of the month. I use it extensively for the entire month about 12 hours a day. It doesn't include any of the "Pro" models that are only on the $200/mo plans, but it does a pretty fantastic job.

I tried messing around with it and it kept bombing out, and when it did work, it produced worse results than Cursor.

> I can't pay for it and it has 20 minutes of use

You can't provide an API key for a project that has billing enabled?


>no ambition

Sounds like the modus operandi of most large tech companies these days. If you exclude Valve.


taking an entire product from the competition just to repackage it with your own AI is wild

Electron is built on top of V8; Edge uses Chromium.

I think that's the beauty of open source.


Google doesn't care about products and never has. Anything they do is just creating another mouth to ingest data with.

>It is a vs code fork.

Oh ffs


> It is a vs code fork.

With vendor lock-in to Google's AI ecosystem, likely scraping/training on all of your code (regardless of whatever their ToS/EULA says), and being blocked from using the main VS Code extensions library.



