Hacker News

Cursor and Windsurf pricing really turned me off. I prefer Claude Code's direct API costs, because it feels more quantifiable to me cost wise. I can load up Claude Code, implement a feature, and close it, and I get a solid dollar value of how much that cost me. It makes it easier for me to mentally write off the low cost of wasteful requests when the AI gets something wrong or starts to spin its wheels.

With Cursor/Windsurf, you make requests, your allotted credits tick down (which creates anxiety about running out), and you're left doing mental math to figure out what those requests actually cost you. It feels like a way to obfuscate the real cost to the user, and also to create an incentive not to use the product very much, because the limits approach rapidly during a focus/flow coding session. I spent about an hour using Cursor Pro and had used up over 30% of my monthly credits on something relatively small, which made me realize their $20/mo plan likely was not going to meet my needs, and how much it would really cost me seemed like an unanswerable question.

I just don't like it as a customer and it makes me very suspicious of the business model as a result. I spent about $50 on a week with Claude Code, and could easily spend more I bet. The idea that Cursor and Windsurf are suggesting a $20/mo plan could be a good fit for someone like me, in the face of that $50 in one week figure, further illustrates that there is something that doesn't quite match up with these 'credit' based billing systems.




I'm paying $30/month for Gemini. It's worth every damn penny and then some and then some and then some. It's absolutely amazing at code reviews. Much more thorough than a human code reviewer, including me. It humbles me sometimes, which is good. Unless you can't use Google products, I'd seriously give it a try.

Now, I do still use ChatGPT sometimes. It recently helped me find a very simple solution to a pretty obscure compiler error that I'd never encountered in my several decades long career as a developer. Gemini didn't even get close.

Most of the other services seem focused on using the AWS pay-as-you-go pricing model. It's okay to use that pricing model but it's not easy for me to pitch it at work when I can't say exactly what the monthly cost is going to be.

I still love being a developer, but, knowing what I know now, I feel like I'd like it a lot less without Gemini. I'm much more productive and less stressed too. And I want to add, about the stress: being a developer is stressful sometimes. It can be a cold, quiet stress too, not necessarily a loud, hot stress. AI helps solve so many little problems very quickly: what to add or remove from a makefile, why my code isn't linking. Or tricky stuff like using intrinsics. Holy fuck! It's really amazeballs.


How do you get Gemini to do a code review? Do you just paste the code into the web UI?


It depends. For a class or a function, yes, I copy and paste.

When I want a review of a larger project I am working on (2500 classes), I concatenate the chunks of the codebase into a .txt file and upload it to Gemini.
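That concatenation step is easy to script yourself. Here's a minimal sketch (the function name and the `=====` header format are my own choices, not a standard):

```python
from pathlib import Path

def concat_sources(root, out_path, exts=(".py",)):
    """Concatenate every matching source file under `root` into one
    text file, with a header line naming each file so the model can
    tell where each chunk came from."""
    root = Path(root)
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(root.rglob("*")):
            if path.is_file() and path.suffix in exts:
                out.write(f"\n===== {path.relative_to(root)} =====\n")
                out.write(path.read_text(encoding="utf-8", errors="replace"))
                out.write("\n")
```

For a very large codebase you may need to split the result into several files to stay under the model's context/upload limits.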

Then we go over it.

I've even uploaded the entire codebase and asked it to write documentation and it did an excellent job providing a framework for me.


It seems most people aren’t worried about handing their code over to LLMs. Why is that? Isn’t there a concern that they will just train on your code? What if you’re trying to build a business, is there a fear that your idea or implementation will just get stolen or absorbed into the LLM and spit out into someone else’s code base? Or do people just not care anymore because code is so trivial to generate that it’s not worth very much?


Most LLMs have a privacy mode, including Windsurf, that states they don't use your input for training. But I think your last sentence is key: code is hardly ever so unique that it's super valuable by itself. Ideas can be copied easily. For most software, it's the whole package: really tackling the user problem, with the best user experience, with the right pricing, training, enablement, support, roadmap, community, network effect, etc.


Thanks that makes sense!


In my case, the code is public facing.


"concatenate the chunks of the codebase into a .txt file"

I think simonw has a tool that also does something like this? Edit: found it:

https://github.com/simonw/files-to-prompt


I have found Repomix[1] to be a good and straightforward tool for the task.

[1]: https://github.com/yamadashy/repomix


Nice, this looks great too.


I rolled my own, but it's exactly this.


Yeah, my colleague just wrote about this exact problem of incentive misalignment with Cursor and Windsurf: https://blog.kilocode.ai/p/why-cursors-flat-fee-pricing-will

The economist in me says "just show the prices", though the psychologist in me says "that's hella stressful". ;)


Why does the psychologist in you say "that's hella stressful"? Stressful for whom? What is the source of their stress?


Seeing the $ every time I do something, even if it's $0.50, can be a little stressful. We should have an option to hide it per-request and just show a progress bar for the current top-up.


I find a progress bar for a top-up much more stressful than just a counter of how much I've spent.

Counting down to a deadline vs counting up to nothing.

If it costs money and spending money is stressful, there has to be a cost somewhere; we can't remove the stress entirely.


I use windsurf, have the largest plan and still need to top up quite a bit.

So for me it has a price per task, sort of, because you end up topping it up $10 at a time as you run out.

The plans aren't the right size for professional work, but maybe they wanted to keep the price points low?


> The plans aren't the right size for professional work

I think there is a fundamental pricing misalignment: products priced as if they do the professional work themselves are being sold to people who only want help with their professional work.

Another way: their pricing will likely make much more sense for a person whose coding skill is closer to zero (& who doesn't want to learn) than it will for a person who is looking for an AI assistant.


For me it felt like the opposite: the pricing works for software engineers who carefully validate every line of code and ask for tests and documentation. It doesn't work for vibe coding, where the user just trial-and-errors code generation until it works. You run through credits much faster that way.


I guess what I am saying is the pricing is high for the person sitting at the keyboard but low for the person signing the payroll checks (because it's cheaper than an engineer). At least, that's the direction of things.


For me, it feels like because I'm asking for the change, and for the tests to pass, and for the types to pass pyright, etc., the agent spends more cycles on a change than it would for someone who didn't code.


I haven't used either, but reading Cursor's website, they let you add your own Claude API key. Do they still fiddle with your requests when you use your own key?


When you go to add your own API key into Cursor, you get a warning message that several Cursor features cannot be used if you plug in your own API key. I would totally have done that if not for that message.


> I spent about an hour using Cursor Pro and had used up over 30% of my monthly credits

Sorry, but how is this possible? They give 500 credits in a month for the "premium" queries. I don't even think I'd be able to ask more than one question per minute even with tiny requests. I haven't tried the Agent mode. Does that burn through queries?


I had to do a little digging to respond to this properly.

I was on the "Pro Trial", where you get 150 premium requests, and I had very quickly used 34 of them, which admittedly is 22% and not 30%. Their pricing page says the Free plan includes a "Pro two-week trial", but they don't explain that on the trial you only get 150 premium requests, while on the real Pro plan you get 500. So you're correct to be skeptical: I did not use 30% of the 500 requests on the Pro plan, I used 22% of the 150 requests you get on the trial.

And yes, I think the agent mode can burn through credits pretty quickly.


With an agent, one request could be 20 or more iterations.


How do you run out of premium requests? I've peeked at the stats across my company and never seen that happen.


I chalk it up to VC subsidized pricing. I use my monthly Cursor quota, then switch to Claude Code when I run out.



