How I write code using Cursor (arguingwithalgorithms.com)
491 points by tomyedwab 65 days ago | 407 comments



I've been using AI to solve isolated problems, mainly as a replacement for a search engine, specifically for programming. I'm still not convinced by these "write a whole block of code for me" use cases. Here are my arguments against the videos from the article.

1. Snake case to camelCase. Even without AI we can already complete these tasks easily. VSCode itself has a "Transform to Camel Case" command for the selection. It is nice that the AI can figure out which text to transform based on context, but not too impressive. I could select one ":", use "Select All Occurrences", press left, then ctrl+shift+left to select all the keys.

2. Generate boilerplate from documentation. Boilerplate is tedious, but not really time-consuming. How many of you spend 90% of your time writing boilerplate instead of the core logic of the project? If a language/framework (Java used to be one, not sure about now) requires me to spend that much time on boilerplate, that's a language to be ditched/fixed.

3. Turn a problem description into a block of concurrency code. Unlike the boilerplate, this code is more complicated. If I already know the area, I don't need AI's help to begin with. If I don't know, how can I trust the generated code to be correct? It could miss a corner case that my question didn't specify and that I don't yet know exists myself. In the end, I still need to spend time learning Python concurrency, and then I'll be writing the same code myself in no time.
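To make the corner-case worry concrete, here is a small made-up illustration of my own (nothing from the article): a thread-pool snippet that looks fine but quietly swallows exceptions unless you remember to consume the futures.

    # Made-up illustration: a worker pool that looks correct but can drop
    # exceptions - the kind of corner case a generated snippet can miss if the
    # prompt never mentions error handling.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def fetch(url: str) -> str:
        if "bad" in url:
            raise ValueError(f"cannot fetch {url}")
        return f"contents of {url}"

    urls = ["https://example.com/a", "https://example.com/bad", "https://example.com/b"]

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch, u) for u in urls]
        # If you never call .result(), the ValueError above vanishes silently.
        for fut in as_completed(futures):
            try:
                print(fut.result())
            except ValueError as exc:
                print("failed:", exc)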

In summary, my experience with AI is that if the question is easy (e.g. it's easy to find the exact same question on StackOverflow), the answer is highly accurate. But if it is a unique question, the accuracy drops quickly. And it is the latter case where we spend most of our time.


I started like this. Then I came around and can’t imagine going back.

It’s kinda like having a really smart new grad, who works instantly, and has memorized all the docs. Yes I have to code review and guide it. That’s an easy trade off to make for typing 1000 tokens/s, never losing focus, and double checking every detail in realtime.

First: it really does save a ton of time for tedious tasks. My best example is test cases. I can write a method in 3 minutes, but Sonnet will write the 8 best test cases in 4 seconds, which would have taken me 10 mins of switching back and forth, looking at branches/errors, and mocking. I can code review and run these in 30s. Often it finds a bug. It’s definitely more patient than me in writing detailed tests.

Instant and pretty great code review: it can understand what you are trying to do, find issues, and fix them quickly. Just ask it to review and fix issues.

Writing new code: it's actually pretty great at this. I needed a util class for config that had fallbacks to config files, env vars and defaults. And I wanted type checking to work on the accessors. Nothing hard, but it would have taken time to look at docs for YAML parsing, how to find the home directory, which env var API returns null vs. errors on blank, typing, etc. All easy, but it takes time. Instead I described it in about 20 seconds and it wrote it (with tests) in a few seconds.
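Roughly the shape of what I described - a quick sketch of my own for illustration (paths and names are made up, this is not the code it generated):

    # Quick sketch (mine, not the generated code): env var > YAML file > default,
    # with typed accessors.
    import os
    from pathlib import Path
    from typing import Optional

    import yaml  # assumes PyYAML is available

    class Config:
        def __init__(self, path: Path = Path.home() / ".myapp" / "config.yaml"):
            self._data = {}
            if path.exists():
                self._data = yaml.safe_load(path.read_text()) or {}

        def get_str(self, key: str, default: Optional[str] = None) -> Optional[str]:
            # Environment variable wins over the config file, which wins over the default.
            return os.environ.get(key.upper(), self._data.get(key, default))

        def get_int(self, key: str, default: int = 0) -> int:
            value = self.get_str(key)
            return int(value) if value is not None else default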

It’s moved well past the “it can answer questions from Stack Overflow” stage. If it has been a while (a while = 6 months in ML), try again with the new Sonnet 3.5.


>My best example is test cases. I can write a method in 3 minutes, but Sonnet will write the 8 best test cases in 4 seconds

For me it doesn't work. Generated tests either fail to run, or run and fail.

I work in large C# codebases and in each file I have lots of injected dependencies. I have one public method which can call lots of private methods in the same class.

AI either doesn't properly mock the dependencies, or ignores what happens in the private methods.

If I take a lot of time guiding it where to look, it can generate unit tests that pass. But it takes longer than if I write the unit tests myself.


For me it's the same. It's usually just some hallucinated garbage. All of these LLMs don't have the full picture of my project.

When I can give them isolated tasks like "convert X to Y" or "create a foo that does bar", it's excellent, but for unit testing? Not even going to try anymore. I can write 5 working unit tests manually in the time it takes to write 5 prompts that give me useless stuff I then have to fix up manually.

Why can't we have a LLM cache for a project just like I have a build cache? Analyze one particular commit on the main branch very expensively, then only calculate the differences from that point. Pretty much like git works, just for your model.
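A hypothetical sketch of what I mean: cache the expensive per-file analysis keyed by content hash, and only re-analyze files that have changed since the last full pass.

    # Hypothetical sketch of the idea: cache per-file analysis keyed by content
    # hash, so only files changed since the expensive full pass get re-analyzed.
    import hashlib
    import json
    from pathlib import Path

    CACHE_PATH = Path(".llm_cache.json")

    def file_hash(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def analyze(path: Path) -> dict:
        # Placeholder for the expensive step (summaries, embeddings, ...).
        return {"summary": f"analysis of {path.name}"}

    def update_cache(root: Path) -> dict:
        cache = json.loads(CACHE_PATH.read_text()) if CACHE_PATH.exists() else {}
        fresh = {}
        for path in sorted(root.rglob("*.py")):
            digest = file_hash(path)
            entry = cache.get(str(path))
            if entry and entry["hash"] == digest:
                fresh[str(path)] = entry  # unchanged since last pass: reuse
            else:
                fresh[str(path)] = {"hash": digest, "result": analyze(path)}
        CACHE_PATH.write_text(json.dumps(fresh, indent=2))
        return fresh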


"It's usually just some hallucinated garbage. All of these LLM's don't have the full picture of my project."

Cursor can have the whole project in the context, or you can specify the specific files that you want.


> Cursor can have the whole project in the context

Depends on the size of the project. You can’t shove all of Google’s monorepo into an LLM’s context (yet).


I’m looking at 150000 lines of Swift divided over some local packages and the main app, excluding external dependencies


Do you have 150000 lines of Swift in YOUR context window?


I know how to find the context I need, being aided by the IDE and compiler. So yes, my context window contains all of the code in my project, even if it's not instantaneous.

It's not that hard to have an idea of what code is defined where in a project, since compilers have been doing that for over half a century. If I'm injecting protocols and mocks into a unit test, it shouldn't be really hard for a computer to figure out their definitions, unless they don't exist yet and I was not clear they should have been created, which would mean that I'm giving the AI the wrong prompt and the error is on my side.


> Why can't we have a LLM cache for a project just like I have a build cache? Analyze one particular commit on the main branch very expensively

It's not just very expensive - it's prohibitively expensive, I think.


With Cursor you can specify which files it reads before starting. Usually I have to attach one or two to get an ideal one-shot result.

But yeah, I use it for unit testing, not integration testing.


Ask Cursor to write usage and mocking documentation for the most important injected dependencies, then include that documentation in your context. I’ve got a large tree of such documentation in my docs folder specifically for guiding AI. Cursor’s Notebook feature can bundle together contexts.

I use Cursor to work on a Rust Qt app that uses the main branch of cxx-qt so it’s definitely not in the training data, but Claude figures out how to write correct Rust code based on the included documentation no problem, including the dependency injection I do through QmlEngine.


Sounds interesting, what are you working on?

(Fellow Qt developer)


Same thing: https://news.ycombinator.com/item?id=40740017 :)

Just saw you published your block editor blog post. Look forward to reading it!


Haha, hi again!

Awesome! Would love to hear your thoughts. Any progress on your AI client? I'm intrigued by the so many bindings to Qt. Recently, I got excited about a Mojo binding[1].

[1] https://github.com/rectalogic/mojo-qt


I’ve found it better at writing tests because it tests the code you’ve written vs. what you intended. I’ve caught logic bugs because it wrote tests with an assertion for a conditional that was backwards. The readable name of the test clearly pointed out that I was doing the wrong thing (and the test passed!).
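A made-up miniature of what I mean (not my actual code):

    # Made-up miniature: the test asserts the buggy behavior and passes, but the
    # readable test name is what exposes the inverted condition.
    def is_eligible(age: int) -> bool:
        return age < 18   # bug: the condition is backwards, it should be age >= 18

    def test_adult_is_eligible():
        # Generated against the code as written; it passes, yet the name next to
        # the assertion makes the backwards logic obvious on review.
        assert is_eligible(30) is False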


Interesting. I’ve had the opposite experience (I invert or miss a condition, it catches it).

It probably comes down to model, naming and context. Until Sonnet 3.5 my experience was similar to yours. After that, it mostly “just works”.


That sounds more like a footgun than a desirable thing to be honest!


Maybe a TLDR from all the issues I'm reading in this thread:

- It's gotten way better in the last 6 months. Both models (Sonnet 3.5 and the new October Sonnet 3.5) and tooling (Cursor). If you last tried Copilot, you should probably give it another look. It's also going to keep getting better. [1]

- It can make errors, and expect to do some code review and guiding. However the error rates are going way way down [1]. I'd say it's already below humans for a lot of tasks. I'm often doing 2/3 iterations before applying a diff, but a quick comment like "close, keep the test cases, but use the test fixture at the top of the file to reduce repeated code" and 5 seconds is all it takes to get a full refactor. Compared to code-review turn around with a team, it's magic.

- You need to learn how to use it. Setting the right prompts, adding files to the context, etc. I'd say it's already worth learning.

- It just knows the docs, and that's pretty invaluable. I know 10-ish languages, which also means I don't remember the system call to get an env var in any of them. It does, and can insert it a lot faster than I can google it. Again, you'll need to code review, but more and more it's nailing idiomatic error checking in each language.

- You don't need libraries for boilerplate tasks. zero_pad is the extreme/joke example, but a lot more of my code is just using system libraries.

- It can do things other tools can't. Tell it to take the visual style of one blog post and port it to another. Tell it to use a test file I wrote as a style reference, and update 12 other files to follow that style. Read the README and tests, then write pydocs for a library. Write a GitHub action to build docs and deploy to GitHub Pages (including suggesting libraries, deploy actions, and offering alternatives). Again: you don't blindly trust anything, you code review, and tests are critical.

[1] https://www.anthropic.com/news/3-5-models-and-computer-use


Yes, it works for new code and simple cases. If you have large code bases, it doesn't have the context and you have to baby it, telling it which files and functions it should look into before attempting to write something. That takes a lot of time.

Yes, it can do simple tasks, like you said, writing a call to get the environment variables.

But imagine you work on a basket calculation service: you have base item prices, you have to apply discounts based on some complicated rules, you have to add various kinds of taxes for various countries in the world, and you have to use a different number of decimals for each country. Each of your classes calls 5 to 6 other classes, all with a lot of business logic behind them. Besides that, you also make lots of API calls to other services.

What will the AI do for you? Nothing, it will just help you write one liners to parse or split strings. For everything else it lacks context.
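To give a flavor of what I mean, here is a hypothetical, heavily simplified shape of such a service; every step delegates to more business logic the assistant never sees from a single file.

    # Hypothetical, heavily simplified sketch; the real rules live in many other classes.
    from decimal import Decimal, ROUND_HALF_UP

    def discount_for(subtotal: Decimal, country: str) -> Decimal:
        return Decimal("0")                      # stand-in for the complicated discount rules

    def taxes_for(subtotal: Decimal, country: str) -> Decimal:
        return subtotal * Decimal("0.19") if country == "DE" else Decimal("0")

    def decimals_for(country: str) -> int:
        return 0 if country == "JP" else 2       # per-country rounding rules

    def basket_total(base_prices: list[Decimal], country: str) -> Decimal:
        subtotal = sum(base_prices, Decimal("0"))
        subtotal -= discount_for(subtotal, country)
        subtotal += taxes_for(subtotal, country)
        exponent = Decimal(10) ** -decimals_for(country)
        return subtotal.quantize(exponent, rounding=ROUND_HALF_UP)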


Are you suggesting you would inline all that logic if you hand-rolled the method? Probably not, right? You would have a high-level algorithm of easily understood parts. Why wouldn't the AI be able to 1) write that high-level algorithm and then 2) subsequently write the individual parts?


What's the logic here? "I haven't seen it so it doesn't exist?"

There are hundreds of available examples of it processing large numbers of files, and making correct changes across them. There are benchmarks with open datasets already linked in the thread [1]. It's trivial to find examples of it making much more complex changes than "one liners to parse or split strings".

[1] https://huggingface.co/datasets/princeton-nlp/SWE-bench


> Instant and pretty great code review: it can understand what you are trying to do, find issues, and fix them quickly. Just ask it to review and fix issues.

Cursor’s code review is surprisingly good. It’s caught many bugs for me that would have taken a while to debug, like off-by-one errors or improperly refactored code (like changing is_alive to is_dead and forgetting to negate conditionals).
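For readers who haven't hit that one, an illustrative miniature (not the actual code):

    # Illustrative miniature: is_alive was renamed to is_dead but the check
    # wasn't negated, so the logic silently inverted.
    class Player:
        def __init__(self, is_dead: bool):
            self.is_dead = is_dead   # renamed from is_alive

    def award_points(player: Player, points: int) -> int:
        # The original check was `if player.is_alive:`; after the rename it should
        # have become `if not player.is_dead:` but the negation was missed.
        if player.is_dead:
            return points
        return 0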


> changing is_alive to is_dead and forgetting to negate conditionals

No test broke?


Tests don’t care what you name the variable


This “really smart new grad” take is completely insane to me, especially if you know how LLMs work. Look at this SQL snippet Claude (the new Sonnet) generated recently.

    -- Get recipient's push token and sender's username
    SELECT expo_push_token, p.username 
    INTO recipient_push_token, sender_username
    FROM profiles p
    WHERE p.id = NEW.recipient_id;

Seems like the world has truly gone insane and engineers are tuned into some alternate reality a la Fox News. Well… it’ll be a sobering day when the other shoe drops.


> it can understand

It can't understand. That's not what LLMs do.


This is a prompt I gave to o1-mini a while ago: My instructions follow now. The scripts which I provided you work perfectly fine. I want you to perform a change though. The image_data.pkl and faiss_index.bin are two databases consisting of rows, one for each image, in the end, right? My problem is that there are many duplicates: images with different names but the same content. I want you to write a script which for each row, i.e. each image, opens the image in python and computes the average expected color and the average variation of color, for each of the colors red, green and blue, and over "random" over all the pixels. Make sure that this procedure is normalized with respect to the resolution. Then once this list of "defining features" is obtained, we can compute the pairwise difference. If two images have less than 1% variation in both expectation and variation, then we consider them to be identical. in this case, delete those rows/images, except for one of course, from the .pkl and the .bin I mentioned in the beginning. Write a log file at the end which lists the filenames of identical images.

It wrote the script, I ran it and it worked. I had it write another script which displays the found duplicate groups so I could see at a glance that the script had indeed worked. And for you this does not constitute any understanding? Yes it is assembling pieces of code or algorithmic procedures which it has memorized. But in this way it creates a script tailored to my wishes. The key is that it has to understand my intent.
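For reference, a minimal sketch of the procedure described in that prompt - my own illustration, not the script o1-mini produced, and leaving out the .pkl/.bin bookkeeping:

    # Minimal sketch of the described procedure: per-channel mean and standard
    # deviation, normalized by resolution, then a pairwise "within 1%" comparison.
    from pathlib import Path

    import numpy as np
    from PIL import Image

    def features(path: Path) -> np.ndarray:
        # Mean and std of R, G, B over all pixels; dividing by 255 keeps the
        # numbers independent of resolution and scale.
        pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
        return np.concatenate([pixels.mean(axis=(0, 1)), pixels.std(axis=(0, 1))])

    def looks_identical(a: np.ndarray, b: np.ndarray, tol: float = 0.01) -> bool:
        # "Less than 1% variation" in both the means and the deviations.
        return bool(np.all(np.abs(a - b) < tol))

    paths = sorted(Path("images").glob("*.jpg"))
    feats = {p: features(p) for p in paths}
    duplicates = [
        (p1.name, p2.name)
        for i, p1 in enumerate(paths)
        for p2 in paths[i + 1:]
        if looks_identical(feats[p1], feats[p2])
    ]
    print(duplicates)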


Does "it understands" just mean "it gave me what I wanted?" If so, I think it's clear that that just isn't understanding.

Understanding is something a being has or does. And understanding isn't always correct. I'm capable of understanding. My calculator isn't. When my calculator returns a correct answer, we don't say it understood me -- or that it understands anything. And when we say I'm wrong, we mean something different from what we mean when we say a calculator is wrong.

When I say LLMs can't understand, I'm saying they're no different, in this respect, from a calculator, WinZip when it unzips an archive, or a binary search algorithm when you invoke a binary-search function. The LLM, the device, the program, and the function boil down (or can) to the same primitives and the same instruction set. So if LLMs have understanding, then necessarily so do a calculator, WinZip, and a binary-search algorithm. But they don't. Or rather we have no reason to suppose they do.

If "it understands" is just shorthand for "the statistical model and program were designed and tuned in such a way that my input produced the desired output," then "understand" is, again, just unarguably the wrong word, even as shorthand. And this kind of shorthand is dangerous, because over and over I see that it stops being shorthand and becomes literal.

LLMs are basically autocorrect on steroids. We have no reason to think they understand you or your intent any more than your cell phone keyboard does when it guesses the next character or word.


When I look at an image of a dog on my computer screen, I don't think that there's an actual dog anywhere in my computer. Saying that these models "understand" because we like their output is, to me, no different from saying that there is, in fact, a real, actual dog.

"It looks like understanding" just isn't sufficient for us to conclude "it understands."


I think the problem is our traditional notions of "understanding" and "intelligence" fail us. I don't think we understand what we mean by "understanding". Whatever the LLM is doing inside, it's far removed from what a human would do. But on the face of it, from an external perspective, it has many of the same useful properties as if done by a human. And the LLM's outputs seem to be converging closer and closer to what a human would do, even though there is still a large gap. I suggest the focus here shouldn't be so much on what the LLM can't do but the speed at which it is becoming better at doing things.


I think there is only one thing we should focus on: measurable capability on tasks. Understanding, memorization, reasoning, etc. are all just shorthands we use to quickly convey an idea of a capability on a kind of task. One can also attempt to describe mechanistically how the model works, but that is very difficult; this is where you would try to describe your sense of "understanding" rigorously. To keep it simple: I think when you say that the LLM does not understand, what you must really mean is that you reckon its performance will quickly decay as the task gets more difficult in various dimensions - depth/complexity, verifiability of the result, length/duration/context size - to a degree where it is still far from being able to act as a labor-delivering agent.


Brains can’t understand either; that’s not what neurons do.


We experience our own minds and we have every reason to think that our minds are a direct product of our brains.

We don't have any reason to think that these models produce being, awareness, intention, or experience.


What is the best workflow to code with an AI?

Copy and paste the code to the Claude website? Or use an extension? Or something else?


Cursor. Mostly chat mode. Usually adding 1-2 extra files to the context before invoking, and selecting the relevant section for extra focus.


I personally use copilot, which is integrated into my IDE, almost identical to this Cursor example.


Copilot is about as far away from Cursor with Claude as the Wright Brothers' glider is to the Saturn V.


Not based on the link, I didn't see anything in that text that I can't do with copilot or which looked better to me than what copilot outputs.


Does Copilot do multi-file edits now?


Copilot Editor is a beta feature that can perform multi-file edits.


Another fun example from yesterday: pasted a blog post in markdown into a HTML comment. Selected it and told sonnet to convert it to HTML using another blog post as a style reference.

Done in 5 seconds.


And how do you trust that it didn't just alter or omit some sentences from your blog post?

I just use Pandoc for that purpose and it takes 30 seconds, including the time to install pandoc. For code generation where you'll review everything, AI makes sense; but for such conversion tasks, it doesn't because you won't review the generated HTML.


> it takes 30 seconds, including the time to install pandoc

On some speedrunning competition, maybe? Just tested on my work machine: `sudo apt-get install pandoc` took 11 seconds to complete, and it was this fast only because I already had all the dependencies installed.

Also I don't think you'll be able to fulfill the "using another blog post as a style reference" part of GP's requirements - unless, again, you're some grand-master Pandoc speedrunner.

Sure, AI will make mistakes with such conversion tasks. It's not worth it if you're going to review everything carefully anyway. In code, fortunately, you don't have to - the compiler is doing 90% of the grunt work for you. In writing, depends on context. Some text you can eyeball quickly. Sometimes you can get help from your tool.

Literally yesterday I back-ported a CV from English to Polish via Word's Translation feature. I could've done it by hand, but Word did 90% of it correctly, and fixing the remaining issues was a breeze.

Ultimately, what makes LLMs a good tool for random conversions like these is that it's just one tool. Sure, Pandoc can do GP's case better (if inputs are well-defined), but it can't do any of the 10 other ad-hoc conversions they may have needed that day.


Installing pandoc is basically a one-time cost that is amortized over its uses, so... why worry about it?

Relying on the compiler to catch every mistake is a pretty limited strategy.


> Installing pandoc is basically a one-time cost that is amortized over its uses, so... why worry about it?

Because the space of problems that today's LLMs solve well with trivial prompts is vast, far greater than any single classical tool covers. If you're comparing solutions to 100 random problems, you have to count those one-time costs, because you'll need to use some 50-100 different tools to get through them all.

> Relying on the compiler to catch every mistake is a pretty limited strategy.

No, you're relying on the compiler to catch every mistake that can be caught mechanically - exactly the kind of thing humans suck at. It's kind of the entire point of errors and warnings in compilers, or static typing for that matter.


No, if you are having an LLM generate code that you are not reviewing, you are relying on the compiler 100%. (Or the runtime, if it isn't a compiled language.)


Who said I'm not reviewing? Who isn't reviewing LLM code?


Re: trust. It just works using Sonnet 3.5. It's gained my trust. I do read it after (again, I'm more in a code-reviewer role). People make mistakes too, and I think its error rate for repetitive tasks is below most people's. I also learned how to prompt it. I'd tell it to just add formatting without changing content in the first pass. Then in a separate pass ask it to fix spelling/grammar issues. The diffs are easy to read.

Re: Pandoc. Sure, if that were the only task I used it for. But I use it for 10 different ones per day (write a JSON schema for this JSON file, write a Pydantic validator that does X, write a GitHub workflow doing Y, add syntax highlighting to this JSON, etc.). Re: this specific case - I prefer real HTML using my preferred tools (DaisyUI + Tailwind) so I can edit it after. I find myself using a lot fewer boilerplate-saving libraries, and knowing a few tools more deeply.


Why are you comparing its error rate for repetitive tasks with most people's? For such mechanical tasks we already have fully deterministic algorithms, and the error rate of those traditional algorithms is zero. You aren't usually asking a junior assistant to manually do such a conversion, so it doesn't make sense to compare its error rate with humans'.

Normalizing this kind of computer errors when there should be none makes the world a worse place, bit by bit. The kind of productivity increase you get from here does not seem worthwhile.


The OP said they had it use another HTML page as a style reference. Pandoc couldn't do that. Just like millions of other specific tasks.


That's just a matter of copying over some CSS. It takes the same effort as copying the output of AI so that's not even taking extra time.


Applying the style of B to A is not deterministic, nor are there prior tools that could do it.


You didn't also factor in the time to learn Pandoc (and to relearn it if you haven't used it lately). This is also just one of many daily use cases for these tools. The time it takes to know how to use a dozen tools like this adds up when an LLM can just do them all.


This is actually how I would use AI: if I forgot how to do a conversion task, I would ask the AI to tell me the command, so that I can run it without having to jog my memory first. The pandoc command is literally one line with a few flags; it's easily reviewable. Then I run pandoc myself. Same thing with the multitude of other rarely used but extremely useful tools such as jq.

In other words, I want AI to help me with invoking other tools to do a job rather than doing the job itself. This nicely sidesteps all the trust issues I have.


I do that constantly. jq's syntax is especially opaque to me. "I've got some JSON formatted like <this>. Give me a jq command that does <that>."

Google, but better.


This


> And how do you trust that it didn't just alter or omit some sentences from your blog post?

How do you trust a human in the same situation? You don't, you verify.


What? Is this a joke? Have you actually worked with human office assistants? The whole point of human assistants is that you don't need to verify their work. You hire them with a good wage and you trust that they are working in good faith.

It's disorienting for me to hear that some people are so blinded by AI assistants that they no longer know how human assistants behave.


It appears OP has a different experience. Each human assistant is different.


I agree.

I replaced SO with ChatGPT and it’s the only good use case I found: finding an answer I can build onto. But outsourcing my reflection? That’s a dangerous path. I tried to do that on small projects, building a project from scratch with Cursor just to test it. Sometimes it’s right on the spot, but in many instances it completely misses some cases and edge cases. Impossible to trust blindly. And if I do so and don’t take proper time to read and think about the code, the consequences pile up and make me waste time in the long run, because it’s prompt over prompt over prompt to refine it and sometimes it’s still not exactly right. That messes up my thinking and I prefer to do it myself and use it as documentation on steroids. I never used Google and SO again for docs. I have the feeling that relying on it too much to write even small blocks of code will make us lose some abilities in the long run, and I don’t think that’s a good thing. Will companies allow us to use AI in code interviews for boilerplate?


The AIs are to a large degree trained on tutorial code, quick examples, howtos and so on from the net. Code that really should come with a disclaimer: "Don't use in production, example code only."

This leads to your code being littered with problematic edge-cases that you still have to learn how to fix. Or in worst case you don't even notice that there are edge cases because you just copy-pasted the code and it works for you. The edge cases your users will find with time.


AI is trained on all open source code. I’m pretty sure that’s a much larger source of training data than web tutorials.


Isn't tutorial-level code exactly the best practices that everyone recommends these days? You know, don't write clever code, make things obvious to juniors, don't be a primadonna but instead make sure you can be replaced by any recently hired fresh undergrad, etc.? :)


Not really. For example, tutorial code will often leave out edge cases so as to avoid confusing the reader: if you're teaching a new programmer how to open a file, you might avoid mentioning how to handle escaping special characters in the filename.


Don't forget about Little Bobby Tables! These types of tutorials probably killed the most databases over time.
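For anyone who hasn't met Bobby: the classic tutorial pattern vs. the parameterized version, as a small illustrative sketch.

    # Small illustrative sketch: string-built SQL vs. a parameterized query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")

    name = "Robert'); DROP TABLE students;--"

    # Tutorial-style string building is the injectable pattern:
    #   conn.execute(f"INSERT INTO students (name) VALUES ('{name}')")

    # Parameterized query: the driver handles the untrusted value safely.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
    print(conn.execute("SELECT name FROM students").fetchall())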


Which makes me wonder: if old companies with a history of highly skilled teams trained local models, how much better would those be at helping solve new complex problems?


They kinda already are - source code for highly complex open source software is already in the training datasets. The problem is that tutorials are much more descriptive (why the code is doing something, how this particular function works, etc. - down to the level of a single line of code), which probably means they're much easier for LLMs to interpret, and therefore weighted higher in responses.


I’m slightly worried that these AI tools will hurt language development. Boilerplate heavy and overly verbose languages are flawed. Coding languages should help us express things more succinctly, both as code writers and as code readers.

If AI tools let us vomit out boilerplate and syntax, I guess that sort of helps with the writing part (maybe - as long as you fully understand what the AI is writing). But it doesn’t make the resulting code any more understandable.

Of course, as is always the case, the tools we have now are the dumbest they’ll ever be. Maybe in the future we can have understandable AI that can be used as a programming language, or something. But AI as a programming language generator seems bad.


I used to agree with this, but the proliferation of Javascript made me realize that newer/better programming languages were already not coming to save us.


Maybe it's a rectangle between:

   seniors
   copilots
   juniors
   new languages
Wondering. Since the seniors pair with LLMs, the world needs far fewer juniors. Some juniors will go away to other industries, but some might start projects in new languages without LLM/business support.

Frankly, otherwise I don't see how any new lang corpus might get created.


Before you dismiss all of this because "You could do it by hand just as easily", you should actually try using Cursor. It only takes a few minutes to setup.

I'm only 2 weeks in but it's basically impossible for me to imagine going back now.

It's not the same as GH Copilot, or any of the other "glorified auto-complete with a chatbox" tools out there. It's head and shoulders better than everything else I have seen, likely because the people behind it are actual AI experts and have built numerous custom models for specific types of interactions (vs a glorified ChatGPT prompt wrapper).


> I could select one ":", use "Select All Occurrences"

Only if it's the same occurrences. Cursor can often get the idea of what you want to do with the whole block of different names. Unless you're a vim macro master, it's not easily doable.

> How many of you spend 90% of your time writing boilerplate instead of the core logic of the project?

It doesn't take much time, but it's a distraction. I'd rather tab through some things quickly than context switch to the docs, finding the example, adapting it for the local script, then getting back to what I was initially trying to do. Working memory in my brain is expensive.


Disagree.

I still spend a good amount of time on boilerplate: stuff that's not thinking hard about the problem I'm trying to solve. Stuff like unit tests, error logging, naming classes, methods and variables. Claude is really pretty good at this - not as good as the best code I've read in my career, but definitely better than average.

When I review Sonnet's code, the code is more likely to be correct than if I review my own. If I make a mistake, I'll read what I intended to write, not what I actually wrote. Whereas when I review Sonnet's, there are two passes, so the chance an error slips through is smaller.


Unit tests are boilerplate?


I'm using an expansive definition of boilerplate, to be sure. But like boilerplate, most unit tests require a little bit of thought and then a good amount of typing: setting up the data to test, mocking methods, writing out assertions to test all your edge cases.

I've found sonnet and o1 to be pretty good at this. Better than writing the actual code because while modifying a system requires a lot of context of the overall application and domain, unit testing a method usually doesn't.


Yes. You write a function ApplyFooToBar(), and then unit tests that check that, when supplied with the right Foos, the function indeed applies those Foos to the Bar. It's not very intellectually challenging work.

If anything, the challenge is with all the boilerplate surrounding the test, because you can't just write down the checks themselves - you need to assemble data and assemble the expected result, which you end up DRY-ing into support modules once you have 20 tests needing similar pre-work, and then there's lots of other bullshit to deal with at the intersection of your programming language, your test framework, and your modularization strategy.
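A hypothetical, pytest-flavored miniature of that boilerplate (all names made up):

    # Hypothetical miniature of the boilerplate around such a test (names made up).
    import pytest

    def apply_foo_to_bar(foos: list[int], bar: dict) -> dict:
        # Toy stand-in for ApplyFooToBar(): record the foos on the bar.
        return {**bar, "foos": sorted(foos)}

    @pytest.fixture
    def bar():
        # The "assemble data" step that ends up DRY-ed into fixtures/support modules.
        return {"name": "bar", "foos": []}

    def test_apply_foo_to_bar_records_foos(bar):
        result = apply_foo_to_bar([3, 1, 2], bar)
        assert result["foos"] == [1, 2, 3]
        assert result["name"] == "bar"   # original fields are preserved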


Quite often, yes. That's why I prefer integration tests.


Indeed. Too many tests are just testing nothing other than mocks. That goes for my coworkers directly and for their Copilot output. They're not useful tests - they aren't things that catch actual errors; they're maybe useful as usage documentation. But in general, they're mostly a waste.

Integration tests, good ones, are harder but far more valuable.


> Too many tests are just testing nothing other than mocks

Totally agree, and I find that they don't help with documentation much either, because the person that wrote it doesn't know what they're trying to test. So it only overcomplicates things.

Also harmful because it gives a false sense of security that the code is tested when it really isn't.


My approach in the past has been that only certain parts of the code are worth unit testing. But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.


> But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.

I see the argument, I just disagree with it. Test code is still code and it still has to be maintained, which, sure, "the AI will do that", but now there's a lot more that I have to babysit.

The tests that I'm seeing pumped out by my coworkers who are using AI for it just aren't very good tests a lot of the time, and honestly encode too much of the specific implementation details of the module in question into them, making refactoring more of a chore.

The tests I'm talking about simply aren't going to catch any bugs, they weren't used as an isolated execution environment for test driven development, so what use are they? I'm not convinced, not yet anyway.

Just because we can get "9X%" coverage with these tools, doesn't mean we should.


Completely agree. I find it fails miserably at business logic, which is where we spend most of our time. But does great at generic stuff, which is already trivial to find on stack overflow.


This might be a prompting issue; my experience is very different. I've written entire services using it.


Might be that they work in a much more complex codebase, or a language/framework/religion that has less text written on it. Might also be that they (are required to) hold code to a higher standard than you and can't just push half-baked slop to prod.


I've spent a good amount of time in my career reading high quality code and slop. The difference is not some level of intelligence that Sonnet does not possess. It's a well thought out design, good naming, and rigor. Sonnet is as good as, if not better than, the average dev at most of this, and with some good prompting and a little editing it can write code as good as most high quality open source projects.

Which is usually a far higher bar than that of most commercial apps the vast majority of us devs work on.


> with some good prompting and a little editing it can write code as good as most high quality open source projects

I agree that with a good developer "babysitting" the model, it's capable of producing good code. Although this is more because the developer is skilled at producing good code, so they can tell an AI where it should refactor and how (or they can just do it themselves). If you've spent significant time refitting AI code, it's not really AI code anymore; it's yours.

Blindly following an AI's lead is where the problem is, and this is where bad to mediocre developers get stuck using an AI, since the effort/skill required to take the AI off its path and get something good out is largely not practised. This is because they don't have to fix their own code, and what the AI spits out is largely functional - why would anyone spend time thinking about a working solution when they don't understand how they arrived at it?


I've spent so much time in my life reviewing bad or mediocre code from mediocre devs, and 95% of the time the code Sonnet 3.5 generates is at least as correct, and 99% of the time more legible, than what a mediocre dev produces.

It's well commented, the naming is great, it rarely tries to get overly clever, it usually does some amount of error handling, it'll at least try to read the documentation, and it finds most of the edge cases.

That's a fair bit above a mediocre dev.


It's easy to forget one major problem with this: we all have been mediocre devs at some point in our lives -- and there will always be times when we're mediocre, even with all our experience, because we can't be experienced in everything.

If these tools replace mediocre devs, leaving only the great devs to produce the code, what are we going to do when the great devs of today age out, and there's no one to replace them with, because all those mediocre devs went on to do something else, instead of hone their craft until they became great devs?

Or maybe we'll luck out, and by the time that happens, our AIs will be good enough that they can program everything, and do it even better than the best of us.

If you can call that "lucking out" -- some of us might disagree.


I love these hot takes, because the market, as it historically does, will now fire you and hire only the mediocre coders.


Without knowing much about my standards and work you’ve just assumed it’s half baked slop. You’re wrong.


IA generated content is the definition of slop for most people.


> IA generated content is the definition of slop

The irony.


https://en.wikipedia.org/wiki/Slop_(artificial_intelligence)

English is not my mother tongue. I had never noticed the word "slop" until people started using it to talk about AI-generated content.

So for many people around the world slop = AI content.

Where is the irony if I may ask?


You misspelled AI - that's the extent of the irony


In my mother tongue it's called IA, inteligencia artificial. I mix it up all the time.


you didn't have to explain. he knew, my friend. he knew.


Which is deeply sad, because this both tarnishes good output and gives a free pass to the competitor - shit "content" generated by humans. AI models only recently started to match that in quantity.

But then again, most of software industry exists to create and support creation of human slop - advertising, content marketing, all that - so there's bound to be some double standards and salary-blindness present.


Without knowing much about my prompts and work you’ve just assumed it’s why AI gives me bad results. You’re wrong. (Can you see why this is a bad argument?)

Don't get me wrong I love sloppy code as much as the next cowboy, but don't delude yourself or others when the emperor doesn't have clothes on.


> But does great at generic stuff, which is already trivial to find on stack overflow.

The major difference is that with Cursor you just hit "tab", and that thing is done. Vs breaking focus to open up a browser, searching SO, finding an applicable answer (hopefully), translating it into your editor, then reloading context in your head to keep moving.


The benefit of exploring is finding alternatives and knowing about gotchas, and learning more about both the problem space and how the language/library/framework solves it.


I've had o1 respond with a better alternative before. And it's only going to get better.


I mean, sure, but that's exactly the argument against using a calculator and for doing all your math by hand, too.


There was a thread about that, and the gist was: a calculator is a great tool because it's deterministic and the failures are known (mostly related to precision); it eliminates the entire need for doing computation by hand, and you don't have to babysit it through the computation process with "you're an expert mathematician..."; also, it's just a tool, and you still need to learn basic mathematics to use it.

The equivalent to that is a good IDE that offers good navigation (project and dependencies), great feedback (highlighting, static code analysis,..), semantic manipulation, integration with external tooling, and the build/test/deploy process.


Yes, I think I agree. And when you use a calculator and get a result that doesn't make sense to you, you step in as the human and try to figure out what went wrong.

With the calculator, it's typically a human error that causes the issue. With AI, it's an AI error. But in practice it's not a different workflow.

Give inputs -> some machine does work much faster than you could -> use your human knowledge to verify outputs -> move forward or go back to step 1.


My experience has been different. My major use case for AI tools these days is writing tests. I've found that the generated test cases are very much in line with the domain. It might be because we've been strictly using domain-driven design principles. It even generates test cases that fail, showing us what we've missed.


Have you had a go with the o1 range of models?


Yesterday, I got into an argument on the internet (shocking, I know), so I pulled out an old gravitation simulator that I had built for a game.

I had ChatGPT give me the solar system parameters, which worked fine, but my simulation had an issue that I actually never resolved. So, working with the AI, I asked it to convert the simulation to constant time (it was currently locked to the render path -- it's over a decade old). Needless to say, it wrote code that set the simulation to be realtime... in other words, we'd be waiting one year to see the planets go around the sun. After I pointed that out, it figured out what to do but still got things wrong or made some terrible readability decisions. I ended up using it as inspiration instead, and then was able to have the simulation step at one-second resolution (which was required for a stable orbit) but render at 60fps and compress a year into a second.
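For the curious, the usual shape of that fix is a fixed simulation timestep decoupled from the render rate - roughly this (my own sketch, not the generated code):

    # My own sketch of the usual fixed-timestep pattern: step the physics at a
    # fixed resolution, render at its own rate, and scale wall-clock time so
    # simulated time runs faster than real time (in the story above, a year per second).
    import time

    SIM_DT = 1.0          # simulate in fixed one-second steps (needed for stable orbits)
    TIME_SCALE = 3600.0   # toy value: one real second advances one simulated hour
    RENDER_DT = 1.0 / 60  # render at roughly 60 fps

    def step_simulation(dt: float) -> None:
        pass  # integrate gravity here

    def render() -> None:
        pass  # draw the current state

    accumulator = 0.0
    previous = time.monotonic()
    for _ in range(300):  # a few seconds of frames, for illustration
        now = time.monotonic()
        accumulator += (now - previous) * TIME_SCALE
        previous = now

        # Consume the accumulated simulated time in fixed-size steps.
        while accumulator >= SIM_DT:
            step_simulation(SIM_DT)
            accumulator -= SIM_DT

        render()
        time.sleep(RENDER_DT)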


This sums up my experience as well. You can get an idea or just a direction from it, but the AI itself instantly trips over its own legs in any non-tutorial task. Sometimes I envy, and at the same time feel sorry for, successful AI-enabled devs, because it feels like they do boilerplate and textbook features all day. What a relief if something can write it for you.


I have a corporation-sponsored subscription to GitHub Copilot + Rider.

When I'm writing unit tests or integration tests it can guess the boilerplate pretty well.

If I already have an AddUserSucceeds test and I start writing `public void Dele...`, it usually fills in the DeleteUserSucceeds function with pretty good guesses about which Asserts I want there - most times it even guesses the API path/function correctly, because it uses the whole project as context.

I can also open a fresh project I've never seen and ask "Where is DbContext initialised" and it'll give me the class and code snippet directly.


Have you recently tried to start a new web app from scratch? Especially the integration of a frontend framework with styling, and the frontend-backend integration.

Oh my god, get ready to waste a full weekend just to set everything up and get a formatted hello world.


That’s why I use Rails for work. But I also had to write a small Nodejs project (vite/react + express) recently for a private project, and it has a lot of nice things going for it that make modern frontend dev really easy - but boy is it time consuming to set up the basics.


I can't imagine having nice things to say about node after working in Rails. Rails does so much for you, provides a coherent picture of how things work. Node gives you nothing but the most irritating programming tools around


I thought Node was great until I had to upgrade some projects, and then realized that those framework maintainers never maintain their dependencies. Whereas in the C world, a lot of projects treat warnings as errors.


that's an indictment of the proliferation of shitty frameworks and documentation. it's not hard to figure out such a combination and then keep a template of it lying around for future projects. you don't have to reach for the latest and shiniest at the start of every project.


> you don't have to reach for the latest and shiniest at the start of every project.

Except you kind of do, because if you're working frontend or mobile, then your chosen non-shitty tech stack is probably lacking some Important Language Innovations or Security Features that Google or Microsoft forced on the industry since the last time you worked with that stack.

(Yes, that's mostly just an indictment of the state of our industry.)


every time you capitulate, you tell them that you're happy to play along, bring more "innovation" so you keep having to run very hard just to stay in place.


I’ve been pretty happy with vite, chakra, and Postgrest lately


Most frontend frameworks come with usable templates. Setting up a new Vite React project and getting to a formatted hello world can be done in half an hour tops.


Half an hour is still an overestimation, most of these frontend tools go from 0 to hello world in a single CLI command.


On a good day, when you're using the most recent right version of macOS, when none of the frontend tool's couple thousand dependencies are transiently incompatible with everything else, yes.

(If no, AI probably won't help you here either. Frontend stuff moves too fast, and there's too much of it. But then perhaps the AI model could tell you that the frontend tool you're using is a ridiculous overkill for your problem anyway.)


I'll be honest, I've never had the command line tool for setting up a React / NextJS / Solid / Astro / Svelte / any framework app fail to make something that runs, ever.


I had create-react-app break for me on the first try, because I managed to luck into some transient issue with dependencies.


What exact magic command line tool are you referring to? What cmd tool configures the frontend framework to use a particular CSS framework, configures webpack to work with your backend endpoints properly, sets up CORS and authentication with the backend for development, and configures the backend to point to and serve the SPA?


dotnet new <the_template>


It takes me all of 5 minutes with Phoenix


Exactly, the idea that it would take a weekend seems crazy to me. It’s certainly not something I need AI for.


Yep, it will likely work and do what it's supposed to do. But what it's supposed to do is probably only 90% of what you want: try to do anything outside what the boilerplate is set up for and you're in for hours of pain. Want SWC instead of TSC? ESLint wasn't set up, or not like you want? Prettier isn't integrated with ESLint? You want TypeScript project references? Something about ES modules vs. CJS? Hours and hours and hours of pain.

I understand all this stuff better than the average dev (although I'm not in the top 10%), and I'd be ashamed to put a number on the amount of hours I've lost setting up boilerplate, _even_ having used some sort of official generator.


> Boilerplate is tedious, but not really time-consuming.

In the aggregate, almost no programmer can think up code faster than they can type it in. But being a better typist still helps, because it cuts down on the amount you have to hold in your head.

Similar for automatically generating boilerplate.

> If I don't know, how can I trust the generated code to be correct?

Ask the AI for a proof of correctness. (And I'm only half-joking here.)

In languages like Rust the compiler gives you a lot of help in getting concurrency right, but you still have to write the code. If the Rust compiler approves of some code (AI-generated or artisanally crafted), you are already pretty far along in getting concurrency right.

A great mind can take a complex problem and come up with a simple solution that's easy to understand and obviously correct. AI isn't quite there yet, but getting better all the time.


> In the aggregate, almost no programmer can think up code faster than they can type it in.

And thank god! Code is a liability. The price of code is coming down but selling code is almost entirely supplanted by selling features (SaaS) as a business model. The early cloud services have become legacy dependencies by now (great work if you can get it). Maintaining code is becoming a central business concern in all sectors governed by IT (i.e. all sectors, eating the world and all that).

On a per-feature basis, more code means higher maintenance costs, more bugs and greater demands on developer skills and experience. Validated production code that delivers proven customer value is not something you refactor on a whim (unless you plan to go out of business), and the fact that you did it in an evening thanks to ClippyGPT means nothing - the costly part is always what comes after: demonstrating value or maintaining trust in a competitive market with a much shallower capital investment moat.

Mo’ code mo’ problems.


> In the aggregate, almost no programmer can think up code faster than they can type it in. But being a better typist still helps, because it cuts down on the amount you have to hold in your head.

I mean, on the big-picture level, sure they can. Or in detail, if it is something they have good experience with. In many cases I get a visual of the whole code block, and then if I use Copilot I can already predict what it is going to auto-complete for me based on the context, and then within a second I pretty much know if it was right or wrong. Of course this is more so for side projects, since I know exactly what I want to do, so most of the time it feels like I'm just having to vomit all the code out. And I feel impatient, so Copilot helps a lot with that.


100%. Useful cases include

* figuring out how to do X in an API - e.g. "write method dl_file(url, file) to download file from url using requests in a streaming manner" (a sketch of the kind of answer follows below)

* Brainstorming which libraries / tools / approaches exist to do a given task. Google can miss some. AI is a nice complement for Google.
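For the first case, the kind of snippet I'd expect back looks roughly like this (my sketch, not an actual model answer):

    # The kind of snippet I'd expect back (my sketch, not an actual model answer).
    import requests

    def dl_file(url: str, file: str, chunk_size: int = 8192) -> None:
        # Stream the response to disk so large files never sit fully in memory.
        with requests.get(url, stream=True, timeout=30) as resp:
            resp.raise_for_status()
            with open(file, "wb") as fh:
                for chunk in resp.iter_content(chunk_size=chunk_size):
                    if chunk:
                        fh.write(chunk)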


I don’t even trust the API-based exercises anymore unless it’s a stable and well documented API. Too many times I’ve been bitten by an AI mixing and matching method signatures from different versions, using outdated approaches, mixing in APIs from similar libraries, or just completely hallucinating a method. Even if I load the entire library docs into the context, I haven’t found one that’s completely reliable.


It just has to be popular and common boilerplate like the example I gave.

It's hard with less popular APIs. It will almost always get something wrong. In such cases, I read docs, search sourcegraph / GitHub, and finally check the source code.


> Snake case to camelCase
> VSCode itself has a "Transform to Camel Case" command

I never understand arguments like this. I have no idea what the shortcut for this command is. I could learn this shortcut, sure, but tomorrow I’ll need something totally different. Surely people can see the value of having a single interface that can complete pretty much any small-to-medium-complexity data transformation. It feels like there’s some kind of purposeful gaslighting going on about this and I don’t really get the motive behind it.


Exactly. I think some commenters are taking this example too literally. It's not about this specific transformation, but how often you need to do similar transformations and don't know the exact shortcut or regex or whatever to make it happen. I can describe what I want in three seconds and be done with it. Literal dropbox.png going on in this thread.


If you aren’t using AI for everything, you’re using it wrong. Go learn how to use it better. It’s your job to find out how. Corporations are going to use it to replace your job.

(Just kidding. I’m just making fun of how AI maxis reply to such comments, but they do it more subtly.)


Boilerplate comes up all the time when writing Erlang with OTP behaviors though, and sometimes you have no idea if it really is the right way or not. There are Emacs skeletons for that (through tempo), but feels like they are sometimes out of date.


1. This is such a small task for me anyway that I don’t lose much just doing it by hand.

2. The last time I wrote boilerplate-heavy Java code, 15+ years ago, the IDE already generated most of it for me. Nowadays boilerplate comes in two forms for me: new project setup, for which I find it far quicker to use a template or just copy and gut an existing project (and it’s not like I start new projects that often anyway), or new components that follow some structure, where AI might actually be useful but I tend to just copy an existing one and gut it.

3. These aren’t tasks I really trust AI for. I still attempt to use AI for them, but 9 out of 10 times come away disappointed. And the other 1 time end up having to change a lot of it anyway.

I find a lot of value from AI, like you, asking it SO-style questions. I do also use it for code snippets, e.g. “do this in CSS”. Its results for that are usually (but not always) reasonably good. I also use it for isolated helper functions (“write a function to flood fill a grid where adjacent values match” was a recent one). The results for this range from a perfect solution on the first try to absolute trash. It’s still overall faster than not having AI, though. And I use it A LOT for rubber ducking.

I find AI is a useful tool, but I find a lot of the positive stories to be overblown compared to my experience with it. I also stopped using code assistants and just keep a ChatGPT tab open. I sometimes use Claude, but its conversation length limits turned me off.

Looking at the videos in OP, I find the parallelising task to be exactly the kind of tricky and tedious task that I don’t trust AI to do, based on my experience with that kind of task, and with my experience with AI and the subtly buggy results it has given me.


Have you tried Cursor, or is this just your guess at what your evaluation would be?


Don't mean to be rude, but was this comment written with an LLM?


have you tried using cursor or claude?


It's amazing how many of my colleagues don't use Cursor simply because they haven't taken the 10 minutes to set it up.

It's amazing how many naysayers there are about Cursor. There are many here and they obviously don't use Cursor. I know this because they point out pitfalls that Cursor barely runs into, and their criticism is not about Cursor, but about AI code in general.

Some examples:

"I tried to create a TODO app entirely with AI prompts" - Cursor doesn't work like that. It lets you take the wheel at any moment because it's embedded in your IDE.

"AI is only good for reformatting or boilerplate" - I copy over my boilerplate. I use Cursor for brand new features.

"Sonnet is same as old-timey google" - lol Google never generated code for you in your IDE, instantly, in the proper place (usually).

"the constantly changing suggested completions seem really distracting" - You don't need to use the suggested completions. I barely do. I mostly use the chat.

"IDEs like cursor make you feel less competent" - This is perhaps the strongest argument, since my quarrel is simply philosophical. If you're executing better, you're being more competent. But yes some muscles atrophy.

"My problem with all AI code assistants is usually the context" - In Cursor you can pass in the context or let it index/search your codebase for the context.

You all need to open your minds. I understand change is hard, but this change is WAY better.

Cursor is a tool, and like any tool you need to know how to use it. Start with the chat. Start by learning when/what context you need to pass into chat. Learn when Cmd+K is better. Learn when to use Composer.


I've noticed that tools like Cursor don't really seem to make a difference in the end. The good software developers are still the good software developers, regardless of editors.

I don't think you should be upset or worried that people aren't adopting these tools as you think they should. If the tool really lives up to its hype then the non-adopters will fall behind and, for example, be forced to switch to Cursive. This happened with IDEs (e.g. IntelliSense, jump to definition). It may happen with tools like Cursive.

I certainly don't feel this way, but if I'm proven wrong, that's good.


To be proven wrong would be that Cursor is used by all devs or that IDEs adopt AI into their workflow?

Like OP, using Cursor has been a huge productivity boost for me. I maintain a few Postgres databases, I work as a fullstack developer, and I manage Kubernetes configs. When using Cursor to write SQL tables or queries, it adopts my way of writing SQL. It analyzed (as context) my database folder, and when I ask it to create a query, a function, or a table, the output is in my style. This blew me away when I first started with Cursor.

On to React/Next.js projects. In the same fashion, I have my way of writing components, fetching data, and now writing RSA. Cursor analyzed my src folder, and when asked to create components from scratch, the output was again similar to my style. I use raw CSS and class names; what was once the obstacle of naming has become trivial with Cursor ("add an appropriate class to this component with this styling"). Again, it analyzed all my CSS files and spits out CSS/classes in my writing/formatting style. And when working on large projects it is easy to forget the many, many components, packages, etc. that have already been integrated or written. Again, Cursor comes out on top.

Am I a good developer or a bad developer? Don't know. Don't care. I'm cranking out features faster than I have ever done in my decades of development. As has been said before, as a software engineer you spend more time reading code than writing. The same applies to genAI. It turns out that I can ask cursor to analyze packages, spit out code, yaml configuration, sql, and it gets me 80% of the way compared to writing from scratch. Heck, if I need types to get the full client/server type completion experience, it does that too! I have removed many dependencies (tailwind, tRPC, react query, prisma, to name a few) because cursor has helped me overcome obstacles that these tools assisted with (and I still have typescript code hints in all my function calls!).

All in all, cursor has made a huge difference for me. When colleagues ask me to help them optimize sql, I ask cursor to help out. When colleagues ask to write generic types for their components, I ask cursor to help out. Whether cursor or some other tool, integrating AI with the IDE has been a boon for me.


> The good software developers are still the good software developers

Correct. Because they know they need to use the correct tools for the job.

> If the tool really lives up to its hype then the non-adopters will fall behind

This is already happening. I'm able to out-deploy many of my competitors because I'm using Cursor.

Have you actually spent much time with Cursor? The comparison to "Jump to definition" is pretty bad. You also misspelled its name twice.


> I'm able to out-deploy

This is a very poor metric for your efficacy as a software engineer and if you optimize for this, you're gonna have a bad time long term.


You're right. MRR is a better metric. Gotten great MRR via Cursor too.


How’s your ARR?


Oh yeah, typo, I work with the Cursive IDE a lot. I've spent a good amount of time with Cursor. And I have no doubt that it provides a lot of utility. I also would agree that most good devs I know definitely adopt some form of LLM integration. I would even agree that a lot of Cursor features will bleed into other editors, maybe being considered a necessity.

I just haven't made the observation that most people have switched to Cursor full-time and I also haven't noticed that those who have are on another level compared to those using their other editor plus chatgpt/copilot/etc.


> I just haven't made the observation that most people have switched to Cursor full-time

I noted that same thing in my initial comment.


Cursor's show-stopping problem is not whether it is useful; the problem is that it is proprietary. These sorts of tools are fun to play with to try out things that might be useful in the future, but relying on them puts you at the mercy of a VC-backed company with correspondingly dodgy motivations. The only way these technologies will be acceptable for widespread use is to measure them as we do programming languages and to only adopt free software implementations.


To an ideological position like yours, I would say... maybe? I, for one, am happy to pay for good solutions and let the market figure it out. If there are open source solutions that are just as smooth, that's great. I've seen a few, but none have been as good thus far.


I've been keeping an eye out for a good, free software development tool like this, but I've also seen nothing viable yet. The main problem really seems to be that the required hardware is expensive and resource-intensive, which keeps most of the talent from being able to work on it. Once the required hardware becomes more commonplace, I think multiple free software versions will pop up to fill the niche.


Have you looked at Continue.dev? It’s open source and allows both local/open source and commercial models. It’s definitely got challenges / bugs (particularly for remote dev) but I think is worth a look.


Thanks! I'll check it out.


I also find it fascinating how in almost every LLM-related discussion there are people always writing arguments to prove that LLMs do not work.

OK, I understand. Maybe they can't get much use of them and that's fine. But why they always insist that the tools don't work for everyone is something I can't make any sense of.

I stopped arguing online about this though. If they don't want to use LLMs that's fine too. Others (we) are taking their business.


I've been a paying customer for Jetbrains IDE for years.

After trying Cursor, I'd say if I were Jetbrains devs I'd be very worried. It's a true paradigm shift. It feels like Jetbrains' competitive edge over other editors/IDEs mostly vanished overnight.

Of course Jetbrains has its own AI-based solution and I'm sure they'll add more. But I think what Jetbrains excels at -- the understanding of semantics -- is no longer that important for an IDE.


Why would they be? Cursor took an existing editor and added some AI features on top of it. Features that are enabled by a third party API with some good prompts, something easily replicable by any editor company. Current LLMs are a commodity.


Meanwhile other commenters say:

> I don't use it because I already use JetBrains (Pycharm, mostly). Hard to see any value add of Cursor over that.

lol


I don't blame them though. Honestly this was the exact thought I had before I tried Cursor.


I am slow to take on new tools and was coding in Notepad for far too long... but I am already on the Cursor boat. Right now, I use it for two things - code completion and pasting error messages into chat.

When people complain about LLMs hallucinating results, that doesn't really apply because it is either guessing wrong on the autocomplete (in which case I just keep typing) or it doesn't instantly point out the bug, in which case I look at the code or jump to Google.


The naysayers used to bother me, but then I realized it’s no skin off my back if they don’t want to become familiar with a transformative technology. Stay with the old tools, people are getting excited for no reason at all, everyone is just pretending to be more productive!

It reminds me of how blackberry users insisted physical keyboards were necessary and smartphone touchscreen users were deluded.


I think most naysayers are threatened by the inevitable decimation of our value. Supply and demand laws are going to annihilate this profession


The one thing I've noticed about this technology is that technology never really replaces people... it just pushes them to add more...

More, more, more is always the reaction to transformative technologies, because us humans have this underlying obsession with growth and scale.

For instance, in 1910 Ford's manufacturing lines were producing about 7,000 cars a week. As robotics, conveyance and general automation were introduced they didn't hire fewer workers, they hired more. Now they produce millions of cars per year.

Software will be the same. Devs will be expected to write more code and produce more features. There has been an explosion in AI-based hiring since then.


This is the fundamental question most of us seem to have.

On the one hand, logic does seem to dictate supply/demand of the profession will lower salaries. Also no one really cares how code was written or if it's pretty.

On the other hand, these tools have only seemed to increase our value so far. Someone who knows how to code with AI is now 1000x more valuable than someone who doesn't know how to code.

You still need to know how to code to be able to contribute. How long that remains the case is the question. You could be right


That's a great comparison.


I’m not using Cursor because I don’t want my code to go through yet another middleman I’m not sure I can trust. I can relatively safely put my trust in OpenAI, but Cursor? Not so sure. How do I know they’re secure?


Is it allowed to use Cursor in workplaces? Does Cursor upload company code or leak any information?


At one company, the CEO said AI tools in general should not be used, due to fear of invalidating a patent application in progress after the lawyer said it must be kept secret except with NDA partners. I explained that locally run LLMs don't upload anything, so those are ok. This is a company that really needs better development velocity, and is buried alive in reports that need writing.

On the other hand, at another company, where the NDAs are stronger and more one-sided, and there's a stronger culture of code silos, "who needs to know" governing read access to individual code repos, even for mundane things like web dashboards, and higher security in general, I expected nobody would be allowed to use these tools, yet I saw people talking about their Copilot and Cursor use openly on the company Slack.

There was someone sitting next to me using Cursor yesterday. I'd consider hiring them, if they're interested, but there's no way they're going to want to join a company that forbids using AI tools that upload code being worked on.

So I don't think companies are particularly consistent about this at the moment.

(Perhaps Apple's Private Cloud Compute service, and whatever equivalents we get from other cloud vendors, will eventually make a difference to how companies see this stuff. We might also see some interesting developments with fully homomorphic encryption (FHE). That's very slow, but the highly symmetric tensor arithmetic used in ML has potential to work better with FHE than general purpose compute.)


Companies where bureaucrats are in charge won't allow it.


Depends on your workplace. You can adjust some settings to fine tune what gets sent to Cursor/stored by them


I don't use it because I already use JetBrains (Pycharm, mostly). Hard to see any value add of Cursor over that.

For a lighter-weight IDE I use Zed


lol so you have no idea?


Yeah. The CoPilot plugin for PyCharm is pretty good, so not sure what Cursor offers above that esp now that CoPilot can use Claude Sonnet on the backend.


Nope


You wouldn't be the first engineer to fade into irrelevance because they were too proud to adapt to the changing world around them. I'd encourage you to open your mind a bit.


I recently started using Cursor for all my typescript/react personal projects and the increase in productivity has been staggering. Not only has it helped me execute way faster, similar to the OP I also find that it prevents me from getting sidetracked by premature abstraction/optimization/refactoring.

I recently went from an idea for a casual word game (aka wordle) to a fully polished product in about 2h, which would have taken me 4 or 5 times that if I hadn't used Cursor. I estimate that 90% of the time was spent thinking about the product, directing the AI, and testing, and about 10% of the time actually coding.


You're happy to have gone from thinking of cloning something to cloning something using a cloning tool quickly?


Your remark raises a question: how much of your daily work is truly original?

Unless you work in R&D, I've got some bad news for you...


You don't have to be a researcher to do original work. Out of ten people on my team one has a PhD, but we're all doing more-or-less unique work a large fraction of the time.


Why shouldn't they be? They achieved their goal. There is little shame in cloning something.


Apologies for the misunderstanding; I just caught the typo. It should have read "ala wordle", not "aka wordle". The game is fully original.

Using AI enabled me to spend more time thinking about game mechanics.


You're getting a lot of snark in the comments, but your excitement is warranted. It's fascinating how any claims of a code tool being useful always seem to offend the ego and bring out all the chest-thumping super-programmers claiming they could do it better.


I'd love to see your GitHub before and after.


I would love to see the world where you didn't use AI and instead invested the time to make yourself a stronger programmer. A react wordle clone isn't something most developers would need 2 hours to make (sure, maybe the styling / hosting AROUND the wordle clone might take longer). I'm not saying you're a bad programmer or a bad person, but what is the opportunity cost of using AI here? Are you optimising yourself into a local minimum?


they said 90% of it was spent on ideation and exploration

they didn't specifically mean they built a wordle clone, just a game like it. if they wanted just a wordle clone, they would've gotten one within a few minutes of using codegen tools.


No, what they said was

> I estimate that 90% of the time was spent thinking about the product, directing the AI, and testing

In other words 90% of the time was spent in the proompt-test-proompt loop. Not ideation and exploration.

> they didnt specifically mean they built a wordle clone, just a game like it. if they wanted just a wordle clone, they wouldve gotten one within a few minutes of using codegen tools.

If you really believe that I'm not sure what to say other than: have you tried to use an AI to make a full wordle clone? (not just the checking logic, or rendering - the entire thing)


yes, the quote is what I'm referring to; directing the AI is part of it. People use these to quickly brainstorm and refine ideas. I'd be more charitable and wouldn't hastily assume it was some skill issue, especially with them being a principal engineer.


I think this excitement reflects the fact that most devs are shoemakers without shoes. They could have had a Cursor-like experience decades ago by preparing snippets, tools, templates, editor configs and knowledge bases. But they used that “a month of work can save two days of planning” principle, so now having a sort of a development toolkit feels refreshing. Those who had it aren’t that impressed.


Counterpoint: learning to make better buggy-whips is fun but, in the grand scheme of things, also a local minimum.


The existence of other minima does not imply all minima are equal.

In fact, without knowing the entire graph it's impossible to say whether a particular minimum is the global minimum or just a local one.


Interesting. Have you published the game anywhere (eg GitHub or so)?

Have you written about your experience anywhere in greater length?


Any gotchas using Cursor? Or heads-ups?


In my experience, Cursor writes average code. This makes sense, if you think about it. The AI was trained on all the code that is publicly available. This code is average by definition.

I'm below average in a lot of programming languages and tools. Cursor is extremely useful there because I don't have to spend tens of minutes looking up APIs or language syntax.

On the other hand, in areas I know more about, I feel that I can still write better code than Cursor. This applies to general programming as well. So even if Cursor knows exactly how to write the syntax and which function to invoke, I often find the higher-level code structure it creates sub-optimal.

Overall, Cursor is an extremely useful tool. It will be interesting to see whether it will be able to crawl out of the primordial soup of averages.


Exactly right. Cursor makes it easy to get to "adequate." Which in the hundreds of places that I'm not expert or don't have a strong opinion, is regularly as good as and frequently better than my first pass. Especially as it never gets tired whereas I do.

It's a great intern, letting me focus on the few but important places that I add specific value. If this is all it ever does, that's still enormously valuable.


This is true. But with a little push here and there you can usually avoid the sub-optimal high level code structure. That's why it makes so much sense to have it in the IDE.

You can see in general anything AI produces is pretty average.

But people who buy software don't care that the code behind it is average. As long as it works.

Whereas people who buy text, images and video do care.


I've been having some difficulties with deprecated code and old patterns being suggested all the time. But I guess this is an easy issue to fix and will probably be fixed eventually.


I’m doing an experiment in this in real time: I’ve got a bunch of top-flight junior folks, all former Jane and Google and Galois and shit, but all like 24.

I’ve also been logging every interaction with an LLM and the exit status of the build on every mtime of every language mode file and all the metadata: I can easily plot when I lean on the thing and when I came out ahead, I can tag diffs that broke CI. I’m measuring it.

My conclusion is that I value LLMs for coding in exact the same way that the kids do: you have to break Google in order for me to give a fuck about Sonnet.

LLMs seem like magic unless you remember when search worked.


> LLMs seem like magic unless you remember when search worked.

Yikes. I didn’t even think about this, but it’s true.

I’m looking for the kinds of answers that Google used to surface from stack overflow


The best way to get useful answers was (and for me still is) to ask Google for "How do I blah site:stackoverflow.com". Without the site filter, Google results suck or are just a mess, and stackoverflow's own search is crap.


Google used to be better but so was stack overflow. Now a lot of the answers are out-dated. And even more importantly they got rid of any questions where the answer was even a little bit subjective. Unfortunately for users that's almost all the most useful answers.


Kagi…

Fully switched over more than a year ago and never looked back.


I had a kagi account for a year, but it’s just bing with some admittedly nice features on top.

I don’t get the results because there’s just not a lot of people talking about what I’m interested in.


Kagi with the "Programming" lense turned on


Nowadays, I just read manuals, docs and books. I mostly use search as a quick online TOC or for that specific errors I’m in no mood to debug.


I don't understand, are you using LLMs purely for information retrieval, like a database (or search index)? I mean sure that's one usecase, but for me the true power of LLMs comes from actually processing and transforming information, not just retrieving it.


I have my dots wired up where I basically fire off a completion request any time I select anything in emacs.

I just spend any amount of tokens to build a database of how 4o behaves correlated to everything emacs knows, which is everything. I’m putting down tens of megabytes a day on what exact point they did whatever thing.


I’m actively data-mining OpenAI, they get a bunch of code that they have anyways because they have GitHub, I get arbitrary scope to plot their quantization or whatever with examples.

Flip it on em. You’re the one being logged asshole.

https://youtu.be/un3NkWnHl9Q?si=VOnH2krJkJLRA2BQ


To be clear I’m a huge fan of the Cursor team: those folks are clearly great at their jobs and winning at life.

They didn’t get ahead by selling you the same thing they do, if they did Continue would be parity.


What domain/type of software do you and they work on? Cursor has been quite effective for me and many others say the same.

As long as one prompts it properly with sufficient context, reviews the generated code, and asks it to revise as needed, the productivity boost is significant in my experience.


Well, the context is the problem. LLMs will really become useful if they 1.) understand the WHOLE codebase AND all its context and THEN also understand the changes over time to it (local history and git history) and finally also use context from slack - and all of that updating basically in real time.

That will be scary. Until then, it's basically just a better autocomplete for any competent developer.


What you describe would be needed for a fully autonomous system. But for a copilot sort of situation, the LLM doesn't need to understand and know of _everything_. When I implement a feature into a codebase, my mental model doesn't include everything that has ever been done to that codebase, but a somewhat narrow window, just wide enough to solve the issue at hand (unless it's some massive codebase wide refactor or component integration, but even then it's usually broken down into smaller chunks with clear interfaces and abstractions).


I use copilot daily and because it lacks context it's mostly useless except for generating boilerplate and sometimes converting small things from A to B. Oh, also copying functions from stackoverflow and naming them right.

That's about it. But I spend maybe 5% of my time per day on those.


I dislike Copilot's context management, personally, and much prefer populating the context of say Claude deliberately and manually (using Zed, see https://zed.dev/blog/zed-ai). This fits my workflow much much better.


Imagine you are coding in your IDE and it suggests a feature because someone mentioned it yesterday on the #app-eng channel. Needs deeper context, though. About the order of events, and how authoritative a character is.


I get value out of LLMs on stock Python or NextJS or whatever where that person was in fact a lossy channel from SO to my diff queue.

If there’s no computation then there’s no computer science. It may be the case that Excel with attitude was a bubble in hiring.

But Sonnet and 4o both suck at why CUDA isn’t detected on this SkyPilot resource.


> But Sonnet and 4o both suck at why CUDA isn’t detected on this SkyPilot resource.

I don't understand this sentence, should "both suck at why" be "both suck and why" or perhaps I'm just misunderstanding in general?


SkyPilot is an excellent piece of software attempting an impossible job: run your NVIDIA job on actively adversarial compute fabric who mark up the nastiest monopoly since the Dutch East India Company (look it up: the only people to run famine margins anywhere near NVIDIA are slave traders).

To come out of the cloud “credits” game with your shirt on, you need stone cold pros.

The kind of people on the Cursor team. Not the adoring fans who actually use their shit.



Watching the videos in the article, the constantly changing suggested completions seem really distracting.

Personally, I find this kind of workflow totally counter-productive. My own programming workflow is ~90% mental work / doing sketches with pen & paper, and ~10% writing the code. When I do sit down to write the code, I know already what I want to write, don't need suggestions.


It's a tool. You get used to new tools. These days I can easily process "did something interesting appear" in the peripheral vision at the same time as continuing to type. But the most useful things don't happen while I write. Instead it's the small edits that immediately come up with "would you also like to change these other 3 things to make your change work?" Those happen in the natural breaks anyway, as I start scanning for those related changes myself.


A tool that forces me to shift from creating solutions to trying to figure out what might be wrong with some code is entirely detrimental to my workflow.


Is that your actual experience or expectation? If you're just making assumptions, I'd encourage you to give it an actual try. Talking about using Cursor is a bit like talking about riding a bike with someone who never did it. (But yeah, it's totally not for everyone, and that's fine)


Very early on I took pains to figure out what toolchains gave me traction and which tools produced waste complexity.

I think a lot of the excitement about using LLMs to code is because a lot of teams are stuck in local optima where they need to use noisy tools, and there's a lot of de-noised output available to train LLMs.

This is progress in searching and mitigating bad trade-offs, not in advancing the state of the art.


The latter is usually much easier than the former on a small scale, so your statement is very surprising.


You find revising correctly looking but potentially wrong code easier than just writing correct code?


Obviously, yes. For the exact same reason it's true for math homework, too!

Most code most people write is trivial in terms of semantics/algorithms. The hard bit is navigating the space of possible solutions: remembering all the APIs you need at the moment - right bits of the standard library, right bits of your codebase, right bits of third-party dependencies - and holding pieces you need in your head while you assemble some flow of data, transforming it between API boundaries as needed. I'm totally fine letting the AI do that - this kind of work is a waste of brain cycles, and it's much easier to follow and verify than to write from scratch.


When I'm writing code, I often switch between a high-level mental description and the code itself. It's not a one-way interaction. The more I code, the more refined my mental solution becomes until they merge together. I don't need to hold everything in my memory (which is why there are many browser tabs opened). The invariant is that I can hand over my work any time and describe the rest of my solution. And the other advantage is the growing expertise in the tech.


> in the peripheral vision

this seems physiologically unlikely.


We do it all the time. I can type while talking to people. I can also read/process the text ahead while saying out loud what I read a few words back. We do a lot of things concurrently when dealing with text - I'm not staring at the cursor when writing code either.

(Unless you meant it literally. No, I didn't mean actual peripheral vision. Just noticing things beyond what I type.)


I did mean it literally. the solid angle covered by the fovea is tiny.


The eye also doesn't literally stay 100% static when you look at something.


You can create markdown files containing all the planning you did and Cursor will have all of that as context to give you better suggestions. This type of prompting is what gives amazing results - not just relying on out of the box magic, which I think a lot of people are expecting.


Cursor has been an enabler for unfamiliar corners of development. Mind you, it's not a foolproof tool that writes correct code on the first try or anything close to that.

I've been in compilers, storage, and data backends for 15ish years, and had to do a little project that required recording audio clips in a browser and sending them over a websocket. Cursor helped me do it in about 5 minutes, while it would've taken at least 30 min of googling to find the relevant keywords like MediaStream and MediaRecorder, learn enough to whip something up, fail, then try to fix it until it worked.

Then I had to switch to streaming audio in near-realtime... here it wasn't as good: it tried sending segments of MediaRecorder audio which are not suitable for streaming (because of media file headers and stuff). But a bit of Googling, finding out about Web Audio APIs and Audio Worklet, and a bit of prompting, and it basically wrote something that almost worked. Sure it had some concurrency bugs like reading from the same buffer that it's overwriting in another thread. But that's why we're checking the generated code, right?


I've had similar experiences. I've basically disengaged any form of AI code generation. I do find it useful to pointing me to interesting/relevant symbols and API's however, but it doesn't save me any time connecting plumbing, nor is that really a difficult thing for any programmer to do.


In the article, you mentioned that you've been writing code for 36 years, so don't you feel IDEs like cursor make you feel less competent? Meaning I loved the process of scratching my head over a problem and then coming to a solution but now we have AI Agents solving the problems and optimizing code which takes the fun out of it.


I feel like in the early stages of becoming a programmer, learning how to do all those little baseline problems is fun.

But eventually you get to a point where you've solved variations of the problem hundreds of times before, and it's just hours of time being burnt away writing it again with small adjustments.

It's like getting into making physical things with only a screwdriver and a hammer. Working with your hands on those little projects is fun. Then eventually you level up your skills and realize making massive things is much easier with a power drill and some automated equipment, and gives you time to focus on the design and intricacies of far more complicated projects. Though there are always those times where you just want to spend a weekend fiddling with basics for fun.


That should be when you move to more sophisticated (and also complex/complicated) languages that relieve you from as much of this boilerplate as possible.

The rest is then general design and architecture, which LLMs really don't help much with. What they are really good for is to get an idea of possible options in spaces where you have little experience, or to quickly explain and summarize specific solutions and their pros and cons. But I tried to make it pick a solution based on the constraints, and even with many tries and careful descriptions, the results were really bad.


I think it is not the boilerplate of the programming language necessarily, but more the boilerplate of common business logic. E.g. even, say, form validation: I have done it countless times and I can't be bothered to write out rules for each field again, but AI can easily generate a Zod schema for me with reasonable validation based on the database model or schema. It probably does better validation rules than I would do quickly on my first try.

Then I use these validations both in the backend and frontend.


There is nothing that stops a good PL from doing the same. In fact, that is why languages like F# support a concept called "type provider".


I'm talking about web/app UI validation, are we talking about the same thing?

Does F# provide ability to display validation errors with good UX to users using your app? How does it know what user friendly error messages look like?

Okay very simple example prompt that will generate 100+ lines of code.

"Give CRUD example using shadcn, react-hook-form, zod, trpc to create an example of a TODO app"

Which programming language would be able to produce the UI at this level of quality and appearance?

Now this is a very basic example, and it's a "todo app" without an existing project, but I find it can give me lots of varied 100+ line chunks of boilerplate, from charts to sql explorers to much else.

Also, most web app developers, and especially juniors, usually have terrible error, loading, and validation handling, and the SOTA seems far superior to that, handling those cases every time with ease and with intuitive error messages.

It will patiently always do "isLoading", checks for error, etc.


I'm not saying that this is a solved problem today. And yeah, LLMs are also helpful here.

What I meant is that this can be solved by a good language. But you would have to use F# (or another language with a feature like type providers) on the frontend.


It seems like F# is more of a type-safety tool for deriving types from arbitrary untyped data, rather than what I'm talking about, though?

But even TypeScript can infer types in many cases?


Typescript can infer a lot of types, but you cannot read an SQL file at compile time (with custom logic) and make it generate types that you can then use.

You have to generate those types as source code (like you basically do with the llm).

In F# and other languages, you can generate those types on the fly and use them. It can even go as far as describing errors with sql columns. Then, if there is any mismatch, the project won't compile. And if you add a new field, the code and validation will automatically work.


Completely agreed. Don’t be afraid to embrace this. You have to give it an active month until it starts to work in your hands though.


I’ve been using cursor for a while now and I think that if a problem is simple enough for an LLM to work out on its own, it’s probably not worth scratching one’s head over…


I don't think people need to think that the AI is supposed to make complicated code.

I think for the most part it's meant to help you "get past" all the generic code you usually write at the beginning of a project, the generic functions you need in almost all systems, etc.


I'm not sure that's a good heuristic? People love playing Tetris or solving crossword puzzles, and machines are much better at them than us.


People keep playing trivial or repetitive games because they enjoy it. People keep writing trivial or repetitive code because they have to.


…or maybe because they actually enjoy it the same as the games?


Agreed, with an added experience of mine: sometimes Cursor gives me a simpler yet perfect solution. And I am grateful for it.


I don't agree: in the initial stages, solving problems without LLMs gives you good enough knowledge of the intricacies involved, and it helps you develop a structured approach to solving problems!


For me it’s like riding an e-bike. More fun because I can go faster and see and do more.


And you get less tired. I can complete more work because I'm not always getting stuck in minutia. I can focus on architecture, structure, and refactoring instead of line-by-line writing of code.

I'm not saying that I don't like writing code. I'm just saying that doing a lot of it can be mentally exhausting. Sometimes I'd just prefer to ship feature-complete stuff on-time and on-budget, then go back to my kids and wife without feeling like my brain is mush.


"An e-bike for the mind" as Steve Jobs might have said.


I have compared AI to an eMTB. I was riding one earlier this year on the Slickrock Trail in Moab, which has some areas with consequences.

If you don't know how to handle a bike, the ebike won't help you in these situations. (You might even get yourself in a tricky spot).

But if you know how to ride, it can be really fun.

Same with code. If you know how to code it can make you much more productive. If you don't know how to code, you get into tricky spots...


I have a motorbike, yet I prefer cycling because the exercise feels good and is good for me.


This is actually a great analogy. You get to accomplish a lot more, much faster, but you lose much of the benefit to your fitness.


> you lose much of the benefit to your fitness

If you're biking for the purpose of fitness then this is a downside, but if your goal is to see more and go further, then it's an acceptable tradeoff.

Similar to coding. If you're writing code because you enjoy writing code, it's less fun. If you're writing code to build stuff, AI will help you build faster


I think you are still thinking just on another level. E.g. you go on a walk, you fantasize about everything you are going to do, and it builds up in your head, then you come back, it is all in your head and AI will help you get it out quickly, but you have already solved the problem for yourself and so you are also able to validate quickly what the AI does.


yes this!!! Whenever I write a prompt, I tend to divide it into smaller prompts, and in this process, my brain thinks of multiple ways to solve the problem. So yes, it's not limiting my thought process. I didn't notice this thing until I read this.


Do they really solve the hard problems though? For me, the LLMs solve the low level problems. Usually I need to figure out an algorithm, which is the actual problem, and finally give some pseudo code to the LLM and surrounding code so it can generate a solution that looks idiomatic.

In some cases, LLMs act as a stackoverflow replacement for me, like „sort this with bubble sort, by property X“. I’d also ask it to write some test cases around that. I won’t import a bubble sort library just for this, but I also don’t want to spend any more time than necessary, implementing this for the nth time.


I don't find figuring out the syntax of a new language interesting. There's absolutely no fun in that. I know what I want to do and already understand the concepts behind it, that was the fun part to learn.


I do think that is a real risk, yes. I don't want to use LLMs as a crutch to guard against having to ever learn anything new, or having to implement something myself. There is such a thing as productive struggle which is a core part of learning.

That said, I think everyone can relate to wasting an awful lot of time on things that are not "interesting" from the perspective of the project you are working on. For example, I can't count the number of hours I've spent trying to get something specific to work in webpack, and there is no payoff because today the fashionable tool is vite and tomorrow it'll be something else. I still want to know my code inside and out, but writing a deploy script for it should not be something I need to spend time on. If I had a junior dev working for me for pennies a day, I would absolutely delegate that stuff to them.


For a lot of people the fun and rewarding part is actually building and shipping something useful to users, not solving complex puzzles / algorithmic challenges. If AI gets me in front of users faster then I'm a happier builder.


Was going to ask a similar question. Where in the experience of Cursor do you feel like you're losing some of the agency of solving the harder problems, or is this something you take in mind while using it?


I’ve “only” been coding for 20 years, but it’s the tedious problems, not the actually technically hard problems that cursor solves. I don’t need to debug 5 edge cases any more to feel like I’ve truly done the work, I know I can do that, it’s just time spent. Cursor helps me get the boring and repetitive work out of coding. Now, don’t get me wrong, there was a time where I loved building something lower level line by line, but nowadays it’s very often a “been there, done that” type of thing for me.


If I need an RNG rolled to a standard distribution, I can either spend 5 minutes looking it up, learning how to import and use a library, and adding it to my code, or I can tell Cursor to do it for me.

Crap like that, 100 times a day.

"Walk through this array and pull out every element without an index field and add it to a new array called needsToBeIndexed, send them off to the indexing service, and log any failures to the log file as shown in the function above".

Cursor lets me think closer to the level of architecting software.

Sure having a deep knowledge of my language of choices is fun, and very needed at times, but for the 40% or so of code that is boring work of moving data around, Cursor helps a lot.


All of the examples given in the article are contrived, textbook-style examples. Real world projects are far more messy. I want someone to talk about their flow with Cursor on a mature codebase in production with lots of interlaced components and abstractions.

I have a feeling that blindly building things with AI will actually lead to incomprehensible monstrous codebases that are impossible to maintain over the long run.

Read “Programming as Theory Building” by Peter Naur. Programming is 80% theory-in-the-mind and only about 20% actual code.

Here's an actual example of a task I have at work right now that AI is almost useless in helping me solve. "I'm working with 4 different bank APIs, and I need to simplify the current request and data model so that data stored in disparate sources are unified into one SQL table called 'transactions'". AI can't even begin to understand this request, let alone refactor the codebase to solve it. The end result should have fewer lines of code, not more, and it requires a careful understanding of multiple APIs and careful data modelling design and mapping where a single mistake could result in real financial damage.


I also found myself feeling a bit dumb after using Copilot for some time. It felt like I didn’t have to know the API, and it just auto-completed for me. Then I realized I was starting to forget everything and disabled Copilot. Now, when I need something, I ask ChatGPT (like searching on Stack Overflow).


Same. I find myself having to pause and let Copilot finish. At some point, you lose / don't retain anything you don't use. I'm not sure I want to give that up.


Here's a few cursor perks:

  1. Auto-complete makes me type ~20% faster (I type 100+ WPM)
  2. Composer can work across a few files simultaneously to update something (e.g. updating a chrome extension's manifest while proposing a code change)
  3. Write something that you know _exactly_ how it should work but are too lazy to author it yourself (e.g. Write a function that takes 2 lists of string and pair-wise matches the most similar. Allow me to pass the similarity function as a parameter. Use openai embedding distance to find most similar pairings between these two results)


My problem with all AI code assistants is usually the context. I am not sure how Cursor fares in this regard, but I always struggle to feed the model enough of the code project to be useful for me beyond line-by-line suggestions (which Copilot does anyway). I don't have experience with Cursor or Cody (another alternative) and how they tackle this problem by using embeddings (which I suppose have a similar context limit).


All the SOTA LLM solutions like this have nearly the same problem. Sure the context window is huge, but there is no guarantee the model understands what 100K tokens of code is trying to accomplish within the context of the full codebase, or even in the real world, within the context of the business. They are just not good enough yet to use in real projects. Try it: start a greenfield project with "just cursor" like the AI influencers do and see how far you get before it's an unmanageable mess and the LLM is lost in the weeds.

Going the other direction in terms of model size, one tool I've found usable in these scenarios is Supermaven [0]. It's still just one or multi-line suggestions a la GH Copilot, so it's not generating entire apps for you, but it's much much better about pulling those one liners from the rest of the codebase in a logical way. If you have a custom logging module that overloads the standard one, with special functions, it will actually use those functions. Pretty impressive. Also very fast.

[0] https://supermaven.com/


Cursor has a built-in embeddings/RAG solution to mitigate this problem.


Embeddings/RAG don't address the problem I'm talking about. The issue is that you can stuff the entire context window full of code and the models will superficially leverage it, but will still violate existing conventions, inappropriately bring in dependencies, duplicate functionality, etc. They don't "grok" the context at the correct level.


Cursor has a 10K-token context window, which is quite low compared to the top LLMs.

https://forum.cursor.com/t/capped-at-10k-context-no-matter-a...


The main and best model according to many is Claude 3.5, but it provides a maximum of 200k [1]. While I understand cost effectiveness and other limitations with embeddings, a maximum context of just 5% of that is probably too low by any standard.

[1] https://support.anthropic.com/en/articles/7996856-what-is-th...


You have to switch to long context chat to get the full 200k (I don’t think this works for Composer though).


Cursor is pretty unique and advanced in this regard. They tell a lot about this in the Lex Fridman podcast; very interesting.


It's the user's job to provide the context the LLM needs in plain language instructions, not just to rely on the LLM to magically understand everything based on the codebase.


This last month I decided to try the JetBrains equivalent of Cursor, for their IDEs (https://www.jetbrains.com/ai/). It's a plugin, well integrated into the code editor, that you can easily summon.

I work in Rust and I had to start working with several new libraries this month. One example is `proptest-rs`, a Rust property testing library that defines a whole new grammar to define the tests. I am 100% sure that I spent much less time getting onboarded with the library's best practices and usages. I just quickly went through their book (to learn the vocabulary) and asked the AI to generate the code itself. I was very surprised that it did not make any mistakes, considering the sort of weird custom grammar of the lib. I will at least keep trying for another month.


FYI, 2024.3, coming in November/December, will use a new model and code completion suggestions rewritten from scratch.

I suspect some are inspired by Cursor?

https://blog.jetbrains.com/ai/2024/10/complete-the-un-comple...


How do you know that it didn’t make any mistakes, which you wouldn’t make if you learned the usage of that library without AI? Even before AI generated code, people made mistakes about which they didn’t know, because they never read the documentation for example, and it “worked”… except the unintended side effects of course. Adding an AI layer into the picture makes this definitely worse.


It's not that you ask it to write 200 lines of code at once and blindly trust it. It's more that you start to use the lib and ask it to generate one helper method at a time, for an isolated task. Which leaves you time to "review" the code that it wrote properly. Even when a human writes code, it needs to go through peer review. So the exact same applies with AI. It's the job of the reviewer (in this case, the one who invokes the AI) to make sure that the one who wrote the code does not make mistakes, which can include going to read the doc in more detail.


You need to know the used methods to review either way (when you really review code, which is not done by most people). You need to read the doc either way. At that point you could just write the code yourself. And as many examples showed (e.g. https://news.ycombinator.com/item?id=41307387), it’s not even that quick.


How's the AI worse in this respect that having a coworker or teammate that you review code from?


Responsibility


Wondering how the JetBrains gets the context of the library? Is it fetching cargo docs somehow or are you having to paste docs into a context window?


I did not have to provide any context, so I guess it was trained partly with the cargo doc of the crate.


Recent interview to Cursor developers: https://lexfridman.com/cursor-team


thanks


I tried Cursor, and while I was extremely surprised by the ability to do multiline edits in the middle of lines, I could not get myself to accept how aggressive it was when trying to autocomplete/auto-edit segments of code while I was just typing.

It was like a second person being in the editor having a mind of its own constantly touching my code, even if it should have left it alone. It felt like I was finding myself undoing stuff it made all the time.


Using ChatGPT and AI assistants over the past year, here are my best use cases:

- Generating wrappers and simple CRUD APIs on top of database tables, provided only with a DDL of the tables.

- Optimizing SQL queries and schemas, especially for less familiar SQL dialects—extremely effective.

- Generating Swagger comments for API methods. A joy (see the sketch after this list).

- Re-creating classes or components based on similar classes, especially with Next.js, where the component mechanics often make this necessary.

- Creating utility methods for data conversion or mapping between different formats or structures.

- Assisting with CSS and the intricacies of HTML for styling.

- GPT4 o1 is significantly better at handling more complex scenarios in creation and refactoring.

Current challenges based on my experience:

- LLMs lack critical thinking; they tend to accommodate the user's input even if the question is flawed or lacks a valid answer.

- There’s a substantial lack of context in most cases. LLMs should integrate deeper with data sampling capabilities or, ideally, support real-time debugging context.

- Challenging to use in large projects due to limited awareness of project structure and dependencies.


Interesting to hear the perspective of an experienced developer using what seems to be the SOTA coding assistant of Cursor/Claude.

I thought his "Changes to my workflow" section was the most interesting, coupled with the fact that coding productivity (churning out lines of code) was not something he found to be a benefit. However, IMO, the workflow changes he found beneficial seem to be a bit questionable in terms of desireability...

1) Having LLM write support libraries/functions from scratch rather than rely on external libraries seems a bit of a double-edged sword. It's good to minimize dependencies and not be affected by changes to external libraries, but OTOH there's probably a lot of thought and debugging that has been put into those external libraries, as well as support for features you may not need today but may tomorrow. Is it really preferable to have the LLM reinvent the wheel using untested code it's written channeling internet sources?

2) Avoiding functions (couched as excessive abstractions) in favor of having the LLM generate repeated copies of the same code seems like a poor idea, and will affect code readability, debugging and maintenance whereby a bugfix in one section is not guaranteed to be replicated in other copies of the same code.

3) Less hesitancy to use unfamiliar frameworks and libraries is a plus in terms of rapid prototyping, as well as coming up to speed with a new framework, but at the same time it is a liability since the quality of LLM-generated code is only as good as the person reviewing it for correctness and vulnerabilities. If you are having the LLM generate code using a framework you are not familiar with, then you are at its mercy as to quality, same as if you cut and pasted some code from the internet without understanding it.

I'm not sure we've yet arrived at the best use of "AI" for developer productivity - while it can be used for everything and anything, just as ChatGPT can be asked anything, some uses are going to leverage the best of the underlying technology, while others are going to fall prey to its weaknesses and fundamental limitations.


Just once I'd like to see an article like this from someone who's not currently working on an AI tool (some sort of Khan Academy tutor in this case)


I've gotten a lot of mileage from simonw's blog, who as far as I know isn't working on any (paid) AI tools.

https://simonwillison.net/


I recommend that naysayers for technologies like Cursor watch the documentary Jurassic Punk. When comparing the current AI landscape to the era of computer graphics emerging in film, the parallels are pretty staggering to me.

There is a very vocal old guard who are stubborn about ditching their 10,000+ hours master-level expertise to start from zero and adapt to the new paradigm. There is a lot of skepticism. There are a lot of people who take pride in how hard coding should be, and the blood and sweat they've invested.

If you look at AI from 10,000 feet, I think what you'll see is not AGI ruining the world, but rather LLMs limited by regression, eventually training on their own hallucinations, but good enough in their current state to be amazing tools. I think that Cursor, and products like it, are to coding what Photoshop was to artists. There are still people creating oil paintings, but the industry — and the profits — are driven by artists using Photoshop.

Cursor makes coders more efficient, and therefore more profitable, and anyone NOT using Cursor in a hiring pool of people who ARE using it will be left holding the short straw.

If you are an expert level software engineer, you will recognize where Cursor's output is bad, and you will be able to rapidly remediate. That still makes you more valuable and more efficient. If you're an expert level software engineer, and you don't use Cursor, you will be much slower, and it is just going to reduce your value more and more over time.


I think llms are useful and also that cursor is not.

It's a specific thing and it doesn't suit me.

I've seen the glittery eyed hype on hn before and it basically means it will become a common tool. Whether it's good or not, that's a different question.


I'm not sure what you mean here. Cursor is a LLM wrapper, so if you like LLMs, I don't know how you can like LLMs but not like Cursor. It's just a chatbot inside of VS Code. Cursor just provides the LLM more context, and an easier path to implementing the LLM's suggestions.


Good take. It really makes you realize that as much as engineers like to pretend they're purely rational and analytical, we're just as emotional as everyone else.


This website would make you believe that all your peers exhibit Spock levels of rationality, when really it's a combination of hubris and a desire to be right on the internet


Some uses I have: e.g. a notepad in Cursor with a predefined set of files and a prompt to implement Storybook stories and documentation from the components I use. I give it the current file I'm working on, and it will generate new files and update documentation files. Similarly for E2E or unit tests, it often suggests cases I wouldn't have thought about.

The people that are negative about these things, because they need to review it, seem to be missing the massive amount of time saved imho.

Many users point to the fact that they spend most of their time thinking; I'm glad for them. Most of my time is spent gluing APIs together, writing boilerplate, and refactoring, and on those aspects Cursor helps tremendously.

The biggest killer feature that I get from similar tools (I ditched Copilot recently in favor of it) is that they allow me to stay focused and in the flow longer.

I have a tendency to zone out when tasks get too boring or repetitive, or to get stressed out when I can't come up with a solution. Similarly, going to a search engine to find an answer would often put me in a long loop of looking for answers deeply buried in a very long article (you need to help SEO after all, don't you?), and then it would be more likely that I would get distracted by messages on my company chat or social media.

I can easily say that Cursor has made me more productive than I was one year ago.

I feel like the criticism many have comes from the wrong expectations of these tools doing the work for you, whereas they are more into easing out the boring and sometimes the hard parts.


What are the closest Emacs packages and flows for something similar to Cursor? Is the ability to use a tab in this way something that can be simply instructed or finetuned?


Not in the same way. Places like Cursor and Continue have their own tuned and minimal models which can work really fast. In an interview, Cursor people also mentioned some extra magic they do like streaming the changes/context in as you type and potentially doing some speculative processing before you even type things. You would need to write lots of new code which, as far as I understand, doesn't exist in the open source world at this point. And neither do the appropriate models.

I'm sure we'll get there, but I haven't seen anything even close available.


Anthropic have a streaming API (incrementally stream responses back to client) that I believe Cursor uses.


That's the basic part. Cursor people talked about effectively streaming in and other ideas. See https://youtu.be/oFfVt3S51T4



"For example, suppose I have a block of code with variable names in under_score notation that I want to convert to camelCase."

Oh my, oh my... How have I done this all these years before "AI" was a th- hype?

I did it without wasting even a fraction of the CO2 needed for these toys.

"AI" has some usecases, granted. But selling it as the holy grail and again sh•tting on the environment is getting more and more ridiculous by the day.

Humanity, even the smarter part, truly deserves what is coming.

Apes on a space rock


I think AI is still nowhere near where it needs to be to provide business value in software development.

As an experiment, some time ago, I tried to build a TODO app entirely with AI prompts. I used a special serverless platform on the backend to store the data so that it would persist between page refreshes. I uploaded the platform's frontend components README file to the AI as part of the input.

Anyway, what happened is that it was able to create the TODO app quickly; it was mostly right after the first prompt and the app was storing and loading the TODOs on the server. Then I started asking for small changes like 'Add a delete button to the TODOs'; it got that right. Impressive!

All the code fit in a single file so I kept copying the new code and starting a new prompt to ask for changes... But eventually, in trying to turn it into a real product, it started to break things that it had fixed before and it started to feel like a game of whac-a-mole. Fixing one thing broke another and it often broke the same thing multiple times... I tried to keep conversations longer instead of starting a new one each iteration but the results were the same.


The article is about Cursor. You don't need all your code to fit in a single file. It sits in your IDE. You don't need it to create your app entirely. I just tell it what I need, and where, and it makes me 20x faster. It solves the problem you're describing exactly.


> I think AI is still nowhere near where it needs to be to provide business value in software development.

And you go on to say that your experiment was to build a TODO app "some time ago" in a single file of code.


do you feel like it needs a major architectural change or breakthrough leap in abstraction before it can provide business value in software development, or would far larger context window sizes be enough for it to provide substantial value already?

For me it feels like it just starts to break after a certain length, but may not require a breakthrough new architecture to provide more value. Just larger context window sizes, so it can do the same thing it does on smaller pieces of code, on larger pieces of code, too.


Can be, yes. I think besides the context window size though, there is the issue that performance seems to decline as the input size increases. It can become a problem before you reach the context window size limits.


Personal observation from a heavy LLM codegen user:

The sweet spot seems to be bootstrapping something new from scratch and getting all the boilerplate done in seconds. This is probably also where the hype comes from; it feels like magic.

But the issue is that once it gets slightly more complicated, things break apart and run into a dead end quickly. For example, yesterday I wanted to build a simple CLI tool in Go (which is outstandingly friendly to LLM codegen as a language + stdlib) that acts as a simple reverse proxy and (re-)starts the original thing in the background on file changes.

AI was able to knock out _something_ immediately that indeed compiled, only it didn't actually work as intended. After lots of iterations back and forth (Claude mostly) the code ballooned in size to figure out what could be the issue, adding all kinds of useless crap that kindof-looks-helpful-but-isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file) and spotted the issue immediately (holding a mutex lock that gets released with `defer` doesn't play well with a recursive function call). After pointing that out, the LLM was able to fix it and produced a version that finally worked - still with tons of crap and useless complexity everywhere. And that's a simple, straightforward coding task that can be accomplished in a single file and only a few hundred lines, greenfield style. And all my Claude chat tokens for the day got burned on this, only for me to have to dig in myself at the end.
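(For anyone curious, the pitfall in question - a non-reentrant lock held across a recursive call - can be sketched roughly like this. It's in Python rather than the actual Go, with made-up names, since the lock semantics are analogous: acquiring a lock your own call stack already holds just blocks forever.)

    import threading

    state_lock = threading.Lock()       # non-reentrant, like Go's sync.Mutex
    restart_count = 0

    def restart_broken():
        global restart_count
        with state_lock:                # held until this frame exits...
            restart_count += 1
            if restart_count < 3:
                restart_broken()        # ...so the recursive call blocks forever

    def restart_fixed():
        global restart_count
        with state_lock:                # touch shared state, then release
            restart_count += 1
            again = restart_count < 3
        if again:
            restart_fixed()             # recurse only after the lock is dropped

    restart_fixed()                     # fine; restart_broken() would deadlock
    print(restart_count)                # -> 3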

LLMs are great at producing things in small, limited scopes (especially boilerplate-y stuff) or refactoring something that already exists, when they have enough context and essentially don't really think about a problem but merely change linguistic details (ultimately rewriting text to a different format) - it's a large LANGUAGE model after all.

But full-blown autonomous app building? Only if you do something that has been done exactly thousands of times before and is simple to begin with. There is lots of business value in that, though. Most programmers at companies don't do rocket science or novel things at all. It won't build anything actually novel - the ideal case is building an X for Y (like Uber for Catsitting), never an original X.

My personal productivity has gone through the roof since GPT-4/Cursor though, but I guess I know how/when to use it properly. And developer demand will surge when the wave of LLM-coded startups get their funding and realize the codebase cannot be extended anymore with LLMs due to the complexity and the raw amount of garbage in there.


> After lots of iterations back and forth (Claude mostly) the code ballooned in size to figure out what could be the issue, adding all kinds of useless crap that kindof-looks-helpful-but-isn't. After an hour I gave up, went through the whole code manually (a few hundred lines, single file) and spotted the issue.

That's what experience with current-generation LLMs looks like. But you don't get points for making the code in the LLM look perfect, you get points for what you check in to git and PR. So the skill is in realizing the LLM is running itself in circles before you run out of tokens and burn an hour, and then doing it yourself.

Why use an LLM at all if you still have to do it yourself? Because it's still faster than without, and also that's how you'll remain employable - by covering the gaps that an LLM can't handle (until they can actually do full blown autonomous app development, which is still a while away, imo).


Is Cursor still sending everything you do to their own servers? Last time I looked into it, that’s what was happening which made it an absolute no-go for corporate use.

Using Cody currently with our company enterprise API key


I use and am happy with Cursor and Chatgpt. Cursor will write out the syntax for me if I let it, sometimes wrong, sometimes right, but enough to keep a flow going. If there are larger questions, I just flip over to chatgpt and try and suss out what is going on there. Super helpful with tailwind and react, speaking as a dba and backend systems person.


Curious why you don't just use Cursor's built-in chat?


> Subsequently, but very infrequently, I will accept a totally different completion and the previously-declined suggestion will quietly be applied as well.

This sounds like a nightmare.

I think the biggest problem with AI at the moment is that it incorrectly assumes that coding is the difficult part of developing software, but it's actually the easiest part. Debugging broken code is a lot harder and more time consuming than writing new code; especially if it's code that someone else wrote. Also, architecting a system which is robust and resilient to requirement changes is much more challenging than coding.

It boggles the mind that many developers who hate reading and debugging their team members' code love spending hours reading and debugging AI-generated code. AI is literally an amalgamation of other peoples' code.


Wow. Reasons I can think of: the emotional overhead of not being a total dick to the person whose code is being reviewed is, if you're not an asshole, higher than for an unfeeling bot. (Unless that bot's name is Roku.) The other thing is you can just go in and fix said code instead of writing a comment and then waiting an undetermined amount of time for a reply.


I have tried both cursor and cline (formerly continue dev), and I don't seem to see the incredible performance boost using cursor like a lot of people say


I think some people are writing dumb code for a living


And let me guess, you aren't?


Small correction: Cline is former Claude Dev.

Continue Dev is a different extension which continues with the same name.


Oops, yeah you are right


I am the least knowledgeable user of emacs ever, however just some notes as I read this

> For example, suppose I have a block of code with variable names in under_score notation that I want to convert to camelCase. It is sufficient to rename one instance of one variable, and then tab through all the lines that should be updated, including the other related variables.

For me that would be :%s/camel_case/camelCase/gc then yyyyyynyyyy as I confirm each change. Or if it's across a project, then put cursor on word, SPC p % (M-x projectile-replace-regexp), RET on the word, camelCase, yyyynyyynynn as it brings me through all files to confirm. There's probably even better smarter ways than that, I learn "just enough to get it done" and move on, to my continual detriment.

> Many times it will suggest imports when I add a dependency in Python or Go.

I can write a function in python or typescript and if I define a type like:

    function(post: Po
and wait a half second, I'll be given a dropdown of options that include the Post type from Prisma. If I navigate to that (C-J) and press RET, it'll auto-complete the type, and then add the import to the top, either as a new line or included with the import object if something's already being imported from Prisma. The same works for Python, haven't tried other. I'm guessing this functionality comes from LSP? Actually not sure lol. Like I said, least knowledgeable emacs user.

As for boilerplate, my understanding is most emacs people have a bunch of snippets they use, or they use snippet libraries for common languages like tsx or whatever. I don't know how to use these so I don't.

I still intend to try cursor since my boss thinks it might improve productivity and thus wants me to see if that's true, and I don't want to be some kind of technophobe. However, I remain skeptical that these built-in tools are any more useful than me just quickly opening a chatgpt window in my browser and pasting some code in, with the added downside of losing all my bindings.

My spacemacs config: https://github.com/komali2/Configs/blob/master/emacs/.spacem...


I have the opposite experience with Chat and Compose. I use the latter much more. The "intelligence level" of the Chat is pretty poor and like the author says, starts with pointless code blocks, and you often end up with AI slop after a few minutes of back and forth.

Meanwhile, the Compose mode gives code changes a good shot either in one file or multiple, and you can easily direct it towards specific files. I do wish it could be a bit smarter about which files it looks at since unless you tell it about that file you have types in, it'll happily reimplement types it doesn't know of. And another big issue with Compose mode is that the product is just not really complete (as can be seen by how different the 3 UXes of applying edits are). It has reverted previous edits for me, even if they were saved on disk and the UI was in a fresh state (and even their "checkout" functionality lost the content).

The Cmd+K "edit these lines" mode has the most reliable behavior since it's such a self-contained problem where the implementation uses the least amount of tricks to make the LLM faster. But obviously it's also the least powerful.

I think it's great that companies are trying to figure this out but it's also clear that this problem isn't solved. There is so much to do around how the model gets context about your code, how it learns about your codebase over time (.cursorrules is just a crutch), and a LOT to do about how edits to code are applied when 95% of the output of the model is the old code and you just want those new lines of code applied. (On that last one, there are many ways to reduce output from the LLM but they're all problematic – Anthropic's Fast Edit feature is great here because it can rewrite the file super fast, but if I understand correctly it's way too expensive).


Ok, so… I’m looking at the Python example with HTTP Endpoints and I already have questions.

Why are both functions inlined, and why is FastAPI used at all? I'm also not seeing any network bindings. Is it bound to localhost (I doubt it) or does it immediately bind to all interfaces?
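(For reference, and purely illustrative - this is not the code from the video - a minimal FastAPI app with the binding made explicit looks like the sketch below. uvicorn.run defaults to 127.0.0.1:8000 when no host is given, but a generated snippet may well pass 0.0.0.0, which is exactly the kind of thing worth checking.)

    from fastapi import FastAPI
    import uvicorn

    app = FastAPI()

    @app.get("/items/{item_id}")
    def read_item(item_id: int):
        return {"item_id": item_id}

    if __name__ == "__main__":
        # Explicit loopback bind; pass host="0.0.0.0" only if you really
        # want to listen on all interfaces.
        uvicorn.run(app, host="127.0.0.1", port=8000)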

It’s a three-second thought from looking at Python code - a language I know only well enough to write small and buggy utilities (yet Python is widely popular, so LLMs have tons of data to learn from). I know it’s only a demo, but this video strengthens my feeling about a drop in critical thinking and a rise of McDonald's productivity.

McDonald's is not bad: it’s great when you are somewhere you don’t know and you're not feeling well. They have the same menu almost everywhere and it’s like 99% safe due to process standardization. Plus they can get you satiated in 20 minutes or less. It’s still the type of food that you can’t really live on for long; and if you do, there will be consequences.

The silver lining is that it will most likely cut from the field the people who hate the domain but love the money, as that's exactly the work it automates away.


Could you clarify what your problem with FastAPI here is?

Also did I understand correctly that you’re not very well versed with python in general but still decided to criticize the decisions the model and the author made in that language?


I have doubts about the design and the on-the-spot decision of using FastAPI.

It’s kind of a chicken-and-egg problem. If the author knows FastAPI, then it’s great; but if not, they should verify its applicability. It might not be the choice they’d make, and they’d need to evaluate both this library and alternatives if it’s not the right fit.

If I got a suggestion like that I’d immediately go to the documentation, because I don’t know the consequences. An experienced person might make a conscious choice; an inexperienced one makes an implicit choice, which might not work in context.

As for the design - I’m against inlining handlers. It’s not like Python is the only language that allows it, but I would suspect the consequences are very similar. But maybe that’s an idiomatic thing. Had this come from documentation, I’d have much more faith in it than in a random LLM suggestion.


FastAPI is a no-brainer choice of web framework for a Python API server in 2024. If you’re not aware of that, then you really shouldn’t be commenting on this topic.


I have not tried cursor; I'm using a combination of Copilot and ChatGPT o1. Is cursor considered a better solution? I find it takes the tedium out of work and allows me to focus on the important bits. Right now I'm working on a flutter app and it’s great to be able to quickly try out different ideas, iterating quicker towards a final design than before LLMs.


> Is cursor considered a better solution?

It's the best I've tried so far. I only tried copilot when it came out, so it might have improved a lot since then, perhaps someone using both now can chime in.

There's two things I like about cursor - the overall multi-file edit -> I use this for scaffolding whatever I need, at a high level, and claude 3.5 seems a bit better than 4o for this.

The tab thing really works, and it will surprise you. They don't do just autocomplete, they also do auto-action complete so after you edit something, your cursor (heh) goes to the next predicted place where you're likely to make an edit. And it works ootb most times, and a few times I've gone "huh, might have missed that, if it weren't for the cursor going there".

So that's my current workflow. Zoomed out scaffolding and then going in and editing / changing whatever it is that I need, and cursor has two really strong features for both cases.


Didn't sublime text have this with snippets? I vaguely recall setting some up with tab stops for the parts that need manual completion.

Not AI, but it was nice to have back 10-ish or whatever years ago.


I've seen something like you mention in vscode for snippets - that is fixed pieces of code with some variables, and the IDE skips to them w/ tab, so that you can fill in the values. But that's only for already made snippets, afaik.

The tab autocomplete in cursor is more of a "next action autocomplete", wherein the LLM (a 70b fine-tune from what the team said) will "autocomplete" the next action that makes sense in context. Imagine changing something in a fn definition, and then the cursor jumps to the line where you used that fn. I'm sure something like this example could be hardcoded in an IDE, but this works ootb in a general way.


No, that was a completely different thing.


It's a completely different class than copilot in my opinion. Or a higher generation. I find copilot occasionally useful, but very chaotic. Cursor on the other hand is consistently saving me lots of typing every day.


My problem is that I need an IntelliJ plugin. Somebody mentioned codebuddy as an alternative.


I find LLM’s to be much worse than the junior engineers I work with. I have tried copilot and I always end up disabling it because it’s often wrong and annoying. Just read the docs for the things you are using folks. We don’t need to burn the rainforest for autocomplete.


I am currently convinced that the only thing useful about AI so far is that it's reviving nuclear power plants -- granted, they are to power AI engines -- but I hope that they'll be kept running after this AI fad passes!


Cursor is unusable for many as long as https://github.com/getcursor/cursor/issues/598 is not fixed.

I would like to use it, but I literally cannot because of this bug!


I've been using Cursor and Claude 3.5 Sonnet for about two months. My typical workflow with Cursor involves the following steps: I ask Cursor to generate the code. Then, I have it review its own code. Next, I ask it to improve some code based on the review. Sometimes I ask it to explain the code to me. Finally, I make any necessary manual adjustments. The only downside is that I tend to use up my fast request limit pretty quickly.


OK, I have been writing code for 50 years but can only use cursor for home use. From my experience, I echo the author's comments. You do have to be careful with larger suggestions - check that they make sense - but the syntax will be right. It is just faster.


i had an interesting experience with cursor. i use it everyday btw

i had two csvs. one had an ISBN column, the other had isbn10, isbn13, and isbn. i tried to tell it to write python code that would merge these two sheets by finding the matching isbns. didn't work very well. it was trying to do pandas then i tried to get it to use pure python. it took what felt like an hour of back and forth with terrible results.

in a new chat, i told it that i wanted to know different algorithms for solving the problem. once we weighed all of the options, it wrote perfect python code. took like 5 minutes. like duh use a hashmap, why was that so hard?
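(for reference, the hashmap version it presumably landed on looks roughly like this - file names and the details of the matching are made up for illustration, only the column names come from the comment above:)

    import csv

    def load_rows(path):
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def norm(isbn):
        # strip hyphens/spaces so equivalent ISBNs compare equal
        return "".join(ch for ch in (isbn or "") if ch.isalnum())

    sheet_a = load_rows("sheet_a.csv")   # has an "ISBN" column
    sheet_b = load_rows("sheet_b.csv")   # has "isbn10", "isbn13", "isbn"

    # index every ISBN variant in sheet B once: O(n) build, O(1) lookups
    index = {}
    for row in sheet_b:
        for col in ("isbn10", "isbn13", "isbn"):
            key = norm(row.get(col, ""))
            if key:
                index[key] = row

    merged = []
    for row in sheet_a:
        match = index.get(norm(row.get("ISBN", "")))
        if match:
            merged.append({**row, **match})

    print(f"matched {len(merged)} of {len(sheet_a)} rows")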


Chat knows how to merge CSVs with pandas. You just need to know how to prompt it correctly. (Or at least it did at the start of this year when I gave a keynote at PyCon Philippines with that exact use case.)


seems simple and obvious but not when the columns have different keys, and there could be one or more keys to match upon. i would be curious to see how you can prompt that?

and you are correct, you just need to know how to prompt it correctly by going back to first principles, just like what i did.


Tossing in the first few rows of the CSV in the prompt helps.


no kidding?

this industry should stop telling people: "you are just prompting wrong" when they've never seen the prompting session.


From what I see, it's not better than GitHub Copilot for my use case. I work in large code bases, where the code in one file depends on the code in other files. Copilot seems to only be aware of code in the current file, unless I write very long prompts to direct it to specifically look in some other files.

So for making changes to existing code, Copilot isn't helpful, and neither, it seems, is Cursor.

I can use Copilot to write some new methods to parse strings or split strings or convert to/from JSON or make HTTP calls. But anything that involves using or changing existing code doesn't yield good results.


If you are using vscode, in the Copilot chat you can use @workspace and it tries to get the relevant files and context.


Are you using the @codebase feature? It will perform a search across the codebase for files relevant to your query and suggest changes across files.

https://docs.cursor.com/chat/codebase


An AI assistant tuned for your specific large code base seems like the best final destination. I am not sure if this is on the horizon, miles away, or just something of such little marginal benefit that it would only be useful internally for a handful of orgs.

Certainly for writing my emails I would quite like an AI fine-tuned to my written voice.


Have you actually used Cursor? It does not seem like it. I'd suggest trying it before dismissing it.

> Copilot seems to only be aware of code in current file

Cursor does not have this limitation


From my experience using both:

* Cursor is massively faster than Copilot

* Cursor is absolutely aware of the codebase when you add it to the workspace (ctrl/cmd+b)


As an aside, do we really need a new IDE for AI copilot? What makes Cursor better than say Cody, which is just an extension on VSCode which helps me stay with the mature IDE that has all the bells and whistles I need.


Cursor -is- Visual Studio Code, bar the AI-specific tooling.


I’ve yet to switch to Cursor as my main editor (Sublime still wins on performance by miles, plus no distracting tab anything), but I do hop in when stamping out boilerplate and repetitive code, for which it is great - though it’s only a minor performance bump.

I have also used cursor to write Kubernetes client code with great success because the API space is so large it doesn’t fit into my head that well (not often writing such code these days) so that has been incredibly helpful.

So it’s not revolutionising my workflow but certainly a useful tool in some situations.


I recently tried github copilot again, and I was much more convinced of its usefulness. I'm using it mainly as a better autocompletion tool, and I sometimes use the chat for discovering/understanding libraries and errors, because google is now too polluted by SEO spam.

I'm not trying to generate programs with it, I find it still far too weak for it.

However, while I believe the current models can't really reach that level, there is nothing in my eyes that prevents creating an AI good enough for it.


I've been working with Cursor for a few months. I learned quickly to stay away from the hype around creating entire features using Composer. Found a ton of value in using it to work with context from the codebase as well as external documentation.

Shared my 8 pro tips in this post towards the bottom https://betaacid.co/blog/cursor-dethrones-copilot


Take the video of "building an HTTP REST api". On the one hand it is lightning fast to create something basic. On the other hand it misses so many details... proper error responses that the frontend - presumably there is one - can use, translations, setting up the db, infra, deploy, testing, etc. There is so much more to getting something ready for the real world.

As a learning tool and for one offs AI can be very nice.


I had a horrible experience with a number of AI tools for infra-as-code.

My theory is that in that case it’s hard to predict what to do from context, and the libraries are at the same time hyper-specialized and similar.

Example: creating a node and attaching a volume using Ansible looks similar for different cloud providers, but there are subtle differences in how to specify the location, etc.


This mirrors my experience.

I've tried various tools (including Cursor) and my problem is that they often (more than 50% of the time) generate non-working code. Why? Because the ecosystems change so fast and the models have been trained on old versions of various libraries (by definition), but when I use the latest version it's a constant uphill battle. And there are so many different combinations of how to use libraries together...

I can't be the only one facing this issue but I didn't see a lot of discussion around this.

So, as someone said: It's generating at most average code and most of it is outdated and sometimes vulnerable because... Well that's what's out there.

I still use these tools, but mostly to get a rough idea of how a solution could look and then start my own research, and to avoid the blank page problem. Sometimes just to learn what to even Google for, especially in ecosystems I'm unfamiliar with.


Could someone experienced enough give some insight into how it works with big monorepos and/or legacy code?


Cursor will use embeddings to figure out if it needs more files for additional context, or you can manually add them. It’s one of the weakest parts of the experience. Probably varies from repo to repo. Still not really capable of large scale automatic refactors.


Agreed. Even long-context models lose track of what they're doing pretty quickly as the size of the code base grows.

Cursor et al seem great for making small changes in context, or big changes in small apps. But I don't trust them to make large-scale refactoring decisions.

That said, if you can define really rigorous interfaces between parts, you can often get more mileage by just keeping those interfaces in context while asking for some kind of refactoring or implementation.


Good Taste, Management / Communication Skills, Code Review Ability.

If you have these skills, the productivity gains from tools like Cursor are insane.

If you lack any of these, it makes sense that you don't get the hype; you're missing a critical piece of the new development paradigm, and should work on that.


So the Cursor AI can make edits to the .cursorrules file that's meant to control it? Hmm...


I really honestly don't get the hype. VSCode with some copilot plugins seems to do effectively 100% of what this fork does, and last time I scrolled through cursor's 1000+ open github issues, they didn't seem responsive or to know how to fix things.


I've been using Copilot in VSCode as I have it free as a student. I wanted to try Cursor, but money is tight. Are they that much different? If so, what makes Cursor so special?


Any thoughts on how cursor compares to cline (claude.dev) and aider?


I tried Cursor but I honestly didn't see a huge improvement over VSCode + Continue.Dev. Cursor is better at editing multiple files, and it's cheaper than using the API directly... But I prefer VSCode + Continue.Dev (for chat) + Supermaven (for completions).

Cline, on the other hand, is a whole different beast. It edits multiple files, runs the program, checks the shell for errors, goes back to the files, edits them, runs it again, and even accesses localhost:8000 to check. It's incredible! But if you use the recommended Sonnet 3.5, it'll eat your money very fast.

I've been using Cline + 4o-mini for a few days. Sometimes it's magical; it's the first time I truly feel that I have an assistant. I tell it 'run, check for errors, and correct them' and I leave to grab some coffee.

The bad side: I got lazy and once I almost let it delete rows in a table because it misunderstood what I said.


I'm a skeptic, but this weekend I worked with Cline in vscode (using free gemini nano backend and free mistral backend via openrouter) and managed to make some progress on a flutter app I'm working on.

Flutter/Dart isn't something I'm actually familiar with though I have a lot of dev experience, so I was able to distinguish between poor advice and good advice from the agent. There was a good share of frustrating code deletion happening that required stern instruction adjustment.

For work I'd be concerned about sharing our IP and would want to be on an LLM tier that promises privacy, or maybe even a local LLM.

All in all I was more impressed than I thought I would be.


And Cody! Which has a pretty decent free tier.


For me, all these tools suffer from the same basic question, how can I put proprietary private source into a tool notorious for siphoning up and copying everything you say to it?


Business tier: "Enforce privacy mode org-wide".


I’m surprised more people don’t use Supermaven. Its completions are quite a bit faster than Cursor and it also integrates into IDEs (Jetbrains), not just editors.


Many pro comments above, but I like it as a coding newbie!


Slightly OT, but perhaps someone can help me out: what's a tried and tested setup to integrate a locally running AI coding assistant into Emacs?


Trying to remember: was cursor the one that took yc money and basically reskinned some open source project, or the project that was stolen, or neither..?


You're thinking of PearAI, which copied Continue. https://techcrunch.com/2024/09/30/y-combinator-is-being-crit...


Nope, that’s PearAI, a fork of Continue and vscode, with its ChatGPT-generated license, the Pear Enterprise License


Why is 60 - 70% of my screen whitespace on this website (Or maybe more accurately... light pink space?)

If cursor made those margins, humans 1 cursor 0


I actually like it. Better readability and the empty space can be used for showing citations, which the author seems to do a lot.


If you're going to employ large empty space, at least make it dark themed.


I don't know if the author will read these comments, but there's a missing word:

the architecture of I am building.


Honestly, seems like a cool tool and I could see myself using something like it instead of just my current GitHub Copilot subscription.

Sure, you probably don't want to blindly copy or accept suggested changes, but when the tools work, they're like a pretty good autocomplete for various snippets and I guess quite a bit more in the case of Cursor.

If that helps you focus on problem solving and lets the tooling, language and boilerplate get out of the way a little bit more, all the better! For what it's worth, I'm probably sticking with JetBrains IDEs for the foreseeable future, since they have a lot of useful features and are what I'm used to (with VS Code for various bits of scripting, configuration etc.).


Cursor is awesome. Have been using it for a month now, greatly improved efficiency.


Can someone please elaborate how Cursor is different to Copilot?


The founders did an episode on Lex Fridman's podcast that covers the technical sides of Cursor far better than I ever could: https://www.youtube.com/watch?v=oFfVt3S51T4

But to summarize, Copilot is an okay ChatGPT wrapper for single line code completion. Cursor is a full IDE with AI baked in natively, with use-case specific models for different tasks, intelligent look-ahead, multi-line completion, etc.

If Copilot is a horse, Cursor is a Model T.


For me, the main draw is Cursor's repo-wide indexing (presumably a RAG pass). I asked Copilot to unify two functions, one at the top of a massive file, and the other at the bottom. Since they were so far apart, Copilot couldn't see both functions at the same time. Cursor didn't give me a great answer, but it did give *an* answer.


You can pass in @workspace before the prompt (in copilot chat) and it looks at the full context. It works OK; I could imagine Cursor being more powerful at this!


Parallelize a task, write a server that exposes. It’s not code.

Show code.


Related question. For those who do any kind of programming for fun on the side, how do you feel about using tools like cursor for those projects. Is it a cool productivity enhancer that allows you to focus less on the code and more on the end-product, or does it suck the fun out of it for you?

I work in an environment right now where feeding proprietary code/docs into 3rd party hosted LLMs is a hard no-go, and we don't have any great locally hosted solution set up yet, so I haven't really taken the dive into actively writing code with LLM assistance. I feel like I should practice this skill, but the idea of using a tool like Cursor on personal projects just seems so antithetical to the point that I can't bring myself to actually do it.


There's a lot of people in these comments who are talking about the state of off-the-shelf LLMs for writing code, and they are missing the point of this article. The article is about Cursor, the IDE.

It is much, much more than a ChatGPT wrapper. I'd encourage everyone to give it a shot with the free trial. If you're already a VSCode user, it only takes a minute to setup with your exact same devenv you already have.

Cursor has single-handedly changed the way I think about AI and its capabilities/potential. It's truly best-in-class and by a wide margin. I have no affiliation with Cursor, I'm just blown away by how good it is.


Wow, I did not expect to see such negativity in this thread. Most of it reads to me like the "Dropbox is just an FTP" narrative. Yes, you and your pride can do most of these things in 0.3ms and better, but now so can 1 million more people.

You can do most of the things the author showed with your craftfully set-up IDE and magic tricks, but that's not the point. I don't want to spend a lifetime setting up these things only to break when moving to another language.

Also, where the tab-completion shines for me in Cursor is exactly the edge case where it knows when _not_ to change things. In the camel casing example, if one of them were already camel cased, it would know not to touch it.

For the chat and editing, I've gotten a pretty good sense as to when I can expect the model to give me a correct completion (all required info in context or something relatively generic). For everything else I will just sit down and do it myself, because I can always _choose_ to do so. Just use it for when it suits you and don't for when it doesn't. That's it.

There's just so many cases where Cursor has been an incredible help and productivity boost. I suspect that the complainers either haven't used it at all or dismissed it too quickly.


> You can do most of the things the author showed with your craftfully set-up IDE and magic tricks, but that's not the point.

Wrong: you can do most of the things the author showed with a fresh install of vim/emacs or by logging in to a fresh install of vscode/intellij. In other words, no lifetime was spent on this; I like having as bare an experience as possible so I can use the same setup on any computer.

> I don't want to spend a lifetime setting up these things only to break when moving to another language.

Editor configs don't break across languages?

> For the chat and editing, I've gotten a pretty good sense as to when I can expect the model to give me a correct completion (all required info in context or something relatively generic). For everything else I will just sit down and do it myself, because I can always _choose_ to do so. Just use it for when it suits you and don't for when it doesn't. That's it.

A lot of people don't have this level of wisdom or the skills to pick and continue without AI. Would I be wrong for assuming you've been programming for at least 10 years? I don't think AI is bad for a senior who has already earned their scars, but for a junior/no-skill developer it stunts their growth, simply because they do expect the model to give them a correct completion, and the thought/action of doing it without an AI is painful (because they lack the requisite skills), so they avoid it.


> Wrong you can do most of the things the author showed with a fresh install of vim/emacs or by logging in to a fresh install of vscode/intellij - In other words no lifetime was spent on this, I like having as bare an experience as possible so I can use the same setup on any computer.

Sure, though, for example, I haven't a clue what the shortcut is for wrapping an expression in a try/catch block. With Cursor I just press tab, and it often also adds a useful print or other useful expression inside the catch block. It comes down to requiring less discoverability.

> A lot of people don't have this level of wisdom or the skills to pick and continue without AI.

I have been coding for some time, but I think you underestimate people's BS detector. People are well aware that language models hallucinate. Most of the time you'll figure it out soon enough (compiler/run time) and adapt accordingly. I have learned much of my coding through reading public repositories/code which were also not always up to standards. You figure this out by banging your head once or twice.


> You figure this out by banging your head once or twice.

Amen to that. People really underestimate the power of brain damage in this field.


Naw, I've seen enough hiring fads ;).


It's pretty clear that the utility of tools like Cursor depends on a lot of variables such as:

- the type of project you are working on (what are you writing)

- who are you writing for: is this meant to be bulletproof corporate code, a personal project, a throwaway prototype, etc

- the experience level of the developer

If your use case plays to the strength of the tool/technology, then obviously you will have a better experience than others trying to see if it can do things that it is not really capable of.


I would personally like to a join a breakaway HN for people who actually want to use these tools.

"AI positive Hacker News" or something like that.

There is just really not much point in reading anything on AI here. I get it, AI sucks. Next.


That article reads just like a half-a--ed chatgpt prompt designed to shill Cursor because it became irrelevant after the introduction of the Canvas tool.


> to shill Cursor because it became irrelevant after introduction of Canvas tool.

Haha, no. A web interface won't make a full-fledged IDE irrelevant. Canvas is really cool for a quick session in the browser, when travelling, while in bed, on a plane, etc. It's neat and works. But it's nowhere near the IDE experience.

Cursor is still a full IDE in the background, with all the bells and whistles that come with it. So if you're working on anything more complicated than one-off scripts, you'll still benefit a lot from having it, over a web interface.


Haha, yes. I assume we will see more Cursor-related shilling as it becomes irrelevant, won't we? Even this comment reads like an ad.


Also good to note the author is currently involved in building - yet another - ChatGPT wrapper, so I'm not surprised at all he's hyping and shilling AI tools/Cursor.



