
14% - that's the first time I've seen a hard number like that. It's significant.

Have there been any good studies on programmer productivity? 14% would be just a blip in the constant flow of productivity-enhancing languages, tools, architectures etc. that have been steadily accelerating the pace and scale of development for decades ...

When I moved from Java to Rails, for example ... I was definitely at least 14% more productive, and I'm guessing much, much higher. Even just from load times.

Or going from printed documentation all the way to Stack Overflow - how many multiples was that?

68000 to M2 ... page reload to React's fast refresh ... etc. etc.



I feel like it's making me somewhat more productive because it gives me a kick in the butt and stops my mind wandering off. Getting immediate suggestions and ideas from a "third party" on whatever I'm doing leads me on to the next step rather than tempting me to procrastinate. I doubt this will benefit everyone, but I guess this is how pair programming can be quite effective for some people too.


I've written a few things (words) where one of the LLMs gave me a pretty good starting point. It needed some work but it saved me Googling around to get some sort of starting draft down on paper--which invariably turns into more Googling around and distractions. It's always been stuff I've known well enough that I knew what was right, wrong, and insipid. But it was a starting point (which can be useful, especially if you're just going for workmanlike anyway).


GenAI allows me to stay in a flow state for longer and it reduces cognitive fatigue.


I guess you're using ChatGPT 3.5? I'd have thought v4 is simply too slow and you'd be unavoidably jolted out of flow waiting for it.


Both. v4 is fast during periods of low load, though it's slower now than when I first got access. My usage patterns have shifted because of this: I write longer prompts, have it do more work, and switch to a different tab. I'd pay more than twice as much to have it be at least 50% faster.


I feel it too. Even if the suggestion is wrong, it still triggers my brain to think "No, it should be like this," and then I write the thing. Rinse and repeat.


> Have there been any good studies on programmer productivity?

Yes, but mostly from the companies developing these products:

* The Copilot productivity study by Peng et al., summarized in a blog post [1] with some additional details in the arXiv preprint [2]. They find "developers who used GitHub Copilot completed the task significantly faster–55% faster than the developers who didn’t use GitHub Copilot". (Grep for "but speed is important, too" in [1].)

* Amazon's Q1 earnings report [3] includes a paragraph stating that "participants [using] CodeWhisperer completed tasks 57% faster (on average) and were 27% more likely to complete them successfully than those who didn’t use CodeWhisperer."

* I seem to remember seeing something on Replit's blog a while back with a similar number, but can't find it anymore, so maybe I'm mistaken.

These speed numbers are on specific and well-specified programming tasks, which is of course only one part of a developer's job, so there's a lot of opportunity for impactful research in this space. I suspect that the Empirical Software Engineering community will be awash in third-party empirical studies asking all sorts of more specific productivity questions by this time next year.

[1] https://github.blog/2022-09-07-research-quantifying-github-c...

[2] https://arxiv.org/pdf/2302.06590.pdf

[3] https://s2.q4cdn.com/299287126/files/doc_financials/2023/q1/...


> 14% - that's the first time i've seen a hard number like that. It's significant.

On the other hand, "zero or small negative effects on the most experienced/most able workers".

In other words, if you're skilled, there's nothing AI can do for you…


I agree with the overall assessment, but there's a catch: even "highly skilled" workers are not skilled in "all the things". I've felt that ChatGPT hasn't increased my productivity in my core languages and frameworks, but it has helped immensely in areas where I'm not an expert.

To be concrete about it, I recently ported a project from SwiftUI on iOS (an ecosystem I am very comfortable with) to an Android app written in Kotlin (an ecosystem I mostly dread). I don't find ChatGPT very helpful with my day-to-day Swift stuff, but it was incredibly helpful with the Android work, from language syntax to idioms to outright translation of code.


What framework or approach did it recommend you use for the Android UI? Compose? XML layouts? From code?

I tried using the Bing chat interface, and it repeatedly pushed me in the direction of using Microsoft tools to accomplish tasks for a cross-compiled, mid-tier solution. I had to explicitly tell it to exclude them.


That's exactly my experience.

In areas where I'm an expert, I find myself correcting ChatGPT constantly. But for a language I want to learn, it's been really helpful to get me started on a simple algorithm or debug an error message.


I can't help but think that it sounds like productised Gell-Mann Amnesia. If a system is useless and/or counterproductive in areas you're an expert in, but appears useful in areas that you aren't, shouldn't it be a red flag? How would you know if solutions it comes up with are bad, wrong, or just bad practices?


No, it’s much darker. It means if you’re skilled the gap between you and unskilled just got much much smaller.

Think about that…


That’s what Google did, supposedly for “everyone”.

In practice, my coworkers call me to find and then explain solutions to them.

I don’t see this changing with AIs. Having a “super Google” is great… for people already comfortable with such tools and capable of using them.

My coworkers will now ask me to ask the AI… on their… behalf.

OMG! I’m going to turn into Lieutenant Tawny Madison: https://chat.openai.com/share/0efbca4d-9e33-4af0-9ed6-2f8821...


That's awesome. Let's make progress. Their success is not your downfall.


I really love your optimism here, but as an ex-executive I can see how this will be used: to put wage pressure on skilled/senior leaders by bringing in a churn of unskilled workers to replace them.

I don't think you grasp how this will play out over multiple rounds of the game.


Different companies and executives will play this out differently. Some will replace their unskilled workers, some will fire everyone, some will ignore AI altogether.

What matters is which of these strategies will result in actual success. This is honestly far too hard to tell at this point.


You can apply the law of "power wins" pretty well to this, I think. It will initially be used to put labor in a lower position and expand ownership margins, until that creates an opposing power scenario. Which may be sooner... or could be later. Hopefully before robots become real, because otherwise it will be never.

I'll say here, I've spoken to and spent time with a few well-known billionaires, and I'd say that deep down they are exterminists, no matter how nice a face they try to put on it. Over time they just come to believe most people don't matter at all. It's really dark, and the sooner we come to terms with that the better.


Software has been cannibalising itself for 70 years, and yet lo and behold, being a programmer is still one of the best-paid jobs. I don't believe a simple 20-50% boost for intermediate users is going to change things much.


70 years? I’d argue that programming has only recently (last 10-20 years) become a mainstream source of high paying jobs. Before that it was relatively low paying or inaccessible/undesirable to most people. Recent simplification through the movement of much CRUD work toward web frameworks has made it more accessible.


As others have remarked, it allows them to be marginally more skilled in areas where they are lacking. I am expert level in a couple of domains, but now I am advanced novice to intermediate in a whole lot more due to GenAI. I can now hang with junior folks in their domain, not mine.

While I agree with your assessment on wage pressure, the folks this is going to hurt the most are the new graduates that don't have the knowledge or experience. Their competition just got a whole lot stiffer.

It benefits two groups the most: those with literally zero experience, and experts.


Wouldn't most labor, skilled or unskilled, be somewhere in that middle area that gets way more pressurized?


Yes, I think that middle section will see enormous pressure from below (bootcamp folks probably doubled their productivity) and from above (skilled folks can roll up their sleeves rather than delegate).


That's the way with just about any real leap in productivity applications. Just about every time you add "smart" to something, you take from skilled domain experts.

Just think of when Photoshop 4.0 added layers and now composing was for everyone, not just for those venturing into channel operations… (To be fair, here, things got much easier even for those who managed previously without this.)


> In other words, if you're skilled, there's nothing AI can do for you…

Only if the task you are working on is phone support. If you are a dev, you are perpetually learning new things; it's not possible to memorise the whole field, so AI has more opportunity to help.


The study group was mostly customer support agents. The nature of that work is very different from other knowledge work, so the results probably don't map to more creative fields.


That's mostly how I interpreted it as well:

> "Customer support agents using an AI tool to guide their conversations saw a nearly 14 percent increase in [overall] productivity, with 35 percent improvements for the lowest skilled and least experienced workers, and zero or small negative effects on the most experienced/most able workers... [out of] 5,000 agents working for a Fortune 500 software company."

AI quality is somewhere in the middle between highly skilled and neophyte (at present anyway).
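The quoted figures can be sanity-checked with a rough back-of-envelope calculation. Assuming just two skill tiers (a deliberate simplification; the study reports a continuum of skill levels), the overall 14% gain implies how large the low-skill tier would have to be:

```python
# Back-of-envelope check on the quoted figures, assuming just two skill
# tiers: low-skill agents gained ~35%, the most experienced gained ~0%.
low_skill_gain = 0.35
high_skill_gain = 0.0
overall_gain = 0.14

# overall = w * low + (1 - w) * high  =>  w = (overall - high) / (low - high)
low_skill_share = (overall_gain - high_skill_gain) / (low_skill_gain - high_skill_gain)
print(f"implied low-skill share of agents: {low_skill_share:.0%}")  # 40%
```

Under this crude two-tier reading, roughly 40% of the 5,000 agents would sit in the low-skill group; the real skill distribution is of course smoother than that.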


>In other words, if you're skilled, there's nothing AI can do for you…

Yet...


The 35% improvement for low skill workers and 0% improvement for high skill workers seems to confirm everyone’s worst fears about this stuff.


This would mean companies can fire, or put wage pressure much more effectively on, highly skilled or experienced workers by onboarding low-skill, low-experience workers much faster.

AI will further consolidate mega corp power to a terrifying degree.


From a cursory read, they don't reveal the productivity difference between high-skilled and low-skilled workers (https://www.nber.org/system/files/working_papers/w31161/w311...); guessing this is confidential info.

My guess though is that high skilled workers remain far more productive than AI-augmented low skilled workers.


It could go that way, but another possibility is that the technology undermines the education and the informal apprenticeship situation to such a degree that only older workers are effective while younger workers can only achieve what the AI allows them to achieve. Cheating is already pervasive in education.

Consider the relationship between atmospheric nuclear testing and low-background steel [1] but for AI and older workers whose knowledge predates the introduction of LLMs in the workforce.

[1] https://en.wikipedia.org/wiki/Low-background_steel


It can onboard low skilled labor faster, but they don't have the rest of the skills to round them out. I am not saying that AI isn't coming for your jobs, because it is. GenAI is extremely useful for high skilled folks, just not directly in the thing you are super skilled at, but it can play an enormous supporting role in everything else.


Why would this be scary? You've upgraded the productivity of maybe hundreds of millions of people.


There are quite a few scary things about this. For one, who’s to say everyone will have access to this (or similar) technology? It will further concentrate wealth in the hands of the few. Also, since that increased productivity is not coming from the human, and is instead coming from the AI, workers’ share of profits will likely decrease, and with it, their bargaining power, and their overall position in the social hierarchy.

Elaborating further on the idea of productivity, as I said above, this doesn’t make employees more productive. It is doing some of the producing. It is not like an improved tool, where someone has to operate it to reap the benefits. It “upgrades productivity” more like how a kiosk at a McDonald’s upgrades productivity - not of the worker, but of the business. And much like the kiosks, we would be naive to believe they won’t replace the humans in due time.

This also has larger consequences for knowledge workers. Previously, to attain a higher level of productivity out of a worker, companies would have to invest in them so they could develop the necessary skills. Gone (or at least lessened) is that need. So workers will have less skill, less job freedom, a smaller share of profit, less social mobility. This is a nightmare for the working class.


ChatGPT is free to use. Stable Diffusion is free to use. Those are two of the largest if not the largest generative models out there.


The point of these models isn't to help poor people, it's to make as much money as possible. One of the most straightforward ways for that to happen is to drastically suppress salaries. This tech could very easily harm way more people than it helps.

Absolutely no one is spending tens of millions of dollars developing these tools in order to let random plebs capture much of the value produced by them.


Yes, Mr. Smith, it's a eureka moment for capital. I'm sure the high-skill employees will be just fine.


For now. I really don't like where it's going long term. Right now your worth as a human is in what you are able to do. I am terrified to think of a time when for everything you could ever hope to learn there would be a model that would be able to do this 10 times better than you for $3/hr amortized cost. The few who would own the models would own the world and the rest would be rendered essentially worthless.


Only with upskilling and opportunities to share in the growth.


It's scary because AI is going to put the developing world out of business.

This coming century is supposed to be about 'Rest of World' - first China, then India and the rest.

But AI risks undercutting the ability of that nearly-unskilled labour to contribute.


The Developing World will have ChatGPT as well.

They can learn new things and cover the gaps in what they do not know. Hopefully they can join the Developed World quickly.

Every time someone says X is going to suffer under ChatGPT, I remind that person that X will also improve their productivity by using ChatGPT.

ChatGPT is the most amazing personal tutor I can find. I'm adding 1 IT cert per month, largely due to ChatGPT's help as my personal tutor.


My gosh: 'the robots that replaced the factory workers ... will just help the workers, so no worries!'

Do you know what happened to manufacturing when automation and low-cost outsourcing happened?

It was wiped out.

It was devastating for certain sectors of the economy, even as 'net productivity rose' - the surpluses were acquired by some, not others.

A lot of 'ghettos' in the US are a direct result of mass factory closures.

Now - imagine that happening over the developing world, or rather, factories that were supposed to open, never did.

The developing world has 'services export' economies, with things like call centres etc., and AI will more likely than not just evaporate those roles.

That those people will have 'access to ChatGPT' is beside the point when most of them don't even have computers (just mobile phones), or any way to apply that knowledge.

It's a bit like saying: 'The developing world has access to Wikipedia and all of Harvard courses online! They should all have great jobs!'

Unfortunately that's not how it works.

AI is going to help white-collar workers, not pink- or blue-collar work, which is low-skilled.


My biggest jump in productivity comes when I switch off the internet router.


I don’t know where people are getting such low numbers. I’ve been a developer for 10 years. I care a lot about optimizing my workflow.

When Copilot was released, I'd say I got a 15% increase. When ChatGPT was released it was like 50% at least, and I can't imagine going back; I remember now how slow it was before.

My advice would be to force yourself to leverage it more. I hate googling now. I'll find a page, copy the entire thing into GPT-4 along with the file I'm using and the error message, and I have to do nothing.

How are y’all using it?
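For what it's worth, the paste-everything workflow described above can be sketched as a small prompt-assembly helper. This is a minimal, hypothetical illustration only: the function name, the prompt wording, and all of the example strings are invented for the sketch, not taken from the parent comment or any real API.

```python
# Hypothetical sketch of the workflow: bundle a docs page, the source file
# being edited, and the error message into one prompt to paste into GPT-4.
# All names and wording here are illustrative assumptions.

def build_prompt(docs_text: str, source_code: str, error_message: str) -> str:
    """Concatenate the three pieces of context into a single prompt."""
    return (
        "Here is a documentation page I found:\n\n"
        + docs_text
        + "\n\nHere is the file I'm working on:\n\n"
        + source_code
        + "\n\nAnd here is the error message:\n\n"
        + error_message
        + "\n\nPlease explain the cause and suggest a fix."
    )

# Example with inline strings standing in for real files:
prompt = build_prompt(
    docs_text="requests.get(url, timeout=...) raises Timeout when exceeded.",
    source_code="resp = requests.get(url)\nprint(resp.json())",
    error_message="requests.exceptions.ConnectionError: ...",
)
print(prompt)
```

The same assembled string could just as well be sent programmatically instead of pasted by hand; the point is only that gathering all the context into one message is what saves the googling.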


Curious to hear more about the kind of projects you're working on and the kind of problems it helps you with.

> I’ll find a page, copy the entire thing into gpt4 and the file i’m using with the error message and i have to do nothing.

Can you explain this more? I don't quite understand this passage.


Even more significant when you take this into account:

> The tool was rolled out to the agents gradually, mostly between November 2020 and February 2021.

So the 14% gain came from vastly less capable systems than what we now have today.


Probably VSCode alone is already more than a 14% boost, even without Copilot.



