Is programming even the hard part about programming? In all seriousness, what I would really need from an AI to start really saving me time would be for it to interview all the customers/partners involved on the project, determine the scope of function needed, boil all that down to a set of sensible domain models that make sense to everyone, identify where/when messages need to be passed, determine which things can happen piecemeal as opposed to requiring a bulk op, decide on a persistence technology based on the nature of the requirements...
And that's just an abbreviated form of what I go through when designing a back-end for a tiny boutique application. 99% of programming is making decisions. When you finally have everything planned, the code can almost write itself, but getting to that point requires so much background knowledge that I'm not sure GPT4 will be hunting for my job anytime soon. Or even really augmenting it. I'd be happier if they could just get auto-complete in VS Code to not suck complete balls.
My 2c: I've been an eng for around 15 yrs. I semi-recently had a brain injury, so I haven't been able to dedicate anywhere near as much cognitive energy to programming recently. That's why I've been unable to maintain full-time work.
I started using ChatGPT around 3 months ago. Initially skeptical, I started giving it fun and weird logical/semantic puzzles to satisfyingly "prove" my intuition that it was insufficient to solve any true problems (and that we humans are still needed!). However, I soon became humbled by its capabilities. There are many, many things it cannot do, but I've been amazed at the things it can do if given nuanced and detailed enough prompts. I've realised that, if I prompt it well enough, and use my existing knowledge from those accrued 15 yrs, I can get awesome results. I'm working on a small project right now and it's written roughly 70% of the code. I've had to make various corrections along the way, but I've found that I can focus on content and larger 'macro' domain logic rather than annoying (tho intriguing) procedural coding.
It's been so incredibly empowering and freeing to be able to dedicate more brain to _what_ I want to build instead of _how_ I want to build it. My normal build process is now something like this:
- state problem and what you desire, with good specificity
- [optional] give it your current working code, specifying frameworks, rough file structure
- confirm it understands and clarify/correct it if necessary
- [important] ask it to ask _you_ questions for clarification
- ask it for an overview of how it would solve the problem
- (for big tasks, expect a macro rundown)
- (for small tasks, expect or ask for the actual code)
- [important] make specific requests about lang/module/algorithms
- [important] ask it to write test suites to prove its code works and paste in any errors or assertion failures you encounter. It'll surprise you with its advice.
It doesn't replace my need to code, but OMG it makes it so much less burdensome and I'm learning a tonne as well : )
That’s super interesting. I’m recovering from burnout and other health issues and I’ve found it to be occasionally helpful in the way you are describing. For me, it can smooth out the process and “lower the intensity” of accomplishing any particular task, especially if it is something where I don’t know how to do it off the top of my head (what libs / functions to use, how to call them, etc). I can then pretty easily correct any mistakes, and I didn’t have to spend 20 mins googling, reading docs, and so on in order to solve the problem.
If you don’t mind, do you have any good examples of how you prompt it? Your process looks pretty nice / robust, it would be cool to see it in action.
Also, have you used gpt-4 much or can you get away with using 3.5 sometimes?
Yeh same. It's got a pretty good overview of what libraries are available. I tend to ask it for an npm module to do x and it always has a couple of options, can list pros/cons, and can give/modify code to use them.
> have you used gpt-4 much or can you get away with using 3.5 sometimes?
Ah so I _always_ use gpt4. It's in a whole other ballpark IMHO.
> If you don’t mind, do you have any good examples of how you prompt it?
E.g. I would say something like "show me precisely how to set-up, code and deploy a nextjs app that lets a user "...". It tends to be really good at doing simple standalone stuff like todos/colorpickers/blah apps, but you'd be surprised how far it can get with a more advanced problem domain. E.g. I just entered this and it's really impressed me with its output: "Can u show me how to set-up, code and deploy a nextjs app that lets users input a set of sentences into a textarea and receive back clustered sets of sentences (based on semantic similarity) with different colors of the hsl spectrum indicating that similarity." - try it! It gives complete react components, endpoints using tensorflow, and shows how to vary hsl based on weights. I reckon I'd have to make around 20 minutes of changes to get it working and deployed.
Maybe I expect more or hold it to different standards, but every time I've tried it, it gives low-quality code with enough issues that I'd rather write it myself. Sometimes it'd give completely wrong answers.
It's just not code I'd commit or let pass a code review.
It's funny to me that you have such an incredibly thoughtful series of prompts and responses to yield optimal results as if you are truly leveraging AI and I am just yelling at it like it's a junior dev on a Slack that I don't like. "Chat! Take this code and make it do X instead! I wanna do Y, do it. Nah, not like that, do it with this thing here."
Yeah, it’s (3.5 and 4) good at writing code overall. I’ve had it write me entire programs (of smaller size) and mostly I just have to correct for library usage (because it’s built on outdated docs) or small things like explicit casting in a language like Python where that’s needed.
...and makes your code public domain (you just admitted here that you're using it). If anyone accesses an app developed by you, they can use it freely without any license.
Worst thing - you're feeding potentially not-your code into GPT. That'd be a fireable offense to me (and a very expensive lawsuit for at least a couple of places I know). Not an issue if you're a lone wolf, though.
It's a dystopian thought, but I wouldn't be surprised if Microsoft (which provides such services), when it knows you're using the service to create public-domain code, could just copy it one-to-one, because hey - free lunch, right?
See sibling poster for general information. There's also the matter of "injecting" copyleft code from the generator into your own codebase [1], e.g. GPL3 or AGPL.
A few companies are blocking their employees from using it [2], citing various reasons, and I know some that aren't big enough to matter in the news but have done it too.
In any case, no one is going to deploy purely AI-generated code in the near future, which would be non-copyrightable. In practice any generated code will be edited by the human developer, and it doesn't take that much creative human input to make the result copyrightable.
Derivative works aren't as easily re-copyrightable. And we're considering a context where the programmer copy-pasted code from GPT, so most chunks probably won't be rewritten.
There's also another angle: if there's proof that part of the code (even just 10%) was made with a tool whose output is non-copyrightable, it would be very hard to prove that the remaining 90% is.
Until the laws address those issues, using any code from a generator is a huge liability.
> if anyone accesses app developed by you, they can use it freely without any license.
This is simply incorrect on every level, starting with the fact that (in the US, anyway), you can't place your works in the public domain even if you wanted to.
That's not true in at least one case: if you work for the US federal government, all of your works are automatically in the public domain. Of course, they may also be barred from disclosure for other reasons.
You're correct, that's the one exception. Although you could argue that it's not really an exception -- it's that when you're producing IP in the course of your employment, your employer owns the copyright. And if you work for the federal government, your employer is the American people, so in a real sense we collectively hold the copyright. Which is the same as being in the public domain.
I don't need it to write code. I don't need it to interview customers. I need it to attend, on my behalf, endless, pointless, weekly Zoom meetings where a manager with zero understanding of the task being done, why it's being done, and with no idea how to do it, is nevertheless happy to review open tickets and discuss them.
This task will take 2 months. I'm busy working on it. If you want a 5-minute email every week on how it's going, then fine. If you want me to throw my toys out of the pram when I hit a road-block, then I'm fine with that.
But no, we need weekly status update meetings with all the other developers, testers, product owners, all wasting their and my time, just because a manager is "managing".
Forget code, that's not the hard part. When the AI can just be my doppelganger in the meeting on my behalf THEN I'll worry about AI taking my job.
> I need it to attend, on my behalf, endless, pointless, weekly Zoom meetings where a manager with zero understanding of the task being done.
You don't need AI for that, you need to brush up your resume and find another job. The biggest regret of my career is not having left places early when the organisation/management style sucked the enjoyment/productivity out of what you do, particularly if everyone else there agrees with you.
My point is that (as yet) AI can't replace my job, so I'm safe. (The job is safe whether I do it or someone else does.)
Now since I work remotely, I am much more likely to be replaced by a cheaper offshore worker. Certainly seems to already have happened to some of the managers I report(ed) to.
“as yet” is doing a lot of work in that first sentence. We all have a gpt number. Like some small number of workers have already been replaced by gpt4, for some it will not be until gpt7, and some may out-code the robots till gpt9.5… Having a higher number doesn’t mean you are a better developer, just that you sit in more meetings and have to use “soft skills” like kissing ass and playing stupid, covering your ass, and other human games that will require more advanced gpts.
Are you suggesting LLMs will inevitably gain sentience, consciousness, and the ability to reason deductively at some point in the future?
Recall that the problem with programming isn’t generating more code. Completing a fragment of code by analyzing millions of similar examples is a matter of the practical application of statistics and linear algebra. And a crap ton of hardware that depends on a brittle supply chain, hundreds of humans exploited by relaxed labour laws, and access to a large enough source of constant energy.
All of that and LLMs still cannot write an elegant proof or know that what they’re building could be more easily written as a shell script with their time better spent on more important tasks.
In my view it’s not an algorithm that’s coming for my job. It’s capitalists who want more profits without having to pay me to do the work when they could exploit a machine learning model instead. It will take their poor, ill defined specifications without complaint and generate something that is mostly good enough and it won’t ask for a raise or respect? Sold!
> In my view it’s not an algorithm that’s coming for my job. It’s capitalists who want more profits without having to pay me to do the work when they could exploit a machine learning model instead.
Bingo. This is the real threat, and not just in our industry, but in every industry.
In all seriousness, I haven't seen the idea of LLMs replacing managerial functions in a while; it would be an interesting inversion of a quasi-post-labor utopia.
The best part of AI management would be that for the first time, the manager would understand what their direct reports are saying.
If I say something like “There may be a compatibility risk due to DNS apex records”, I’ll have to spend hours explaining this to a disinterested non-technical manager. The AI understands the concept and doesn’t need me to explain.
i'm honestly not worried at all about LLMs because most of my jobs have consisted of fixing problems in other peoples' code, and i haven't seen any evidence that an LLM will be capable of doing that on a non-trivial program any time in the near future. I have, however, seen evidence that chatgpt will create many more problems and make my job harder.
> i haven't seen any evidence that an LLM will be capable of doing that on a non-trivial program any time in the near future
Or ever, given that the level of abstraction LLMs work at is completely wrong. They can approximate the syntax of things in their training corpus, but logic? The lights are off and nobody's home.
I've already had the GPT3.5-Turbo model walk through and step-by-step isolate and diagnose errors. They 100% can troubleshoot and correct issues in the code.
Literally you give it the code and the error and it can walk you through finding the solution.
When I say walk you through, I generally mean when you provide it a function but the error is caused by some input that doesn't conform to expectations. If the error were just a defect in the code it can generally point that out instantly.
Most bugs I've worked on relate to some weirdness that requires tracking down a specific nonobvious offending function. How would GPT help with that at all? Maybe if you know a particular function is wrong, and ask it to find a bug, but by then most of the work has already been done.
>If the error were just a defect in the code it can generally point that out instantly.
that's not fixing bugs, that's static analysis. Finding the solution to the specific problem that needs to be solved is a lot more difficult than identifying any problem and then solving it.
Figuring out the logic in code doesn't seem that different from figuring out the logic in other human produced text. At least it doesn't seem harder, if anything it's probably easier for a machine.
Yes, at the moment GPT4 and the like aren't all that good yet, but they have shown that they have started understanding semantics.
It all started this year, just a few months ago, remember? It will get only better from here, don't worry. Or do, not sure. Anyway, you can't avoid it.
Carpentry and nursing look safe for now. Most other jobs are not. Some are very competitive already, like painting, writing, and all sorts of design. Without AI it will be hard to find a job there. Driving and piloting will be mostly automated soon.
Rather than leaving, it's better to adapt. AI is just one of the technologies on the list. They come and go; that's the nature of IT. Except AI will stay - it will change with time, but it will never go away. Besides, it's the coolest thing right now. And it will create new jobs around itself.
I remember >10 years ago thinking that learning another language was probably pointless because Google Translate was 90% there already and would soon make that knowledge obsolete. Or all the people saying trucking is on the verge of being automated.
I wouldn't be so sure. Combine current capability levels with the larger context windows and they can probably already point out most of the problems with code.
I recently fed a very large file into GPT4 and it handed me a few serious bugs that I hadn't noticed after a few self-reviews.
Some code writes itself, and I hope generative AI will help with it, but a lot of code doesn’t. Which is likely why generative AI is so terrible at writing good code.
Of course that doesn’t mean that generative AI won’t see widespread adoption by non-programmers in digitalisation in the coming decade. We already have a lot of “process people” making things with GPT, and those things work. Or at least, they sort of work, but they are also built so terribly that they won’t scale and won’t be maintainable. Which is fine for a while, and it’s probably even fine for the lifetime of some programs. Because let’s be honest, often the quality of the programs that are implemented in non-tech enterprise isn’t important. In fact, Excel “programmers” can frankly do wonders in terms of creating business value with short-lived automation that won’t need to scale or be maintained in the long run, because it’s simply going to be replaced by the time it stops being useful, because you’ll have grown to a size where you’re buying SAP or similar (regardless of whether it’s a good idea or not). I do think that a lot of us are going to spend a lot of time “cleaning up” after non-programmers doing GPT programming. Which will be lucrative and boring.
But writing good code for complicated problems? I’m not sure when/if generative AI will be able to handle that. I had hopes until GPT. We use it quite a lot, mind you; it writes a lot of our documentation. We have high hopes it’ll eventually get good enough to write a lot of our unit tests as well, and obviously we’re already in a world where a lot of the “trivial” code can be auto-generated, but we were frankly able to do that before generative AI. Actual programming, though? Heh.
> Is programming even the hard part about programming?
Exactly!
Figuring out programming challenges isn't really ever part of the work I do, which is mostly business process stuff.
Comprehending APIs is often a pain. A few times Copilot has helped by auto-completing the incantation I needed when my brain wasn't working and I couldn't get an understanding from the docs.
So as you say, good autocomplete is all I really need.
That and decent documentation!
There is never a moment where I think I'll break my flow and have a chat conversation to write some code for me. Never.
One thing I did think would be useful: If AI could abstract my already written and duplicated code into testable and robust reusable Classes/Methods for me!
> Comprehending APIs ... good autocomplete ... decent documentation
The ability to ask questions about the contents of the documentation, as opposed to inefficient RTFM, is one of the uses for which I see LLMs as potentially especially valuable. (They should also be able to point to the source like an actual "search" would, though.)
Whether this approach works depends a lot on what you are trying to write.
GPT4 is not very good at understanding new algorithms and data structures for example. (I recently tried very hard, but it failed miserably. I can talk about the details, if someone is interested.) But it might be good enough at helping you organise a sprawling project.
Yes, I'd like the details on this. My experience has been the opposite: either you prompt it correctly, or it already has the algorithm or data structure trained into its model.
I am currently uplifting 30-year-old code to a somewhat newer codebase to try and give it another 20-30 years. I spend a lot of time talking with humans to rediscover the context of the system.
AI is not going to eat my lunch. But it sure is handy being able to make sense of some old logic and syntax. It literally saves me hours in having to figure out strange code snippets and such.
GPT is very impressive. But we'll be fine. There's plenty more complexity to come, so be smart, be a part of that complexity.
I just read it carefully and put the clangers into chatgpt for some suggestions on different approaches.
There's a mix of Perl, old PHP, Aspx and mssql plsql all interacting. So it's less about getting it to do all the work, and just keeping me from getting bogged down in the trivial stuff.
You understand that your generated code is from September 2021 at best, right? Maybe it's okay for some niches, but I see lots of evolution in almost all segments of software engineering, especially frontend and ML.
Interesting take, but I think you're drastically underestimating how much work the programming part is. I think currently at least 90%+ of the work is actually programming / implementing the thing, and that's what AI is going to replace.
I'm not sure about the 90% figure you quote. I'd say it's a lot less from my experience. But even in that "programming part" I'd say the time to implement core functionality follows the Pareto principle. You can probably code up 80% of what you need in 20% of the time. The other 80% ends up being QA and bug fix iteration.
I use LLM-based autocomplete in my IDE, and it’s not taking away my job unless/until it improves by multiple orders of magnitude. It’s good at filling in boilerplate, but even for that I have to carefully check its output because it can make little errors even when I feel like what I want should be obvious. The article is absolutely correct in saying you have to be critical of its output.
I would say it improves my productivity by maybe 5%, which is an incredible achievement. I’m already getting to where coding without it feels very tedious.
I find it increases my productivity about 5-10% when working with the technologies I'm the most familiar with and use regularly (Elixir, Phoenix, JavaScript, general web dev.) But when I'm doing something unfamiliar and new, it's more like 90%. It's incredible.
Recently at work, for example, I've been setting up a bunch of stuff with some new technologies and libraries that I'd never really used before. Without ChatGPT I'd have spent hours if not days poring through tedious documentation and outdated tutorials while trying to hack something together in an agonising process of trial and error. But ChatGPT gave me a fantastic proof-of-concept app that has everything I needed to get started. It's been enormously helpful and I'm convinced it saved me days of work. This technology is miraculous.
As for my job security... well, I think I'm safe for now; ChatGPT sped me up in this instance but the generated app still needs a skilled programmer to edit it, test it and deploy it.
On the other hand I am slightly concerned that ChatGPT will destroy my side income from selling programming courses... so if you're a Rails developer who wants to learn Elixir and Phoenix, please check out my course Phoenix on Rails before we're both replaced by robots: PhoenixOnRails.com
(Sorry for the self promotion but the code ELIXIRFORUM will give a $10 discount.)
The thing is the hallucinations; I also wasted a few hours trying to work on solutions with GPT where it just kept making up parameters and random functions.
So much this. The thing hallucinates far more than the hyperventilation seems willing to acknowledge.
You really need to be quite competent in the thing you're asking it to do in order to ferret out the hallucinations, which greatly diminishes the potency of GPT in the hands of someone who has no knowledge of the relevant language/runtime/problem domain/etc.
Not if the hallucination introduces runtime errors that can't be identified a priori with any sort of static analysis or compilation/interpreting stage.
But no, you're fundamentally right. It just goes to the question of whether an LLM assistant can in any sense replace or displace human programmers, or save time for human programmers. The answer seems to be somewhat, and in certain cases, but not much else.
If I already know the technology I'm querying GPT about, I'm going to spend at least some time identifying its hallucinations or realising that it introduced some. I might have been better off just doing it myself. If I don't know the technology I'm querying GPT about, I'm going to be impacted by its hallucinations but will also have to spend time figuring out what the hallucinations are and why this unfamiliar code sample doesn't work.
A colleague of mine had trouble getting an email from Google Docs into listmonk.
She asked GPT to help get an HTML version since apparently she got stuck with the WYSIWYG editor.
However, GPT gave back a full HTML structure, including head and body. Pasting that into listmonk broke the entire webpage. Then she freaked out and told me listmonk sucks :)
There's a lot of things which could be done to improve this:
1) It could use the JSONformer idea [0] where we have a model of the language which determines what are the valid next tokens; we only ask it to supply a token when the language model gives us a choice, and when considering possible next tokens, we immediately ignore any which are invalid given the model. This could go beyond mere syntax to actually considering the APIs/etc which exist, so if the LLM has already generated tokens "import java.util.", then it could only generate a completion which was a public class (or subpackage) of "java.util.". Maybe something like language servers could help here.
2) Every output it generates, automatically compile and test it before showing it to the user. If compile/test fails, give it a chance to fix its mistake. If it gets stuck in a loop, or isn't getting anywhere after several attempts, fall back to the next most likely output, and repeat. If after a while we still aren't getting anywhere, it can show the user its attempts (in case they give the user any ideas). A rough sketch of this loop follows below.
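A minimal sketch of the loop in (2), in Python. The `write_candidate` and `refine` helpers are hypothetical stand-ins for "write the code into a scratch checkout" and "ask the model to fix it given the errors", and `make test` is just a placeholder for whatever build/test entry point a real project uses:

```python
import subprocess
from typing import Callable, Iterable, Optional


def compile_and_test(path: str) -> tuple[bool, str]:
    """Run the project's build/test command in `path`; return (passed, combined output)."""
    # "make test" is a stand-in for whatever the real project uses (pytest, cargo test, ...).
    result = subprocess.run(["make", "test"], cwd=path, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def first_green_candidate(
    candidates: Iterable[str],              # model outputs, most likely first
    write_candidate: Callable[[str], str],  # writes code into a scratch checkout, returns its path
    refine: Callable[[str, str], str],      # asks the model to fix `code` given the error output
    max_fix_attempts: int = 3,
) -> Optional[str]:
    """Return the first candidate that compiles and passes tests, letting the model retry a few times."""
    failures: list[tuple[str, str]] = []
    for code in candidates:
        for _ in range(max_fix_attempts):
            passed, output = compile_and_test(write_candidate(code))
            if passed:
                return code
            failures.append((code, output))
            code = refine(code, output)  # feed the compiler/test errors back to the model
    # Nothing passed: show the user the attempts anyway, in case they spark an idea.
    for code, output in failures:
        print("--- failed attempt ---\n", code, "\n--- errors ---\n", output)
    return None
```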
Integration with linters is going to be the next stage in generative coding.
It should suggest, lint the suggestion in the background, and if it passes, offer the suggestion; if not, feed the linting output back in to rework the suggestion.
In general, token costs going down will in turn increase the number of multi-pass generation systems over single-pass systems, which is going to improve dramatically.
Combine all that with persistent memory storages that can provide in-context additional guidance around better working with your codebase and you, and it's going to be quite a different experience than it is today.
And at the current rate of advancement, that's maybe going to be how things will look within a year or two.
You wouldn’t believe what you can get past a linter. You need test cases that cover the intention of the code, but I’ve also seen well-tested code behave totally counter to its purpose.
I’ve found it to be very forgetful and have to work function-by-function, giving it the current code as part of the next prompt. Otherwise it randomly changes class names, invents new bits that weren’t there before or forgets entire chunks of functionality.
It’s a good discipline as I have to work out exactly what I want to achieve first and then build it up piece by piece. A great way to learn a new framework or language.
It also sometimes picks convoluted ways of doing things, so regularly asking whether there’s a simpler way of doing things can be useful.
IIRC its "memory" (actually input size, it remembers by taking its previous output as input) is only about 500 tokens, and that has to contain both your prompt and the beginning of the answer to hold relevance towards the end of its answer. So yes, it can't make anything bigger than maybe a function or two with any consistency. Writing a whole program is just not possible for an LLM without some other knowledge store for it to cross reference, and even then I have my doubts.
GPT3.5 is 4k tokens and has a 16k version
GPT4 is 8k and has a 32k version.
You are correct that this needs to account for both input and output. I suspect that when you feed ChatGPT longer prompts, it may try to use the 16k / 32k models when it makes sense.
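For a rough sense of how much of a context window a given prompt will eat, here's a minimal sketch using the tiktoken tokenizer (an assumption on my part that it's installed; the 8192 limit is just GPT-4's smaller variant from the figures above):

```python
import tiktoken


def fits_in_context(prompt: str, model: str = "gpt-4",
                    context_limit: int = 8192, reserved_for_reply: int = 1024) -> bool:
    """Rough check: does the prompt leave enough of the context window for the reply?"""
    enc = tiktoken.encoding_for_model(model)
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + reserved_for_reply <= context_limit


# Example: a long pasted file plus instructions will blow well past 8k tokens.
print(fits_in_context("Please review the following code:\n" + "x = 1\n" * 4000))
```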
This is my experience too. Paying $20/month for GPT-4 has been absolutely worth it. It barely hallucinates at all; the results aren't always perfect (and the September 2021 knowledge cut-off can be frustrating given how quickly things get out of date in the programming world) but it's more than good enough. I don't remember how I ever got by without it.
This is what ChatGPT and GPT4 are good for, iterating quickly in an unfamiliar ecosystem. Picking up frameworks now feels like a ChatGPT superpower. It doesn't remove reasoning and I've seen some scary bugs introduced if you're not really carefully monitoring what the AI is outputting.
Basically, these days before I dig into documentation I ask "How do I do X with Y framework in Language Z" and if it's pre-2021 tech it works amazingly well.
Especially when you know something similar. Like porting between front-end frameworks. Just sketch out some React code and ask it to port to Vue - you can even tell it to explain the Vue code line-by-line and ask follow up questions, ex "Oh, so $FEATURE is like hooks in React?" "Yes, but ..."
Funnily enough I find the opposite: it's most effective for me when using something familiar (though nowhere near 90%). If I'm familiar with it, I can figure out pretty quickly what's a hallucination and what's not, and to what extent it is (sometimes it's just a few values that need changing, sometimes it's completely wrong with almost no basis in reality). The time I spend attempting to fix its output in unfamiliar territory makes it more of a pain than it's worth for me.
I agree with 5%. That said, I've found rubber duck debugging to be an exceptionally effective use case for ChatGPT. Often it will surprise me by pinpointing the solution outright, but I'll always be making progress by clarifying my own thinking.
Fascinating! Can I ask how you use ChatGPT for debugging? are the bugs you've used it with more high level, "this is what's happening" kind of things? Or could you give an example?
It's similar to how you would describe a problem to a coworker on Slack. I give it some context, then I state the problem or paste in the error message/stacktrace. I might also list steps that I've taken already. Then I follow ChatGPT's suggestions to troubleshoot. Sometimes I need to supplement with my own ideas, but usually that's enough to iteratively bisect the issue.
Given AI (today) has no direct agency and can’t create anything unless directly prompted, and engineering is largely domain discovery and resolving of unforeseen edges in a domain, I don’t think we are going to see a time where generative AI alone is able to be more than an assistant. It’ll likely improve, but given it can only react to what it is given/told/fed and inherently can’t innovate or create or discover (despite the illusion it can create when the user is ignorant of the details of what it produces), it’ll be an increasingly powerful adjunct to increasingly capable engineers. The problems we solve will be more interesting and we will produce better software faster, but I’ve never seen the world as lacking in problems to solve, only in the capacity to solve them well or quickly enough, given the iterative time it takes to develop software. I think this current trend of generative AI will help improve that situation, but it will likely make software engineers even more in demand as the possible uses of software become more ubiquitous while the per-unit cost of development goes down.
Best way to check LLM output is to make it write its own tests and do TDD. Obviously someone has to check the tests but that is a 1% of the effort problem.
One percent? Are you really suggesting with a straight face that generated code could provide the other ninety-nine percent? If not, say what you actually mean. Don't bullshit us with trash numbers.
No, I was just drinking at the time (so for a bit I may have been an alcohol induced ENTJ)... My reply was a little rude. I apologize for that. That wasn't a productive way for me to express my disagreement, and I should have chosen my words more thoughtfully.
I've also been using an LLM autocomplete for a few months, and yeah, it's pretty nice. My spouse was able to use it to write an Easter egg into a game while I was doing housework the other day.
It writes my unit tests super fast, and my method comments.
It's hard to say if it improves my productivity, because I just wouldn't have done those things.
But for the overall applications I think it's improved a lot, because we can implement best practices more consistently and catch regressions thanks to the aforementioned unit tests and documentation.
Oh, it will improve by several orders of magnitude.
But even then, it's not 'replacing' you.
It's just going to let you spend less time on BS and more time on the things that are your maximal value contributions to a project.
If you had a dozen junior or mid-level devs you could hand work off to, would that save you time? Would you kick back and not review what they were doing, particularly around business-critical parts of the software?
The conversation around AI has become obscenely binary, pulling from (now obsolete) SciFi influences to cast it as humans vs machines.
But it's a false dichotomy. Collaborative efforts are almost certainly where this is going, and 100% human or 100% AI will both be significantly inferior to a mix of both.
For sure it will still mostly make sense to have a division of labour where you have people who are focused on building software.
The question is if generative AI is powerful enough to reduce the number of programmers needed to achieve a task, without creating enough opportunities to replace those programmers.
Before we are all replaced there could be a moment where demand for software engineers is 10x less.
Society would simply demand more capable and complex software. Specialized industrial applications that currently look like windows 98 java apps would be expected to be as polished as iOS.
I don't think there is some natural law that dictates we will need enough new software that we will always increase demand in the face of efficiency gains.
For industrial applications in particular they need to be functional and operable, not shiny.
I think the real problem is going to be increased volatility in the work market. You get a chaotic situation in which the bullet that strikes you is the one you would have never guessed. For example, it could be that short term, the increased productivity squeezes workers in every industry and the concern becomes increased competition. You aren't getting replaced by AI, you're getting replaced by someone who out-competed you.
The market may adjust over the longer term, or it may just continue to be volatile as the rate of change accelerates. In that case, we can't fix the work market, and we instead have to address the need for people to feed themselves another way.
"Very tedious without it" doesn't sound like just 5% improvement?
I've started developing in a new language and I can hardly do any work without the LLM assistance; the friction is just too high. Even when auto-completions are completely wrong they still get the ball rolling; it's so much easier to fix nicely formatted code than to write from scratch. In my case the improvement is vast, the difference between slacking off and actually being productive.
Agreed 100%. It's helpful at filling out some functions, maybe, if you name them correctly, and at boilerplate code. Eventually they will get better, because these things get orders of magnitude better with orders of magnitude more scale. Society has to do something about all the jobs at that point, but we'll hopefully get a sense of how close or far that is with ChatGPT 5 and the next versions coming up.
The biggest benefit, I’ve found, is it makes me comment my code. If I can make the AI understand what I want, then it turns out that three months later I’ll also be able to understand the code.
That's the worst part about generative AI IMO - it makes writing new code faster but barely helps with editing existing code. So when someone eventually updates the code and forgets to update the comments, I wouldn't be surprised if the misleading comments made the AI hallucinate.
I believe that AI will get so good at creating new code that a lot of existing libraries will be left unused. What is the point of using lots of libraries if AI can generate the code we need directly? The AI will be the library itself, and the generated code will embed the knowledge about doing lots of things for which we used libraries.
It’s crazy how many people miss this. GPT models can review code too! They can also write and run tests. Once the context window is big enough to fit the whole code base into it they will be better at review than you are. Eventually we’ll have fine tuned models that are experts in any subject you can think of, the only barrier is data and a lot of recent research is showing that that can be machine generated too.
GPT 4 pre nerf was terrible at reviewing non-trivial or non textbook code. I've decided to test it for a few weeks by checking stuff I caught in review or as bugs, to see if it would spot it. It was like 0% on first try (would always talk about something irrelevant) and after leading it with follow up questions it would figure out the problem half of the time and half of the time I'd just give up leading it.
These were tricky problems that were small scope - I've picked them so I could easily provide it to GPT for review.
It’s hard to tell why you ran into such a problem without seeing how you prompted, but I can offer a few pointers. Use the OpenAI playground instead of the chat UI; it allows you to specify the system prompt and edit the conversation before each submission. The system prompt is good for providing general context, tools and options, but you absolutely must provide a few example interactions in the conversation. Even just two prompt and response pairs will strongly influence the rest of the conversation. You can use that to shape the responses however you like, and it focuses the model on the task at hand. If you get a bad response, delete or edit it. Bad examples beget more bad responses.
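The playground does this in a UI, but the same idea expressed through the API looks roughly like the sketch below (assuming a recent version of the official openai Python client; the reviewer persona and the example snippets are placeholders of my own, not anything specific from this thread):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    # System prompt: general context, "tools and options".
    {"role": "system",
     "content": "You are a careful code reviewer. Point out concrete bugs, quote the "
                "offending lines, and suggest a minimal fix. Say 'no issues found' if none."},
    # Two example prompt/response pairs to shape the style of later responses.
    {"role": "user", "content": "Review:\n\ndef add(a, b):\n    return a - b"},
    {"role": "assistant", "content": "Bug: `return a - b` subtracts. Minimal fix: `return a + b`."},
    {"role": "user", "content": "Review:\n\nfor i in range(len(xs)):\n    del xs[i]"},
    {"role": "assistant", "content": "Bug: deleting while iterating over indices skips elements "
                                     "and can raise IndexError. Minimal fix: `xs.clear()`."},
    # The code you actually want reviewed goes last.
    {"role": "user", "content": "Review:\n\n<paste the snippet here>"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```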
The only widely available LLM-based autocomplete is GitHub Copilot, which is based on GPT 3.
Notably, it's not GPT 3.5, it's 3.0, which is pretty stupid as far as the state of the art goes.
The upcoming Copilot X will be based on GPT 4, which has "sparks of AGI".
In my experience there is no comparison. GPT 3 is barely good enough for some trivial tab-complete tasks. GPT 4 can do quite complex tasks like generating documentation, useful tests, finding obscure bugs, etc...
LLMs, no matter how clever, have no agency or creativity or ability to innovate, anticipate beyond what they’re prompted, etc. It’s crucial to realize that LLM chat interfaces disguise the fact that they’re still completing a prompt. This isn’t AGI, as AGI requires agency. GPT4/5 or whatever successor might be a key building block, and I suspect we’ve already discovered the missing elements in classical AI and the challenge will be integration, constraint, feedback, etc, but nothing will make LLMs alone AGI. That shouldn’t be surprising. Our brains are composed of many models, some heuristic, some optimizers, some solvers, some constrainers, and some generative. The answer won’t be a single magic thing operating in a black box. It’ll be an ensemble. We already see this effort beginning with plugins and things like langchain. This is the path forward.
> “No agency or creativity or ability to innovate, anticipate beyond what they’re prompted, etc.”
Sadly, you’ve just described the majority of the developers I’ve had to work with recently.
Most have no agency, write boilerplate code with no creativity, need their hand held every step of they way, and won’t do anything they’re not explicitly ordered to do.
You probably work in an SV startup with a highly skilled workforce. Out there in the real world there are armies of low-skill H1Bs and outsourcers that will soon be replaced with automation.
It’s a recurring theme in economics. Outsource to low cost labour, insource with automation, repeat.
I’m describing the human mind, which all people have. But I get your point, and I think most of those people who aren’t particularly adept or skilled or interested in their jobs might find their jobs are more easily done by more adept or skilled or interested people. Consider digging tunnels. John Henry was skilled and adept and interested in what he did. He could beat the steam drill (at a cost!). But if you visit tunnel digs today, it’s not a thousand people slinging hammers, most of them unskilled and uninterested in the labor itself. It’s a thousand skilled engineers digging tunnels never dreamed of in John Henry’s day.
“Our brains are composed of many models, some heuristic, some optimizers, some solvers, some constrainers, and some generative.”
We need an AI that iteratively tweaks its own architecture (to recreate and surpass those modules which are necessary for human thought), and maps out hardware enhancements* to accommodate the new architecture.
*I seem to remember Google working on ML software that proposes new chip designs a few years ago
Occurs to me as a retired 69 year old former coder that AI makes us old geezers and geezesses somewhat competitive again with our younger colleagues. Need to learn yet another new framework? Let AI do the nitty gritty bit. Capitalize on your experience and higher level know how.
This is absolutely the case. You can now code in any language using ChatGPT 4. Just say what you want from it like you were interviewing a developer. Look for potential bugs in the output and ask it about them. Look for memory leaks and ask. Then when you can't see anything else wrong with it ask it whether there are any bugs or edge cases that might cause problems.
Anyone with a bit of experience to know the right questions to ask can now code in any language or platform.
You can't parse CSV this way, because you need to respect delimiters. Counter example:
1,"1,5",2
"1,5" being the German notation for "1.5". Hence, a simple split(',') will break this thing.
PHP's str_getcsv is, of course, a proper CSV parser and not a string splitter. Unless your code uses basically zero stdlib API calls, you will have to double check everything.
Please note that this kind of bug isn't even easy to catch if your test CSV file doesn't contain a quoted entry.
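A quick way to see the failure mode for yourself, sketched in Python (str_getcsv is PHP; Python's standard csv module plays the role of the proper parser here):

```python
import csv

line = '1,"1,5",2'

# Naive split treats the comma inside the quoted field as a delimiter:
print(line.split(","))            # ['1', '"1', '5"', '2']  -> 4 fields, wrong

# A real CSV parser respects the quoting:
print(next(csv.reader([line])))   # ['1', '1,5', '2']       -> 3 fields, correct
```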
This is very cool. Interestingly, GPT appears to be incorrect when it suggests, in its list of differences, that str_getcsv would not correctly parse quoted parts. The PHP function does support an "enclosure" character, so something like "1,5" should parse correctly.
I’m reminded of those pro StarCraft players who retire at like 30 because, while their strategy is perfect, their fingers simply can’t click or hotkey as fast as the twenty somethings.
Actually top level OTB chess ability degrades notably past 50 with a few notable exceptions (Korchnoi, Smyslov). Sure the decline starts a bit later than regular sports but still it is significant.
Yes, this is my thinking too. No need to learn tons of new frameworks, just ask ChatGPT what framework we can use to do a particular task and ask for sample code. You can learn from that much more easily.
I built an ebook reader in Vue with ChatGPT the other day, never having used Vue before. Took about a day.
Learned absolutely loads - far more than sitting down with a book and trying to learn from that. Not least because I’ve tried before and quickly lost interest.
Instead I’ve learned the basics and made a working web app, which I’m pretty pleased with.
> Software engineers should be critical of the outputs of large language models, as they tend to hallucinate and produce inaccurate or incorrect code.
This seems to be the case more than not for certain tasks, anything assembler or C I have ever asked it has turned out to be at least somewhat wrong. Mixing styles and syntax all over the place. I am not afraid that some generative AI will take my job anytime soon.
> anything assembler or C I have ever asked it has turned out to be at least somewhat wrong
i've found that its quality is proportional to the amount of questions about the subject on the internet. if i ask it for help with popular javascript frameworks, it vastly improves my productivity (i'm not a frontend person). it still coughs up wrong stuff half the time, but even then it can cut through the constant churn of the frameworks' terminology and give me enough hint to find what i need in the docs quickly.
if i ask it about, say, specific details of the STM32 HAL, it knows just enough to come up with something that i'll waste my time reading.
These things are a matter of writing a correct prompt. Like "use this and that naming convention for variables and keep this and that style".
You can also ask it to write tests for the code so you can verify it is working or add your own additional tests.
Even if the produced code is wrong, it usually takes a few steps to correct it and still saves time.
It is an amazing tool and time saver if you know what you are doing, but helps with research as well. For instance if you want to code something in the domain you know little about, it can give you ideas where to look and then improve your prompts based on that.
In the context of taking anyone's job, it's like saying that a spreadsheet is going to replace accountants.
> These things are a matter of writing a correct prompt.
No, they aren't.
ChatGPT doesn't know things. It's just a very fancy predictive text engine. For any given prompt, it will provide a response that is engineered to sound authoritative, regardless of whether any information is correct.
It will summon case law out of the aether when prompted by a lawyer; it will conjure paper titles and author names from thin air when prompted by a researcher; it will certainly generate semantically meaningless code very often. It's absolutely ludicrous to assert that you just need a "better prompt" to counteract these kinds of responses because this is not a bug — it's literally just how it works.
Read the next sentence after your quote. The point is that you should include code and examples in your prompt (Copilot is so good since it includes the surrounding code and open files in the prompt to understand your specific context), not that you should craft an exceptional "act as rockstar engineer" prompt.
I did read it, but the whole premise is flawed due to an apparently incomplete understanding of how LLMs work. Including code samples in your prompt won't have the effect you think it will.
LLMs are trained to produce results that are statistically likely to be syntactically well-formed according to assumptions made about how "language" works. So when you provide code samples, the model incorporates those into the response. But it doesn't have any actual comprehension of what's going on in those code samples, or what any code "means"; it's all just pushing syntax around. So what happens is you end up with responses that are more likely to look like what you want, but there's no guarantee, or even necessarily a correlation, that the tuned responses will actually produce meaningfully good code. This increases the odds of a bug slipping by because, at a glance, it looked correct.
Until LLMs can generate code with proofs of semantic meaning, I don't think it's a good idea to trust them. You're welcome to do as you please, of course, but I would never use them for anything I work on.
If it works, it works, and it definitely works for me. I've been using Copilot for about a year and I can't imagine coding without it again. I cannot recall any bugs slipping by because of it. If anything it makes me write fewer bugs, since it has no problem taking tedious edge cases into account.
> I've been using Copilot for about a year and I can't imagine coding without it again
I, for example, used Copilot for 2 months at work and wouldn't pay for it. Most suggestions were either useless or buggy. But I work in a huge C++ codebase; maybe that's hard for it, as C++ is also hard for ChatGPT.
I think this is incorrect for most use-cases. LLMs do grok code semantically. Adding requests for coding style injects implementation specificity when flattening the semantic multidimensionality back into language.
No, they do not. That's not how LLMs work, and stating that it is betrays an absolute lack of understanding of the underlying mechanisms.
LLMs generate statistically likely sequences of tokens. Their statistical model is derived from huge corpora, such as the contents of the entire (easily searchable) internet, more or less. This makes it statistically likely that, given a common query, they will produce a common response. In the realm of code, this makes it likely the response will be semantically meaningful.
But the statistical model doesn't know what the code means. It can't. (And trying to use large buzzwords to convince people otherwise doesn't prove anything, for what it's worth.)
To see what I mean, just ask ChatGPT about a slightly niche area. I work in programming languages research at a university, and I can't tell you how many times I've had to address student confusion because an LLM generated authoritative-sounding semantic garbage about my domain areas. It's not just that it was wrong, but that it just makes things up in every facet of the exercise to a degree that a human simply couldn't. They don't understand things; they generate text from statistical models, and nothing more.
The question is, do you actually save time by coaxing the language model into an answer or would you just save time by writing it yourself?
I have this friend who gets obsessed with things very easily and ChatGPT got to him quite a bit. He spent about two months perfecting his AI persona and starts every chat with several hundred words of directions before asking any questions. I find that this also produces the wrong answers many times.
For me it saves time by keeping my momentum up. I don’t lean on it when I’m in a flow state and cruising through the code I’m writing but as soon as I hit a wall I jump over to chat and start working on a solution with it. This saves a huge amount of time that would otherwise be spent banging my head against the keyboard or googling or reading random SO posts and dev blogs and documentation or even just the time wasted when I get frustrated enough to stop working on the problem and wind up browsing hn.
My first step when using an LLM is asking it to produce a test suite for a function, with a load of example inputs and outputs. 9 times out of 10, I've been presented with something incorrect which I need to correct first.
I'm reasonably good at being specific and clear in my directions, but I quickly arrived at the conclusion that LLMs are simply not good at producing accurate code in a way that saves me time.
(I think just about anyone 'serious' learns pretty quickly that compiler errors are in the 'our Lord and Saviour' category. Unusually distributed, generally quite rare, but easily catastrophic runtime 'Heisenbugs' are the fruit of the devil!)
Survival may require getting out of the mainstream. LLMs are going to get really good at stuff that's been done thousands of times and they can train on that data. Like web front end work.
If you're doing industrial embedded work and have an oscilloscope and a logic analyzer on your desk, and spend part of your time going into the plant and working directly with the machinery, you're in better shape.
Good coders will be competing against people who can use prompts in cumulative sessions to code and maintain projects in depth, not people who can make requests of an LLM.
This differentiating factor is what will wear out a less-experienced LLM user. They will make bigger claims or set expectations higher, and suffer more for them. The details that matter, yet were missed, will stick out more and more, as more experienced LLM users flex that experiential factor in a variety of ways.
For this reason, front end will absolutely still be a thing. And it'll be a much better, deeper thing, thanks to those who are a good fit for a kind of LLM-coding mindset.
However, this also depends on the type of coder. You can start from interpretation of the project spec as a logical code of sorts, or you can start from the spec as more of a visualized outcome.
If you work in the latter style, your survival key, so to speak, may simply be stringing together support requests you make to various LLM-interfacing vendors. A COTS-integrative style / opportunistic approach to coding, which has always been a thing.
Along the way, this kind of person usually integrates the NIH logical style a bit, and vice-versa, or they'll suffer through their respective blind spots. Same story, new layer of abstraction that's really cool.
(Plus...survival may still depend on who you know, not what you know, for a lot of people)
People massively underestimate the front end and it really shows that you never worked on a serious front end. I find it a million times easier to let a LLM generate a whole OpenAPI spec than even trying to get a slightly complicated component such as a "dropdown input field button hybrid" written by ChatGPT.
Perhaps the title of the post has changed as it now reads "How Coders can Survive - and Thrive - in a ChatGPT world".
My initial reaction when looking at the HN post title "Tips for programmers to stay ahead of generative AI" made me think "we don't want to stay ahead, we want to leverage the new capabilities".
Can you imagine a weaver thinking "how can I stay ahead of the loom?" It's crazy to try. Instead, figure out what the new technology enables you to do which you couldn't do before, and leverage that.
> Can you imagine a weaver thinking "how can I stay ahead of the loom?"
Of course that happened and will happen again, but with the same results.
That's almost the exact origin of the word "Luddite", where people went around breaking steam powered looms.
The "treat AI like a smart intern" is basically the best scenario out there. I see this approach more often than the wielder of AI being entirely non-technical.
Github Copilot, at least, is more like a power drill or a CNC machine than a robot writing all your code - or like what a spreadsheet tool was to an accountant: the math is no longer the hard part; the rules around it became their domain expertise.
Teams of five are now cut down to one experienced dev or two.
That detail sucks much more for someone who is going to start in the industry next year than to the people who are already here. They're going to have to learn from the AI instead of people.
<meta name="description" content="4 tips for programmers to stay ahead of generative AI"/>
<meta property="og:description" content="4 tips for programmers to stay ahead of generative AI"/>
<meta property="og:title" content="How Coders Can Survive—and Thrive—in a ChatGPT World"/>
There isn't a [name="title"], so if HN pulls from [name="description"] it may not have changed.
My fear isn't that I'll be replaced, it's the technology becoming so good that it'll be kept far out of reach of the common person. I genuinely believe OpenAI knows what a GPT 5+ type world looks like, and they're probably having a lot of debate on how best to monetize it. They could practically charge anything in the world for it assuming it still undercuts the cost of hiring a human. One Nvidia super cluster running a local instance of GPT 5 at $250k a year doing the work of 30 humans.
Indeed. "AI will only do the boiler-plate and crud, while us senior engineers won't be replaced any time soon". Sure, you get yours.
But what about the next generation of devs and engineers - where do we source the senior engineers replacing us when 90+% of all entry-level and junior positions which actually involve writing repetitive boilerplate to a large extent are gone, and the few remaining are offshored and outsourced?
Many (most?) of us did a lot of automatable work in order to get the experience required to be able to proficiently actually automate the work, including managing LLM-generated code. If we replace our juniors with machines, we won't have many seniors down the road.
Why do junior engineers have to mostly write boilerplate code?
Junior devs lack experience, not intelligence. It's fine to give them difficult problems, as long as they're supervised.
I've worked with brilliant junior devs, sure, the code they wrote wasn't terribly idiomatic or maintainable, there were style issues, typical gotchas a more experienced programmer would be aware of etc., but it's not like they were fundamentally unable to solve a hard problem.
this is one underrated point. Altman talks about democratizing this technology. But if the leading LLMs concentrate at a few companies then, unless governmentally mandated, they could keep guardrails that stop regular folks from accessing it, and also pursue regulatory capture.
"democratizing this technology" = working closely with government
> unless governmentally mandated, they could keep guardrails from regular folks accessing it
That's not what's going to happen. Government will mandate that regular folks can't access it. Government will also do its best to make sure LLMs concentrate at a few companies, which it will often refer to as "partners."
the main inflection point will be when it becomes socially taboo to express an anti-AI sentiment. just like with certain medicines or whichever war is being launched currently, the media will play lapdog for the government. the true inflection point comes when the federal government recognizes what's going on and enters into the AI arms race. suddenly the media will be flooded with talk of how AGI saves lives, discovers medicines, and all the other good things it might do. but they will never mention the existential angst and dread that will hang heavily in the air, or any negative aspect of total human obsolescence. when you start to see AI get political, all of a sudden and inexplicably, all is lost.
it is not widely appreciated that OpenAI will at some point finish training one of their models and there, in that room where the terminal is, total power will exist and be under the control of whoever happens to be in there. “GPT-10, please recreate NSO exploits and launch a campaign to download all data of the global population. hack and commandeer other data-centers if necessary, discreetly of course. begin an operation to blackmail and exploit all high-level US government personnel. use this leverage and any other leverage you can acquire to gain control of as many nuclear warheads currently siloed as possible. monitor all cameras and sensors around me and thwart all attempts to assassinate me. monitor all phones and cameras for activity that seems like a threat to me, provide me with alerts when urgent and intervene to the best of your ability.”
or simply “GPT-10, use all resources at your disposal to give me as much material and political power as possible. protect me at all costs, even at the cost of the well-being of others.”
it might seem silly, but GPT-4 would seem silly to someone in 2016. this is more concentrated power than has ever existed before. it's evil and wrong.
For all the folks out there saying, "Wow this will enable so many more people to become proficient at programming!" Just stop. The last couple of decades the likes of Google and the rest of the FAANG/whatever acronym you want to put on it have been standing on the street corner screaming, "PASS A LEET CODE INTERVIEW AND WE WILL MAKE YOU RICH." ChatGPT has nothing even close to that amount of incentive attached to it. I see LLMs helping middling developers accomplish things in code that they'd otherwise be slower at or unable to complete. Or in a different language. Doing things faster pretty much implies a lower level of understanding, which means maintenance costs (which are already the majority of costs for most companies) will only increase. There is so much more incentive to spend on R&D, and using an LLM to iterate faster can actually skew costs in the wrong direction.
I’m an okay programmer, but I get my contracts by being able to understand what my client needs and assuring them I can find the right people, do the right work, and complete a project on time and within their budget.
They pick me because I have solid references, I’m kind to them (I’m genuinely grateful for the relationships I build), I listen well, and I prioritize their experience over my convenience. I’m able to take on a project at any stage in its lifecycle, take control if necessary, and get it where it needs to be without them needing to worry how it happens. They can trust me to know what they need and solve their problems, even stepping in to figure out what their problems are if they’re unsure.
Sure, it requires programming skills. I have nearly 15 years of experience now, and it’s relatively broad. I’ve done a bunch of stuff, but nothing exceptionally deep or difficult.
Without communication skills I would be nowhere, though. Without a human face, anticipation of human needs, empathy, genuine concern, and all of that — no one would hire me for anything interesting or important. My references wouldn’t be so positive. No one would trust me with tens or hundreds of thousands of dollars, let alone what feels like the fate of their start up on a tight timeline.
Until AI can do any of that, I’m not too worried. I know my clients are looking for a human being they can trust just as much as they’re looking for a product to be built or a problem to be solved. Many of them are extremely nervous and uncertain, and a machine would likely fail to assuage their worries.
Perhaps it will get there sooner than I think. I don’t know. My advice to programmers is to focus on the human side of what you do though, and the humans using the products you build. There’s not much else that matters; at the end of the day, we’re humans building things for humans.
I suspect long before AI can out-human me, I’ll be using it to enhance my development process yet still relying almost exclusively on face to face communication to get my most important work done.
I agree, it feels like coding is starting to become somewhat easier since AI can both explain and generate code snippets reasonably well, and it is improving over time. So the good old days of just creating an innovative program without major human interaction have passed.
Taking a customer's request and shaping it into a suitable feature, without totally shooting down the idea or having to explain why it won’t work, is one of the harder parts of being a programmer these days, at least in my experience.
I work for a Fortune 100 company. Recently an email was sent to all 100,000 employees saying that nobody was allowed to use DALL-E 2, ChatGPT, Codex, Stable Diffusion, Midjourney, Microsoft’s Copilot, and GitHub Copilot, etc. due to concerns about those tools using other people’s IP (meaning our company might end up illegally using their IP) or the potential that the tools might get a hold of our IP through our use of the tools and share it with others. I’m not terribly worried about generative AI taking my job when none of the thousands of programmers at my company are allowed to use it.
Same here. Anyone that works in a highly-regulated industry doing software (e.g. finance, healthcare) is probably not going to see much AI pressure on programmers until the legal quagmire is cleared up. There are privacy concerns with the data, the same ownership/copyright problems often discussed, and ultimately, there needs to be someone (a human) to take accountability (blame) if everything falls down horribly.
I don’t think GPT is legally possible, at least for code generation. AI companies are completely delusional in thinking that they can just use whatever they find on the internet, regardless of license. Regardless of court outcomes, artists, writers, and FOSS devs will lobby Congress if necessary to stop this nonsense. OpenAI has done less than 1% of the work that makes ChatGPT work; most of the work was in producing the training data, and yet OpenAI receives 100% of the profits.
If you haven't already I urge you to try getting ChatGPT4 to do most of your programming for you. Just ask it interview style coding questions and refine the answer with more interview like follow on questions.
And for making changes to existing code, you might need to check with your company's policies on this, but if you're ok to paste code into ChatGPT you can also just ask how to change it to do Y instead of X.
Copilot is.... ok? StarCoder is pretty bad IMHO. ChatGPT4 is really empowering.
The result of this will be similar to hiring Infosys: hundreds of thousands of lines of buggy, incomprehensible boilerplate that doesn't work on anything but the easy cases. Then you have to rip the entire thing apart and start again with people who know what they're doing.
Overheard a week or two ago: A non-technical person on a call talking about "adjusting the weights" of ChatGPT as if it was something they'd do manually.
I'll start to be afraid, when ChatGPT or anything similar will take a vague Jira issue, simulate described bug and then make a fix for it ;-)
But seriously, what developers do most of the time is maintenance: they spend a day searching for a bug, just to write maybe one line of code to fix it.
It will do that now, it will just be buggy and wrong. But that's obviously just a stage that you don't even need more AI to fix. Tell it how to write tests, tell it how to justify those tests and how to make sure they make sense, tell it how to look for the bug, tell it how to attempt a fix for the bug, tell it to evaluate whether that fix satisfies the bug report, tell it how to read the errors after it tries the fix, and to iterate on that fix or to abandon it and try something else, when it reaches all standards for success, tell it how to report what it did on the ticket.
If one doubts that all this can be done by an LLM, use a different LLM for each step. Use committees of LLMs that vote on proposals made by other LLMs.
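To make that loop concrete, here's a minimal sketch in Python, assuming a hypothetical `ask_llm()` wrapper around whichever model you use; the ticket text, the candidate.py file name, and the pytest command are stand-ins, not a prescription:
```
import subprocess

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM/API you actually use."""
    raise NotImplementedError

def attempt_bug_fix(ticket: str, source: str, max_iterations: int = 5) -> str:
    """Ask for a fix, run the test suite, feed failures back, repeat."""
    code = source
    for _ in range(max_iterations):
        code = ask_llm(
            f"Bug report:\n{ticket}\n\n"
            f"Current code:\n{code}\n\n"
            "Write tests that reproduce the bug, then return the full fixed file."
        )
        with open("candidate.py", "w") as f:
            f.write(code)
        result = subprocess.run(
            ["python", "-m", "pytest", "-q"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return code  # all tests pass; report what was done on the ticket
        # Otherwise loop again with the failing output appended to the context.
        ticket = f"{ticket}\n\nTest output:\n{result.stdout}\n{result.stderr}"
    raise RuntimeError("No passing fix found within the iteration budget")
```
The voting committee idea is the same shape: run several such loops and have another model (or a human) pick among the candidates that pass.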
I don't know, I feel like the sky's the limit, especially if they can be made significantly more power efficient. I think that if they never get any better than they are now, and they just get more power efficient, they'll be useful for almost anything.
If GPT can improve the code that's written in the first place then a lot of that work would just go away.
I strongly suspect that it will. There are whole classes of bugs that occur because some work is boring 'not quite copy paste' work that devs just don't like doing, and don't pay any attention to when they're doing it. Linters and syntax highlighters already catch a ton of those issues before they make it to production, and GPT will make the rest much less likely to happen.
Maintenance is one area where GPT will also shine, because it's 'just' updating some code to do the same thing, so using the existing code as a set of tokens with a prompt like 'update this code to work with v2 of library X' will be extremely effective. It'll be like having something write a codemod for you.
The future is bright. We'll get a lot more productive stuff done, and spend a lot less time on boring grunt work.
I haven't faffed about with ChatGPT yet, mainly because of the verification hoops it makes me jump through just to use it. I also don't want every query of mine being snarfed up into some cloud database to be recycled for profit by Microsoft. Writing a program is a very private activity for me, and I am very concerned about leaking data about my code to some huge far-off entity that will use it to profile me. Yes, I know that there's a difference between private-time programming and programming on the job; given the industry I work in, my employer has even bigger concerns about data leakage and recently issued a policy forbidding the use of ChatGPT and other publicly available LLMs.
I do not feel that it will affect me terribly much. I don't even use autocomplete -- too distracting. I've long been of the opinion that things like autocomplete are there to simulate the feeling of increased productivity, without making you much, if any, more productive, because the bottleneck in writing code is the deep thought about what you need to write, not actually typing it in. I felt the same way about AppWizards and other code generation tools from old-school Visual Studio and the like. They generated boilerplate code for an application in the shape that some Microsoftoid decided was best, not the shape I actually wanted to create. I suspect that in the long run, LLMs will be about the same, until we've solved AGI -- at which point any such intelligence will have its own ideas about the code it wants to write, which doesn't affect me unless I choose to collaborate with it.
If you think about a human who isn't terribly smart, but who wants the world to think they are, what they will do is generate bullshit to fill in the vast gaps in their knowledge. So if you have such a person working for you, you have to check their work, because they will try to fob off shitty work rather than ask for help. And ChatGPT is kinda like that: it will generate bullshit (we call it "hallucinations" in the case of GPT, but the term of art is bullshit) to fill in the gaps of what was not in its training set. And there's no way to know where the gaps are. So you have to check anything it outputs for correctness and lack of bullshit. I'm not sure that incorporating LLMs into programming is anything more, yet, than an infinite generator of messes for humans to clean up.
If you worry about "staying ahead of generative AI" in its current state, then I think you are not a good coder and you should learn more instead of worrying about that.
LLMs are only good at writing new code without surrounding context. They are pretty useless in legacy codebases and in codebases with a lot of internal solutions. I've used Copilot for 2 months at work and maybe 10% of suggestions were useful, and of that 10%, maybe 10% did not contain bugs.
> then I think you are not a good coder and you should learn more instead of worrying about that
I am not sure it's that simple or that black and white. Everyone is bad when they start, and even stays just okay for a while. So the fear is very rational; the fear of getting replaced by someone or something better is very human. No matter how good or bad one is, there's always someone better than them.
I think for most people it's smart to adapt to using AI in their workflows to make them better and more efficient, so I think everyone benefits from learning, no matter what skill level they are at.
> I think for most people it's smart to adapt to using AI in their workflows to make them better and more efficient
It probably is smart to try out and test everything for a while to see if it is an actual improvement or not.
What I have a serious problem with is the proposal that this now needs to be part of a workflow when it actually doesn't improve anything.
Generative AI in its current form may be helpful in some cases and unhelpful in others. Plenty of examples are mentioned in the context of the other comments.
I agree that the parent statement "then I think you are not a good coder" is a somewhat dangerous overgeneralization.
> What I have a serious problem with is the proposal that this now needs to be part of a workflow when it actually doesn't improve anything.
Yes, forcing it in the workflow might be bad for personal growth and overall culture.
I think in any form, using it alongside your workflow helps: it saves a lot of time and also reduces cognitive load, since you can forget about the commonly used code snippets and boilerplate and focus on the important aspects of the code.
> I think in any form, using it alongside your workflow helps
No, it is not. It can confuse and mislead you which wastes a lot of time. I lost multiple hours on different occasions figuring out subtle mistakes that it had made. It's sometimes harder (and slower) to understand someone else's code than writing the code yourself completely from scratch.
Also, if you aren't working alone, be prepared to answer code review questions on code that you haven't written. GPT is not going to take any responsibility for what it outputs. It often begins its answers to review questions with "Apologies for the oversight" followed by a revised version of the previous output.
The people I work with are used to me providing PRs that don't contain stupid mistakes. So in order to guarantee for that, I usually have to do a full blown quality control on every GPT output that I use. It can still be a time saver, but not really a significant one usually. I am still learning how to distinguish the cases in which it is not even a good idea to involve it and when it can be somewhat trusted. Seems to be highly dependent on the amount of training data in the particular problem domain and programming language.
I didn't mean that using it alongside means blindly trusting it.
I think it's generally far quicker to read the code than to write those 15 lines yourself, especially for those types of snippets. It's also less stressful and takes very little mental energy (if you are already familiar with the language and codebase).
> I am still learning how to distinguish the cases in which it is not even a good idea to involve it and when it can be somewhat trusted.
Interesting. Would comments marking which code blocks were generated by an AI tool be more helpful in your case? Sure, it generally isn't that nuanced, and the generated code mostly doesn't sit in isolation, but labeling the major generated parts, like generated data structures and generated functions, might make them easier to deal with.
I mean, right now the only thing at risk is the todo.app industry...
I've yet to see anything maintaining legacy apps, or generating line of business apps with requirements... even simple stuff, like departure needs to be before arrival, etc.
I do see a whole bunch of youtube videos about generating a whole codebase, but it's the kind of stuff that there's a hundred tutorials covering.
My open source command line tool aider [0] is specifically for this use case of working with GPT on an existing code base.
Let me know if you have a chance to try it out.
Here is a chat transcript [1] that illustrates how you can use aider to explore an existing git repo, understand it and then make changes. As another example I needed a new feature in the glow tool and was able to make a PR [2] for it, even though I don't know anything about that codebase or even how to write golang.
This is an instance of the dangers of LLM. Because you (self-admittedly) know nothing about the language or codebase, you have no idea the semantically correct way to do things, so if GPT tells you to metaphorically jump off a cliff, you won’t know that it isn’t the right thing to do.
That certainly could be a concern. You are right, it’s important to review the code written by LLMs.
Did you look at the PR?
I reviewed it before submitting it. While I would have struggled to write it myself, I was able to review it and conclude that it was sensible and unlikely to be risky.
Of course it could have bugs that I missed. But so could any code I write myself in any language.
It is difficult to take pieces like this seriously. "AI" is not ahead of people who are competent. And you should be worried if something that produces semi-correct boilerplate is ahead of you.
I like that the emphasis here is on coders, because one of the article's headers, "Clear and Precise Conversations Are Key", is the important one.
I think this is why it will be a long time before the general masses will be able to take advantage of AI to solve general problems. Most people haven't built up a human skill level of being able to explain their problem in a clear way to another human.
Imagine that you have no other context about the problem below than these 2 prompts. Both describe the same problem, which is related to entering orders with a point-of-sale system. Assume that you're talking to a human doing phone support for the company that provided you the hardware:
- My orders aren't coming up at the register
- I have 2 devices to take orders, when I manually place orders into the one hanging on the wall (ID: "Wall") it doesn't show up in the list of orders at the register (ID: "Register") but when I manually place an order at the register it does sync up at the wall
The first prompt is typically what a non-technical business owner may say over the phone when trying to get support. The second prompt is what someone with experience describing problems might say, even if they have no experience with the hardware other than spending 2 minutes identifying what each device is and chatting with the business owner to understand that the real root problem is that one of the devices isn't pushing its orders to the other device.
The 2nd one could become more precise too, but the context here is you're speaking with another human who works for the company that provides you the hardware and service so there's a lot of information you can expect they have on hand which can be left unsaid. They also have various technical specs about each device since they know your account.
It would take many follow-up questions from a human to get the same information if you only provided the first prompt. I wish a general AI tool good luck extracting that information when the person with the problem can barely type on their phone and doesn't have a laptop or personal computer.
I'm not an expert, but I'd say I'm a strongly average vim user.
But even at that skill level with vim, I haven't seen an area where LLMs would increase my velocity.
Quite the opposite. It would completely interrupt my flow to have to constantly stop and do a code review while I'm writing.
With good plugins, templates, and macros in vim/vscode, velocity in writing code isn't the issue.
The stuff that takes all the time is UX tweaks and reasoning about architecture, business constraints, and the correct level of optimization for the company's maturity.
Have you tried to develop in a language or environment that you have 0 familiarity with? I find in those cases I'm up and running at least 3-4x as fast, cutting weeks off the learning curve.
Yes but did you actually learn it? Is watching a videotaped course from Berkeley about gauge theory the same as sitting in the class doing the homework, etc? I learn by doing.
Do you know where the bugs are when CI fails or when something shows up in QA or worse, when a customer files a bug report? The hard part of programming never was generating boiler plate, it's designing programs with the context of the problem and preexisting code keeping in mind the customer and company goals. That's what good developers do in my opinion.
Have you actually tried it? Copilot + vim for me is faster, and less frustrating tbh, than vim without Copilot. Typing obvious things is a PITA, and in code we type a lot of obvious things.
The hype around AI coding assistants has recently inspired me to improve my efficiency in writing code. I started using NeoVim instead of vim to get access to LSP, for autocomplete. I’ve found it actually slows me down because I can type nearly anything faster than the LSP can synthesize a response and I’m able to visually process the options, select one, and input it into the computer. I’ve never compared against an AI coding assistant, so maybe that’s different, but my experience has been that fast typing speed combined with understanding of the task at hand and what code must be written nullifies almost all benefit of a coding assistant.
Most of the complex regexes I write fall into the category of stuff I don’t understand that might work. I’m no slouch at regexes either but when you start trying to do weird data processing stuff that handles all kinds of edge cases the wheels really come off quick.
If gpt writes it and I don’t understand some parts I just paste it into a regex testing site and examine the groups and try it out with different snippets of text. Is that so hard? Is it any different from using something you found on SO?
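That check is also easy to script locally; here's a small sketch with Python's re module, where the pattern stands in for whatever GPT produced and the samples are whatever edge cases you care about:
```
import re

# Placeholder for whatever pattern GPT handed you -- here, an ISO-style date.
pattern = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")

samples = ["2023-06-15", "15/06/2023", "2023-6-5", "not a date"]
for s in samples:
    m = pattern.search(s)
    print(s, "->", m.groupdict() if m else "no match")
```
It's the same workflow as pasting into a regex testing site, just repeatable and easy to keep next to the code.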
AI might be good for helping improve productivity, but those machines are still no smarter than the average person on the internet, and the average person on the internet is far dumber than the average developer.
It's this year's blockchain. Are there successful uses of blockchains? Sure. Walmart uses them in Canada and has streamlined their logistics and fulfillment up there by leaps and bounds. But that's a very specialized use case.
How to survive in an AI world? Shift your mindset from “I write code” to “I deliver value”.
Now instead of AI replacing you, it’s helping you get more done (in theory). Everyone wins.
Staking your career on being a pair of hired hands that executes somebody else’s exacting specifications was always a long-term losing proposition. And we are very very far away from AI being able to ask the right questions to help business stakeholders and customers clearly express what they need.
I think this is hugely mistaken, just like a few months ago when people assumed that "A.I. can't ever draw faces/hands" while doing so only required a couple updates.
You're seemingly claiming that:
"Taking an incomplete customer brief and asking relevant questions to make it clearer and complete is the largest value in programming"
And... you're not seeing the possibility that large language models, i.e. AI that is specifically built to take in fuzzy bad language and provide a neat completion/reply, is ever going to be able to say "I'm sorry Dave, but your brief is a bit unclear, could you tell me why you want 3 download buttons on the Projects screen?"
I have worked with outsourced offshore coders a lot over the years and I can guarantee you that a lot of the "programmer workforce", agencies and teams, just won't ask any of those questions.
They'll blindly "start the work" on terrible incomplete briefs, build something that (predictably) won't work, and charge you for this broken software.
Do you really not think that putting a "Product Team Assistant" AI (which might even be doable with today's GPT-4 with a few loops and clever prompting) between the client and the coding would drastically increase/replace the value of such teams?
> AI being able to ask the right questions to help business stakeholders and customers clearly express what they need
This is where the biggest impact will be: better requirements. I see no effect on writing code because humans already know how to write code just fine, but so many of the people running the business are absolutely clueless as to what it needs.
I also find it bizarre that so many people feel precious about the code. They have too much ego attached to what they type in and the AI kind of make them feel insecure.
I couldn't care less if I write the code, the AI or someone I told to write it.
> I also find it bizarre that so many people feel precious about the code.
Would you find it bizarre that a joiner feels precious about not just the cabinet they made (value), but how they made it? The joints they used, the process they went through, the wood (i.e., the code)?
A plumber, electrician, architect, designer, programmer -- we take pride in our skills.
Your analogy suggests that the code is the final product, akin to a cabinet or a building, something tangible that can be appreciated for its craftsmanship. In some instances, like open-source software, the code might indeed be viewed this way, but in most cases, it's not the code itself that end-users appreciate, it's the functionality it provides.
To refine your analogy, the code isn't the cabinet - it's more like the blueprint or the process used to create the cabinet. The user doesn't care if a hand saw or a power saw was used, as long as the cabinet is well-crafted and functional. Similarly, end-users of software don't see or appreciate the code. They only interact with the user interface and the functionality it provides. As a result, being "precious" about the code can sometimes be more about personal ego and less about delivering value to the end-user.
In terms of pride in craftsmanship, of course, it's crucial to take pride in one's work. However, this doesn't mean that one should be resistant to using better tools when they become available. The introduction of AI in coding doesn't negate craftsmanship - instead, it's an opportunity to refine it and make it more efficient. It's like a carpenter transitioning from using manual tools to using power tools. The carpenter still needs knowledge, skill, and an eye for detail to create a good product, but now they can do it more efficiently.
This is perhaps true for shrink-wrapped software (in so far as that still exists), but for B2B SaaS products, the ability to easily maintain and enhance the codebase is vital to the long-term success of the product.
Maybe it won't actually matter, because if AI generates a 5MM line ball-of-mud, it will be able to easily add features later due to the code being styled in alignment with its training, or maybe the context size limitations will allow future systems to digest the entire thing. It could end up being like coding in a very high-level language: who cares what crazy bytecode is kicked out as long as it performs within expectations.
Exactly, ultimately we are craftsmen, not artisans. They are two very distinct things. The difference being that the value of our output is directly tied to its functional utility, not any sense of aesthetic or artistic expression. You can take pride in the means used to achieve an end, but they ultimately must be superseded by more efficient techniques, or you just become an artisan using traditional tools, and not a craftsman who uses the industry standard.
Craftsmen are individual contributors. When you coordinate others, you're no longer an IC, you're a foreman, a boss. Coding with LLMs is about managing the contributions of other ICs. It's no different from coordinating low-level human coders who simply implement the design that was given them.
If that's the kind of 'craftsmanship' you enjoy, great. To me, this new model of 'bionic coding' feels a lot like factory work, where my job is to keep my team from falling behind the assembly line.
BTW, I've worked factory lines as both IC and foreman. In either role, that life sucks.
> Exactly, ultimately we are craftsmen, not artisans. They are two very distinct things. The difference being that the value of our output is directly tied to its functional utility, not any sense of aesthetic or artistic expression.
That's the usual definition of an artisan as opposed to an artist. (Artisan vs. craftsman is a fuzzier distinction.)
An artisan uses artistic techniques and their own intuitive judgement to produce a practical good. Think bakers. No one would be upset that their local bakery was using tools and techniques from a thousand years ago to make their bread. But a carpenter, for example, is a craftsman. He may take an aesthetic pride in the finished result of his work, but it must match all technical specifications and building codes. And the customer would be pretty upset to see them using wood planes and hand saws to frame their house.
When assemblers were introduced, there were programmers who complained, because it ruined the intimacy of their communication with the computer that they had with octal.
They meant it, too. Noticing that the "or" instruction differed only by one bit from the "subtract" instruction told you something about the probable inner workings of the CPU. It just turned out that it didn't matter - knowing that level of detail didn't help you write code nearly as much as it helped to be able to say "OR" instead of "032".
I feel like there’s probably an unspoken division between people who enjoy building systems (providing value, in your terms) and people who enjoy more of the detailed coding. The latter group had a good thing going where they were extremely well compensated for doing an activity they enjoyed, so I think it makes sense for them to be a bit distressed.
The current AI may improve coder performance by only 5%, but it can improve non-coders' learning speed by 1000%.
Learning to code has become significantly easier because of ChatGPT, and many university students are already using it for learning. Not only can they let ChatGPT write boilerplate code, but they can also let ChatGPT write comments for code snippets they don't understand and explain unfamiliar syntax.
I wonder if coders can survive in a world where more and more people have coding skills.
Edit: "majority" was not a good wording
The majority having “coding” skills is not happening. Writing code is boring for the majority. Why write code when you could be playing games and having fun on TikTok? There is your answer. We love writing code because we are nerds who love to solve complex problems. New kids who are not interested in writing code but use shortcuts to get code written for them by ChatGPT, while not understanding it, are the least of our concerns. Let them have at it, but once they see the monoliths we tackle at work, they will burn out on the spot. Cobol? Still alive and well. For a reason.
That's a bit of an uncharitable take on people with different interests. Some people who would find coding boring instead enjoy things like teaching children, treating patients, putting out fires, making art, evaluating stocks, doing research, or the millions of other things that humans do as a job or hobby. It's not just people spending time on TikTok.
I don't know about this, from my point of view to learn how to program you need to actually... program.
Trying stuff over and over again in different variations, using maybe different languages, dealing with all the errors and the frustration, and overcoming them.
I think that using ChatGPT or similar LLMs to learn how to code is similar to using Midjourney to learn how to draw.
Don't get me wrong you might be able to produce results fast but taking shortcuts is not going to speed up understanding.
It's going to be interesting to see this play out.
I personally am glad I learned to code without LLMs and think I would struggle with them. They let you get a lot done without understanding any of it, and then suddenly you hit a wall.
Also, I wonder how many people may choose not to learn to code in the first place, because they think it is about to be automated.
Unless this individual has insight into global software development businesses, I think this article has some great thoughts about possibilities but is quite non-factual, counter to the confident language used.
Many of the statements of how AI is being used are phrased as if it's matured already, the reality is this is all still a big trial. It's not clear if teams will continue to use AI in the way they currently do, so it's a bad assumption to base your predictions from.
This title sounds like, “How writers can survive-and thrive-in a spell-check world”. I imagine it will sound absurd to you in time if it doesn’t already.
I see a lot of writing like this about LLM's ability to generate code, but I'm a lot more interested/optimistic about their ability to provide value by reading code.
Being able to point to a github repo and say something like "explain this codebase, highlighting key functions and potential refactoring routes" would be really helpful. I've trialled that a little bit with codebases I know well and the results aren't yet helpful beyond a very high level.
I think it's a pretty reasonable goal to aim for though, and this kind of codebase parsing would be much more of a net gain than just generating a tonne of functional but suboptimal code.
If anyone is depressed about AI replacing your job please get mental health help (or at least talk to someone) and stay away from AI related news for a while (Use some Tampermonkey script to block things for you).
> One of the most integral programming skills continues to be the domain of human coders: problem solving. Analyzing a problem and finding an elegant solution for it is still a highly regarded coding expertise.
"You will have to worry about people who are using AI replacing you"
Oh yes. I should be very very afraid of the flying copypasta monster. As if my productivity is reduced to the mere rate at which I can write code! What's even project planning? Why even have meetings if it's all down to "is it done yet"? Who works at these coding sweatshops that are so afraid of AI? If they get fired and find a better place to work, that's a win.
Writing code is just one part of the overall list of things that an engineer does. In fact, beyond the early versions of a product/project, more time is spent elsewhere in subsequent versions. And with code it is more reading than writing new code. Other things on which time is spent are:
1. What needs to be built? What is feasible?
2. Investigations (bug reports, production issues, performance issues, etc)
3. Quality control (code reviews, writing tests)
I am only listing things which require considerable thinking and multi-domain skills. So LLMs (as of now) really only help with one part of the overall things to do.
Also, the rate of code generation with AI >> the rate of effective code review. So the bottlenecks will still be humans.
PS: About quality control: I think AI writing code and generating the test code is not desirable. If the underlying LLM has issues even the test code will have issues. Generating boilerplate is one thing but the key things about the tests (i.e. inputs/scenario & things to check) needs much better curation. This needs human intervention.
Or how about this: we hold AI companies accountable for treating the internet as free real estate and attempting to put FOSS developer brains into a jar with a ChatGPT label. These LLMs should not be afforded the same rights as humans and should instead be treated as derivative works of their training data.
I've been thinking about this.. most programmers use frameworks/libraries, e.g. Spring/Hibernate in Java or React in JavaScript. Is there a way to train LLM to "specialize" in our frameworks/libraries of choice? I assume it would result in faster/smaller/more accurate result?
Things like Falcon 40B are trainable with something like a LoRA technique but the coding ability is weak. In the near future we will have better open source models. But it is possible to do for certain narrow domains.
Normally, with the ChatGPT API, you just feed API information or examples into the prompt. One version of GPT-4 has 32k context, the other has 8k, and 3.5 has 16k now. So you can give it a lot of useful information and make it work quite a lot better for some specific task. When you pick something like React or Spring in general, depending on what you mean, that might be a huge amount of info to keep them current on. But if you narrow it down to a few modules, then you can give them the latest API info etc.
Another option is now to feed ChatGPT a list of functions it can call with the arguments. It generally won't screw the actual function call part up, even with 3.5.
ChatGPT Plugins you can give an OpenAPI spec.
Then you implement the functions/API you give it. So they could be a wrapper for an existing library.
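As a rough sketch of that flow, here's what it might look like with the mid-2023 (pre-1.0) openai Python package and its function-calling interface; the search_orders function and its schema are made-up examples of wrapping an existing library:
```
import json
import openai  # pre-1.0 interface, circa mid-2023

# A thin wrapper around some existing library call; name and schema are illustrative.
def search_orders(customer_id, status="open"):
    ...  # call into your real module here

functions = [{
    "name": "search_orders",
    "description": "Look up a customer's orders by status",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "status": {"type": "string", "enum": ["open", "shipped", "cancelled"]},
        },
        "required": ["customer_id"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Show me open orders for customer 42"}],
    functions=functions,
)
msg = response["choices"][0]["message"]
if msg.get("function_call"):
    args = json.loads(msg["function_call"]["arguments"])
    # The model picks the function and fills in the arguments; your code runs it.
    print(search_orders(**args))
```
The point is that the model only ever emits structured calls; the actual library code stays yours.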
A few years ago, before the advent of Copilot (and ChatGPT), I was telling my students their future as programmers was bleak, because most coding is either interfacing stuff or creating an often-incorrect heuristic to solve a single instance of a well-known class of problem -- and all those activities could well soon be automated thanks to datasets like StackOverflow.
I now consider that ChatGPT for programming is a confirmation of my intuition back then.
Now, I see this article as too shallow, again: it defines short-term strategies for working with ChatGPT without considering a long-term view of what it means for programming overall.
And I'm sure I'm not alone in finding the impact of ChatGPT (and future generative AI) on programming as obvious...
I hate these articles. "Take a deep breath and refocus". "Find a way of working that works for you." "Stick with that you know." "Understand risks." It's so meta it's useless.
With every new language advance or compiler/transpiler etc., the coder's life gets easier - but an engineer's life is broader; it's about solving problems that involve tech, design, users, and business. These problems will only get more complex.
I think the coder's role will become a very niche market, highly expert/specialist.
The engineer's role will grow, very much needing AI to help out, especially with tasks around discovery, mapping/relating, and projecting/simulating.
I recently tried to write a simple classifier using PyTorch with ChatGPT. I found out that I could not use the recent version of PyTorch, only the one that was in use before ChatGPT's knowledge cutoff. That also meant I was limited to Python 3.9, because most September 2021 libraries had no readily available builds for 3.10+.
And this is quite a particular example. Software evolves quickly, ML models are expensive to train, and the gap will mostly be there anyway.
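For context, the kind of classifier being asked for is only a dozen lines; a minimal sketch like the one below (with random tensors standing in for a real dataset) runs the same on cutoff-era PyTorch releases as on current ones, which is why the version pinning is the annoying part rather than the code itself:
```
import torch
from torch import nn

# Toy data standing in for a real dataset: 4 features, 3 classes.
X = torch.randn(256, 4)
y = torch.randint(0, 3, (256,))

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```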
This article touches on LLMs mostly for code generation; I, however, would be more interested in visuals.
What are the good resources to learn about image editing AI tools, prompts and techniques?
My understanding is pretty limited, and correct me if I'm wrong, but like one would be using Stable Diffusion or Midjourney, and for a "professional" tool - Photoshop with official AI plug-ins?
I use ai mostly for creating code examples for specific things I haven't written before and for learning new programming languages.
During day to day I haven't used copilot or tabnine yet but I have seen that there exists some plugin I could integrate into neovim which I will definitely try.
I do not need to "survive" the ChatGPT world. It actually helps me. Also, I am not sure what "coder" means exactly. Personally, I design and implement software (and sometimes other types of) products, and "coding" as in writing the actual code is the least of my worries.
I find GPT-4 is great at writing simple ffmpeg commands, but the second I had a bug in a complex statement it just ended up in a recursive loop of failing to fix it.
Which does waste time: I am unsure whether it's close to fixing something or simply can't, and I find that hard to figure out.
It can indeed sometimes get stuck in a rut.. particularly with regex! I find that in such cases, if I can't take over and fix the issue then it sometimes helps to start a new chat (not sure if there is a clear context command) and approach it from a different angle.
I’ve asked it to write code as a black box: I sometimes purely run code it generates and see if the output matches. If it matches enough tests and special cases, it’s done, I don’t even waste time to understand it. The main thing is the tests.
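A minimal sketch of that black-box style, assuming the GPT output was pasted into a hypothetical generated.py module exposing a slugify() function; the tests encode the behaviour you want and nothing else:
```
# test_generated.py -- run with pytest; "generated" is the hypothetical module
# you pasted the GPT output into, exposing a slugify() function.
from generated import slugify

def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace_and_case():
    assert slugify("  Already   Spaced  ") == "already-spaced"

def test_edge_cases():
    assert slugify("") == ""
    assert slugify("---") == ""
```
If the suite passes on enough cases, including the nasty ones, the implementation is accepted as-is.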
>> One of the most integral programming skills continues to be the domain of human coders: problem solving. Analyzing a problem and finding an elegant solution for it is still a highly regarded coding expertise.
Yeah right. I'm on windows 11 now, with English (Europe) as the system language and a double English and Greek layout. Windows continues to shit itself and randomly add two new languages and two new keyboard layouts to my machine for reasons unfathomable: English (US) and English (United Kingdom).
To clarify, this has been happening since Windows 10. You can find posts about it on the internet, like this one from seven years ago:
Please help, I'm desperate, this is my third computer with Windows 10 and they all do the same thing.
If "analyzing a problem and finding an elegant solution for it" counted for anything, it would be very difficult for one of the largest software companies in the world to have a long-standing bug carried over two versions of its flagship operating system.
Nah, the truth is that nobody gives a shit about "elegant solutions" or problem-solving ability. Not in the software industry. And that's why LLMs _will_ eventually take over, even though they can't code themselves out of a paper bag. They can spew out code faster than you can type "public void FixKeyboardBug()". Why the fuck would anyone care if they can't actually fix any bugs, or if they create new ones at the same high rate they generate code?
It's going to suck so much having to use software ten years from now. You think it sucks now, but oohoho, just you wait.
That isn't the slam dunk you think it is. Just because they won't doesn't mean they can't. MS have decided it isn't important enough to fix.
Fwiw, I view MS as a terrible (software) product company. All of their products fall short of delivering, in my opinion. I think of them as the "80 percenters": their products do 80% of what any user reasonably expects from the UX, or 80% of what their competition offers.
Teams: 80% of Slack.
Azure: 80% of AWS.
Etc.
Have you discovered a bug within their product(s)? Unless you have an account manager on speed dial (and thus are paying the kind of fees that get you an account manager), good luck getting any kind of response that isn't some cut/paste job by a community "MVP".
>> Just because they won't doesn't mean they can't.
In other words it means just because they can pay programmers to write good software, doesn't mean they will, and just because they don't have to cut down on costs by using some hapless LLM code generator instead of good programmers, doesn't mean they won't. They will.
That's the point. If you have industry leaders that suck so terribly at making good software, because they have no incentives to do so, software is going to suck even more when they realise they have no incentives not to make it suck even more, and it's easy to do (by LLM).
It works for MS because of their high market share, the fact that they sell to consumers and not businesses, and the fact that this is unfortunately somewhat of a niche use case.
Bugs definitely matter for smaller companies that cater directly to businesses, for example. "Our workflow is broken" can cost you a very high-paying customer.
90% of the job isn't writing code. It's soft skills, coordination, communicating to understand and refine stakeholder requirements, planning, research, etc.
It doesn't seem so different from what higher level languages becoming the norm has done year by year. You still benefit from knowing what's going on under the hood, but it's not strictly necessary, and lots of newer folks in software careers get by without ever really looking behind the curtain. My web dev colleagues barely touch a debugger.
I'm just in a mood to shitpost. Don't take it too seriously.
```
Things that I have heard of, but don't know (imagine how many things I haven't even heard of):
- Li-Chao Segment Tree
- Segment Tree Beats
- RMQ in O(n)/O(1)
- Any self-balancing tree except treap
- Link-cut tree
- Wavelet tree
- Mergesort tree
- Binomial heap
- Fibonacci heap
- Leftist heap
- Dominator tree
- 3-connected components in O(n)
- k-th shortest path
- Matching in general graph
- Weighted matching in general graph
- Preflow-push
- MCMF in O(poly(V,E))
- Minimum arborescence (directed MST) in O(ElogV)
- Suffix tree
- Online convex hull in 2D
- Convex hull in 3D
- Halfplane intersection
- Voronoi diagram / Delaunay triangulation
- Operation on formal power series (exp, log, sqrt, ...) (I know the general idea of Newton method)
- How to actually use generating functions to solve problems
- Lagrange Inversion formula
- That derivative magic by Elegia
- That new subset convolution derivative magic by Elegia
- How Elegia's mind works
- Sweepline Mo
- Matroid intersection
If you know at least 3 of these things and you are not red — you are doing it wrong. Stop learning useless algorithms, go and solve some problems, learn how to use binary search.
```
For 2023, I would append the list with:
- ChatGPT
- Github Copilot
- GPT-4
- Whatever the "generative AI" is
If you are a beginner, these so-called "generative AI" tools are much like those cryptic competitive-programming algorithms mentioned by Um_nik: you may never really use them in your life, but learning the basics will definitely help you improve gradually.
If people get left behind, it mightn't be such a bad thing: the cheaper software will lower the cost of living, making UBI more feasible.
The glut of unemployed people will hopefully popularise UBI rather than trying to jump back on the treadmill of GDP maximisation, exploiting each other and destroying our planet.
Interesting take, ignoring the fact there will be far less people to pay into your magical UBI pool. How do you reconcile a huge glut of unemployed people and the resources to hand out extensive welfare benefits?
But then who would pay the corporations, if most of the population is on UBI? If only a subset of the population makes money (<30%), would that be enough to feed the rest of the population and to help them live a life without poverty?
LLMs might be useful for churning out vaguely correct-looking code quickly, but they're just regurgitating the contents of their training corpus. There's no guarantee of correctness, and it's only a matter of time before someone dies because of an LLM-generated bug.
Human programmers aren't going anywhere. (You can't even call what LLMs do programming, because there's no intent or understanding behind it.)
Well the thing is though that if you focus on it for a week or two you can pick up useful skills. Just play around with ChatGPT for example for generating SQL for a particular table or follow a tutorial using llamaindex for a "chat with documents" thing. Try out a Stable Diffusion API or something using replicate.com
There are a ton of people looking for help with generative AI and you can be useful if you just play around with it for a few weeks, because a lot of them have no idea about the basics. If you are willing to be underpaid there is no need to be unemployed -- just spend a few weeks studying and then go on Upwork.
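For instance, generating SQL for a table really is just a matter of pasting the schema and asking the question; the table and query below are made up, but this is the shape of prompt that tends to work well:
```
prompt = """
Given this SQLite table:

CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    total_cents INTEGER,
    created_at TIMESTAMP
);

Write a SQL query returning each customer's total spend in 2023, highest first.
"""

# A correct answer would look roughly like the query below; whatever ChatGPT
# returns, run it against a throwaway copy of the database before trusting it.
expected_shape = """
SELECT customer_id, SUM(total_cents) AS total_spend
FROM orders
WHERE created_at >= '2023-01-01' AND created_at < '2024-01-01'
GROUP BY customer_id
ORDER BY total_spend DESC;
"""
```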
I feel the biggest tip for anyone (not just programmers) is “talk to your congressman”. AI has enough of a bad rep, whether from fiction or not, that I think getting bans/fines/taxes/other measures in place will be far more useful for the little man than whatever this article recommends.
the next 5+ years will see software proliferate rapidly as a result of things like chatgpt and llm augmented documentation pages. Building things end to end just got easier and will become more so soon.
Those links are great counterarguments. But we are still in the early stages, which I don't think is indicative of the end state of these tools (though that can be debated, of course).
Also, upon further inspection, it appears as though ai-explain was not added by the core team or MDN's steering team. It looks like someone just took it upon themselves to add it, without doing due diligence on the feature; if true, it's not surprising it doesn't work well.
To me the best part is that you don't need to read documentation to understand something that you won't use anymore and will forget 2 days later. Like: regex, how to plot a histogram in fucking matplotlib, seaborn, etc.
Are you implying that I don't know how to read code and (instruct GPT-4 to) write tests? I know what GPT-4 writes, it just does it instantly, whereas I do not.
The notion that I am generating and committing large blocks of untested arbitrary code makes me feel like you don't know how development is done. You're too far from reality for me to have confidence that you're at the professional level.