I think this article hits on some truths and gets a handful of things wrong. First, they are correct that we are in a dynamic profession that requires constant learning and expanding. No doubt the people who choose to stay in software are likely to be curious, lifelong learners, for whom this is a benefit of the profession rather than a drawback. That said, one thing I noticed from teaching computer science while a graduate student is that the poor students think about languages and libraries as the key skills, while the better students think about data structures, algorithms, and design principles. These slightly more meta-level concepts are far more stable and timeless, while the underlying implementations of them are constantly evolving. If you think of each new library as "having to learn a new skill," I can imagine burnout and overload are more likely, but if you figure that once you know 3D graphics you can pretty easily pick up any new graphics engine, then it might even seem fun to explore different takes on the same old concepts. It's like hearing a new band in your favorite genre.
As for attributing the core issue here to burnout, I would strongly disagree. When teaching undergrads I noticed immediately that a good portion of each student cohort was basically there because they were interested in making money in the future rather than exploring the ideas of computer science. They were no doubt going to get frustrated or bored and move into management or some other profession rather than continue to grow as engineers. This is totally fine, and they are probably richer for having learned what they did, but I don't know why we can't just see this and appreciate it for what it is rather than portraying it as the drama of burnout.
It's hard to blame students when they look at job postings and all that they see advertised is positions for programmers using languages X, Y and Z, or when they see tweets and blog posts by hotshot programmers about frameworks A and B.
The entire industry focuses way too much on 'experience with tool X' as a proxy for 'technical skill Y'. It's a bit like asking a carpenter how many years of experience they have with DeWalt cordless nail guns rather than asking about their house framing skills.
Worse, industry is routinely bashing on CS degrees because they don't turn people into framework X-ready candidates. It's getting a little tiring just how little credit is given to the idea of "maybe these tools can be learned in a reasonable amount of time by people with a degree showing they can pick things up rather quickly".
Yes, but I interview for a role where we don't have set languages: we're a small optimisation team working across systems when they reach capacity bottlenecks, and we do JS, C#, Java, C++, and PL/SQL fixes completely transparently. We've slowly learned that languages really don't matter all that much for performance tuning; what matters is the programmer mistakes, which are identical in all of them (hash map loops, abusive recursion, over-abstraction leading to extreme indirection, serialization laziness producing giant message transfers, database transaction mismanagement, zero profiling skills, and the main one: please stop frigging saying it's the garbage collector and look at your crap for real lol).
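To make one of those concrete, the "hash map loops" mistake usually looks something like this (a minimal Java sketch with made-up names; the same pattern shows up in every language we touch):

```java
import java.util.Map;

class OrderLookup {
    // Anti-pattern: scanning every entry of a map to find one key.
    // This turns an O(1) lookup into an O(n) scan, and it's usually
    // buried inside another loop, making the whole thing O(n^2).
    static Order findSlow(Map<String, Order> ordersById, String id) {
        for (Map.Entry<String, Order> e : ordersById.entrySet()) {
            if (e.getKey().equals(id)) {
                return e.getValue();
            }
        }
        return null;
    }

    // What the map is actually for: a direct O(1) lookup.
    static Order findFast(Map<String, Order> ordersById, String id) {
        return ordersById.get(id);
    }
}

record Order(String id, double amount) {}
```

Nothing language-specific about it, which is exactly the point.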
When we recruit we ask, "If we tell you no more Spring, and we'll work in any language as long as there's a problem to solve, how do you feel?" Most either say they'd feel horribly sad, or don't even comprehend how it's possible. Some are indifferent because they need the job. Still looking for the person who says "anything can be learned in a reasonable amount of time", so it's not that obvious I suppose :(
I think both sides of the argument have reasonable points. There are advantages to having deep experience with a tool, but I have also solved bugs in languages I didn’t know.
What I think is missing is that there are serious diminishing returns from more experience in a specific language or framework. 10 years of experience split 3 in Java and 7 in C# is very similar to 7 in Java and 3 in C#.
But the two languages are very similar.
I get your point though, it's not clear how much you gain by doing 20 years of PHP over 10. I think this isn't unique to programming - is a lawyer with 20 years of experience necessarily better than one with 10? An accountant? A nurse? And so on and so on. Looking at myself, I do think I am much better in my 10th year than I was in my 5th - not only with specific technical tools but in how I approach things, how I relate to others, etc. So some of the gains are in soft skills, which are super important to being effective.
However will I keep improving at this pace? Probably not. Who says I have to keep improving exponentially?
Actually yes I would expect them to. They should keep up to date with changes - laws change all the time. New equipment is created for medicine all the time. These might be pretty fundamental and are not soft skills.
Maybe the base "lawyering" or "nursing" does change but they'll have seen a lot more examples and be able to pattern match quicker.
Also, I'd rather a 20-year nurse was the Charge Nurse than a 5- or 10-year one; I'd expect they'd know more about how to lead others (although time served isn't always an indicator of a good leader).
> Maybe the base "lawyering" or "nursing" does change but they'll have seen a lot more examples and be able to pattern match quicker.
They've also been worn down by 20 years in the field. Motivation is a very, very crucial thing for success, and motivation often declines due to burnout. Yes, said lawyer has seen more cases like mine than his younger colleague, but he might not be as motivated to fight for me in court as he was 20 years ago, and in any case there are constant tiny changes to laws that make the 20 years of experience worth less.
Btw I think 55 year old lawyers/accountants who look for a job face pretty much the same discrimination as software developers - they are expected to be director level or there's not that many jobs for them. This thing is definitely not unique to our profession...ageism is a thing everywhere.
It's what I do at work for fun, in between my "real" development / support tasks. I profile and optimize our internal application, its loading time, user interactions, etc.
Sometimes, when it's a longer-term effort, I make a proof of concept and ask for permission later, so for a while optimization becomes my actual job. Just finished a couple of months of performance tuning, actually, which I did in parallel with other tasks.
TBH there are plenty of easy, low-hanging fruits, since the team who wrote it wouldn't recognize performant code if it smacked them in the face. Or they concentrated on very minor optimizations in rarely executed parts of the code and tended to bikeshed them for hours instead of actually checking what the real problems were.
I actually almost never use an actual profiler when starting; I just run under the debugger and pause the application at random moments when it appears to be stuck.
In almost all cases I find it's executing some unnecessary batch of queries, or reading from a table using an unindexed column, or redrawing parts of the screen which shouldn't be updated, or throwing and catching exceptions that were used instead of if statements, or blocked when it could run in parallel to some other computation, or doing some other unnecessary O(n^2) loops instead of using a hash table. A combination of such improvements will sometimes gain up to 20%-40% speed or even more.
When I'm done with these simple cases, I start using a profiler for the extra optimizations that squeeze out those last few percentage points.
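As an illustration of the "unnecessary batch of queries" case, this is roughly the N+1 shape I keep finding (a hedged sketch with hypothetical repository names, not any specific codebase):

```java
import java.util.*;

// Hypothetical repository interface, just to show the shape of the fix.
interface CustomerRepo {
    Customer findById(long id);                           // one round trip per call
    Map<Long, Customer> findByIds(Collection<Long> ids);  // one round trip total
}

class ReportBuilder {
    // Anti-pattern: one database round trip per purchase (N+1 queries).
    static List<String> slow(CustomerRepo repo, List<Purchase> purchases) {
        List<String> lines = new ArrayList<>();
        for (Purchase p : purchases) {
            Customer c = repo.findById(p.customerId()); // fires a query every iteration
            lines.add(c.name() + ": " + p.amount());
        }
        return lines;
    }

    // Fix: fetch everything needed in one batched query, then join in memory.
    static List<String> fast(CustomerRepo repo, List<Purchase> purchases) {
        List<Long> ids = purchases.stream().map(Purchase::customerId).toList();
        Map<Long, Customer> customers = repo.findByIds(ids);
        List<String> lines = new ArrayList<>();
        for (Purchase p : purchases) {
            lines.add(customers.get(p.customerId()).name() + ": " + p.amount());
        }
        return lines;
    }
}

record Purchase(long customerId, double amount) {}
record Customer(long id, String name) {}
```

The debugger-pause trick tends to land you right in the middle of loops like the first one.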
Agreed about the dream job; that's pretty much what I explicitly answer every time I am asked about my preferences in terms of stack/tech by recruiters and engineers I interview with. I lucked into such a situation at least once, and I can confirm that I still enjoy it.
I can definitely see why some people might hate that situation, because it feels like nothing is working and you don't know anything at first. But if you aren't demoralized by that, and instead find joy in learning/figuring out all those new things on a regular basis, it all feels extremely rewarding. Like, why would I dislike being paid to essentially learn a bunch of new stuff regularly and solve problems with actual real-world impact using that new knowledge?
My gosh, this was my first real developer job back in the day, just like this. We were a so-called "hit squad" brought in to fix longstanding issues (this was a Fortune 100 company, where they had more internal software than developers to manage it, some of which was actually mission-critical for the departments it served).
I learned so much in that job, and probably would have kept it had they not relocated it to Santa Clara.
If you can work in an environment like this for a few years you definitely should, that's my advice to everyone. You'll learn so many things that will be useful over the rest of your career.
If you are doing optimizations I would say you need at least a few years of experience in the language. You often see people say they switched from language A to language B due to performance, then some guy optimizes the same lib/functions in language A and it gets 10x faster than the optimized version in language B... and people say: yeah, but that guy has several years of experience. Once you know the layout of the minefield, almost all languages are usable and performant.
Depends on the level of maturity of the system. I'd agree you need to be a C genius to accelerate the Linux kernel a bit, maybe.
But most software isn't the Linux kernel; it's small systems solving a class of specific problems as fast as possible with rotating teams, so sometimes it's just a matter of profiling this stupid hashcode function, or asking why the mouse is freezing during high trading volume on the C# GUI :D
You wouldn't believe how common optimization problems are that have nothing to do with the GC, a belief I have to disprove very often (look, 300 ms of GC a day; but hey, you're checking an unindexed table on each calculation for a trivial, non-essential decision, what if we fix that instead).
Agreed. I once "optimised" a piece of in-house software that took minutes to process a single report, down to seconds, just by adding an index to the database table (Microsoft Access, mind you). Not all developers know what they are doing [1].
Eh, there are a few language-specific gotchas, but if you're good at optimization it's really nothing too tricky to figure out within a few weeks. The core problems are the same, depending on the class of language: far from the metal, allocations and GC are almost always what kill you. Close to it, cache misses.
Beyond that, it's just making sure you do less stuff, and being really aggressive with profiling (what looks like it takes the most time in the code almost certainly isn't what's really taking the most time).
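For the "allocations are what kill you" point, here's a minimal JVM-flavoured sketch (my own toy example, not from any particular codebase) of how garbage sneaks into a hot loop through autoboxing:

```java
class Sums {
    // Allocation-heavy: 'total' is a boxed Long, so each addition unboxes,
    // adds, and boxes a fresh Long object. A hot loop like this produces
    // garbage proportional to the number of iterations.
    static long boxedSum(int n) {
        Long total = 0L;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    // Allocation-free: the same arithmetic on a primitive produces no garbage.
    static long primitiveSum(int n) {
        long total = 0L;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }
}
```

And the second half of the advice still applies: profile before and after, because the allocation you *think* is hot frequently isn't.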
Anything can be learned in a reasonable amount of time, but I already have a job. Good luck in finding your candidate though, sounds like a great environment to work in :)
I hear you. I'd also ask what seniority you're hiring for. My relatively young friend group routinely scoffs at Java and C# shops for making a big deal out of experience in one language, while the two languages are slowly converging to be nearly identical in most places that matter. But I definitely see this attitude more around the 10+ YoE crowd.
>> "anything can be learned in a reasonable amount of time"
I routinely say something to this effect in the jobs for which I am interviewing. I tell them I am not an expert in anything, but I feel I can learn anything given enough time. The caveat is that in some businesses time is the limiting factor, and the business doesn't have the time for me to learn.
But I'd love to work for your company or one like it.
Anything can be learned, but not necessarily in a reasonable amount of time.
Some things are easier to pick up, some are just impossible.
For example, I was doing PHP for 11 years, then jumped to Go 9 years ago (wow, it has already been 9 years). Go was good for me because it lifted a lot of the mental burden of the OOP craze and the wrong concepts applied back then. On the downside, it didn't have an ecosystem. But it was easy to pick up.
k8s surfaced and I tried to learn about it, but it's just too complicated. After I had spent 2 weeks of intensive mind bashing at it, I still wasn't able to manually set up a 3 node initial control cluster. I gave up on it, never looked back.
Spring Boot, time and again: medium difficulty picking up, mostly because of the exclusive Java community, if something like that even exists. Very arrogant bunch that speaks in implicit ways and assumes you just magically know stuff.
Android is similar, always in flux; despite the many resources offered by Google you don't know what to pick. New Jetpack Compose or the old, ancient-seeming XML layouts, configuration, and navigation setup. It's a hot mess.
I tried to use the AppAuth OIDC client, had a few questions, asked them, and the response was "We assume some prior Java and Android knowledge". Well, ok, thanks for nothing.
Angular: aside from RxJS, which was a royal pain in the ass to learn, it's straightforward, albeit a bit overcomplicated, and TypeScript gets in the way more than it should.
Jumped to Vue. Oh dear god, it's so nice and easy. None of the rxjs stupidity, no forced typescript and a large ecosystem. I can be super productive with it and create SPA, PWA, SSR, Electron apps, browser plugins, even hybrid mobile apps with it.
Rust, my arch enemy. I tried to get into it; I was drawn by the hype and the promise of more performance. But knowing Go, which is just good enough for the backend, and where things make sense, Rust doesn't make sense for me. For someone coming from C++ it must seem like the 2nd coming of Jesus; for me - the Go guy - it's an abomination. Yeah, I get the basics, but imports don't make any sense, the structure doesn't make any sense, errors don't make any sense. I feel like a freaking transporter always worrying about crates and boxes and who they belong to. Not fun at all. Handle with care, careful, breakable. UGH.
Yeah anything can be picked up but it depends on me being interested in what's to be picked up, the willingness of others to teach and the justification of why I need this added complexity.
I write monoliths and I'm happy with that. Now all job postings require microservices and EDA. I'm unhappy with that because I don't see the need for it in most cases. Now I have to learn expensive cloud on my own, which I have ZERO motivation for. I have to make everything extra complicated, when it's hard enough as it is getting a project done start to finish, client and server, the monolith way.
If I had to go microservices I'd never finish anything, and even if I did, by the time people actually started using my service, 3 years later, I'd be bankrupt.
So where am I supposed to learn about this unless employed by those people who want Microservice EDA? But they won't employ me because I have no experience with that.
See, I feel you have missed a lot of points. It's like you're pragmatic as hell and are creating stuff, but judging by the choices you seem to have made, you've got quite a surface-level understanding of some very well-thought-out stuff that has good reasoning behind it, because you want to make stuff with ease regardless of how maintainable it is or how much tooling can help you. I wouldn't chalk everything up to hype.
I think some skills are more valuable than others because they don't change that much.
Java is valuable. Go is valuable. PHP is valuable. C is amazing.
JS frameworks... no thanks. I'm thinking of moving from full-stack development to backend because of this... it's a bit more sustainable.
The first issue I tackled at one of the jobs I had was slow JSP page performance.
The code had some sizable chunks of JavaScript in Strings and looked at each authorization the user had. It's been a while, but in dynamically generating that JavaScript (ICK!) it generated several megabytes of garbage Strings by doing `script = script + "little bit more";` way too many times. This was done for each page load. 8am and hundreds of people click the page to get their start-of-day stuff... and... the server has problems.
That particular issue was fixed by changing it all to a StringBuilder.
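For anyone who hasn't hit this one, the shape of the fix was roughly this (a simplified sketch, not the original code):

```java
import java.util.List;

class ScriptBuilder {
    // Before: repeated String concatenation. Each '+' copies everything
    // accumulated so far, so building a big script this way allocates
    // O(n^2) characters of garbage.
    static String slow(List<String> snippets) {
        String script = "";
        for (String s : snippets) {
            script = script + s; // new String object every iteration
        }
        return script;
    }

    // After: one growing buffer, one final String at the end.
    static String fast(List<String> snippets) {
        StringBuilder sb = new StringBuilder();
        for (String s : snippets) {
            sb.append(s);
        }
        return sb.toString();
    }
}
```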
I've yet to see any sloppy garbage creation that is on the same order of magnitude as that JSP was.
> industry is routinely bashing on CS degrees because they don't turn people into framework X-ready candidates.
To me that's noise and not much else.
Something I don't think a lot of folks realize is that there are two parallel industries (and pipelines to jobs in the industry). They almost never overlap.
One in which recruiting is largely done by non-technical folks who match keywords of frameworks, and where rote learning of whatever library is seen as the objective (bootcamps come to mind).
The other, where CS fundamentals are seen as the priority, and where hiring focuses on finding people who possess the skill of acquiring new knowledge.
You can guess in which one Google and FAANG (or whatever the acronym is now) and Stanford/MIT exist.
I work at a non-Google FAANG - from what I've seen in my org, "CS fundamentals" (which I assume is some proxy statement of sorts for leetcode-style interviews with emphasis on data structure/algo questions and knowledge) isn't as important for the work or to get hired with us. Our view is that hard technical skills can be taught/picked up on the job, but behavioral aspects are not so easy to develop.
We routinely have & hire interns from top programs and many lesser known ones, or even non-CS degree holders.
> can guess in which one Google and FAANG or whatever the acronym is now
Small aside. That describes Google and Facebook and the others of several years ago. Speaking anecdotally, the impressive people at those firms are fewer and farther between.
TBH CS degrees don't really produce people who can code in general either. As far as I can tell they take people who already understand the basics and teach them to formally communicate.
They typically also require 1000-2000 hours (over the course of the entire degree) of instruction time in things relating to software development. Part of that is homework and the practice of the craft of software development.
One of the key things that self-taught developers miss is the instruction and review of their code. You write code differently if it's going to be graded than if it's a throwaway script.
In theory, job code that is going to last for more than a single run should be written closer to the rigor of graded code. People who are self-taught have never had their code reviewed during their instructional period and tend to (not always, but tend to) write code that wouldn't stand up as well to review.
There are plenty of graduates who don't write code that stands up to review either... but there is a "you have spent a few hundred hours writing code and had it graded, and changed how you write code to get a better grade."
For a person who comes into a CS degree program without coding experience and does their homework, I tend to believe they will write code that stands up better to review, and have fewer bad habits, than someone who followed a self-taught progression.
As a self-taught programmer, I tend to be pretty self-conscious about my code quality for precisely this reason. As such, I’ve never had issues on this front.
Computer Science degrees aren't intended to produce people who can code. The CS field is literally the science of computation, a branch of applied mathematics. Complaining about lack of coding skills in CS graduates is sort of like complaining that Chemistry graduates aren't good at washing test tubes.
> It's a bit like asking a carpenter how many years of experience they have with DeWalt cordless nail guns rather than ask about their house framing skills.
This kind of logic only works for tech organizations that already have enough in-house domain expertise to onboard new programmers. The other day somebody asked me how to find a programmer to implement something for them. From a programming standpoint there was very little to do, but it involved many obscure technologies that you couldn't pick up in a day (and no, you can't pick different technologies). For a person who's already done something similar it'd be a quick and easy job and shouldn't cost too much. With a generic programmer it'd take much longer, cost much more, and you couldn't be sure they'd actually deliver.
Finding someone "experienced with specific tool set" is also not a trivial problem.
It is even harder if you don't even know how to verify that person really is experienced with specific tool set.
So it goes like "pick your poison", hire generic dev and account for learning curve or spend money on recruiting fees/time for finding an expert. Where I would not be so quick to say "expert shouldn't cost too much" - because I can imagine expert taking less time but costing orders of magnitude more than generic dev.
>The entire industry focuses way too much on 'experience with tool X' as a proxy for 'technical skill Y'
Strongly emphasizing this. This is HIGHLY applicable to the analytics environment. As a business analyst who specialized mostly in ad-hoc development, because it was the most value-add area at the companies I worked with, I had a lot of trouble finding new work because I didn't use Tableau, or Power BI, or Looker, etc. I was some sort of fool for doing everything in SQL, Excel, and Python.
IMO the tools are great, and you need a much lower-level understanding of analytical concepts to get value from them. But for some reason people kept getting the impression I would somehow be less effective with them because I don't use them. And I had trouble correcting them with the limited bandwidth that exists to communicate with a single applicant in the hiring process. If I tried to get right to the point, I felt I came across as arrogant.
The carpentry analogy is very similar to how I described it: "I am currently using a ruler and screwdriver, and these tools provide lasers and power drills."
It's interesting, because at least in terms of working professionals, of the most productive people I've worked with, the ones who focus on "meta-level" concepts are usually the ones who overthink every detail, get very little work done, and ultimately are the ones who burn out.
They tend to bikeshed details, take way too long trying to create sophisticated abstractions that never quite achieve the silver bullet they originally thought they would, and spend too much time dwelling on irrelevant details, which ultimately leads nowhere and results in a kind of paralysis that can be very hard to break out of.
The ones who master a specific language or master a specific library/technology that focuses on doing a few things very well and in very concrete terms are able to deliver the most business value. Furthermore they absolutely have the ability to take their mastery of that and map it to new languages or new libraries. I mean I'm not talking about going from Java to Haskell or Haskell to Idris, but rather people who master C++ can fairly easily pick up Java or Python or TypeScript. People who have mastered Unity can easily pick up Unreal Engine. People who have mastered web development can easily pick up mobile development.
The idea that people who have a solid mastery of a technology and a programming language are somehow just stuck and unable to take those skills and apply them to other areas is, I think, overstated and just untrue. But those who treat software engineering as highly theoretical, who focus on abstractions and design principles and get caught up in these high-level details, tend to not get much done, and when they realize that software is not as clean and elegant as they would like it to be, they get burned out and give up.
I think going over any substantial codebase on GitHub for products that are widely used and deliver solid business value, where most code is not at all reflective of the ideals often espoused in blog posts, validates my point of view.
In short, people who treat software as just a tool to accomplish a concrete task are more productive than those who write software for the sake of writing software. They don't write the cleanest code, or the most elegant data structures and algorithms, but they produce the greatest amount of tangible business value.
If your comment does anything for me, it's to show how terribly few words we have to discuss these things.
> "meta-level" concepts
I'd say having a strong grasp of what you can achieve with just files and folders, or understanding how SQL solves an entire problem space, are meta-level concepts. It's just that we take them for granted.
> business value
Is apparently something different than 'value', but still includes every software ever that was valuable to a business?
> high level details
...?
> software engineering
Building a constraint solver for a compiler, or ensuring a JS animation centers a div?
> highly theoretical and focus on abstractions, design principles
I'd recognize all these things. But out of context 'in the general case' they become meaningless.
---
I understand the picture you are trying to paint, but i don't think it tells anything beyond "I've noticed people make things overly complex". I agree.
However, keep in mind that the 'gets things done and provides value' software you've seen is the software that survived; it might have been set up by a very experienced person (whose failures we're not seeing); nobody might recognize it as being not-simple (e.g. I've seen high-value business software partially recreate 'regex' - worked great, a straightforward and easy-to-read function, just ~50 lines or so, but it could have been a single function call); and how the requirements are presented is hugely important.
I don't think anyone was writing about those who master a specific language.
There are a lot of people who learn just the surface, without going deep into a tool, and think they know enough.
It seems to me that someone who really went deep into learning a language would pick up most of the theoretical stuff along the way, because there is no way to really master C++ or really master Java without learning about data structures and all kinds of "meta-level" concepts.
Maybe the difference is mostly in the approach to learning: more practical vs. more theoretical.
I understand the concept here but there is also a level of right tool for the job.
Some guys see a screw and reach for their trusty hammer. Some guys know to grab a screwdriver.
I had a project the last two weeks where the code was just going to fail about as often as it was going to succeed. I had to write a resource manager and an Erlang style supervisor and use an embedded key value store.
A better dev may have intuited what took me basically a midstream rewrite to figure out, a worse developer may still be grinding on the problem.
I think my solve is "robust enough" but there was no real way to power through that. You either found the right abstractions or you didn't.
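For anyone unfamiliar with the pattern: an Erlang-style supervisor here just means something that runs a task, catches its failures, and restarts it a bounded number of times instead of letting the failure propagate. A very stripped-down Java sketch of the idea (my own simplification, nothing like the real thing):

```java
import java.util.concurrent.ThreadLocalRandom;

// A tiny "one-for-one" supervisor: run a task, and if it throws,
// restart it up to a maximum number of times with a delay in between.
class Supervisor {
    private final int maxRestarts;
    private final long restartDelayMillis;

    Supervisor(int maxRestarts, long restartDelayMillis) {
        this.maxRestarts = maxRestarts;
        this.restartDelayMillis = restartDelayMillis;
    }

    void supervise(Runnable task) throws InterruptedException {
        int failures = 0;
        while (true) {
            try {
                task.run();
                return; // task finished normally
            } catch (RuntimeException e) {
                failures++;
                if (failures > maxRestarts) {
                    throw new IllegalStateException("giving up after " + failures + " failures", e);
                }
                System.err.println("task failed (" + e.getMessage() + "), restarting");
                Thread.sleep(restartDelayMillis);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Example: a task that fails about as often as it succeeds, as described above.
        new Supervisor(5, 100).supervise(() -> {
            if (ThreadLocalRandom.current().nextBoolean()) {
                throw new RuntimeException("simulated failure");
            }
            System.out.println("work completed");
        });
    }
}
```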
Not to be rude, but did you finish reading the article? The whole point was that high-aptitude learners give up. In fact I don't agree that re-learning the same tasks over and over with the zillionth framework iteration is a rewarding learning experience. It makes perfect sense to change careers instead.
And as music goes, you sound like the record companies that thought everyone should listen to disco for the next 50 years...
As I’ve gotten older it’s harder for me to learn an entirely new area (like say going from web dev to mobile or ML). But it’s actually easier to learn a new variation of something (like the latest JS framework) because it’s usually pretty similar to one of the things I already know. I guess this leads to increasing specialisation, but it also means studies that merely count “new skills” will be misleading if they don’t differentiate in which way the skills are new.
I'm the complete opposite. Hand me a new JS framework that does the same thing I've done a million times, but where I have to learn its opinionated abstraction set that's somehow better, and I just turn off. I simply do not care, at all. You need to simply explain to me the improvement you're proposing or it might as well be trash to me.
Now give me a new theoretical concept where I can expand my knowledge or integrate into my knowledge map and view of the world and I'm excited, there aren't enough hours in the day. Tell me about this all new concept I wasn't familiar with--I'll start thinking of ways I can use it, how I can leverage it, or how it may connect with other ideas and concepts I have.
Now give me a tight deadline which most business environments create and I agree with you, give me the boring stuff I can pump out, get my paycheck and go home to enjoy the rest of my day.
> No doubt the people who choose to stay in software are likely to be people who are curious, life-long learners
The article showed the opposite effect though. Curious, life-long learners stop working in software development because they have to constantly learn new skills and believe they can get more bang for their buck when they can invest in skills that don’t lose their value over time.
I once got excited about ExtJS, the way it created a desktop-like experience in the browser, and I said to myself, "I will learn this, all of it, I will become an expert. Tips and tricks, best practices, the works".
After six months of this, ExtJS 4 came out, which was essentially a totally new framework. Everything I learned was not only not applicable, it had to be actively unlearned.
The lesson here is: become good and proficient at something, but don't focus on becoming a ninja in one particular transient tech. There is value in becoming a Jedi of Unix built-in tools, or of more persistent technologies like Git, for example.
Also, this is a bigger problem in the JavaScript ecosystem, where the hype cycles are more intense than in, say, Python. I checked out my Flask project from seven years ago and it's ready to rock.
I get the thing about constant learning, but learning in this industry used to be cumulative. Now it's a hamster wheel. You are learning how to solve the same problems, in a different, presumably in a "new" way.
People seem to be spending more time coming up with catchy names for their projects than making sure this is all sustainable.
Yes, this is how I feel. I have no problem learning a new skill. I get discouraged when I learn a new skill and just when I start to get really comfortable and productive with it, it's suddenly "legacy" and some new thing is popular.
The only skills that have really stood the test of time for me are C, PHP, unix shell stuff, and SQL.
It's a mix of both. You need to have solid fundamentals and need to keep learning new ways to apply those fundamentals in the real world. There is absolutely effort involved in learning a new language, library, framework, platform no matter how good you otherwise are.
> I noticed from teaching computer science while a graduate student is that the poor (a) students think about languages and libraries as key skills, while the (b) better students think about the data structures, algorithms, and design principles.
The truth is that the programmers in group (b) think about both. Who's designing a lot of the new languages, libraries, and frameworks? Chances are it was someone from group (a). If you're in group (b), do you want to spend your whole career being forced by your bosses to constantly relearn and follow the latest vogue vision from group (a)? Of course not. So this might not apply to students, but everyone from group (b) will eventually get burned by fads enough times that they start caring about the politics of software. Namely, not wanting to depend on bloat that doesn't actually solve computer science and systems engineering problems. Group (b) might even create alternatives themselves. Go is a great example. The guys who built the Fifth Bell System watched the vision behind their techniques decline over the decades and said, enough is enough. So they made Go, and it was like a ray of sunshine when it came out.
I actually find the model in the article pretty convincing, despite agreeing with you that this is a profession for people who like to learn new things and that university/grad school should teach more generic, theoretical knowledge that depreciates more slowly.
However, these still don't invalidate the main point of the article: that a faster rate of depreciation means your max knowledge level, given your specific rate of learning, will be lower. I.e. your advantage over a less skilled, younger professional will be smaller.
And you may say that learning a new 3D library shouldn't be counted as learning a new skill, but that doesn't make the problem go away. If anything, it underlines it: if you have to start working with a new 3D library then you will have to spend time and effort on learning it (to become efficient at using it), whereas if you were able to keep using the old one, you could spend that time and effort on learning something that we would count as a new skill.
The article is also hitting on the fact that your skill premium as an engineer has a cap, and so does your willingness to burn the midnight oil on a project. This means that as time goes on, as an engineer you'll face the following:
- A younger engineer will have the same value to your employer as you do.
- A younger engineer will work harder than you are willing to.
These two items are inevitable given the current rate of change in the industry. While some engineers will find next-level, differentiated work to engage in, such as leading a core piece of infrastructure that defines the changing field... many will not. And if the rug gets pulled on that core piece of infrastructure, it's often the case that those engineers are not particularly more skilled than others on brand-new projects.
Well, that's not what the data is showing. Why are you trying to create a narrative that is not trying to explain what we observe? Smarter people leave the field earlier and the author offers a compelling explanation why.
> When teaching undergrads I noticed immediately that a good portion of each student cohort was basically there because they were interested in making money in the future rather than exploring the ideas of computer science.
In my school, those who wanted to make money went straight to management or finance. Computer science was for the passionate ones and probably not the right path to make money for the brightest students.
> the poor students think about languages and libraries as key skills
well so do the recruiters, they’ll be fine
in fact, the better students are the ones wasting their time unless they prefer to be in academia, like you
so what metric are you really gauging for?
the “poor students” are pivoting for money and the name of the university to boost their employment prospects, maybe this shows in their academic performance and ability to understand, I saw the same in undergrad
> they are correct that we are in a dynamic profession that requires constant learning and expanding
Not true. I have met many developers who haven't learned anything new for 15+ years and are still doing just fine developing software. A lot of Java developers come to mind. They have pretty much done the same thing their whole career and have no need or desire to learn anything new.
Once you understand how the industry churns and burns and doesn't really give much credit for capability as you age, it becomes disheartening to want to stay an IC. Most people see the writing on the wall for being an older IC, therefore they move into management or product or other roles.
A fascinating premise, and matches what I saw as an engineering director. The best just get bored and move on.
I think many teams are unaware of how much extra value is possible by retaining existing employees vs hiring new ones. Each year I'd try to make sure I was "making them an offer they couldn't refuse" with new interesting challenges, new tech, plenty of personal research time, as much pay increase as I could possibly give, etc. A lot of engineering managers think that it's no big deal to just hire new staff, but even going from an average turnover of two years to three years is a massive improvement.
It's not just the best ones. If you remove people from the grind for 1-2 days a week, give them a small budget and enough autonomy to do what they want, most people will fix shit that has bugged them for a long time.
The main problem is how micromanage-y current development processes are. Everything has to be a ticket/user story and has to be planned and approved by people who never even wrote a single line of code. Everything has to have some immediate business impact. I even see scrum teams measuring team utilization now; the target is 90% at the moment, and they wonder why productivity is down.
> If you remove people from the grind for 1-2 days a week
The modern office seems hellbent on killing every last bit of slack in their workers, then wondering why they leave or get burned out.
I realized the other day that a big part of my drive to move towards self-employment is really just a way to carve out time to take adequate care of myself. I have significant doubts that it is possible to continue to advance in tech to staff+ levels, be a good spouse, parent, and friend, and not run myself into the ground with physical/mental issues. And that is sad on multiple levels.
So I respond by easing up on advancing my career, because it gives back to me the least.
Pretty much. I often wonder where people find the positions claimed on the internet where they're working 2-3 hours a day. Throughout my career I've increasingly seen more and more slack evaporate, to the point where it's almost nonexistent.
I always wondered why people complained about how much time certain aspects took up they could automate away and my question was always: well, once you automate away that nice simple task, what do you do with the extra time? You created more slack and someone's going to come looking to fill that void the second they're aware. And the new task is going to be more difficult until you get to sets of tasks so cognitively intense and complex you can't simply automate them away. Then your day is filled with incredibly challenging stressful work.
I have no issue with doing complex work; I've spent my career doing it. What I have an issue with is the amount of such work I can do in any given time span. At some point I need a break where I do something simple and mundane. Continuous complex problem solving is the road to burnout. You'll be greeted by more and more failure and lack of visible progress, combined with ever-increasing stress levels.
If you're an entrepreneur, small business owner, or manager looking to optimize your labor force then you may want the opposite. You want more time to focus on the complex and the more simple you can automate, the better or if you have a workforce, you want your highest comped individuals focusing on the most optimally complex tasks they're capable of handling. You don't want your Fellow level engineer refilling the coffee maker because it's empty or implementing some basic features on some UI, go back to inventing new algorithms, math, or building new technology... but people need those nice relaxing breaks and slack, they can't run at their best constantly.
I think you're writing about people who mention that they can only get 2-3 hours of actual work done, which in turn they count as "actually writing code in the editor".
I can easily imagine how it goes - because pushing tickets over takes time, bunch of meetings during the day takes time, explaining stuff to junior devs takes time, reviewing pull requests and answering to comments takes time, clarifying things with QA/BA/PO, figuring out which libraries to use by googling takes time.
I've seen devs who think the things I've listed don't feel like "real work", but they are. There is also no way to automate meetings or discussions over PRs.
>I have significant doubts that it is possible to continue to advance [as a worker under capitalism] (...), be a good spouse, parent, and friend, and not run myself into the ground with physical/mental issues.
It's heartbreaking how much human suffering is entirely avoidable in a post scarcity society where it is still artificially enforced to avoid the "moral hazard" of commoners daring not to toil or worry every waking hour
That definitely sounds broken. I have a hard rule with my PO: 30% of the sprint is mine (read: the team's). He only gets to schedule 70% of the stories according to his priorities. I use that 30% for tech debt, primarily, but sometimes for spike projects and other things that interest us.
It sounds like SOC2 compliance requirements unfortunately. Plus process overhead on who can raise tickets.
I've found compliance makes it harder to write good code. If you get a PR approval with optional suggestions you're heavily disincentivised from actually addressing those comments since if you push those changes you now need to wait for review again.
Like everything in process and compliance, it's designed by low-confidence managers with an innate distrust of their teams.
In our product, a change has the potential to cost businesses lots of money and also bring our customers into legal trouble, potentially making us liable too.
That's why we have heavy-handed change control, code vetting and so on. Yes, it makes things slower, but that's due to the risks involved.
I've also worked on embedded projects where field updates are HARD and costly. We had heavy-handed change control then.
When I put those controls/processes in place, it wasn't due to low confidence, it was due to confidence in two things: (a) even the best SWE makes mistakes and (b) work to control change risk pays off
Sure, it isn't appropriate in many cases, but to write off process as being designed by "low-confidence managers" because you don't see the point is a bit myopic.
Any SWE who thinks that a codebase doesn't benefit from review before merging is driving on ego.
I see your point, I think this is the crux of the matter though:
> Sure, it isn't appropriate in many cases
I think where low-confidence management comes in is the application of the process without reference to whether the process is appropriate. It's easier to require all changes to be reviewed, every change to have a ticket and all post-approval changes to require re-approval even if the thing being edited is CSS for an internal tool than it is to build out process that accounts for the field and risk.
It feels like many places/management teams take an "off-the-shelf" compliance approach rather than constructing a process that works for the team.
On my team, we had a process (for nearly a decade) that led to us being done on time, on budget, and with a very low bug count, and it didn't involve a lot of overhead (not everything required a ticket, we did code reviews only when we thought of doing so, etc.).
New management took over (they finally realized they bought us out a few years ago), shoving "Agile" down our throats, we've missed multiple deadlines, every deployment has been a disaster, and there are still outstanding bugs that won't be fixed Any Time Soon because they're not on the release schedule.
Oh, and the new mandatory code reviews before checking in code hasn't helped since bugs still got through (and I've lost code because it wasn't checked in---it's not helping matters that we're still stuck on SVN, and half the team can't access the master SVN server).
My company is soc2 compliant and we don’t need work items to push code. Certain changes need to be approved, but they can be approved by a software engineer on one of the approving teams for the repository.
As a product manager I appreciate this take. Lots of bureaucracy is caused by a few base requirements for compliance/governance required by laws or customer need. It’s a huge time suck for PM and engineering, but I don’t know if this is avoidable. Maybe more automated verification systems?
> Lots of bureaucracy is caused by a few base requirements for compliance/governance required by laws or customer need.
Read the underlying compliance requirements carefully. In the case of e.g. SOC2, the regulation requires visibility but does not say who may open tickets or who needs to approve them. You can do a lot by making processes more open, and so long as they are still visible, you can still pass.
If specific customers tell you how to run your business, either write process that isolates their requirements to a bare corner of the business (e.g. a completely separate environment for FedRAMP) or consider firing those customers.
I like this in a certain light. It's not so much a problem that PRs should be associated with tickets, it just sounds like the gatekeeping involved in getting tickets into an iteration is horribly overbearing. There _should_ be work happening that is driven by technical wants and needs, and the PO should be amenable to that.
Yeah - team members are also stakeholders and improving dev experience by allowing refactoring is something that needs to be taken into account.
Unless the company does not want any devs working on that project in the future. But that would be just like not wanting customers to use the project.
Guess how much would get done if you learned to explain why that work is important to a non-technical colleague? Lots. People don't always understand code, but they do understand problems, and why those problems are important to keep on top of.
If your PO is sensible, then a couple of paragraphs explaining why refactoring is important, with a closing line that says spending a week catching up on refactoring now will save 4 sprints of work in a year's time, will get you the time. People aren't stupid. Once they understand why something is necessary they're very receptive.
Also, add refactoring time in to your estimates in the future and you won't end up in this situation, plus the code that's committed will be better.
Do your non tech colleagues need to have approval before saving a spreadsheet or writing a doc? Do they need to plan it out in tickets weeks ahead of time and get sign off before starting their work? Then report status daily on their progress?
One of my friends is a technical writer and she is amazed we put up with this on the engineering side. No one would ask it of other professionals.
>Guess how much would get done if you learned to explain why that work is important to a non-technical colleague? Lots.
From my experience, this is more a cop-out than anything else. At some point I'd expect you to understand that children with a track record of doing as they're told and informing you of important details do not need to be strictly parented around every corner. To expect that very thing of adults, who have far bigger incentives to behave, feels off.
Updating a dependency, refactoring some code, or just making a 'simple' change sets off a chain of events that affect other people. The other devs need to review the code, the QA team need to test it, and then regression test the rest of the app, the devops team need to deploy it, the legal team need to update the legal documentation that lists the licenses for the dependencies the code has, the technical writers need to change the docs if the update has a visible impact for users, and so on, all across your org.
What looks like a small change to a developer is never actually a small change.
I really hope there aren't that many people impacted when you go for a piss.
Once you have technical decisions being taken by non technical managers you are sunk. It's hugely insulting to experienced engineers and it's a major reason that people leave. People crave respect and if they perceive that they aren't getting it, they will do what they can to get out. You cannot micromanage engineers in this fashion and expect to have good retention.
> making a 'simple' change sets off a chain of events that affect other people
Not every change affects other people, but isn't that my call to make? Also, "simple" in quotes implies it isn't, i.e. you aren't trusting what you are told.
> What looks like a small change to a developer is never actually a small change.
Not true, this is hyperbole. If the lawyers need to be consulted to change dependencies this is something a developer should know and account for. Why keep devs out of the loop?
I consult with other devs, QAs (if needed) & external teams, perhaps with a ticket if deemed necessary, other times just a PR. I run the (CI) regression, I schedule/announce and perform deployment (we have platform team, not "devops" which is generally done by the app devs), I write the app docs, we have no legal documentation that lists the licenses for the dependencies.
I do this as a dev - why should I not be able to recognise if a change is small or not? let alone never being able to.
Explaining why maintenance is necessary time and time again (tech evolves fast) is soul-crushing and quite frankly rather demeaning. I need x amount of hours for maintenance work every sprint, or the codebase will slowly become outdated and undesirable to work with. Again, tech evolves and changes rapidly, and open source packages die quite often. Maintainers abandon their packages (my PO likely has no idea that we rely on free open source software to ship features), and there are security issues with using abandoned packages… To sum up: when I see issues, I want to fix them. If I have to go through a corporate process and ask for time to fix the issues… I sort of tend to lose the appetite/mood. This right here is the core issue.
> spending a week catching up on refactoring now will save 4 sprints of work in a year's time will get you the time.
In my experience that’s very rarely if ever the case. Managers will always favor features over fixes in the kind of company that ends up with a crippling tech debt problem. They got there by not listening to engineers and having salesmen managers. These companies never change and no manager wants to be the one who’s fixing things for the next manager after they get promoted. People are idiots (maybe) but they respond to incentives. And very often the incentive is that problems that may arise in a year or two are someone else’s problems so not worth fixing now.
> We can’t even push to a git repo unless it has a linked work item/story/task/bug
Exactly the same where I work. The pace of getting things done is absolutely glacial compared to what you know you could achieve if you had any agency. I think the only reason this organization I'm temporarily a part of can even compete is that all its competitors must be equally inefficient.
But when something major breaks, and the answer to the question of "why?" is ... "well, I just thought i'd make that change, but nobody asked for it" what happens then?
I wouldn't want to be accountable in that situation.
And what if the same major thing breaks, but you were asked to do it? Your neck's on the line, and you did something wrong that you were asked to do correctly. The problem is that current micromanagement environments aren't just about micromanagement; they're also about passing risk and responsibility down to developers.
You can do the change work in a feature branch and propose the idea after the fact. If there's interest: "I've already done it." Stakeholders get a bit of instant gratification, like their request just materialized out of thin air. If they're not interested, don't mention it and let the work go unused; chalk it up as professional development time.
I do this fairly often. If a decision has a bunch of real risk associated with it I make sure to get sign off and create an appropriate evidence trail to pass risk back up when it's passed down. Much of work is just passing risk and liability around to PYA.
Because if you're asked to do something, someone has presumably thought it through and accepted the risk to the business/client.
I'm not sure I'd keep someone on the team who did a branch AWOL and proposed the idea after the fact. Doesn't show much respect for the team, that time could've been spent working towards goals agreed by the whole team.
If you don't have a lead or management environment with ears open to exploratory change, tech debt payoff or "do it better" tasks or whatever... and you have to manage up so much... that sounds like an issue to me.
If you consider such behavior 'AWOL' then you've bought into modern development micromanagement and may be part of the issue. That perspective doesn't allow for any degree of autonomy and desires drone teams who just crank out whatever they're ordered to. I would never work in such a role or environment, but to each their own I suppose.
There's also an assumption buried in there that any time spent working is somehow owned by you, your leadership, or the organization, and not your own or your teammates' time. I've personally spent plenty of hours "off the clock" investing in directions I think are correct in an IC environment, and it's paid off many times (I've also wasted my time on occasion, but it's my own time and my choice). If you have slack in your schedule or want to push something out by taking initiative, the type of management philosophy described loathes initiative, creativity, and innovation in engineering. It's a great way to drive those abilities out of your teams and organizations.
It's good to foster teamwork and target goals but you also have to give your teams some degree of autonomy, otherwise just as the article describes, they will leave from the drudgery. The allure of technology is the tangibility of innovation. If you rip that off development, for many, the work becomes unenjoyable, tedious, repetitive, etc.
What issue? My projects are delivered on time, on budget, and to the customer's expectations, without undue risk or unpredictability. That's my job.
Nobody on my teams would say I micromanage them, everyone has a large degree of autonomy within a framework of shared goals and shared values that keeps efforts working towards cohesive results.
With autonomy comes responsibility to the team, business, customer and every stakeholder... so yes, I'd consider it AWOL to undertake work that doesn't respect the input of everyone else by getting agreement beforehand.
Perhaps that's b/c you are apparently in a position to get rid of people who "go AWOL" despite maintaining "a large degree of autonomy".
If autonomy is predicated on "a framework of shared goals and shared values", then why do you think individuals can't do work based on their own conception of those shared goals/values, rather than requiring sign-off first? Autonomy is being able to make (and execute) decisions on your own (possibly based on shared information/values/etc.); requiring sign-off is not autonomy, it's merely the ability to participate in decision making.
There really shouldn't be an issue getting work approved and visible to the whole team (if anything for team input, collaboration, opportunities for others to object).
If you're working for the benefit of the product/team I don't see why you would need to, or have an issue with this.
> team input, collaboration, opportunities for others to object
You don't always need a ticket for this, if it applies at all. I'm not unaware of these benefits, but the burden lies with you to demonstrate devs cannot be trusted to be autonomous, or choose the appropriate mode of collaboration.
> getting work approved
Why is this always needed?
> If you're working for the benefit
This is a strawman, you can do this without the overhead.
Who do the approvers seek approval from, for the same reason(s), and who do they seek approval from?
> visible to the whole team
This is what standup / status updates are for. It takes all of a few seconds, no approvals needed.
Why is it any better if someone further removed from the change is accountable instead? Major breaks shouldn't be avoided by avoiding change; they should be avoided by having a strong QA process and support. It shouldn't be single-person accountability.
I don't believe this is how it is actually implemented in _most_ companies. Where I work every PR must have a linked story / bug / etc but anyone has the rights to create a story so it acts more as a way to track what changes actually goes into a release for x-teams to review and see if they need to document it, etc.
In regard to refactors, people tend to just squash them into another change they are making. This makes the git log a bit harder to follow at times, but people did this back when we just used to push to trunk too so I don't think the story is the deciding factor.
You wrote: <<I don't believe this is how it is actually implemented in _most_ companies.>>
I would say for non-tech companies with a strict set of IT guidelines, this is mostly true. Please ignore non-tech companies with weak or zero IT culture. It will be the 'Wild West' at those places! Nothing will be maintainable beyond a certain size because there will be so much key person dependency.
For pure tech or tech-heavy companies (banking, insurance, oil & gas, etc.), there is frequently more flexibility, including "dummy Jiras" just to track a non-QA'able code change like upgrading a C++ / DotNet / Java / Python library, or refactoring some code. In my experience, the 'Jira-per-commit' rule isn't awful, as long as tech debt does not require non-tech approval and the ticket is just a tracking device. (A few different vendors offer very nice total integration between issue ticket, bug ticket, pull request, code review, etc.) Just a one-liner in the Jira should be enough. In my experience, the best teams try hard to "do what works for us" instead of being a slave to the Jira process. Yes, I realise this is highly dependent upon team and corporate culture!
Finally, I would be curious to hear from people who work in embedded programming -- like automotive, aeronautical, other transport, and consumer electronics. I have no experience in those areas, but there is a huge number of embedded programmers in the world! Do you also have a very strict 'Jira-per-commit' rule?
I get why bureaucracy is a total pain, getting work approved by stakeholders constantly ...
But the actual ticketing/PR system? Change requires control.
The actual issue is not _using_ that control tool to get the right things done. If basic technical debt issues are not an easy sell in your org, that's the real problem, and one that should be handled by a senior/dev manager.
A big red flag for me is any org that doesn't recognise and service technical debt and empower engineers to make a win.
I also wouldn't say paying off tech debt should go without justification in every case. If an engineer can't measure the positive impact of doing something, it can be a hard sell. Why should an engineer spend 2 weeks doing something if we can't describe the payoff?
> But the actual ticketing/PR system? Change requires control.
The ticket system isn't for engineers. If it were for the engineers, they wouldn't be continually forced to use it. The ticket system is for the legibility of management or sometimes compliance (other flavors of management). This visibility is at the expense of the productivity of the engineers themselves.
> Change requires control
No, fundamentally, change is gated by control. The more control, the less the change, with sufficient levels of "control" leading to no change.
Requiring a "non-tech PO" to upgrade a package is just broken, though. PMs are good at some things, but giving them power over every minute of an engineer's day is a recipe for badness.
But code, unit tests, git commit messages and merge requests are already providing 4x documentation of code changes. Adding Jira tickets and production deployment documentation gets you to 6x documentation.
In my experience, if your company's problems weren't solved with 4x documentation, they won't be solved by going to 6x documentation.
I'm not sure that's a like-for-like comparison and if those things overlap like that, it sounds wrong:
- Ticket: Description of the requirement
- Code: How it was done
- Review: Peer-learning, change evolution
- Unit test: Testing of implementation as understood by SWE
- QA: Did the change match the requirement, did the SWE understand it? Is the outcome the right one?
Each "item" should serve a distinct purpose, have distinct value and be justified. If they seem like duplicates, then that probably points at issues elsewhere.
- Incident report: Regression. When running on AWS, page size of 1000 causes the process to crash intermittently with data loss. Recommendation: reduce the max size to 500.
I know what you mean, but to be a bit of the devil's advocate here (only half kidding):
As a reviewer of such a pull request, I'd go over all the places in the code where this page size constant is used.
I'd also like to see a rough assessment of the impact of this change. Does it affect a lot of code? Some code? What percentage of users are to be affected by this change?
Also, who asked for it? It's ok if no user asked for it and it's your own initiative. But if users did ask for it (or rather complained something like "the app rejects our API requests" or "the app is effin slow, please fix"), then it'd be nice to connect to their tickets / mails / chat logs. This could serve as a proof to management if someone decides to question this change.
Deployment:
If this change is in an API called by many functions (so, big impact), but it can bring with it a big benefit to many users, I'd like to see a rollout plan - as simple as putting it into a beta version, or (if we have them) using feature flags to enable it, and a plan (can be an automated script) that tracks crashes during this rollout.
If the change doesn't have a big impact then that's not necessary.
Ideally I'd like to see coverage results that prove that all the functions which use this constant, and all code paths leading to them, have coverage.
It's perfectly OK if they don't - perfect is the enemy of the good - but at least the major ones should. I would also carefully go over at least some of the code that uses this constant directly and indirectly, to ensure there's no funny business like too many threads allocating this bigger buffer, and no funny out-of-bounds issues due to code assuming the size is of a certain length (if it's C/C++/C#), etc.
So really, what is the user-visible impact of this change? If it has no user-visible impact, then why was it made? The ticket as it was specified here doesn't answer this question and therefore reflects a somewhat broken organization/team, where engineers are disconnected from their users and/or lack the eloquence or willingness or time to explain their changes. I bet the person who wrote this doesn't even bother writing comments about non-obvious changes (such as this one!), making their code harder to maintain.
Requiring it to be documented and approved is just responsible from a change-management perspective. At my company we have similar requirements, and doing that is basically required in order to meet security audit expectations. The problem is when the managers don't let you have a say in what gets done.
If a developer on our team thinks something should be done and can do it quickly, they are encouraged to create a ticket and do it. It gets code-reviewed and accepted. If it is not a quick change, they need to bring up the ticket at a planning meeting to make sure it is balanced against other priorities.
Mostly state level here in the US. The feds just don't allow any type of gambling across state lines but otherwise leave it up to the states to regulate. Its actually a huge pain in the ass to deal with since there has been very little standardization of state regulations so far.
You wasted time on something that doesn't get you out of PIP, into promotion, or a higher salary. At companies that micromanage, I would imagine your direct manager is the sole owner of your performance rating.
I'm not trying to get a higher salary or promotion. I make enough money. I can't say for certain since it has never happened but I would probably quit on the spot if PIP'd.
Worst case, you get criticized for doing unauthorized work.
Best case, you spend your time on a task that goes unnoticed, and now you have to do overtime for the stuff that you are actually assigned to and supposed to do.
It gets worse, too - as long as I've worked as a software developer there's been some sort of time tracking system in place, and it has to be planned up-front, and has to work out to at least 40 hours (after they "negotiate" your estimates down). Which leaves no time for the unplanned stuff that inevitably comes up. This always goes in a cycle like this:
1. Management demands that every bit of work be associated with a ticket
2. devs just open tickets for the unplanned stuff so that it shows up in the ticket tracking system
3. management complains about devs opening "their own" tickets and prohibits self-opened tickets
4. devs do the unplanned (always "super high priority!") stuff without any ticket tracking and fall behind on their "planned" tickets (that nobody really cares about any more, but are still on their board)
1. management demands that every bit of work be associated with a ticket...
It feels bad to have a ton of structure, but the opposite is worse IMO.
Single-line tickets from the CEO that turn into months-long projects with no guidance on the functionality. Engineers that burn down entire features because "it's bad code." Secret projects where you get berated for asking stakeholders to clear up requirements because "you're scaring them."
It's easy to look at a rigid structure and assume it sprang whole cloth from Zeus's head - but most of the time it's an overcorrection. Being burned by a company where freedom is just an excuse to make employees work overtime will make anyone go a little overboard.
I hate this. An estimate is an estimate; there is no negotiation. Negotiating an estimate is simply interfering with it, making it less objective. It can also be a trick to put pressure on developers.
I've dreamed about a 20% policy like google had, except it's where you can work on anything, including code debt.
I've tried to stress to managers in the past that developers feel the pain of code debt. It makes us slower! Enable us to spend time sharpening our tools and managing our codebase.
One problem of course is that not all SWEs can do this well. I wouldn't necessarily trust a junior hire to recognize and execute a proper refactor.
I've worked at companies that tried to have explicit 20% policies, and it worked okay for some period of time, but then became difficult to prioritize. That said, I've typically been pretty successful at casually using 10 - 20% of my time to work on random low hanging fruit (dev ergonomics, performance, etc). For some reason using a quiet Friday afternoon, or time between tickets seemed to work better than an explicit "work on whatever you want every Friday".
At my old place they just overestimated my tasks' complexity by 200-300%. I wasn't going to be the one saying no... so I had plenty of time to fix whatever.
I've always been a fan of the under promise and over deliver paradigm in tech. Overestimating gives you nice fat error margins should something unplanned come up, you're not slaving evenings and weekends away trying to meet a rough estimate that someone turned into a deadline for scheduling.
Once you finish meeting minimum requirements, should you have that margin padding, you can then refine what's been created. Fix issues or shortcuts you may have taken, improve or optimize some portion that'll give significant improvement in experience, add some additional functionality you think would be nice to have where permitted (while the context of everything is fresh in your mind).
Ultimately, what's delivered will more often than not meet minimum requirements, so whoever requested the work will be satisfied. They may even be incredibly pleased and consider you a wizard for some of the improvements (they also may not want the improvements, so be sure to keep those modular so you can easily slice them off if they're undesired).
This keeps everyone happy really. When you start trying to optimize on the estimates so they reach actual time or a little under actual time to pressure developers to do OT, that's when you get into toxic environments. If you give a little error margin developers will likely reinvest it in your application where it sparks joy in them meaning you're about to get the highest quality work, the stuff the engineer wants to do.
In my old workplace, I overestimated my tasks by 2x. Who was to question me? The PM who hadn't coded in 10 years? My manager who had never used Python before?
Lucky you... in my experience the answer to your two questions is: yes. Having no or outdated experience doesn't stop them from questioning even accurate estimates and attempting to filibuster their way into us lowering them.
My take is that it's always going to come with an implicit expectation of some "business value" resulting if it's time granted to you by your management. If you say "let us hack, we could come up with something that makes millions of dollars!" they're going to wonder constantly when exactly you're going to come up with those millions.
I've dreamed about a 20% policy like google had, except it's where you can work on anything, including code debt.
Where I work we have 1 'maintenance day' each sprint where the devs choose what to work on. That can be fixing annoying issues, improving the code, learning something, trying something out, etc. It works well when other things aren't taking priority (which is far too often tbh).
This is really important. At my first job, in 2010 or so, we were using Servlets. Not just Servlets, but Servlets with some home-rolled shims to make it act like Servlets 1.0 in a distinctly post-1.0 world. Doing pretty much anything in there was slow, and I realized it at the time. So during a hack week I built an MVC layer on top, with routing that didn't involve XML files and just--made it generally better. Or so I thought at the time. To their credit, the team took it seriously and gave it a try, and while in a vacuum it was a better system I hadn't understood how the people on the team thought about web development and it wasn't a better fit for them. They didn't feel like there was a problem, so a solution didn't make sense. It could've been technically amazing (it wasn't, but it was fine), but it didn't solve a real problem, so it was an academic exercise at best.
Other refactors and systemic expansions, like a slot-based system for advertisement placement, worked a lot better, because I'd learned a little about how to dig into what problems actually existed and how they were causing aggravation to people.
> I hadn't understood how the people on the team thought about web development and it wasn't a better fit for them. They didn't feel like there was a problem, so a solution didn't make sense.
There's got to be a limit to this line of justification though. Lots of people have just plain wrong ideas about 'web development', so catering to their ideas doesn't serve anyone well (except, perhaps, those people, who in the short term don't have to learn anything correctly).
A colleague shares stories of his team, who don't grasp the difference between GET and POST, don't understand the term idempotency, believe that 'web dev' testing only means one thing, etc. There are 5 or 6 of them and only one of him, so... much stuff ends up staying 'wrong', and the 'wrongness' in each section of the code ends up compounding 'wrongness' in other systems/features as they're added. This matches how this team thinks about web/software development. But it's not in any way beneficial.
Oh totally! I firmly agree that at some point you have to pull the ripcord.
The question becomes--where?
This company was incredibly successful up until COVID and is still ticking along, so it's hard to argue. I gather they've done significant work to rebuild the universe in that time, though. I might've just been early.
Where do you think your understanding went wrong initially? You said you realized that doing anything in the old system was slow. Was it not actually slow? Was the slowness not relevant because there weren't actually that many routes? Were your coworkers resigned to having some minimum level of tedium in web development? Or something else?
Oh no, it was slow--but it was slow and they were used to it. (And there were plenty of routes--probably somewhere close to a thousand at the time, in 2011?)
The problem I hadn't fully understood was a mix of the last point you suggested--tedium's just how it is--and a set of libraries built up as coping mechanisms that were intrinsically tied to the servlets process. Not really written to be able to be shimmed into a better world without dragging the whole thing there, and not enough appetite to try for it.
Yeah, totally. And in my experience (on both sides of things), a good junior developer is going to probably realize what sucks well before having a strong understanding of how to make it not suck. The first attempts are probably not going to be perfect (maybe not even good) - but that's the sort of thing folks don't really learn by being told. You have to walk into that screen door a couple times. Which is why hiring junior folks is such an investment.
When I was a non-senior eng., I interacted with plenty of senior engineers, found that I easily possessed the skills to work at their level / do their job.
My company at the time laid off my entire division. I decided it was time, needed a job of course, and so applied for and landed a job as a "senior SWE".
> give them a small budget and enough autonomy to do what they want, most people will fix shit that bugged them for a long time.
This has been huge for me at my current job. I saw some unused equipment in a lab and started asking questions why. Turns out the thing worked, but not great, so no one used it. What started as just fixing bugs and adding features became my own line item in the budget and requests for the (new and improved) equipment from other departments. It's something I look forward to working on.
That reminds me of the time I was a junior dev, and the team lead told me verbatim: "I know you are too busy to write tickets, but can you take some time off [this urgent thing] to do that? Thanks!"
This was after they encouraged a certain "cool culture" for a couple of months due to the lack of direction. It was pretty funny that I did not only get micromanaged, but was told I did the wrong thing, and then asked to do a third job that was not my responsibility.
We have a lot of bugs everyone complains to me about and I have sufficient downtime to fix them* but I have to go through drawn out planning, UI, UX processes before I can even start. I just don't bother any more.
And yeah, it's definitely not just the best ones. I am mediocre and am so bored and so done with dev.
* the downtime is there because I am waiting for planning, UX, and UI for a different high priority task.
I think that many companies don’t know or have forgotten that programming is a creative process more than a manufacturing process. You’re not pulling chicken breasts off an assembly line and wrapping them in plastic.
So you'd rather have bugged out people which decreases morale which decreases productivity? See it as an investment. Yes, motor oil is more expensive than no motor oil, but it makes the engine run a lot better.
You're both right, unfortunately - which makes it hard to ever consistently choose a path. Many people are stuck in the middle of the two sides in lots of orgs.
Your tedious tasks are important. But some of your research/autonomous work is important as well. But both are sometimes hugely wasteful as well. I'm regularly reminded that someone more senior can ascribe "business value" to something and push that to the top of your priority list even when that thing isn't valuable.
To me, as a manager, it's worth thinking about it from the perspective of praise. People might feel better if you're reminding them that the tedious stuff IS actually important, IS actually valuable (and why), and etc. And it's important to tell folks to share their side/research efforts as well. I've neglected to share so many of these little efforts over the years, but feel that they're almost always well received.
To say the last part a different way: share something, get the response, and then do what you can to connect it and make it more relevant to a real problem or issue if it's not already.
I see this a lot at my current job. The tech stack is ridiculously complicated and I think a lot of it is due to this sort of motivation. They let developers run wild and build using whatever tools or new hotness that they wanted. But ultimately we sell Widgets. And the 3rd refactoring of an application to use some immutable data library doesn't do anything to help us sell more Widgets.
So yes, we have happy developers that have good morale. But we also have probably twice the number we need because nobody put their foot down and said "this is work, not play".
Shit that bugged them for a long time can multiply overall productivity going forward, and add far more value than slowly banging out the next user facing feature in a low productivity environment.
> Everything has to have some immediate business impact.
Or more specifically, explainable business impact.
But it's hard to explain how the code has become horrible and needs a refactor to make it easier on devs, reducing stress and reducing the likelihood of both bugs and developers leaving.
It may not be a matter of "the best". I have taken a personality test that had an item on it that covered product lifecycle. If 1 is initial conception, 2 is prototype, 3 is initial release, 4 is major enhancement, and 5 is maintenance, my personality is that I prefer 2 or 3. By 4 (major enhancement) I start to get bored, and by 5 (maintenance) I'm definitely bored.
It's not that I'm one of "the best" (though I like to think that I am). I have a personality clash with the later stages of product lifecycle.
Is that the premise? Seems to be saying that constantly changing skills exhausts developers to the point that it becomes more lucrative to work in another profession.
Although, I suppose learning new things just to tread water can be boring too.
This is how I read it too: that a fast learner's skill is degraded in a field that is constantly reset (software dev) vs. one where you can stack knowledge. And thus the fast learners will eventually leave the software field and transition to one that doesn't reset constantly, so they can stand out more.
Doesn’t have to do with boredom so much as maximizing potential.
To be fair, almost all my managers were amazing, people who truly cared about their staff: at professional level as well as a personal level.
I've only had one absolute psychopath as a manager ... but I should thank him because he was the last straw and gave me enough courage (and anger) to leave AWS and start my journey as a solo entrepreneur.
Oddly enough I joined AWS not too long ago and while the job itself sucks, my manager is exceptionally awesome. In fact I only interviewed with AWS as kind of a half-joke, but he was so likeable that it convinced me to take it seriously. He has a healthy "fuck em" attitude when it comes to pressure from outside the team, so he's constantly protecting us in a multitude of ways (e.g. during on-call rotations or deadlines being imposed on us). He has yet to do a single thing that I would consider micro-management.
I think for some engineers (me) there's not much you can offer. I don't just want any new random tech challenge. It's unlikely your company or most companies have something I could truly be passionate about solving.
Literally every engineering director I've ever seen says exactly what you're saying. E.g., "as much pay increase as I could possibly": for my experience and growth, an employer I worked with offered a -6.6% increase in real salary for my time there. (≈3 years tenure.) Negative. The work was also … not what I'd like to be doing. So between "stay with company" and "find different company", it shouldn't be too hard to predict which option was more appealing.
My experience, unfortunately, is that good managers like you seem to be don't last long. They get replaced by manage-up sleazeballs who'll never, ever protect the people beneath them because it's not game-theoretically optimal.
The thing is, executives measure themselves by how quickly they get promoted into the next role, so no one cares that good management might reduce turnover in the next 2-3 years--in fact, the executive mindset is that it could just as easily increase turnover (what if we invest in their careers, and they leave?)
My philosophy as an engineering manager is to actually pursue this outcome. If I treat them badly they will leave. If I treat them well and train them into better engineers then they will leave. It's like being a college football coach. Having your star athlete be drafted to the NFL is entirely the point.
There is no shortage of SWEs. What I see, and what I have experienced first hand, is that companies, especially the more hyped/small ones, pretend to be FAANG and get very picky when interviewing. They often employ FAANG-style interviews.
Now, if I really have to spend that much time prepping to interview at your unprofitable company (that most likely will go under) don’t you think that I would try my best to work at faang instead ?
As a matter of fact, I was rejected at plenty of these small, insignificant companies, but ended up with L6 offers at FAANG.
Be humble and you will find plenty of good engineers out there.
I know tons of good SWEs that don't want to interview at or work at mega-FAANG, and if I were running a business I would definitely try to attract that talent by being different: offering a "normal and reasonable" interview process along with better perks, flexibility, and WFH.
Instead, they all want to run a bazillion microservices in k8s.
I definitely think "we do not do leetcode interviews" and maybe "we only have three interviews total" would be selling points on a job posting. People who are experienced in the field don't want to go through the same hoops that newbies do just to prove they know how to write basic algorithms.
You wrote: <<People who are experienced in the field don't want to go through the same hoops that newbies do just to prove they know how to write basic algorithms.>>
I am constantly interviewing candidates for roles at my company. CVs are a complete gamble: some are lies, some are wildly understated, and everything in between -- at all levels of experience! "[J]ust to prove..." and yet so many cannot do the 2022 version of FizzBuzz. I am stunned how many senior (well, so they say!) hands-on technical applicants cannot do basic things like write a very simple linked list class or explain to me how a hash map works. Forget about explaining the finer points of sorting algorithms (honestly, very low value in my line of work).
There is no reasonable alternative to testing of some kind for hands-on technical roles -- I am flexible about method: (1) whiteboard coding (ugh in 2022), (2) IDE/text editor on a shared PC / video chat (meh / eh in 2022), or (3) take-home (the best in 2022, even if there are drawbacks for people with families).
Joel Spolsky said it best about hiring: The goal is to avoid bad hires. Average and above are fine.
...explain to me how a hash map works. For get about explaining the finer points of sorting algorithms
I've been in software development for 15 years and being able to explain either of these things has never come up and I probably wouldn't be able to give a decent answer without reviewing the topics specifically.
Yes, companies are hiring for competency, not tenure or years of life lived while holding a job. If you want the latter, I have heard it is more structured that way in Japan and much of Europe.
I would expect most people I hire to be able to explain how a hash map works.
Depends on what exactly your company is doing. Why do you care whether an applicant understands how a hash map works, and what's so particular about a hash map?
I'd care more about an applicant understanding the concept of a hash, or hashing in general. If an applicant shows they understand that a hash is a fascinating mathematical concept with many uses in computing, that is more interesting (to me) than someone who memorized a hash map definition.
They can always learn a particular application of hashing (a hash map, for example) later; understanding the concept shows the aptitude and capacity to pick these things up on the job.
> why do you care if an applicant understand how a hash map works; and what's particular about a hash map?
Because a hash map is:
1) a pretty basic concept in data structures
2) Variations on hash maps are used all the time in the real world. If you use objects in javascript, dictionaries in python, or maps in C++, then you are using things that essentially implement hash maps.
Point number 1 is like if I went to an orthopedic surgeon and they couldn't tell me what the liver does. You can say "well the liver has nothing to do with my finger that got smashed in a car crash, so what do I care." Or you can say, "that seems like a red flag. Maybe I'd be safer choosing a different doctor."
* Note: I have no idea how often the liver comes up in orthopedic finger surgery and for all I know it's a lot. But I think you get the point.
Yes, you use them. You don't build them. To torture your analogy it's like asking the surgeon to explain how their bone saw works. Why should they know? All they care is that it cuts bone.
Knowing how to build a bone saw is a completely different skill set than surgery. Building a hash table is a job for a software engineer even if not every engineer does it frequently. Still my analogy wasn’t great.
Analogies aside, hashmaps (in one implementation or another), arrays, vectors/lists, strings, and arguably sets are very, very common data structures in most modern languages. I don't expect someone to be able to build a hashmap from scratch, but I do expect an experienced engineer to have some basic idea of the pieces that go into building one, as well as its properties (not necessarily ordered, O(1)-ish sets/gets, collisions, etc). Knowing this kind of thing helps you understand when to use an object in JavaScript vs a Map. Or how dictionaries differ between Python 2 and 3. If you understand the underlying data structure, then you know what questions to ask.
* in python 2, a dictionary’s keys are not guaranteed to have consistent order. In more recent versions of python 3, they ARE guaranteed to have consistent ordering. This has ramifications for the code you write, and it’s especially confusing if you don’t understand hashmaps because 99% of the time, the order will be maintained in python 2 even though it isn’t guaranteed. But relying on things that are true most of the time is a very bad way to write production code :)
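To make that concrete, here is a tiny illustration (my own sketch, not something from the original comment) of the ordering behaviour you get from CPython 3.7+ dicts:

```python
# Since Python 3.7, dict preserves insertion order as part of the language
# spec; in Python 2 (and early 3.x) it was an implementation detail you
# could not rely on.
d = {}
for key in ["charlie", "alpha", "bravo"]:
    d[key] = len(key)

print(list(d))    # ['charlie', 'alpha', 'bravo']  (insertion order)
print(sorted(d))  # ['alpha', 'bravo', 'charlie']  (sorting must be asked for explicitly)
```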
>I don’t expect someone to be able to build a hashmap from scratch but I do expect an experienced engineer to have some basic idea of the pieces that go into building it as well as it’s properties (not necessarily ordered, O(1)ish sets/gets, collisions, etc). Knowing this kind of thing helps you understand when to use an object in JavaScript vs a map. Or how dictionaries differ between python 2 and 3. If you understand the underlying data structure, then you know what questions to ask.
Yeah, I'd say this is all in knowing how to use a hash. How to build one would go into the underlying data structure.
To me, "what a hashmap is" is just an extremely basic engineering concept. Your reference to "memorizing" (to refer to the topic you don't like) vs. "understanding" (to refer to the topic you do like) is value-laden and also suggests to me that you think a hashmap is more complicated than it actually is - it is truly just one step from understanding what a hash is.
Well, I mean there are quite a lot of differences between what the answer to 'what is a hashmap?' can be. Is it acceptable to say a hashmap is a bunch of key-value pairs where the key is an identifier by which you can look up the value you want, or should it be more in-depth, akin to the Wikipedia article https://en.wikipedia.org/wiki/Hash_table, describing not just what it is but also its place in CS and how it is implemented? If the second, it's probable the memorized description would be more accurate.
I would expect the basic notion of hashing to decide where to place an element as well as a strategy for dealing with collisions (the basic approach being hashing to a linked list). This didn't require any "memorization" on my part.
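For what it's worth, that level of understanding fits in a few lines. Here's a rough Python sketch of the idea (separate chaining, with plain lists standing in for linked lists) - an illustration of the concept, not anyone's reference answer:

```python
# Sketch of a chained hash map: hash the key to pick a bucket, then chain
# colliding entries inside that bucket.
class ChainedHashMap:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # hash() decides which bucket the key lands in
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or collision): chain it

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

m = ChainedHashMap()
m.put("a", 1)
m.put("b", 2)
print(m.get("a"))    # 1
print(m.get("zzz"))  # None
```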
Ok fine, so not an in-depth definition, but someone might think you mean an in-depth definition if you don't specify the level you expect is being used (perhaps because they have encountered that situation). I mean if you ask me define a hashmap and how it works for example I might think: "aw damn, this guy wants the low level details, I bet this place sucks to work at!"
Although I would probably ask them to specify the level of detail they want before trying to comply, and then not get the job because I ask a lot of unnecessary questions or something.
I'm not sure what you even mean by an "in-depth definition." What I said is basically constitutive of a hashmap; its place in the history of CS or whatever is not.
A not-in-depth definition would be a one-sentence definition of a hash map, plus probably an example of when you might want to use it.
An in-depth definition would be that, followed by how you implement a hash map (thus, if your language of choice already has a hash map, don't use that, but show us how you would implement this classic data structure), how you avoid collisions, and maybe a discussion of various ways you could implement it and what the tradeoffs are. This would be you 'understanding!!' what a hashmap is, like, deeply. (I hope my tone makes clear I do not advocate for this.)
on edit: I think it seems you might actually be advocating for this deep definition? If so, probably you are working at relatively low level?
on edit 2: for example if you were using Python you might say a hash map is a dict, and talk about how to use dict, the in-depth definition would not allow this. Which I think is what the other posters were worried about, being asked not to use your language's implementation of the concept but go lower and show you can make the whole thing.
I don't think understanding how a hash map, an incredibly simple and frequently used data structure, works is all that "low level." If I really wanted low level, I would talk about having linked lists of arrays rather than a linked list of nodes so that you could benefit from fewer random lookups.
Ok, perhaps for your job it is reasonable to know what I call the in-depth definition - implement hash map functionality in the language the interview is using, without using that language's built-in hash map, and discuss trade-offs (or perhaps it is your particular interests that allow you to keep this information readily available) - but I guess you can see how many people's jobs do not require this, and it would be reasonable for job interviews in those fields to only want the shallow understanding: a one-line definition of a hash map and when you would use it.
If you ask for the in-depth understanding for positions that would never need it, it follows that you will be disadvantaging many applicants who might be great for the position and advantaging applicants who know at least one non-relevant thing.
Yes, companies are hiring for competency not tenure or years of life lived while holding job.
Oh, I wasn't trying to say that this knowledge was valuable or not. I was just pointing out that you seemed surprised that experienced people wouldn't be able to provide a good answer to that question. And the answer is that in many jobs, it's not useful knowledge.
I concede it may just be that I've been lucky, but when interviewing experienced candidates I've never really had any problem judging their competency just from discussing their past work. A little bit of back-and-forth deep dive into the technology has always exposed people who are shallower than their CV would have you believe.
What has burned me, however, cannot be tested adequately in an interview. Bad attitude and laziness. Anyone can behave well in interviews, so we've hired a few people who turned out to have a passive aggressive streak or condescending attitude that interferes with the rest of the team. We've also hired people who are really great developers ... when they work. But they're really lazy and getting them interested enough to do the work is the hard part.
Hiring is hard. I doubt I need to tell you that. There is a reason we gravitate towards hiring people we've worked with in the past, or come recommended by someone we trust.
I've had the same experience. Everyone wants you to "just look at their resume". Meanwhile, I've interviewed plenty of people with great resumes who can't solve a simple problem like reversing a string [1].
If you can point me to extensive open-source experience on projects that roughly approximate professional coding, then fine, I'd be happy to walk through your code with you instead of doing an algorithmic problem. The issue is that that's a minority of developers... most people don't have the time or desire to do extensive open-source work outside of work hours and I don't blame them for it [2]! But given that resumes are unreliable, I need to test you somehow.
[1] No, I don't ever have to reverse strings at work. But I do have to write efficient code. And if you can't conceptualize how to reverse a string then you probably won't stand a chance at more difficult algorithmic issues I often come across.
[2] I don't recall who it was, but I once heard a very well-regarded chef say that he's tired after work and so he doesn't like to cook much at home. I have zero problem with a developer doing the same with coding! Go home and work on a hobby, spend time with your family, or smoke pot and watch netflix... I care about what you are capable of at work, not what you do with your free time.
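For context, the string-reversal question from [1] really is about this small; here's a rough Python sketch (mine, purely illustrative) of what an answer might look like:

```python
# The interview classic, sketched two ways. The point isn't the one-liner;
# it's whether a candidate can express the manual version at all.
def reverse_string(s: str) -> str:
    # Idiomatic Python: slicing does it in one step.
    return s[::-1]

def reverse_string_manual(s: str) -> str:
    # Two-pointer swap over a mutable copy, the way you'd do it in C.
    chars = list(s)
    i, j = 0, len(chars) - 1
    while i < j:
        chars[i], chars[j] = chars[j], chars[i]
        i, j = i + 1, j - 1
    return "".join(chars)

assert reverse_string("hello") == "olleh"
assert reverse_string_manual("hello") == "olleh"
```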
>technical applicants cannot do basic things like write a very simple linked list class, or explain to me how a hash map works
In my experience people know these things, they just don't realize that what they're doing can be described generically.
An example: if you're in an iOS interview and ask the person to describe a graph, they will get very angry and complain that this is useless knowledge and they don't need it to get the job. But if you ask the same person to describe a UIView hierarchy, they often have no problem doing so. So they _know_ what a graph is, they just didn't know it had that name.
With that in mind, my tests show that generic algorithm questions suck and questions themed around actual features of the product are the best. If you word the question in a way that feels relevant and familiar to developers, they will be more likely to know the answer, even if deep down the solution is exactly the same as the generic one.
Personally I do because I enjoy working with very smart people.
The backlash against leetcode is the same as backlash against other types of tests: most people are going to fail and most people don't like failing, so they blame the test.
I consider solving technical challenges in an interview the "LC style" interview, as compared with talking about your past experience or a language-trivia grab bag. I am not saying literally ask questions found on leetcode.com.
I think it is easier to actually know the material for LC interviews than to somehow memorize the question bank. I haven't seen many people succeed who were unskilled but managed to just memorize the questions.
So how come I find that a good chunk of SWEs at FAANG are not that smart, and barely get anything done? Plus the quality of their work (generally speaking) is very low compared to what one would expect. And this is true across the board; it is a recurring theme when talking with peers.
I think LC-style interviews have ruined the interview process in the tech industry.
Note: I do ask candidates to code during the interview, but I ask things that are related to real problems, some of which I've had to solve in my day-to-day. In addition, I put a lot of emphasis on how well they articulate their thought process and on the quality of their craft.
Also, I would not discount the `past experience talk` that easily. Actually, I use that to drill down into their resume to better understand their real contribution. More often than not, people just lie. They are very easy to spot, and at that point it's game over. I don't care if you nailed the coding; if you lie and oversell yourself, you are done.
Another thing that I find very annoying is that interviews are very often conducted by junior engineers, and they don't have, IMO, the maturity and experience to properly assess a candidate's skills and potential. You either do well according to their expected solution, or you are out. Interviewing is not just a binary coded-well yes/no process; it is a little more involved.
I passed candidates that did not do well on coding but that I was convinced had potential, whereas I did not pass candidates that did very well on coding but did not show any interest or passion at all.
> so how come that I find a good chunk of swe at FAANG are not that smart
You put the bar wherever you want. A company can decide to set it lower than what you would expect or like, but it could still make business sense, e.g. if 99%+ of hires still perform well with this bar and the company would like to hire faster.
This is related to:
> and barely get anything done. Plus the quality of their work (generally speaking) is very low
That's a problem of the performance review process. If everybody considers that someone is not delivering, that could be for multiple reasons, and even with a very high hiring bar that could still happen, so you can't rely solely on interviews.
If you don't have a good performance review process, you'll end up with worse hiring because you can't measure the impact of your changes.
> I passed candidates that did not do well on coding, but I was convinced they had potential. Whereas I did not pass candidate that did very well on coding, but did not show any interest or passion at all.
Can you do that objectively? It's very easy to introduce bias if you try to evaluate whether candidates show passion.
I like this kind of answer. You are trying to be the leetcode alternative, which I fully support. I have previously written on HN about my most common interview programming question: "Please implement the classic C function atoi() in any language of your choice. Do not use built-in string-to-number functions." This question has a lot of edge cases, but it is simple enough to program on a whiteboard, on paper, in a text editor, or in a weird-IDE-that-I-never-used-before-this-interview(!). The questions that people ask and how they explain their solution say a lot about them as engineers.
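To illustrate the scope of the question, here is one possible shape of an answer in Python (my own sketch, not the commenter's reference solution); the edge cases - whitespace, sign, where to stop, overflow - are exactly where candidates tend to differ:

```python
# Handle leading whitespace, an optional sign, stop at the first non-digit,
# and clamp to 32-bit range (one common convention for overflow; classic C
# atoi leaves overflow undefined).
def my_atoi(s: str) -> int:
    INT_MIN, INT_MAX = -2**31, 2**31 - 1
    i, n = 0, len(s)
    while i < n and s[i].isspace():   # skip leading whitespace
        i += 1
    sign = 1
    if i < n and s[i] in "+-":        # optional sign
        sign = -1 if s[i] == "-" else 1
        i += 1
    value = 0
    while i < n and s[i].isdigit():   # accumulate digits, ignore the rest
        value = value * 10 + (ord(s[i]) - ord("0"))
        i += 1
    value *= sign
    return max(INT_MIN, min(INT_MAX, value))

assert my_atoi("   -042abc") == -42
assert my_atoi("") == 0
assert my_atoi("99999999999999") == 2**31 - 1
```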
I would call that leetcode-style; I guess I have a broader interpretation than most people have in mind. It's more easily memorized, IMO, than traditional leetcode - atoi and the std::string class are very common questions nowadays.
The backlash is the same as the one against standardized testing methods in school. People that don’t fit the mold of the testing method will fail regardless of how competent they are at their actual job.
It’s good you like that I suppose, but it sounds absolutely bonkers to me.
> don’t fit the mold of the testing method will fail regardless of how competent they are at their actual job.
The mold being answering questions about their supposed area of expertise.
I think people really like to claim that they are misunderstood geniuses who just don't fit the mold of being able to answer questions about the things they know. I have no doubt that such people exist, but I would not want to scrap an evaluation system simply because it doesn't catch every possible person, more important to me is keeping bad people out.
But I have never encountered a company without an on-the-spot technical interview (involving coding or math) that has had more success keeping bad engineers out than FAANG.
ah right, leetcode interviews are a woke/politically correct practice promulgated by fake talented FAANG engineers to keep out the real salt of the earth SWEs who know how to do real work.
it's fascinating how these playbooks can recycle themselves in any number of scenarios. yes, conditioning on being an engineer at FAANG you are much more likely to get a better engineer, I'm not going to apologize for saying the truth.
e: not going to keep replying, I seem to recall getting in previous fruitless arguments with you when you suggested banning renting was the way out of California's housing crisis.
> I am arguing that LC/FAANG interview does not do a better job at filtering them out.
You're talking about gigantic tech companies that actually have a business interest in getting that right, and you just assume they haven't done any studies about it. If you want to argue they're wrong, fine, but the money is against you on this, so more proof and arguments would be welcome.
I think we all agree that no interview process is perfect, but you're basically claiming that they're all equally bad.
> "we do not do leetcode interviews" and maybe "we only have three interviews total" would be selling points on a job posting.
It depends on the job. I have had interviews that broke the mold here and were panel discussions or more of a job-talk experience, and I found those interviews uniquely exhausting because they required their own set of skills to study for, different from the "leet code" style. At the extreme end were the take-home projects, which I simply didn't have time to do for every company and which were extremely unattractive to me for that reason. I actually found leetcode-style interviews required the least amount of prep for me and were the most straightforward, especially when they were structured to leave me with time to ask questions and talk to real engineers at the company.
> Now, if I really have to spend that much time prepping to interview at your unprofitable company (that most likely will go under) don’t you think that I would try my best to work at faang instead ?
I feel the same way.
That said, my company and many others make the interview process much easier but still find it difficult to hire. I know this is a common problem, because I get bombarded with good job postings by recruiters and they are usually still there months later when I finally get around to responding.
Companies have only tested my code skills and handed me personality tests when applying for full time jobs. As a freelancer, I get one or two interviews where I talk to the architects, tech leads and managers, and that's it - either I'm in or out after that. They end up treating me as an employee anyway, so don't really know why this distinction is made in the first place, but I suspect HR.
I don't think FAANG style interviews are a good way to do it, but it is more important to be picky about hiring at a small company. If you're in a company where everybody knows everybody, then any new hire will affect the entire team. A good hire will pull the whole team up. A bad hire will negatively affect everyone.
>High-ability workers are faster learners, in all jobs. However, the relative return to ability is higher in careers that change less, because learning gains accumulate.
This is not the only reason for the quick learner -> high dropout thing. By the article's definition, I'd be a "fast learner". Most of the industry expects me to come in already knowing what they want me to know, while most ways to obtain said knowledge are blocked by barriers that are difficult for non-corporates to get past. Meanwhile, almost every corporate job I land expects me to do the same things for several months and gives me only a few learning opportunities a year. At the same time, university primed me to absorb knowledge like a sponge and never get stuck on a single perspective, while corporates complain that graduates don't know Spring when they graduate.
So somehow you're expecting me to stay while my knowledge deteriorates unless I keep it up in my own time, all the while giving lowball raises and not satisfying my desire for challenges. Yes, I get it, grunt work has to be done. But you really can't tell me you're in need of software developers when you actively push people to do the very thing you claim you don't want them to do.
"somehow you're expecting me to stay while my knowledge deteriorates unless I keep it up in my own time, all the while giving lowball raises and not satisfying my desire for challenges"
That's not really the expectation... the expectation is that you'll be replaced by someone younger who's already learned all that.
The expectation for more senior people is that they'll go in to management or architecture.. or maybe burn out.. it doesn't really matter to most corporations, as their workers are replaceable.
This is spot on, and personally, it's the main reason why I decided to move into engineering management after 10 years as a developer.
And even as an engineering manager, I do not feel safe. I think only once you reach director level, you are protected from market hype and newest frameworks trends.
Safe about their current job? Probably not. Safe in the sense that they can definitely get a job at the same level (albeit at a smaller company in some cases) - pretty much
This. Slap the golden handcuffs if possible for the most boring work and give employees extra vacation time to recover from that slog and they might not leave. Most people are willing to deal with some BS as long as they feel compensated and the comp keeps up with changes in the marketplace.
Some really good points on the ultra-fast depreciation of SE tech skills. A relative of mine is a mechanical engineer, well past retirement age and still going strong in his 2-man consulting shop because that's what he loves doing. He works with precision manufacturers, automotive suppliers,... all very cutting-edge stuff, helping them develop new product lines, manufacturing processes,... He says the core skills that he's using are still those that he learnt in university several decades ago.
I was actually surprised because I heard so much about crazy new materials like carbon fibers, graphene, technologies like 3D-printing,... but apparently what makes a machine break are still the same things: mechanical stress, heat dissipation, friction,... new materials and processes might change the coefficients, but not the (mostly Newtonian) physics (my interpretation btw, not his words).
One could say the same thing about software engineering - true fundamental advances in algorithms and data structures are sufficiently rare that it wouldn't be a nuisance to keep up with them. But the %-age of how important those basics are relative to the extremely fast-changing landscape of tools and frameworks is much smaller (plus, one could argue that even the fundamentals see a lot of shifting ground in CS, with neural architectures, differentiable programming, not to mention quantum computing).
I think the skill depreciation concept is a bit exaggerated. Especially in the context of the newness of computers relative to the engineering field in general, the latter which has been around, arguably, for thousands of years.
For example, SQL & Unix have been around since the 1970s.
Linux since late 1991.
Javascript: The end of 1995. NodeJS: 2009.
Sure, there's a ton of churn in the JS Ecosystem, but all it takes is a bit of wisdom, skepticism, and patience to avoid the hype-cycle.
Also, once you learn to build certain things with a programming language you learn the paradigms of the system you build.
For example-- Web Servers. Looking at ExpressJS vs Python Flask documentation, there are many analogous pieces, because they follow the same standards for protocols.
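As a small illustration of that point, here's what the Flask side of such a handler can look like (a hypothetical route, purely for the sketch); the Express version is nearly the same shape, because both are thin layers over the same HTTP request/response model:

```python
# Flask route; the Express analogue is roughly
# app.get("/users/:user_id", (req, res) => res.json({...})).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/users/<int:user_id>", methods=["GET"])
def get_user(user_id: int):
    # Hypothetical payload, just for the sketch.
    return jsonify({"id": user_id, "name": "example"})

if __name__ == "__main__":
    app.run(port=5000)
```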
Another example-- data engineering / statistical computing: Checking out R vs Python packages, there are a lot of the same concepts, just in a slightly different format/language.
HTTP/1.0: 1996 (RFC 1945)
TCP: 1974 "In May 1974, Vint Cerf and Bob Kahn described an internetworking protocol for sharing resources using packet switching among network nodes."
"TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999"
Considering all of this... I don't think most major things actually change that much. Sometimes a popular new framework takes the world by storm, but that's pretty rare compared to the output and churn of the ecosystem.
The issue is that every job I’ve had requires learning a bunch of new shit. Rarely am I just transferring over to the same languages or frameworks. Add on that each company decides different design patterns that they want to utilize and has a different interpretation of what REST is and what HTTP status codes are… it’s a pain in the ass to be an expert in any good amount of time. Expert being one that can dive into the true weeds like cryptic memory leaks that require special profiling tools that aren’t documented anywhere - etc. (and able to do this at a moments notice with ease)
Especially if you’re a full stack eng who is constantly swimming over the entire stack and they keep pushing new DBs, new logging tools, etc.
There are commonalities but it is a lot of learning as you go. I used to know Angular pretty well but now I don’t remember it at all. I haven’t even gotten to really ramp on React as much because my company uses it in such a terrible way that it’s clearly not fit for.
I stopped being full stack for this reason. It's too much effort to keep up with the entire stack and companies don't compensate great full stack devs more than great front end or back end devs. I think it's just a marketing ploy to get unassuming youngsters to spend more time at work being full stack so they can try to pay one person to do two jobs.
From the perspective of whoever is experiencing it, it doesn't really matter how it fits into the historical picture. What you experience is sitting in front of your screen while your family is having dinner, not points on a multi-decade timeline.
In the last 20 years I have seen mostly ultra-fast depreciation of SE _interviewing_ skills.
After the first ten years software development becomes quite intuitive and you internalize all those best practices. You can be trusted to start a new service from an empty git repository. Later it gets incremental, there's a lot of path dependency in languages and frameworks and few things come out of the blue. Those that do are frequently intellectually stimulating to learn.
But interviews have been steadily getting strange and difficult (in a way not related to real life software development), at least in the last 10 years.
Very true. This stuff started at places like Google and Facebook and it makes a sort of sense for them as right or wrong they're very focused on hiring new college grads. With no real work experience to speak of you can do a lot worse than hire the ones that show they can apply their CS coursework to leetcode problems.
But doing the same to workers with 10 years of real world experience doesn't make nearly as much sense. Like hiring medical doctors by quizzing them on organic chemistry problems. Google and Facebook do it because they can and they don't know what else to do, but I don't understand how it became a universal practice.
Yeah. I agree. I have "Cracking the Coding Interview" on my bookshelf. There's a lot of very good stuff in it. I enjoy thumbing through it. But I keep waving it at my boss saying "I will _not_ do this to people with several years of experience ~ This book is _not_ going to be my blueprint for interviewing"
>I was actually surprised because I heard so much about crazy new materials like carbon fibers, graphene, technologies like 3D-printing,... but apparently what makes a machine break are still the same things: mechanical stress, heat dissipation, friction,... new materials and processes might change the coefficients, but not the (mostly Newtonian) physics (my interpretation btw, not his words).
I don't really see the difference between that and programming. Writing code is still a bunch of "if" statements, the same underlying data structures, the same algorithms, etc. There's some new technology being added on top, akin to carbon fiber and the other things you mentioned, but it's fundamentally the same.
You really think that a SE in their 70's who learnt how to write if statements and data structures 50 years ago would say that they're still basically doing the same thing now as back then? Maybe if they work on legacy stacks like the famed COBOL devs coming back out of retirement. But the thing is that what he's working on is cutting edge, not maintaining systems that were built decades ago.
An unsolvable problem. The rapid deterioration of skill will not stop, it is accelerating instead. People's desire for stability as they grow older will not change either.
The only attraction in software development is the relatively good pay. The job itself sucks. You'll spend your life sitting in a chair looking at a text editor, and that's the best part of your day, as about 50% of it is distractions. You're quite unlikely to work on something truly creative or thrilling, so it's mostly a boring grind.
Then, as the article mentions, it turns out the grind was for nothing and the rug is pulled every few years and you have to start over again. The job is cognitively taxing so you'll turn into an absent person that lives in their heads, it drains your life energy.
If I were young now, I'd say fuck it and go install solar panels or heat pumps. It's outside, physical but not too physical, thus healthy. You get to meet lots of people and you see the direct result of your work. There's no office politics and you're contributing to a tangible good thing for the world. Skill requirements don't change much.
You might come home somewhat physically tired (but over time it normalizes), with a clear head and not a care in the world. There's no overflow between work and personal life.
You go to different places and different people, every single day. That's 100% less repetitive compared to sitting at home or going to the office to see the same people.
As for the tasks themselves, it was merely an example, but even for this example I disagree. My brother-in-law basically does all of these things, both for private citizens and industry and comes across a wide array of different situations.
I'm not saying it's absolute perfection, no job is. But I stand by my point that it has a series of very meaningful advantages: far healthier, more social, direct impact of your work, no cognitive overload, no politics.
Go ask people installing solar panels and heat pumps if the idea of working a "cognitively taxing" job where they could make more money, work from anywhere and not be dead physically tired at the end of the day sounds appealing... I don't think you'll hear them all say "screw that, I love installing solar panels". The grass is always greener for sure, but nonetheless software engineering and tech offers a very high quality of life with significant earning potential and a ton of flexibility.
> If I were young now, I'd say fuck it and go install solar panels or heat pumps
Amen. For me, the advice I give the young: go for a trade. HVAC (especially if you live where it's hot) and plumbing are to me the most future- and recession-proof jobs there are. Literally, by the time other folks are graduating with CS degrees and massive college debt, you have been making 70k a year for 3 years and are about to clip 6 figures for the rest of the time you want to work.
One day a computer will be able to write its own code (and that’s not that far off now), but no robot or computer in this world will be able to come to your house and fix a clogged toilet, or replace a blown capacitor in your heat pump, and people will always be willing to pay nearly anything to get those two problems fixed.
Unfortunately, this is also a low cost skill to acquire, so the barrier to entry is low.
As all the mid level white collar jobs keep getting automated away, there's going to be a glut of people in need of income who are more than capable of learning a trade if their ego can handle it.
It's not so much a desire for stability in general as it is the depreciation. I found that piece of the post rather convincing. I don't think most people have a problem with learning new skills in this field. But having to throw away old ones sure feels like a waste, and given the model in the post, it limits how good you can ever get.
Also, sitting and staring at a text editor should not sound that awful for anyone who likes to create via writing code. Because that's how you do it. But that's not what you experience, obviously. It's about what goes on in your head.
I remember the feeling after taking a new job as a fresh graduate. It felt ridiculous that I now get paid for doing the thing I've wanted to do most of the time for the past ~12 years, but I was told that you can't do that all day because you have duties. (Now, the pay was actually pretty bad, even by local standards, as my first job was in an academic research institute.)
I think the problem of ageism wasn't mentioned at all and it should be.
People who love the field have no problems keeping up to date. Sure some job posts will mention Hadoop or Kafka but whatever, a good dev will have no problem learning these in a few days.
Does he get a chance though if he's 50?
Ironic thing here is that Hadoop is mostly already outdated.
Which is btw one of the depressing things for a lot of data engineers: we used to play with those cool distributed processing frameworks, and now? We are mostly writing some terraform to deploy cloud resources, most of the distributed part being handled by those cloud providers.
> Which is btw one of the depressing things for a lot of data engineers: we used to play with those cool distributed processing frameworks, and now? We are mostly writing some terraform to deploy cloud resources, most of the distributed part being handled by those cloud providers.
Sounds to me like switching one provider/tool for another - or are data engineers feeling bummed because the job has become too trivial / less fun?
I am only speaking for myself here, but I am really feeling the switch from "data engineering" to "data ops", for whatever that means.
In short, 5-10 years ago, writing mapreduce / spark jobs (or even debugging / optimizing hive jobs) was complex enough that it was often the job of the data engineer (and not the data analyst / scientist). And I do not only mean writing the data processing logic, but more importantly, properly configuring it so that the resource footprint was acceptable. This required a good understanding of the underlying framework, analyzing the job execution plan, tweaking the resource configuration, etc.
Now, writing distributed jobs is pretty trivial with most cloud providers, hence it is now purely done by data analysts and scientists. And the data engineers have switched to doing more of a devops kind of work, doing the plumbing between the various cloud components and the IaC required to provide those cloud resources to other data users. In short, you can be a data engineer and have absolutely no clue how distributed systems actually work; this will not be an issue in your daily job.
It is still ongoing, but the trend is really toward managed services. Most shops that are still running a Hadoop distribution are doing it for legacy reasons (and I used to work in one).
I mean, just look at job offers: how many offers do you see where hadoop experience is a plus VS cloud experience?
I'm not disputing the fact that there is ageism, as I'm sure there are thousands of examples of it, but there's so much demand and so many different companies. I've worked with plenty of over-50's. Maybe there's a sweet spot for growing companies where you need the experience, which has been where I've worked. Small companies don't need structure, and maybe want cheap employees. Large companies put all that structure in management and a few super senior folks. (though I saw plenty of over 50s in my large company experiences) Medium growing companies need experience. I dunno, just guessing since certain companies I've worked for seem to have a higher concentration of older folks.
Here, this rando website says that 46% of software engineers are 40+
That's a survey though, I'm curious what biases are going to exist in the data. We might assume that older software developers move around less? May be less likely to respond to surveys? May be less likely to visit stack overflow, especially if they do less hands on coding?
I'm with you, I don't know the answer; this is definitely complex.
It could be a combination of things -
a) burnout due to ever changing tech
b) ageism
c) highly paid devs choosing to retire / switch professions early simply because they can financially
40 is definitely not that old anymore for tech, I think. Well, I'm 38, so I'll find out soon.
There should really be a surplus of software developers. Most companies are solving problems they don't have, employing a team of 10x more developers than they need - because of course microservices, Kafka, Kubernetes, you name it - cargo culting the shit out of it.
I feel like the need to continuously learn new things is a feature of the profession, not a bug. It is certainly a difficult challenge, but I've always felt that the most important trait in a software developer is an eagerness to learn new things.
I think the overall premise of the article makes sense, but where I differ from the writer is in how I view the need for constant learning and self-development. The author writes,
> Like a fast, expensive car that quickly loses value as it's driven around town, the skills and human capital of software engineers fall apart without constant, expensive maintenance.
What the author is talking about, however, is not a material object with transient importance to one's life (i.e. a vehicle), it is your mind. Learning new things doesn't just mean you retain your relevance as a software developer, it also helps stave off cognitive decline. It keeps you sharp, relevant, mentally agile. That may not be the case in every job, but the field as a whole certainly has that attribute.
> the opportunity cost of working in a rapidly-changing field is highest for the best learners
It's not that learning is an unattractive aspect (I agree, it's a great feature!). Instead it's that it levels the playing field between those entering the profession and those that have been in it for many years. Yes, you still have "experience" as an advantage, but you can't say you've worked with LATEST_TECH for many more years than someone coming out of college.
Contrast that with other professions, like law, or medicine. Where it's not just "experience" working in your favor. But actual knowledge of the existing laws and medical practices.
That's an excellent point which mirrors my experience as well. My parents are both recently retired civil engineers, one of whom worked for the government/city, the other in a private company. Both are very valued in their professional circles, and even outside them, just because of their reputations.
I've never seen them at home reading a law, a bylaw, or a "teach yourself how to design a bridge in 30 days". Whenever something changed in their profession (very rarely), be it a law or similar, they went to seminars about it. Some of the time they (or their "guild") were even consulted so that stuff ended up in the law itself. When something really new came along in the industry (say some software tool or whatever), the company paid for the trip and the training. The older they were, the more compounding experience and knowledge they had. With each year they were worth more to their respective companies.
Contrast this to my current company (previous ones were even worse), where my boss proclaimed that all new infrastructure is going to be in Terraform (I had no problem with that, but zero experience), and hired a new guy almost straight out of school with 1.5 years of TF experience (and absolutely nothing else, from what I found out later) with much better pay than the rest of the team. Oh, and he said he'll expense us any TF book we want to buy. So here I am, on my 2 week "long" vacation, reading a fat book about some technology X which will be abandoned in a couple of years' time.
> Contrast that with other professions, like law, or medicine. Where it's not just "experience" working in your favor. But actual knowledge of the existing laws and medical practices.
In my experience knowledge decreases very fast if you work in a programming job (exception: if you use an insane amount of your free time to avoid this decrease).
> I feel like the need to continuously learn new things is a feature of the profession, not a bug.
The bug is in our brains. After two decades, you are not the fast learner you were, especially after you manage to fill your clothes with keys traded for responsibilities.
Realizing that is a source of mid-life crisis. Trust me, I've been there.
Also: you're learning the same thing you already learned several times before, except slightly different this time. Things are no longer as exciting as when you learned it the first time, and especially if $new_thing isn't a fundamental improvement over $old_thing, or even worse (in your view, whether that view is correct is another matter) it just becomes a slog.
I found that excitement/motivation is pretty important in actually learning new things. I'm close to 40 now and don't find it's harder to pick up new things when I'm motivated. If anything, I find it easier as I have a broader background knowledge and spend my time more effectively – I didn't believe my teachers when they told me that taking notes helps you remember stuff but they were right and I was a stubborn idiot. All of that offsets the undoubtedly decreased ability of my brain compared to when I was 21. It's just that I've done a lot of things before and find it hard to motivate myself for "$new_thing that's fundamentally just like $old_thing building $new_app that's not really any different from $old_app".
Agreed. I hire interns every year, and I am continually astounded at just how incompetent so many people are who are months away from finishing a master's in computer science. On paper, I wouldn't expect a shortage of software developers. But as a practical matter, more than half of everyone I talk to who is about to finish their CS degree is very clearly not going to last long in this field. Not just because they are incompetent, but because software so clearly doesn't interest them. Right now they think the salary will be enough to keep them happy. Think again.
> so many people are who are months away from finishing a master's in computer science
Because by hiring masters degree interns you're not exactly scraping the cream off the top...
Masters degrees in Computer Science -- unless used as an immigration thing -- make no sense.
They make no sense for the research route -- in CS in the USA, you go straight to a (zero tuition + living wage stipend) PhD from undergraduate. No reason to pay for a master's degree.
They make no sense for the SWE route either.
Masters students are mostly in it for either a domestic degree or because they couldn't find a (good enough) job out of undergraduate.
Most departments know this and treat their MS program as a total cash cow.
I've always considered Computer Science to be a distinct field from programming (or "computer engineering", if you will). I mean, a brilliant particle physicist does not necessarily make a good engineer, an astronomer doesn't necessarily make good telescopes, and a cancer researcher doesn't necessarily make for a good GP.
Not that these people automatically lack the ability to do these things well – some do, some don't – it's just that they're different fields with different training, mindsets, interests, etc.
I would call it "software engineering" personally, but "computer engineering" is adequate, and it has a nice parallel to "computer science". But yes, I think they're distinct, as distinct as chemical engineering is from chemistry.
In chemistry, they worry about the properties of individual atoms, and how those atoms combine to form molecules, and how much energy that takes or gives off, and where the electrons distribute themselves in the molecule, and how that affects the properties of the molecule. In chemical engineering, they worry about how to efficiently make this stuff in multi-ton quantities without blowing up the city, and cost of raw materials, and disposal of waste products, and things like pipe bursting strength. Yeah, they'd better know some chemistry, but they need to know a lot more than that.
In the same way, computer engineering or software engineering is about the efficient construction of larger-scale programs that adequately meet the need they are written to address. Let me unpack parts of that definition.
"Efficient": Well, that's actually a bit of a lie. What I should have said is "somewhat less inefficient", but my description was already long enough. But large scale software construction is inefficient, and the larger it is, the more inefficient it is. The fundamental reason is that brain-to-brain transfer of information is lossy. The bigger the program, the more brain-to-brain transfers involved.
"Larger scale": There's something called the "rule of 10", that says that for every factor of 10 larger the program gets, a new set of problems comes to predominate. You still have at least 10 times as many of the old problems, but you also come to have new problems. On truly large programs, the biggest problems may be transfer of knowledge between generations of workers.
"Adequately meet the need": I didn't say "bug-free". Larger programs have bugs. They have databases just to keep track of the bugs, and steps to reproduce them, and to decide which ones to bother fixing, and to figure out who (or at least which team) should fix them.
I'm not sure that many CS degrees do much at all to prepare people for any of this. But I suspect that 90% or more of people with a CS degree are going to wind up working as computer/software engineers rather than as computer scientists, and so I suspect that there's a mismatch between education and career here.
Where are you from, if you don't mind sharing? I believe it depends on the education system. From my point of view in France, new grads need 1 to 2 years to be "work ready". They just have to learn how working life works.
I suspect it's a result of the bucket being wider and larger - many many people have latched onto the idea that "software" is the way to make lots of money, so people are signing up to learn it.
In the past, CS (if there even was a CS degree/course offered) would be mainly populated by "the nerds" who were really into it, money be damned.
Arguably this could be why the educational requirements around other well-known lucrative positions became so difficult.
I think it's time to start separating out what a software developer actually is.
Is a software developer someone who is willing to learn new languages and uses software to solve problems? Or is a software developer someone who knows latest.js and writes front end code, or someone who writes C code to make the LEDs on the machine blink?
There are two opposing hiring methods:
* hire for positional skills
* hire people not positions.
At a smaller scale, maybe you just need to be a flexible person who can do anything. But how many startups are going to hire a 35 year old C programmer to do JS front end, or type CoffeeScript? Yes, that 35 year old may have learned a lot of lessons along the way, but at their stage in life they're also likely going to cost you more and, in all likelihood, have more responsibilities outside of work.
Myself, I'm a generalist. I have programmed all levels of the stack. Consequently, though, I can't claim deep mastery of many specific areas that a lot of "skill" positions require. On the hire-"people"-not-positions side, perhaps I can fulfill those roles, but in doing so, as I become more senior, I leave a lot of my skills sitting in the toolbelt to do a narrow software task (at a mega corp), and my lifestyle isn't well suited for the grind of startups (many, many hours). Sometimes it feels a bit like a catch-22.
> Is a software developer someone who is willing to learn new languages and uses software to solve problems? Or is a software developer someone who knows latest.js and writes front end code, or someone who writes C code to make the LEDs on the machine blink?
Yes. A software developer is someone who develops software.
I got the impression that there is much less of this in lower level languages. It seems like there is a fairly stable foundation of C and C++ that everything else is built on top of. I wonder if embedded programmers have this problem.
> I got the impression that there is much less of this in lower level languages.
Not only the lower level languages, but fundamental skills in general, I think. Understanding the computer from the hardware level and OS level, as well as network protocols, filesystems, etc., tends to continue to provide benefits even if one fashionable technology is replaced by another.
Similarly, fully understanding algorithms, data structures, design patterns and architectural patterns also generalizes, provided you DO understand these concepts with their strengths, weaknesses, and when to use them and not use them.
If you have the skills above (+ some math and general troubleshooting ability), you are able to approach most software/compute problems from first principles. If so, you may find that you are able to take up senior roles even involving technology you have not used before, as long as the tech introduces few new fundamental ideas. (And if there are new fundamental ideas, you need to learn those to keep up, but such ideas arrive much more rarely than new tech).
People who do not learn these things from first principles, but instead are memorizing patterns they learn from other people, have to do a lot of new memorization when new tech becomes fashionable.
Not only does that take a lot of effort, it also makes it unlikely that they will be able to identify antipatterns by themselves, and it may cause them to end up trying to use the new and fashionable tech in ways it is not suitable for.
This is what I’m hoping for. At least, I’ve noticed on the admin side that lower level tools and system calls seem to be more stable and better documented than the abstractions built on top of them.
I think market pressures make this a difficult fit in the workplace. High turnover (~2yrs) and larger systems encourage shorter term results with a shallower understanding.
This isn’t a criticism (I have so so much to learn still, I’m in no position to judge, nor do I want to be), but more of an observation. Balancing learning churn(frameworks/languages) vs fundamentals(theory, concepts) is a struggle I think I will have for the rest of my life.
>If so, you may find that you are able to take up senior roles even involving technology you have not used before, as long as the tech introduces few new fundamental ideas.
But will you be hired for those roles? Not that many companies will do that I think.
From an outside perspective, as someone who does embedded development as a hobby rather than a profession, the answer is both yes and no. The stability of C and C++ helps, but you still have some churn in libraries and you almost certainly have to deal with vendor-specific libraries (which makes moving between vendors difficult). Trying to bypass libraries doesn't really help on that front, since you are simply trading off churn in libraries for churn in microcontrollers.
In my experience it's the opposite problem. Advocating for modern development practices or languages (even unit testing, let alone Rust) for embedded projects is hard work. Most new embedded projects, even today, are written in C or C++.
How does the language matter? I write software (mostly) in C for a living (the simple man's "embedded" as in "Linux on an ARM board", not the "bits on a micro controller" kind) and I made sure we use test-driven development with proper version control and CI.
We have been trying to use Rust for some new projects but cross-compiling is still much more hit-and-miss with Rust (i.e. third-party libraries) than it is with C. I imagine it would be worse for proper embedded projects.
My comment wasn't saying that modern development practices only exist in newer languages - I agree that all those things are possible with C.
But the language does bring benefits. In my experience modern strongly typed languages with large standard libraries and nice tooling are more productive than C (and probably more than C++, although writing idiomatic modern C++ is quite nice). C's simplicity is nice, but it still exists in a world where your only option for 3rd-party libraries is zips downloaded from some random SourceForge page. Trying to write a C program with effective string handling is an absolute nightmare. All those things are solved many times over in other languages.
As a tool for writing very low level routines handling fixed length data, C is pretty good. But embedded development is moving away from that - every project seems to have some sort of web API, and that's when the downsides of C really start to show themselves.
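To make the string point concrete, here's a minimal sketch (my own toy example with made-up names, not anything from a real project) of what even a trivial string join costs you in C: manual length math, allocation, and cleanup, where most newer languages give you a one-liner:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative helper (hypothetical): join a directory and a file name.
       Every step is manual length arithmetic, allocation, and error checking. */
    static char *join_path(const char *dir, const char *file) {
        size_t n = strlen(dir) + 1 + strlen(file) + 1;   /* '/' plus '\0' */
        char *out = malloc(n);
        if (!out) return NULL;
        snprintf(out, n, "%s/%s", dir, file);
        return out;
    }

    int main(void) {
        char *p = join_path("/etc", "fstab");
        if (p) {
            puts(p);   /* prints /etc/fstab */
            free(p);   /* and the caller owns the cleanup */
        }
        return 0;
    }

And that's the happy path, before you get anywhere near growing buffers or parsing.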
I have the same experience. I can't even get people onto GitHub. We spend thousands a month trying to keep our international teams connected to our local servers and they still have to zip up the project and send it to me over Teams.
The turnover problem, saying that the relative wage advantage declines? Their chart on a log scale shows that the wages are still really high compared to non-CS jobs. A more likely interpretation is that junior programmers are for whatever reason highly paid in their sample. Those error bars are suspiciously tight, so there's a good chance their sample is biased.
The selection problem data? First, they're missing a plot that shows that the same correlation (higher AFQT-scoring individuals are slightly more likely to leave the field) doesn't hold for other majors/fields as well. Second, is the AFQT psychometrically valid when compared across different age groups? Third, is their sample valid here at all? It wouldn't take much bias to make these correlations disappear. The super bright principal engineer is probably not going to take the time to sit an AFQT and be part of this.
Then we have weird assertions without citation like "Some workers, endowed with superior ability, learn faster than others, picking up skills at a quicker pace. Those workers will tend to sort into high-skilled, fast-changing professions initially, maximizing their early career earnings. Less impressive workers will sort into low-skilled, slower-changing professions."
A quick glance through the original paper the data is from does not fill me with confidence. The idea that maybe conventions for job postings in different fields might be different and thus mess up their data doesn't seem to have occurred to them to start with. The NLSY data they work from for AFQT scores can't be used naively. People drop out steadily after the initial tracking period.
So: bad academic research, bad interpretation of it by a layman.
I really wish that I had something comprehensive to suggest. There are some things that I think everyone who deals with statistics should read, such as Freedman's 'Statistical Models and Shoe Leather'[^1] and Tukey's 'Exploratory Data Analysis'[^2]. Pearl's 'Causal Inference in Statistics' is the best introduction I know of to issues of causality as we understand them today. For actual inference, the basis is one player game theory (aka, decision theory). I learned it from Kiefer's 'Introduction to Statistical Inference,' which sets out the theory very nicely in the first few chapters. That's a starting point at least. There are some interesting courses[^3] that try to teach statistics via resampling that seem pedagogically very valuable. Resampling does build intuition nicely and using it gets people over their squeamishness around using randomized procedures.
I read an article some time ago with the reasoning that went like this:
"The goal of software development is to automate things. So, in principle, with time there should be less demand for software development skills because most of the hard work has already been done, and we have better, more high level tools. The only reason we see demand for software developers growing is because we let the bad developers in :) Bad developer can easily generate 2 FTEs per year, by introducing subtle bugs in code that someone has to find and fix. "
Yeah, no. The reason is not JavaScript framework churn. I feel the notion that the software field changes fast is an MBA take on it, since the in-fashion keywords change name all the time.
I guess the underlying reason is that engineers won't be promoted to (real) chief engineers with agency. Being a boss requires reports for some reason. So experienced people leave sweatshops to escape the grind when they get fed up.
> I guess the underlying reason is that engineers won't be promoted to (real) chief engineers with agency. Being a boss requires reports for some reason.
Sadly, this is absolutely true. Manage or be managed. Capitalism invariably becomes corporate (if unchecked, monopoly) capitalism which invariably becomes managerial capitalism, in which people are assessed by their position in the tree structure rather than anything they do, and 0 reports |= loser.
Unfortunately, if you think you can take on a middle-management role and also do technical work, you're probably wrong. Middle management isn't hard but it's super time-consuming, if you do it right (that is, if you care at all about the people you're managing, although you won't have much ability to protect them as you might hope for, because PMs). You'll barely know what the people under you are doing (most managers'll admit this in private) because so much of your time will be spent on meetings and political bullshit.
To me it is acceptable if the management class is drawn from tech people. MBA people are largely.. not the type of person I am interested in being managed by.
Big tech that I work at generally doesn't have MBA type managers and it is good.
> Programming-related jobs have high rates of skill turnover
True, but there are fundamental patterns that remain the same. And if we are talking about the level-of-abstraction type of change, then having experience in the underlying techs certainly helps.
Many software positions do not account for the opportunity cost of learning job-specific knowledge, or soon-obsolete knowledge, making it hard to justify the investment in the long run. With pressure, the human capital will also erode faster, leading to a situation where more fresh blood is needed.
Consequently, a smart software company needs a mechanism to maximize learning RoI for its workers, or else the best choice for them would be to leave (even a job they appreciate). So-called "accidental complexity" is almost always low RoI.
I retired a bit early and I have found it immensely rewarding to take classes in various hard technical subjects. Some of these are directly applicable to jobs I could do, or have done. I think tech employers should be more supportive of life long learning. I get dinged by recruiters constantly because my resume is great but I'd rather do this, TBH. I have worked in FAANG btw. Retired at principal level.
Hey! I'm trying to plan out my career a bit and I was wondering if I could pick your brain a bit. There's an email that can get to me in my profile, if you're willing to give me some advice :)
I think the profession is too new for the research data to really make sense. The sample size of devs with 20y of experience can't be comparable to that of the devs with 2y of experience. I think the statistics from the paper stop making sense at the higher ages.
I'm 26 years old. I have no idea what my future looks like. Neither do other people my age.
> The highest ability, fastest learners disproportionately leave the field over time. They have a multitude of other ways to profitably leverage their intellect and skills. Software development carries serious opportunity cost.
Curious where some HN users have gone when they left software dev, especially if it is something other than management of some type.
This is super interesting. One of the reasons why I got to develop my strongest CS skills in Smalltalk (a purist niche technology) is that I understood that working with it makes it easier to maximize what is timeless and minimize what ages faster than fish out of the fridge (libraries).
They’d still need GPT or AGI specialists, or whatever new type of engineer arises from that. One class of problems may be solved, but it will certainly be replaced by a different one.
Exactly. I was discussing Joscha Bach's theories with a younger psychology student at a family event a couple of weeks ago. It turns out Bach's ideas from AI research are part of the curriculum in his psychology training.
Maybe a decade or two from now, they will hire psychologists to make sure the AGI's remain sane or at least safe, and not at all software developers.
> Perhaps, or maybe gpt models will reach AGI capabilities way before version 42, and able to fill every role in a corporation, from CEO and down.
That’s assuming that corporations are all about delivering the product they sell, and that the "side effect" of the social bonds they generate is insignificant.
Just wait till you have to debug GPT42-generated code produced by offshore subcontractors fulfilling GPT43-generated business requirements beamed straight down from Orbital 4B.
By "debug", you mean shooting up all the drones that ended up being created with some instruction that had as a side effect that humanity would go extinct? (Perhaps something like: "Collect all H20 from Earth, and bring it to Orbital 4B")
Be careful this isn’t an excuse for ageism. Skills like problem solving, debugging, planning, … these are invariants and always useful. The language or tool that is used will change, but the underlying process remains the same.
Medical and scientific fields have continuous learning via papers, conferences, even courses that update people's skills. Also, there are new ideas and methods coming in with new people who then get involved in the skill exchange once hired (latest synthesis ideas vs. picking up medchem).
And it seems to be panic buying and selling in this field - inhaling anything with a pulse 5-7 weeks ago and now trying to lay off half the staff today or next week. Same problems, same market issues.
And if the damned SWEs were valuable they’d give them proper working conditions. As somebody wrote, lawyers don’t do Jira tickets.
> Programming-related jobs have high rates of skill turnover. Over time, the types of skills required by companies hiring software developers change more rapidly than any other profession.
There's a reason I still strongly prefer candidates with a computer science degree to someone with 9 months at a bootcamp for some roles. New tools, frameworks, programming languages, and libraries constantly enter this industry. I believe knowing the fundamentals and having a deep understanding of how computers work allows you to more quickly pick up new things.
I understand there are curious graduates that come out of bootcamp programs who will dive deep and fill in the gaps and there are universities with subpar computer science departments. I still prefer the latter in general cases.
An ass-clown engineering director/manager I had proudly shared a chart (I guess it was from something related to the post) showing that, at the current rate, everyone in the US will need to be a SWE at Amazon for it to continue to grow. I can't tell if that claim is true or not and I don't remember the time frame, but I have a suspicion SWEs will instead have to work smarter with better tools and languages that act as effort "multipliers": 10x languages and tools, perhaps.
With OpenAI (and other comparable technologies) able to semantically understand "write fast and efficient C code for a prime sieve" and generate fairly impressive C code, I'm no longer sure I agree that we will never have enough software developers (watching this made me just a little nervous, as well as excited, about the future of our industry):
I'm very curious about how this "AI writing code" thing is going to turn out.
I have not tested AI myself for producing code, but I toy a little (ok, more than a little) with various GPT instances to write prose. Sometimes it's great, sometimes it's poor, but:
1/ It never gets anywhere: there is never any resolution
2/ Sometimes it just loops, picks up a cue from itself and produces the same set of words indefinitely.
What I do is generate, survey, and then edit. It's a great tool to get new ideas. But how could this work for code that's supposed to accomplish something?
Code is famously much harder to read than to write; that's why people always prefer rewriting to refactoring. With code-generating AI, all that's left for humans to do is the reading, understanding, and monitoring parts.
It's a difficult job, and if done by incompetent youngsters, I think pretty dangerous too.
What OpenAI does is regurgitate stuff it's seen on the internet.
These things are basically a glorified hashtable (with compression).
Much like what a google query does when it leads you to a chunk of code in stack overflow.
Luckily, there's much more to what SWE does, and it's high time people stop believing that AI is at the level where it can do the job of a SWE, it's ridiculous.
Or to put it in a way that's perhaps more clear: go ask OpenAI to rewrite the GUI of FreeCAD to be usable, and see what it comes up with.
My first initial reaction was: did the computer write that code, or is it cribbing code from Stack Overflow? While the latter is problematic for developers who do the same, it is not a problem for developers who have to write original code or where the code has to be verifiable. If it is problematic, it would also say a lot about what the industry has evolved into and would probably be why so many people are leaving it (even if AI could not take over, less experienced and less expensive developers could).
Or for that matter, how often in your career has the problem been an example problem that people use to showcase basic examples of their programming language?
Prime sieve is almost definitely in the top 10 of most written and published programs ever.
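For reference, the canonical answer to that prompt is some variation of the textbook Sieve of Eratosthenes, roughly this shape (a sketch from memory of the standard algorithm, not the model's actual output from the video):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Classic Sieve of Eratosthenes: print all primes below `limit`. */
    static void sieve(size_t limit) {
        bool *composite = calloc(limit, sizeof *composite);
        if (!composite) return;
        for (size_t i = 2; i * i < limit; i++)
            if (!composite[i])
                for (size_t j = i * i; j < limit; j += i)
                    composite[j] = true;   /* mark every multiple of i */
        for (size_t i = 2; i < limit; i++)
            if (!composite[i])
                printf("%zu\n", i);
        free(composite);
    }

    int main(void) {
        sieve(100);   /* primes below 100 */
        return 0;
    }

It has appeared in countless textbooks and tutorials in essentially this form, which is exactly why it says so little about handling a novel problem.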
Interestingly enough at the end of the video he asks the AI to write some code that makes money. And it responds with "maybe try investing or not spending money". This is much more in line with the type of questions I get asked in my career and somehow I doubt I would still be working if I had answered the same way seriously.
This is a super trivial example of a very simple algorithm that doesn't have to interact with any other systems. It's almost certainly copied directly from the training data as well, which is fine for an educational toy example, but generally a violation of copyright.
I've been quite happy with Github Copilot but it is not remotely useful to a non-programmer.
> perennial labor shortages unless the pace of change slows sufficiently
This frames it as a one-way cause and effect ("unless"). I wonder if the "dropout of fast learners" that _causes_ the labor shortage will in turn _cause_ the pace of change to slow!
I think it follows that an industry full of slow learners might have a lower propensity for introducing change (i.e. developing new frameworks) and adopting change (choosing a new framework over one I'm already comfortable with).
I started programming in "high-level languages" like Pascal, Lisp, Smalltalk and Prolog. I then had to do a project in C. It felt clumsy at first. But then I got it, the pointer arithmetic, and I felt empowered by it. I felt "closer to the machine". The machine felt more like a friend. I felt "C doesn't lie" because its computational model corresponds so closely to the hardware. Just saying most any language can be fun.
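A tiny illustration of that "closer to the machine" feeling (my own example, assuming nothing beyond standard C): walking an array really is just stepping an address.

    #include <stdio.h>

    int main(void) {
        int data[] = {3, 1, 4, 1, 5, 9};
        int sum = 0;
        /* p++ advances the address by sizeof(int); nothing is hidden. */
        for (int *p = data; p < data + 6; p++)
            sum += *p;
        printf("sum = %d\n", sum);   /* prints sum = 23 */
        return 0;
    }

Whether you find that empowering or clumsy probably depends on where you started, which was the point.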
There isn't a shortage of software developers. The pay isn't great--the kids getting $300K packages out of Stanford are an exception; it's a marketing expense. In fact, if you control for the level of intelligence it takes to be any good, SWEs make less (and get far less autonomy--do you think lawyers work on Jira tickets?) than any other professional, and they hit a salary/challenge plateau quick... after which, they increasingly face ageism.
People leave because the job sucks. I don't mean that programming sucks; I quite enjoy it. Computer science is still an engaging field worth studying, and research programming isn't bad at all. Corporate SWE is pretty awful, though; you get paid far too little and treated far too shabbily to justify spending hours dealing with bugs and bad decisions that exist not because you're working on hard problems (after all, I've generated my share of bugs and bad decisions) but because of inadequate processes, mindless cost-cutting, generic incompetence, and an overall lack of care, especially at the top. All of this, to make a barely middle-class salary while people who are already rich and connected make millions off my work? No thanks. 2.25/5, would not do again.
Cry me a river. Software development is some of the easiest, highest paid work in history. The majority of the population is grinding away in tough physical labor or abusive service jobs getting paid pennies while software devs moan about only making low six figures and getting a few boring jira tickets while they sit in their home office reading hacker news for half the day.
On one hand, I agree with you that on the whole we're a bunch of spoiled crybabies. Totally grant you that.
On the other hand, saying that the majority of the population is grinding in tough physical labour is just not true. Most other jobs are generic office jobs that don't need to be done (just like 80%+ of software engineering jobs don't need to be done).
Most of us are just doing things for money. Are most people working hard? Not usually, because it mostly doesn't matter. Spreadsheets idle in inboxes, meetings lead nowhere and achieve nothing, brown-nosers and family members get promoted into jobs they're incompetent at. And the world continues to spin regardless.
Also, have you all tried making a non-swe career work lately? How about for 5+ years?
In the last two years I've worked with hundreds of bootcamp students who have reached the end of their rope with 20th century career paths that no longer make any f-ing sense. From teachers to construction workers to bartenders to graphic designers. Not all of them have a deep love for writing software, but all of them recognize that it's one of the only viable paths available if you don't want to have roommates into your 40s.
As my best friend told me in an abandoned bay area parking lot before I switched careers to swe, there just doesn't seem to be any other reliable way for people in our generation to make a living. Maybe it only applies to california, maybe it only applies to millennials, but having been on both sides of that fence now I 100% agree.
I saw this as well, as someone who hired bootcamp graduates for a couple of years. The people that got into bootcamps came from all sorts of professions. Most of them had a Bachelor's degree (this is in Mexico) and had already worked in their profession for some time.
There were people from Tourism, Law, (non-software) Engineering, Biologists, Chemists, Philosophy, among others.
Most of the people in those areas were burned out from REALLY being overworked, underpaid and treated like trash (you should see how law firms treat their interns).
> there just doesn't seem to be any other reliable way for people in our generation to make a living
I suspect this may be exaggerated, in that there is still demand in other career paths, they just aren't as appealing for various reasons, e.g. manual/technical trades with reasonable pay but some physical labor involved or a long apprenticeship period
I do worry we might be flooding the field and siphoning talent from sectors that matter, even as the proportion of software jobs that don't really need to exist balloons
this seems like a recipe for a lot of disappointed people in need of retraining once we realize code can't eat the entire world, and in the meantime it feeds the bubble cycle, pointless or abusive "innovation", and speculative nonsense software is plagued by
We need data for this. More importantly, we need data for how tough it is to get certain jobs. That means stuff like:
* Time spent on finding a job
* Time spent on studying in order to attain a job
* Actual amount of hours worked (hard to get accurate data on it)
* Actual amount of effort per hour (hard to operationalize)
It's a very tough discussion to have, but I have a gut feeling that you're simplifying too much and are partially wrong. But I can't even give evidence that you might be wrong because I don't have data. So at best I feel we're both blind and we don't have a one-eyed king that can see! ;-)
SWE is easy as fuck for the amount of money it gives.
Compare that to physics, chemistry, bio or mechanical engineering.
When I see SWE saying they know 10 different languages I see someone who is bragging that they know how to sum
1+1
1+2
1+3
(...)
1+10
No wonder people with no degree can learn how to code in a few months and get a nice paying job. Try that with any of the others I mentioned and you get nothing. Not with 1 year. Not with 2 years. Maybe with 3 years of studying.
Unsure if the conclusion here naturally follows your premise.
Doctors require lots of qualifications which is a barrier to entry into the career. This significantly drives up compensation.
Are most doctors really more than human frontends to webmd? Probably not.
Source: my doctor told me this (really).
Extending this to SWE. A degree may not be necessary to enter the field anymore, but I don't see how it should be less of a requirement than for a regular consulting GP. Both are expected to understand fundamentals well, and both can cause harm if they do their job badly - I'd argue the SWE can cause a lot more harm to be honest.
If it seems like I'm suggesting qualifications should be mandatory for SWEs, I'm not. I'm saying other professions are revered in a way that we aren't - and I don't see the evidence it's especially warranted.
We can get statistics for some of the parent's claims:
> On the other hand, saying that the majority of the population is grinding in tough physical labour is just not true.
10.3% of US jobs are classified as "physically demanding". [1] (I didn't see parent said "labour" until after my research, but I expect figures in the UK are similar). Assertion is TRUE, the majority is not doing "grinding" labor.
> Most other jobs are generic office jobs that don't need to be done (just like 80%+ of software engineering jobs don't need to be done).
Hard to say about the parenthetical, and it is impossible to say if the jobs don't need to be done without running an experiment, but some research has been done on whether people think their job needs to be done. A study of the European Work Commission Survey [2] showed that in 2005 only 7.8% of people responded that they were not doing useful work. In 2015 even fewer, 4.8%, felt they were not doing useful work. (The UK reported slightly higher at 5.6%.) A "recent poll" from an article dated 2017, which links to a 404 for the poll, claims that poll said 37% of Brits felt their jobs were useless. [3] Even the originator of the "bullshit jobs" book thought that 20 - 50%--maybe as high as 60%--were useless. Assertion is probably FALSE, most jobs have some utility.
> Most of us are just doing things for money.
Yes, and so what? Does that make it not worthwhile?
According to a Pew study from 2016, 49% of Americans are "very satisfied" with their job (59% of people whose family incomes were over $75k), and about half said they viewed their job as a career. 51% said their job gave them some sense of identity (a higher percentage with more education), while 47% say the job is just what they do for a living. However, those working in non-profits or government, or the self-employed, were about 62% likely to say their job gives them a sense of identity, while only 44% of those working at a company said the same. Assertion is PROBABLY FALSE, depending on the definition of "most" and the intent of the claim, but the statistics certainly do not make it a definitively true assertion.
I’m honestly impressed at the extent you’ve researched what I said to this level.
My guess (and it would only be a guess, as I don’t see how it would be possible to really know either way) is that for many of the respondents they could well be kidding themselves about how necessary their job actually is.
As an example, Bob might feel his job really matters, and it might within his company, but his whole company could be an also-ran or a quango that the world wouldn’t miss if it didn’t exist.
All of the above aside, how would you feel about coming to my house while I watch the news and subtly letting me know when I’m being fed alternative facts?
I’ll provide unlimited tea and biscuits. Don’t keep your gifts to yourself!
I've met many people in life that trust their gut. Without anything else, from what I've seen, people trusting their gut is a net negative. With that said, that's mostly because it's a bit of a cultural thing that trusting your intuition is a smart thing to do. However, there are enough people out there whose intuition would kill them if they only trusted it (e.g. drug addicts, though not limited to them). There are also less dramatic examples that I've seen (women and men consistently finding the wrong partner). So on average, a net negative. Since you're a random person on Hacker News making some bold claims, yeah, I need something stronger than just your intuition.
Qualitative evidence is just fine. We might want to stop exclusively fetishizing numbers, and instead of complaining how hard it is to find data, do something to fix the problems right in front of our face.
Well, that's all dandy. But when someone actually shares something it's "cry me a river". Their contribution is summarily dismissed because it's not the Right Evidence from the Right People.
I can't get on the "It is a hard-knock life for Software developers" train. We are very-very well compensated, and we are insufferable.
I do understand the issue. Humans are prone to complain about any slight whether real or imagined. Millionaires will complain about not having enough money, A-list celebrities will complain about not having enough visibility, sports stars will complain about every foul play.
Money just does not lead to life satisfaction, only craving. I just have to accept that this is the human condition.
> I can't get on the "It is a hard-knock life for Software developers" train. We are very-very well compensated, and we are insufferable.
The compensation can be high if you work in Silicon Valley (even though the cost of living is high there). The standard corporate programming job is already paid much worse. Also in a lot of countries that are not the USA, software development is not such a well-paying job.
I don't think there are that many countries where software developers aren't reasonably well compensated compared to the local standard of pay. And that is the level that we should really compare to. Is the pay above the median or in the top 25% for the location? I would guess that it is for most of the world.
My brother in law lived in Austria, he was paid similarly to any other job there, there was an impassable ceiling, so he moved to San Francisco and makes 10x.
I looked for software development jobs in Spain, and they pay even less than in my native Uruguay.
And forget about six figures unless you're in the US, England, Switzerland or Australia or a top company in Europe.
The average SWE in America is making $93,000. Average teacher is about $65,000, Average construction worker $38,000, average bartender $65,000, average hotel worker $45,000.
Most of the people I once worked with in the service industry had degrees though. Sure, not compsci, but something.
Also, most construction workers I've known made way more than that average, often close to 6 figures if not above it. Of course I'm in a big city, which may skew things.
Compensation does not reduce the type of suffering software developers typically endure (or complain about).
Office politics, skills evaporating, high work pressure, ageism, boredom, physical inactivity, an indoor life, over-consumption of information...all of these factors have their effect on one's mental state as well as body.
> I can't get on the "It is a hard-knock life for Software developers" train. We are very-very well compensated, and we are insufferable.
This. People don't feel sorry for people who are in the top 20% in terms of salaries, when they complain about not being in the top 5%.
About a decade ago, or a bit more, I noticed that the crowd over at Slashdot started complaining in similar ways, either that some MBA was compensated better, or that H1B holders were suppressing salaries.
Maybe I'm prejudiced, but it seems to me that this kind of thinking is common in mediocre developers who are disappointed that they are stuck in an average-or-below paying programming job from around the age of 35-40 on. Maybe their salary even went down a bit, in real terms, after the latest downturn.
I simply have trouble empathizing with people who have it better than most other people, but still complain like that. If they were industrial workers that started out low (at least for the country), but still lost their job to outsourcing to Asia, and were unable to get another, I would empathize a bit more.
I don't think it's the job of governments to protect top 20% earners from competition from abroad, and get fed up when entitled people demand such protectionism.
I stopped following Slashdot because the discussions often turned into something I would expect in a labor union forum, instead of focusing on fresh perspectives in tech and science.
But if it is a generational sort of thing, I suppose that is why it is becoming more common on HN about now.
Even the website itself states this pretty much on top:
> Do still be polite, and feel free to have social conversations!
Given that, I don't really see what's impolite about it, especially in an async conversation, where the other person is not expected to be on their feet, waiting for incoming messages.
Don’t mix up engineer culture with being spoiled. Engineers will find ways to trade social boilerplate for efficiency whether they get paid 500k or nothing at all.
> Software development is some of the easiest, highest paid work in history.
Nope this is an outdated opinion. You can do just as well or better in trades (source: my electrician buddy). If you’re referring to the FAANG salaries (sub 1%) you’re comparing to doctors, lawyers, entrepreneurs in terms of opportunity cost. Your average joe programmer isn’t killing it like you seem to think, and they’d do just as well in trades or middle management.
The cynic in me thinks this is an opinion promoted by employers, just like the BS “labour shortage” headline.
Do you actually have data to back that up, or just one anecdote from a friend? Median pay for electricians is about $57k/year [1], and that's after 4 years of apprenticeship and passing the exam. That's compared to a median salary of over $100k for software devs [2]. Sure, you can make a lot of money as an electrician, HVAC person, welder, etc. if you own and run your own business, but your average tradesperson is not in that category, just like your average software developer isn't running their own dev shop.
There's a huge difference in where plumbers vs. SWEs live, though. You have to compare the actual purchasing power. SWEs are more likely to be located in HCOL areas and therefore have higher salaries because SWEs need the infrastructure cities provide. Whereas there are plenty of plumbers who live in rural areas.
I have no idea how that comparison would shake out, or if the rise of WFH for tech people would/will change it, though.
Yeah, but in trades you have to climb in hot attics, deal with sewers, work outside, etc. And you have to put in a full days work. How many of us developers actually put in a full 8+ hours? I know that I don’t. I can usually get my assigned work done in about four hours. Can you imagine an electrician or plumber or doctor or lawyer only working 4 hours a day? It’s insane how much I get paid for what I actually do.
Ha! My father is a doctor and in his 70s. He says he is retired because he only works 60 hours a week at the hospital emergency room. According to him, 60 hours a week is being retired since it is so few hours for a doctor.
My brother is a doctor, while he works insane hours, it seems there's a LOT of downtime between jobs, where he's technically "on the clock" but reads ebooks or watches series or socializes.
I technically work 12 hours as well as a developer, but actual work might be 6 hours or less, and it's way less stressful than a doctor.
Well, my buddy does electrical for new builds and most of his time is spent waiting on other shit to get done. Not even joking, they want the electricians there just in case they can stick to schedule, but of course they can't.
Also, sitting in a chair in an office all day comes with a lot of health issues. And they’re insidious in that your body doesn’t immediately tell you how badly you're damaging it.
I don't have to imagine them, because I see them while they're working for me or while talking to my friends who do the job. They go from coffee to coffee, work hard 6 hours for a day then take 2 days break, take 4 hour lunch breaks, say they are gonna come then come 2 days later, and many other stories. Now ALL of them hiked their prices 2x to 3x no matter the craft because of course THEIR material also "dramatically rose in price". Plumber, painter, glassworker, woodworker, electrician, you name it. Some things actually rose in price, like steel or wood, but some rose only 20-30% and of course they double the price of the work after the price of material "doubles".
Have you seen the way somebody who has been in the trades for 20 years walks? They have a very particular kind of gait. You can easily tell them apart from the weekend-warriors walking into Home Depot just by their stride. It's the walk of a body that's been absolutely wrecked by physical labor.
They all have some combination of: odd posture, odd gait, a dry cough, or bad skin. You can talk about ageism in the tech world, but I worked as a tradesman for a bit of time when I was young, and there were not many past 50, much less 50 and healthy. Over lunch they all recounted their health issues, which made me quit that summer and get an office job.
You can do just as well in the trades - if you are a master (whatever the top rank is) working 80 hours a week. Right now the trades are in high enough demand that you can get that overtime, but that will change and then you are back to hoping you can even get 40 hours. In the meantime working for a boring company is a 40 hour a week job with great pay. FAANG is much higher pay, but they are also more hours per week in general (or so I'm told).
Remember that a high-paid software developer is unlikely to have poor electrician friends, so by definition any of their friends are likely to make as much or more money.
The trades can be a good life, especially since you have much more freedom in where you can be located, but to pretend they average as high as FAANG is silly.
The point is that trades like electrician or HVAC get paid well, but it is much harder physical work. I just had my AC replaced and the guy was sweating through the day to put everything together. I paid well, but that money is earned much harder than a SWE salary.
This seems to be a perspective that's exclusively Silicon Valley (perhaps large parts of the US) based. Not at all true in Europe, at least. Not that it's a bad job by any stretch of the imagination, but even low six figures is really rare in the EU for software development.
"Easiest" is also highly subjective; if your job is making WordPress templates/sites then sure. If you're working on more complex systems then this statement is horseshit. I've done physical and service jobs that were both easier than the software development I do; the only difference was that the physical job was also physically exhausting.
My company has noticed that good developers in Germany are cheaper than in India. Good is key here; you can get bad developers in India for very cheap (which might be good enough). The highest paid software positions in Germany are non-union jobs, and the majority are not willing to leave the union for more money, which means they are refusing to do the work of a senior engineer, and this in turn means we can't use the cheap labor for lack of leaders.
It's not taxes. The taxes are high, but the gross salaries in the EU and even in the UK are very low compared to the USA. You don't have stuff like a developer making 200K a year but taking home 50K a year; it's more like making 60K a year and taking home 40K.
IMHO it's the culture and the VC environment. There's simply not that much money around to throw at moonshot projects, therefore you don't have many unicorns that lose a few billion euros a year paying extravagant salaries for talent. Obviously the EU is a very rich place, but people with money invest it in things that are profitable right away, or likely to be profitable in the short term because a giant company is behind them, or they invest in US companies if they are more adventurous.
All I know is a lot of great engineers at my company are intentionally refusing more pay at non-union jobs. I have no idea if it applies to other companies, or what the benefits are.
Anecdotally, I've observed (over a 30-year career) that less than half of the people who have the qualifications can actually produce halfway decent code (as in code that doesn't crash the first time it's used or introduce new problems).
I’m from a small Australian city. I earn in the top 5% of the country with 8 years experience and no degree. And I see all my friends who picked software in the same situation.
It’s probably the most privileged position possible.
Maybe it's true in Australia... however, here in Germany, for example, your chance of being a top earner if you have no degree is near nil. Usually you are expected to have a university degree for tech jobs. There are also some without, but they're the minority and their promotion prospects are a lot worse.
I was thinking about moving to Germany from Poland. I was surprised that devs there have such low wages in comparison to the cost of living. I abandoned the idea after it turned out my QoL would decrease quite a lot even if I'd get more money than I have now.
I'm curious what numbers you've heard, my quality of life here is significantly higher than it was in Warsaw, but I was making relatively much less back then.
I'm the "worse" type of developer - I'm working with test automation so salary is a bit lower than for "normal" devs. But what I've heard 60k EUR is what I can get (I was reading about Berlin - I've also read that e.g. Dresned has the same wages as I have now).
But this money will be for my whole family (wife is staying home with childrens right now and for some more time before they'll go to the kindergarten) so I also will need bigger appartment (=more expensive).
And after my rough calculations it turned out that our QoL will probably drop. Not that it will be _bad_ but visibly worse than it's now.
This is maybe true for big, traditional German corporations like Siemens or VW, but it's absolutely not the case for any of the start-ups or any of the big tech companies from the US.
And especially not true after 2020, when so many more companies are willing to hire people remotely, often for close-to-US-levels of pay.
I have to agree with GP. I know many developers in France/Germany at the Master's level, and although their job is far from being the worst, their total compensation is rather in the top 30%. Not bad, but far from six figures.
Same here. I'm from Poland, I've got 6 YOE, no degree, and I'm also in the top 5% of earners. No other job would let me achieve this in such a short time.
You can earn similar money as a doctor, lawyer or, let's say, plumber, BUT you have to do at least one of:
a) work 80 hours a week instead of 40 (and let's be honest, none of us work 8h a day).
b) do hard physical work instead of sitting in a chair
c) be responsible for people's lives (and face jail in case of failure)
etc
I suspect this was simply so obvious that OP didn't feel it was worth mentioning. If your goal is making a lot of money in software development, you'd be a fool to stay in Europe. You have to move to the US. That's where the money is.
There's absolutely zero point venting out frustration on fellow working class members when upper management and up are actively trying to reduce costs by cutting headcount and/or wages relative to profits.
This race to the bottom is how software devs will become the new "teacher shortage", and how wealth will continue to funnel to the upper classes with those on the lower end unable to climb up.
Yeah indeed. Coming from someone who had to work with physical labor before I started programming, software developers don't know how good they have it.
"The pay isn't great", "treated far too shabbily", "dealing with bugs and bad decisions", "inadequate processes" and so on, sucks when development is the only thing you've dealt with, but you have no idea how it is to actually have a blue collar job if you're actually complaining about those things.
I think cs137 and others like them should try to have a part-time job at McDonalds (or whatever, as long as it is not in front of a screen), because it will make you love your software engineering job again.
My first job was fast food too. Except for the pay, the grease, the unpredictable and strict hours, and the cleaning of bathrooms, I'd love to take a job like that up again.
There's stress, but it ends at the end of the breakfast/lunch/dinner rush, not a constant low-grade stress over the whole day and often lingering into the night from unfinished JIRA tickets.
Then there's mostly chatting and hanging out with interesting people, either kids with dreams or adults with off-the-beaten-path lives (not an endless stream of white collar adults who only have stories about how they went to a bbq or just had another kid) while cleaning or prepping food, helping a customer here and there. Some customers were assholes but they'd be gone a few minutes later and you'd go back to other things. And because it's a public facility sometimes my friends would stop by just to say hi and shoot the shit for a few minutes.
And I was much healthier then too, despite working fast food. Mainly because I spent my day moving instead of being stuck in a chair.
I also worked a retail, a warehouse, and a factory job. The retail job was even better because you didn't have to deal with the grease or cleaning bathrooms, and the customers were somewhat nicer. If it paid remotely near what I make now I'd probably switch to that tomorrow.
Factory job was probably the toughest. More isolating, no A/C in the summer, more constant stress, more physically demanding, mandatory 10 hour days for weeks sometimes, and there was an incident where a drunk forklift driver almost knocked a tower of heavy steel racks on top of me. I quit the next week.
Any white collar job looks better than McDonalds. It’s a ridiculous comparison.
I could say McDonalds workers don’t know how easy they have it. They should try picking fruit as a seasonal immigrant, because it will make them love working at McDonalds.
And those seasonal immigrant fruit pickers don't know how easy they have it. They should try being kidnapping victims chained in a basement, waiting to be tortured to death by an axe-wielding maniac.
That's the problem with "you can't complain, somebody else has it worse" - I can always think of somebody who has it worse.
> Any white collar job looks better than McDonalds. It’s a ridiculous comparison.
I've had a colleague who was paid worse as a software engineer than his previous job as a McDonald's burger flipper. I also have software engineer friends who were paid minimum wage as software engineers.
I learned from that that having a high-value skill means nothing if you're not willing to take action to extract that value.
> Software development is some of the easiest, highest paid work in history.
I joked with a girl yesterday when she asked me what I did. I told her: "I'm a dude that's paid way too much to sit in his living room and make websites and apps".
Not everyone will make $300k per year, but you'll probably still make way more than most other professions. Teachers make like $70k in good places, oftentimes much less. Here we are, sitting in our pajamas at home, changing the background color of a button and making twice as much.
So many things I have to disagree with. Not all software engineering is easy, particularly the highly paid variety. The six figure salary is diluted by spending weekends constantly having to learn new technology off the clock. Companies then expect senior engineers to mentor other engineers and give presentations of their technical challenges, debasing the value of the knowledge gained working off the clock. Manual labor jobs don’t typically require the student loans software engineers have to pay off, further offsetting the salary gains by years. Manual labor workers get to go home and be done at the end of the day and aren’t tacitly expected to be available at all hours. It may not be the worst industry but it definitely is not a fun industry to work in at a lot of companies. It has one of the worst interview cultures on earth probably and the more years I work the more I see the obvious flaws to the point where I want to leave.
>The six figure salary is diluted by spending weekends constantly having to learn new technology off the clock
You don't need to do this forever. Usually you can just focus on a stack and related technologies and do well. Once you have experience and a clearly defined need, ad-hoc research is good enough.
The thesis of the linked article is that you do. For the most part, it matches my experience. I'm 30 years in and already wondering how useful the Hadoop/Spark/Scala stuff I spent the last few years mastering is going to be in the next 5 years.
On the other hand, a C or Java engineer can probably be pretty confident that there will be jobs for the foreseeable future. Pretty much the Lindy effect in action.
I agree with your sentiment but question some of your examples:
* Scala introduced us old Java hands to a whole different world of modern languages. If you know it, you get Kotlin or the latest Java changes for free (and probably TypeScript-like languages in other ecosystems too).
* Spark introduced a generation of backend developers to distributed query engine technology (my generation is unlikely to delve into the Postgres codebase, in comparison) and made ETL trivial in real-life systems.
Hadoop clearly died in the last few years or so (outside of EMR, where it's mostly invisible anyway). But it's a perfect example of a complete technology lifecycle - it had a good run for a decade starting around 2010, which in our line of business is an incredibly long time. Not to mention how many things about distributed systems people like me learned from that stack over time.
Well, I think the parent is comparing software engineering to other white collar professions like lawyer.
In my opinion comparing SWE to blue collar work like construction is apples to oranges. Ofc the blue collar work is harder and more demanding physically.
It’s better to think about whether software engineers are being compensated fairly relative to other professions like lawyers.
I think they are, given that wages seem to be governed by supply and demand. On one hand I don’t make as much as my sister, who is a doctor. On the other hand, I make enough/comfortable money and don’t have to deal with the liability/responsibilities/obligations of being a doctor or lawyer, and I can work from bed naked should I choose to do so.
You also don’t need to spend years in a very expensive education with tall entry barriers while taking on massive debt. But also, you had the info on doctors' salaries available and still chose software.
I mean - this is YMMV. Typical SV FAANG engineer is from an Ivy League or adjacent competitive school. Often with a masters.
So, yes, you might not have to do the full specialty training and residency but it can still be quite competitive and expensive.
You’re only earning surgeon money when you’ve made staff level at FAANG. Which usually means you’re near the same age as surgeons and there was a lot of risk and grind getting there. If anyone thinks getting to staff at FAANG is trivial - I’d suggest they’ve been very lucky in life and aren’t a representative person of how hard it is.
People who could do either often go into software because earnings are effectively uncapped. Theoretically you could start your own company and be ultra rich. That’s what I see often as the source. Less common with doctors, afaict.
This is kind of a meme. I'm not sure what you consider to be "surgeon money", but senior engineers at FAANG nowadays net ~450k/year (annualized). As a single person you can retire at 40 with ~5m in the bank by starting at FAANG at 22 and plateauing at senior (not staff) after 5-6 years. So you're hitting that 450k/year level at, like... 27? Earlier if you're working at a company like Facebook, which promotes aggressively (many people make it to senior in 3-4 years), maybe later if you're at Google (where it's non-trivial getting to Senior in 5 years; many take up to 8, some don't break through at all). At 27 you're just starting your residency, working 100-hour weeks for less than six figures. Lol.
FAANG hires way too many people to hire exclusively (or even primarily) from Ivy League schools. Jointly they employ probably 5% of the software engineers in the country! Master's degrees are likewise totally unnecessary; I almost never see anyone who isn't here on a visa getting a Master's. (PhDs are a different story, but I know people without even undergrad degrees working at FAANG too.)
$450k/yr at FAANG for senior is definitely the very top of the band, and getting there after 5 years is extremely optimistic. Again - if you're comparing Stanford graduates then maybe it's more common, but your average FAANG eng isn't making that 5 years out of college, especially with just 40-hr work weeks. Very few engineers are making that money even after transferring to another company. Maybe if you have some stock appreciation or stacked refreshers working in your favor you're at $450k/yr. Levels backs this up...
Surgeon money is generally $500-700k/yr (this might be my sampling bias - looking at stats online, it varies a lot - this is like tech incomes, there's a wide distribution). The nice thing about surgeon money is that you can make that in a lot of places - not just in SV. The same cannot be said of FAANG. You're gonna have a hard time getting remotely hired in BFE making $400k+/yr. Yet, the local hospital always needs a surgeon and most surgeons don't want to live in BFE... So, you're gonna be fine.
Even then, if you assume you're making $450k/yr at 27 - you're not gonna have $5m by 40 unless you somehow beat the market or save every penny and live like Scrooge. (Which - again - why bother saving $5m if you're gonna live like a peasant?)
If you assume you're gonna live like you would on a safe withdrawal rate (let's say the more optimistic 4% - $200k/yr at $5m) then you're only gonna have $250k/yr gross income to save - which turns out to be ~$150k/yr net. (You can play with the numbers however you see fit but the point will come around.) If you decide to save $12.5k every month for 13 years at a 6% return rate (just a simple return - we assume same value dollars) then after those 13 years you have $2.9m. It would take 19 years until you crossed $5m. So, you'd be 45-46 assuming you saved every penny, never went above a $200k/yr income lifestyle (LOL at that in the Bay Area - you're a renter for life!), and you somehow made $450k/yr starting at 27. Which is - again - uncommon. It has happened due to wild stock appreciation, but it isn't the norm for offers people receive at the senior level - Levels backs this up.
It sounds nice until you do the math and realize $200k/yr is shit to live on here. God forbid you marry someone who isn't in tech/law/finance/$400k+ income.
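For anyone who wants to check the compounding arithmetic in the comment above, here is a minimal sketch. It assumes a flat 6% nominal return, $12.5k saved per month, returns applied to the year-start balance with contributions added at year end, and it ignores taxes, raises, and market variance; the numbers are illustrative only, not anyone's actual compensation data.

    # Rough check of the savings arithmetic above (illustrative assumptions only):
    # save $12,500/month, earn a flat 6% per year, contributions added at year end.

    MONTHLY_SAVING = 12_500
    ANNUAL_RETURN = 0.06

    def balance_after(years: int) -> float:
        """Year-end balance after saving MONTHLY_SAVING each month for `years` years."""
        balance = 0.0
        for _ in range(years):
            balance = balance * (1 + ANNUAL_RETURN) + MONTHLY_SAVING * 12
        return balance

    def years_to_reach(target: float) -> int:
        """Number of years of saving until the balance first crosses `target`."""
        balance, years = 0.0, 0
        while balance < target:
            balance = balance * (1 + ANNUAL_RETURN) + MONTHLY_SAVING * 12
            years += 1
        return years

    print(f"After 13 years: ${balance_after(13):,.0f}")    # ~$2.8M, close to the ~$2.9m above
    print(f"Years to $5M:   {years_to_reach(5_000_000)}")  # 19 years, matching the comment

Shifting the contribution to the start of each year, or changing the return by a point, moves the end balance by a few hundred thousand dollars, which is exactly why the two comments above land on different conclusions from similar inputs.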
- 450k is not, actually, top-of-band for senior engineers right now. There are some companies where that's approximately true (i.e. Google) and some where the number is much higher (Netflix goes without saying, Amazon routinely breaks 500k, Uber is hitting 480-500k, Cruise breaks 500k, Snapchat hits 550-600k at L5, etc).
- Well, since I'm looking at annualized compensation, I am considering stacked refreshers. I don't think job-hopping ~3 times is an enormous ask.
- I certainly wasn't assuming market-beating returns, or living on ramen. But I also wasn't running the numbers such that one would be spending the equivalent of one's safe withdrawal rate pre-FIRE. I live unconstrained by budget concerns and don't even come close to spending half that. At 450k pre-tax I was assuming one would be saving ~200k/year, with 7% returns (rough sketch of this at the end of this comment).
- I was also not assuming that one was starting from 0 at 27, since it's not like pre-senior one would be making barely enough to live on. Having something in the neighborhood of 500-600k saved up by then is something like the default outcome, given a typical single person's lifestyle (absent extravagantly expensive hobbies).
- Ok, sure, this gets you to 4m, adjusted for inflation (oops). Guess you gotta hit that third decade (barely).
- You aren't limited to SV; many companies in that tier support remote work and those that don't have a bunch of hubs across the US.
I'm genuinely not sure why you think that 450k is an outlier level of compensation for senior FAANG/adjacent engineers. One possible source of confusion - if you're just looking at the "Average Total Compensation" levels lists for those companies, those numbers are very low. Filter by "New Offers Only", and then remember that those offers don't include refreshers (which, over the course of the first 4 years at a company, will add something like 360k = 90k/year, assuming totally average performance and no multipliers). It might be in the top 50th percentile, though even that may be pessimistic.
As for 200k/year being shit to live on... I assume you're talking about living in Silicon Valley, which, shrug? I don't live in Silicon Valley, and before I decided to substantially change my career direction I was on track for exactly that kind of timeline (and that despite not even hitting 6-figure comp until a few years into my career).
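Here is the same kind of back-of-the-envelope sketch for the scenario in the reply above. The inputs (roughly $550k already saved at 27, ~$200k saved per year, a flat 7% nominal return, ~2.5% average inflation) are just the assumptions stated in these comments, not data from any source.

    # Back-of-the-envelope for the counter-scenario above (stated assumptions only):
    # ~$550k saved by age 27, ~$200k/year saved thereafter, flat 7% nominal return,
    # deflated by ~2.5% average inflation to express the result in today's dollars.

    START_BALANCE = 550_000    # assumed savings at age 27
    ANNUAL_SAVING = 200_000    # assumed yearly savings at senior-level comp
    NOMINAL_RETURN = 0.07      # assumed flat return
    INFLATION = 0.025          # assumed average inflation

    YEARS = 13                 # age 27 -> 40
    balance = START_BALANCE
    for _ in range(YEARS):
        balance = balance * (1 + NOMINAL_RETURN) + ANNUAL_SAVING

    real_balance = balance / (1 + INFLATION) ** YEARS
    print(f"Nominal balance at 40: ${balance:,.0f}")       # ~$5.35M
    print(f"In today's dollars:    ${real_balance:,.0f}")  # ~$3.9M, i.e. "about 4m adjusted for inflation"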
Can confirm. Just did an onboarding in my underwear. (Although mostly because, thanks to COVID, I forgot that time-zone conversion while on business travel was a thing.)
Hmm, what makes you think software developers are not abused? Maybe in your world they are not, but it's common to hear about devs working 12-14 hours a day, with bosses and clients harassing them even at off hours.
This has not been true in my experience. The shitty small business masquerading as a startup treated me horribly, now I work for OVH and everyone is super nice.
No one talked about service workers. These people are exponentially more fcked than any of us can ever be. There is still too large a gap between employee and CEO/founder compensation in our business, despite six-figure salaries, and yes, it is worth complaining about and fighting for.
I think sales is a much better gig than software engineering, personally.
Any engineer who has enough EQ and people skills to give some demos on Zoom is going to find that Sales Engineering/Solutions Engineering is an easier job with more earning potential, all with no on-call rotation.
On top of that, the company treats you like you're valuable rather than a cost center (especially relevant to people doing infrastructure and IT engineering). Sales teams go on vacation to team-build, engineers are shoved in a conference room with some lukewarm sandwiches.
Your comment already has a lot of responses but one thing I didn't see mentioned is about the tough physical labor. It is often neglected that working at the computer is really bad for your body. Sitting for hours is unhealthy and creates all kinds of problems: obesity, disc prolapse, bad circulation, so much more. Also your eyes suffer a lot and become worse much faster than in most other jobs.
I agree that being a SWE is certainly not the worst thing in the world, but it also has a lot of bad properties that are often overlooked, and most positions do not have the moon salaries that we see on HN regularly.
Also, to do whatever lower-income job you don't need to study for 5+ years (a master's). You need to compare with jobs that have similar requirements to your education.
Yea so maybe that degree thing is a German problem. I do not know the other markets/countries. Here, what I said is certainly true.
A standing desk is better than no standing desk but won't correct everything by itself. Taking a break and walking around is fine but I still need to get back to staring into the computer screen to do my work.
I don't know your age, but for me it was at about 35 that I realized this has negative effects on my body.
After 1 year of SWE, I realized my body was becoming frail and mushy relative to when I was lugging cast iron fittings in the warehouse. Took me about 2 weeks to put together a workout regime with built-in progression.
One of the things I’m glad of is the flex time and working from home so I can crank out workouts whenever it suits me.
lol yes this is a good point. We should all volunteer in a hospital or police department for a few days to get some perspective...
And another thought: many job descriptions list every technical novelty under the sun even though only some small project in some small team somewhere in the company uses it. Why? To look more appealing to job seekers. It's much cooler to describe the job as PHP + Elixir + Kafka + Big Data than simply PHP - which is mostly what the job is about.
I'm not denying the field is changing fast, it is. But there's other reasons why 50 year olds are pushed away besides some imagined inability to keep up.
In comparison to the majority of the population it truly is a nice job, though if you aren't working at FAANG your pay can be lower compared to e.g. a mechanical engineer.
What they expressed is not contrary to what you express. They said the expected salary conditioned on intelligence for SWE is lower than that of other jobs, and I am inclined to agree. If you include game dev jobs in SWE then it goes even lower.
Oh yeah, every dev in the world makes $100 000 minimum per year, sure. And it's boring and easy with reasonable deadlines and appropriate vacations and time off work.
Even if you make something like $20,000 per year, you have it better than most in the world. People wear out their bodies (the only one they have) doing way more intense (and vital for the world) labor than software engineers do, and earn much less.
If you think unreasonable deadlines are a big issue, then you're in for a rough awakening if you spend any time with the rest of the workforce that doesn't sit in front of a computer all day.
Most of the world lives in severe poverty, therefore software developers should accept any amount that the owner class feels like paying them and be thankful.
As a civil engineer: I assure you my studies were QUITE rigorous. The job is very demanding, and I assure you the complexity can be very high; the consequences of mistakes are severe and occur over a massive variety of time scales. I had to work for four years apprenticing under licensed engineers after school, pass no fewer than four examinations, and get 4 licensed engineers to personally vouch for my work before getting licensed myself as a civil engineer. I am personally liable, in perpetuity, for loss of life, injury, or property which occurs as a result of work that I put my stamp on.
As a licensed P.E (who is, dare I say, fairly talented amongst my peers even) with a total of 8 years experience, do you know what I get paid to deal with that complexity and liability? About $80k in medium cost of living area for 45 hours/week. That's not atypical. Does that "control for the intelligence" required of the job?
80k for 8 years of experience with a PE is a bit low. You should be cracking six figures by now if you’re structural and coming close if you’re actual (roadway) civil. Are you in materials testing? If so, obviously you should try to switch over to an inspection gig to get some of that sweet overtime. I was clearing something like 110k salary at similar experience back when I practiced and quite a bit more with my equity/COLA/bonus, but I was in a high risk niche field. Not sure where you’re at but any market of reasonable size should be able to support you getting a raise, especially if you’re actually as good as you think.
I agree with everything you say. Software engineers (most should not even be called engineers, they’re coders or developers and on the same level as a lab technician to me) are prima donnas whose mathematical and scientific backgrounds are (on average) at least one level of education behind any actual licensed engineer/professional making half as much. And their degree of liability is infinitely lower, as you note. I’ll probably get sued or be involved in a lawsuit for my work another half dozen times before 2032 even though I haven’t stamped anything in 3 years.
> most should not even be called engineers, they’re coders or developers and on the same level as a lab technician to me
I'm a programmer and I 100% agree with this statement. I never tell anyone that I'm a software engineer even though that is my job title, I am a programmer. It annoys me to no end that we keep diluting language like this and devaluing the meaning of these terms. An engineer is held to a much higher standard than any web developer ever will be.
In some countries, like mine (Uruguay), it's illegal to call yourself an engineer unless you are one, but the local university does hand out software engineer titles.
I call myself a senior software developer (I'm not an engineer; I did go to university but didn't graduate in that field, I switched to an Information Systems degree instead).
Wow, that's unfair. Is it because you work in the public sector that the salary is mediocre? It is not a bad wage, but indeed it sounds like the salary should be 50% higher.
No, I work in the private sector in land development. This is actually a good deal for me because I'm full remote whereas in my area I'd be looking at a $15-20k pay cut.
Many lawyers have to religiously time track everything. Because hours attributable to a specific client are billed and billable hours are where the revenue comes from.
On the other hand, tax codes and laws change all the time. If the law changes, and you don't know it as a professional in that area, the outcome isn't good.
Every field has to deal with change. In law, you've got politicians changing the rules, in the corporate area, you've got Office Politicians changing the rules.
It does get stale and changes per jurisdiction. Just look at how GDPR and CCPA changed the privacy landscape. At least programmers don’t have to deal with linked lists having different time complexities in California.
Totally not true. Creating room to sit on your thumbs through the time management system gets noticed by pretty much everyone. Some SWE roles are billed to client accounts hourly, too.
SWE's are not the only ones that think that their pay is low compared to their intelligence level, their education level, how tough, inconvenient or important their work is or what they feel they deserve for some other reason.
For instance, I'm sure there are plenty of HR managers in various companies who are secretly furious about all the young, mostly male, mostly white or Asian hires, still in their 20s, who join the company and make more than they do.
I'm sure HR managers know that the majority of jobs they are hiring for are different from the job they're in, with different pay scales and demographics.
>SWE's are not the only ones that think that their pay is low compared to their intelligence level, their education level, how tough, inconvenient or important their work is or what they feel they deserve for some other reason.
Sure, but it doesn’t mean some or all of them are not right, especially when the unequal distribution of wealth is well known and documented to be evolving for the worse.
Most people don't get into the top 1% net worth by receiving a salary. To get there, you probably have to invest over a long time horizon (possibly more than a generation) while keeping spending to a minimum, inherit the wealth outright, or start a successful company.
Even successful lawyers need to be senior partners to achieve that kind of wealth ($11M), and don't forget that for each such lawyer, there may be many people who end up as some clerk or generic legal representative in some organization, who may have salaries much closer to a SWE.
Why would the ethnicity or sex matter when these HR managers would still see the same differences regardless of ethnicity or sex?
Either way, I'd think twice about putting HR managers as victims here when they are the main perpetrators of low raise budgets and high hiring budgets.
When people see that some other group of people, with identity traits different from themselves, are doing better than themselves on some metric, it often induces resentment. That's just a fact.
Whether it's justified or not varies with circumstances and what ethical system people have.
When black people in the 50's felt that way about white people and their privileges, most people today would see those concerns as justified.
When German people in 1922 became angry because most Jews had enough money for food while many Germans went hungry, and started believing in conspiracy theories like the Protocols of the Elders of Zion, most people today would say they were not justified. (Though there are still some who secretly believe those things.)
And when Hutus wanted to "cut down the tall trees" in 1994, most people in the west only read the headlines, and didn't care that much.
In the case of HR managers being unhappy that software developer make more than them, some will blame some generic "wage gap", others will explicitly believe in some kind of Patriarchy conspiracy theory. (And a lot of HR managers, of course, are simply fine with things as they are.)
> When German people in 1922 became angry because most jews had enough money for food, while many Germans went hungry
Even if this were true, and it isn’t, people didn’t start believing in anti-Jewish propaganda because the Jews had more to eat than the average German. People always believed in crazy anti-Jewish propaganda since, at least, the first crusade.
> People always believed in crazy anti-Jewish propaganda since, at least, the first crusade.
Absolutely they did. However, even at that time, it seems (if Wikipedia is to be trusted) that the Jews' role as bankers was part of the reason for the massacres:
"Many crusaders had to go into debt in order to purchase weaponry and equipment for the expedition; as Western Catholicism strictly forbade usury, many crusaders inevitably found themselves indebted to Jewish moneylenders. Having armed themselves by assuming the debt, the crusaders rationalized the killing of Jews as an extension of their Catholic mission"
You are absolutely right that antisemitism didn't suddenly pop up in 1922, or even with the creation of the "Protocols" conspiracy. But prior to 1922, antisemitism probably wasn't worse in Germany than in most other western countries.
But as we consider the inflation we are seeing today, try to imagine how it would be if the inflation were not 5-10% but 29,500%, as it was at the peak in 1923. In other words, it would take 3.7 days for your paycheck to drop to half its value. When you got your salary, you had to run to the bakery and buy as much bread as you could. Old people and people on a fixed salary would lose everything in an instant.
Even in the present age, especially after 2008, conspiracy theories involving the banking sector are everywhere, and there are still whispers implicating a Jewish conspiracy, if you listen. Now imagine standing at a corner in Munich on a day when the factory didn't need your work, too afraid to go home to your abusive wife who would beat and scorn you for not bringing food to the hungry family.
Imagine some small man with a mustache telling a very convincing story that completely rationalizes your troubles. He reminds you about the Goldmann banker up the street, and the Ruben gem store at the corner. Their families are not hungry, yet they didn't "work" (meaning physically) a single day of their lives; they are simply collecting usury. Imagine being told this while hungry, while worried about being beaten by your wife when you come home, in a world where antisemitism is still seen as acceptable, in a world where usury is still seen as a sin, in a country where the interest rates on loans have 5 digits.
What I'm saying is that those Germans were just like us, just under different circumstances. To them, the Jews were the socially accepted "bad guys", just as Nazis and fascists are today, where people can be labeled a Nazi or fascist with not much more evidence than someone not liking them.
The person who would say today that "It's ok to punch a nazi" (meaning MAGA-republican) might very well be the person in 1923 thinking that "It's ok to punch a jew." The reasoning is very similar. And it's not restricted to the left. People on the right are currently generating massive amounts of resentment against "the elites". And in some cases, the antisemitism is once again coming out into the open.
> But as we consider the inflation we are seeing today, try to imagine how it would be if the inflation is not 5-10% but 29500%
Hitler took power 10 years after the hyperinflation. People didn’t vote for the NSDAP because of hyperinflation, nor did they start hating the Jews more than before because of it.
> You are absolutely right that antisemitism didn't suddenly pop up in 1922, or even with the creation of the "Protocols" conspiracy. But prior to 1922, antisemitism probably wasn't worse in Germany than in most other western countries.
Antisemitism was rampant everywhere in the West; the Germans only took it to its inevitable consequences, and only after 1933. Of course, this doesn’t change the fact that the nazis were criminals, but the Shoah has much deeper roots than the hyperinflation or some temporary unemployment.
For instance, Roman Jews only became full citizens in 1870; before then they couldn’t own property and, among other things, once a year they were forced to run naked during the Roman carnival. This was only 60 years before Hitler seized power.
You can find similar stories about all cities that had a large Jewish community.
> The person who would say today that "It's ok to punch a nazi" (meaning MAGA-republican) might very well be the person in 1923 thinking that "It's ok to punch a jew."
No, it’s not the same thing because no MAGA-republican has been punched and arrested and they even elected a president.
> deeper roots that the hyperinflation or some temporary unemployment.
Oh, and about this part, I think you underestimate the hyperinflation in 22-23 in Germany. Over a period of about 2-3 years, people who had been comfortably part of the upper middle class would lose EVERYTHING, and in many cases end up starving to death. That's not "temporary unemployment".
Read this quote:
- One particularly arresting story is that of Maximilian Bern, a man of literary education exemplary of Germany’s formerly middle-class Bildungsbürgertum. In 1923, writes Taylor
- "[he] withdrew all his savings—100,000 marks, formerly sufficient to support a modestly comfortable retirement—and purchased all it would buy by that time: a subway ticket. The old gentleman took a last ride around the city, then went back to his apartment and locked himself in."
- If you are like me, you probably assumed the next sentence would conclude with suicide. No. “There he died of hunger.” I had to linger over that sentence to fully grasp the reality: starvation in a society that had recently been among the most technologically and commercially advanced of any on earth.
For the Germans, this left an impression that resembled the Shoah for the Jews.
Imagine seeing former affluent tech workers starving to death in San Francisco in 2029, looking like the corpses of Bergen-Belsen prisoners. What would that do to the survivors?
They say that, of all causes of death, hunger is the most horrible.
> people who had been comfortably part of the upper middle class would lose EVERYTHING, and in many cases end up starving to death
The upper middle class own non-monetary assets, they are probably the least affected by inflation.
> For the Germans, this left an impression that resembled the Shoah for the jews.
No, not really, and not even close. First, because the Weimar hyperinflation didn't cause mass starvation. Second, because losing your savings is not even close to being stripped naked and beaten once a week and then being put on a cattle wagon to be slaughtered 1000 kilometres from home.
This subthread was not my main reply, just an aspect I had left out of the other response.
> The upper middle class own non-monetary assets, they are probably the least affected by inflation.
First of all, don't confuse hyperinflation with regular inflation. Regular inflation is an indication of a rebalancing of an economy, with some mismanagement on top. Hyperinflation happens when the economic system collapses completely.
One difference is that during normal inflation, non-monetary assets often retain much of their value, while in hyperinflation only assets that help produce food and other essentials really matter (such as owning a farm, a factory, etc).
Middle-class workers pre-inflation may have had a house, a "safe" job with a fixed income, and some savings in the bank. When hyperinflation struck, they may have been able to sell the house, but the cash gained would be gone in a couple of weeks. The savings were also gone quickly, and many such jobs would either have salaries lagging behind inflation or people might get fired, unable to find similar work.
Meanwhile, workers in factories and on farms were more like "essential workers" during covid.
>> .... this left an impression ....
> No, not really and not even close.
If you read what you quoted, I was not referring to the effects on those that died, only on those who remained. In other words, I was comparing the effect on the German people with the effect on the SURVIVORS of the Holocaust, as well as on Jews that were not directly affected.
These effects are primarily cultural. To this day, the German nation remains fiscally conservative due to the events of 1922-23, very reluctant to allow inflationary actions by the ECB, for instance (as experienced by Greece, 10 years ago).
You are right, of course, that the Holocaust was a larger event, even in terms of the cultural effects. But even if the wound of the hyperinflation was smaller, it was still many times greater than the scars after the 2008 crash in the West.
Maybe the number of actual deaths by starvation was limited, but it did occur, especially among people unable to work a job (retired people). Also, even for those who did not die from lack of calories, many were left undernourished or malnourished, causing an uptick in deaths from infections, etc.
But as stated above, my main point is what effect it had on the survivors. Those who saw the previously affluent widowed aunt fall from grace, having to beg her nephews and nieces for bread. Maybe having to refuse to give her that bread, because your children were hungry, too.
Experiencing (either directly or through some newspaper) the humiliation when French soldiers entered Germany to confiscate assets when Germany could not (or would not) pay the reparations that were demanded, including seeing the Germans that were either shot or turned into refugees.
Seeing how the rich were able (and smart enough) to shift their assets into things that would continue to be productive even during hyperinflation. While you, who were used to thinking that money in the bank was the safest way to save, kept your money there.
And even if you did manage to secure just enough bread for your family to make it into 1924, you would hear stories or see pictures of those who did not, and feel the fear that something might happen to you that would prevent you from showing up at the factory that, on most days, would pay you to work.
Such experiences leave deep mental scars, and will tend to harden a person and make them more tribal and aggressive. Make them perfect raw material to be molded by demagogues like Hitler and Goebbels.
In the end, some did become evil monsters. Maybe some were even born that way. But to the extent that it was environmental, it was certainly not born from privilege. It was, just like in most other cases where the result is genocide, born from hardship and humiliation, combined with a strong feeling of resentment towards those who were seen as responsible.
> Hitler took power 10 years after the hyperinflation.
Hitler started planning a coup in late 1922, during the hyperinflation. It was attempted in late 1923, around the time the hyperinflation was stopped. It failed, and he ended up in prison. In 1924, during his time in prison, he wrote Mein Kampf, which lays out the plan he followed (or tried to) thereafter.
Before 1922, NSDAP (aka Nazi party) was very tiny. During the hyperinflation, it grew to 20000, mostly in Munich. Still small on a national basis, but enough to give it a solid basis as an organization.
Between 1925 and 1929, it grew slowly, but exploded after 1929, as the Great Depression hit Germany hard.
As for the role of hyperinflation in this, it is relatively well documented. Here is one quote from wikipedia:
"The Nazis' strongest appeal was to the lower middle-classes—farmers, public servants, teachers and small businessmen—who had suffered most from the inflation of the 1920s, so who feared Bolshevism more than anything else."
At least very widespread. Still, the situation of Jews in the West, including in Germany, was much better at the time than it was for Blacks in the USA.
In Germany, there were many highly respected German leaders and intellectuals, such as Einstein, Freud and (less known today) Rudolf Hilferding. Hilferding is, quoting wikipedia again "almost universally recognized as the SPD's foremost theoretician of this (20th) century."
> the Germans only took it to its inevitable consequences and only after 1933.
I don't agree that it was inevitable. The Nazis were a marginal force up until 1929. By 1929, most Germans may have gotten over the terrors of 1922-23, but in 1929 the wounds were torn open, and the messages of the "little man with the funny mustache" didn't seem so crazy, after all.
It didn't help that Hilferding was Minister of Finance at the time the Depression started.
"Of course, this doesn’t change the fact that nazis were criminals, but the Shoah has much deeper roots that the hyperinflation or some temporary unemployment."
I'm not claiming that the hyperinflation was the root. I'm claiming it was one of the main sources of energy, and a great inspiration for Hitler himself, directly before writing Mein Kampf. (Even though he was already an antisemite before 1922, I'm sure the things he saw during those two years reinforced his convictions. Hitler was known to tailor his speeches according to what resonated with the audience.)
> You can find similar stories about all cities that had a large Jewish community.
Yes, I know. Being a minority comes with a lot of risks and problems. I fully understand why some Jews prefer to have at least one state where they can be the majority. (Though it might have been better for world peace had they been given Königsberg/East Prussia in 1945 instead of being supported in becoming the majority in Israel/Palestine.)
I have a hard time believing the reaction would be any different if a bunch of older white collars would stare at young Hispanic software devs making more instead of Asian devs. Only difference being they couldn't hide their jealousy behind politics.
I don't think it makes any difference to most "older white collars" whether those devs are Hispanic or Asian. Boomers were born at a time when racism against Asians was much worse than racism against (non-Black) Hispanic people.
There are, I would say, two main forms of racism. One is directed at those the racist sees as inferior. Typically, that is right-wing racism. It is created when some other group, on average, performs worse (or appears to do so) than the group the racist identifies with. Such a racist may see the hated group as sub-human. They may not feel threatened outright, but may be concerned if the population of that hated group grows too quickly.
The other group, is resentment racism (or sexism or other similar identity-group-ism), which is triggered when confronted with groups that do better than the group the racist person identifies with. This is more common on the left*. This kind of racist will see the hated group as outright evil. And if they feel sufficiently threatened by the hated group, may attempt outright genocide, since they may feel they are fighting for their life.
* Nazis were full of both kinds of racism. They saw Roma people as vermin and Slavic people as merely inferior, while they were accusing Jews of seeking world domination. In other words, Roma and Slavic people were sub-human, while Jews were seen as Evil, inferior only for moral reasons. And as we know, the hatred against Jews was by far the strongest. Mein Kampf describes this in detail.
HR managers are rarely ever intelligent. The work they do doesn’t require it and intelligent people don’t want to do a job where they’ll be surrounded by lower intelligence people.
Maybe some HR managers believe that empathy and "EQ" are more important than IQ?
Maybe they even think that intelligence is a social construct invented by the Patriarchy for the purpose of oppressing people who are not "cis white males"?
Who knows, maybe for some, these ideas may even have been part of their education....
> Maybe some HR managers believe that empathy and "EQ" are more important than IQ?
Have you ever interacted with HR? I'm pretty sure they are trained to be cold and uncaring, and to apply business rules without exception. Never heard an ounce of empathy from any of them, it always feels like interacting with an automaton that was somehow annoyed with you.
Plenty of times. Some that are really smart, some that are not. Some that are cold and "corporate", some that are getting way too personal. (As in a "metoo" moment.)
Politically, I've met at least one that was secretly a eugenicist and others that are pretty far left.
All of them individuals, but several of them collectivists.
> Maybe some HR managers believe that empathy and "EQ" are more important than IQ?
> Maybe they even think that intelligence is a social construct invented by the Patriarchy for the purpose of oppressing people who are not "cis white males"?
Are you agreeing or disagreeing with the person you replied to?
Neither, I simply don't see how it is relevant (or even very "intelligent") to stick a categorical "stupid"-label on HR.
HR is a bit like IT support. Some companies will have a lower standard when hiring for such roles, others will have as high or possibly higher standards when hiring SRE's than when hiring developers.
And even if it should be correct that SWE's score a few points higher, on average, than HR managers, I see no utility in making a fuss about that. To the extent that it has a market value, salaries will reflect that.
Now, should HR staff start to act unprofessionally themselves, things change. For instance, if individuals within HR start to advocate for lower wages for, or other actions against, individuals or groups they feel resentment towards, then that is a problem.
LOL, a lawyer in my family does. He actually meets with a whole group of lawyers every month or so and they all do a retrospective to figure out how well applying agile/lean/whatever principles to their firms has been working. He actually feels it gives them a real edge.
I've suggested this to my lawyer wife more than once. The problem is that she works for multiple partners with no visibility in to each other's tasks. So each partner thinks they can have 100% of her time whenever they want, and get upset when that turns out not to be true.
The intelligence floor for becoming a nurse is low, so nurses have extremely large variance in their intelligence, knowledge, and training. Knowing someone is a nurse doesn’t mean you can assume they are pros. Some nurses keep up with the latest medical research papers and are overall similarly knowledgeable to doctors, and some know less about biology than I do and will tell you about vaccine conspiracies and how some homeopathic nonsense will help you. I’ve met people at both ends.
In general, to me, the whole process of getting the degree, passing the bar, and then the actual work and the pay (unless you make it) seems rather bad compared to how easy SWE or related work can be.
The school is laughably expensive for something that should in reality be rather cheap. Read books, listen to lecturers? No laboratory work or practical experience and so on; clearly overpriced. And the studying to get the approval of the cartel? And then you end up working for rather poor compensation for quite a long stretch of your career... The pay is bimodal: yes, partners rake in money, but they also need to get the clients. The people doing the bulk of the work aren't that well paid.
Exactly. I was accepted into a half dozen top law schools and had a very good LSAT score, but what you mentioned nagged at me until I chose a technical master's instead.
No, it's housing. Housing is expensive b/c there is no supply, and you can't fix a supply side problem by putting more money on the demand side: that just enables people to bid higher than they already are, and the people selling what scant supply exists will pocket the additional cash. The same amount of people that today are below the point where supply meets demand will remain below it.
Not outside of a small number of US tech cities and certainly not outside the US. My first coding job paid about $30k for example.
I am currently a senior engineer earning less than a SV graduate. My pay has been largely comparable to a plumber for my whole career. Not for lack of trying.
I suspect that if whatever red tape is preventing US companies from making remote hires worldwide en masse goes away, then non-MAGMA employees will see deflation in their wages.
Similarly if more people are let in on unrestricted working visas that last a while.
> Not outside of a small number of US tech cities and certainly not outside the US.
I live in London and graduated from a reputable university (so most of my friends have decent jobs). I make considerably more money than any of my peers except for those in a few select careers such as finance and law. I make a lot more than those in other technical career paths like mechanical engineering. And certainly compared with "generic graduate jobs" such as consulting companies or the civil service.
My first job wasn't well paid either (about £19k), but that quickly increased over 2-3 years in a way that salaries in other careers didn't seem to.
It’s a rather abstract view of making money. Other houses also go up in value. You only make money from the difference between how much houses in your area are going up, and how much houses in the area you want to move to are going up. And that difference may well be negative.
The alternative is not owning the house and instead paying rent, which means your rent would have to be far lower than the appreciation of the asset (the house) and the cost of servicing the debt (the interest part of the mortgage).
That's certainly not the case in the UK. I'd be surprised if it were in the US
The primary reason buying a house is better is because the government has put its thumb on the scale to allow you to leverage the fuck out of your assets.
If you could get a loan of the same amount and yolo it into the stock market, I am not sure buying a home would come out better.
There's another piece of government interference that's a lot more relevant: it's a lot easier to get legal permission to start a company than to build a house anywhere remotely desirable.
You can take out a loan on the house with the house as collateral. That's a tactic wealthy people use with their stocks. Also having a home equity loan is a good padding for rainy days.
Except for almost all of them. Aside from a few exceptions (doctors, lawyers), everyone else makes significantly less. A structural engineer with a master's degree, for example, will make less than half (closer to 3-4x less in my region).
Lawyer pay honestly isn't that great. I know lawyers who went to top schools and routinely work 60-80 hour weeks to make as much as SWE do.
The lawyers who didn't go to top schools became public defenders etc and make about 70k a year and are saddled with a quarter million dollars in law school debt.
Yes, I more or less think they do. Unless they are a headliner or founder of their own firm.
Let's say you are a lawyer at glob corp, and they get sued. Do you really think you can say "Eh, the other guy has a point. I am not going to defend this one."? You either work the case defending glob corp, or you are no longer glob corp's lawyer.
> The pay isn't great--the kids getting $300K packages out of Stanford are an exception; it's a marketing expense.
You don't have to make 300k for the pay to be great. Even if you fall short of that and land at a company that only pays you $120k, you're still making a killing compared to most other professions.
> Corporate SWE is pretty awful
Then get out of Corporate. I work for an SMB and make about as much as I used to in Corporate, but the stress level and workload make it much easier. Everyone wants one of those SWE jobs at a company that is in the news, but in fact you want the exact opposite. You want a low-profile job at a company no one outside your niche has heard of. The pay is about the same, the work is 10x less crazy, and if you disagree with a decision, usually you know the person who made it because the company is only about 200 employees.
I think the signal:noise ratio isn’t great in this sector. Although corporate is 100% guaranteed misery, I have yet to come up with a reliable way to find SMBs where IT (because that’s what they call it; a SWE is the same as the help desk in their view) isn’t considered "the geeks working the cost center in the basement".
Do you have a method or did you rely on blind luck like the rest of us?
Probably not, but instead they get to bill their working hours in 6-minute increments. And they have to bill a high number of hours a day. Count yourself lucky if you don't have to do that!
You want to represent a murderer, rapist or a Bernie Madoff? Lawyers deal with an order of magnitude more BS. You may feel bad when you work on a dead end feature. Imagine if your representation let a rapist walk.
The average salary for a software developer in the US is over $100k/year. For a 30-year-old, that would put them in the 95th income percentile, or the 87th for a 40-year-old.
The comparison to lawyers is weak because becoming a lawyer requires a hell of a lot more education. You can become a software developer without even having a college degree.
You are also ignoring other professions like nursing. Nurses make far less than software developers and work a hell of a lot harder. Same with teachers.
As far as barely making a middle-class salary while rich people get richer off your work? Welcome to capitalism.
buddy our goal is to make their professions just as shitty and atomizing as ours. it is the way (the faster the better). the professional class are an impediment to progress, idc how mean that sounds.