I feel like a lot of people here are missing the point of the article.
I've worked with several 'expert beginners' over my career. They think they're near the skill ceiling, but they're actually much closer to the bottom. They rose through the ranks despite not being particularly great at their jobs, and now find themselves having to indoctrinate others into their way of doing things. Any suggestion on how to improve the process is usually met with some form of "that's not how we do things around here", since the expert beginner feels threatened.
Preventing yourself from becoming an expert beginner doesn't mean you have to dedicate all your spare time to learning 200 different technologies, as some here have suggested. It's more about accepting that learning is a life-long practice. Knowing that less experienced coworkers still have things they can teach you. Understanding that there is still so much out there for you to learn, and being humble about that fact.
Indeed. For every new job, new project, and new professional relationship, you have to be willing to ask yourself "what can I learn from this experience?" Learning is often best done when you are working on problems you don't yet know how to solve.
But there is a countervailing force: the pressure to appear confident, like you know what you are doing. Being transparent about your ignorance but confident in your ability to learn is a difficult balance to pull off.
I like these sorts of analyses, whether I agree or not.
I've come to believe, though, that they need humour. There's always a tendency to over-extrapolate, to get hung up on typologies that crack under weight, and to assume the model is the main thing going on. Humour gives them a productive "playing with ideas" vibe and keeps them honest.
In software development, for example, I always think that demographics play a big role. Every decade we get more young programmers than the last. Many older programmers "graduate out" one way or another. The resulting demographics are unusual, with a years-of-experience pyramid more like the military's than most professions'.
Technology, methods and philosophy change rapidly. This is both exacerbated by and causal to the demographics, and the youth/pace of the field itself. A lot of software culture has this relationship with the demographics. Maybe 60-year-old developers are more likely to produce important new things, but they are outnumbered 100-1 by 20-somethings... reinforcing youth biases, etc.
I think a lot of this essay's thoughts on learning might be affected by software demographics. A lawyer, accountant or aviation engineer's thoughts on learning and career progression probably encompass much longer time periods. In software, we think in much shorter timescales. Between the ages of 22 and 27, a programmer has progressed through a "career." Between the ages of 22 and 27, an accountant has progressed through a cadetship.
The difference is no accountant does accountancy in their bedroom for fun as a teenager. The software engineer who has significantly progressed their career by 27 probably started as a teenager, not as a 22 year old. It's more like being a musician than being a lawyer or an accountant.
There are plenty of cases where programmers only start programming for real at a post-college job. The majority case, even.
That said, sure. There are differences in substance, history, everything. Those might be the reason for differences between professions, but the question is "how much?"
I think we overemphasize legible, logical reasons for culture, but often it's incidental ones: path dependence, demographics, etc.
I think that programmers who first start programming for real at a post-college job don't make significant career progression by 27 unless they are very talented. You are looking at outliers and saying "look, these outliers don't exist in accounting and law," and you would be right, because very few people in those professions are so crazy about them that they live and breathe them.
I personally think there are multiple reasons for this situation.
* It is fun to learn initially, to see something new working. Once you get something working, not many people see the fun in spending a lot of time getting it to work better.
* There is a huge amount of introductory material (guides, tutorials, examples), but the amount of available material falls off drastically as you progress.
* Only some people are able to actually think in the abstract terms required to "create" new knowledge based on existing facts. Beginners can advance quickly by "recreating": executing tutorials, copying existing code, etc. It is relatively easy to use these as building blocks for a simple application. But as you progress you have to figure out more and more new knowledge and understand underlying principles. This is what many people either don't feel comfortable doing, don't feel is necessary to do, or are just plainly incapable of.
* It gets more difficult to work with other people on your team as you create knowledge in a given topic. There is a tendency to push back when a team member tries to introduce something new that is not clearly a recreation of somebody else's accomplishment available on the Internet.
* Creating new knowledge in a topic is a huge risk, in that it is an unknown payoff for a large amount of honest work. There is not much risk in following existing tutorials; it is pretty much guaranteed that you can recreate an accomplishment others have already made. When you create new knowledge (for example, new patterns, principles, guidelines), it is likely you are going to make mistakes. This may make people uneasy and dissuade them from further advancement in the topic.
* Getting to mediocre in any specific topic is frequently seen as a good return on investment. For example, as an architect I would maybe not want to become an expert in any of the frameworks I know. I see this as a reasonable tradeoff which allows me to tackle other problems as soon as I think I know "enough" about a particular topic.
> * There is a huge amount of introductory material (guides, tutorials, examples), but the amount of available material falls off drastically as you progress.
Aye. My coworker described it as being able to find 100 guides on how to hammer nails and build a small bench. "Shed building for beginners" or something. But then the next exercise is "build a house" and the one after that is "build a shopping mall".
I was assigned to work on machine learning three years ago. Literally an overnight "the project changed, you're doing this now, here's the project update memo for your contract."
Your post exactly describes ML on the internet: first you find a billion "check out my intro to ML" Medium articles and git repos that spin up OpenCV on Tensorflow ("small bench"), then you move to the Tensorflow docs ("build a house"), then you move to Arxiv.org ("theories of city planning"). Once you step into that middle zone, you're basically stuck reading academic texts. As a 50-something engineer, it was a kick in the balls getting started with this stuff. Fortunately this contract is related to hardware optimization, which I've been doing for decades, but this particular variant of it was a curve-ball.
It is an interesting back-and-forth: I find myself exchanging information with lots of new college grads (permanent hires, not contractors) that all have sharply coiffed beards and heavy denim jeans. Two of them are pretentious and fight to be top dog (which is hilarious from my old, jaded POV), but the rest are genuinely good kids that want to learn and, in turn, teach me what they know. It's fun. I'll miss 'em when my contract ends.
I think the biggest reason is demographic in nature: The rapidly expanding population of professional programmers means that programmers are regularly building their skills on third-hand advice while simultaneously encountering situations nobody they know has seen.
In such a climate, they aren't really competing for the job on the basis of merit, so much as they are posturing that they are a good candidate - because they learned that that's what one does to get work and protect their family. And there is so little expertise around that nobody can call them out. When they get hired, the posturing turns into gatekeeping in short order. They aren't curious about the work, they just want to keep the position. When you fill the ranks with uncurious individuals you get stagnation, because nobody is pushing them forward.
In other professions the process of education and certification cuts most of these people out early on, which comes with its own downsides but does impose some minimum standard of capability which they may go on to use or misuse. But the problem in programming is that the tools of programming address a multitude of problem domains, outstripping what could be reasonably certified - that's why the profession has gotten so big, so quickly. It's all research, all the time - independent, unverified, unscientific research.
In response to this we've ended up with the whiteboard coding interview, which basically aims to do the task of certification within the span of a few hours. It's not very good at that - these interviews are run in an inconsistent and often unprofessional manner too.
> As such, Advanced Beginners can break one of two ways: they can move to Competent and start to grasp the big picture and their place in it, or they can ‘graduate’ to Expert Beginner by assuming that they’ve graduated to Expert.
I think that viewpoint is a bit narrow. The example where the author tells the story of how he started bowling the wrong way and reached a plateau he could not progress past is a good one. And I'm sure there are numerous examples each of us can tell from our own experience of learning new skills.
In my opinion, learning new skills fast is a great skill in itself. And nowadays there are many resources available to us that make this easy. But you will always reach a plateau where further progression gets increasingly difficult. It's fascinating how this corresponds to larger patterns like the Gartner Hype Cycle [1].
The question is, what do you do when you reach a plateau? Do you invest more time and money? Maybe the skill level suits you well and you don't feel a need to progress further? Maybe deeper understanding is not necessary to do your job well? Maybe it's better to acquire different skills which you can combine instead of being an expert in one area alone?
I think there are many nuanced answers to that question, and simply putting the blame on the expert beginner is not useful.
Every 'stage' is a plateau in itself. The beginner is able to quickly do tasks, but does not connect them together well. For example, they might know clean code, Scrum, Git, algorithms, TDD. But put them in charge of building a product and they can't. They'll stall. They'll overengineer.
To go to the Competent phase, they need holistic recognition. This means running experiments. They have to try to do things that can fail. They have to learn to plan towards a larger, meaningful arc, instead of simply finishing tasks that were assigned to them.
Finding a mentor is the most effective way to get through a plateau, but mentors aren't always available.
> But you will always reach a plateau where further progression gets increasingly difficult. It's fascinating how this corresponds to larger patterns like the Gartner Hype Cycle [1]
It’s not just about skill progression. To stay relevant and not be seen as an old, out-of-touch developer, you have to know how and when to jump on the next hype cycle.
True. You can always learn more. Even after you break through many ceilings, you will keep finding more things that you don't know. But the incentive to improve shrinks as you get better: no matter how many ceilings you have already broken through, beyond a certain point there are no financial or even career benefits to getting better.
It's possible for a developer to be skilled beyond a level which their bosses, recruiters and colleagues are able to grasp. You're not going to get paid extra for that surplus skill. Your pay is limited by your boss' imagination. Even though those skills are extremely useful and deliver real returns.
Proving yourself as a developer to a non-technical person takes years because that's how long it takes to see the results. In a big company it may even be impossible to prove yourself because your work is mixed in with that of many other people and the results average out.
Kind of true, when one wants to be seen as a "Developer in Tech X" and that is the only way they sell themselves to companies selling software solutions.
However, there is also the path of understanding business domains, brushing up soft skills and embracing being a polyglot across multiple stacks, delivering working solutions for companies that couldn't care one second what their IT cost center runs on, as long as they stay in business selling socks or whatever they actually care about.
Independent consultants are the first to get cut when the economy goes bad. Companies aren’t looking for “Enterprise Architects” and “Digital Transformation Consultants” when they are just trying to keep the lights on.
No, you can’t be 45 years old (I’m 46) and say I understand the business domain and I can give you a really cool VB6 Active X control to solve your problem that only works in IE6 on a Windows XP VM.
Yes, if the CTO doesn’t care that he is using somewhat modern technology he should be fired for incompetence. He’s going to find that he has a hard time recruiting developers who know how to write FORTRAN for a Stratus VOS mainframe. Yes I’m that old. Been there done that.
Saying you know the business domain but not keeping up with technology is just an excuse that people make and then scream ageism the minute no one will hire them because their idea of cutting edge technology is ASP.Net WebForms or Enterprise Java Beans.
Yes, my current position is a technical consultant working remotely for Big Tech who has to understand business problems and not just know how to reverse a binary tree on a whiteboard. But if push came to shove and a meteor struck all of their data centers worldwide, I could put my resume out there and get a job with a tech stack that is at the right point on the hype curve.
I currently develop for mainframes and I can say they are still very relevant today in multiple industries. For my own company, we get a lot more bang for our buck than if we were constantly worried about the hype cycles.
>if the CTO doesn’t care that he is using somewhat modern technology he should be fired for incompetence
A ridiculous statement. The last concern of the CTO should be "are we using the latest tech for the sake of having the latest tech". Their concerns should be about cost, what the business actually needs, and what the future will entail. Finding new developers isn't that difficult, even for something like mainframes which most college grads today are totally unfamiliar with.
Your point about being more easily hireable by staying up to date with the current hype is true and valid. However the CTO doesn't have the same concerns as you. This is the important difference.
Tell that to all the states struggling to get unemployment benefits out because they can’t find enough COBOL programmers to keep their systems from melting....
Well, if that VB6 is what keeps the business running, they will be more than happy to pay for it.
Also, not working alone helps to sort out the issues when cost-cutting arrives.
Finally, I am not saying not to learn; rather, people should focus on their business value as a whole package, and not on being "Expert on Technology X, Y, Z".
I am about the same age, and apparently it has worked so far.
It’s about optionality. If I either want or need to change jobs, I would much rather call recruiters and say I know the latest .Net Core/EF Core/ASP.Net Core than say my only experience is with .Net Framework (which is in permanent maintenance mode) and that I can deploy to IIS on Windows (even Azure hosts more Linux VMs than Windows VMs).
Especially seeing that when working as a Corp Dev, salary compression and inversion are real and the best way to keep your salary at market value is by job hopping.
> The question is, what do you do when you reach a plateau? Do you invest more time and money? Maybe the skill level suits you well and you don't feel a need to progress further? Maybe deeper understanding is not necessary to do your job well? Maybe it's better to acquire different skills which you can combine instead of being an expert in one area alone?
That is an appropriate response if you want to avoid getting out of touch.
Exactly. If you reach a roadblock, you need to be aware that there are risks and rewards to whatever path you take.
For example, you can invest a lot of time in learning some kind of framework and get really proficient. Maybe that opens up new opportunities for you. Maybe in a few years the framework falls behind and you would have been better off investing your time in something else?
I think you can do a lot of great things without knowing every detail of a black box. And it allows you to spend your time on other things which are maybe worthwhile in the long run.
What's important is that you do not avoid digging deeper into a topic out of pure habit. Maybe that kind of self-reflection is a trait which is less prevalent in the "Expert Beginners" called out in the article.
> If you reach a roadblock, you need to be aware that there are risks and rewards to whatever path you take.
This sounds pretty similar to the exploration-exploitation tradeoff in reinforcement learning (game-playing AIs). Exploration comes at a cost, and exploiting current knowledge is safer, but if the agent doesn't explore enough it will end up with a lower total score. A good exploration strategy is a must, because knowledge comes from exploration.
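To make that concrete, here's a minimal epsilon-greedy sketch on a three-armed bandit (Python; the payout rates and epsilon value are made up for illustration, not from any particular paper):

    import random

    true_means = [0.3, 0.5, 0.7]   # hidden payout rate of each arm
    estimates = [0.0, 0.0, 0.0]    # the agent's running estimate per arm
    counts = [0, 0, 0]

    def pull(arm):
        # Bernoulli reward: 1 with probability true_means[arm], else 0
        return 1.0 if random.random() < true_means[arm] else 0.0

    def choose(epsilon):
        if random.random() < epsilon:
            return random.randrange(3)                        # explore: random arm
        return max(range(3), key=lambda a: estimates[a])      # exploit: best known arm

    total = 0.0
    for _ in range(10_000):
        arm = choose(epsilon=0.1)      # epsilon=0 never explores and can get stuck
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total += reward

    print(round(total), [round(e, 2) for e in estimates])

With some exploration the estimates approach the true payout rates and total reward is near optimal; with epsilon=0 the agent tends to lock onto whichever arm paid first.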
Humans frequently get this wrong. Most low-level players don't scout, because it doesn't have a direct and gratifying impact on the development of the game, plus it has a cost.
In most competitive games with scouting you have to move up the ladder quite a bit to find solid scouting.
> For example, you can invest a lot of time in learning some kind of framework and get really proficient. Maybe that opens up new opportunities for you. Maybe in a few years the framework falls behind and you would have been better off investing your time in something else?
Every framework gets out of date after a few years. It’s par for the course in technology. You have to ride the hype cycle.
When interviewing new candidates it's interesting to see the difference between beginner/mid and seniors. The beginner/mid group seem to have a lot more confidence in their skills and not be as aware of what they don't know. The seniors seem to be more aware of what they don't know and maybe also a little harder on themselves.
I think this is a common viewpoint, and a misconception.
I've seen a number of juniors who were very humble, and this humility was perceived as lack of knowledge.
I've interviewed some seniors who were not humble at all (nothing was an issue).
In general, skill level has little to do with humility and other personal traits. However, we like to project our reasoning onto others' emotions, and we love to pattern match, e.g. "she's humble because she has 20 years of experience".
It's another bias you should look for when interviewing candidates. This one definitely does not serve you well.
Fair comment, I can see where you're coming from in terms of humility and bias. Again, I can only speak from personal experience, but I notice this also in self-assessment exercises in performance reviews. Humility aside, the more senior developers (in my opinion) are more able to accurately gauge their strengths and shortcomings. And I can't help but wonder if these skills are part of what helps them develop from junior/mid to senior.
The Dunning-Kruger effect says that people with more skill are more confident in their skills, but it isn't a perfect correlation: skilled people underestimate themselves and unskilled people overestimate themselves relative to others.
> I think this is a common viewpoint, and a misconception.
Probably a common viewpoint, but definitely not a misconception.
I have had freshers confidently give me answers they knew were wrong. You keep digging into their answer and they keep coming up with even more wrong stuff.
Probably only 2 or 3 out of 10 freshers can say "I don't know that".
And I've had senior candidates (with 10+ years experience) do the same thing, even after throwing them a lifeline. This isn't a trait of junior-ity (is that a word?), it's a behaviour.
I suspect that if you interviewed those 7 or 8 juniors in 10-15 years, many of them would continue to bluster.
I have sometimes interviewed to specifically find whether a candidate could utter “I don’t know” (and ideally add “, but here’s how I’d handle it, find out, learn it, etc.”)
It’s amazing (and a little amusing) how many people literally can’t admit they don’t know something. I have “passed” some otherwise very strong candidates who couldn’t admit that and made it a point to advise them after hiring how dangerous that limitation is both when doing the actual work and when interviewing. A not at all surprising fraction of them couldn’t admit/understand they even did it, of course.
> It’s amazing (and a little amusing) how many people literally can’t admit they don’t know something.
It is a matter of culture too.
I'll take the example of the desk clerks you have to face as a citizen or customer in administrations and companies.
When I lived in a country of Northern Europe, they would often say "I am not sure, wait a minute and let me check in that book" or "I don't know, I will ask/check and call/mail you right away". And they did it and it was fine. A tiny delay and everything is solved for good. You come out from there with a smile and they have learned something for the next time they meet the case.
Now that I am back in my Southern European country, the clerks (well, it's 99% women) in the same position will never admit they don't know or aren't sure of something. They will assert whatever weird/outdated/wrong assumption comes through their mind with definite certainty, even if you gathered information beforehand and tell them that the rule says otherwise. They feel that checking or asking for help would undermine their authority or make them look incompetent (as if anyone still had any hope about that...). And they will only call the higher-up when, after 2 months and a 3rd visit for nothing except getting contradictory information and requirements, you start yelling and they feel that they are less than 30 seconds away from getting punched in the face. Then the higher-up solves the problem (which should never have become a problem in the first place) in 2 minutes. But they will keep on 'working' like this for 40 years; they'll never recognise that their work and service would be much more efficient if they just said "I don't know" instead of inventing wrong rules.
That happened in my interview for my first real programming job. One of the founders kept making the scale of the problem harder and harder, and at a certain threshold, I no longer knew what to do. I said so, and what I would try to do to figure it out. At the time, I was sure I had failed the interview. Turns out, it was exactly what they were looking for, and my life has wildly changed for the better for having worked with that company.
> I have sometimes interviewed to specifically find whether a candidate could utter “I don’t know” (and ideally add “, but here’s how I’d handle it, find out, learn it, etc.”)
I've done the same. The problem is that in a lot of companies, uttering "I don't know" is perceived as a weakness, and makes you "not a culture fit," so I can completely understand the hesitation...
As another commentator said, the problem is often less the person and more the fact that saying 'I don't know' is a massive negative in most interview settings.
It was something I noticed when I was interviewing for job positions. Any time I said 'I'm not sure' or 'I don't know' and followed up with 'I'd have to look at the manpage' or 'look up the documentation' either the interviewer would express disappointment or continue to press the question in order to get even a wrong answer.
Many interviewers are looking for you to perfectly regurgitate canned answers rather than admit when you don't have something memorized.
I think this is relative; one data feature is missing, which is the level of the job versus their previous exposure.
Basically, with juniors you don't know what they have been exposed to before. In general, the less they have been exposed to, the more confident they are. Of course, humility is also a factor in this.
> In general, skill level has little to do with humility and other personal traits.
Here, you're using "skill level" as a proxy to years of experience, because the OP used the term "seniority" which usually means years of experience.
So while I think you may be correct when it comes to actual skill level (though I have seen no evidence presented to support this claim), you're definitely incorrect when it comes to "seniority", unless you believe old people and young people do not present different levels of "humility" and "personal traits".
To avoid being accused of not providing evidence:
_you can reasonably presume a 66-year-old will be more conscientious, more agreeable, and more emotionally stable than their adolescent self._
_As you progress through adulthood personality becomes more stable and predictable because you fall into a pattern of thinking, behaving, and feeling._
_Conscientiousness, a trait marked by organization and discipline, and linked to success at work and in relationships, was found to increase through the age ranges studied, with the most change occurring in a person's 20s. Agreeableness, a trait associated with being warm, generous and helpful, bucked the theory that personalities don't change after 30. On the contrary, people in the study showed the most change in agreeableness during their 30s and continued to improve through their 60s. This even happened among men, which debunks the concept of "grumpy old men," Srivastava says._
A way of putting it is that some beginners are fanatical about the rules they follow. I find they have strong opinions on using, say, a string template, or SOLID, and are judgemental toward those who don't.
The seniors can be extremely cocky, but they know when to follow or break rules. They may have strong opinions, weakly held. They'll proudly challenge something to see if it holds weight, but back off once they're proven wrong. There are some special cases too, like people who are beginners in project management but experts in development, and they can have fanatical opinions on project management.
> The beginner/mid group seem to have a lot more confidence in their skills and not be as aware of what they don't know.
This might be misinterpreted, though. A common piece of interview advice is to underline your key strengths.
A candidate rightfully plays their game by showing expertise and mastery of the things they believe they know, and tries to at least appear worth hiring.
It's up to the interviewer to try to move the conversation to a topic the candidate is not very familiar with and see how they react and handle it. In general, it is down to the interviewer to go beyond the "oh yeah, I know that" and ask specific questions that demonstrate actual understanding.
But then again, it's fair game to present your best self during an interview. You're literally selling yourself to a potential employer.
That's what I've always done in interviews. I'm confident about the things I know well, but I'm extremely open about the things I don't know and would like to improve.
Also consider that juniors are all bright-eyed and full of optimism, and certainly tend to idealize things at first.
Then experience, hardships, victories and org politics teaches them the 'real' game and hopefully they use that experience to their advantage. You become a cautious optimist. And not a cynical ol' grumpy bastard like me.
On skill development: I mainly think of it as an exercise in emotional management, just like procrastination, or physical activities, and so forth.
It doesn't seem to ever become easy for me. I am always running into challenges and difficult obstacles once I overcome the procrastination issue. I am bullheaded about it, so I'll eventually get through. That, more than any particular strategy for learning, is the most important thing in skill acquisition.
Another component of skill acquisition is skill retention. You'll get rusty when you don't practice your skills, and given enough time, you'll lose your progress. This is, probably more than anything else, a lifetime commitment, so that you don't lose the knowledge or skills you picked up, and so that in the long run you become better and better over time.
Think of it this way: you learned 10,000 useful "stable" facts about the world one year. Next year, you learned another 10K facts. But forgetting is exponential: practice long enough and the facts will last a lifetime.
Suppose a person has no strategy for retention. Let's say he learned 10K facts and 75% decayed away. Next year, he learned another 10K facts, but another 75% decayed away. So he retained only 5,000 facts, while you retained 20,000 facts. Obviously, this is contrived, but it does illustrate why colleges and high schools aren't very effective. It's not that they don't teach; it's that the students forget what they learned as soon as they finish a course.
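A toy model of that contrived arithmetic, assuming each year's facts decay 75% once unless you have some retention strategy ('practices' here stands in for anything like spaced repetition):

    # The 10K/year and 75% decay figures come from the example above.
    def retained(years, facts_per_year=10_000, decay=0.75, practices=False):
        total = 0
        for _ in range(years):
            # Without practice, only (1 - decay) of each year's facts survive.
            kept = facts_per_year if practices else facts_per_year * (1 - decay)
            total += kept
        return int(total)

    print(retained(2, practices=False))  # 5000: no retention strategy
    print(retained(2, practices=True))   # 20000: practice keeps each year's facts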
I often get interested in a subject and then drop it at some point. This often results in a surface understanding of the subject, just enough to sound smart. However, the knowledge is retained. Should I ever return to a subject, as I sometimes do, I don't have to relearn it from scratch.
Sometimes I was able to keep at a subject long enough to acquire some amount of true mastery.
> the students forgot what they learned as soon as they finished a course
A lot of times this is enough to dissuade me from learning about anything I have only a passing interest in. One time I blocked out an entire two months of free time after school in an attempt to learn electronics by assembling a commonly produced headphone amp. A month after classes ramped up, I had forgotten almost all of it, and moreover no longer cared. I still just don't care about electronics since then, honestly, and don't know why. So in my book that was just two months spent doing nothing that would ultimately contribute to my future knowledge. I mean, it was an attempt, but only an attempt. I never got the circuit to work in the end, either. If I had known that was the result ahead of time, I would have just worked on a programming project instead, because I was better at programming, I wanted to finish it, and I was competent enough to actually finish it.
Unsurprisingly the majority of my college education turned out exactly like this also, but at least I got the paper in the end.
Nowadays it feels like all I care about are the things I care too much about to ever forget, like the skills necessary for my job. As in, not knowing what the point of learning all this is if I don't care enough to remember it after a month. I feel this is a horribly limiting mindset for me to have, but I don't see any way around the "not caring" part. If I know I don't care, nothing I do to sweeten the deal like gamification or dreaming of the finished project will ever work, because I know I'm only doing that because I don't care, so my mind will defeat itself.
I especially don't like it because all these people around me are learning about machine learning and all these bleeding edge technologies that are having a material impact on the world, and I just sit there unable to care, as if I'm somehow allergic to learning. It becomes a "so you're just going to deny the existence of the entire field of X because you don't care and sit around playing video games instead" kind of irrational thing. I don't know how to resolve this, or if I even should.
One thing I will never forget is me talking to someone I knew for a long time and asking what they were doing, and they said "learning about nuclear fusion." For some reason I never felt brave enough to talk to them again since then.
You're totally right that "motivation" is probably the key to learning anything. We're taught that kids have aptitude for different stuff from birth, but I think the curiosity and intrinsic motivation parts are the most important. If something is fun enough you spend the time to become good, and then people say "oh how talented you are", but in reality you've just spent a lot more time working at it.
One way I've seen I can "hack" it myself is if I stumble on things that are actually fun, and they become a "wedge" into an area I was too shy to try out. For example, "Kerbal Space Program" taught me way more about rocket science than I'd thought possible, but more than that, once I understood the core concepts and could play around with them in my mind, it actually did become fun. I still remember doing some Hohmann transfer calculations to figure out when I should launch to get to Duna (KSP's Mars) with the limited tech I had unlocked in the game, and a sudden realisation hit me: I'm actually doing rocket science math for a game... This got me into reading about rocket engines, and I ended up reading through the whole of the "biblical" Ignition book just for fun.
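For anyone curious, that math is only a few lines. A back-of-the-envelope sketch of a Hohmann transfer's delta-v, assuming circular coplanar orbits, with Earth/Mars numbers as illustrative stand-ins for the Kerbin/Duna case:

    import math

    MU_SUN = 1.327e20          # sun's gravitational parameter, m^3/s^2
    R_EARTH = 1.496e11         # Earth orbit radius, m
    R_MARS = 2.279e11          # Mars orbit radius, m

    def hohmann_dv(mu, r1, r2):
        # Burn 1: leave the circular orbit at r1 onto the transfer ellipse
        dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
        # Burn 2: circularize at r2
        dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
        return dv1 + dv2

    print(hohmann_dv(MU_SUN, R_EARTH, R_MARS))  # ~5590 m/s heliocentric delta-v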
Now I'm a web dev, about as far away from aerospace as you can get and still be somewhat of an engineer, but it opened up a world of interesting news, data, discussions and theories at the bleeding edge of science, for which I'll be eternally grateful.
I guess what I mean is that it's possible to start caring about something outside your domain expertise; you just need to find something that's fun enough to keep you there.
I agree with a lot of your points, and it's one of the reasons I haven't focused on learning Angular, React, Vue, or any of the Javascript 'build' systems. I don't have a pressing need to use any of them and they will be replaced by something else in a few years.
Regarding machine learning, that field has been hijacked by the hype train and over half of what people post online about various ML techniques is just wrong.
'Allergic to learning' may actually mean that you are immune to bullshit. In that case the best approach for learning a topic you care about is to go to the seminal work / first papers and focus on understanding it and reproducing it, then evaluate for yourself whether the reality lives up to the hype.
You would probably care if you found the utility of the knowledge you're about to acquire in order to accomplish a goal. Or at least that's how I operate. I don't care about machine learning either, because I don't have any personal project or anything that I am itching to accomplish with it.
Writing software is a bit like playing multiple sports. You can be good at one thing and poor at another, and as a result do great work on one project and be mediocre on another.
Writing code in the small, optimizing; architecting for change; architecting for scale in development; architecting for scale under load; architecting for scaling out vs up; all different skills.
Writing code functionally, vs procedurally, vs message oriented. Writing code with control flow vs data flow, and toggling between them. Crafting abstractions vs composing them. Many small parts put together elegantly vs one straightforward transparent monolith. Some skills are alternates, you can go either way and get as good results.
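A tiny illustration of two of those toggles, using the same task both ways (my example, not the parent's):

    # Sum of the squares of the even numbers, written two ways.
    nums = [1, 2, 3, 4, 5, 6]

    # Procedural: explicit control flow, mutable accumulator.
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n * n

    # Functional: composed data flow, no mutation.
    total_fn = sum(n * n for n in nums if n % 2 == 0)

    assert total == total_fn == 56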
There are people who only know a few things. But on a suitably scoped project, that may be fine.
Sounds like you're contradicting the analogy but not the main point.
Many people would be completely happy with bowling a 160. In many contexts that's a great outcome.
But if you as a developer want to break past the equivalent stage of (beginner) expertise in any of the possible specializations or fields, you have to understand how to keep learning past the "know enough to be dangerous" phase.
> Writing code functionally, vs procedurally, vs message oriented. Writing code with control flow vs data flow, and toggling between them. Crafting abstractions vs composing them. Many small parts put together elegantly vs one straightforward transparent monolith. Some skills are alternates, you can go either way and get as good results.
Many developers stop at one paradigm and avoid stretching themselves further out of discomfort. Other developers learn a new paradigm but never reach the point of understanding its limitations, while having convinced themselves that they're an expert on that paradigm. From what I've seen it doesn't really matter which paradigm, language, toolset, or discipline you're talking about, this particular behavior happens all the time, and it matches the "expert beginner" label pretty well.
Correction: these are not all different skills. Some of them are domains and subdomains; others are approaches; others are both approaches and (sub)domains. They are all mixed up in your answer, which I found to be very insightful, by the way.
Although I am admittedly far from an expert in the field in question, I am not a layperson either. I say the types of activities you mentioned are of the same group of skills, in different domains, which often require a different but similar set of approaches (based on both convention and the subtleties of the domain).
For instance, making wrist watches, grandfather clocks, steampunk sex dolls, and the Antikythera mechanism are different use cases and require different types of mental projection and organizational methodology in execution (approaches), but the skillsets have an undeniable amount of overlap. They overlap with each other more than they exclude. In SQL terms, the inner join of these skillsets returns far more rows than the parts only a full outer join would add. Therefore you can say they are of the same skill set, just as abdominal flexibility work helps with running, and bicycling hill climbs help your breaststroke. Different muscle groups, but with many kinds of synergy: all part of physical fitness.
Source: 10 years as a learning specialist in a business setting, and aspiring triathlete.
And it doesn't contradict the article. If you have always worked in one paradigm and think you are an expert in it, learning new development skills can make you see you were not that good in your previous area.
I consider working code the most reliable evaluation of expertise. Something can be beautiful, but if it doesn't work, or doesn't solve the business problem, it doesn't count. If something objectively scales, I believe the people responsible for building it are able to build things that scale.
If those people learn new things, they might learn how to do the job better. But maybe better isn't what the business needed. Maybe it would be better to invest that extra talent in something else, and hire people with the original skill level (who may be cheaper) to do the original thing.
This all might sound like an apologia for not learning new things, or limited developers who can only build a few things. In some ways, indirectly, I'm arguing against an unfounded arrogance or feeling of superiority which is driven by following the fashions of programming as a pop culture. But more of what I'm trying to get at is that the "best" isn't actually required, a lot of the time. And sometimes the best can be the enemy of the good; doing things the "right" way, according to an orthodoxy, can actually interfere with getting things done. And newcomers to orthodoxies can be - usually are - the most religious.
I've seen LOTS of beautiful code that doesn't work... in the real world, usually because it doesn't check the assumptions it makes. Is the data what is expected? Do the code's parameters span the space they should? Does the code inform the user of errors usefully and gracefully? Is it well documented, intuitive?
In my experience, when it comes to code, beauty and durability are natural enemies.
I’m not sure the “10,000 hours/5 years” rule is a particularly relevant one, these days. Tech mutates so quickly, it's near impossible to become an "expert." Also, many developers are...how can I put this...a wee bit obsessive. It’s quite possible to hit 10K hours well before 5 years.
I liked the analogy about the quirky bowling style.
That has happened to me, several times.
I’m primarily self-taught in most of my tech. It has tended to result in very highly-developed, but narrow, skillsets. Not necessarily a bad thing, but “brittle.” It has happened so many times, that I no longer feel that I am an “expert.” I am now “experienced.” At least the first five letters match.
But it has been a long road to where I am now. Humility has been forced upon me, and I now have a lot of “narrow skillsets,” to the point that they inform each other. For example, a lot of the stuff I did in PHP has helped inform my work in Swift, and vice-versa.
I’m choosing to specialize in a specific discipline and tech stack, which, just by itself, gives me a lot to learn. I am working to develop a “broad base” in a small-ish venue, with a lot of “sharp peaks.”
It’s really humbling. The more I know, the more I know I don’t know. Also, since I am constantly trying out stuff I don’t already know, I’m a perennial “n00b.” Usually, this manifests by not always being aware of the jargon (I know the tech, but not the name). This may result, I suspect, in my being treated rather shabbily by folks in the field (It may also have to do with my age. I have found that the way people treat me changes radically, as soon as they find out that I'm "long in the tooth," so I now make it obvious to avoid that). This has helped me to just keep my damn mouth shut, and open it only to eat my humble pie.
I haven't hit that point yet. It can get pretty close to overwhelming how quickly people are building new things and it's often not clear to me what problems the new things are solving (which likely means that I don't have those problems and shouldn't even worry about those new things).
Do you remember how it felt when you had that insight that tech is not mutating as quickly as it may seem?
I've described it as the "disappointment of knowledge" in the past.
It's like hearing all the microservices hype, sitting down to read about it, and thinking that it's "just" SOA slimmed down, with some of the more complex vendor offerings replaced. And that both are "just" Smalltalk-style message passing that happens to occur across multiple machines. Or that Kubernetes is just a bunch of nodes running processes with Docker as a process manager (reading the paper on its predecessor, Borg, was very enlightening). That C# is "just" Java that has been cleaned up.
The disappointment is that there is all this hype that would imply the world has changed and you realize that it may be a serviceable tool, but it is "just" something you've seen before remixed a little.
It's one of the reasons that I have a hard time getting excited about learning another web framework or another C++/Java-ish programming language. I can learn by diff'ing pretty quickly, so I tend to read broadly, looking for stuff that looks highly differentiated. I tend to burrow deep into things that are either of immediate use or that will provide a different worldview (for example, learning Prolog will probably teach you a different way to look at problems, whereas learning, say, Vue.js after having done Angular might teach you a useful tool, but the worldview is more or less the same).
What I got from it, is that there's a solid "baseline" of technology/technique that tends to remain fairly constant and relevant.
It will often be "discovered" anew, and repackaged with jargon, but the basics are still the same.
My experience is that the mutations are more of an "accretion," rather than a change. New stuff is added. An old technique might be formalized (like SOLID isn't actually anything new, but the definition is relatively new), which is really a way of "adding" to the older technique. We find ways to combine "classic" techniques, or specialize/derive from them, to provide a different service.
It's hard to find teachers that will keep up with the times. As someone who has done training, I can tell you that creating a course is a fraught process. It has to be correct. That means a lot of testing/review, and often "after the fact" fixes and refactoring, as students (invariably) find problems.
Keeping it updated is a pain. Also, it's entirely possible that the entire class may need to be binned, as the tech becomes irrelevant (I have a whole bunch of DU -Apple Developer University- certificates that are pretty much worthless).
Also, scope and scale. Projects are much bigger, nowadays, than they used to be, and we have found ways to apply classic techniques in new ways. Instead of learning how to write a device driver, we learn a device interface SDK. That's not a bad thing (as long as we choose a good SDK), as it frees us to do a better job on the higher levels.
One example I give, is that I started Apple programming with MPW[0]/Pascal (not Object Pascal)/ASM. It could take a week or two to produce a relatively simple GUI app.
Nowadays, with Xcode and Cocoa, I can spin up a fairly full-featured app in just a couple of hours.
Very little of what I learned with MPW will help me today, but the discipline that I earned is quite valid, and much of what I had to do by hand, is now done in the framework, so I do have a bit of an understanding of what is going on "beneath the sheets."
I see - strong fundamentals are always worth cultivating because they are invariant under shifting fads and trends.
The thing that I find difficult is how quickly trends and fads shift in this industry. I came out of mathematics, which moves much more slowly than tech in terms of what's fashionable. I think it's a function of the sheer number of people working on tech compared to math.
As said in another comment, PART 2 (link at the bottom of the page) is where this begins to add value compared to the existing theory.
That being said, it's a great article that describes well what I personally experienced at an SME that was bought by a big tech group at some point:
- When I arrived, the core team was already there, some with personal ties. A few beginners had taken the mediocre manager as a mentor, who had himself taken his own manager as a mentor.
- They learned by themselves, mostly in-house, and they responded to diverging opinions with anger.
- Over the years, a few 'outsiders' arrived and tried to change things by showing the problems and explaining that the outside world does things differently.
- But each of them had to face the seniority argument, reinforced by the group effect, used to justify that the core team couldn't be wrong.
(Listen, we are 3, you are 1, so we are right and you are a pain in the ass for thinking otherwise. It doesn't count that you say people outside the company are unanimous on the subject...)
The worst (almost funny) case I remember was that:
- Person 1 (p1) and person 2 (p2) have a disagreement on how something has to be implemented.
- p1 is the boss's favorite, so he is always right... so his solution will be implemented. No one listens to p2, who says it is an inefficient idea, possibly problematic, and that this is why no one outside does it this way.
(No one? In fact they found one case out of 100 outside projects that did it that way, so that gave them confirmation that they were right...)
- p1's engine solution is implemented and p2 has to build a component working with it, but it does not work well at all: very slow for basic operations, lots of unexpected deadlocks, and issues like that. Another manager complains about the issues.
- p2 decides to try reimplementing the engine with his solution. It is completed in no time and it is excellent: 1000x the performance, no lockups, no more issues with basic operations.
- The results are shown to the mediocre manager. But instead of accepting them and going that way, he blocks the thing; he can't accept that his favorite was wrong. So he says there must be a bug in p1's implementation and gives him as much time as needed to test and look at it.
- After a month, p1 has done everything he could to fix his solution without using p2's. He comes back proudly: his engine is now 10x faster than initially.
- But 10x vs 1000x is no match, and p1's solution was still riddled with issues. So it is finally p2's solution that is used, 'out of choices'. BUT, as the manager still does not accept that he and p1 were wrong, he says: OK, we use p2's solution, but you will have to embed and support p1's implementation in the final product. Not to be used, but maybe one day...
- The conclusion of the story? Evaluation time arrived. Did p2 get a good eval? That would be logical: he saved the product and gave an important performance and stability boost to the solution. But no, he got the worst! The manager said that p1 had to take antidepressants because of p2... not because his own solution was wrong and he couldn't accept it, learn, and improve from it. Buuuut: p1 got the maximum grade!
I think the main problem is that most businesses are more than OK with mediocre developers. There's only a small number of companies on this planet where boundaries are being pushed to extremes and where it takes top devs to keep going further and further. It's like football: there are lots of kids playing football, but we only need a few hundred, maybe a thousand, at the very top of it. And getting and keeping yourself at the top requires very different mental capabilities than getting from mediocre to good.
I agree, except instead of 'mediocre' I'd use 'competent'.
As long as technical wizardry doesn't add significant value, or it isn't essential to the business, most companies would rather not pay for it. They believe they don't need experts. And most computing folks at most companies know that rising technically is not the way to advance your career. If it were, more companies would be crying out to hire experienced 'expert' 50 year olds. But they're not.
From what I've seen, expertise is overrated. And competence is underrated. Adding real value to an enterprise comes from achieving multiple avenues of competence (technical, business, social, etc), and then anticipating the needs of the enterprise before experts have to be drafted to rescue a misdirected effort (or a disaster that never should have happened if sufficient competence had been employed).
A lack of emphasis on expertise is probably a good thing. Answering clever interview questions well, like taking tests well, is at best a surrogate measure for street smarts -- the skills that really matter outside academe.
Very true. I knew a C++ developer who knew the language inside and out, incredible skill, 100% on a written test where most people get hired with a score of 60%+. He offered very little to the company other than optimising the build to run faster. He just wasn't motivated to solve the problems the company needed solving or engage in the product. Unsurprisingly he wanted to play with C++ all day long.
I also knew a developer who came from a coding bootcamp, adequate CS fundamentals and basic knowledge of JavaScript and Java. He was the most productive developer I've ever worked with. Laser like focus and great social skills. He occasionally needed guidance from more technical developers but he knew when to ask for help and he got the job done.
Identifying the good developer with the frequent job hopper and the expert beginner with the developer who stays at the same place seems rather unfounded. I have also seen the job hopper who left just before it became clear that his architecture was actually not all that great. Also, wanting to use the standard stuff vs. rolling out your own is more a thing of incentives. If you are a frequent job hopper it is nice to learn something standard. If you stay at the same place you will have to suffer from the standard boilerplate that the not-so-great framework requires and that has to be typed over-and-over. It is more a matter of incentives than that one thing is necessarily better than the other, I think. One great advantage of the programmer who stays is that his decisions are backed by skin-in-the-game. S/he has to suffer through the consequences of his choices.
> As such, Advanced Beginners can break one of two ways
Perhaps a more clear way to think of that is in terms of comfort and bell curves. The left extreme of the bell curve is easy to both identify and understand for anybody beyond the left end. Those are the people at the low end who perform below accepted baselines.
Less well understood are the people at the right end of the bell curve, who drastically outperform other developers. In all objectivity, the people at the right end of a bell curve have about as much in common with the median population swelling the middle of the curve as the supposedly incompetent people on the left end do.
Think of this in terms of risk and popularity. When you are below the middle of the bell curve everything to the right of you is better performing. You can increase your performance by moving closer to the middle of the curve by doing what is popular.
Once you get to the middle of the curve you have to make a firm decision: remain comfortable in your current posture and let popularity dictate your approach or take risks to further increase performance beyond the population median. In that regard the populations at both the extreme left and right ends of the bell curve are doing things that are extremely unpopular. You cannot become an expert without fully embracing that reality.
An expert beginner is a person at the median of the bell curve or just to the left of it. They have mastered their skills to that point and refuse to make changes or take risks necessary to advance further.
As a front-end developer I frequently encounter expert beginners: people who have mastered some framework/convention but don't really understand how their technology actually works, and who are hostile to improvements that don't make use of their favorite framework/convention. To outside observers the expert beginner is clearly identifiable when performance is objectively measured with numbers, without regard for approach.
I don't really buy the "expert beginner" thing. How does it relate to having "T-shaped" skills? One is frowned upon and the other is desired, for some reason.
The reality of programming is that more often than not you do not need to be an expert to do a job well. I would even say that being an expert programmer is all about sticking to basic things. The more you try to be smart and "on the edge", the more you, or whoever inherits your code, will fail. If I got $1 for every trendy abstraction or framework, I could not retire, but I could probably buy a new PC or something.
IMHO it all boils down to ego trips. Everyone thinks he is the superstar, ninja or 10x, and everyone else is just a poser. "I'm the smartest one" should be the motto of IT. If you think that you are the one that can push the whole community forward, then you are probably delusional. I think I'm a moderately advanced developer, but some of the people that I met along the way were just crazy smart, highly productive and - here is the surprise - nowhere near the top. This delusion of grandeur is especially common in people that are pampered in the business. If I named them, I would be instantly downvoted, but just stop and try to imagine which people I mean. I'm guessing you won't have any problem.
In the end, your project does not need you to be an expert, your team does not need it, the business does not as well. Only your ego.
The problem is that people in industry like to massage developers' egos because they need them for their own ends.
You need to learn to judge yourself objectively rather than by what people say to you. Measurable stuff: how many times does your feature come back from QA? How many times did you come up with a creative solution during a project that led to a net positive effect? How long did it take you to become productive in framework X versus your peers? On the micro scale, compete on the Codewars website and look at other solutions to the problem. Often you will see creative things that hadn't even crossed your mind.
This contact with reality is harsh, and you can always make up excuses, but it's necessary, because the people around you don't tell the truth. After that, it's important to realise that your technical skill isn't your only asset: working on interpersonal skills is just as valuable, if not more valuable, above a certain threshold of technical skill.
My problem is that I'm expected to learn so many things that I never have time to become an expert in any of them: HTML, CSS, Sass, JS, Vue, Vue Router, Vuex, React, other React libraries, Lodash, Jest, webpack, Python, Node.js, Java, SQL, Oracle, Postgres, Bash, regex, Docker, Kubernetes, AWS, Azure, etc., ad infinitum.
Looking at that list I'd say: learn HTML, CSS, JS, the browser APIs like the DOM, one backend language, SQL, and Linux sysadmin. IMO if you understand those, you get a lot of the rest by looking at where they fit, and you can build a solution without having to use a lot of product-specific APIs.
Oh, I already “know” all of these things (I’ve been doing web development since the mid 90s). But it seems like as soon as I start to get a decent grasp on a technology, a new version comes along, or it is replaced by something new, or my boss/company directs me to use this other technology, or the industry as a whole moves. SVN > GIT, jQuery > Angular > Vue/React, JS > ES2015, managing our own servers > the cloud, etc. Jack of all trades, master of none.
It's like saying you are expected to ride a bicycle, electric bike, a scooter, a car, a truck, a minivan…
Not exactly, of course, but there is a huge overlap, especially in concepts.
It's not knowing how to "ride" all of those; it's knowing how to "fix" all of those. And that's where it gets tricky: while each might be similar in some way (they have wheels and maybe an engine), the devil is very much in the details.
I didn’t really find anything I felt like specializing in for the first decade-ish of my career. I was working on games, I don’t think 3D math is that cool, and I wasn’t excited. I was tired of coding games over and over. I don’t think I was an expert beginner but I wasn’t an expert in anything I wanted to continue doing.
I took a break to be a manager for a while, which turned out to both be an incredible learning experience AND enough pressure relieved from the daily code grind that I was able to rekindle my passion for code.
Started writing a lot outside of work, realized I really like cryptography, studied that, became competent enough to even figure out what TYPE of beginner I want to be, etc. it’s very satisfying.
I’m around four years into that journey now, I’ve returned to a coding path where I get to actually learn while building, and I’m excited.
The expert beginner is a surprisingly easy trap to fall into. I also think we do this TO people when we trap them in a role and give them very little flexibility in execution.
They will begin to specialize in doing a bad thing well without understanding what they’re doing, especially if they aren’t given mentorship.
It's easy to fall into, since it's usually what's expected day-to-day. Not many people get the chance to dive really deep into a topic at most companies; usually you're churning out fairly similar stuff for most of your tickets. It's also dangerous for your career if you pick your specialization wrong: maybe 5 or 10 years later that technology is obsolete. Earlier in my career I was something of an (Adobe) Flash expert, but there's no way I would even mention that these days. Much of it translates to other technologies, but not in a way that makes me an expert in them.
YO, same, I bet we hung out on Kirupa at the same time haha.
I worked on Flash games at Disney, PopCap, Sony and Amazon. That was the shit for a while for 2D games. But I went from social to mobile to PC games and found myself less and less interested. Don't get me wrong, it's very cool code, and quaternions solve a lot of the pain, so some of the math is frankly less fiddly than 2D, but I wasn't in love.
It’s been nice to fully pivot towards cryptography and security since I get to work on a lot more projects and do a better job on them. No one cares about bugs in games. Everyone cares about bugs in crypto. It’s nicer for the engineers.
Plus it's frankly so deep that I have very little risk of mistaking myself for an expert. Even PhDs have only a narrow specialty; it's very comforting how hard it is.
Similar to the rise of the StackOverflow programmer. You can get pretty far just by Googling error-codes and brute-forcing a problem in a domain without any experience (I should know).
Maybe not the best way to learn, but in such a technologically complex world, it can sometimes be more effective...
Even when I know how to potentially solve a problem I often search it on StackOverflow anyway because I know often I'll find a much better solution from developers who are either more talented than myself or have simply spent longer thinking about all the edge cases.
What is a good developer? Isn't there more than one axis? In my case, over the years I have honed my skills at problem solving, seeing the big picture, and finding the most efficient technical solution from a business perspective (not at GAFAM scale, obviously, but that's the exception). As a result, I can save companies a lot of time and money, because I help reshape the problem and rework the scope to find the maximum added-value-to-work ratio, delivering a working solution in days instead of a big year-long project with all the overhead. My productivity is high because I know where to cut the crap.
More than once I have seen a peer, or a whole engineering department of brilliant people, going down a technical rabbit hole for weeks for intellectual satisfaction, where instead I will just take a step back, walk to the manager, and try to work out a different take on a solution that can be implemented in a few hours or days, even if the business goal is slightly moved.
Yet I'm definitely not a good professional software developer in terms of code "quality", and don't consider myself very smart compared to my peers.
You nailed it in the first sentence - businesses don't need developers, they really need problem solvers - but we often don't get a seat at that table to turn around a problem before it lands on our desk as a request to code some poorly understood idea someone had.
1) Identify the problem - it invariably isn't to do with code but a combination of time and cost.
2) Can the problem be solved without a line of code, such as changing the process and removing the issue that way?
3) Is there an off-the-shelf application, software or component that will solve the problem?
4) Maybe you do need to code something.
We are also all too quick to dive straight in at (4), because that's what we identify as, that's what we enjoy, and unfortunately it's what we get paid to do, rather than getting paid to think, advise and add value.
Has anyone ever seen a medium-to-large-sized company, in any industry/vertical, that's solved the "[Engineers] often don't get a seat at that table" problem?
Anecdotally, because I've brought up this issue a lot, I've heard many responses from both sides. From some engineers, not in the room: "I would never want to be in the room, because then I'd have to work with those people". From other engineers: "They'd never want us there, we're just resources to be used."
From the non-engineers who are in the room: "we didn't know you wanted to be there, of course you're welcome", or "we didn't want to interrupt your work for such a high-level meeting".
I find that the reason for exclusion is either incompetence (in which case, get out) or a lack of awareness that certain engineers are exceptional sources of opinion in those rooms.
But I've never seen a culture that's pushed engineers into those strategy meetings; it's always been opt-in rather than opt-out. (It's always opt-out for the product org).
In my experience I don't often hear: "They'd never want us there, we're just resources to be used."
But: "Damn those stupid meetings again, wasting my time."
It's like a lot of developers don't consider meetings part of the job. I am currently less and less into the actual typing of code and much more into understanding what others are trying to accomplish.
As engineers, we are spoiled in that we are accustomed to (1) navigating high signal:noise ratio environments (2) systematically improving the signal:noise ratio in environments where it’s poor.
On (1), many corporate meeting rooms have terrible ratios, and so they intuitively feel unproductive. On (2), it’s much easier to improve the ratio in code than it is in words spoken by people.
It requires a lot of patience, but in the long game, engineers do improve that ratio in meetings and strategy discussions, and the company benefits from it.
My theory is that most engineers are not interested in playing the long game, because (1) it’s not as fun (2) they don’t plan on being at the company on time scales where the investment will have been worth it.
While I personally don’t emulate their behavior, I think the developers you describe could conceivably be acting perfectly rational for their career goals.
Some ideas that look poorly thought out are in fact pretty good, just different from what we would do ourselves.
In my experience it is a key skill to objectively evaluate ideas that differ from my preferences and accept those that are good enough, even when they are not my preferred way of doing things. This is a quick path to getting a seat at the table and getting your opinions heard. My 2c.
> Some ideas that look poorly thought out are in fact pretty good
The only times I've found that to be true is when there are other details available that I'm not aware of. Business constraints "everyone knows" about but that aren't documented anywhere, for example. Decisions made in a meeting that aren't communicated out, meaning, essentially, that you are only given a portion of the problem and the request. This gets back to the 'seat at the table' concern above.
The 'seat at the table' is far more related to interpersonal/political skills than technical ones, but can be chicken/egg in some places.
My opinion is certainly skewed by my personal experience, so take it with a grain of salt.
That said, engineers are a highly opinionated bunch and a group of 5 will have 5 different personal preferences. But they are smart and will quickly recognize an objective opinion that is not hard-driven by personal preferences. Folks with good technical judgement who are willing to accept a different approach quickly become a highly respected member of the community. They frequently end up with more tables offering them seats, in a technical expert role, than they care to occupy.
This is one of those places where strong opinions, loosely held should be applied. And I can imagine those people getting more invites simply on the basis of possibly being more agreeable to interact with.
Programming is an abstraction (a perspective on what the computer is really doing) that is very much shaped by the tools we choose to use. Meta-religion is probably a fair way to describe it, and can be every bit as divisive in some circles.
I agree with your sentiment and yes, a workable solution is always the best solution. Idea was probably the wrong word, it's the highly vested, fully formed solutions that I've had issues with.
Also, in saying that, I do believe domain knowledge is far more valuable than programming chops, and not understanding the domain well enough to translate it into an application is worse than someone with domain knowledge trying to work out how it should be coded.
Some of the best projects I've worked on involved a lot of sitting and drinking coffee with the domain expert.
I think the problem is that getting the information from the domain expert and then running and programming with it is doable, but the domain expert cannot ask you for advice on programming and then build an application (at least not in the same timeframe).
Those were exactly my thoughts when I started reading the article. If you have a great developer who will leave at the first sign of trouble, maybe you're better off with an average developer who will understand the business needs and will "averagely" do their part in solving whatever problems exist.
Most businesses don't need "great" developers. They just need to get the work done.
Getting someone who is really skilled can of course be beneficial, but it comes with the problems of retention: providing interesting challenges, high compensation. Not all businesses have deeply interesting technical problems. That's fine. There is plenty of use for the wide part of the bell curve, for both businesses and developers.
Yep. I once worked at a manufacturing company. Their clients sent order information via an FTP server. The orders were formatted as flat files, i.e. just simple text files. I had to write code that painstakingly worked through each format and imported it into the new ERP system; a sketch of what that looked like follows below. Every vendor's flat file was different. I'm sure some had similarities, but abstracting the logic was more dangerous in this case.
A lot of business software is boring. It does not take skill to do. It only requires someone to know a handful of tools and be willing to put in the time. So my advice to non-FAANG developers: if you want to make development your career, learn to be bored. Still work on marketable skills, learn new languages, etc. But remember that not every task will be an interesting new technical issue, it's probably going to be something you've seen a hundred times before.
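(A hedged sketch, in Go, of the kind of vendor-specific flat-file parsing described above. The layout - fixed columns for part number, quantity and ship date - and the vendor name are invented for illustration; the real formats and field meanings were of course different for every vendor.)

```go
// orders.go: one vendor-specific flat-file parser of the kind
// described above. The layout is hypothetical: part number in
// columns 0-9, quantity in 11-15, ship date in 17-24.
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

type OrderLine struct {
	PartNumber string
	Quantity   int
	ShipDate   string // left as text; the ERP import handled dates
}

// parseAcmeOrder handles one (invented) vendor layout. Keeping a
// separate parser per vendor avoids a leaky shared abstraction
// when the layouts drift apart over time.
func parseAcmeOrder(sc *bufio.Scanner) ([]OrderLine, error) {
	var lines []OrderLine
	for sc.Scan() {
		row := sc.Text()
		if len(row) < 25 {
			return nil, fmt.Errorf("short row: %q", row)
		}
		qty, err := strconv.Atoi(strings.TrimSpace(row[11:16]))
		if err != nil {
			return nil, fmt.Errorf("bad quantity in %q: %w", row, err)
		}
		lines = append(lines, OrderLine{
			PartNumber: strings.TrimSpace(row[0:10]),
			Quantity:   qty,
			ShipDate:   row[17:25],
		})
	}
	return lines, sc.Err()
}

func main() {
	flat := "WIDGET-01  00012 20200115\nSPROCKET7  00300 20200122"
	orders, err := parseAcmeOrder(bufio.NewScanner(strings.NewReader(flat)))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", orders)
}
```

One small, boring parser per vendor duplicates some code, but when layouts drift independently, that duplication is cheaper than the wrong shared abstraction.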
> A lot of business software is boring. It does not take skill to do.
I guess it depends on your definition of "skill". ERP systems are complex, full of problems, inflexible and managed by bean-counters with "battle-axe" personalities. If you look at the work holistically, it does take skill and experience. There is a lot of room for improvement in these systems but the problems involve people as much as they involve technology.
>So my advice to non-FAANG developers: if you want to make development your career, learn to be bored.
Is life at FAANG really that different, though? I can't imagine that everyone is working on exciting stuff all the time. I would appreciate it if someone could enlighten me.
I can speak for only one of the FAANGs, but the work I do there is definitely the most exciting to me personally from all the companies I've been at. There are challenging technical problems that require discussion with other engineers to design a solution. In most companies I've been before the challenges were mostly organizational and I had to deal with all of the "agile" crap to please PMs and managers, while the actual technical work was boringly easy.
It's not always rainbows and butterflies here either but I do feel much happier with the work I do here.
Maybe, if you have a lot of money lying around. But in my experience these enterprise forms of development lead to ballooning team sizes and mediocre software.
The same software, built by a small team of experts, would cost less and be of higher quality.
I agree with this. It's probably better to have an average but good developer who is reliable and consistent than a brilliant developer who risks overengineering the product or system, burning out, being poached, or seeking new pastures.
It might also relate to intellectual satisfaction. A "great" developer could be hungrier in that sense: they need worthwhile projects to challenge their skills, and may get bored more easily.
That's been my experience as well. I've worked with some clever and / or highly effective (= fast) developers, but in practice the applications ended up overbuilt (like the tech lead who spent six months under water, only showing up to the office maybe once a week, while he was building a framework), or applications with a lot of features but missing the foundations (e.g. microservices architectures but with no end-to-end tracing, testing, interprocess communication standards, logging, etc).
These are not the qualities of an expert according to the Dreyfus model (as referenced by the article). According to the model, as you gain skill you also gain the capability to bear responsibility. A beginner programmer can be responsible for programming to a clear spec or wireframe. A senior dev can be responsible for making the client happy. It's not obvious to a beginner, but those are very different tasks.
If you spend 6 months working on the wrong thing, you aren’t taking responsibility for delivery. It’s a mistake that most smart programmers seem to fall into at some point in their career, when they’re still learning. It’s a classic sign for me of a mid level developer made into a tech lead before they’re quite ready for the role. I’ve seen that happen way more often than I’d like.
If your tech lead is making mistakes like that, your team is operating without a real senior dev at the helm. That’s not insurmountable, but don’t mistake that sort of thing for real expertise.
Why would they? It's still cheaper to have a talented dev overbuild a framework than it is to find bugs in the code of someone who didn't understand the problem they were solving.
When I was starting my career, one of my first jobs had two clear career paths for "programmers": Business Analyst and Software Engineer.
A Business Analyst focused on figuring out business solutions and constraining those to realistic technological limitations.
An Engineer focused on architecture and implementation.
Seems to me you just chose the BA area of expertise... there is a lot of value in that, but the engineering side is just as valuable in my book, because if you have great ideas and plans but the actual engineering is poor (or vice versa), your company is going to suffer.
> More than once I have seen a peer, or a whole engineering department of brilliant people, going down a technical rabbit hole for weeks for intellectual satisfaction, where instead I will just take a step back, walk to the manager, and try to work out a different take on a solution that can be implemented in a few hours or days, even if the business goal is slightly moved.
What do you do when management doesn't want to move their business goal? Now you look like you're incompetent and incapable of just doing the damn thing they asked for.
Asking rhetorically, not as criticism, but because I've been there and it fucking sucks when the people who aren't working on the problem think they might have a better understanding of the big picture than they do.
Ever considered that everyone is doing everything all wrong and you're just feeding into the frenzy?
That's where I've come as a professional. I couldn't give a fuck any more about anyone's problems and I've learned that it has everything to do with creating pathways for people to communicate.
People are difficult creatures and academia has made us into basket cases unable to tie our own shoelaces without help from someone who will evaluate us and dangle a carrot in front of us for the effort.
How about getting over yourself with your corporate lorem ipsum and use the technology at your disposal to fix this dumpster fire of a planet we've made for ourselves?
Software isn't going to fix the planet. That was all bullshit to make rich people feel better about taking most of the USD funny-money. Strong centralized planning in the form of a functioning government, structured around resilience, erring ever on the side of direct democracy, fluid representation, and lifting economic victims over perpetrators in power is what will fix the planet. It's common sense. The software will simply be reduced, so people stop building the same thing over and over: it's stupid, always has been, and the cries for more devs were always just to make it cheaper and more disposable.
Well, it's not like your comment bursts with cleverness either. But yeah, I clearly overdid it... I just got triggered by that presumptuous comment. I'm really sorry, but I have to work with code written by these "pragmatic, quick 'n dirty" people every day. Yeah, the kind of guy who is liked by management because "they get things done"... IMHO people like that can go and write a PoC, or something that can be thrown in the garbage, but not something that has to be maintained over many years by a lot of different people. Please keep them away from anything serious. Their short-term, pragmatic, profitable solution doesn't come for free - it is built on debt left for others to pay in the future... I am not saying there isn't such a thing as overengineering. There is a lot of that too; rest assured I've seen more than enough of it. But overall, I've seen 10x more code that is hard to maintain because it was written by a "cutting-the-crap" guy than code that is problematic because it was overengineered... anyway, that's just my opinion.
I work with an expert beginner and I don't know how he's survived. He's got about 30 years of experience, but writes code like he's got 1 year of experience.
His code has absolutely no sense of quality, doesn't employ any standard design patterns or style, has no semblance of architecture, and is an absolute hacky rat's nest that falls apart with any change because of how interdependent it is.
The other day I was sent some of his updated code, which had no version control and had randomly gained an extra 150 files. It turned out the majority of those files were duplicated from elsewhere in the project, and apparently it was my job to find where the changes were among that mess.
It's like he learnt to program decades ago and then never opened a book or looked at anyone else's code since.
And he's still got a job. So who's the real genius here?
I don't want to be that guy, and I don't want to work with that guy, but I have to confess to a certain jealousy that people like him have figured out how to sit back, phone it in, and collect pay cheques for thirty years.
I've worked with innumerable people like this over the decades. The thing is, they're usually not phoning it in. They're really trying hard. They just suck. It looks like they're working hard, because they often show up early, stay late, ask a lot of questions, and produce a lot of output.
It's hard to fire them, because to many outsiders, they're workhorses. They know just enough to scrape by on each task, but they also have institutional knowledge and social connections that make them seem invaluable. Often that institutional knowledge is that they're the only person who understands their own shitty code. Their more-capable peers usually know that they're incompetent, but management doesn't.
There are many reasons people like this can survive. Peers don't want to explicitly throw them under the bus. The best of their peers simply move on--to other projects in the same company, or more often, to greener pastures at another company. Their peers cover for them, because it's easier to work around a millstone like this than it is to get them removed. It takes months of concerted effort to get a peer fired for underperforming. If you want to try to get someone like this fired, it starts to look like you're the asshole with a personal vendetta. Especially because they've been there for a decade and you probably just arrived.
These millstones aren't outright incompetent. On paper they've got extensive experience. Socially they're well-regarded by everyone at the company except their immediate peers who know they suck. They've been faking it and faking it well since long before you arrived, and they'll be doing it long after you've departed.
Make no mistake, though. These people are rarely phoning it in. They are busting ass to tread water. Sometimes they believe their own lies, but more often they are terrified that someone will find out just how bad they are. That's why they work so hard.
Damn, that's me in 10 years. I'm so bad at my job that my only hope is to go unnoticed long enough that my time at the company becomes a strong reason not to fire me. It's been almost two years so far. The thing is, I really try to improve, and in some ways I even do, but at a painfully slow pace. Even though I've been at this company for nearly two years, I've been working as a software developer since 2012. And it hurts so much to see people coming fresh out of college outperforming me by orders of magnitude. It makes me feel really stupid. But it is a really well-paying job at a somewhat big company, so I hold onto the hope that if I don't screw things up too much, my employers won't even acknowledge my existence. In the end I only feel bad for the people I work with (not those I work for), because they are nice, talented people and it must suck to work with someone who lowers the team's bar.
"The thing is I really try to improve", that statement alone makes me think you are not one of these people.
Don't underestimate the value of slow and steady in the world of corporate development. Many fresh out of college might seem like superstar developers who know all the latest buzzword technologies - I know this is a massive generalisation - but they don't like to work on "boring" or "legacy" code and are often quick to jump ship to the next opportunity.
Also, please be aware that this feeling is a common symptom of imposter syndrome.
Maybe there's a better job out there for you? Either at another company where you'll get mentoring and support for growth, or in an entirely different career. It doesn't sound like you enjoy what you're doing now.
I've stagnated at jobs in the past. I've thought it was my problem. Then I moved on to better places, where coworkers helped each other to excel, where people placed a premium on improvement and education.
If your company shoves everyone into a cubicle and assigns them individual tasks, how are you ever going to learn? If you can get a job somewhere that embraces pair programming, you might find a whole different world. You can be mentored by your coworkers and collectively solve problems, instead of feeling like you need to put your head down and produce something on your own that may be beyond your expertise.
Fresh grads, especially grads out of code schools, often arrive with a lot of buzzwords ready to go, and a script of practices they've been told are the right way, but that's all flash and no substance. It can lead to overconfidence. There's something to be said for it, though; you can fake it until you make it. The important part is that you actually make it in the end, and to do that, you need mentoring. It sounds like you're not getting the mentoring.
I know it's hard to quit a well-paid job and take a chance, but I think that's better than going to work every day hoping you won't get fired, and not growing as an individual.
I actually like this job. We do pair programming regularly, which is the source of most of my worries that I'm not good enough. My team has a dedicated manager whose job is to mentor us in our careers. I like him and he seems to like me; he is always willing to listen and to offer help. I think the company is outstanding in providing a healthy environment. Which is why I feel so out of place. Everyone is so smart and grows so noticeably, and I feel I'm falling behind.
It's a matter of perspective... someone, somewhere in one of your previous workplaces, might have thought of you exactly as you think of these people. So maybe don't judge too much.
This may be an example (a rather dysfunctional one) of a Domain Owner: someone who, for historical reasons, is entrusted with guardianship of a particular part of the local knowledge. The worse the code, the greater the likelihood, and indeed the need, for a role like this. This is clearly systemic, and is a red flag for the whole org/unit.
Whether such a Domain Owner is aware of his standing - that's a philosophical question. Job security on one hand, inertia on the other, compounded by the very environment that gives such a role its support.
Just hope these guys are willing to share the crumbs of tribal knowledge with you, as that is where their true expertise may lie. Side note: where there's this perceived disparity in technical skill, seniority in such an org may trump merit. There could be a way to get such Domain Owners on your side; pointing out their lack of skill is not it - in fact, that can seriously backfire.
Yeah I genuinely don't understand it. His work is universally panned in the team, he's had multiple disciplinaries against him due to low quality work and keeps scraping through.
I could never be that person, I always strive to improve and become better.
Ah. I think it's an example of how you and your employer use different frameworks to attribute value to what your co-worker brings to the table.
Whereas you focus on aspects such as maintainability, readability, adaptability, testability, ... of the code he's writing, your employer might simply keep him around either because his code, well, simply "works" to a satisfactory degree, or because of complex interpersonal relationships, established over decades, that have turned him into a fixture.
Put more succinctly: No one wants to know what goes into making a sausage.
That is, there's little value in explaining or arguing the fine technical details of code optimization, architectural design, functional programming, etc. etc. etc. if you don't consider your audience.
Stakeholders who rely on your co-worker's work simply want to know what his work could mean to them, and how it helps them get the job done. Your concerns regarding code quality are totally valid, but unless you, as the maintainer, will be fully perceived by your employer as a formal stakeholder in your own right, your objections risk being thrown in the wind.
At that point, it's not just a co-worker issue, it becomes a workplace culture issue. If there's a dissonance in the manner in which he's held accountable for the quality of his work by the team on the one hand, and management on the other hand, then how does that same dissonance affect how your own work gets perceived?
If he gets to scrape through, does that then imply that you're putting the bar for yourself really high, whereas you could clearly get away with doing less? Or are you trucking on despite the fact that your work is held to a different standard because there's a different expectation towards your own performance?
I think striving to "improve" or "become better" is something that only means anything if you do it for yourself first and foremost. Because that's what you want for yourself. It's a valid pursuit to want that for yourself. Whereas you have to be mindful that few people selflessly care about that desire and go out of their way to let you self-actualize that. After all, your job is first and foremost a business deal between you and your employer, and the primary reason why you are there is because your employer feels there's value in what you bring to the table.
If you work in a group, it's equally important to understand how to compromise on where you set the bar for yourself and others. Of course, in your case, you don't want to compromise on what you want for yourself to the point where you start to cross personal boundaries, lest you want to end up resenting the entire situation.
A lot of these types are the first to get laid off with the best severance packages. Do this three or four times in a career and you end up with a sizable nest egg.
If he has any degree of self-awareness then perhaps those 30 years were riddled with anxiety over losing the job and not being able to find another one.
Maybe a challenging question, but did you give him feedback? IMO, that's the real way to stop people from ending up as Expert Beginners. It's not that they need to adopt a new paradigm they don't know about. It's that they need to know when they do something that sucks. People are amazing at self-correcting if they have visibility into the problem.
But when people around you enable and tolerate poor code instead of correcting it gently but firmly, you don't know what to correct.
Yes, he has had years of code reviews pointing out everything he's doing wrong and every way he could improve, and he still refuses.
Refusing to use version control is a prime example. The industry as a whole has unanimously decided that version control is necessary, but he still argues that the backup of the entire directory he makes once every two months is more efficient.
Do you have support from your manager on these principles? If this isn't enforced at an organizational level, I understand why he doesn't care to do it, and I would recommend managing up so that your lead/manager understands the importance of modern coding practices.
> Refusing to use version control is a prime example. The industry as a whole has unanimously decided that version control is necessary, but he still argues that the backup of the entire directory he makes once every two months is more efficient.
For what it's worth, a lot of people who use version control treat it as a backup system instead of as a record of logical changes to the code base, with detailed descriptions of what has changed and why.
It's still a hundred times better than some kind of manual process. I can at least view the history of files with good tooling, and the build pipeline etc. are manageable.
I worked at a place where the main programmer had been there 20+ years. To this day, they are still using a language which was popular in the 90's, but you rarely see today.
The day I interviewed, he said "I know I'm not the best programmer but I believe that after enough hours on the project you will end up with something that works", and he bragged about many nights sleeping in the office on a cot he had under his desk, keeping systems running and debugging live programs.
It was some of the ugliest code I had ever seen in my life. Every couple years, they'd hire an actual programmer, but he refused to change his methods, and the programmers would eventually find another job out of frustration. It was very easy to see which code was done by the other programmers, it just made so much more sense than his code, it was documented, etc etc.
I definitely agree that his approach to code quality is bad and he should improve.
On the other hand, he still seems to be delivering value to the business by making things work, regardless of how awful the code is, and I feel like this is still an improvement over the other extreme which is chasing hype and buzzwords while delivering near-zero actual business value.
Sounds similar. So many of the hours he logged were spent trying to shoehorn features into the rat's nest of code. If he had rearchitected chunks of it into something more logical, it would have saved time overall.
I don't think he understands that the purpose of architecture is to make future development easier; instead he thinks that making all variables global, to be changed anywhere at any time, makes development easier and faster.
There are a lot of developers out there like this. It's best to avoid hiring them. I think I would call this person a lousy developer rather than an expert beginner.
I think blaming this all on learners not realizing their own untapped potential is a bit one-sided.
Other influences:
* Skill level of coworkers
* No time is dedicated by the team to abstractions/refactoring, due to deadlines, personal preference, etc.
* The technology being used is limiting or used in a limiting way.
If you never push your boundaries you'll never get better. Going to 100% one time will have a lasting effect. If you instead only go to 80% all the time you'll only get better at doing a mediocre job.
However, the goals might be misaligned here: Your employer and coworkers might not be interested in your personal growth. Instead they want you to "Get stuff done."
I find the article, while thoughtful and provocative, paints a myopic view of self-advancement. The analogy with sports is what throws it off. Sports can be individual or team-based. There's an importance to individual skill, no doubt, but the team aspect is what lets one not only assess one's own standing, but have a chance at advancing it without adversarial pressure. If you choose to retool, the team picks up the slack until you're back (hopefully stronger).
The degrees of skill are indeed relative, and in the field of programming they are dynamic. If any one skill is worth holding permanently, it is perhaps open-mindedness: a readiness to learn, not so much an ability to master.
Lots of excellent programmers don't stop learning, they just discover the wisdom of 'good enough for the time-being'.
Expertship is lonely; beginnership is open and dynamic. The author projected the whole skill-advancement spectrum onto the single grade of Beginner, as if beyond Expert lay a void or an infinity.
I believe, paradoxically, beyond Expert is ... a Beginner. Either by humbleness, or by need to discover a new field, or by age, or boredom, or wisdom.
Programming has to be a team activity. If you happen to handle a project part by yourself, your future self or someone to work on your code after you is your current team.
If you're on the team already, then you keep learning from your mates as long as you let your mind stay open to it.
Or maybe they stop learning because at some point you realize your employer doesn't give two shits about incremental changes to your infrastructure or code, let alone breaking changes, and instead pays outside contractors to produce broken shit that you'll add to the pile of things to fix.
Or maybe that's just me. Also, I'm not going to work on mathematical proofs for something good, just to see my work disregarded because it's "too complicated".
Developers are supposed to do all this learning on their own time as well, right? No budget or time for training. In fact, why don't you do some overtime paid in pizza?
Learn new stuff at home on your own time, put it on github, it'll be fun, ignore your family and your friends.
And the helpful comments here for someone who is doing 12-hour days is to meditate and go for a walk?
Work 7 hours and go home, your life will be better.
While the topic is interesting, I don’t think the analysis presented in the article series (at least in the first two articles) is particularly compelling.
For one, it relies heavily on hypotheses about internal thought processes, which makes it basically unfalsifiable. Then the author is, in my view, shooting himself in the foot with the bowling analogy, namely by describing how he noticed that he wasn't improving anymore and went to ask a colleague for advice.
How would the same not be possible or even likely for the “expert beginner”?
All of this is much more easily and briefly explained by motivation - for many people in technology, technology is simply not a passion! For them it is just a tool and like also the article mentions, there are few reasons in many companies to spend more time on perfecting your craft - in fact, it might be strictly worse than building career-enhancing relationships.
This is especially true as someone not in a technical position is often not able to assess the quality of the output.
Two weeks ago I learned how core dumps work on ELF-based operating systems. Last week I managed to write C code to produce symbolic stack traces from core dumps on at least 2 different operating systems and 3 different CPU architectures. This week I used that to identify why the Python interpreter is core dumping on an embedded target.
I'm closer to 60 than 50 years of age and am closing in on 40 years of professional software development experience. I like to think I can still learn new tricks.
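(Not the commenter's C tool, but a minimal Go sketch of the first step of that kind of work, assuming you just want to see what's inside a core file: the standard debug/elf package can open it and list the segments a symbolizer would then read - registers and process metadata in PT_NOTE, the memory images to walk in PT_LOAD.)

```go
// coreinfo.go: open an ELF core file and list its segments; a first
// step toward the symbolic stack traces described above. A real
// symbolizer would go on to parse registers out of the PT_NOTE
// segment and walk stack memory in the PT_LOAD segments.
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <corefile>", os.Args[0])
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if f.Type != elf.ET_CORE {
		log.Fatalf("%s is not a core file (type %v)", os.Args[1], f.Type)
	}
	fmt.Printf("machine=%v class=%v\n", f.Machine, f.Class)

	for _, p := range f.Progs {
		switch p.Type {
		case elf.PT_NOTE:
			// Process metadata: thread registers (NT_PRSTATUS),
			// mapped files, the command line, and so on.
			fmt.Printf("NOTE  %d bytes\n", p.Filesz)
		case elf.PT_LOAD:
			// A snapshot of one region of the process's memory,
			// including the stacks a symbolizer would walk.
			fmt.Printf("LOAD  vaddr=%#x memsz=%d filesz=%d\n",
				p.Vaddr, p.Memsz, p.Filesz)
		}
	}
}
```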
I'm currently learning how computer audio works: programming things like mono-to-stereo conversion, panning audio, normalizing, generating binary sound files, ...
Currently going through a book (The Audio Programming Book) for all of this, doing it in C and then trying to make my own version in Go; a sketch of that kind of exercise follows below.
The reason I say "define significant" is because these are all new skills I am acquiring - but I'm doing so just for the fun of it. It's not something I had to learn for my job.
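(As a hedged illustration of that kind of exercise, here is a minimal Go sketch of mono-to-stereo conversion with constant-power panning plus peak normalization. The sin/cos pan law and the sample values are generic textbook material, not taken from The Audio Programming Book.)

```go
// pan.go: mono -> stereo with constant-power panning, plus peak
// normalization, over float64 samples.
package main

import (
	"fmt"
	"math"
)

// panMonoToStereo spreads a mono signal across two channels.
// pos runs from -1 (hard left) to +1 (hard right); the sin/cos pan
// law keeps total power roughly constant as the source moves.
func panMonoToStereo(mono []float64, pos float64) (left, right []float64) {
	angle := (pos + 1) * math.Pi / 4 // map [-1,1] onto [0, pi/2]
	lGain, rGain := math.Cos(angle), math.Sin(angle)
	left = make([]float64, len(mono))
	right = make([]float64, len(mono))
	for i, s := range mono {
		left[i] = s * lGain
		right[i] = s * rGain
	}
	return left, right
}

// normalize scales samples in place so the loudest one hits target.
func normalize(samples []float64, target float64) {
	peak := 0.0
	for _, s := range samples {
		if a := math.Abs(s); a > peak {
			peak = a
		}
	}
	if peak == 0 {
		return // silence: nothing to scale
	}
	gain := target / peak
	for i := range samples {
		samples[i] *= gain
	}
}

func main() {
	// A tiny sine burst as stand-in input.
	mono := make([]float64, 8)
	for i := range mono {
		mono[i] = 0.25 * math.Sin(2*math.Pi*float64(i)/8)
	}
	normalize(mono, 1.0)                // bring the peak to full scale
	l, r := panMonoToStereo(mono, -0.5) // place the source halfway left
	fmt.Println(l, r)
}
```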
I'm 60 next month, and have been programming for 40 years (everything from Basic, Cobol... to C#). I'm currently having a blast learning Java and Android programming.
Big significance: Docker, over the last two years IIRC. It makes development and publishing very, very flexible and easy. Whether it's essential in server-based development isn't the argument for me.
Less significant: application locking with Redis (Redlock), last month. I'd had a rough picture of it for a long time, but I'm surprised how neat and stable Redlock's design is.
I actually have it planned to do 2 things a day, even on weekends. One technical thing (new language, IDE, shortcut, etc) and one larger project based thing. It doesn't have to be much. Even 2 minutes is fine. That 2 minutes tends to lead to 30 minutes.
The only problem here is that building things, unlike bowling, is a team effort, and the goals are very different: you don't win the game by having the highest score at the end. IMHO the big question is whether it's worth investing in "getting your score over 160" in terms of benefits for you and/or the company. Can that time be better used? Will your project benefit more from moving your bowling from 160 to 200, or can your role be covered just fine with 160, so you'd be better off investing the time in reaching at least 1600 Elo in chess, which will be needed on the next project and will also give you a wider perspective?
Also, any discussion of software competency without market analysis is incomplete. We have seen a big shift from indie culture towards monopolies and their rules over the last 10 years. Almost all developers are required to play by the rules of Google (Chrome, Go, k8s), Apple, or AWS.
After 40+ years in electronics and software development - OS and app work, networking, sysadmin, DBA and DevOps - I see it all as a multi-discipline field. Knowledge of electronics, CPU design, compiler development, OS design, communications, business management and user interfaces all relate, and a "Rock Star" would need to be an expert in them all.
Most devs stop with app development and business design. Some go on to UI or compiler performance. Systems a little, and communications and electronics almost never.
Sounds like you have a pretty impressive set of competencies. Well done!
> and a "Rock Star" would need to be an expert in them all.
Need... in order to do what? Run a business? Be qualified to make every decision for... a chip manufacturer that also does OS development and application development? (Apple?)
I agree that it's worthwhile to become proficient in many areas, but your comment seems too specific and absolute to be useful. It discounts the value that people (who aren't your definition of "Rock Star") can bring.
As the author puts it, many a developer gets hired and eventually they "entrench themselves into some niche in an organization and collect a huge paycheck because no one around them, including them, realizes that they can do a lot better".
In my experience, that is what does happen. There is no incentive to do better, and the paychecks in IT happen to be pretty large.
I progressed the most as a software developer when I wanted to be hired again after striking out on my own for a bit.
What I learned from striking out on my own was that my software development knowledge and abilities may have worked in previous contexts, but not where I wanted to be at the next step of my career. While I could have been promoted at a previous job, that wouldn't have proven anything on the open job market. This is probably where a lot of dissonance occurs: people are Senior or Lead developers at one place for a long time, then suddenly are out of a job and cannot find another one.
So I worked hard to improve in a number of ways because I was highly incentivized to do so - there was no "current position" to fall back on.
This article addresses a very particular person who definitely does exist. Expert beginners are absolutely a thing, and things have only gotten worse with the "HIRE EVERYBODY! NO TRAINING REQUIRED" approach to hiring that a lot of shops have moved to.
I had not experienced it until the past year. If this article does not resonate, if you have not worked with one, count yourself lucky.
I am regularly astounded by the lack of extremely basic skills I have seen in the places I have worked recently. For the love of god, just read a single book or take a single class on software development.
People argue: "Well, most shops don't really need good developers." I call BS on this. Every place I have worked at are trying to make money and keep the team from getting canned, and a bad developer will take you two steps back for every one step forward.
There was a person at my last job, nobody knew why they kept him around. He had a good 10 years on most of the team, but his code was easily the lowest quality, and he constantly fought with the rest of the team around attempts to improve code quality.
We hated him. He was totally stale and had given up on self improvement. We wanted him GONE. What happened? Most of the team quit and he is still there.
I think there is long term culture clash between old and new ways of learning, in the "over a career" sense.
The old way is the formal, with external motivations and structures. Maybe there's a curriculum, like accountancy has. Maybe there's a coach, like in sports. One can spend a long time in the "infant stage," learning by mimicry.
A young karateka spends years practicing motions in the air, with form carefully observed and corrected by a teacher. This lets learners access the knowledge of karate as a whole, without needing an intuitive understanding of why this elbow must point that way while performing kick two in exercise X.
To jibe with the author's bowling analogy... Imagine a child bowling with just a ball. No pins. No alley. A teacher corrects form: posture, the swing of the arm, the final position. Once the child starts bowling for real, all their habits will be good. The known deficiencies of poor form are avoided, and the risk of hitting an "expert beginner" dead end is low.
This kind of formality is only possible when the domain is well known, and "correct form" is well understood. I suspect that these systems require generations to build. They also run the risk of devolving into ritual, superstition even.
The informal, more autodidactic culture does risk the "expert beginner trap": a premature peak in skill that requires formal (or intentional) training to defeat. OTOH, "expert beginner" is not just a trap. It's a useful goal in a lot of situations.
If you are going to fight the bully next week, karate is not a good answer. Those highly refined skills have little intrinsic short-term ROI.
I'm going to risk echoing what other like-minded contrarians here have posted. This essay makes some good observations about what happens to developers in their careers, but colors those observations with opinions about what's good and bad. And, as is usual in the developer community, continuously learning new technologies and layering in more "best practice" structure (presumably in one's own free time) is always king when it comes to these folks.
It's hard not to read this and sense a bit of bellyaching about how the market for software engineers isn't a pure meritocracy where those who are using all the latest tools with pitch-perfect architecture are the only ones who get hired and promoted. And it seems to totally ignore the real world in which legacy software, with all its warts, does exist and can't be thrown out.
"Bowling is like Software" does a good job at pointing out a common problem(You can develop and get stuck using a style which has a lower ceiling than other styles).
I still think "Bowling is like Software" is a bad analogy though.
Bowling is a one-dimensional task: it's the same problem every time, so some styles will likely have much higher ceilings than others. Software is far more complex: some styles work well for some problems but poorly for others. If there actually is a generic fix-all style, it's likely very abstract, difficult to grok, and probably involves a lot more math than we'd like to admit.
So I don't think there's anything wrong with getting good at one style, even if it has a low ceiling, because even if you learn another style later, you will probably want to go back to your toolbox at some point and mix in some of the original style.
Software and business are also too human, unlike something artificial like bowling. Creating beautifully architected, amazing abstractions with full test coverage at a startup churning out MVPs is actually a very bad approach, even if it objectively follows "best" practices.
It feels like self-admitted beginnerism is actually the mark of a great developer (not an assumed expert), regardless of whether the "expert" label is genuinely earned or not.
Indeed, this is the idea behind the beginner's mindset[1], where we assume our knowledge and experience may be faulty; that we have to unlearn our skills to make a further leap; that when you assume you're an expert, you've already lost.
So I tend to question the Dreyfus model in general. I want to see how capable people are at acquiring skills, but also how readily they can discard hard-won ones. As with a sunk-cost trap, we can needlessly hold onto our skills and hesitate to admit we need to let them go and rethink the whole approach.
Self-admitted, yes, but I think the OP article is talking about ignorance of your own beginnerism. Definitely, as someone who invested heavily in "getting" OOP early in my career, I can now see that I have a lot to unlearn, and need to mold my understanding of software in a less dogmatic way.
In my opinion experienced developers are mostly pretty dumb, even at top firms. Yes, they are GREAT at the particular thing they have invested the most in labeling themselves with, but they are often stuck in tunnel vision or just do not care to learn new things. Many young developers are eager, learning multiple technologies/languages at once, working across back-end, front-end, networking, DevOps... to be a successful new developer, the wall is much, much higher than it was for developers 20 years ago. These old guys can't do much, tbh, and are so slow to adapt; they don't even show excitement in meetings or anything. So boring. You can do what you want in life, but don't complain if you are replaced.
This is a really neat article. I feel like just this week I reverted from Expert Beginner to Advanced Beginner—or maybe glimmers of Competent—as working on my current project has revealed to me just how much I don't know about what I thought I knew well.
This is a valid interpretation of a trap that people can fall into, but there is also a positive interpretation of being an "expert beginner". Developers encounter so many new technologies at such a rapid pace over their careers, especially recently. People who are constantly trying new technologies at a shallow level have access to more tools to solve problems, and can be successful in a rapidly changing environment, as long as they don't overdo it and they narrow their scope to relevant topics that support each other. I think there is a line to walk between falling into a lack of mastery and not absorbing new things at a beneficial rate.
This hits hard. I know that I'm missing some truly fundamental techniques in my development such as testing and ORMs, but I can't seem to bring myself to take the massive steps to abandon the way I'm doing things now.
Every time I try to start doing things "The Right Way" I end up getting nothing done. In frustration, I revert back to my old technical debt building ways just to try to move forward at all.
-side note- I didn't even know it was possible to bowl without sticking your fingers in the holes... it's almost as absurd as not doing any automated testing.
Sounds like fake it until you make it. People, especially self-taught ones, aren't going to pursue something if they believe they're bad at it. Be nice.
If you have less than 5 years experience doing anything, it's safe to conclude you are not an expert.
If you can't identify a single person who is better than you at something important you need to do, then you are probably a terrible judge of competency, and therefore not an expert.
From my personal experience: are you still writing things the same way you were six months, a year, two years ago?
I look at code I wrote a couple of years ago, and not only can I identify how I would write it differently, I can identify /why/ I would write it differently - both what technique, concept, or methodology I have since learned, and how that would make the code better.
If you cannot, you have probably plateaued, like I did for a (far too long) while.
I noticed yesterday that I've implemented the same type of CRUD API call in 3 different ways in code I've written in the last 2 years. And I can see why I did that and what I was trying to optimise for in each case. I'm sure I would write it differently now, but would I write it better? I have no idea what "better" is any more. Faster? Easier to read/maintain? More loosely coupled? All of these?
I spent two weeks building a Go version of Webpack last month, because it was either that or adopt Webpack, since the Vue components we're writing needed unit tests. Was that wise? It works fine, I don't have the gajillion shitty dependencies of Webpack, and it compiles, bundles, uglifies and minifies our entire Vue front end in <200ms, but is that a good thing? Was I an idiot reinventing the wheel, or was I wise to avoid exposing us to the insanity of npm? When it needs maintenance in a few months and I have to spend a few days fixing it, is that time wasted, or have I saved time, because every time Webpack bumps a version everything breaks and we don't have that hassle?
To me, that sounds like you've hit a ceiling in your current stack, and probably workplace, and lack examples of people around you to learn from. What the solution to that is depends on your situation. Change of workplace? New career direction? New hobby?
> I have no idea what "better" is any more. Faster? Easier to read/maintain? More loosely coupled? All of these?
"It depends." As a gross generalisation, most juniors optimise for their bug-bear of one of these at the expense of all of the others. Most mid level devs try to balance them out, and hopefully, most seniors pick which ever is appropriate for the task at hand, with an eye on the long term consequences of their design - if there even is a long term consequence.
Re the webpack replacement: given the amount of time I've wasted dealing with webpack and its madness, I'd back you up and say it's probably a net win - as long as it's documented enough to avoid the bus factor. It's not like Go is some esoteric language where it's impossible to find devs.
P.S.: I want to clarify here that I'm not necessarily a good programmer; I've just spent enough time stuck as a niche programmer and probable expert beginner, and fighting my way out of it, that I recognise the patterns. I'm probably a better student of dev behaviour than an actual dev.
Being the only senior in a tiny startup, I'm definitely lacking examples. I think I'll look for a Go expert freelancer to be that, once the startup is making enough money. Thanks :)
Well, you could separate out any internal parts that can't be shared (secret keys etc.) and put it out on a service like GitHub. This might help you understand where you are at in terms of Go, and in building something generically useful like webpack.
Have you grown in skills or just applied what you already know again and again? If you are doing something the same way you did a couple of years ago, chances are that you are repeating a year instead of growing.
Kinda the same thing over and over again, but in different ways. But then, like every commercial coder, I mostly spend my time building CRUD apis and front ends. But how do I know that this is growth as opposed to random brain spasms?
I've dealt with whole teams of these kinds of people. They also get extremely defensive about anything they don't know. Even worse, this was particularly the case around proper testing. Suffice to say the team committed a lot of bugs to production.
Expert Beginner? That's not just a developers' phenomenon; it's actually more prevalent in business, where "I have X years of experience..." - what if they have been doing it wrong for X years?
Most “Super Senior Principal” engineers I see at larger companies are ones that have been there for 5+ yrs, so I disagree that the best engineers always run to greener pastures.
I remember this one, there are lots of good articles on this blog. This one is a little aggressive, but this is a real phenomenon. In many companies, styles and expectations fluctuate too much for anyone to get comfortable for long. Lots of others move glacially, and within those walls, the outside world can barely be heard.
I've bounced around quite a bit myself, but some of the best work I've done has been in situations where I could have slacked off and phoned it in. That was typically prevented by the opportunity to perform challenging work that allowed me to learn, not by fear of falling behind some imaginary peer.
The problem with many workplaces is that challenging, interesting problems are few and far between and that is by design. Why take on the risk to the business by having technical knowledge in-house to solve the hard stuff? It's expensive, and large chunks of that investment can just walk out the door. Most companies would happily pay for solutions from other businesses that specialize, and then ask devs to glue these mismatched pieces into a semi-cohesive whole.
This sort of glue-work is ubiquitous, even in companies that ostensibly specialize in making their own software. How much time is spent wrangling all those amazing, productivity-enhancing dev tools that have proliferated? The goal is typically to offload work for a fraction of what it costs to keep a dev employed. New businesses are cropping up all the time trying to sell tools to the big boys, so they can in turn keep their devs focused on core competencies. The companies that specialize in tooling of course all sell to each other as well, it's a little bit of a cartel, but this endless cycle of tool churn is nothing new.
If you approach this environment as a newbie, and you are concerned about becoming an expert, it's likely that you'll only make progress on that goal in fits and starts. Companies grow, job roles become both over-specified in their requirements and narrower in their responsibilities, and all the while you are expected to learn new tools that allow your employer to save on skilled labor. It's rare to find a job where mentoring and career advancement are given genuine care. Employers have an incentive to keep employees tuned for specific, ever-narrowing roles.
One of the few consistent places I've been able to find learning opportunities has been in start-ups. The overarching trend is the same there too, but at the beginning those roles are not yet so clearly defined and you can bite off as much as you want. You probably won't be rewarded for doing so, and it's a bit of a crapshoot, but it is one small advantage to those environments.
I suspect this is intentional. What is contained in the article doesn't age out.
As a counter-point, the best article on salary negotiation _ever written_ [0] has the timestamp in the URL, which undoubtedly dissuades people from reading, given that it was written eight years ago.
If a potential reader avoids the article because of its age, they're poorer for it.
IMO the author is doing the world a favor by not time-stamping the article.
I think there's often something simpler going on here. Some people are small-minded: they simply do not enjoy learning new technical knowledge and skills, and would rather run what they have to failure, even if it means losing out on a lot of income.
A fine example of this is a new hire getting anxious and defensive because the git branching strategy at the company isn't the branded one they're accustomed to (e.g. git flow).
Maybe developers stop learning because they realize that learning how to do the same thing with 167 technologies is not real development. Maybe they realize (they should) that self-development should be well balanced, so after a programming workday they should invest their time into sports, work with their emotions, and take care of the interpersonal aspects of their lives.
The older I am, the more I'm convinced that development is not an individual thing - it's a team sport.
Hey, company: divide your workday into 4h work for external projects and 4h work for team development.
A smart, mature developer will realize that the most inhibiting phenomenon for him is usually the job itself.
This exactly. At my last job I burnt out, bad. In hindsight I see it. Burnout did so many weird things to my mental health (and physical health) that I did not see at the time:
* Depression and Anxiety
* High blood pressure
* Weight gain
* Not having any idea how to simply relax
* Friends were more or less entirely from work
* Loneliness
At the end of the day I didn't want to go learn more stuff. I tried like hell to just gain my life back and failed miserably. It wasn't until reaching basically the bottom that I found a couple things that helped me:
1. Blood pressure medication was a must, I'm still trying to determine if this is stress related or a genetic thing.
2. Meditation. I found I did a ton of thinking about the future, worrying about the past, and just in general not being able to live today. I still struggle with this, but meditation and mindfulness have helped me a lot in trying to be in the present rather than past or future (depression/anxiety).
3. Time off. Though mine was sort of forced and forced to be longer due to COVID-19, I found time off helped, but since it was unplanned financial concerns kept me stressed out and I'm still trying to recover financially now.
If an employer wants me to continue upping my game they will need to give me the time to do it while I'm working. Within reason. I'm not expecting like weeks or something, but I just won't give up my free time anymore, it's too important to my health.
The sad reality is I know this is something an employer will tend to look down on. But I've found that happiness is not work, and work is generally not happiness. You have to have happiness outside of work and they need to be distinct separate things. Work, fundamentally, is about paying for the things I need to live so I can enjoy myself. I've found I don't need fancy toys or an expensive house to enjoy myself. I don't have to live on a lot of money.
Anyway, a little long winded, but your comment hit a chord with me.
I had the exact same experience and reached the same conclusion. After starting therapy and taking medication for the severe anxiety I developed because of that experience, I remembered what it's like to be "normal" and decided this is what I want for the rest of my life. No job is worth giving it up.
Going through this now. Completely fried, working 12 hour days just trying to make sure I hit the required weekly point total. Can't spend any time getting better as all my time is spent just trying to survive. Taking time to properly learn means my points drop and I'm fired in the middle of a recession.
On top of this, little kids are home so productivity is hard. I do nothing but sit at my desk trying to close tickets. Stress level is through the roof, gained 20 pounds, no time with the kids.
"Required weekly point total" sounds like a nightmare. I'm guessing you are a software dev? Are you a non-remote worker in SF/Silicon Valley? If not, I'd suggest looking for a change in employment. The job market is messed up right now, but startups have the money and desire to grow, and are adjusting to more full-remote workers.
I hear you on that. Try to find little ways to help yourself.
A few suggestions, no idea if they'll work for you but maybe you find something in the list or one of them gives you an idea.
1. Meditation. Even if it's just 5 minutes a day. Most of the apps have free trials. I really like 10% Happier but all of them are reasonably good.
2. Journal. This is a good way to just snapshot your feelings and stuff. I'd suggest writing it on paper as opposed to using some form of technology. Take 5 minutes a day and write.
3. Go for a walk, maybe with the kids? It's time with the kids and good for your health. Even if it's a short walk, it's something.
4. Try to find one simple way to improve each meal you eat. Can you reduce the salt? Can you swap in something healthier for that unhealthy bit?
5. Find a lightweight hobby you can do a little bit each day. Maybe it's building lego, maybe it's reading a book for fun (and not work), etc.
6. Get enough sleep... this is important. Sleep is vital and without enough of it you just set yourself further back.
I know you said time is tight, but the thing is, you have to try to make time, or things will get worse. Start with 5 minutes a day. Most people can find 5 minutes in their day.
Good luck. I hope you find some small ways to improve your life. Ultimately for me not working that job was better for me. This is a terrible economy though so I get that you feel stuck. When it starts to improve, take the time to find something that's better for you.
> working 12 hour days just trying to make sure I hit the required weekly point total
This seems to be against the spirit of most systems. If a company has a "required weekly point total", it seems it would be a race to the bottom as developers start inflating point values. Also, if the "required weekly point total" needs to go up over time, and it isn't through improved methods, then that will also lead to point inflation.
At that point, the points mean nothing, because you can either use them to honestly estimate projects, or you can weaponize them to churn developers (in which case you can't use them to estimate any longer.)
It's most likely in the context of an agile methodology - "a time-boxed iteration of a continuous development cycle. Within a Sprint, planned amount of work has to be completed by the team and made ready for review."
The planned amount of work is established via a number of tickets, each containing a point value in vague terms of difficulty/time. As you complete these tickets you tally up points. While this system is quite efficient, I'm also starting to have doubts, as it tends to lead towards gamification. Analyzed from a narrow scope, or from a manager's or director's perspective, it can lead to simplistic conclusions about an employee. For example, John is a better worker than Mary because for the past 5 sprints John's point total was 130 while Mary's was only 100.
To add to this, using numbers is a common mistake in agile estimation. Better to use things like t-shirt sizes (a sketch of the idea follows below). The problem is that estimates aren't additive, but making them numbers greatly increases the odds that someone will try to add them together anyway. And the biggest mistake is to give the numbers units of time. An expectation that a 3-point story will take X hours is particularly bad as that reporting gets higher in the org chart, since the only focus will tend to be on the translated time.
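To make that concrete, here's a tiny hypothetical sketch (Go, all names invented) of why the representation matters: if estimates are an opaque type with no arithmetic defined on them, "summing story points" stops being something a spreadsheet cell can do by accident.

    // Hypothetical sketch: estimates as an opaque type instead of ints.
    package main

    import "fmt"

    // Size deliberately has no numeric representation and no arithmetic:
    // S + M is a compile error, so any roll-up has to go through an
    // explicit, reviewable conversion that the team controls.
    type Size struct{ label string }

    var (
        S  = Size{"S"}
        M  = Size{"M"}
        L  = Size{"L"}
        XL = Size{"XL"}
    )

    func main() {
        backlog := []Size{S, M, L, S}
        // The only aggregate you get for free is a count per size.
        counts := map[Size]int{}
        for _, s := range backlog {
            counts[s]++
        }
        fmt.Println(counts)
    }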
Your gamification example, and using agile velocity to rank individuals, is its own dark pattern too. The team should be the unit when doing agile; everyone has their strengths and weaknesses, but ranking by ticket closes is lazy, and there are so many factors that affect team velocity. For instance, just having someone on the team who can tackle the bigger tickets can hugely improve team performance, as can having someone good at closing lots of little tickets. Neither is necessarily more important to the team, but having them both can be really effective.
I can tell you from experience that point quotas, however they're measured, are their own mistake, one that will lead to substandard teams. I was lucky enough to have enough clout at a previous job that every time management tried to go this route, I was able to push back long enough that general productivity had a chance to improve without this sword of Damocles causing morale problems.
And if you work for a place that has billable hours, I’m sorry. Much of my advice, while applicable, may be trumped by needs around client billing. Hopefully you make enough money to deal with this kind of stress...
What happens when t-shirt sizes end up mapped to the same point values when evaluation time comes around? My current company is in a belt tightening cycle, and recently introduced stack ranking. Implicit metrics are still metrics, even if they're lazy.
Yeah, this has a tendency to happen; it's sort of a constant fight. You can really only fight it with metrics: you can start to show that 3 tiny + 1 medium tends to be equivalent to 1 large, which tends to be equivalent to 5 small, or whatever. At the end of the day, you're trying to make it as hard as possible to simply add numbers. People will naturally try to map these things to ordinal numbers, and it's important to keep a light, constant pressure to prevent this the further away from the team you go. You need to start by training your team and your PMs and your manager to avoid this as much as possible. It's their job to train their stakeholders and bosses in turn. I can tell you from experience that it's an incredible amount of effort, but the benefits are really worth it. Having a team with space to be effective is a magical place to be. I still mourn needing to leave that team and organization.
Oh, and sorry about the stack ranking. It can be the beginning of the end. The job market is bad right now, but it might be wise to look at postings and see what techs you want to brush up on. I've seen stack ranking destroy orgs, especially since good teams tend to disproportionately attract good talent, but management never believes that your whole team is above average.
That sounds seriously like agile gone wrong. If the team isn't able to complete the amount of points in the sprint, there should be reflection: why not? Maybe we estimated the amount of work wrong? Maybe the number of points was already set by "crunch standards", not normal standards. Btw, who does the estimation? In our team, it's us, the developers.
I've yet to see examples of "agile gone right" outside of folklore.
I work with multiple groups at any given time and all of them end up following many of these patterns. Through one proxy metric or another, agile systems become bad accounting and metric systems that are used to pressure developers to the point of burn out and or leaving.
For management, the goal is to extract more from a fixed salary. From their misguided perspective, they're already paying you $100-200k+, so they own all of your time. The more they can get you to do over 8 hours, the less they're paying per unit (hour) of labor. This leads to high turnover, burnout, and poor products, and ultimately passes costs off to employees.
At some point, for many, time investment vs compensation can even out to working both a full-time and a part-time job (12 hours a day). The employer has managed to disguise this through all sorts of deceptive means. The employer may be ignorant of that fact - they may (genuinely) think they're just pushing you to work a full 8 hours or so - but it's more common than people seem to want to admit. And ego in software development, plus widespread imposter syndrome among newer developers, just perpetuates this problem.
New developers won't admit when a request is absurd; they assume they're slow and pick up the slack. They're also trying to gain entry to the market, so they're willing to sacrifice their time in hopes of jumping ship later for a less toxic environment with a higher comp-to-time ratio. "This is just temporary." Ultimately, that mentality creates an expectation, and experience only buys you so much efficiency in certain situations. Those new developers provide positive reinforcement to business managers that "this is the way."
As that happens at more and more environments, work-life balance becomes toxic at more and more places, and the end result is that those new developers may never get a chance to escape that toxic balance, because they've enabled it, everywhere, through fierce competition of ego.
I learned this at my first job, when a very senior (nearly retired) developer, working in development since the punch card era, seemed to only be producing a little more than I could in a day. I thought, well, he must be barely working, lazy, or incompetent, because I'm a newbie and not too far behind him. "This guy supposedly contributed to the development of quantum theory, wrote some of the first computer simulations in his field, and is friends with Nobel prize winners? What a sham, his skills must be so dated!"
So for a while, I started churning out more progress than him, but he never changed his pace. At some point I realized I was investing a lot of extra time for no apparent reason. I didn't get paid more. I got some "brownie points", but ultimately I realized I was undercutting my manager and mentor by using my free time, and at the same time creating an expectation of my production rate that required more than 8 hours of work. I was sprinting in a marathon and only harming myself by trying to be competitive and show I was as good as or better than this relic developer. It turns out I was just unwise, and he was light years ahead of me at setting realistic expectations. Through the rest of my career I realized he had created the most realistic and accurate time estimates for development timelines I've encountered, and a work environment that was well balanced for everyone and kept his boss happy, all while no one was stagnant and everyone was still developing professionally.
In the spirit of these things, point values are for individual teams to help benchmark their own rate, and not meant for comparison across teams using different point references.
If there is company-wide point tallying, it will lead to gaming and point inflation -- at which point the benefit of rate estimation will be lost.
Some software shops, unfortunately, take their "Agile" a little too seriously. They end up treating developers like assembly-line workers with a quota to meet. Those burndown charts become a very important metric that managers can show their managers, etc.
We run pseudo-agile where you are required to close a given number of points each week. Each ticket is given a point value based on perceived complexity. Point totals on closed tickets have to meet or exceed the weekly requirement. Not a good time.
But story points are estimates. Estimates are never set in stone. I’ve successfully argued this many times in the past. Software development is too complex for estimates to be always correct.
Anyway, I don't mean to preach; it sounds like you're having a tough time. Sorry about that, hope it gets better.
Time to start padding out your point estimates for things. It's not even a lie: if you're working more than the 8 hours you're supposed to in a day, you're underestimating the complexity of things, which is actually the real lie.
Some of this comes from team dynamics: you all have to together commit to not working every night to avoid burnout and competing with one another.
Even as a well-paid, salaried software engineer, there's a moderate chance that they're still obligated to give you overtime pay (at least in CA -- heavily depends on your jurisdiction), especially if your job is implementing a series of small stories that are completely designed by somebody else and especially if there isn't enough leeway to take a little time to learn about the thing you're coding. You might consider talking to a lawyer or some kind of workers advocate. A year of back-paid overtime can go a long way toward weathering a recession ;)
That can only work for a short time. There is a feedback loop at play here and it will seek a stable equilibrium. The organization needs to solve its internal problem.
The move from estimates to story points is actually part of this compensation already: to our faces, companies pretend that story points are about giving developers more freedom, but actually they are about taking our estimate that something can be done in a week and removing the word "week", so that our supervisors can estimate it as a couple of days' work instead.
The fundamental process, at most agile shops I have seen, is already:
- I the dev am going to build in some unconscious safety buffer by giving an estimate that I'm 90% or 95% confident in,
- I am going to give myself more safety buffer consciously by doubling or tripling that estimate when I communicate it to my boss,
- my boss is going to give us even more safety buffer by lumping together all of our tasks and then adding another 50% of time to that
- and then they are going to give it to their non-technical bosses, who are going to survey this estimate for the project and insist that it needs to be done in 25% less time, it is urgent, we need it sooner than that.
Story points allow a bunch of this to happen without explicitly contradicting anyone (a toy worked example follows below). But the basic problem is much more fundamental: management and engineering are being framed as having opposed interests. This is a basic failure in any negotiation: once it turns into a zero-sum game, everybody loses. In turn, there is a set of books that pinpoint this deeper problem as a focus on controlling costs rather than driving revenues - which we see in these story point quotas: gotta make sure you get your money's worth from the dev team.
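To put toy numbers on that chain (all invented, just to make the compounding visible):

    // Follow one task through the buffering chain described above.
    // All numbers are hypothetical, for illustration only.
    package main

    import "fmt"

    func main() {
        estimate := 2.0  // dev's private 90%-confidence guess, in days
        estimate *= 3    // dev triples it before telling the boss
        estimate *= 1.5  // boss adds 50% when rolling tasks together
        estimate *= 0.75 // upper management insists on 25% less time
        fmt.Printf("the roadmap says %.2f days\n", estimate) // 6.75
    }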
At least in Scrum, as I understand it, the commitment should be chosen by reflecting on past velocity. If that velocity is achieved only by crunching and overworking, that's a problem that needs to be addressed. If it can't be reasonably addressed, that's a sign of a toxic workplace.
> Completely fried, working 12 hour days just trying to make sure I hit the required weekly point total. Can't spend any time getting better as all my time is spent just trying to survive. Taking time to properly learn means my points drop and I'm fired in the middle of a recession. I do nothing but sit at my desk trying to close tickets. Stress level is through the roof, gained 20 pounds, no time with the kids.
Name the company so that no one else suffers needlessly by tying their ability to provide for their family to said employer. That sounds like a terrible situation for anyone to be in.
> The sad reality is I know this is something an employer will tend to look down on
That's not an employer you want to work for, and in the end, all they will succeed in doing is hiring kids who don't know any better, and driving away skilled talent.
Not just developers - anyone living far enough from the equator is suffering from this. This is in fact why lighter skin evolved, out of a need to get more vitamin D from the limited sunshine.
Take your Vitamin D supplements, a few thousand IU a day, probably more if you're not White. Get sunshine. Sit by a window.
It's particularly critical for men's health as Vitamin D is critical in the proper production of testosterone, as well.
Don't sit by windows. Windows filter out the rays needed for your body to produce vitamin D. You're essentially damaging your skin without the benefits.
If you're mildly vitamin D deficient, you would probably want to take an additional 1000 IU per day in addition to the 1000 IU in your multivitamin. It's safe for most people to take up to 4000 IU per day, but unless you've been tested and know you're severely deficient then I wouldn't take more than 2000 IU total per day. Then just keep checking your blood pressure over the next 6 weeks and see what happens.
I'm not a doctor. I take ~2000 a day, sometimes 3000 when a third pill falls into my mouth from the bottle. Actual dosage advice would be better from someone who isn't just relaying anecdotes.
That said, I would guarantee that you're better off with that 1000 IU than with nothing!
>learning how to do the same thing with 167 technologies is not real development
It's sad that this is what continuous learning for software engineers has come to mean. Don't do that! Learn to express different and deeper ideas in the technologies you already know. Programming is a huge world. Maybe you have truly tapped out line-of-business CRUD apps, but remember: every line in the CS course catalog is a whole universe unto itself.
Operating systems, distributed systems, databases, graphics, embedded systems, HPC, etc. Pick a layer of the stack, understand it, and move it forward a little.
You'll notice that many of these worlds have completely skipped the Javascript framework treadmill, and K&R C from the 1970s will get you a decent part of the way there.
I mourn the time I have to spend re-learning CRUD because it distracts from this mission.
I like the concept of "learning to express different and deeper ideas in technologies you already know", but I'd suggest going deeper in a particular business domain or industry instead of another area of the CS universe.
In other words, become an expert at solving specific business problems with technology, not a technology expert looking for business problems to solve.
Why is that? C remains an immensely popular language (especially as you get closer to the metal). I also don't see Linux being rewritten in something else anytime soon.
And then, when your company decides either to "take the company in a different direction", or you notice that it is bringing new employees in at a market rate that has gone up 20% in 3 years while HR is giving you 3% cost-of-living raises, you are out there in your mid-40s, can't get a job, and you start complaining about ageism.
I was there a little over 10 years ago at 35. I belatedly learned how to play the game. Until my current job, it was all about resume driven development and job hopping the minute my salary or what I am doing at my current job got out of whack with the market.
I don't like this mentality. Maximizing lifetime revenue isn't some emergency to panic about. It's one goal to balance against many other important things in life. If your income provides you a lifestyle you're happy with and is poised to continue doing so (which in this context is almost certainly the case), there is no emergency.
It's ok to not forcefully grow your income, but stagnating income is correlated with your skillset becoming outdated. If you haven't worked on your network or your interview skills, you can find it hard to find a new job. In addition, a less career-oriented person would find working on those less enjoyable than keeping their skills up to date and likely do them even less.
The reality is that software engineering automates software engineering jobs the most. As a result, people in this industry do highlight the importance of not neglecting your skillset.
However, I wouldn't take that as them trying to maximize their revenue. They've simply found that they need to do a deeper analysis on skills to not get automated out of a job.
You're getting pretty far off-topic (and actually making a point that I'm trying to make myself in a separate subthread). This is the sentence I was pushing back against:
> All those years of missed revenue add up quickly
Your income not only needs to provide you a lifestyle that you are happy with now, it also needs to provide you enough to save and have a lifestyle you want when you’re not working. There is going to be a day you’re not working.
But, if you are working for less than your fair market value, you’re providing the lifestyle that your company’s owners are happy with.
In the US, I think part of the reason why this is the case is because the cost of living goes up significantly more than the cost of living adjustments.
For example, while landlords aren't supposed to raise the rent by more than 2% per year, I've had my landlord surprise me with a 5% change, but moving is too expensive to realistically consider [1].
Factor in all the additional costs of living, and if you don't get significant raises every year, you're going to fall further behind each year and be unable to afford a family, a house, or an emergency.
It's not great.
1 - When I moved to a good neighborhood in NYC, I had to pay a brokers fee of 15% + moving costs + up front costs to the landlord. Say my rent was 3k, this would be 5400 (0.15 * 3000 * 12) + ~500 + 9000 (first + last + security), or approximately 15k just to move into a place you don't own.
Totally in agreement with you, especially in the bigger metropolitan areas like NYC and SF.
Rule of thumb for me in my career is I expect at least a 5% raise yearly to account for cost of living and inflation of currency, otherwise, like you said, you'd be losing money YoY.
Thankfully, I've gotten many > 25% raises throughout my career. It's possible, but it's rare, and jumping jobs usually gives larger pay bumps.
What's this about what the landlords are "supposed to do"? Most of the US isn't rent controlled. The apartment that I moved into after getting married in 2012 went from $1300 to $1750 by 2016, when I left. It was $2000/month last year.
My mortgage is less than $2200 for a 3100-square-foot brand-new build, and that won't go up besides property taxes and insurance.
I can't compare to the US, but I'm in one of the more relaxed European countries and I definitely feel like I'm in the minority for pursuing some continuity with my current employer rather than jumping ship for a higher salary after two years.
(Fortunately, this is a good employer that realises what the reality looks like and is willing to offer me competitive raises to make it a doubly mutually beneficial arrangement.)
Well, in my case since both my salary and skillset stagnated from 2000-2008 - after 3% COL raises and bonuses being cut, I only made $4000 more in 2008 than I did in 2000, there is a lot of catching up with both I had to do. Especially seeing that in 2012, I went from being single to (gladly) getting married and taking on the responsibility of a preteen and a teen.
That being said, I was quite content with my pre-Covid salary, which was pretty much average for a top-end individual contributor locally. But I did need to make at least $20K more over two years.
Then Covid happened, along with pay cuts and the local market dried up.
But now working remotely for BigTech means my base salary is a little more than it was pre-Covid and RSUs+Bonus is pure gravy that goes directly to long term savings.
It also meant my wife didn't have to go back to work in the school system with Covid going around.
I am not so sure. My wife is European and she is very much inline with always going upwards. I on the other hand very much enjoy work life balance as well as making sure I enjoy what I do even if I make less than I could elsewhere.
By the time I learned the error of my ways, it was too late to do it early. Luckily, I can still get caught up if I play my cards right. I work remotely for Big Tech in a low cost of living area. The arbitrage gives me a great opportunity to save.
I'm pretty new to remote work life. Don't many companies adjust your salary based on where you live? It seems like this would counter the entire point of living somewhere cheaper.
You can find the remote policies of many companies here: http://remote.lifeshack.io/. I don't think your salary is based on where you live, albeit I am not very sure of it. But check out the link I attached; the website has a lot of useful information about remote work etc.!
There is a slight premium for Seattle and maybe NYC, but besides that according to Levels.fyi, I’m right in line with the average total comp.
I know someone who applied for a similar role (one level up) at Amazon who lives in MiddleOfNowhere Nebraska and he made about the same as someone who lives in Seattle.
I went from a company with 50 employees, I was not planning on making a move for another year at least and then Covid happened along with pay cuts. Why wouldn’t I try to work for one of the largest companies in the US?
Improving soft skills isn't the same as playing the career development game by job hopping. Soft skills are as much a part of a developer job as programming. I'd actually argue it's the harder part.
This is not meant to be offensive in any way, as I'm not judging your entire being just by the way you approach employment, but developers that do "resume driven development" are by far the worst to work with, and are so obvious to spot.
Resume driven development often involves not giving a damn about what you're actually trying to solve, and trying to make what you're doing sound as impressive as possible to onlookers who have no clue what you actually did. Complicated architectures, hip technologies, fancy percentage based "improvements", it's all bollocks.
I agree largely that Western society has created a work culture where working for "the man" is not a desirable thing to do; employment, by and large, should not be the purpose of our lives. There are so many other things, like social life, family, sports, playing the guitar, and so on, that humans can do so well, but because of the capitalist structures that have been put in place where the rich get richer and the poor get poorer, it's not hard to understand why this kind of approach to employment exists.
> are by far the worst to work with, and are so obvious to spot.
So? It gets them hired, and at better rates than those who accurately estimate the size of projects, to boot.
Companies need to stop rewarding irrational behavior, but right now the right feedback loops are lacking: investors willing to "venture" capital into a losing business, passive + TARP-like investing driving asset prices up despite inherent risks, and bosses who care more about politics/appearances than facts.
Well, once companies stop having policies where they only give internal employees slightly above cost of living raises and bringing in new employees in at market rates - ie salary compression and inversion - then maybe employees won’t look out for their own best interest.
The choices are either have relevant skills to keep yourself marketable or be willing to do the “leetCode monkey dance” and remember how to reverse a binary tree on the whiteboard.
Other than a better financial situation, what else have you achieved doing this? This makes me feel like software development world is just some big hollow game where making a positive change to the world takes the back seat.
Once I had my first real existential crisis, I realized that the things I wanted to leave behind weren't things that someone else could've easily (and honestly, might as well have) done. Furthermore, I wanted to spend my life doing things I actually want to do.
So now I have a job that pays below market rate but leaves me with a lot more time, in which to write my novels. I have more than enough money - more money won't make me any happier, but less discretional time will certainly make me worse off.
To begin with, I'm not a person who 'burns' money - I'm sure I'd be in a different place if I was.
When I say 'enough money' I don't just mean enough money for current expenses. In my opinion, there really is an amount of money beyond which you're probably just adding a measure of unhappiness to your life, if you're making money your first-order objective.
You are correlating pursuing a higher salary with longer hours. That doesn't have to be the case. I quadrupled my income by just going freelance and I generally work 20-30 hours a week.
By focusing on money you can actually reduce your overall burden. In my humble opinion this far and away beats taking some low salary job just to get more time.
A great many people raise families and live comfortable lives on less money than a somewhat below market dev salary. If you want to live in the middle of San Francisco or buy luxury goods then sure, you need to strive to earn a lot of money, but that doesn't describe most people.
Well, I guess I didn’t make it explicit that my being an “expert beginner” at 35 also meant that I was making a way below market salary and was (and I still am) way behind my retirement goals.
And this being both my and my wife's second marriage didn't do us any favors.
My base salary now is the same as it was before the Covid pay cut, and is more than enough to meet our short-term needs and slowly work toward our long-term goals.
The RSUs+Bonus goes straight to long term goals - paying off the mortgage, cash flowing college for my son and saving for retirement.
I don’t care about leaving things behind - I care about not being 70 years old and broke. The best case outcome is that we both spend our last penny when we are on our death bed.
I feel bad for people who are trapped by a paycheck in a career they don't love. STEM delights me, and if I became financially independent, I'd probably still go on learning about and building software & hardware just for the fun of it.
> I feel bad for people who are trapped by a paycheck in a career they don't love.
This is kinda where I am right now. I still enjoy programming, but it's becoming more and more apparent that I should have kept it as a hobby and pursued something else as a career.
> I'd probably still go on learning about and building software & hardware just for the fun of it.
That's the thing - this is much easier to do when you're financially independent. If you have kids, and have to put them through school, and maybe have sick parents to care for - costs can add up fairly quickly; and it's easy to find yourself in a position where you can't lose your job, which is a lousy position to be in.
Interesting part is - it can also be easier to choose what you work for when you have more money (or at least that's been my personal experience so far). A different version of the "rich getting richer" dynamic, basically - it's easy to say "no" to stupid things, to avoid or leave crappy jobs. Also if you're highly paid that often means you're listened to/ your voice has some weight - so chances of working on stuff that delights you increase significantly.
I have been programming as a hobby since the 80s when I was in middle school writing 65C02 assembly, I don’t hate my job, but by the time I graduated in the mid 90s, it became purely a method to maintain my short term addiction to food and shelter and my long term desire not to be old and broke.
Arguably, "job satisfaction" is a factor as well, but... it's hard to have job satisfaction when you see others earning 50% more for doing less (or inferior) work, especially if it's in the same company. And it's also hard to have 'job satisfaction' when decisions are made outside your control. I can live with fewer decision making abilities if compensated for in other ways, and that's usually pay rate.
I enjoy learning about the industrial area I'm in (higher education). I enjoy creating software that people use daily. I enjoy creative problem solving. I enjoy learning how different organizations fulfill their reason for being while handling operational details. I go to work for these reasons, as well as for comfortable pay.
I don't mind working. I would just like to do it less. A few years ago I dropped to part-time in order to be in a grad program in CS. I found that 25 hours provided me with the perfect life balance.
It's not specific to software development. Every field has people whose main focus is to "climb the ladder".
Whether making a positive change to the world is in the front or back seat (or in the vehicle at all) will depend on company culture and may even differ within a single org.
I make good money. I work 10am-5pm every day. I don't make as much money as people who are more career oriented.
But I also save 50% of my paycheck even factoring in that I have a wife and three kids, and my wife stays home with them. What's the point of chasing more money? If anything, I'd like to scale back, but that's not the easiest thing to do.
I started looking at job boards, talking to local recruiters, and seeing which technologies were trending on the hype cycle.
I then job hopped when my current company was either not keeping my salary at market rates or were getting behind technologically. I left one job that had two products - a legacy PHP product and a new C# project that was using then current technology.
The C# product wasn’t gaining traction after two years, raises were anemic and they moved everyone to the PHP product. There was no way that I wanted to spend a year doing PHP. I left.
But the smartest but riskiest move I made the next time I was looking for a job was to leave a full time job for a contract to perm where I would get the chance to build a development department and lead two green field projects.
That’s where I first (belatedly) learned about AWS and discovered how much “cloud consultants” made.
Once those two projects were done, I self-demoted to a senior developer role where the newly minted CTO wanted someone to make the company "cloud native". I jumped on every single project and took the lead on introducing AWS services.
Just from working at those two companies I was able to get my current job.
I wonder if this will keep me stagnant in my programming chops. But at the same time, I realize that expanding those chops has diminishing returns for my life from all angles, including satisfaction and finances. I have hobbies, family, and communities that I serve, and I'm expanding other, non-programming skills.
That being said, at the same time I also kinda don't know what is fun anymore about programming. I think for me, creating real stuff that real people can use is always fun, but at the same time I don't have ideas. Many (useless) ideas have already been reiterated over and over by people posting on GitHub/Reddit.
I tried to go deeper into a specific implementation, like for example database engine or compiler. But I realized that wasn't as fun as I thought it to be.
In terms of programming, I always liked learning programming languages, but it gets old real fast. Once that is over, I don't know what else I should do. I'm thinking of learning OCaml next. Or maybe I should try another field of programming, like game development. Or maybe I should just be content not doing any programming-related hobbies anymore and stick with my other hobbies.
Personally I think it's because in most companies developing your technical skills after 2-4 years of professional experience does not translate into career development.
This is a side effect of having non technical managers. They cannot judge your technical skills beyond a certain point. They can judge your people skills and domain knowledge. They reward improvements in what they can judge.
The above doesn't apply in tech companies and in tech hubs. It applied to me only when I searched for jobs in a relative backwater (and as somebody who wanted to advance technically, it was frustrating).
This is why I think you'd be more likely to find expert beginners in Maryland or Singapore than in San Fran or New York.
You're kidding yourself if you think SF is a magical land where knowing how to reverse a B-tree gets you promoted. Realistically, managers promote people based on tenure, who they get along with, and what they ship. Being able to communicate and understand the problem space is way more valuable than some arbitrary technical "skill".
To be fair, if you are the engineer leading that project and it ships on time and on budget then you are still worthy of promotion. The product leaders who thought another chat app was a good idea are the ones who need a reckoning
My experience was with diffing London and Singapore. It wasn't like London was in any way magical, but there was a night and day difference in the quality of technical interviews.
In Singapore if you previously worked at Google you were golden even if you were a goddamn moron and technical tests were done incredibly badly or not at all. I could see why people didn't improve their technical skills and why expert beginners were omnipresent. Nobody was going to reward them for it.
In London that sort of thing exists too but companies that interview on merit rather than pedigree did at least exist and weren't extremely uncommon.
I have limited experience of SF but I expect it is more like London than Singapore.
I have been a persistent critic of leetcode since before it even had a name. My favored interview style has always been "give realistic tasks within reason".
Leetcode was common in Singapore, because if you google programmer questions that's what comes up.
Do you have 10 years' experience, or do you have 1 year's experience repeated 10 times?
I feel like the article and the hype cycle push us towards the latter rather than the former. Which is awkward, because that seems like a good, concise definition of an expert beginner: someone who has 1 year's experience repeated 10 times.
Of course, it can pay well to be certain kinds of expert beginners.
>Maybe developers stop learning because they realize that learning how to do the same thing with 167 technologies is not real development.
This is definitely my feeling. After a while, you've created most of the types of functionality, and at some point you start to see a lot of the same functional outcomes from shifting technology and shuffling abstractions. At that point, it's literally just the tedious effort of learning some new abstraction someone decided to create, and some sensible flow within their set of abstractions from start to functional product. You end up dealing with slightly different processes that require tedious time investments to suss out.
Sometimes it's a useful shuffle. You gain a more rapid development process, things are easier, you gain additional functionality than previous approaches for similar efforts. A lot of times it's literally just "ah, new shiny" and you can't be bothered to deal with "ah, new shiny" with your free time as you felt you had to as a beginner/novice (spending several weekends and evenings or so of your free time). You've also, through your career, been forced to deal with these transitions due to someone else's decisions and been burnt multiple times learning to do the same thing slightly differently because someone else in charge or a group with momentum was sold by new shiny.
At some point you learn to assess new shiny before you invest time into it. You've seen the marketing tricks, you've seen the tech industry dazzle with ambiguity/complexity tricks, and you frankly would rather go exercise, spend time with family, or do anything else but be trapped investing personal free time due to some business scheme. Unfortunately, a lot of other people haven't learned to objectively assess new shiny and fall into the marketing and ambiguity traps. It may be business leaders, it may be younger developers eager to 'improve' with new shiny because it's going to be "paradigm shifting," etc. Ultimately, momentum builds and you either join the club being sucked into new shiny (where a lot of resume driven development starts, IMHO) or are left behind.
It's amazing how people continuously fall into these silly traps. I'm not saying all change is bad; there are some improvements, and some tech has to be kept current to remain interoperable with other tech. I am saying that a lot of it is an absolute waste, and this is why you have these 'Expert Beginners' - those who realize that outside of tech, the vast majority won't see improvements, and inside of tech, the improvements from adopting a new technology are often questionable (if even existent).
>The older I am, the more I'm convinced that development is not an individual thing - it's a team sport
I don't know, maybe development should be an individual thing. Less bureaucracy, less politics, and you get to control more. Besides, other humans can be finicky.
These days I'd rather be a full-stack dev.
The good thing is that technology, due to non-stop advancement, inherently enables an individual to do more and more.
Nothing is stopping you from being a full stack dev, but teamwork, while it is tricky to achieve and execute on, is very valuable once it's there. This is speaking from my experience.
Most often I found that bringing on junior developers and mentoring them very closely brings about a very strong team bond. These developers also inevitably become very productive pillars of a team.
Of course, there's always going to be a place for solo work - by teamwork I don't mean constantly pair programming (although doing it a little bit is a good learning experience for fresh grads) - I just mean collaboration, communication, the like.
Sometime during grad school, I stopped focusing on learning individual technologies and started focusing on keeping up with ideas and philosophies. I'm very happy with that decision now 10+ years later.
Seconded. I find it's the power-hungry managers in the industry who push this idea that unless a software engineer knows "literally everything about everything" they'll always be inferior, especially as compared to some abstract "Perfect Google Engineer" which I've never actually met in real life and seems a lot like Bigfoot.
The problem isn't their averageness though; it's when they impose it on the team. I've worked with some team leads who were technically average but allowed others to use their skills, and they were fine. It's when they invented bad standards that it was a problem.
One of the ways to make space for growth and learning is to say no to more things at your job. I wish I had started this earlier in my career. I used to think it was important to do everything people asked me to do, even if I couldn't do them all really well.
The truth is, organizations only respect you for the things you do well that the organization thinks are important. (So if you are great at documentation, for example, and the organization doesn't prioritize that, don't be surprised that no one cares. Etc.)
Doing things you are not very good at, or don't have time for, only damages your brand. Saying no is also a way to test the waters about how much what you are doing is valued. Imagine part of your job is X, and someone says: "Can you do Y?" You say: "I can't, I am working on X." If they say do Y instead, it's a hint that either X isn't that important or people don't think you are valuable to it. Find that out!
People will waste your time with meaningless or unimportant work. It is one of the ways that they tell you that they don't really value your contribution. Or not. The way to find out is to try no.
> Maybe developers stop learning because they realize that learning how to do the same thing with 167 technologies is not real development.
The biggest improvement I ever had as a developer came from reimplementing in Lisp and OCaml the same thing that I had previously done in Java. It was quite eye-opening to see how much more concise I could be after leaving the mainstream languages and starting to use something more advanced (and niche at the same time). After having discussions about this with my peers who completed formal education in CS, they explained to me many concepts that I was not aware of and turned my attention to functional programming and ML languages in general. As of today I am even more convinced that doing the same thing in different languages / environments helps me understand the problem domain better, select the right tool for the job, and become a better engineer.
> The older I am, the more I'm convinced that development is not an individual thing - it's a team sport.
I agree. We usually do these implementation exercises as team work.
> they realize that learning how to do the same thing with 167 technologies is not real development
is quite an arrogant view to take on new technologies. It assumes that the sole reason people use new things is a combination of ignorance and boredom. There may be some of that involved in some cases, but even re-inventions of the wheel usually come with a fresh perspective and incremental improvements over the last time. More importantly, knowing the technology of the day allows you to relate to and collaborate with others, which is most certainly not a waste of time.
Turning down applicants who have the above mindset isn't ageism.
Might be arrogant, but after rewriting the same stuff over Sun RPC, CORBA, DCOM, DCE, XML-RPC, SOAP, WebServices, BPEL, RMI, Remoting, REST, gRPC,.... eventually it gets tiring.
Or deploying stuff over HP-UX Vaults, J2EE/JEE containers, mainframe language environments, VMs, Docker, k8s, lambdas (CGIs just got rediscovered),....
There's probably no better language on earth to build a data warehouse in than Java, yet I fully expect to see a BI book on JavaScript OLAP technologies within my lifetime.
You use what you grow up with.
Because so many data warehouses have been built in it. Most significant problems have already been solved in traditional star-schema/snowflake-schema, fact-table-oriented systems. And a change in language isn't going to open new doors that otherwise remain closed now.
Not sure I agree. SIMD instructions, vectorized query, and low-level memory management are hard to implement in Java. Plus there's genuine uncertainty about the future of the language. I would not implement a new data warehouse in Java at this point.
Intel and Linaro have contributed SIMD support, although it might be harder than using something like C++.
Not only is the language open source (while during the Sun days it was only free beer), there are several contributors, and Microsoft has acquired jClarity, started contributing to OpenJDK, gives Java parity with .NET tooling on Azure, and, alongside Red Hat, supports Java on VSCode.
Ah, even Java has had more talks at Build 2020 than F# or VB.
The only uncertainty is from the anti-Java, Oracle hating crowd, for everyone else, companies using IBM, Red-Hat, Adobe, SAP,.... products, it is business as usual.
The category "RPC framework" solves a specific problem quite well, but it's doubtful that each incarnation is really a sufficient improvement on all those that came before it to justify re-learning and migrating.
Try adopting a perspective of curiosity: "How does this new tech work? What new benefits might it provide?" Or at least one of professional duty. All knowledge workers have to keep up with the latest developments in their field; lawyers have to study up on the new case law, doctors have to read medical journals.
If you've decided ahead of time that it's a drudgery, you're only going to make yourself miserable. You don't have to spend all of your free time on it, but you do have to remain flexible and open-minded. You may as well try to find something there to enjoy.
Most of this new tech is mostly created with the purpose of generating new business, that is all.
As Alan Kay perfectly puts it, fashion driven industry, and naturally one must keep themselves fashionable.
So up one goes rewriting that working WCF service into gRPC, because fashion.
But it's ok; after all, someone needs to pay those consulting rates and keep projects rolling, keep the book industry happy with introduction and best-practices books, supply new themes for conferences, and, most importantly, generate blog posts with postmortems regarding technology migrations.
Career growth after entry level rarely hinges on acquisition of technical knowledge. If anything, technical knowledge is the easiest to acquire which is why entry levels can claim it without much work experience. Organizational navigation, team work, delivering actual results, knowing how and when to apply those skills (and when not to) mostly come with experience, and those are going to be the determining factors for promotions beyond entry levels.
> b) Just a generally miserable way to live as a programmer
I would argue it is more miserable to not have developed those tacit skills so that keeping up with the latest and greatest is the only element of competitive advantage to stay relevant. I would also speculate this might be the reason why juniors tend to over-emphasize the latest tech as the greatest tech, because they don't think they have anything else to be competitive in the job market.
Curiosity is not an algorithmic virtue, it doesn't apply to every case. It needs to be tempered with a dose of conservatism to deliver results in real world.
> Try adopting a perspective of curiosity: "How does this new tech work? What new benefits might it provide?" Or at least one of professional duty. All knowledge workers have to keep up with the latest developments in their field; lawyers have to study up on the new case law, doctors have to read medical journals.
I think part of the problem is that some fraction of "new tech" is substantially just a reinvention of the wheel and/or oscillating fads, but it's all presented as "new therefore obviously superior" to what came before.
Others have probably said it better, but tech needs more study of history.
Like I said, often it's not that simple. Each iteration on re-inventing the wheel usually incorporates some new lessons learned. I won't disagree that more perspective on history would be a good thing, but that's no reason to dismiss it all wholesale.
Very true. In addition, not all new tech is repetitive. The state of the art in computer science is advancing very rapidly in many areas. For me, learning involves actively attempting to understand and reproduce some of the latest research in my field. Some of that involves figuring out how to use a new framework or tool, and some of that is repetitive, but the context in which it is needed is not. So the learning is valuable to me personally.
Secondly, I don't learn technology to make myself more valuable in the marketplace. At least, that is not the primary objective. I learn new technology to make myself more efficient and the products I build better. And since I'm in the somewhat fortunate position of owning all the intellectual property I create, it's also in my best financial interest to learn.
Also, as I get older, I find the mental exercise of forcing myself to do things differently is a lot of fun. To be honest: gaining mastery and learning is just plain fun. That's why I do it.
> but even re-inventions of the wheel usually come with a fresh perspective and incremental improvements over the last time
I appreciate what you are saying, and I definitely think that some folks take the "here we go again" attitude towards a new approach that may very likely have a significant and positive impact on whatever they are working on. That said...
That situation strikes me as relatively rare. More often than not, I see people get excited about whatever the new flavor-of-the-month hotness is, and they forget about real world issues. One simple example is when the cost in money and (possibly) morale is not taken into account when switching from one technology to another. Sure, you may get a 1% incremental improvement in some metric that may save you X dollars, but the switch will cost 5x dollars to implement with additional soft costs like potential loss of morale and potential loss of efficiency due to lack of familiarity with the new system. I see this type of decision making frequently, and I consider it extremely poor form.
I think it's really important to ask why something needs to be done and to ask what the actual costs of switching are (esp. for folks on the front line). If the answer to why is "it will get someone a promotion for little or no benefit" or "a leader somewhere wants to brag / humble-brag about the new hotness with their peers", then the decision to use a new technology can usually be postponed to a later date. These are not uncommon scenarios, and they frequently cost companies dearly.
We can debate all day about specific tradeoffs made for specific projects - I would almost never advocate a rewrite just for the sake of using a newer technology unless it has more-than-incremental benefits - but that's not what I'm talking about.
All I'm really trying to argue against is the mindset that "I could technically do this with the technology I've been comfy with for the past decade, therefore this new technology's entire existence is a waste of time and it's a waste of my time to learn about it." Learning != rewriting. And new projects != rewriting. It may end up being as simple as, "I just lost my job and nobody uses the technology I was using there any more, so now I can't get hired."
I would turn down applicants who don't have this mindset. The last thing I need is a dev who never mastered postgres because they were too busy learning mongo.
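For what it's worth, here is a hedged illustration of what "mastering postgres" buys (connection details, table, and column are made up): its JSONB type handles the schemaless-document queries that send many teams to Mongo, from plain JDBC.

    // JsonbDemo.java -- a sketch only; database, table, and credentials are
    // hypothetical. Assumes a reachable Postgres instance and the
    // org.postgresql driver on the classpath.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class JsonbDemo {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/demo", "demo", "demo");
                 PreparedStatement ps = conn.prepareStatement(
                     // JSONB containment: find rows whose document contains
                     // the given fragment -- a classic "schemaless" lookup,
                     // done inside the relational database.
                     "SELECT doc->>'name' AS name FROM users " +
                     "WHERE doc @> ?::jsonb")) {
                ps.setString(1, "{\"active\": true}");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }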
> Maybe developers stop learning because they realize that that to learn how to do the same thing with 167 technologies is not a real development.
And maybe because they realize that the majority of those 167 technologies only exist because someone said "Making frameworks and generalization is the only reason I manage to keep doing what I do." [source: https://news.ycombinator.com/item?id=13734009]
> Maybe developers stop learning because they realize that that to learn how to do the same thing with 167 technologies is not a real development.
Yes! A few months back I was feeling quite burnt out over exactly this.
Taking a step back and focusing on a wider variety of work / hobbies / projects did wonders for my work motivation and ability.
It seems to me that taking on a larger, wider variety of mental tasks actually improves one's ability to focus on a single one, instead of spending all energy on one task (or job).
interesting idea in theory. won't work in practice.
the 4h has to be shared. everyone on the team has to be in menstrual synchrony, so to speak. this type of work requires one to get "in the zone", or achieve the "flow" state. everyone has different biorhythms and won't even maintain the same time schedule on a daily basis or even weekly basis.
in an 8hr day you might get 3-4hr of good performance. yes you can do better, but not on a sustained basis. yes more hours per day will give you more but after 8 it really starts to become a diminishing returns thing.
if you tell people to only do 4hr of work, the output is going to be 1-2 hr of efficient work.
you are basically claiming it's a team sport but want to apply a principle that would only work for you personally. now if you can hire only people "like you" then you're golden. or, if you can pay (say) 2-5x market, fire quickly, compel people to have a very specific training-like schedule with execute-or-GTFO practices, have a high demand but high performance environment, only hire high experienced devs, then you might have something. it's not generalizable to the extent that it is usable advice or recommendation.
lastly does your 4h of work include communication and coordination functions? if so, it's reduced to 2h of work.
> Maybe developers stop learning because they realize that that to learn how to do the same thing with 167 technologies is not a real development.
I agree that doing the same thing 167 times is not real improvement. Though I don't believe that to be the reason for the expert beginner.
I've seen countless people learn that one technology just well enough to scrape by. Not a bit further. And going further can be done during work hours.
>they realize that that to learn how to do the same thing with 167 technologies is not a real development
I think this is a bit of a loaded statement. Phrased this way, who could argue that learning to do the same thing with 167 technologies is real development? If you listed out the technologies, I'm sure you'd get plenty of responses from people who disagree that they actually are doing the same thing. As a frontend dev, the technologies I'm aware of all have vastly different developer experiences and tradeoffs.
I agree with you about how everything blends together, and that you don't necessarily have to focus only on "improving as a developer", however, expert beginners are absolutely a thing and something I have had to deal with multiple times in my career.
I just got done with a brutal job search. I call this attitude "losing the fear." I would not be so sure the next 100k job is waiting for you if you are not improving yourself.
He does have a GitHub account[0]. He successfully founded his own company. Entries from his blog also made the frontpage on HN several times[1].
You, on the other hand, do not have a GitHub account tied to your HN account, or any other records of your achievements, for that matter. You also don't seem to have made any meaningful contributions. But people still took the time to read this comment.
The author's history of contributing to OSS projects shouldn't be relevant here. You shouldn't use it to attack his claims. If someone wants to read the article, they can read/evaluate it based on their personal opinions/knowledge.
Look Mr Blogger, take this in the kind-hearted spirit it's meant by a 40-year-old engineer who doesn't have time for bullshit anymore.
Get to the point, fast and quick.
I’m sure you have many valid and interesting points of view about your job, experiences, and wants. I’m interested to hear about them. You could offer some unique experiences I can learn from or maybe there’s some tidbit I can offer you.
But unless you learn how to get to the point first, I don’t care. Even now after 3 minutes of perusing your entry I’ve lost any sense of what it was about.
To finish up, I'm not trying to be provocative, negative, or anything. I just want to encourage better ways of writing, thinking, and publishing.
Always consider your readers first. They don’t have as much time to parse what you write as it takes you to write it.
Sending a letter would take weeks, so it was important to make it count. Newspapers with long articles made more sense when anything written was much more expensive.
Long form is good when it helps the story.
The problem I had with this story, besides that I don't think I buy the model of reality it's proposing, is that it doesn't get to the meat (the Expert Beginner) until past the halfway mark. The first 7 minutes weren't particularly interesting or critical to understanding the subject. The bowling analogy was long and not necessary to explain a very straightforward idea.
From what I can tell, a "low hanging fruit" analogy would have been more apt and would have taken a short paragraph to preface the main content.
That aside, there are some reasons I have to not agree with the article.
Not to discredit the author's experience but, in my experience, I just haven't met anyone who believes they have reached expert status but is still a beginner. Pretty much every programmer I've met has some degree of imposter syndrome and, while most have some opinions about what makes "good code", I don't find it common that programmers have a strong belief in their own expertise. Totally subjective, I know, but that's one way in which the story just doesn't reflect my view of reality.
> They come to the conclusion that they’ve quickly reached Expert status and there’s nowhere left to go.
I just don't think I've met any programmer who thinks there's nowhere left to go once they've achieved a perceived status.
Moreover, the end of the story seems to conclude that "expert beginners" are the problem, as opposed to the symptom, although the author does recognize that somewhat via the "dead sea" effect. Looking at this from an economics perspective, the so-called expert beginners are the way they are because they're not incentivized to do better. If there were economic and internal political pressure for them to achieve greater things, they'd be more likely to do so.
Having worked for some "dead sea" companies, I have to wonder why any beginner would want to work harder for a company that pays an intermediate salary, makes advancement difficult, and does not reward excellence. The choice to dick around, not spend energy improving, and maintain the appearance of experthood by having "worked on that tool" or been around the longest, all while collecting a reliable paycheck, is the most sensible one for those who don't live and breathe code.
The article comes from the angle of someone who lives and breathes code, blaming the programmers who aren't true believers, and leaves out the fact that the supposedly better programmers don't stick around to fix these situations.
My point is that you could turn the article around on itself to say that the "dead sea" problem, and the problem of "expert beginners" lingering at companies, is at least significantly caused by actual expert programmers.
> I just want to encourage better ways of writing, thinking, and publishing.
You could use more encouraging language. Phrases like "I don't care" and "[don't] have time for bullshit" make it seem exactly like you're trying to be negative.
I'm not trying to say anything, I just want to make you do better.
Always consider people who are different to you. Maybe they have so little time for bullshit they won't even bother being negative about someone else's writings from 8 years ago.
The author of that article clearly can describe his thesis in 3-4 sentences; he does so in the middle of the text.
But he decided not to add a summary at the beginning of the article. That is a perfectly valid option and helps him make his point, even though it did reduce his audience.