This has always been my pet peeve with jobs that won't hire people without "5-7 years of experience with X": the vast majority of these jobs do not need an EXPERT on language X or framework Y. If they do, the rest of this post does not apply.
You can learn enough of a language to be dangerous in just a couple weeks. Enough to be productive in 2-3 months. However, there are 3 other things you have to learn:
1. How the code for the project you are hired for works. How it's laid out. All the weird shit about it.
2. How the frameworks/libraries used by the project work. Nothing to do with the project itself, but maybe you haven't used Angular, Swing, or Boost before.
3. How the data is laid out, and how it gets in/out of your system. What database tables are there, what's the workflow.
Those 3 things are the hard part and what takes 6-12 months. Learning JavaScript, Ruby, or whatever language is the least of your worries.
Would I rather have a great dev who doesn't know a framework than a mediocre one who does? Yes.
But there are a couple of reasons why I hire for expertise in the ecosystem. Granted, I'm in .NET, which is a very homogeneous ecosystem, so framework/library/tool knowledge bifurcates into people who are experts on every tool/library/framework we use and people who've never used any of them.
1. For the first 6 months you're paying a sr. engineer's salary to get jr. engineer productivity. Given that most devs switch jobs after 2 years, this is a huge productivity hit, and it's all front-loaded.
2. While great devs are great, it's a lot easier to figure out if someone is a framework/language expert than if they are an intrinsically great dev in an interview. i.e. ("Do you know C# well?" is easier to determine than "Are you a good problem solver?")
3. Being an expert on an ecosystem pays dividends even after the first couple of years, albeit smaller ones than in those first two.
4. It's not that much harder to hire someone with expertise in your ecosystem than not.
5. Most devs search for jobs in the ecosystems they like working in. But a lot of devs who can't find a job get desperate and start applying to everything, which can drag down the average quality of the candidates who lack expertise in your ecosystem.
I've been at this software thing for 20+ years and I could see a couple things happening based on your points but I mainly wish to address:
1. Senior engineer productivity
Most companies have zero idea what to do with a senior engineer and they measure them against a junior engineer. The junior engineer usually does the following:
- works a tonne of hours
- closes a tonne of bugs
- makes a tonne of mistakes
- asks a tonne of questions
- uses up a tonne of senior engineer(s) time
And you end up with:
- a high ticket/bug fix rate per day from the junior
- generally horrible and poorly designed code from the junior but the code works
- few bug fixes from the senior
- what the senior does produce is generally better designed, has fewer bugs, and is easier to maintain.
If you have hired a quality senior engineer it shouldn't take too long before your junior and mid level engineers start soaking up the time of your senior engineer. It shouldn't take too long for your senior engineer to become the subject matter expert in several areas of your product which takes up even more time.
I am usually hired because things are a mess and they need to be cleaned up. Years of junior engineers doing the best they can with limited knowledge and experience tossed in with some management pressure leads to unmaintainable code/environments.
The largest impact I have ever had was when my manager put some trust in me, gave me autonomy and a huge project. I think this is what you should do with your senior engineers. Give them the space to make an impact.
+1. On a project with any serious complexity (ie not glorified CRUD apps), a junior engineer is pretty close to net neutral without good guidance.
The industry at large does not realize how much mentorship a beginning software dev needs, or how much damage a poor engineer can cause. I know of a startup where an eng with too much appetite for shiny new tools caused probably $500k in lost runway, and I don't think that was exceptional.
> 1. For the first 6 months you're paying a sr. engineer's salary to get jr. engineer productivity. Given that most devs switch jobs after 2 years, this is a huge productivity hit, and it's all front-loaded.
There's something wrong with your hiring pipeline and IC career path if that's what you are observing.
Try paying more for new hires.
> 2. While great devs are great, it's a lot easier to figure out if someone is a framework/language expert than if they are an intrinsically great dev in an interview. i.e. ("Do you know C# well?" is easier to determine than "Are you a good problem solver?")
It's easier to grade multiple-choice exams than essays. But the former mostly measures how well someone can memorize by rote.
> 5. Most devs are searching for jobs in the ecosystems they like working in. But a lot of devs who can't find a job get desperate and start applying to everything, which can bring down average quality of people who don't have expertise.
Get a better hiring pipeline. If you start dealing with too many clueless candidates, it's time to look at where they're coming from (which recruiter, school, whatever) and switch sources.
We were paying ~200k a year which while not FAANG is almost double the national average for software engineers.
> It's easier to grade multiple-choice exams than essays. But the former mostly measures how well someone can memorize by rote.
It seems obvious to me that if you have two equally useful attributes and two measures, you should rely on the measure that more accurately reflects the actual attribute. Finding and hiring great problem solvers is a notoriously hard problem.
> Get a better hiring pipeline. If you start dealing with too many clueless candidates, it's time to look at where they're coming from (which recruiter, school, whatever) and switch sources.
What pipelines would you suggest? We used recruiters, Indeed, Monster, Hired, Hacker News, Stack Overflow, etc. None of them was a silver bullet. They all had drawbacks.
> We were paying ~200k a year which while not FAANG is almost double the national average for software engineers.
I read two things: Not FAANG and national average.
If you are competing for FAANG talent, that's where you should aim. Whatever "average" is, it includes all the bodyshops and bootcamp grads, so it's a useless measure. The average football player in the US and an NFL player have very different compensation.
> What pipelines would you suggest? We used recruiters, Indeed, Monster, Hired, Hacker News, Stack Overflow, etc. None of them was a silver bullet. They all had drawbacks.
For college hires, go directly to the source. At which universities do you recruit?
> 1. For the first 6 months you're paying a sr. engineer's salary to get jr. engineer productivity.
Ouch. While I myself have been guilty of that 2-3 times during a career sadly way too rich in customers and employers, this is definitely the exception, and if it's the rule, something is very wrong with your hiring filters.
Combined with your disclosure that you pay about $200K but still have problems finding good programmers, I'd think you are either letting people in way too easily or not aiming well. Which brings me to...
> 2. While great devs are great, it's a lot easier to figure out if someone is a framework/language expert than if they are an intrinsically great dev in an interview. i.e. ("Do you know C# well?" is easier to determine than "Are you a good problem solver?")
I will respectfully disagree with the last assertion. You, or the people who interview your devs, should reach beyond the usual leetcode/whiteboard tests, do some research on more general interview formats, and learn techniques that will give you a very good idea of whether the interviewee is a good problem solver.
Trouble is, most tech interviewers view gaining those extra evaluation skills as a waste of time.
Piece of advice: give the candidate 10-15 minutes and an easy coding problem they can solve in front of you, or give them a really small homework assignment (not this "you'll need 1-2 days at home to solve this" nonsense) -- but don't make the tech aspect of the interview the dominant part. Talk with the candidate. Give them a few examples of tough situations from the past. Chat with them about how they would approach the problem(s). Ask them what they know about CI/CD, about best practices in framework X or Y, about what project or task makes them proud of themselves. Many other such great interview questions exist.
Not meaning any disrespect to you; you seem to be trying hard to get good devs. But from what I read here, it does seem like your company's aim isn't good and it's misfiring the hiring gun.
$200K will tempt a lot of people to lie and "wing it" when they get hired. But you should leave the high remuneration in place. That way when you get a good professional they'll have one extra -- and very fat -- reason not to leave. :)
If it took me six months to become productive, I would be deeply embarrassed. There must be something wrong with the talent pool you are able to attract.
To elaborate: a good engineer will be able to read a well-designed system and almost immediately start mimicking what needs to be done when adding new features, regardless of how much experience they have in the given language/framework.
Maybe I haven't seen enough great devs, but I see a lot of great devs stumble into a system or framework; they're doing their best, and they want to be that great dev and work fast, and... they stumble through stuff that a mediocre dev who knows the system knows better than to do.
> For the first 6 months you're paying a sr. engineer's salary to get jr. engineer productivity.
Then you are doing something wrong or have a watered-down notion of "senior engineer" (as in a "junior engineer" who spent X years on the job, as opposed to someone who actually progressed well through their career in that time). Senior engineers should IMMEDIATELY outperform junior engineers. Knowledge of the system is largely unnecessary. What senior engineers bring to the table is a completely different set of skills than what junior engineers bring. You can have 100 junior engineers and they will still not be able to figure out the things a good senior engineer will do for you.
Probably you are not utilizing them correctly or giving them the right problems to solve.
I'm using the definition of someone who has been professionally developing software for 10+ years.
So I'm comparing someone who has professionally developed software on a different stack for 10 years to someone who has developed software on a stack identical to yours for 1-2 years.
If you ever need heart surgery, do you want the surgeon who has done 10 years of heart surgery (or better yet, 30 years)?
Or do you want the surgeon who has jumped from area to area and has no more than a year or two of experience in each area? The experience doing knee replacements will be invaluable in your heart surgery. Right?
It is so weird that this meme of "10 times one year of experience" has somehow become a negative.
I've heard this trope for the past 10 years. And just like different sr. devs are better or worse, jr. devs can be better or worse. Ideally you always want to hire better.
Yes, six months seems very long. I recall when I switched to a new language (PL1/G) on the map-reduce part of our project; after 3 days, the developer I took over from said, "you don't know it all yet."
I was teaching myself from a 1964-vintage IBM manual that my father had kept :-)
It's not just a new language (C#); it's also a new ORM (EF Core), a new front end (React or Angular or Razor or Blazor or whatever), a new database (SQL Server), a new cloud provider (Azure), a new web framework (ASP.NET), a new work tracker (Azure DevOps).
It can be, but a senior who is well-versed in the fundamentals and has some exposure to complementary tools in other ecosystems is not going to have a problem coming up to speed with those tools.
What they're going to spend more time on is: learning how you decided to solve your problems with your knowledge of those tools and how you decided to fit it all together.
If you have been around for a sufficient number of tech cycles you already have conceptually analogous systems in your head, which you just have to remap to the new syntax. That's what makes a "senior".
The front ends are actually the hardest to keep up with because they turn over fastest, probably followed by distributed-compute solutions. C# is a short walk from Java, and SQL has been around for decades (even given dialect differences that change best practice).
You aren't wrong that it is a bit of work. It's just a lot less work than learning the actual software itself. Learning .NET's flavor-of-the-year framework stack is not that bad, since there are lots of good docs, classes, books, and blogs. Your software, at best, has documentation as good as .NET's. That is highly unlikely, though. Learning the product's layout and quirks dwarfs learning the latest incarnation of .NET.
I mostly agree, with the caveat that people can usually learn something analogous to what they know quickly, but not necessarily something in a new domain. E.g. it's easy to go from Django to Rails or vice versa. Similarly, it's not too hard to go from C++ to Java or Go, etc. But I had a tough time doing front-end development for the first time, despite being a pretty strong backend/systems/embedded programmer, because the programming model was so different from what I was used to and I was coming into a complicated product.
In a traditional corporation you won't get fired for hiring devs with many years of experience in the correct stacks. That is the most important incentive. If you hire devs with experience in the wrong stacks and it doesn't work out, people will start asking questions: why did you hire this guy even though he didn't know the things we need?
The problem is that many places want to hire developers and have them deliver as soon as they get a dev env, with zero effort spent on training; naturally, that only works if the experience is already there.
No, as the article describes, that does not work even if the language/framework experience is already there.
If you're a veteran with Javascript, and Angular, and Node, and MySQL, you will scarcely be able to add something as simple as a "Birthday" field to the user profile pages of your new employer's SPA on your first day or first week on the job. That background will give you an idea of how you would have done that on previous applications, and give a slight speed boost as you try to skim the project for tables and functions that are related to user profiles, but the new application is almost certainly different. If you knew C#, SQLite, and desktop application development instead, or, heck, were fresh out of college with nothing but some toy applications that used Java to write some CSV files, that would scarcely matter, you'd still spend that first day reading through the codebase to understand how user profiles work, and regardless of your experience you're probably merely going to copy-paste and modify the "Occupation" field anyways.
There are a couple minutes of that first day where you're editing a bit of code and it will go slightly faster if you can remember off the top of your head that in Angular, the keyword to parameterize an input[date] field with a maximum value so your users don't accidentally claim to be infants is "ngMax". Or you could go to docs.angularjs.org and look that up in 5 minutes or less, which is a delay, but that doesn't matter much if the other 7 hours and 55 minutes of that first day (and much of the coming weeks and months) will be spent learning the domain-specific details of your business, navigating the project hierarchy, and memorizing the most common table columns and class names.
I think too many hiring managers are unwilling to voice and reason about their subconscious expectation that they're going to hire someone who is a clone of their existing employees. The only developer on the planet with 5 years' experience building a CRM and scheduling tool for landscapers in West Virginia using React/Node/MySQL, with a database layout and code architecture that match what you have, is named Dave; he's already in the office down the hall, and yes, he's behind on v3 because he's swamped with tickets from customers on v2 right now. He needs you to hire someone to help. Just get anyone who's reasonably technically competent and good at problem-solving, and they'll pick it up, as has always been done.
The first place I worked as a dev, they had copious amounts of documentation and such an expansive wiki that every time I asked something, the common response was, "fates, it's in the wiki."
I did have quite a bit of onboarding, but after a few weeks, the purpose of handing me off to their documentation was to make me more independent and let me solve problems myself.
Conversely, I started a new FTE role at a large health care corporation. No training, no onboarding, not even a cursory tour of the network, what source control they use, literally nothing.
The amount of time I wasted learning all of this I'm sure cost the company more money than having a proper onboarding and training program. I pushed my manager and his manager on the importance of onboarding (we were hiring a lot of new devs at the time) but his retort was "it's not in our budget, you just have to learn as you go." Ironically, having spoken up, every time a new dev joined our team, they were pawned off on me to bring them up to speed on everything they needed to know - all the while handling a full load of project work.
Not properly onboarding developers has a huge impact on multiple areas of a business. I was shocked a smaller company knew this, but the larger, entrenched corporation wanted nothing to do with addressing this.
NO! The important (and hardest) part is NONE OF THE ABOVE. It's understanding the REAL WORLD PROBLEM that the software is intended to help with!
Example: A decade ago I hired a Designer who had the potential to become a decent JS/UI coder, but wasn't one yet. He upleveled his own JS skills to where he was quite competent. But more importantly, we made sure he learned WHAT THE SOFTWARE DOES IN THE REAL WORLD. If he doesn't know that, and how the people that use it think about their own jobs and environments, he can't possibly design and write good interfaces. (This was software to manage, operate, maintain, and optimize utility-scale solar power plants.)
When we merged with another company in the space a year or two later, he came back and reported that he was stunned that he (the UI designer!) knew MUCH more about how solar power plants actually operated than any of their "engineers" writing their application code. (And yes, he did...)
It's not about solving the problem in the currently "right" or trendy way - it's about doing a good job of solving the right problem in a really useful way!
(FWIW, I no longer care at all which tools, languages, or frameworks are used - if you care about that, you're focusing on the wrong stuff...)
The actual language, sure. The ecosystem, tools, libraries, and techniques surrounding it, I would say takes longer.
For example, I was making games professionally using C#/.Net for several years before getting hired in enterprise app development. I was able to contribute right away, but there was a ton I wasn't familiar with and took time to understand, from SQL Server (especially SQL optimization, transactions, triggers, etc), to BLL/DAO, dependency injection, CQRS, Unit Testing, Octopus Deployment, Powershell, Active Directory, TFS Server/Build, Windows Services, XAML, Microsoft Azure, IIS, Entity Framework, .NET Core, and a bunch of other things.
Six years into enterprise/web work on the Microsoft stack and I'm still learning new things here and there.
All that being said, I will reiterate that I was able to work on bug fixes and new features within a week of being hired at both companies, despite hardly knowing any of this stuff at the beginning, and most of that time was spent getting familiar with how everything was structured and flowed in all their various software.
That was my experience with Java. I knew C++ and was able to start coding in Java almost immediately; for me, Java was a semantic subset of C++. Of course I had some books and checked things out here and there, and of course Eclipse was of tremendous help to me, but I never treated learning Java as a specific activity.
That was Java 5, though. And frameworks are a different story: I spent lots of time trying to figure out what J2EE even is.
> I learned C# in under a day by using ReSharper and reading a few articles about C# idioms and memory management.
Did that include reflection? P/Invoke and support for native pointers? async/await and coroutines? The differences between structs and classes? How generics are implemented?
Yes to all except P/Invoke. I don't know what that is or why I'd need it. I'm writing web services, not replacing C++. I also haven't used native pointers, but have read about them.
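The structs-vs-classes difference is a good example of the kind of semantics you can pick up from a single example. A minimal sketch of the value-vs-reference distinction:

    using System;

    // Structs are value types (copied on assignment); classes are reference
    // types (assignment shares the same object). A classic C# gotcha.
    struct PointStruct { public int X; }
    class PointClass { public int X; }

    class Demo
    {
        static void Main()
        {
            var s1 = new PointStruct { X = 1 };
            var s2 = s1;              // copies the value
            s2.X = 99;
            Console.WriteLine(s1.X);  // prints 1 -- the original is untouched

            var c1 = new PointClass { X = 1 };
            var c2 = c1;              // copies the reference
            c2.X = 99;
            Console.WriteLine(c1.X);  // prints 99 -- both names share one object
        }
    }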
Not to mention the huge amount of time it often takes to get set up and understand how to use the company's internal tools for source hosting, code review, ticket tracking (even if you used Jira at your old job and are using it at your new one, it's probably configured differently), CI, etc.
Yeah, but what hiring process intends to hire average devs? Yes, most probably end up with average devs... but most also ask and test for X years in a language, not for people who seek to learn things, are curious, and want to take responsibility and own things.
Sometimes it is just the environment you work in and how much they trust you. At some places, you say "I need a server" and you will have a VM or a machine on your desk within an hour. At other companies it is a 2-year process with bunches of approvals.
I agree. The main thing I'm looking for in new hires is the ability to think logically, problem solve, and learn new things. I don't care whether you already know whatever languages or libraries we are using.
I have often said that there are practically no C programming jobs. Rather, there are many jobs writing in the macros for a specific project (with the minimal associated C code).
That’s one advantage of opinionated frameworks like EmberJS. If you’re experienced with Ember, that knowledge will carry over a lot to other Ember apps. It’s pretty wild how consistent the architecture is from one app to another... and that’s thanks to a lot of very smart engineers being opinionated about how the apps should work.
Our team chose Ember since we didn’t have extensive front end experience and opinions, and it’s worked out well for us overall.
Though I would have it as #1 in any job. Having some idea of what the business using the software is trying to achieve (at both the macro and micro levels) is far more valuable.
Basically, knowing the why gives you the frame for the what and the how. Some of those quirky pieces of software could be misguided, unnecessary, or counter-productive, and this is where you become a value-add to the organisation.
Yeah there's way too much alphabet soup exact matching going on out there job wise, and not enough "You know that framework or system is not unlike what this dev knows..." going on.
This is why I frown upon people who hop jobs or teams within a year. Large-scale systems require at least a year before you can be truly productive and effective on them.
Glamorous Toolkit (https://gtoolkit.com) by feenk, which embodies the thesis of this blog post, is in my opinion the most exciting working development environment in decades. I think in 5-10 years people will be pointing to it as having a huge impact on the industry.
Once you understand the consequences of reducing the cost of specialised tool development by orders of magnitude, it becomes obvious that a qualitative change in experience follows, that brings us much closer to what people like Engelbart were searching for.
> Glamorous Toolkit is implemented in Pharo, but it's made to work for many languages.
Ooooh, now I am interested. I really enjoyed exploring Pharo but couldn't combine it with work. But getting to play with Pharo for a work-environment while still being able to do my day job sounds very appealing
Thanks for the links and references! I am very interested in these things. One of the things I hate most about being a software developer is the tool infrastructure. I want documentation that’s accurate (actually, that would save probably at least 50% of my time), interactive documentation and debugging, little GUIs that help with various tasks, etc.
I am actually working on my own vision of this, but it is slow going.
Some of us (me, for example :) have to spend multiple weeks a year drawing the design architecture of certain parts of the product. During engineering meetings everybody raised the same question: I'm committing to a new area of the product; how can I understand what the components are and how to do certain things? You can ask your colleagues, but they might not know either.
So I decided to document the main flows and designs in PlantUML diagrams. Having these diagrams greatly improved the onboarding process, because you can quickly glance at what component does what and what the dependencies are (the code base was in JS, so tooling is usually quite limited for refactoring/figuring out wtf is going on).
But the problem with such an approach is that the diagram quickly gets out of date. Someone makes a change and the diagram makes no sense at all anymore. With what I saw in Gtoolkit, you can always query the real source code and build custom dev tools that always produce a current, real overview of the system. I would love to have a starter kit for JS projects that you can drag and drop and start building your own tooling for your product.
How fast are your components really changing though? I write a lot of documentation for the systems I work on and rarely find old diagrams not making sense at all. Maybe smaller projects might have more changes going on but I find I have to add things more often than completely scrapping docs (the specific product I work on is almost 10 years old although the company I work for is a lot older)
You often see stuff as simple as a README get out of date. Lowering the barrier of entry to write documentation is hardly a bad thing, but the problem remains that said documentation takes time and work to stay relevant.
Couldn't agree more. I think we should redesign programming so that it is primarily a method of communicating between humans. If that becomes the focus, we can use the best methods we have of communicating words, graphics, and interactivity to explain complex systems.
One of my own projects in this area is a prototype where I made the best possible explanation of a JavaScript library I could: https://glench.github.io/fuzzyset.js/ui/
1. Programmers and code review tools are not very good at making sure comments and documentation are updated when code is.
2. People see literate programming as primarily for documentation-focused or teaching purposes.
The statistics in the article we're commenting on suggest that more software systems should be documentation/teaching focused, since the "learning the system" phase is where most of the time goes.
> 2. People see literate programming as primarily for documentation-focused or teaching purposes.
I agree. Specifically, it's for teaching my future forgetful self what the hell this bit of code is supposed to do. Especially if waterbed theory appears to apply and it's inherently a juggling act of interlocked complex systems.
I will admit that I am in group 2. I always saw it as something exclusively used for teaching. I have a project that I basically abandoned because of a lack of free time and there is little hope that my friend will understand the code base without my help. If literate programming can help with that project I'll be sold on it.
> But why literate programming isn't more popular is beyond me.
One issue with literate programming is that it advertised a single narrative for code. But code is data, and there are always many narratives about data. This change might look cosmetic, but it is actually fundamental.
In Glamorous Toolkit, besides having multiple views, we also embedded an interactive notebook right into the development environment, and through it we tell interactive stories about the inside of the system.
> I think we should redesign programming so that it is primarily a method of communicating between humans.
Agreed.
I think it was Fred Brooks who argued that the number of bugs in a system correlates with the number of lines of communication between the coders of the system. And that number grows quadratically (n(n-1)/2 channels) as you add coders.
Fixing the communication has the most immediate impact on bug count.
English is pretty good for communication between humans.
Obviously, your method of communication needs to adapt to the subject at hand. Talking about a website design certainly seems different enough from talking about WebAssembly concurrency semantics to warrant different modifications of English.
That programs should be for communicating between humans is one of the core design tenets of Smalltalk. For example, just look at the number syntax. No cryptic conventions for non-decimal bases; simply write BASErNUMBER, e.g. 11rA9, 36rZ. Humanism pervades the entire system. Read the introduction and first chapter of Smalltalk-80: The Language and its Implementation (available online).
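To spell those out: 11rA9 is "A9" read in base 11, i.e. 10*11 + 9 = 119; 36rZ is 35; and the familiar 16rFF is 255. The base is right there in the literal, so nothing has to be memorized.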
It's no accident that the Glamorous Toolkit has been implemented in Smalltalk; it is part of the flow of Smalltalk culture.
Instead of slumming it by trying to make JavaScript work (a truly awful language for humans, with the most contradictory base library I know of; its faults are beautifully demonstrated in several presentations), try using an open-source Smalltalk: Squeak, Pharo, and indeed GT itself.
The hard part is talking to hiring managers or potential clients about this without sounding like an idiot who doesn’t know how to put his or her shoes on the right feet. What do other people do in those situations to communicate this?
I've been struggling on and off with this for the past few years since moving into consulting after ~15 years of permanent (platform/ops/automation) engineering roles.
Often the most difficult situations are complex not because of some advanced technology or business need, but because of the sheer number of components in play that you need to understand in order to add any value.
The ramp up time just to understand WHAT components make up a given system let alone HOW they work seems to have shot through the roof over the past 5 or so years.
With once monolithic systems being broken down into distributed microservices, service meshes being widely deployed, everything-is-an-API architecture and other good things - an unfortunate side effect (when combined with seemingly growing expectations on engineers to be 'full stack') is that cognitive load has shot through the roof.
Compared to 10 years ago, it does seem like most systems have better uptime, but I'm not convinced they're easier to support or that they aren't over-engineered most of the time.
Edit: spelling, grammar (sorry it's midnight and I'm falling asleep)
If your distributed microservices require several of them to be updated for every change, are they really separate services? Isn't the entire point of a microservices architecture to reduce the cognitive load of understanding how everything in the system works? You simply need to learn the endpoints, not the inner workings. Take Uber for example: POST to API 1 to call a ride, POST to API 2 for payment data, POST to API 3 to report a problem.
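To sketch what "learn the endpoints" means in practice (all names here are invented for illustration), a consumer only needs the contract, something like:

    using System.Threading.Tasks;

    // Hypothetical contract for a ride service. Consumers code against these
    // endpoints; the service's inner workings stay hidden behind them.
    public record RideRequest(string Pickup, string Dropoff);
    public record RideReceipt(string RideId, decimal Fare);

    public interface IRideService
    {
        Task<RideReceipt> RequestRideAsync(RideRequest request);   // e.g. POST /rides
        Task SubmitPaymentAsync(string rideId, string cardToken);  // e.g. POST /payments
        Task ReportProblemAsync(string rideId, string details);    // e.g. POST /problems
    }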
Unless your business requirements change drastically every 2-6 months, then it doesn't matter what architecture you use - you'll have to redo most of the stack in a monolith or distributed microservice arch.
Are these good things, or just fashionable things? Is optimizing for five nines, at the expense of many other metrics, ever the right choice if you're not Google? Or is it just a fad? Especially if and when the implementation is just pure blind cargo-culting?
The amount of people excitedly telling me their 8 person startup is using microservices really upsets me. I try to tell them what a mistake it is, but they never listen. They don't know what problems microservices solve or bring. They just know "microservices = good." It gets really frustrating sometimes.
You can really shoot yourself in the foot when freelancing with this caveat. The faster you are the less you make.
I try to charge more because I do move pretty quickly, but I have been replaced by someone "A third of my cost" more than once. If you took the triangle for "Fast, good, cheap" and graphed the market demand for each, my gut feeling is that "Fast, cheap" is the leading demand by a long margin. To someone hiring a freelancer, "Fast, cheap" looks close enough to "Fast, Good"
Yep. This is why experienced consultants tend to aim for value-based pricing / project-billing, and avoid hourly when possible. Otherwise the incentives are all kinds of screwed up, even adversarial.
At the societal scale it's really hurting, it seems... everybody is trying to grab a chunk of the blanket, and we all spend more time fighting each other than improving our lives.
Hence the advice "charge more". Never try to compete on price; it's a race to the bottom that commodifies your work, and you're competing against people able and willing to work for a pittance. Be strategic, be viewed as strategic, and price accordingly. Also, consider a hybrid with a fixed-price base (esp. with an initial "discovery" phase, priced on its own) and explicit provision for time-and-materials overages. Mostly, figure out what works for you, and do everything in your power to deliver value for your clients.
I say all this having used a few different models over the years but also presently engaged in a long-term (well-paying) hourly contract that's honestly been a welcome break from the sales/marketing/bizdev aspects of consulting.
The hardest part of contracting is finding good customers whose priorities in that triangle align with yours.
Sometimes fast and cheap is the right answer. And there are devs who enjoy that role (although they often don't have the self-awareness to realize this, in my experience working with them).
Yep, both happened to me. It seems that "he'll get into the job in a week" is a deeply ingrained assumption from many other jobs and is seamlessly transported to programming work.
On my last job -- which I do somewhat regret losing -- I failed to meaningfully deliver that message.
There were SO MANY things to figure out, and I had no experience in Kubernetes (and only surface experience with Docker, Tilt and Helm) yet I was expected to be productive in 1-2 weeks after onboarding.
The main reason I started taking too long on tasks was that I grew more and more reluctant to ask questions. They were always answered extremely curtly and unhelpfully, and left me with the impression that nobody cared to help me become productive sooner. Also, you were expected to hang out in a rather unofficial Mumble server the entire day in case you had questions. What? Are we working remotely or are we simulating an office? Seems it was the latter.
So anyway, I took the long route and started exploring a lot while working on my first tasks.
Needless to say, this got me fired. I regret losing the very respectable salary but beyond that I am actually happy that it didn't work out.
It's like, I get it, you guys are all busy, but if you had taken one or two weeks to hand-hold me every day, then we wouldn't have been having a discussion at the 4-month mark about why I was so slow and why I had to be let go.
It was rather sad because I kind of liked the guys in the team -- but they were not helpful and you were expected to wing everything yourself. Which is fine, AFTER you receive meaningful initial help. Which I never did.
But, on topic again: I wasn't able to express that message properly some of the places I worked at. There are people who were receptive to it (it == "you need to properly onboard people even if that means some of your business-critical devs work at reduced capacity for a few weeks") when I said it in the past -- and most are receptive to it in my current job as well -- so it seems it really depends on the people themselves and/or the culture of the company.
I guess it's kind of like hiring: it's much more randomness than anything else.
Had a similar k8s experience. Because I took the time to learn how it worked, not just make believe.
Yes, and:
> why I started taking too long in doing any tasks
I actually test my code, which makes me appear A LOT slower than my "fast" coworkers. So while I rarely do rework, my apparent "velocity" (Agile FTW) looks much worse.
I mean actual tests. Including negative tests. Which requires knowing how the system I'm changing actually works.
My last few gigs, I don't recall any one else testing their code, much less doing negative testing.
Multiple times, coworkers discovered that their code in production didn't actually work. Could never work. Ever. Like the "cache" which never updated its entries.
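As a minimal sketch of the kind of test that would have caught that cache (the class and names here are invented for illustration, and I'm assuming an xUnit-style test project):

    using System.Collections.Generic;
    using Xunit;

    // A deliberately buggy "cache" that never updates existing entries,
    // plus the test that exposes it.
    public class NeverUpdatingCache
    {
        private readonly Dictionary<string, string> _store = new();

        public void Set(string key, string value)
        {
            // Bug: silently keeps the stale value if the key already exists.
            if (!_store.ContainsKey(key))
                _store[key] = value;
        }

        public string Get(string key) => _store[key];
    }

    public class CacheTests
    {
        [Fact]
        public void Set_replaces_an_existing_entry()
        {
            var cache = new NeverUpdatingCache();
            cache.Set("user:42", "old");
            cache.Set("user:42", "new");

            // Fails against the buggy implementation above -- which is
            // exactly why this case is worth writing.
            Assert.Equal("new", cache.Get("user:42"));
        }
    }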
One of the curses of having been a QA Manager, doing actual QA & QC & Test, is I'm embarrassed when my code doesn't work. Which has become a career limiting character flaw.
I have seen this before. Middle management graded teams on story points per sprint. Of course the shittiest team took that to the extreme: they had 15x the story points of everyone else. The problem is that every file edit was a story point. Every bug fixed was a story point. They constantly shipped ridiculously buggy code that cost us tons of money through lost customers. Fortunately each team owned a separate service, so they had to handle the continuous calls about their crap. And yet, every time, management would scold them lightly and praise them for their hard work. The one time we had a bug that made it to production, management jumped on us. Thankfully, I left that company after a short time.
I now ask exactly how management attributes time for bug fixes. If they aren't allocated to the person, team, and story point for the original feature, then I explain to them my story and why I won't work for them.
This yet again proves that people just want to be told which criteria to judge others by, and once that happens they accept it as universal truth and will not easily (more likely never) change their views.
I've had colleagues in previous jobs who were several times better than me; they got jaded by such attitudes and left, leaving EVERY single manager "very surprised". But they were also kind of snarky and dismissive about it. Oh, the joy when management finally admitted 6 months later that they had made a mistake by letting one of them quit, and then the second round of negotiations when they wanted him back! He told them "eff you" twice in a row but eventually accepted a 6-month contract... at 3x his old salary.
I feel that under-appreciated engineers must learn to punish companies with their absence much sooner. But yeah, a lot of programmers are rather introverted and shy and don't understand the leverage they have, so sadly the status quo we have is pretty normal...
Just remember this when you see who gets promoted, and be sure to adjust your strategy accordingly ;)
Unfortunately most software companies (like all companies) are based on management ego and rah rah rah. This is an infamous problem which is part of why software quality is so low.
Never trained for any sort of QA, but I can completely relate to your mindset. I am paranoid and love to add tests. They saved my team's behind (and sometimes the company's, at least at my smaller customers) many, many times.
And it's still under-appreciated as a skill to this day, even if I got handshakes and many "good job!"s in chats.
I wanted to let you know that you probably did nothing wrong. The people you were working for were probably just like you a few years ago, and when they first arrived, the guy they worked for was a prick, just like they were to you. They knew nothing and their jobs were on the line. They probably went home every night for a year, puked their guts out, and frantically did everything they could to learn as quickly as possible, while during the day keeping their heads down and trying to keep the look of panic off their faces.
Now they're established, they've got their place. That boss that they struggled under is still there and they're going to do to you exactly what he did to them. It's like hazing. Sure one of them might choose to break the cycle and take you under their wing but they're taking the risk that after you learn the ropes you'll go to their sadistic boss and stab them in the back. You know you'd never do that but this is a workplace where everyone stabs everyone else in the back so why not. Also don't think that there hasn't been someone that's already tried that and got stabbed in the back so everyone has seen it and knows what can happen so they're not going to do that.
What they were probably looking for was two things. One, that you could take a certain amount of shit and not fight back, or at least fight back only to a limit. They wanted to know that you knew your place. The second thing is they wanted to know if they could trust you. Until they know that you're not going to go to their boss and say something like, "I have no idea what they're doing. They don't even know XYZ hot new tech," you're an enemy.
Not staying there very long was probably a blessing. Working in these environments can leave some serious scars. You read about stuff in history books where you think, "wow, how can someone be so angry that they do such terrible things to other people," then you work at a place like this and you find yourself thinking, "if I came across him in the parking lot having a heart attack, I'd step over the body and smile the whole ride home." Then you'll know.
You know what? That sounds plausible. Hurt people often pass on the pain to the newer members of any team (including families).
I did get the general vibe that this was some kind of rite of passage -- the unsaid message of "figure it all out by yourself or you're not worthy". I also got the vibe that they were tired and overworked (hence they needed several new people on the team). But the strongest impression was of a formerly tight-knit team of people who were now forced to work remotely, who were begrudgingly admitting they needed help, and who were also strongly introverted -- to make matters even worse.
That mix led to the situation described previously: I got more and more paralyzed and hesitant to ask for help as time went by, and my daily efficiency dropped to almost zero.
The thing that hurt me was how annoyed the team acted at every question I asked. One nasty (but pretty normal) response was "read the Tilt docs", and it really didn't help me when it turned out that a special line of Python code had to be added because my local k8s install refused to even initialize its network interfaces, without much of an error indication until I drilled way down.
I did not feel any triumph when the guy I did a screen-share session with finally admitted that they might have thrown me into too deep a pit, that my setup had turned out to be more difficult than theirs, and that the last time they did it the cluster was smaller and easier to configure.
I never once thought to say, "HA! TOLD YOU SO!" -- I was just very saddened.
It's easy for one to give in to negative thoughts and to second-guess themselves but discussing with people -- including yourself -- in this sub-thread did give me more clarity and showed me that it wasn't only me who was at fault.
> ...not too much catering to “super stars”. 1-2 heros does not a team make, the senior people make it their job to lift everyone up. The team doesn’t obsess over their high performers.
If I'm imagining your experience correctly this exemplifies part of why we need to open up our culture. Also, I'm pretty sure I've been in that gig and it bites. I suspect you dodged a bullet even if you got grazed.
Well, it would be very unethical and highly illegal for me to resort to name-calling (but we can probably chat about it in private) but in short the team was a bunch of pretty hardcore guys who are very good at what they are doing but they were a part of a smaller company that got swallowed by the bigger company that hired me.
My general impression after I was sacked was that the team (and the smaller company) were resisting having their culture changed with all their might (example: sitting in a voice chat room for the entire day; seriously, we work remotely and somewhat asynchronously nowadays, so why?).
But I didn't give it much thought because in the end it didn't matter: they made up their minds without discussing with me, and even if I re-applied it would not get me anywhere.
What really got to me, however, was that I was fired shortly after I finally took the company's mission and challenges to heart and started working VERY hard -- a mere 2-3 weeks before I got fired. To be let go almost exactly after you've muscled through a mountain of obstacles and started loving the job and the people... that definitely hit a vulnerable spot in me, I admit.
> Also, I'm pretty sure I've been in that gig and it bites. I suspect you dodged a bullet even if you got grazed.
Long-term I am sure I'll think the same but in the meantime my income took a hit. Sigh.
Thank you for the kind words, they do mean a lot. (Still have some insecurity about losing that job, can't deny.)
As outlined in more detail in another sibling comment, I had a lot on my plate in my personal life during the gig and I couldn't pull through in time (in their eyes at least).
I am glad and proud that I managed to overcome literal dozens of obstacles -- most of which with tech that's extremely hard to master, with Kubernetes at the top spot -- but I still wish I could work with them again. But with some culture modifications. Which, I realize, won't ever happen. Plus most companies and teams never change their culture.
The only way to win at a job like this is to dive in and work twenty-hour days until you show productive output. You need to over-communicate and haunt those company forums. It's more the company's fault than yours. They should have paired you with an experienced employee for the first few features. I bet they over-hire and just use the first two months as a working interview.
Judging by one guy who left (or was fired, I don't know) while I was there, and several more from other departments, I think you are spot on -- they did seem to cast a wide net and just let some of the fish fall through. Guess it was easier that way?
I was painfully aware that I had to put in 12+ hour working days until I showed productive output, yep. But sadly the stars aligned against me: during the same period my wife had severe depressive episodes I had to help her through; my mother almost died, and my wife and I took turns "patrolling" the hospital where she lay for days; I had a huge falling-out with my brother; and I finally cracked under financial and emotional pressure (not going to bore you with my life story, but let's just say that the last several drops made the cup overflow). On top of that I was asked to comply with a weird culture and practices that put extra pressure on me.
So I wasn't able to do what I knew very keenly I had to do to keep the job. I am still a bit sad about it, because I know for a fact that the whole thing had actually started taking shape: I found my motivation, energy, and desire to work on the problems in detail and with good craftsmanship... but by then it was too late, apparently.
I'm sorry that you feel like you have to give reasons why you weren't able to work 12 hours a day. That's an unreasonable, unhealthy, and frankly disrespectful expectation from your employer whether it's explicit or hidden. I'd encourage you not to try to justify or excuse it. I can't imagine that losing the job is easy, but
it's not your fault
that you were subjected to that, or that you did not match their terrible standards.
I think the best approach is not to be apologetic about it, and just treat it as part of the job (which it is).
For instance, when you're talking about time/effort estimates, if you include the time it's going to take you to get onboarded with a new system, this is a sign you are more professional, not less.
I am not saying we should be apologetic about it at all.
However, I am saying we can improve the effectiveness of the time we spend by an order of magnitude without much effort. We just need to want it. Interestingly, I also observed that solving problems without reading code leads to increased happiness, too.
I completely agree, but currently, the way things are, we'd need very intelligent tooling that could parse an entire project and give you some meaningful insights -- in order for you to have a shorter and more impactful onboarding.
Thing is, nobody wants to produce such tools for free. Plus, they'd be a huge competitive advantage, so even if a company invents such a tool, they're very likely to use it for their own gain and not to help the programming field at large.
The choice of what to treat as bedrock is essential to the parent comment's remark that "We just need to want it." The notion that a person or a team should be able to use whatever development methods and architectural style they feel like coming up with at the time and then throw it into intelligent tooling to figure it out—rather than, say, spending a little more time learning another paradigm that doesn't require general artificial intelligence to be a solved problem—is something that qualifies as wanting something but still not wanting it badly enough to do anything about it.
Consider a closely related topic: source control. Nowadays, not using any form of version control sounds crazy, but there was a time when that wasn't the case. What was standing in the way? A subset of working developers who just wanted to code without having to think about the task of systematically capturing a record of a codebase's history. But having a reliable, exhaustive record of changes is more useful than not having it, and the default stance today is that you need to use source control. What did it take to get here? Programmers getting over themselves and putting in the upfront work to attain a level of competence where basic source control operations are natural.
The tools the author has created and the work they're doing are in "clearly better than what many people are doing now" territory, but it's all completely unworkable so long as programmers are unwilling to get over themselves and keep opting to do things the way they've always done them.
I think it's economic incentives and supply/demand games above all else these days, though, not so much technical prowess (which is IMO aplenty in most teams I have ever been in). I have met some extremely intelligent lawyers and business managers, and a good chunk of them are VERY KEENLY aware that most programmers are of low quality and unwilling to change their (barely learned and never revised) ways. They know, trust me.
But they sleep better knowing there is a bigger supply pool out there, because they mostly care about how to manipulate the tech worker into their agendas and objectives -- and thus [want to] view IT workers as replaceable cogs, even if we all know (them included) that this is factually untrue.
That's a huge cultural problem. Not because I feel threatened by some 20 y/o smart guy who feels like a God after two JS hackathons, no; it's mostly because the shot-callers play on people's egos. Meritocracy is still mostly a theoretical construct.
---
RE: "we need to want it bad enough", I do want it very badly both ways: (a) have intelligent AGI-level of tooling and (b) people not shoving their current hype-train ideas into a 5-year old project -- but I personally am not willing the put the work in both because both are not my job. And even if they are made my job, I am 99% sure I will not be paid enough to (1) deliver direct business value with my tech expertise, (2) mentor youngsters and (3) work on automating myself out of the job, all in parallel.
So while I do agree with you on all accounts, I think the incentives outside of our tech bubble are wildly misaligned with our interests and objectives.
Well, we just built that platform and we made it free and open-source. Take a look at gtoolkit.com.
And you are right in saying that it can provide a huge competitive advantage.
The only difference is that the tooling does not give you meaningful insights unless you ask it questions. Well, unless you program your questions. But, that is actually much less difficult than it appears.
This might seem like it's purely about optics, but I do think that being familiar with some terms for communicating the need to understand the system you are working on, as a consultant or employee, while staying professional about it, can help a lot.
A couple of such terms that I use[0] are “knowledge transfer” and “[initial or in-depth] audit”.
Typically I’d be communicating with a non-technical decision maker on the other side, and if there’s an existing codebase (or third-party systems that need integration into what I build) I may request:
1. An initial audit before I agree to the larger body of work. Audit will require access to systems in question and the ability to talk to the people in charge of or using them. Initial audit may already warrant an NDA, but whether I can accept further work will ultimately depend on the outcome of this initial audit. The deliverable of the initial audit may be a brief document or a scope/statement of work.
2. Knowledge transfer or in-depth audit. If knowledge transfer is impossible, during the previous step I’m supposed to get a good understanding as to why (does the previous owner refuse to communicate? which factors have caused this situation?), and instead of knowledge transfer allocate billable time for an in-depth audit. There would be a separate deliverable of this step as well, which would include some diagrams or documents describing the systems in question. A separate deliverable is useful both to myself and to the customer (if they choose to hire someone else to work on this), and makes it clearer what the customer is paying for.
[0] If there are better alternatives, I'd be curious to hear them (though I'm not holding my breath, responding to this thread 2 days late).
One thing that I've tried (to varying levels of success) is to use that time writing unit tests. It's visibly productive, nominally helpful, and incredibly useful in picking apart what's actually going on under the hood. The two biggest challenges I run into are managers (including engineering managers) who insist that unit tests are a waste of time, and codebases that resist attempts to break things down into workable units.
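For example (a rough sketch; LegacyPricer and its numbers are entirely hypothetical), a first "learning" test often just pins down what the code currently does:

    using Xunit;

    // Characterization test: record what the unfamiliar code does today.
    // The expected value comes from actually running the code, not a spec.
    public class LegacyPricerCharacterizationTests
    {
        [Fact]
        public void Discount_appears_to_be_applied_before_tax()
        {
            var pricer = new LegacyPricer(); // hypothetical class under study
            var total = pricer.Total(listPrice: 100m, discount: 0.10m, taxRate: 0.20m);
            Assert.Equal(108.00m, total);    // (100 - 10%) * 1.20
        }
    }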
I have not faced this problem. I say that there will be a learning curve to understand the product before a new developer can start fixing bugs or adding enhancements. Managers have always understood it.
For contractors too the learning curve is part of the deal. The contractors get paid by the hour when they are climbing the learning curve. Managers seem fine with it.
What is typically considered an acceptable learning cost / duration in your experience? I am not asking because I do not agree that there is a learning curve (we call it assessment), I am asking because I am curious what people expect.
There's no "duration". There's "what's the proportion of time you'll spend learning". Unless you have a very repetitive job, it will never go down to 0. It will just move towards it over time.
And the rate of change depends on the person as much as on your internal structure, documentation, scope, etc.
This depends greatly on both the Seniority of the position and the complexity of the code base in question. Generally jobs I've worked have expected it to be on the order of 1-3 months, although I've found it hasn't always taken that long in practice.
Unless you work on trivial codebases you never stop learning. The code changes faster than you can keep up. It's a constant overhead for everything you do.
You tell the managers that you will spend no time at all on figuring out the system because you are just that good. Of course, you will still need to spend some time on it. Hopefully nobody will look into it too closely, but even if they do, you can claim that that time was already part of the "fixing the problem" phase.
Most managers will not look into the matter too closely, because if they find no wrongdoing then it was wasted time and if they do find it then they have to spend even more time to find a new freelancer. (And possibly take a reputational hit for hiring the wrong person in the first place)
This is my situation right now on a contract I started a few months ago. A consultancy was hired and spent over a year building a system. All the in-house employees don’t really know how it works, it’s essentially a black box to them. So I’m digging through the code to make sense of it and no one from the business can give me test data that reflects what the system was meant to handle. Only thing I can really do is best guess and slowly learn how the business operates while at the same time fix bugs and launch new features.
So, one aspect of this post is: a) a lot of prior work has assumed that "comprehension" and "reading" are the same and b) reading is a bad approach to understanding code.
For me, this also calls to mind an old blog post from Peter Seibel about a disconnect where many of us think that we should all be reading code for our own understanding, somewhat like literature, but very few of us do and it rarely yields much. And one of the reasons why it seems that style of reading is ineffective, is that coming to an understanding of code is more like a scientific investigation than just reading prose.
I agree with that point. I also agree that we need to be able to create tools that deal with software systems more easily.
But I also think two other perspectives are important:
- One is the historical perspective. The code-base is rarely a coherent whole. Two different areas may accomplish similar things with different tactics. The design of a component may be ill-suited to the way it is now being used. What is an intentional choice, and what is an accident? Which past choices should my current project align with? We typically read the code as it exists currently, and look at specific parts of the history only as a supplement, because even visualizing history is complex. But understanding why choices were made, and in what context, can be critical to knowing which things can now be changed.
- The other is that reading code shows us a complex intensional definition, and reading tests gives us a partial view of an extensional definition (in case X we get behavior Y). But to "understand" programs well enough to proficiently change them, we have to grok something like the neighborhood around our current program: how would a given change in the code change the behavior? Being able to interactively change and re-run a program, and to compare behavior before and after, is in some sense like doing finite-difference differentiation.
The post explicitly says that assessment (comprehension with the purpose of making a decision about a situation around a system) is not reading. However, it does say that people currently conflate the two, because nobody distinguishes them, to the point that reading is used as a proxy to measure comprehension effort.
You are correct in saying that I argue that it is not appropriate to employ reading as a main means for assessment.
Code is certainly not literature, but it should still be studied. In fact, assessment specifically talks about the intent. If the intent is different, such as learn a new language, reading is appropriate. Reading is also appropriate when the problem fits on one screen. It starts to be inappropriate as soon as you start to scroll.
I also do not say that tools should be limited to code. Every aspect of a system, including its history, runtime, tickets, and customer feedback, is data, and it's all relevant. We should be able to easily integrate any of these into our reasoning.
I agree with the observation that code can vary greatly. In fact, it is for this very reason that out-of-the-box clicking tools will always fail to provide meaningful value. They bake the question into the click, but because context is unpredictable, we simply do not know the question before we have the problem. That is why the specific tool we need should come after the problem, not before it.
And yes, a system is a phenomenon that should be approached through the scientific method (this is the essence of what moldable development is). Developers are already doing that implicitly. We should just make it explicit. All sorts of possibilities will arise after that.
IDK why someone downvoted you. Thanks for these thoughts.
I guess I would only add the distinction that you're discussing "comprehension with the purpose of making a decision about a situation around a system". But sometimes we legitimately want to build comprehension without yet having a specific purpose or decision (e.g. when onboarding a team with an existing code base, or trying to understand how a technique works), but even then reading is a tempting but inadequate path to building understanding.
You are raising an important point. When you do not have a hypothesis, the first thing you want to do is get one :). It's like in research: the greatest problem you can have is not having a problem.
Now, how do you get a hypothesis?
You can start from some generic visualizations. The goal here is not to gain understanding, but to poke at finding interesting initial questions.
But you actually always know something. You likely know the domain. Or you know the last tickets that are in the works. Even listening in on casual conversations is a good starting point.
When we train people, we literally start from the very issue they work on. Within 15 minutes, we typically find an interesting hypothesis to check for. For example, a dialog could go like this:
A: What do you work on?
B: A UI refreshing bug.
A: What do you think happens?
B: I do not know.
A: Why are you looking at this specific screen? (This is a key question. People often do not know why this screen and not another. If you have a 250,000 LOC system, you likely have some 5,000 other screens you could potentially look at. Not knowing why this one is potentially interesting is not a good thing.)
B: Because I think maybe it's related to how we subscribe to events.
A: What do you expect the event subscription to look like?
B: It should always happen in a method called xyz that is provided by the framework.
A: In all classes?
B: Ah, no. Just in components.
A: Ok, so you want to find the event handlers that are not defined in xyz in subclasses of the component superclass.
B: Ah, right.
It's actually remarkably straightforward. Just try it.
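To show how mechanical that last check can be, here is a rough sketch of it as a throwaway Python script (i.e., outside Glamorous Toolkit, which would run such a query against its own model of the code). The Component superclass name and the on_ handler-naming convention are assumptions for the example:

    # Find methods that look like event handlers but are defined outside
    # the framework hook "xyz", in classes that directly subclass Component.
    # "Component" and the "on_" prefix are assumed conventions.
    import ast
    from pathlib import Path

    FRAMEWORK_HOOK = "xyz"
    HANDLER_PREFIX = "on_"

    def suspicious_handlers(root):
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(), filename=str(path))
            for node in ast.walk(tree):
                if not isinstance(node, ast.ClassDef):
                    continue
                if "Component" not in [ast.unparse(b) for b in node.bases]:
                    continue
                for item in node.body:
                    if (isinstance(item, ast.FunctionDef)
                            and item.name.startswith(HANDLER_PREFIX)
                            and item.name != FRAMEWORK_HOOK):
                        yield f"{path}:{item.lineno} {node.name}.{item.name}"

    for hit in suspicious_handlers("src"):
        print(hit)

Twenty throwaway lines like these either kill the hypothesis or hand you the exact places to look at next. That is the value of starting from a question rather than from reading.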
One thing I've found can help somewhat with the historical aspect is putting reasons for particular choices in commit messages. Including small choices, decisions that are subcomponents of the reason for the whole patch. They are forever attached exactly to that diff, and can sometimes let a spelunking maintainer differentiate accident, entropy, and intention.
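For what it's worth, a made-up example of the kind of message I mean (the ticket number and every detail are invented):

    Fix flicker when reordering dashboard widgets

    Ticket: DASH-1432 (made-up example)

    Debounce the refresh (250ms) instead of batching events upstream.
    Batching would also have delayed the audit-log writes, which must
    stay real-time, so the "obvious" fix was deliberately rejected.
    250ms was the largest delay QA could not perceive.

The second paragraph is the part a future maintainer can't reconstruct from the diff.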
Interesting. I'm not so fussed about the formatting, and the ticket number is easy to catch with a git hook. But the "ask for more details" is definitely what I'm after.
This is so true. I Twitch-streamed myself developing a game, for a total of 64 hours. And looking back, I found the parts where development slowed down were when I was confused--I almost want to yell at myself--hindsight really is 20/20.
What I took away is the faster I can recognize I'm in confusion, the faster I can get out of that state--it means I really need to focus on learning what I'm not understanding, and then come up with a plan to change the actual system into what I want it to do.
Another thing is, though I've really refined my text editing skills, it's interesting to see yourself struggle to _simply edit text_. So while watching myself, I jotted down a few ideas for shortcuts that could help.
Final thing, that I don't know how to fix, is sometimes I'm not motivated, or not in the mood, or scatter brained. Honestly, looking at myself, I don't know what my deal is. If I can just force myself to turn on the stream, then my friends will pop in and encourage me, so that has really helped. Though I'll still hit work that I'm like ugh.
At the beginning of the presentation, a slide showed the similarities between IDEs. For whatever reason, it reminded me of the setup for every intro-to-Smalltalk presentation I've been through. Sure enough, Glamorous Toolkit was written in Pharo, and seems to really bring a lot of the Smalltalk-like tooling to other languages ... and wraps it up with a notebook-style UI. It's the most interesting Smalltalk thing I've seen in a while.
Reading this leads me to wonder: Is a large part of what makes a "5% programmer" or whatever, that they are so much better at comprehension/figuring-it-out, and retaining what they've figured out for next time?
I don't consider myself a "5% programmer", but I'd definitely agree that one of the big measures of my improvement as a programmer has been my ability to quickly figure out how a system works, and my ability to manage the mental models involved.
Compared to who I was, say, ten years ago, I'm better at grokking the system in general. But for many projects keeping the entire thing in my head is impossible, so another thing I feel I've gotten better at over time is how to figure out which part of the system I need to hold in my head as I work on things.
Related theory I've had rattling around my brain recently: a programmer's career velocity is strictly correlated with the percentage of code that they deal with every day that is code of their own making, vs. someone else's. I always get so motivated and productive when I'm building off my own stuff, and pretty grumpy and slow when not.
I've spent almost a year with Spark and I think I'm just scratching the surface now. There are so many knobs that just figuring out the optimal cluster configuration for production jobs took weeks of testing (so much of that is dependent on the data size and the specific use case). The docs are pretty good, but really don't detail any of the 'gotchas' that you'll find in production (you have to google relentlessly for those), and as for the hacks you put in place to deal with those, well, you're on your own. Unless you have a staff that has worked with the toolset for years (which we don't.. I'm basically it), you will spend weeks in a try / hack loop. All that said, it's a great toolset for its intended purpose..Scala is a great language, etc, etc.. but I've spent a long time 'figuring the system out'.
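For the curious, this is the kind of knob I mean -- illustrative values only, since the right numbers depend entirely on your data and workload (the job name here is made up):

    # Illustrative PySpark session config; values are examples, not advice.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("nightly-etl")
             .config("spark.executor.memory", "8g")
             .config("spark.executor.cores", "4")
             .config("spark.sql.shuffle.partitions", "400")
             .getOrCreate())

Multiply a handful of settings like these by the number of jobs and data shapes you run, and the weeks of testing add up fast.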
I've found Sourcetrail (https://www.sourcetrail.com/) an invaluable tool in aiding program comprehension, especially in C++ projects.
There's another interesting program comprehension tool based on dynamic analysis (as opposed to Sourcetrail's static analysis), which I've yet to try: http://findtheflow.io/
The tooling approach is an interesting one, but I think the most important thing remains documentation. The "treat code like data" analogy breaks down because unlike (most) data, code is a thing that was intentionally crafted, one piece at a time, by a relatively small number of people, many of whom are probably still in the building. It isn't some foreign artifact, understood by no-one, that's been measured from impersonal processes. It was made. Almost by definition someone has already had an understanding of it at some point. Reverse-engineering a new understanding from scratch - even via powerful tooling - remains a wasteful path to take compared to simply reading a (written-down) understanding that already exists.
From my experience, when changing (or fixing) something on an existing system, I typically spend ~48% of the time figuring out where to do the change, then ~2% actually doing the change, and then another ~48% testing it and adapting unit tests broken by it.
The difference between an operator and an engineer is that you need an engineer when it's not obvious what to do next. That's the whole job - figuring the system out and finding a solution.
So let me get this straight: a programming environment to program programming environments in. What would the startup cost of using this be? Hmm. Comprehending and navigating it, eh? So now, when I'm, say, doing embedded C/C++ programming, I have to add yet another language to the stack, yet another toolkit, yet another environment _I have to build first_ with my preferences -- so that _another_ person getting into the project can not only use my environments, my toolkits, my code and comprehend those - but ALSO my hand-built, specialized-for-my-needs programming environment? And this is supposed to _improve_ the situation?
OK, I only skimmed the site, but mostly because the above question keeps nagging me and the mind boggles.
IOW, gtoolkit is today's sexy Emacs (or, it claims to be what Emacs claims/claimed to be): here's a toolkit for writing/extending/MOLDING your tool. Or, acknowledging its Smalltalk heritage, it's yet another Smalltalk IDE.
It's not like the "extra work" of making your product explorable disappears with Glamorous Toolkit, the silver bullet. gtoolkit just offers a streamlined set of APIs for you to (comprehend, and navigate, and then) use to make your product explorable, doesn't it? So we add another layer of abstraction, work, and potential for error? What was that about complexity and abstraction? Hmm.
The parallel to Emacs is quite on point. Emacs was great for text, but it's about time to outgrow that medium.
Indeed, Glamorous Toolkit is a Smalltalk system, but the target is for it to work with all sorts of other technologies (and it does already).
And yes, the claim is to spend that extra work to build tools. I understand how that can appear as coming at an extra cost. But, here is the thing: the budget for figuring the system out is already allocated.
Just in the same way as it was allocated for testing. When automatic testing became a talked-about proposition, people claimed they did not have time to spend on it because they were already busy clicking around. It turned out that automation freed up much of that energy, allowing people to focus on more rewarding activities.
Now it's code reading's turn to be automated. Not all of it will be. Code reading is still meaningful in the small.
Glamorous Toolkit is a first technology that shows how this works in practical settings. It's not theoretical. Of course, it comes with a learning cost. We estimate:
- about 1 week to learn how to learn (yes, the technology can be used to learn the technology, too :)), and
- about 1 month to get reasonably fluent with initial analyses.
This is an investment that should be judged like any other investment. The promise we make is that the investment can be utilized over and over in many different circumstances because moldable development is universally applicable.
You should not just take our word for it. We made the technology free and open source, together with all the material around it, so that people can evaluate it themselves.
I know that the programming language is only part of the story, but I do wish more programming languages put a greater emphasis on reading and maintaining code rather than writing it. I find it particularly annoying when a language does not allow you to call functions/methods with named parameters: e.g. it's possible in Python, required in Swift, and impossible in Go.
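For anyone who hasn't seen it, a tiny Python example of why this helps the reader: parameters after a bare * are keyword-only, so every call site has to name them.

    # Keyword-only parameters: everything after the bare * must be
    # named at the call site, so the call documents itself.
    def resize(image, *, width, height, keep_aspect=True):
        return (width, height)  # body elided; the signature is the point

    resize("photo.png", width=800, height=600)  # reads like documentation
    # resize("photo.png", 800, 600)  # TypeError: takes 1 positional argument

The writer pays a few extra keystrokes; every future reader of the call site gets the parameter names for free.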
Hence the value of documentation. NIH syndrome is perhaps often a rational "I need clarity and control, so I am not going to waste time grokking the existing implementations and just re-implement" decision.
Does someone have advice on how to figure out a system, especially a Java API? I'm supposed to own an API at work, but I never really understood how to learn an API. Do you write out the functionality in text? Do you draw a flow diagram? Do you keep things at the class level or dive down into functions and variables? The bright side is that this API isn't very big, so I do want to use this as an opportunity to really learn it, and also to learn how to approach studying APIs in general.
I like to write a one-pager for anything I have to grok in a serious way. Key places the code visits are summarized; I use indenting to show branching. A whiteboard helps too. Just the act of doing this is more helpful than any artifacts it produces. They decay rapidly.
Great point about the artifacts. I've definitely learned that already. Nothing makes you think hard about what documentation you choose to produce like the realization that probably nobody, including yourself, will keep it up to date.
It's beneficial to approach learning about an API both top-down and bottom-up: the context in which the API operates, and then the workings of the API itself. Imagine you were in charge of maintaining a hardware tool. I would first want to know how the tool is used by various customers before learning how the tool works.
1. Begin by learning all the end-to-end product flows this API is going to be invoked within. This is the context in which your API operates. It also sets the stage for non-functional requirements such as latency and availability. How will the end customer and the business be impacted if this API were to misbehave?
2. Next, speak to all the customers of this API. How do they produce the data for your API's inputs, and how do they consume your API's output? You will be surprised by all sorts of creative ways in which an API gets used, not necessarily what the API was intended for to begin with. But you need to support all those clients, so better get to know their use cases. Also, understand how they handle your API's failures. The consumers tend to make unstated assumptions about input/output validations and invariants. It's not documented anywhere but floats around as tribal knowledge. Extract that knowledge and document it somewhere (see the sketch after this list). Make sure you are very clear about every single input parameter, how it's produced and its expected values, as well as every single output parameter and how it's consumed. Don't forget about exceptions. Oftentimes customers treat exceptions as just another expected output and so actually depend on the API throwing that exception.
3. Finally, take a look at the API's implementation. Don't get frustrated if you don't understand more than 50% of the code. It's perfectly fine. Make notes of the parts that make sense and also note down your questions. Ask around. Often some part will make absolutely no sense to you, but it is there for a reason; someone as competent as you put it there in the first place. Again, ask around, both within your team and among the API's clients. Pay attention to the API's downstream dependencies and how their failures are handled and bubbled up to your clients. Your API's SLA depends on your downstream APIs, so learn about them and ensure everything matches expectations.
4. You also have indirect consumers of your API who depend on the side effects it produces. Make sure you learn about them. If this API causes a side effect (such as a DB update or publishing a message to Kafka), learn how those side effects are consumed. Does someone depend on them? If yes, how?
Though I've described these steps in sequence, I find it useful to approach all of them in parallel and over multiple iterations. At first everything is blurry; after the first pass you penetrate about 10% of the cloud cover. After the 4th or 5th pass you will have a reasonable understanding (~85%), enough to make small bug fixes.
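A small sketch of what "extract that knowledge and document it" from step 2 can look like in code -- every name here is hypothetical:

    # Hypothetical sketch: tribal knowledge pinned down where callers see it.
    class QuoteExpiredError(Exception):
        """Raised when the underlying quote is older than 24 hours.

        Part of the contract: the checkout client deliberately catches
        this and falls back to re-pricing, so do not "fix" it away.
        """

    def price_order(order_id, currency="USD"):
        """Return {"total": float, "currency": str}.

        order_id: produced by the cart service; always a UUID in
            practice, though nothing upstream enforces it (an unstated
            invariant discovered by talking to consumers).
        Raises QuoteExpiredError: see above; consumers depend on it.
        """
        raise NotImplementedError  # implementation elided for the sketch

Even this much turns "floats around as tribal knowledge" into something the next maintainer can find.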
Sincere thanks for your response. This is probably one of my biggest weaknesses as an engineer so I really appreciate your help.
Right off the bat, I think you hit the nail on the head: I have only slowly started to think about how APIs are used, rather than just how they're written.
1 is a great point. I'll take some time to get more info on the end to end flows, and I think that will provide some great foundational understanding before even going in.
2 is a whole knowledge bomb in itself. I hadn't even thought of many of these points. Talking to the consumers is another great idea and I'll make sure to do that.
3. I appreciate the reassurance. Part of why I never built up the skill so far is because it's, frankly, intimidating to start poking at some big system from scratch. Thinking about downstream APIs is a smart angle.
4. I like this point a lot. Seems like a way to level up and start thinking about the bigger picture.
I can already see the benefit of these strategies and how they'll help me develop my understanding. Now I feel equipped to tackle our team's APIs, not as daunted as I was feeling before. Thanks!
You are most welcome! I'm glad you found it useful.
As you go through this exercise I'm sure you will gain new insights into the process and the API itself. Please document them for the benefit of others around you. The way I see it, you are putting in all the effort to learn the system anyway; you might as well put in a little more effort to document it and get your peers' appreciation in return. Also, you will have something concrete to show for all the time and effort you put in over 2-3 weeks, as opposed to just saying "Right, I'm now ready to maintain/change this API".
Written communication and specifically documentation is a very under-appreciated skill among software engineers. But it's a great way to make yourself visible among your peers and spread the knowledge of the system.
The software you write will have an impact on customers, but they are far removed from you. The documentation you write, on the other hand, benefits the peers with whom you work and interact every day, so it is a good way to be in their good books. They will appreciate and respect you for going the extra distance to spread the knowledge.
Absolutely. I've always been a huge fan of documentation and push for it when I'm at an org that doesn't care much for it. Thankfully my current org, at least my current team for sure, is very pro-documentation, so I'm more than happy to create documentation for everything I learn and give back to our internal community since I've also benefited a lot from others' documentation.
I also like your point on visibility a lot. That's something I've only really started learning about recently at my current org, and you make great points there.
This is why following conventions for whatever technologies you are using (not reinventing the wheel) has value. Same for documentation. Going the extra mile to document what you are doing and why, with some examples, or to update the documentation as things change, is not only valuable, but a clear sign of the maturity of a developer.
This will probably get worse as the value to engineers of fighting for architectural sanity diminishes in the modern semi-disposable, project-based job economy.
I have no stake in the company's future? OK, add another layer of abstraction to 'fix' that nonissue. Whatever. I can avoid it for as long as I can see to care.
In other words, the secret sauce of a scalable, in terms of people, software project is the lack of any clever bs in it: such a project uses only boring predictable patterns in everything.
>We created Glamorous Toolkit to provide a concrete start for the "how not to read code" conversation. Glamorous Toolkit is a moldable development environment that makes it possible to create custom tools about software systems inexpensively.
That was the most blatant blogvertising I have ever seen.
I've used Rust every day for the past 15 months. The programming language is just a very small part of the whole picture. Usually you integrate with many third-party systems that don't always provide all the guarantees they claim (so you get many edge cases), and understanding these things is not trivial. If it were only about understanding the code, it would be easy. And actually, having worked with both Rust and Ruby, the ease of binding.pry and debugging is unbeatable. Rust gives you guarantees about how the code behaves, but when you have input from external sources (user input or external service input), it's much easier to understand what's happening (=debug) in a dynamic language.
> it's much easier to understand what's happening (=debug) in a dynamic language.
I would argue the opposite. It's easier to understand what's happening when you know at a glance what datatypes are involved, whether they're used by value or reference, and how each case is handled. When I last worked with Python it was rather time consuming to fix bugs that crashed a script at some point after 10 minutes of processing, caused by code that would have thrown a clear compiler error in some languages and would have been fixed in seconds.

And when I look at e.g. a Rust function signature I immediately see what kind of data is used as arguments and what's passed back in the result. Meanwhile in JavaScript you just need to forget a symbol in a check and suddenly it won't know the difference between "false", "null" and "undefined".

You are right about integrating third-party systems, but I still prefer the approach where possible issues are discovered as early as possible.
You are bringing in Rust's amazing type system, but that assumes you understand how the third-party system behaves. Otherwise your types will be very loose (i.e. a String that could contain anything), gaining nothing compared to JavaScript/Ruby/Python.
If I know how the external system will behave, then yes, Rust is better, obviously. However, I was talking about the case of figuring the system out, as the article says. Sometimes you are not even there: you integrate with an external (legacy? undocumented? buggy?) system and you need to understand what's going on. In these cases a dynamic language is so much more productive than Rust or any Rust-like language.
Or even better, if both are the same. That's the reason why, as much as I like Rust, I would never use it for a project where performance is not critical.
That just makes it less transparent. You still have to be careful and know the difference, or one day you unintentionally create a shallow copy of an object and now you've got a bug the runtime won't warn about. Yes, Rust is more complicated and harder to get into in those areas, but in return it's not as ambiguous. I know that calling `clone()` will always do just that, no edge cases, and even if I were wrong the compiler would immediately tell me because of a type mismatch.

Realistically I don't think either of the two approaches is significantly faster. But I know which one I find more consistent and less frustrating to debug.
You are talking from the perspective of Rust. And you are right that in Rust, it cannot/should not be the same!
However, if you forget about Rust specifics, then things look different. E.g. by simply having the constraint of complete immutability, there is no reason to distinguish between an object's identity and its deep content anymore; they will always be the same. Of course, that means no mutation and hence reduced performance for certain things, which is why Rust doesn't do it.
Rust is absolutely amazing and made me a much better programmer. But programmer ergonomics don't seem like a focus of their team currently.
I can only hope for a Rust 202X edition that can introduce new syntax / deprecate other syntax and make certain things clearer (even if that means being explicit, with a little more writing here and there). Lifetimes and traits in particular need a lot of upfront investment to grok intuitively, and the fact that some core aspects of the language are implicit can later hit your assumptions really hard and confuse you for a while. At least that's happened to me, maybe I am just dumb and mediocre though.
> But programmer ergonomics don't seem like a focus of their team currently.
Actually, maybe my post gave a wrong vibe. I like Rust and I think the team put a lot of effort into programmer ergonomics.
It's just that the language is heavily focused on performance and competes with C++. And so they sometimes make tradeoffs in favor of performance instead of expressiveness or simplicity.
That is understandable - but 95% of the projects I(!) have worked on don't need this - I can just give it a GB more RAM and a bit more CPU and write software quicker because I don't have to care about certain details.
> At least that's happened to me, maybe I am just dumb and mediocre though.
The fact that you used Rust probably puts you in the upper 10% - rough guesstimation. I'm not telling you which 10% though. :P
> It's just that the language is heavily focused on performance and competes with C++. And so they sometimes make tradeoffs in favor of performance instead of expressiveness or simplicity.
Yep, exactly my feeling. When I get back to writing Elixir at my $day_job I am just blown away by how I can achieve most of the same results (for 100x less performance, of course) in like 20x fewer lines of code... :( A direct comparison with a dynamic language isn't possible, of course, but I too wish the Rust team would start sacrificing something for a bit more expressiveness and code conciseness.
> That is understandable - but 95% of the projects I(!) have worked on don't need this - I can just give it a GB more RAM and a bit more CPU and write software quicker because I don't have to care about certain details.
Both what you describe and hand-crafted, ruthlessly tested C/C++ code that's maximally efficient have their place. But I definitely don't belong to the "machine efficiency at all costs" tribe, and I get worried at any potential signal that Rust is headed in that direction. Which it might not be. We'll see.
> The fact that you used Rust probably puts you in the upper 10% - rough guesstimation. I'm not telling you which 10% though. :P
<Saruman voice> YOU HAVE NO POWER HERE!
...I mean, I am my own worst critic. It took me a while to get comfortable with Rust, and even if that means I am a below-average programmer, I don't care. I am taking my time and I can objectively measure that I am getting better at it over time.
I still do agree that Rust requires time and persistence, however. That is irrevocably true. Here's to hoping the team will make it consume a bit fewer characters (and thus less typing) and improve the compiler and the tooling further. I am rooting for them with all my heart.
> I get worried at any potential signal that Rust is headed in that direction. Which it might not be.
I think they are - but that's good! We need a language like Rust to write operating systems, databases, proxies, web servers, hey, maybe even browsers. All the things that are widely used and need to be highly performant and secure.
Maybe you are using Rust, but you actually really want a different language, one that doesn't focus so much on low level / performance?
Haskell or Scala or F# come to my mind. I'm listing statically typed languages, because I assume you like those (otherwise, why Rust and not sticking to Elixir).
> Maybe you are using Rust, but you actually really want a different language, one that doesn't focus so much on low level / performance?
That is very possible. But going by my experience and intuition, very rarely have I seen such a meticulous and relentless pursuit of efficiency, and a compiler that kills most of your bugs once your program successfully compiles, as in Rust. Maybe Haskell and OCaml are it as well, but they have a plethora of problems that Rust doesn't have. Maybe Nim and Zig? I've only heard good things about those but never tried them.
> Haskell or Scala or F# come to my mind. I'm listing statically typed languages, because I assume you like those (otherwise, why Rust and not sticking to Elixir).
Personal / professional development. I started with C/C++ and Java 19 years ago and moved to dynamic languages at least 12 years ago, and I felt that I wanted to have a language as powerful as Rust in my toolbelt again.
I would say Haskell does a much better job but at the cost of much harder to predict performance.
Scala (which I use professionally) comes close to Haskell, but you need more discipline, because it has e.g. the concept of "null" and you have to avoid it.
What I like about Scala is that it hits a sweet spot: a good number of jobs (way more than Haskell or Rust) and also very good tooling (better than Rust's, though not as good as Java's).
And Scala gives you this "if it compiles, it works" feel. But it has a steep learning curve.
I think F# is also great and underrated - same for OCaml. But because those languages are even more niche, the tooling is not as good, etc. What plethora of problems are you referring to btw?
Nim sounds exciting, but I've never used it either.
> I felt that I wanted to have a language as powerful as Rust in my toolbelt again.
If you are up for systems development, I would stick with Rust tbh. I think it will offer you some good job opportunities down the road and in general have a bright future. I don't think other languages like C++ or D can really compete with Rust in the long term.
Otherwise, I recommend giving Haskell or Scala a try, depending on whether you favor the learning experience or the practical gain.
> What plethora of problems are you referring to btw?
- Haskell: literally hundreds of possible combinations of compiler extensions. Immediate turn-off.
- Haskell: several String types. I understand the lazy / non-lazy distinction, but I can't understand why in 2021 you have C strings and UTF-8 strings separately. I am not seeing much Haskell adoption in embedded contexts where every byte counts. It felt like a meaningless academic pursuit and not a practical concern.
- OCaml: lack of actual parallelism. I am following the Multicore OCaml monthly reports, but at this point I've accepted that it's best to just wait for OCaml 5.0, which promises to have multicore baked in (earliest timeline: end of 2021, so, I don't know, likely mid-2022?). Also, I don't like the mixed paradigms. Even if I would appreciate using a `for` loop every now and then, I think I shouldn't be given that freedom. But that last one is a minor gripe, actually.
- OCaml: strings again. Having UTF-8 strings there is a challenge. In 2021 there is absolutely no excuse to introduce friction on such a topic. UTF-8 strings must exist. I know I can use the libiconv bridge, and that's not what I am talking about. I am talking about first-class support.
- Haskell and OCaml tooling felt like it lagged behind excellent tools like `mix` (Elixir) and `cargo` (Rust), but I hear they are constantly improving and are easier and more intuitive these days. I hope my impressions are outdated there!
There were others but I only managed to remember those above.
Ah yeah, the Haskell problems you mentioned are indeed annoying. However I think the compiler extensions are actually not a bad idea. Haskell is an old language and the extensions allowed it to improve over time.
Languages like Rust or Go will find themselves in a spot where improving the language becomes hard - same for Java; look how slowly that language improved, and still improves.
Can't say much about your remarks on OCaml, but it was interesting for me to read.
Mostly because I don't feel they will teach me something new. But I might be mistaken, who knows.
Another factor is the so-called T-shaped skills. I feel I've been going wide (learning every possible technology and paradigm under the sun) for waaaaaaay too long. I now want to focus on several skills and learn them to near perfection before going wide again.
If you already learned pure functional programming through Haskell, then I would not recommend learning Scala to broaden your overall language skills.
If you haven't really done the pure functional programming thing, then I recommend learning it. Especially the way of doing concurrency would be very, very different from how it's done in both Rust and Elixir. Scala also has actors, but the other way of doing concurrent programming is more interesting. For example, check this out: https://zio.dev/docs/datatypes/datatypes_stm
This is something you don't have in Rust or Elixir at all (to my knowledge).
But if it's really just for the sake of learning, I think choosing Haskell is better - who cares about strings when you learn these things.
If you want to specialize, learn Rust in and out. :D
Yep, I've been quite exposed to [almost] pure FP for 4 years and something now -- by working with Elixir. Definitely not as hardcore as LISP or Haskell, but I feel it already made me much better than before. So in terms of being exposed to new programming / comp-sci paradigms, I don't know, I am sure I haven't seen them all (stuff like Coq and Idris 2 comes to mind as examples), but I also don't want to only invest in being a walking, talking (and useless) encyclopaedia. :)
> This is something you don't have in Rust or Elixir at all (to my knowledge).
Maybe I am misunderstanding you but Erlang -- and thus Elixir -- has the best actor system invented so far. Message passing, copying data between actors, immutability, Erlang's OTP (fault-tolerance and tunable restarts of crashed actors), all of those things were the entire reason I moved my web work to Elixir at all. Well, the amazingly well done build and task executing tool `mix` turned out to be a huge and pleasant bonus, not to mention the very welcoming community and top-notch docs and best-I-ever-seen REPL experience.
In fact Erlang's actor system is so good that those in Scala and .NET were very heavily inspired by it. Akka in Java land as well.
Rust is getting there too -- the async semantics, the const functions and the various runtimes definitely are converging to much more efficient and machine-native actors with zero copying semantics and dynamic multiplexing on all CPU cores. I am extremely excited to see where Rust is headed in the next 5 years. It has the potential to get very close to the end-all be-all language.
> If you want to specialize, learn Rust in and out. :D
Completely agreed! There's so much work to be done out there that requires efficient use of hardware. So many companies have legacy systems still limping on ancient C/C++ monoliths and 2-3 brave souls are maintaining them, but the business wants either new features or the tech debt is preventing any improvements -- reasons abound.
Rust is extremely well-positioned to disrupt a lot of companies with legacy systems. I am planning to cash in on these opportunities. So it's a good advice from you, thank you.
> Maybe I am misunderstanding you but Erlang -- and thus Elixir -- has the best actor system invented so far.
I would sign off on that, and yeah, I think you misunderstood me.
> Yep, I've been quite exposed to [almost] pure FP for 4 years and something now -- by working with Elixir.
That surprises me. I'm not sure we use the same terminology. There is no "almost" pure. Immutability and passing around functions are nice, but that is really only 10% of functional programming. Mind that many languages call themselves "functional" nowadays, but the original meaning is actually different: it's about referential transparency. I'm not aware that Elixir supports that in a meaningful way, especially since you said you did it for 4 years.
So for what I'm writing next, I'm just assuming that what you did is using immutable datastructures and avoiding mutation of variables etc.
If someone comes to me and wants to learn actor systems, I can direct them to either Erlang/Elixir or Scala (with Akka). But honestly, Scala+Akka makes it difficult to fully embrace actors and is just... inferior. I would always recommend Erlang/Elixir, and would even go so far as to say that most people probably don't really learn to think in actors if they pick Akka.
For pure FP, however, it is the opposite. If you have not written at least a slightly bigger program in Haskell or Scala using pure FP, then you haven't really understood the concept. If you don't already think in pure FP, then you will have a hard time learning it while writing Elixir.
In Scala, the situation is much better, but still not optimal _for learning_ pure FP. I suggest you look into Haskell again when you feel in the mood to tease your brain a bit with a new style of programming. Pure FP is as different from Elixir/actors as Elixir/actors is from writing Python. You have to think differently.
For me, both actors and pure FP are actually techniques that complement each other very well. Pure FP is good for writing all the code _inside_ of an actor. It makes reasoning and concurrent programming much easier compared to having a lot of really small actors. On the other hand, pure FP does not scale: once you cannot stay in your own small "bubble", you need a concept to go beyond it. Be it to work across multiple machines, to recover elegantly from hardware or network problems between bigger parts of the system, or simply to deal with load and messaging problems. I don't know anything better suited than the actor model here. I hope that eventually we will have a VM like the Erlang VM with a language that supports pure FP as well as Haskell does. :)
The down-votes on your post are probably from people who are not happy with the status of tooling / debugging / IDEs for Rust - but I totally see your point.
A lot of time spent maintaining / understanding the codebase is time spent thinking about edge cases and about what can go wrong.
I had a similar experience with Haskell, but in Haskell you also have to think about laziness and what the computer will actually try to do, which complicates things a bit.
The (unrelated topic)-to-Rust-fanboyism hop count is strong on this subthread.
The amount of brain wankery you initially put in your architecture is inversely proportional to onboarding time for new staff. This problem is no stranger to Java (and Rust, it seems) projects.