Realizing that external dependencies are regular codebases just like the one you're working on. That you can open them up in VSCode, look around and figure out any bugs or issues you're having and even open pull requests to improve them.
At that point, you lose the feeling that there are magic things out there that you will never understand and that for the most part everything is just regular old code that regular people wrote.
I don't really remember ever feeling that external dependencies were magic, but thinking about this, it explains a lot of the behavior I see in some developers who are very negative about the more challenging parts of the job.
Some of them don't really believe the research stuff we do at work is even possible. They're constantly surprised when other devs finish those tasks. Some don't believe that other devs can code in C++ or Rust, or write parsers, database modules, implement IQueryable in C#, or develop novel algorithms for novel applications.
To them, if a package exists it must just work, and that package comes from another breed of developer that can't coexist with them. I see similar thinking with AI: now with ChatGPT and GPT-4, there's a hubbub about there being "no reason for our AI team to exist anymore".
I'm not a big fan of working with those developers.
> I'm not a big fan of working with those developers
I agree. And it ties into something I often see that puts me on edge: programmers not taking responsibility for the code they put into their projects.
What I mean is that when you incorporate any code, from any source (library, framework, copypaste, etc), then you are responsible for that code and its proper behavior as much as for the code you actually wrote. So you're well-advised to understand it.
That's one of the reasons why I won't include code that I don't have the source code to. I need to understand it and be able to fix it.
I've worked with some of those folks too. It seems to me like they haven't really learned programming the way I understand it; instead, they've learned various incantations that can be strung together, and are just at a loss when they don't work as expected/documented.
I’ve noticed this as well - they don’t view software building as a form of engineering that can be learned from basic principles of computer science, but instead as magic.
So the go to for every solution is to find a third party library from a “real” witch or wizard, and follow its basic tutorial. Maybe try to customize it a bit at most.
If something breaks, just start randomly moving things around or copying and pasting more code from forums until it works.
I can’t live like that. I need to know why something’s not working, AND why it IS working. I like stepping through my code with a debugger just to make sure things look right, even when they’re working.
I think the craziest part, though, is just how much people with this “software is magic” mindset can actually get just by brute force cobbling things together.
To me the issue is that they assume everyone around them is like this. I'm totally fine with a lack of experience or knowledge, but a co-worker constantly underestimating their peers is not alright.
The senior developer above me is like this for me, and it’s rough because it shakes my self esteem AND often leads to me having to support and extend a fragile and changing 3rd party library to solve a rather specific problem that would be better solved with custom code.
Where it really stings is when I do something of my own initiative (like, for example, create a transparent API over memoizing and caching some expensive calculation, or refactor some of our common client customizations into their own set of classes so it’s easier to extend) and he ignores it or scoffs at it.
Only to then find a third party library that implements something similarly. THEN it’s presented to me as a “brilliant idea” that we can take advantage of.
At that point, when I say “we” already do this, he usually rephrases what his brilliant 3rd party library does, as though he can’t fathom that I would be capable of doing something like that myself, and clearly I’m just not understanding what he’s telling me.
I think it’s a defensive mechanism for his ego against one of his peers or employees being more capable than he is.
But it’s also not like he can’t learn this stuff himself, he just doesn’t ever put in the time or effort.
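For what it's worth, the kind of transparent memoization layer described above doesn't need a third party library at all. A minimal Python sketch (the decorator and the example calculation are illustrative names of my own, not anything from the parent's codebase):

```python
import functools

def memoized(fn):
    """Cache results keyed by arguments, transparently to callers."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper

@memoized
def expensive_calculation(n):
    # Stand-in for a genuinely costly computation
    return sum(i * i for i in range(n))

print(expensive_calculation(10_000))  # computed once
print(expensive_calculation(10_000))  # served from the cache
```

Python's standard library even ships this pattern ready-made as `functools.lru_cache`, which makes the "brilliant 3rd party library" framing sting all the more.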
With many exceptions (the left-pad debacle comes to mind), it’s generally much better to use a third party library instead of supporting your own implementation.
I know this is the mantra, but my experience has been highly mixed and only generalizable to how low in the stack it sits.
For, say, security implementations for authorization and access controls, or even low level HTTP request routing? Absolutely. The goal there is to adhere to something standard and battle tested by experts, and the third party libraries tend to be fewer in number, and of higher quality, with longer term support and clearly defined upgrade paths.
But that’s the lower level stuff, where your special custom needs are superseded by the primary goal of just “doing the one right thing”, or “adhere to the commonly agreed upon standard”.
For all the other things that make an app unique - things like CSS frameworks, UI components (beyond basic, accessibility-minded building blocks), chart drawing, report generation and caching - my experience has taught me otherwise, the hard way.
Being stuck using a 3rd party library that doesn’t do what the client or business needs it to do, having to juggle our own internal patches and bug fixes with updates to the library itself, all only to have the library abandoned or deprecated in favor of the author’s next pet project, really sucks and often comes with a high opportunity cost and a high development cost.
I now consider third party implementations of higher level features (and especially anything front-end) to be something that needs to be evaluated as equally costly as an internal implementation by default, and not favored just because somebody else wrote it.
Maybe I’ve just been unlucky in my experience, though. I also suspect ecosystem makes a difference. The PHP and JS ecosystems are full of poor libraries with snake oil sales pitches. I suspect this is different with, say, Rust.
I think we mostly agree with each other. As I said, there’s many exceptions.
I’ve mostly worked with Python and JVM languages, which probably explains why I’m less passionate about the counter-argument than you are. Ecosystem definitely matters a lot. VanillaJS is the only good JS framework IMO.
I’ve never met a developer with that attitude, but I’ve met too many managers with it. You’ll be even less of a fan of working for those developers, I guarantee it.
That mediocre dev is a straight shooter with middle management written all over them…
I think it depends on the industry and the location. In my neck of the woods I'm seeing an uptrend towards more technical managers, or sometimes a mix of lead developer and manager, who can actually get things done.
It's a matter of preference but IMO/IME it works better when a manager is actually good at it.
A manager who is supposed to be technical but only wants/knows how to do the management part, and can't build more than the basics, won't really fly. I've seen a couple of those get fired during the probation period.
On the other hand, the "bad developer" that thinks packages are magical will eventually settle as an expert beginner in a low-expectation environment. Which is fine.
Adding to this, the decompiler built into many IDEs really upped my game in understanding underlying libs: how they work, what methods to call, etc. Very helpful! As much as people trash Java, this is a really nice feature. I'm sure other languages have decompilers as well, but I've never seen anything close for C#, for example.
In the case of Java at least, what helps is that the IDE can decompile the code or, which is often even more helpful, download the source code and allow you to step through it while debugging, at least if a source JAR was published (which is pretty often the case).
In the case of non-compiled languages, of course you don't even need this step since all your libs exist in source form already, so it was pretty simple for me to step through Ruby library code with a simple debugger and no fancy IDE.
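The same trick works in any interpreted language: you can ask the runtime where a library's source lives and open it directly. A quick Python sketch (the module choice is arbitrary; any pure-Python library works):

```python
import inspect
import json  # stands in for whatever dependency you're curious about

# Interpreted-language dependencies ship as plain source files on disk,
# so you can locate them, read them, and step through them in a debugger:
source_path = inspect.getsourcefile(json)
print(source_path)  # e.g. .../lib/python3.x/json/__init__.py
```

From there, a plain `breakpoint()` in your own code lets you step straight into that file with pdb, no fancy IDE required.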
I have a habit of sometimes debugging even horribly abstract framework (e.g. Spring) code when I don't understand what it's doing. That's maybe not the most efficient method, but it does usually make me understand why thing X is not working the way I expected it to work.
Funny thing was I never even thought to do this until I was working on a very strange bug, and a senior engineer at my company suggested I look at the source code for one of our dependencies. Sometimes really obvious and basic advice can be a big step for people.
Yeah, I think it's helpful to recognize that this isn't always obvious to people, even folks who seem like they'd instinctively do it. I had a similar experience a year or two into being a professional programmer, despite being someone whose first experience with dependencies, years beforehand, was downloading Perl files and directly editing them.
Had a similar experience during my Bachelor's thesis. I was adding new functionality to an existing code inspection framework and was also supposed to add a graphical interface with GTK (this was 2014). At some point I identified a performance bottleneck within a GTK component. My advisor suggested fixing it, and I just couldn't understand how I, a lowly student, was supposed to tackle anything in this big behemoth.
In the end I didn't do it, but it made me think, and I jumped into various big open source projects in the following years. You get used to navigating them surprisingly fast.
> That you can open them up in VSCode, look around and figure out any bugs or issues you're having and even open pull requests to improve them.
I think the real skill, then, is learning to navigate a big codebase in a few days' time, rather than taking a few weeks and then feeling dejected by the time spent while still being unsure.
I often feel ambitious about such endeavors, but navigating big codebases takes time. Any tips?
It helps if you can get your IDE's indexer configured, so that you can find references to functions and variables reliably.
More important is to use an IDE with a good, fast global search function and get comfortable with it. At least for me, 99% of navigating a large codebase is global search.
For C++, I use clangd (works fine for GCC projects). The only config it needs is the path to compile_commands.json, which can be generated automatically by CMake and some other build systems. For TypeScript no config is needed. For Java, there's the Red Hat Java plugin for VS Code, which provides good indexing.
My favourite language for this is Go. The standard library is totally exposed and easy to jump into, right there for you to learn from and to make sense not only of the library but how to actually write Go in the first place.
that's actually, um...common? i just had to repair an electric jackhammer last week. i worked in a machine shop for a large well drilling company not long ago, and not only did we create/repair tools for the company, but obviously had to keep our mills and lathes and cranes, etc. in good working condition.
It's not common in the same sense. First of all, tools are very different from software products. And there is never the same level of analogy that one has to do in software.
Imagine buying a hammer, that hammer not working, the hammer's design being so complicated that it's impossible to understand or mend, and then having to design and build your own hammer, and then putting up with that situation over and over again and accepting that as the status quo. That would be the correct analogy.
I'll agree wholeheartedly that the analogy needs some work. Tools are different - we have our literal physical tools that we don't generally dive into (keyboards, mice), we have tools that are maybe more battle tested and rarely examined (cat, grep, find).
We do have tools like the hammer - there is one design, everyone more or less agrees on it. There is still high quality and low quality, but it has one job. We have tools like a bulldozer - complex, numerous parts, requires constant maintenance, closed source.
As the parent said - it is not uncommon to have to maintain old equipment, as well as design new tools as new requirements pop up.
Sure, our rust is a little bit different - time wears on software in a different way. Use wears on software differently. (Changing product requirements leading to a new tool is probably common.)
The maintenance may be trickier - but I'm sure changing components on a tool when a certain component is no longer available isn't easy either. That's where the shim layer comes from!
Have you met farmers? They will readily tear apart their equipment to fix an issue or modify the tool to make them more ergonomic. This trend of massive multi-million dollar John Deere combine harvesters with DRM widgets that you need to take to a specialized tech to get fixed, is a relatively modern one, and one detested by virtually every farmer.
This was and is quite common in any blue-collar field where workers don't always have the money or time for a brand new jawn every time something goes wrong.
I love estate/yard/garage/barn sales and going through the tools and reading the tales they tell from their wear patterns, field repairs, revisions, and hacks from their owner.
I worked on a farm. I think the things you're bringing up are still subtly different. Modifying something you can see and touch for some unintended purpose versus modifying some piece of software because it doesn't work are miles apart from each other.
For example, I have a graphics project where screen tearing suddenly started appearing although my code didn't change, and the tearing wasn't there before. Is the issue in Skia, OpenGL, the graphics driver, the Intel or NVidia GPU, or the OS? Or is it some latent issue in my code that started showing up because of a change in these dependencies? Or is it some other complex interaction between multiple dependencies? I couldn't possibly know, and I actually don't think there is anybody that actually knows. And there is zero chance I could ever figure it out. I mean, it at least appears it was a driver issue as after some updates and a reboot it just went away, but there is zero insight into why or what actually made it go away.
If I modify a plow because some part as originally designed was flaky and constantly broke, you usually know to a pretty good degree why your fix works.
In software, the abstractions are such that it is practically impossible at times to understand.
That to me sounds like a difference of degree (a heap problem), rather than a categorically different condition.
Sure, welding back on a hardpoint that broke off a plow is concrete and obvious (maybe that's analogous to fixing a broken dependency that got renamed), but there are other fixes that definitely fall under "I don't know why it works but it works". I've had electrical gremlins and fixed them by grounding something that looked like it ought to be grounded, and was grounded when I tested for continuity, but nonetheless was acting floaty. It was probably a loose connection elsewhere in the system, but I didn't know that for sure, nor do I even know that it was the problem in the first place.
Software definitely reaches insane depths of complexity, but again, if you can dig down and understand the abstraction well enough to attempt a fix, and the problem goes away, isn't that good enough? The real difference is mechanical systems typically only have a few layers of abstraction, while software has dozens to thousands of layers of abstraction.
I feel like this is quite common but it depends on the optics of situation.
A mine site usually fixes its own tools used to do the job; they fix the things they've purchased to use that are broken or become broken. Is that not the same?
Your job as a developer is to write code and piece code together to make your life easier. That includes taking the work that others have done and making it work to suit your code base.
On a mine site, the job is to mine and process minerals. That includes engineering a mine using products that eventually fail over time and need fixing. As part of that process an automotive electrician may need to fix the electronics on a vehicle that have become faulty or damaged.
Mechanics and engineers need to inspect, rebuild, and fix the tools they are given to do their job. The difference is that yours may not work up front, whereas these tools need maintenance over time.
When I was doing enterprise work with closed-source libraries, any defect required the vendor to fix it. We found some issues with AWS and they fixed those too.
The difference between most professions and programming is that we have open source. The equivalent of that in the real world is getting blueprints to build our tools and then building them ourselves, with no guarantees.
"I was an ordinary person who studied hard. There are no miracle people. It happens they get interested in this thing and they learn all this stuff, but they're just people." - Richard Feynman
The trick is to dig deeper into those big library's dependencies as well. It's turtles all the way down.
The other thing I find is that big libraries are either mostly dependency bloat (as implied above) or dealing with a hard domain problem. If it's the latter, what you're really struggling with is not the library, but the domain it's trying to represent.
"If it's the latter, what you're really struggling with is not the library, but the domain it's trying to represent."
But this doesn't change anything about the programmer's problem. If I stumble on a bug in a physics library I am using for a game, I cannot just jump right in and fix things. I mean, I can start doing it, but at the cost of not getting anything else done for quite some time.
There are lots of hard domains in programming. Cryptography is hard. Networking is hard. Fast rendering is hard. Efficient DBs are hard. Operating systems are hard. Drivers are hard.
You can maybe fix a trivial error in such libraries, but anything else is usually a (big) project of its own.
Absolutely correct, but I think it’s still valuable to be able to recognize if you’re struggling against the code or the domain.
Additionally, once you recognize that you can also recognize if you’re dealing with incidental complexity (e.g. poorly thought out/designed code) and inherent complexity (e.g. physics calculations). The former can be fixed, the latter cannot. Knowing the difference saves much pain.