I like the sentiment of this article. It's a great analogy.
Might be a little off topic, but it reminds me how I am happy that the Go programming language and its philosophies gained popularity even though I don't use the language regularly. Watching Go talks made me appreciate simplicity and clarity.
It made me accept that I don't always need to use every design pattern in the book. It made me think about the readers of my code, who might not always be experienced enough, or might not always have time to understand the brilliant architecture I came up with. I can have some repeated code sprinkled around in the codebase. I don't always need to have n+1 layers in my architecture where all the layers just call the next layer anyway. It might be better to use functions over a complicated hierarchy of classes. It made me appreciate simple tools and widely accepted conventions that result in codebases that feel familiar the second you dive in.
Of course, go is not the only community where these ideas are prevalent, and it's good to know your design patterns and architecture, etc... Finding the balance is not always easy, but it's good to have a popular, successful "counter force" community.
>I don't always need to have n+1 layers in my architecture where all the layers just call the next layer anyway.
This is by far the most common thing I’ve seen consistently in especially difficult-to-maintain codebases. Anecdotal for sure, but number 2 on that list is way behind. Extra abstractions for a future that has yet to happen, and abstractions because the IDE makes it easy to click through the layers, are by far the number 1 reason I’ve seen codebases be very difficult to maintain.
If you just keep this in your head: “Can I see exactly enough on this page to know what it does? Not more, not less?” It’s an impossible ideal, but that concept is a fantastic mental guideline for maintainable codebases.
The best organizational level technique I've found so far is to add the rule of three to code review checklists. An abstraction requires at least three users. Not three callsites, but three distinct clients with different requirements of the abstraction.
Obviously it's not a hard rule, and we allow someone to give a reason why they think it's still a good idea, but forcing a conversation that starts with "why is this even necessary" has, I feel, been a great addition.
I’m curious why three call sites isn’t sufficient. Any time I find I have three instances of the same non-trivial logic, I immediately think of whether there’s a sensible function boundary around it, and whether I can name it. If I can, it’s a good candidate.
Obviously for trivial logic that’s less appealing. And obviously all the usual abstraction caveats (too many options or parameters are a bad sign, etc) apply.
The risk with so much duplication is that if the logic is expected to remain the same, even tests won’t catch where they diverge. To me that’s just as risky if not more with internal call sites than with clients, as at least client drift will be apparent to other users.
Probably. When talking about object oriented programs, "abstraction" is oftentimes used as a placeholder for "abstract class" as opposed to a "concrete class". You can see this at play when talking about the SOLID principles and when you get to the "D" part people want to turn every class into an interface because it says you must "depend upon abstractions, not concretions".
I think this is where I’ve been most at odds with common OOP approaches (apart from the common practice of widespread mutability). An interface should be an abstraction defining what a given operation (function, module) needs from input to operate on it and produce output, and nothing more. Mirroring concrete types with an interface isn’t abstraction, it’s just putting an IPrefix on concrete types to check a design pattern box.
That’s not what I took from it, but even if that’s what was meant, I think I’d have the same reaction. In terms of abstraction implementations, a class is just a different expression of the same idea of encapsulation.
I still don’t think I’d react differently. A function is an abstraction layer. Maybe this is just me being unintentionally obtuse because I’ve worked so long in environments where functions or collections/modules of functions are the primary organizing principle, but when I encounter “premature abstraction” arguments I don’t generally understand them to mean “sure take those three repetitions of the same logic and write a function, but think really hard about writing a module/namespace/package/class/etc”. Am I misunderstanding this?
I agree with the sentiment. A pure function with one or two parameters is going to attract a lot less scrutiny than a whole module with multiple classes.
Instead of creating a Spring service to call a repository for data retrieval, I called the repository directly, because there was just a single method that needed to be implemented for read-only access to some data.
And yet, a colleague said that there should "always" be a service, for consistency with the existing codebase (~1.5M SLoC project). Seeing as the project is about 5 years old, I didn't entirely agree. I even linked the rule of three, but the coworker remained adamant that consistency trumps everything.
I'm not sure, maybe they have a good point? However, jumping through extra hoops just because the software is a large enterprise mess doesn't seem that comfortable either, just because someone decided to do things a particular way 5 years ago. It feels like it'd be easier to just switch projects than try to "solve" "issues" like that (both in quotes, given that there is no absolute truth).
I think it's a judgement call to be made. Being consistent with a deliberate architectural decision that is actually useful is important. Otherwise you could get a broken-window effect where more and more calls leak out of the service layer, with the justification that if it was OK in one place, why not others? Putting it in the service means it is ready for any new calls that might be added, and future collaborators know there's generally only one place to look for these calls. Maybe in this situation it would be overkill, but the bigger and longer-lived the project, the more consistency pays dividends.
Well, consistency in itself is a good rule to follow. The problem is, if a bad decision was made at the beginning of the project, maintaining consistency despite that is madness.
Yep, consistency truly matters, since it's likely this won't be the only need for data retrieval, and everyone doing their own special thing means the code becomes an unreadable, inconsistent mess that cannot fit in anyone's head, and development velocity slows to a crawl.
Where I work, APIs are coded in three layers of abstraction: the topmost API layer, then business logic, then a DAO layer. Even though there is only ever one implementation of each layer, this structuring alone has made maintaining the code so much easier, as everyone, even across teams, follows this structure when defining any API. I can't even imagine just coding functions in large codebases without a pre-defined structure; it can become brittle over time.
Rule of three sounds catchy, but logically it's just an arbitrary number.
Similar to SOLID and KISS: why pick some arbitrary (and also obvious) qualitative features, put them into an acronym, and declare them core design principles?
Did the core design principles just happen to spell out SOLID and KISS? Did it just happen to be three?
Either way, in my opinion, designing an abstraction for 3 clients is actually quite complex.
The reason the OP advocates pure functions is that pure functions are abstractions designed for N clients. When things are done for N clients using pure functions, the code becomes much simpler and more modular than when you do it for three specific clients.
This is a good question, and I haven’t yet seen anyone reply with (I think) the real answer: it’s not the rule of 3 so much as “not 2”.
When you start adding a new feature, and notice it’s very similar to some existing code, the temptation is to reuse and generalize that existing code then and there -- to abstract from two use cases.
The rule of 3 just says, no, hold off from generalizing immediately from just two examples. Wait until you hit one more, then generalize.
“Once is happenstance, twice is coincidence; three times is enemy action” (Ian Fleming IIRC)
I think setting hard limits on design is a good thing. Creativity needs limits. If your limits can imply something about your desired design goals, then that’s a good synergy. It also forces engineers to think more about design rather than fall back on their go-to pattern that may or may not fit the problem.
Especially junior and mid-level engineers might not have good heuristics on whether their design is any good or whether it just follows whatever cargo cult they were brought up in.
Like, one engineer on my team implemented this crazy, overkill logger. I asked a few questions about why it was done like this, and the answer was that they had implemented it that way in another language at another company. After that, I told them not to have more abstraction layers than concrete implementations when adding a new feature.
Sure, but I wouldn't implement something like that as a policy, rather as a guideline. So when someone really goes overboard in one direction or the other, you can point them to the guideline, but there is still some freedom to decide on the spot.
If the need / opportunity to abstract something is highly subjective then it is best left to the team lead / senior architect. For all other obvious cases having a policy as outlined above strikes a healthy balance between autonomy and uniformity.
While I usually like the zero-one-infinity rule as a go-to when there aren't any other constraints, when trying to build an abstraction it can be fairly tricky to suss out the parts that are actually shared vs what is actually different. Two unique and independent users could share a lot of process &c coincidentally; with three, that's a little less likely.
Yeah, I don't like these overly specific rules of thumb either. They're superstition, invoked during code reviews either so the reviewer doesn't have to explain or justify arbitrary nagging, or to defend a bad layout on the other side.
> Can I see exactly enough on this page to know what it does? Not more, not less
Is there some book/website/SO post that tries to drive this point home? Basically some web resource I can link to other programmers to explain the value of coding this way.
This article is from a personal blog on a website with a URL $someguysname.ninja.
Maybe you should be the one to write the article you seek! Believe in yourself. If you find yourself with steadfast values that you find tedious to repeatedly communicate, but that you think others ought to know about, why not write them down? Who knows: if it's good, resonates with others, and is sound advice, one day it may end up on HN too.
Not everything worth doing has already been done before!
One of the best products I've had to maintain recently was a CGI app with very few abstractions. Many of the pages in the app didn't even have functions; they just constructed SQL, read the results, and spat out HTML. If someone had a problem, all the code was right there in a single file, and the error could be found, patched, and deployed in minutes.
Over the years there were a couple of attempts at replacing this legacy system with a "well-architected" .NET one, but all the architecture made things harder to maintain, and it only ever got to a fraction of the functionality. When there was a bug in those, we had to not only find it but also go through every other bit of calling code to ensure there were no unwanted side effects, because everything was tied together. Often the bug was in some complicated dependency, because spitting out HTML or connecting to a database wasn't enterprisey enough. Deployment was complicated enough that it had to be done overnight, because the .NET world has a fetish for physically separating tiers even though it makes many things less scalable.
90% of the corporate/enterprise code I've seen would be much better off being more like that CGI app.
Counterpoint - code like that is OK if the project is small and tidy, but over a certain size, changes become horrible refactoring efforts and adding multiple developers to the mix compounds the problem. The 'enterprisey' rework that you describe sounds badly architected, rather than an example of why architecture is bad. Good architecture is hard to do but I don't agree that means we're better off not bothering.
My first programming job was with a firm that never had money for paying developers, let alone tools. It was also a few years before Visual Studio Code was a serious thing. So I used "programmer's editors" -- those cute things like Notepad++ which had syntax highlighting and on some days autocomplete but no real code understanding. There was no middleware, no dependency-injection, and things like the database instance were globals. More or less, the things you needed to know were in a single file or could be inferred from a common-libraries file.
My second job, they splashed the cash for full-scale professional IDEs, and they couldn't get enough abstraction. I suspect the convenience of "oh, the tools will let us control-click our way to that class buried on the opposite side of the filesystem" made it feasible.
I wonder if there's some sort of "defeatured" mode for IDEs which could remind people of the cognitive cost of these choices.
> Extra abstractions for a future that has yet to happen, and abstractions because the IDE makes it easy to click through the layers, are by far the number 1 reason I’ve seen codebases be very difficult to maintain.
This is always tempting. A good argument against it is to realise that future developers (us included!) will know their requirements far better than we can guess them; if code needs writing, they should do it (as an extra bonus, we don't waste effort on things which aren't needed). The best way to help them is to avoid introducing unnecessary restrictions.
> The best way to help them is to avoid introducing unnecessary restrictions.
But that's the other side of the exact same coin. How do you know if a restriction today is good or bad for the future? Restrictions prevent misuse and unexpected behavior, in the good case.
Incidentally one of the biggest benefits I see of using a text editor like vim / emacs is that it really encourages good code management.
It's not to save the ~10 minutes per year in faster key strokes to manipulate your code. It's about the way it shapes your thinking about how you code.
After using Intellij for about 5 years I switched to a less batteries-included code editor (currently doom emacs). I figure if I need an IDE to navigate our code as a senior developer on the project then less experienced ones don't stand much of a chance.
Golang and simplicity in the same sentence does not quite reflect my daily experience.
Want a Set? Golang does not have one; create a map[T]bool instead.
Want an Enum? Golang does not have one; create a bunch of constants yourself that are not tied together by a type, or create your own type that won't quite match what an enum is.
If simplicity means feeling like you are programming in the 80's, that is what Golang meant for me with simplicity.
Not having basic stuff such as Set, and having to work around it with a map of booleans, is not simplicity: you have to build the workaround yourself, turning the code into a more complex blob to represent the same kind of data structure.
I could go on and on with the list of things that lack instead of things that are simple. </rant>
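For readers who haven't written Go: the conventional workarounds described above look roughly like this (StringSet and Color are invented names for illustration, not standard library types):

```go
package main

import "fmt"

// The conventional Go "set": a map with empty-struct values
// (map[T]bool also works, at the cost of one byte per entry).
type StringSet map[string]struct{}

func (s StringSet) Add(v string)      { s[v] = struct{}{} }
func (s StringSet) Has(v string) bool { _, ok := s[v]; return ok }

// The conventional Go "enum": a named type plus iota constants.
// Nothing stops a caller from writing Color(42), which is the complaint.
type Color int

const (
	Red Color = iota
	Green
	Blue
)

func main() {
	s := StringSet{}
	s.Add("go")
	fmt.Println(s.Has("go"), s.Has("rust")) // true false
	fmt.Println(Red, Green, Blue)           // 0 1 2
}
```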
I recently had to write some Go code and coming from Java/Scala world, actually the "err != nil" thing didn't bother me as much as I thought it would. In fact I liked the explicitness of error handling. However, lack of enums really puzzled me; how is having to go through "iota" hoops simpler than "enum Name { ... choices ... }"? I did like the batteries included approach though - I could build the entire component purely using standard library - not having to deal with importing requests and creating virtualenv etc was refreshing.
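A minimal sketch of that explicit error-handling style (ReadConfig is a made-up example function, not from any real codebase); every failure is either handled or wrapped and returned at the call site:

```go
package main

import (
	"fmt"
	"os"
)

// ReadConfig wraps any failure with context using %w,
// so callers can still inspect the underlying error.
func ReadConfig(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("reading config %q: %w", path, err)
	}
	return string(data), nil
}

func main() {
	_, err := ReadConfig("/no/such/file")
	fmt.Println(err != nil) // true
}
```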
As I get older my code gets a little more verbose and a little less idiomatic to the language I am writing. I’ve been writing code, starting with C, since 95. Mostly Python these days, but I try to make it clear and easy read. Mostly for myself. Future me is always happy when I take the time to comment my overarching goals for a piece of code and make it clean and well composed with enough, but not too many, functions.
> well composed with enough, but not too many, functions
In my experience, code with too many functions is more difficult to grok than spaghetti code. It's like trying to read a book with each sentence referencing a different page. So, I try to code like I would write: in digestible chunks.
> As I get older my code gets a little more verbose
I've seen too many of my previous projects die right when I moved on. Now I tend to write code as if it were written by a beginner: verbose and boring, with no magic.
On the other hand, no abstractions is like reading a book where each and every thing is spelled out in the utmost detail. Instead of telling you “I’m fuelling the car”, I’ll tell you: “I’m walking to the entrance hall. I’m picking up the car keys. I’m putting on my shoes. I’m putting on my jacket. I’m unlocking the front door ...” You see where this is going. And here we already assumed that things like “putting on shoes” are provided by a standard library.
There seem to be two types of programmers: one that can read a line of code like theCar.fuel() and trust that, in the current context, you understand enough of what the call does to continue reading at the current level of code. This type of programmer doesn’t mind abstractions, even if a function is called in only one place.
The other type of programmer must immediately dig into the car.fuel code and make sure she understands that functionality before she can continue. And of course, then each and every call becomes a misdirection from understanding the code, and of course for them it is better if everything is spelled out on the same level.
I’ve seen quite a bit of code written by the second type of programmer, and if you don’t mind scrolling and don’t mind reading the comment chapter headers (/* here we fuel the car */) instead of all the code itself, it can be reasonably readable. But there’s never comprehensive test coverage for this kind of code, and there’s usually code for fuelling the car in four different places, because programmers 2-4 didn’t have time to read all the old code to see if there was something they could reuse, and just assumed that no one had had to fuel the car before, since there wasn’t any car.fuel() method.
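To make the first style concrete, here is a hedged Go sketch of the thread's hypothetical car example (Car, Fuel, and the fuel-level numbers are all invented for illustration):

```go
package main

import "fmt"

// Car is the thread's hypothetical example made concrete.
type Car struct {
	FuelLevel int
}

// Fuel hides the spelled-out steps (walk to the hall, grab the keys,
// drive to the station, ...) behind one trustworthy name, even if it
// is only called from a single place.
func (c *Car) Fuel() {
	c.FuelLevel = 100
}

func main() {
	theCar := &Car{FuelLevel: 10}
	theCar.Fuel() // readable at this level without digging into the body
	fmt.Println(theCar.FuelLevel) // 100
}
```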
I have had the good fortune to have never worked in a codebase with the characteristics you describe. But I have seen some issues with theCar.fuel(), and that’s generally around mutability and crazy side-effects. I think most of these are pragmatically overcome by adoption of functional paradigms and function composition over inheritance or instance methods.
Still lacking good tools in our own toolbox. If ides could expand function calls inline (not a header in a glassbox, but right in code), both worlds could benefit from that. Expand all calls depth 2 and there is a detailed picture. Collapse branches at edit-time based on passed flags/literals and there is your specific path.
Hmm. There’s a vim sequence to accomplish this that you could macro. But even so, don’t most IDEs give you somewhat more than a glassbox header? I’m almost certain I’ve seen people scrolling and even editing code in the “glassbox” preview pane in VSCode.
AFAIR, it doesn’t inline and overlaps with the code behind it. If that is not true, it may be closer to it, but my experiments somehow failed to show its benefits over “just open the right pane”. Maybe I should check its config thoroughly. As a vim user, I’m interested in the method you described: is it a handmade :read/%v%yp-like thing or an existing plugin?
Then you have an electric car, and you use the fuel method and add a special case for isElectric handling inside. And some other dev uses lamp.fuel since it already handled isElectric internally. But later, we have to differentiate between different types of charging and battery vs constant AC and DC power. Then someone helpfully reorganizes the code and breaks car.fuel because the car does have a battery too. And then ....
No, you don’t. The alternative implementation is that you either go through all the code where car is used and add conditionals for all the cases where the kind of fuel matters. Or, as is very common for this kind of programming, you just copy the whole car.roadTrip() method where fuel is called into electricCar.roadTrip and change a few lines. Then of course all requirement changes or bug fixes must be made in several places thereafter.
My feeling about people who can’t handle abstractions is that they just haven’t had to create or maintain anything complex. Very few real-world systems can be kept in one’s mind in full.
I agree. This whole discussion preferring long functions seems like advocacy for bad code to me.
It is just ... I have seen both types of code, and when written by someone else, code that at least attempts to segment things into chunks that clearly don't influence each other (functions with local variables) is massively easier to read.
I think it’s honestly just folks talking past each other because these situations are isolated judgement calls, and some folks feel that
// #1, in essence
result = a => map => reduce => transform
is easier to read and understand, while others feel that
// #2, in essence
aThing = a => map
aggregation = aThing => reduce
result = aggregation => transform
is easier to read and understand. Folks in camp #1 think camp #2 is creating too much abstraction by naming all the data each step of the way, and camp #2 thinks camp #1 is creating too much abstraction by naming all the functions each step of the way.
Really it’s just these two mental modalities butting up against each other, because you will separate your layers in different ways for increased clarity depending on which camp you fall into. What makes things clearer for camp #1 makes things less clear for camp #2, and vice versa.
That’s my suspicion anyway: the premise of the discussion is just a little off.
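The two camps above can be sketched in Go. This is just an illustration: square, sum, and describe are invented stand-ins for the map/reduce/transform stages.

```go
package main

import "fmt"

// Hypothetical pipeline stages, for illustration only.
func square(xs []int) []int {
	out := make([]int, len(xs))
	for i, x := range xs {
		out[i] = x * x
	}
	return out
}

func sum(xs []int) int {
	t := 0
	for _, x := range xs {
		t += x
	}
	return t
}

func describe(n int) string { return fmt.Sprintf("total=%d", n) }

func main() {
	a := []int{1, 2, 3}

	// camp #1: one inline pipeline, no intermediate names
	result1 := describe(sum(square(a)))

	// camp #2: every intermediate value gets a name
	squared := square(a)
	total := sum(squared)
	result2 := describe(total)

	fmt.Println(result1, result2) // total=14 total=14
}
```

Both compute the same thing; the disagreement is purely about which form carries less cognitive load.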
The one caveat is that I want to easily be able to find out what fuel() is doing. Preferably nothing like car.getService('engine').run('fueling'). Code navigation is very important, preferably doable via ctrl+f since that makes review easier. Most people just use the browser tools for reviewing code and don't actually pull the branch into their IDE.
> In my experience, code with too many functions is more difficult to grok than spaghetti code. It's like trying to read a book with each sentence referencing a different page. So, I try to code like I would write: in digestible chunks.
This is so true. The worst code that I've dealt with is the code that requires jumping to a ton of different files to figure out what is going on. It's usually easier to decompose a pile of spaghetti code than to figure out how to unwrap code that has been overly abstracted.
My experience has been that real-world spaghetti is almost always made of overly abstract, poorly-thought-out abstractions. You know: you get a stack trace and end up on a five-hour journey in the debugger trying to find any actual concrete functionality.
Compared to someone writing inline functions that do too much, the wasted brain hours don’t even come close
It's also often very deeply nested and follows different paths based on variables that were set higher up in the code, also depending on deeply nested criteria being met. Bugs, weird states, bad error handling and resource leaks hide easily in such code.
In my experience, refactoring out anything nested more than 3 levels deep immediately makes the code more readable and easier to follow. I'm talking about C++ code that I recently worked on.
Decomposing into functions, and passing the required variables (as const or not) to functions that then do some useful work, makes it clear what's mutated by the sub-functions. Make the error-handling policy clear and consistent.
Enforce early return and RAII vigorously to ensure that no resources (malloc'd memory, file handles, DB connections, mutexes, ...) are leaked on error or when an exception is thrown.
And suddenly you have a codebase that's performant, reliable, and comprehensible, where people feel confident making changes.
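The parent's advice is C++/RAII-flavored, but the early-return-plus-guaranteed-release discipline translates directly to Go (the language this thread opened with), where defer plays the role of RAII. Account and Withdraw are invented for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

type Account struct {
	mu      sync.Mutex
	balance int
}

// Withdraw returns early on each failure case; defer guarantees the
// mutex is released on every path, Go's rough analogue of RAII.
func (a *Account) Withdraw(amount int) error {
	a.mu.Lock()
	defer a.mu.Unlock()

	if amount <= 0 {
		return fmt.Errorf("invalid amount %d", amount)
	}
	if amount > a.balance {
		return fmt.Errorf("insufficient funds")
	}
	a.balance -= amount
	return nil
}

func main() {
	acct := &Account{balance: 100}
	fmt.Println(acct.Withdraw(30), acct.balance) // <nil> 70
	fmt.Println(acct.Withdraw(500) != nil)       // true
}
```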
I disagree. I think the central thesis of Clean Code still holds up. You should never mix layers of abstraction in a single function.
That more than anything is what kills readability, because context switching imposes a huge cognitive load. Isolating layers of abstraction almost always means small, isolated, single-purpose functions.
> I think the central thesis of Clean Code still holds up. You should never mix layers of abstraction in a single function.
I agree up to a point, but I find this kind of separation a little… idealistic? I prefer the principle that any abstraction should hide significantly more complexity than it introduces.
At the level of system design, there probably are some clearly defined layers of abstraction. I’d agree that mixing those is rarely a good idea.
But at the level of individual functions, I have too often seen numerous small functions broken out for dogmatic reasons, even though they hid relatively little complexity. That coding style tends to result in low cohesion, and I think the cost of low cohesion in large programs is often underestimated and can easily outweigh the benefit of making any individual function marginally simpler. If you’re not careful, you end up trading a little reduction in complexity locally for a big increase in complexity globally.
// v1, mixing layers of abstraction
x = a if exists, else first()
y = b if exists, else second()
result = third(x,y)
// v2, abstraction
result = getResult(a,b)
In v1, we have the semantics of x and y, so we understand that a “result” is obtained through the acquisition of x and y. Whether we need to understand this is a judgement call. But v2 opens a different “failure to understand” modality: “getResult” is so blackboxed that the only thing it really accomplished is indirection, without improving readability.
I love Clean Code, but I think it sometimes prematurely favors naming a new function and the resultant indirection.
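The v1/v2 pseudocode above might look like this in Go (first, second, third, and getResult are hypothetical helpers, as in the comment):

```go
package main

import "fmt"

// Hypothetical fallbacks and combiner, for illustration only.
func first() string            { return "default-x" }
func second() string           { return "default-y" }
func third(x, y string) string { return x + "+" + y }

// v1: the call site shows how the result is assembled.
func resultV1(a, b string) string {
	x := a
	if x == "" {
		x = first()
	}
	y := b
	if y == "" {
		y = second()
	}
	return third(x, y)
}

// v2: one opaque call; the reader must open the body to learn anything.
func getResult(a, b string) string {
	return resultV1(a, b)
}

func main() {
	fmt.Println(resultV1("", "b"))  // default-x+b
	fmt.Println(getResult("a", "")) // a+default-y
}
```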
The primary motivating reason to have abstractions in the first place is to prevent context switching - i.e. you shouldn't have to think about networking code while you're writing business logic.
I’d say that’s a sign that either it’s the wrong abstraction, there’s implicit coupling (a distinct variant of the wrong abstraction), or both sides of the abstraction are in so much flux that the context switching is inevitable until one or both layers settle down.
> It's like trying to read a book with each sentence reference a different page.
Yes!!! I've been trying to teach juniors that if the function itself has 4 levels of abstraction, even if the names are readableFunctionThatDoesXWithYSideEffect, it is harder to understand, Ctrl+clicking downwards into each little mini-rabbit-hole. Just keep the function as an 80-liner, not a 20-liner with 4 levels of indirection via functions that are only used once (inside the parent function), ugh.
The key concept that always helps me is to minimize side effects per function. One thing goes in, one thing comes out (in an abstract sense). Multiple side effects start getting dodgy, as they make the function harder to predict and reason about. I do err toward longer, easier-to-read functions. And don't compose into functions until it's clear you will actually reuse the code, or you actually need to reuse it :) DRY is good, but premature composition is just as annoying as premature optimization.
No, that is not for a specific case of high performance... It's for the non-specific case of keeping the code clear, understandable, and bug-free. The style was chosen for these reasons, not because it is more performant. It just happens to also be more performant than the layers of indirection that also harm understandability.
For a procedural code base, avoid subprocedures that are only called once.
For a pure functional codebase, e.g. Haskell, locally-scoped single-use functions can be beneficial.
You still have to test all the 80 lines if they're broken down into multiple functions, so it's something that you have to evaluate on a case-by-case basis.
It might even make it harder to test: if you break a function up wrong, you might end up with a group of functions that only work together anyway.
For example: when you break a big function into 3 smaller ones. If the first acquires a resource (transaction, file) and the third releases it, then it might be simpler to test the whole group rather than each one separately.
Breaking an 80-line function into eight 10-line functions does not necessarily make it easier to test. Most of the time it just adds unit-testing busy work, for no clear benefit. This becomes clearer if you imagine you wanted to test every possible input: splitting the function into eighths introduces roughly 8x the work if each new function has the same number of possible input states. The math is more complicated in the general case, so you have to evaluate it case by case. On the other hand, if you're trying to isolate a known bug, it might be beneficial to split the function and test each part in isolation.
Depends on the language. In general I find the way many unit tests are written to be very brittle. There is a balance here. If the 80 lines are clear and easy to understand they will likely be easy to test also. It’s very situational though. An 80 line function isn’t that bad. Check out the SQLite code base, which is extremely well tested, or the linux kernel. C code tends to push out the line count. Whereas 80 in Python is probably a bit much. Some libraries, especially GUI code tend to take a lot of lines, mostly just handling events and laying things out and there you often see big functions as well.
Perhaps we just imagine different things, but I like when code is a list of human-readable calls to functions. The implementation of these functions isn't so important to understanding the code you're reading.
This works really well as long as you use pure functions, because their impact on behaviour is clearly restricted.
"I've seen too many of my previous projects die right when I moved on. Now I tend to write code as if it were written by a beginner: verbose and boring, with no magic."
There's nothing wrong with charming magic in your code, if it really does something special and is not just used for its own sake. It only turns into dark magic when you forget, or are too lazy, to add proper documentation in the end.
Which ... happened to me, too many times.
But otherwise, very much yes. Clarity and simplicity should always be goal number one. But since simplicity is hard to reach at times, and time is short, it is always about the balance.
If you are comparing reading code with reading books, then surely you have read books with unfamiliar words whose definitions you had to look up, and then you might have had to recursively look up unfamiliar words in the definitions as well. Then, once you internalized the sub-definitions, you returned to what you were reading with a better understanding.
The difference between code and books is that programmers can freely and naturally define functions. I wonder if some people complaining about too many functions never actually learned how to read code in the first place.
Generally used with a negative connotation. C2 also discusses how the layers can become entangled/stuck with one another and difficult to replace, which seems to fit the metaphor.
For describing layered code in a non-negative fashion, just saying "layered" (or "modular") seems most typical.
It's easy to say ugh, but we juniors are more than willing to learn "the right way". This is the hardest part for me. I get anxiety about it and it slows me down.
How do I apply this to taking over someone else's 4 year old Magento project? We're out here doing our best, and sometimes our learning environments are in that context.
I would say, don't stress about it too much. There is no perfect way, and everyone makes mistakes. Knowing when to make abstractions and when not to is mostly a matter of experience. Some modules are worth optimizing and abstracting; others are not.
You will definitely make wrong decisions and find out later that an optimisation was a waste of time, or that a quick-and-dirty approach cost you dearly down the road. We all did that, and still do.
Much worse than making a wrong (design) decision is making no decision at all, because usually you have to decide on something and then just go with it.
Overthinking things seldom helps. What helps me sometimes is setting an especially hard problem aside when I'm stuck and solving something easier first. When I come back to it after a while, things are much clearer.
But I also wasted too much time thinking about the right approach in a never-ending, never-progressing loop to achieve perfection.
Now my question is not, is it perfect or shiny, but: Is it good enough?
What matters is, that shit gets done in a way that works.
> wasted too much time thinking about the right approach in a never-ending, never-progressing loop to achieve perfection
A CEO from my past often muttered that "perfect software comes at infinite cost". It's key, imo, to identify which components of what you are building _must_ be perfect. The rest can have warts.
"to identify which components of what you are building _must_ be perfect"
Well, but by the words of your former CEO (and my opinion) those parts would then have infinite costs, too... if they really need to be perfect.
I mean, it is awesome when you do a big feature change and it all just runs smoothly, because your design was well thought out. But you cannot anticipate every future change, and when you try, chances are you get stuck, waste your time, and risk the flow of the whole project. I tend to think about the current needs first and the immediate future second; everything beyond that, I don't spend much thought on anymore.
Agreed. What I mean by "perfect" is: for a given part/component/decision/etc, take the time (an always-limited resource) to learn as much as possible and contemplate more than just the seemingly obvious path forward. Take security for example. I'd rather 'waste time' now making sure I'm covering any gaps in that realm before shipping.
OTOH, maybe some jacked-up abstraction, incorrect tool choice, ugly UI, etc. is something that can wait a few sprints or longer. At least you can plan when to deal with those. Security breaches tend to plan your day on your behalf. :)
I am a junior developer too. Questions like this are better suited to your manager. Mine gives me constructive feedback at regular intervals, and I also reflect on my own work and look at other people's work.
> Mostly Python these days, but I try to make it clear and easy to read.
Which is why I enjoy languages that let me do this without getting too hung up on performance. It's curious that you bring up Python, because idiomatic Python (especially where math libs are concerned) seems to vastly favor brevity/one-liners over all else. It's nice to hear that a veteran is favoring clarity.
> idiomatic Python (especially where math libs are concerned) seems to vastly favor brevity/one-liners over all else
I can’t speak to math libs, but in my experience with server-side development, Python devs tend to (often even religiously) cite PEP style guides favoring explicitness and verbosity. I think there may have been a shift as Python got a lot of uptake in scientific and ML communities, and I hope that hasn’t seriously impacted the rest of the Python community because, while I don’t especially love the language/environment, I deeply appreciated the consistency of valuing clear and readable code.
> cite PEP style guides favoring explicitness and verbosity.
Explicit is better than implicit, always has been, always will be. Granted, I've been writing backend/server-side Python code for 15 years now, so that might be one of the reasons.
For what it’s worth, having spent the last few years writing server-side TypeScript, I’ve evangelized “explicit is better than implicit” fairly aggressively. A lot of even seasoned TS developers are still mainly accustomed to JS interfaces, and fairly often their first instinct is to cram a lot of semantics into a single variable or config flag. I’m glad I spent a few years working with PEP-8 fanatics. It made me much better at thinking about and designing interfaces.
I'm someone who came to the server side of things from the scientific Python community. IMO, that community is still learning how to incorporate Python's best practices to cater their very specific needs.
For example, if you're writing a plotting library geared towards data scientists, you're almost forced to pick brevity over verbosity, even if that means violating some of Python's core philosophies. Data scientists usually come from non-compsci backgrounds, and almost 90% of the code they write doesn't go to production. So they usually prefer tools that help them get the job done quickly, and they write tools following the same philosophy.
Right. And a lot have come from other languages like R where that’s more common.
If I were building a library for something like that, I’d build the core idiomatically, then expose an idiomatic API with aliases for brevity. I’d make sure the alias is documented in the docstring, and types refer to the idiomatic names. I know TIMTOWTDI isn’t entirely “pythonic”, but it’s a small compromise for probably a good maintainability boost.
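A minimal sketch of that approach, with made-up names and a stub standing in for real rendering:

```python
# Hypothetical plotting helper: the core uses verbose, idiomatic names;
# a short alias is exposed for interactive, brevity-first use.

def scatter_plot(x_values, y_values, point_size=10):
    """Render a scatter plot of y_values against x_values.

    `splot` is an alias of this function; documentation and type hints
    should keep referring to this idiomatic name.
    """
    # Stub: a real library would draw here; we just describe the call.
    return f"scatter of {len(x_values)} points, size {point_size}"

# Brevity alias for interactive/data-science use. It is the same
# function object, so help(splot) shows the idiomatic docstring.
splot = scatter_plot
```

Since the alias is just a second binding to the same function, there is only one implementation to maintain, and the docs can always point back at the canonical name.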
There is a point where fitting a little more code on one screen actually helps. Usually not, though. Our brains can only see a screen at a time. There is some optimal mix of terseness (especially when you know your reader, probably you in a few months, will grok it) versus verbosity. If I find myself untangling a single line in my head down the road, it was too complex. Python is already so expressive! We all find our style, but generally I know I did it right if I look back at code and think "wow, that's easy to understand" vs. "hmm, what was I thinking there?" Heh.
The bold utilitarian approach of Go might face some valid criticisms from seasoned programmers. I myself had to empty my cup (mostly Java) to get on board with Go, and I'm glad that I did.
After a spine surgery my programming time got severely limited and so I decided to code my future projects with utility focused languages. I had used Python in the past, but the performance tuning once the application scales is counterproductive and expensive to say the least.
I wanted a language which has predictable performance, decent standard library and most importantly not waste my time; time I can focus on my health. Go was the answer, even if it meant that I had to let go of some of my decade long programming patterns and practices.
Now my only wish w.r.t. Go's future is for it to stick with its utilitarian philosophy and not succumb to pressure to include features which might compromise it, leading to several forks of Go.
That's what I meant when I said that I had to empty my cup, and it's unnecessary for most if they're happy with their current language.
As for the inclusion of generics, I'm divided. I'm eager to use generics again in my current go-to language, but on the other hand I'm worried: if this is the direction the Go language design team is going to take, then where will it end?
>>It made me think about the readers of my code (...) I don't always need to have n+1 layers in my architecture where all the layers just call the next layer anyway.
Your assertion doesn't make sense. N-tier architectures are primarily motivated by the needs of that very reader of the code, because they provide a clear understanding of how the overall code is organized.
More importantly, they provide a clear idea of which code is expected to call which code, and make it clear that dependencies only go one way.
I have no idea what leads people to believe that ad-hoc solutions improvised on the spot are helpful to the reader instead of clear architectures where all the responsibilities and relationships are lined up clearly from the start.
In practice, it rarely turns out that way. I have to deal with large, mature Java codebases for some of my work. The good thing is that the code does just about everything well and rarely breaks. The downside comes when something does break and I have to debug it. At some point in the Java world, best practice became building abstraction on top of abstraction on top of abstraction, and often these abstractions just call the next abstraction. That makes finding the offending line of code extremely difficult and time-consuming unless you are an expert in the codebase. Had the exact same code been written with fewer abstractions, debugging would be a lot easier.
I am not against abstractions, but I think they lead to hard-to-read, hard-to-debug code when overused. They need to be used deliberately rather than by default.
> At some point in the Java world, best practice became building abstraction on top of abstraction on top of abstraction.
It really doesn't. There is nothing intrinsic to Java that forces developers to needlessly add abstractions.
If your codebase has too many unwarranted abstractions to the point it adds a toll to your maintenance, it's up to you to refactor your code into maintainability.
And no, n-tier architectures do not add abstractions. They never do. At most, you add an interface to invert the dependencies between outer and inner layers, which does not create an abstraction. Instead, it lifts the interface that was always there, and ensures that you don't have to touch your inner layers when you need to fix issues in your outer layers.
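A small sketch of that interface lifting, with hypothetical names, using Python's `typing.Protocol` to express the interface the inner layer already implicitly required:

```python
from typing import Protocol

class UserStore(Protocol):
    """Interface owned by the inner layer: the contract it always
    implicitly depended on, now lifted into an explicit type."""
    def get_email(self, user_id: int) -> str: ...

def send_welcome(store: UserStore, user_id: int) -> str:
    # Inner-layer logic depends only on the interface, not on any
    # concrete database or other outer-layer code.
    return f"Sent welcome mail to {store.get_email(user_id)}"

class InMemoryStore:
    """Outer layer supplies the concrete implementation; it can be
    swapped (DB, API, fake) without touching send_welcome."""
    def __init__(self, emails):
        self.emails = emails

    def get_email(self, user_id: int) -> str:
        return self.emails[user_id]
```

The dependency now points inward only: fixing or replacing the store never forces a change in the inner layer.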
It's hard to stay simple when the number of users grow. Go will probably not stay simple for much longer (with generics and whatnot).
One thing that I don't understand about the ecosystem is the hate towards GOPATH. Why introduce a complex dependency system for a package manager when you can just pin submodules with git and reap the same benefits? :)
GOPATH is hated because it's poorly thought-out. It's poorly thought-out because Go is designed by Google, who uses Bazel for dependency management. GOPATH is only there because you can't expect everyone to adopt Bazel in order to adopt Go, so some half-assed solution gets designed to get the language out the door.
In simpler terms, the people who designed the language don't use GOPATH at all. That's why it's terrible.
I don't think GOPATH is poorly thought out at all. Dependency-environment-locating is a PITA. Off the top of my head, I can't think of a single package management system that doesn't use universal installs, FOO_PATH or "giant local dump per project".
Rust is probably the least half-assed (most full-assed?) model, with a sane user-wide default for the cache (~/.cargo), a way to change that default, and project-location flexibility.
But I actually love the Go notation, so much that I've opted to organize most of my code around the ~/namespace/src/domain/repo scheme. I never lose track of where a folder is :)
Nope, just two or three. Most lives in ~/ao (easy to type on dvorak), some is in ~/rd (random), some is in ~/tmp. I don't really work on enough variety of projects to deal with collisions.
I take issue with some of the decisions that went into Go, but I definitely respect the overarching philosophy of keeping things simple and not giving teams enough rope to hang themselves with.
I think functions are a good enough abstraction for many things. A few years ago I tended to make everything a class in Python; nowadays I rarely need more than functions. Learning Rust made me realize just how arbitrary my aesthetic ideas about code were. When I tried to go the class-based, object-oriented route in Rust, it failed spectacularly because I got lost in the maze of ownership in no time. Once I let go of those ideas, everything became incredibly straightforward. The spell has been broken.
That being said, I think module borders have become more important to me. Keep separated what is meant to be separated.
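A toy illustration of that shift (made-up example): a class that exists only to hold one value and wrap a computation around it, versus a plain function doing the same job:

```python
# Class-heavy version: state plus ceremony for a single computation.
class TemperatureConverter:
    def __init__(self, celsius):
        self.celsius = celsius

    def to_fahrenheit(self):
        return self.celsius * 9 / 5 + 32

# Function version: same behavior, no hidden state, trivially testable,
# and nothing for an ownership system to fight over.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32
```

The class buys nothing here: there is no invariant to protect and no state that outlives the call, so the function is the simpler and more honest abstraction.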
"It's better to repeat yourself than use the wrong abstraction."
It often happens that, in the attempt to be DRY, we add a parameter or some condition to handle a new variation of what seems like a universal logical construct in the code. Do this enough times and the code is no longer comprehensible to any of the people who wrote each variation, let alone a newcomer. We mistake some commonalities for universality. We become zealots.
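A made-up example of that parameter creep: one "reusable" formatter that has absorbed every caller's special case, versus purpose-named functions that repeat a line but stay obvious:

```python
# After a few rounds of "reuse", a once-simple function has grown a
# flag for every caller's special case (hypothetical example):
def format_name(user, uppercase=False, last_first=False, initials_only=False):
    first, last = user["first"], user["last"]
    if initials_only:
        name = f"{first[0]}.{last[0]}."
    elif last_first:
        name = f"{last}, {first}"
    else:
        name = f"{first} {last}"
    return name.upper() if uppercase else name

# Splitting back into purpose-named functions repeats a little code,
# but each one is fully comprehensible on its own:
def display_name(user):
    return f"{user['first']} {user['last']}"

def sort_key_name(user):
    return f"{user['last']}, {user['first']}"
```

Each caller of the flag-laden version has to know which flag combinations are meaningful; the split versions carry that knowledge in their names.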