Most of the time when a team is writing low quality software it's not really a choice. No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor. It's actually that the developers are not capable of writing tests, reviewing code, or refactoring to a sufficiently useful level that it's worthwhile. If you've come into the industry and joined a company that doesn't do those things, then you've never learned those skills.
Where Martin Fowler says you'll see the benefit of high quality code in a few weeks, he's assuming the team is capable of writing high quality code but choosing not to. It's actually more likely that the team would need to go away and learn how to write high quality code before they can start, including learning how to write testable code in the first place. That is a much bigger and much more time-consuming problem.
The article is absolutely 100% correct that high quality code lets you go faster but it ignores the root cause of the problem - developers have been writing low quality code for so long that unlearning all the bad habits and actually getting better is a huge undertaking.
It's important to note that having high test coverage doesn't make code good. Unit tests will actually make bad code even worse, because the underlying logic becomes even more difficult to change (the tests lock all the poor implementation details into place).
Tests have nothing to do with code quality. All they do is verify that the code works. I would argue that the simpler and therefore the better your code is, the less you need to rely on tests to verify that it works. Fewer edge cases means fewer tests.
I'm a big fan of integration tests though because they lock down the code based on high level features and not based on implementation details. If you ever have to rewrite a decent portion of a system (e.g. due to changing business requirements) it is deeply satisfying if your integration tests are still passing afterwards (e.g. with only minor changes to the test logic to account for the functionality changes).
I see this opinion a lot from people who haven't seen tests and code written by people experienced with TDD. The tests should not end up that coupled to the code. The implementation structure and the test structure end up somewhat different when you refactor every time the tests are green, listen to the feedback from the tests and the code, and have the skills to spot the refactoring opportunities.
Oftentimes people seem to equate unit testing with a 1:1 correspondence between test and implementation, with high coupling between the two. These sorts of tests resist refactoring rather than enabling it. With good tests you can pivot the implementation and tests independently.
In my experience, your statement is true when writing library code or tests that don't need to mock lots of objects.
Unfortunately, unit testing becomes highly coupled when testing classes in the standard web architecture. A service class you're testing can depend on other service classes, a DAO, and potentially other web services, so you're left mocking all those other classes if you want to create a unit test instead of an integration test. Since the external dependencies have been mocked out, the unit test is highly coupled to the implementation, and it's a PITA to change either the test or the code. I suspect that's why OP prefers integration testing, as it helps keep the test less coupled to the implementation.
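To make that concrete, here is a rough sketch of the kind of test being described (hypothetical service/DAO names, Mockito-style mocks, not anyone's actual code) - every collaborator is stubbed, and the verifications pin the test to the implementation's exact call sequence:

    import static org.mockito.Mockito.*;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    interface CustomerDao { boolean isPremium(long customerId); }
    interface PricingService { double basePrice(long customerId); }

    final class InvoiceService {
        private final CustomerDao customers;
        private final PricingService pricing;
        InvoiceService(CustomerDao customers, PricingService pricing) {
            this.customers = customers;
            this.pricing = pricing;
        }
        double totalFor(long customerId) {
            double base = pricing.basePrice(customerId);
            return customers.isPremium(customerId) ? base * 0.9 : base;
        }
    }

    class InvoiceServiceTest {
        @Test
        void appliesPremiumDiscount() {
            // Every dependency is a mock, stubbed with canned data.
            CustomerDao customers = mock(CustomerDao.class);
            PricingService pricing = mock(PricingService.class);
            when(customers.isPremium(42L)).thenReturn(true);
            when(pricing.basePrice(42L)).thenReturn(100.0);

            assertEquals(90.0, new InvoiceService(customers, pricing).totalFor(42L), 0.001);

            // These verifications assert *how* the service works, not just what it
            // returns, so almost any internal restructuring breaks the test.
            verify(customers).isPremium(42L);
            verify(pricing).basePrice(42L);
        }
    }

Split the pricing lookup into two calls, or move the premium check into PricingService, and this test fails even though the observable behaviour is unchanged.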
In my experience, if your tests require lots of mocks then that's a sign that IO is coupled too tightly to application logic. Refactoring your code so this isn't the case isn't always obvious, but it's a breath of fresh air and really cleans up the interfaces.
One problem with decoupling IO is that you still somehow need to get the data deep down into those places where it's needed by your application logic. That means you end up either:
1. Passing each individual little piece of data separately down the call stack with bloated method signatures containing laundry lists of data that seemingly have nothing to do with some of the contexts where they appear.
2. Combining pieces of data into larger state-holding types which you pass down the call stack, adding complexity to tests which now need mocks.
I think one of the toughest parts of day-to-day software engineering is dealing with this tension when you have complex modules that need to pass a lot of state around. It's easier and cleaner to pull stuff out of global state or thread contexts or IO, but that makes it harder to test. More often than I would like to admit, I ask myself whether a small change really needs an automated test, because those shiny tests that we adore so much sometimes complicate the real application code a lot.
If anyone has thoughts on how they approach this problem (which don't contain the words "dynamic scoping" :P) I'd love to read them.
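For what it's worth, here is a minimal sketch of option 2 with made-up names (not from any particular codebase): the IO layer gathers what the logic needs up front into a plain value object, and the logic underneath stays pure, so tests construct the context directly instead of mocking IO.

    import java.math.BigDecimal;

    // Plain value object assembled at the edge of the system by the IO layer.
    final class PricingContext {
        final BigDecimal basePrice;
        final BigDecimal taxRate;
        final boolean customerIsExempt;

        PricingContext(BigDecimal basePrice, BigDecimal taxRate, boolean customerIsExempt) {
            this.basePrice = basePrice;
            this.taxRate = taxRate;
            this.customerIsExempt = customerIsExempt;
        }
    }

    final class PricingLogic {
        // No IO and no mocks: a test builds a PricingContext by hand and
        // asserts on the returned value.
        static BigDecimal totalPrice(PricingContext ctx) {
            if (ctx.customerIsExempt) {
                return ctx.basePrice;
            }
            return ctx.basePrice.add(ctx.basePrice.multiply(ctx.taxRate));
        }
    }

The cost is exactly the tension described above: one more state-holding type to assemble and keep honest at the boundary.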
This is my experience as well. I learned the lesson the one time I was allowed to write unit tests at work. It was on an existing code base without tests. I had to significantly refactor code to make it testable, and one of the lessons I learned from the experience is to isolate I/O from the main business logic that I'm testing.
In the pre-test code, the functions were littered with PrintConsole statements that took a string and a warning level (the Console was an object responsible for printing strings on a HW console). I made sure my main business logic was never aware of the Console object. I made an intermediate/interface class that handled all I/O, and mocked that class. The business logic now called the interface's LogMessage, LogWarning, and LogError functions, each of which took a string. The function had no idea where these messages would go - they could go to the console, be logged to a file, or be sent as a text message. It didn't care.
Now when we needed to make changes to how things were printed, none of our business logic functions, nor their tests, were impacted. In this case at least, attempting to unit test led to less coupled code.
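A minimal sketch of that shape, with invented names rather than the original code: the business logic depends only on a small logging interface, and the hardware console sits behind one implementation of it.

    // Stand-in for the pre-existing hardware console class (signature guessed
    // from the description above).
    final class Console {
        static final int INFO = 0, WARNING = 1, ERROR = 2;
        void print(String msg, int level) { System.out.println(level + ": " + msg); }
    }

    // The small seam the business logic talks to.
    interface MessageSink {
        void logMessage(String msg);
        void logWarning(String msg);
        void logError(String msg);
    }

    // Production implementation: forwards everything to the hardware console.
    final class ConsoleSink implements MessageSink {
        private final Console console;
        ConsoleSink(Console console) { this.console = console; }
        public void logMessage(String msg) { console.print(msg, Console.INFO); }
        public void logWarning(String msg) { console.print(msg, Console.WARNING); }
        public void logError(String msg)   { console.print(msg, Console.ERROR); }
    }

    // The business logic never sees the Console, only the interface, so a test
    // can pass a trivial in-memory fake that just records the strings.
    final class Calibration {
        private final MessageSink log;
        Calibration(MessageSink log) { this.log = log; }

        void run() {
            log.logMessage("calibration started");
            // ... the actual business logic goes here ...
        }
    }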
And usually, with good TDD acceptance in your team, people automatically write more testable code, because they're too lazy to write tightly coupled code that needs many mocks.
... and no doubt the ratio of application/domain/pure logic to external services interaction varies tremendously by project and by industry, which is likely what leads to such a variety of opinions on the subject.
I would consider needing to mock a lot of objects to write your test a form of design feedback - an indication that our design could be improved. Perhaps the code under test has too many responsibilities, we're missing an abstraction, boundaries are in the wrong place, or there are too many side effects.
One of the downsides of modern mocking frameworks being so easy to use is that it's less obvious when we're doing too much of it.
If we test drive the behaviour, our first failing test of a single behaviour won't involve many collaborators. If it does, we're probably trying to test more than one thing at once. At some point as we add tests we may add more collaborators. If we refactor each time, we should be asking ourselves what's going wrong.
Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.
> Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.
But at least if you restrict your units to a single method, you have a chance of getting somewhat complete tests. If you're testing multiple classes with several methods each as a unit, the number of possible code paths is so huge that you know you cannot possibly test more than a small part of the possibilities.
If you TDD your implementation then it's all covered by tests. If you refactor as part of the TDD process then you may factor out other classes and methods from the implementation. These are still covered by the same tests but don't have their own microtests.
The video seems to support all my points. "Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement."
A test which covers a class is a unit test. A requirement is typically a feature. To test a feature, you usually need integration tests because a feature usually involves multiple classes.
I didn't downvote your comment but I vehemently disagree. Mission-critical code such as NASA flight guidance, avionics, and low-level libraries like SQLite depend on a suite of tests to maintain software quality. (I wrote a previous comment on this.[0])
We also want the new software that commands self-driving cars to have thousands of tests that cover as many scenarios as possible. I don't have inside knowledge of either Waymo or Tesla, but it seems like common sense to assume those software programmers rely on a massive suite of unit tests to stress test their cars' decision algorithms. One can't write software with that level of complexity and life-&-death consequences without relying on numerous tests at all layers of the stack. Yes, the cars will still have bugs and will sometimes make the wrong decision, but their software would be worse without the tests.
High quality software relies on both lower-level unit tests and higher-level integration tests. Or put another way, both "black box" and "white box" testing strategies are used.
Isn't this disagreement basically the same point made by Martin about different kinds of quality? SQLite's tests don't say the code is architected well and reusable and modular and blah blah blah, they say that it works. When people talk about the quality of NASA code or SQLite, that feels more like external quality than internal quality.
The 100% MC/DC testing in SQLite does not force the code to be well-architected, but it does help us to improve the architecture.
(1) The 100% branch test coverage requirement forces us to remove unreachable code, or else convert that code into assert() statements, thereby helping to remove cruft.
(2) High test coverage gives us freedom to refactor the code aggressively without fear of breaking things.
So, if your developers are passionate about long term maintainability, then having 100% MC/DC testing is a big big win. But if your developers are not interested in maintainability, then forcing a 100% MC/DC requirement on them does not help and could make things worse.
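As a generic illustration of point (1) - sketched in Java rather than SQLite's actual C - a defensive branch that can never run shows up as uncovered under a 100% branch-coverage requirement; turning it into an assertion removes the dead branch and documents the invariant:

    final class StateCodes {
        static final int OPEN = 0, CLOSED = 1;

        static int classify(int state) {
            if (state == OPEN) {
                return 1;
            }
            // Before: a "just in case" else-branch returning an error code that no
            // test could ever reach, leaving a permanently uncovered branch.
            // After: the invariant is stated as an assertion, so every remaining
            // branch is reachable by tests.
            assert state == CLOSED : "unexpected state " + state;
            return 2;
        }
    }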
>Isn't this disagreement basically the same point made by Martin about different kinds of quality?
M Fowler's comment about "tests" was also made in the context of internal quality. He mentions "cruft" as the buildup of bad internal code that the customer can't see:
>[...] the best teams both create much less cruft but also remove enough of the cruft they do create that they can continue to add features quickly. They spend time creating _automated tests_ so that they can surface problems quickly and spend less time removing bugs.
I think what he means is that just because you have tests (and even if you have high code coverage) doesn't mean that your code is high quality. They're correlated, but I've actually seen code with high test coverage... whose tests never made any assertions.
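A made-up illustration of that failure mode: both of these tests produce identical coverage for Adder, but only the second one would ever catch a bug in add().

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    final class Adder {
        int add(int a, int b) { return a + b; }
    }

    class AdderTest {
        @Test
        void coverageOnly() {
            new Adder().add(2, 2);   // executes the code, asserts nothing
        }

        @Test
        void actuallyChecksSomething() {
            assertEquals(4, new Adder().add(2, 2));
        }
    }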
So self-driving systems are based on machine learning, and thus don't have regular (deterministic) unit tests. They will mainly be tested on past data, but the end results are always probabilistic. I.e. no one (not even Musk himself) knows how the car would behave when it sees something that it was not trained on.
I've always wondered whether there's a bunch of conditional statements constraining the output of the probabilistic model. That would seem like the logical thing to do, but I'm not familiar enough with ML to know whether such a thing is needed or not.
I think that unit tests make sense for safety-critical systems but still in those cases, my point would be that it's better to add them near the end of the project once the code has settled.
Re: your last point, I recently rewrote parts of a billing system full of hairy logic and edge cases (and bugs). The initial MVP consisted of exactly replicating the existing invoicing logic. Due to the general complexity of the problem domain, I found myself rethinking and rewriting large portions of the system as I grew more familiar with the (undocumented, naturally) business requirements. In some cases I'd throw out entire modules and associated unit tests. After a while, I started relying more on integration tests which simply compared generated invoices between the two systems (and/or against golden files.)
Having these made it extremely easy to refactor large portions of the system quickly without needing to refactor unit tests. (I still wrote unit tests, just less of them, more focused on the stabler parts of the system.) This has loosened the grip of the "every function must have a unit test" mantra in my mind, which... I dunno, somewhere along the way sort of became simply assumed.
Some caveats to note, however. A) The code had minimal external dependencies (postgres). B) The integration tests ran very quickly against a local postgres database, only slightly slower than unit tests might, providing a quick dev feedback loop. C) While internally, the system was rather complex, the output was not. It was a simple CSV file that's trivial to parse/compare.
Thus, I wouldn't overgeneralize from the above. In cases where there are lots of external dependencies, integration tests are slow, or where evaluating the test results is more tricky (ie, you need Selenium or whatnot), this approach wouldn't be as feasible.
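For anyone curious what the golden-file comparison looks like, here is a rough sketch (the paths and the InvoiceGenerator class are hypothetical stand-ins for the real billing code): render the invoice output and diff it against a checked-in snapshot.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Stand-in for the system under test; in the real project this would be the
    // billing code that produces the invoice CSV.
    final class InvoiceGenerator {
        static String renderCsv(String fixturePath) {
            // ... read the fixture, run the billing logic, emit CSV ...
            return "invoice_id,amount\n1,100.00\n";
        }
    }

    class InvoiceGoldenFileTest {
        @Test
        void invoiceMatchesGoldenFile() throws Exception {
            String actual = InvoiceGenerator.renderCsv("fixtures/billing-run.json");
            String expected = Files.readString(Path.of("src/test/resources/golden/billing-run.csv"));
            // The whole output is the assertion: internals can be refactored freely
            // as long as the generated invoice stays identical.
            assertEquals(expected, actual);
        }
    }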
Most of this is a series of false choices. In fact
- tests can help show code quality improvements do not break anything
- you can have integration tests and unit tests at the same time; in fact, it is more of a spectrum than two rigid categories
- it's often possible to have simple code and test it
Generally speaking the more specific the question, the less controversial the choices are. It's typically not all that interesting to argue about how to test a particular algorithm, data structure, or service.
The hard part in all of this, from an engineering perspective, is just talking to folks, promoting good teamwork, actually showing the value of less obvious things (a passing test suite), and knowing what to do when technology choices become toxic to the product or team.
Most interesting refactorings change the boundaries and count of abstractions, which usually does break unit tests.
Unit tests are great at the leaves of the call graph, and things which are almost leaves because their dependencies aren't at any real risk of change. The further into the stack you go, the more brittle they get.
> It's important to note that having high test coverage doesn't make code good.
Sure, but low test coverage doesn't make it good either. Coverage is a metric and like any metric, it (1) needs to be assessed with judgement and (2) becomes useless when it's used as a target rather than a measurement.
> Tests have nothing to do with code quality. All they do is verify that the code works.
Well to start, Fowler notes a distinction between external and internal quality. External quality is "does it work from the end user perspective?", which can be verified by tests -- you note integration tests in this role (acceptance tests, feature tests, user tests, behaviour tests, whatever you choose to call them). In the external quality case, verifying that the code works is a large fraction of quality.
Your argument, I think, is that internal quality is unaffected by testing. I don't agree: in my experience the needs of simple testing create constant design pressure on the production code, most of which makes it easier to create future changes.
Though as noted at the top of the thread: expertise still matters. Writing better tests and better production code are skills.
> I'm a big fan of integration tests though because they lock down the code based on high level features and not based on implementation details.
I've found this to be a dangerous mindset. Integration tests are great, but they need a solid foundation of unit tests. Integration tests are slow, difficult to root-cause, complex to write and maintain, and also generally don't test all the various corners of the system.
Testing is a pyramid, with unit tests at the bottom and integration tests somewhere in the middle. If your unit tests are based on implementation details, as you say, then that's probably a sign that a refactoring is in order (I'd love to be less blunt, but it's tough in the absence of more details).
> Tests have nothing to do with code quality. All they do is verify that the code works. I would argue that the simpler and therefore the better your code is, the less you need to rely on tests to verify that it works. Fewer edge cases means fewer tests.
While I won't argue that tests verify that the code works, the assertion that tests have nothing to do with code quality based on that premise is incorrect, and here's why.
Some of the main types of poorly written code are 1) brittle code, which breaks easily when things are changed, such as dependency changes or changes in I/O, and 2) unreadable code, which decreases accurate understanding of what the code does and causes incorrect assumptions to be made, which yields bugs.
Unit tests, over time, raise the alarm about these kinds of code smells. While a test might not yield much info for a short time after it is written, as the code ages and has to stand up to the changing code and environment around it, well-written tests WILL highlight parts of the code that can be considered poorly written due to the two criteria above.
> Unit tests will actually make bad code even worse, because the underlying logic becomes even more difficult to change (the tests lock all the poor implementation details into place).
This statement is patently false, unless for some reason a project includes unit tests themselves as the production code, which would be highly unusual.
At most, unit tests must be refactored along with the code, but that's the standard operating procedure.
This seems to assume that the tests will be higher quality than the problematic code in the first place. It’s actually commonplace to see tests coupled to internal implementation details of production code, which makes refactoring very hard.
The idea of TDD (mostly lost to hype and consultants) is that you change _EITHER_ the tests or the code in each operation. This allows you to use one as a control against the other. If you change both, you prove that different tests pass against different code, which is substantially less useful. Unfortunately if tests are coupled to internal state, getting code to even compile without modifying both sides of the production/test boundary is difficult after a refactor.
> This seems to assume that the tests will be higher quality than the problematic code in the first place.
If the problem lies with problematic code then tests are not the problem. At most they're just yet another way that problematic code affects the problem.
Let's put it this way: would the problems go away if the tests were ripped out?
I actually just finished a ticket related to this. It took me significantly longer than necessary because I also had to go through all the poorly written tests.
But while you're building a new system/subsystem, it doesn't make sense to write unit tests for units of code which have a high likelihood of being deleted 1 month from now due to evolving project requirements.
It's like if you were building a smartphone; it wouldn't make sense to screw all the internal components into place one by one unless you were sure that all the components would end up fitting perfectly together inside the case. While building the prototype, you may decide to move some components around, downsize some components, trim some wires and remove some low-priority components to make room for others. In this case, unit tests are like screws.
The problem is that prototypes end up being production code in the real world. Writing tests is about managing risk. You should write some basal level of unit tests to validate your logic as you go. That basal level is determined by the team's or individual's tolerance for risk.
Who said anything about production? Haven't you seen requirements change even before the first prototype is ready? I had a meeting literally today where the client's CFO and head accountant threw out a week of my team's work because they forgot about key requirements (and this has happened for the third time this year).
Yes, I have instantly improved code by removing micro-level solipsistic tests that were tightly coupled to the implementation. These tests made it much slower to improve the quality of the code and had zero benefits, because they only tested that the code did what it did and not what it is supposed to do.
Good point, and I agree. Sunk cost fallacy and all that. I wouldn't consider those "objective", but I don't think it's worth arguing over the definition of that word when I think we otherwise agree.
I also would agree that sometimes time has been wasted creating too many tests. Perhaps that time could have been spent to greater effect.
I also think that even if, in retrospect, a test is very tightly coupled and specific to one implementation, that test still might have revealed bugs and may have helped the original author. If that test is now a burden, throw it away.
You appear to be trying some sort of reductio ad absurdum, but in many cases work on some change to the software starts with deleting all the related tests, because they're going to be irrelevant and changing them isn't worth the extra work.
The fact that that deletion is necessary means the tests apparently did make the code a bit worse.
Also all the time they were in while the code wasn't being changed, they made running the application's tests slower.
>No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor
I know several small offshore software contractor firms that actively turn down their developers' cries for help for tests and refactors for "budget reasons" all the time. Their clients usually don't know any better and later go on to pay the technical debt in support fees.
> It's actually that the developers are not capable of writing tests, reviewing code, or refactoring to a sufficiently useful level that it's worthwhile.
Or maybe it's that management pushed new features far higher up the priority list than "making code more maintainable". That has been the case in most places where I have worked.
I've worked three different places where I attempted to implement automated testing strategies, failed twice and succeeded once. If you subscribe to the anecdata model of knowing things, here goes:
In the first company, there was a strong culture of testing but no strong culture of teaching. I did not last long there and I did not succeed at implementing even basic automated testing. Everyone was very busy in their own roles, and nobody would show me, a co-op employee, how to test. I was a Computer Science student who hadn't graduated yet and honestly didn't know about unit testing frameworks, or Selenium, or whatever. If you give me a giant Waterfall document about requirements, and a giant spreadsheet to fill up with naughts and crosses, with little to no additional direction about the software, how it's tested, or how it even works, then you're going to have a bad time.
Second company there was a strong culture of quality, but not of testing. We were also a two-man developer shop, so there was very little time for teaching and testing. I was expected to learn on my own, and avoid spending time on learning things my boss already knew on our behalf. I accepted broken code from him all day long and made it work.
To be honest, that's where I learned to do good work and not break stuff, and we never invested heavily in test suites. We also almost never built anything above-average in complexity, and when we did, it actually wasn't very long before the boss left, and I was on my own to support it. In the next few years that guy wrote a book about how to dig out from this situation, when your software is successful and needs to change, but doesn't have any tests.
(He says it wasn't a very good book, but from my perspective it's something that was meant to be read preventatively; even though it reads more like a step-by-step manual, you should hope that you never have to follow these terrible, terrible steps. If you are starting a new project and still have a chance to keep test coverage at acceptable levels as you go, I'd maybe recommend reading it, so you know what you're in for if you make the bad decision and your software becomes successful anyway. I have a coupon code if you really want to know, but I digress... the short version is, you've gotta test everything before you change anything.)
In this last role, I have succeeded at implementing automated testing. (But at what cost?) The company supports the idea of spending time on testing. My direct supervisors were all willing to wait the extra week or two to see what I came up with, and, understanding the benefits of testing, in retrospect it was always considered time well spent. Nobody was really in a position to teach, but fortunately I had tons of experience at trying and failing before, so this time, with the right support structure, I was able to get it right for the most part on my own, with help from docs and the internet. (It helps a lot that browser testing tools and other testing tools have all evolved a lot in the last 10 years too; they are objectively better now than they were when I had that first job and no support.)
In summary, I'd say it's necessary to have all three - time to learn, actual support from above for delays when "this seems like something that shouldn't take this long" ultimately appears, and an actual operational need to build automated testing, which is not always granted depending on your team size, design, and need for growing complexity.
It is possible to build a widget that works, and never changes again. In this case, spending time on a test suite may be a waste. I have found as I've grown more successful and work with more people, that it happens a lot less often than it used to.
Thanks for the insight. In my last role I attempted to introduce automated testing and had support and some success, but no buy-in from the rest of the team meant I ended up 'owning' the test suite.
The best suggestion I've heard recently is, when someone writes a flaky test, that person needs to be the one who fixes it. (If you write flaky tests and I fix them, I learn how to not write flaky tests, and you keep on writing them, blissfully unaware of the pain that they cause every day.)
If only one person is writing tests, that's a problem you won't have, but what's worse... I think you have it worse.
I share this sentiment. I tend to categorize cruft code into two different categories, and in most cases I have encountered, both stem mostly from programmer inability rather than time constraints.
The first kind is what I would presume is the most common one. It undoubtedly shows up if you have unregulated feature growth in a codebase with a low sense of code ownership. Grunt programmers, or drive-by feature development teams, shoehorn in new code to fulfill their requirement. This leads to the normal degenerate codebase: modules are thousands of lines long, functions are hundreds of lines of deep staircase-like if-statement logic, spaghetti dependencies, promiscuous state-sharing, etc. The classic ball of mud.
The second kind of cruft is the one where someone tried to be clever above their ability and created heavy abstractions that are an ill fit for the problem at hand. Signs of this are overuse of complicated language constructs like inheritance, metaclasses, runtime inspection, etc. The style can lead to verbose, boilerplate-heavy code that overshadows the business logic. In my frustration I tend to call this abstraction wankery.
In the ball-of-mud pattern, the programmers often lack the ability to properly form the abstractions needed to sort out the mess, and they are aware of that, resigning themselves to trying to fulfill the task at hand without breaking the existing fragile mess. The grunt might be well knowledgeable in the business domain and have programming as a side skill. The drive-by coder does not have the motivation to understand the whole messy codebase and does the minimal change, trying not to break anything in the process.
The abstraction wankery is driven by other things, usually a second-system effect: the junior programmer has some code under his belt and is trying to level up his skills, a smooth-talking architect with little insight into the business domain is cargo-culting some new fad, etc. This kind of style is usually well received by management; they hear the buzzwords and it sounds good to their ears. It can take some time until the house of cards falls - usually a requirement comes along that does not fit the abstraction and unexpectedly takes an exorbitant amount of time to implement, or maybe a deep-rooted bug requires fixes that ripple through the whole codebase.
When the cleaners are finally sent in, the big ball of mud can usually be shaped up by incrementally applying the standard refactoring techniques until structure starts to show. The abstraction mess can be much more difficult to clean up. Incremental improvements can be more difficult, and sometimes a rewrite of the code is required, leading to a much more noticeable lack of feature velocity than the ball-of-mud fix.
>I tend to categorize cruft code into two different categories, and in most cases I have encountered, both stem mostly from programmer inability rather than time constraints.
This is only natural. Where the customer doesn't value software quality they will hire cheap (not very good / not very experienced) developers.
My personal experience is that at the beginning of my career customers neither expected nor wanted quality - prioritizing speed of delivery above virtually all else - and I felt like I was engaged in a perpetual struggle to "make" them understand, while as I grew more experienced I found that customers/employers simply expected that quality should take precedence over speed and required no convincing.
IME any attempt to "convince" the customer/employer that code quality was important was a waste of time. It's better simply to get them to decide their desired level on a rolling basis and act accordingly and find somebody else to work for if the answer isn't to your liking.
> This is only natural. Where the customer doesn't value software quality they will hire cheap (not very good / not very experienced) developers.
Well, those abstraction wankers usually do not come cheap. So from the customer's standpoint they have hired experienced and well-paid programmers, but the result is still crap.
Yep, one of the catch-22s of tech is that you need good people to recognize good people. I think it's why tech tends to congregate in hubs in spite of the lack of any real intrinsic need for it to do so.
> IME any attempt to "convince" the customer/employer that code quality was important was a waste of time. It's better simply to get them to decide their desired level on a rolling basis and act accordingly and find somebody else to work for if the answer isn't to your liking.
Exactly, you have to work within the context of the culture.
> No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor.
Then you've never worked at a company like my current one. The developers all very much want to do those things, but are forced not to by a management that is suspicious of the value of these things no matter how many times avoidable bugs pop up or massive refactors become unavoidable to add new functionality and management gets "I told you so"-ed. Tests are regularly postponed to follow-up issues that mysteriously never make it into sprints and preventative refactoring is a non-starter.
What the developers want or are capable of is meaningless in a situation where they have no leverage over how they spend their time.
At an old job, they were using an older XML marshalling library that hadn't been maintained in years, but it was everywhere in the code. Well, it finally got to the point where the thing just stopped working (some schema it couldn't parse), so somebody suggested that we "refactor" the code to use the more standard, well-supported JAXB. Well, since we're talking about a couple-million-line code base that parsed XML literally everywhere, the refactoring ended up taking months (mind you, we were completely dead in the water if we didn't do it). However, it got into management's collective hive mind that refactoring = several months of no new features, so they prohibited anything called refactoring.
I can't speak for everywhere, but it's generally not that hard to get another software job. Places like that tend to have few people with both experience and the ambition to improve things. The turnover rate in general is usually high.
Anyway, developers tend to have plenty of leverage, in switching jobs or teams if nothing else.
Probably the time-tracking bait-and-switch. As a developer, you have to account for every hour you spend on "company time", and you have to forecast those hours ahead of time (usually at the beginning of a "sprint"). You've got a little bit of wiggle room to sneak important things (like refactoring code, writing tests, reading documentation) in between the forecasted tasks, but not days. I guess your employer hasn't gotten "agile" (which is management-speak for "not agile") yet.
I am curious about your example. When you guys tell management about these best practices, what is their response? What is their reasoning for not doing PR reviews, tests, etc.?
"In the real world...", "Nobody cares about how it works as long it works...", etc. This is a company where the top level management 1) thinks it knows best about everything, 2) micromanages everything, and 3) trusts absolutely no one to do their jobs. The CEO has a software degree but never really worked as a programmer professionally and is as obsessed with exclusively shipping features as any MBA, and it flows down from there.
When you hire subject-matter experts to do professional work and then refuse to believe that they might know more than you about how to do that work, you're going to have deep dysfunction.
I agree that having technical leadership makes a huge difference. Still, most developers don't realize that if they push back hard enough, a lot of those loud "done yesterday" requirements dissolve. Non-technical people always ask for more than they expect they'll get.
While there is certainly a subset of developers who lack the ability to write solid code, more often they aren't given the opportunity to do so. Too often company deadlines, resource shortages, and selling prototypes as complete products box in the development teams. Product and sales want constant new features, and most companies don't enable a culture of investing in the product with refactoring, reviews, and other best practices. Developers are seen as a cost center, not the critical cog in the machine. If you fix this, you will get better quality code.
I see this as well - in a lot of ways, it's an (accidental?) outcome of JIRA-driven project management. The project manager's job is to squeeze as much productivity out of the developers as possible, so they do so by having you account for every hour you plan to spend and what you're going to spend it on. Then they start looking at what can be cut, and the stuff that's not "mission critical" gets cut. What's frustrating is that this ends up being a Pyrrhic victory.
I would add that, due to the heavy business emphasis in the industry, the level of apprenticeship has dropped very low. Instead of letting experienced engineers guide newcomers, it is often the case that junior but agreeable and lower-paid engineers take the lead. This way, it takes much longer to learn the craft.
>Most of the time when a team is writing low quality software it's not really a choice. No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor.
And then:
>That is a much bigger and much more time-consuming problem.
Right there is the choice.
Most comments here are blaming management. I've worked in a team where the team themselves were the ones opposing it. Yes, like you said, they worked for years without doing all this. But then management actually gave them leeway to spend time learning it and implementing it - on the job, but it was left to the developers to figure out how to learn it. They could learn it solo or form learning sessions - whatever they wanted.
Only a few developers took advantage of the opportunity. And the rest who hadn't then actively opposed changing the code for testing.
People generally don't want to change. In this case, it definitely was a choice.
Few studies have been done to prove that one way of coding is better than another. That's not to say there isn't a consensus about what leads to good code (tests are a good thing). I prefer a functional style but someone else might prefer Java lasagna style. The ones who say that their style is better don't have any scientific footing at all. We programmers love to say that we are so clever, but we never actually try to do any studies. I hope this will change with time.
There have been some papers published about languages and unsurprisingly, FP languages come out as the most time inefficient while Java comes last, even after C.
I’m not sure how they controlled for experience/skill as the Java developer is probably not as skilled as the FP person, but even so the results imply that choosing a programming language is a big deal, just as Paul Graham has espoused over and over again.
> FP languages come out as the most time inefficient while Java comes last
I'm confused - you say that FP languages are the most time inefficient - so they're the worst? That means that you're saying that Java is the most time efficient/the best? I'd be curious to see the paper.
Excellent point. It isn't to say that the developers cannot learn modern practices. I know this is how I worked when I first started and am a moderately capable developer. It is an undertaking though.
Is going through that gauntlet fairly universal? Onboarding is almost always a pain for the individual if you are hiring outside of large tech companies. Why is our default coding style not compatible with team programming?
I love the opening sections of this article, but the end left me wanting.
> the best teams both create much less cruft but also remove enough of the cruft they do create that they can continue to add features quickly. [...] They refactor frequently so that they can remove cruft before it builds up enough to get in the way.
In my experience in several large teams over 20 years, this is not a great summary of what actually happens. What actually happens is accumulation of customer requirements. We build new features and the old features that don’t fit easily with the new ones are not allowed to be removed. Everyone on the team wants the old features removed, and at the same time, the team reaches consensus that doing so would alienate loyal customers and lose business.
The decision is to avoid financial risk, not to drop software quality. The new features are also required, and so compromises and complications arise from supporting both. This is the main source of what is being called “cruft” here. I’ve seen truly great engineering teams, I’ve never seen engineering teams good enough to withstand conflicting requirements between old and new features. I don’t know what the solutions are, but I’m suspecting the thinking on solving this these days is planning a year or two ahead, publishing the deprecation schedule of old features. This takes a certain kind of management that is willing to sacrifice a few dollars today for the bigger picture, it isn’t easy to find.
That's true. A lot of people think that being customer-centric means doing exactly what the customer says. But sometimes you have to say "no" or propose different ways to achieve their goals so you can keep the software architecture halfway clean. Unfortunately, it doesn't help that engineering often isn't allowed to, or doesn't want to, talk to the customer directly, so a lot of these trade-offs never get to the customer.
> achieve their goals so you can keep the software architecture halfway clean
That sounds... like a bad decision. We create compromised software solutions to support the business, we don't compromise the business to support the software (unless it involves the safety of others).
> I've never seen engineering teams good enough to withstand conflicting requirements between old and new features. I don't know what the solutions are, but I'm suspecting the thinking on solving this these days is planning a year or two ahead, publishing the deprecation schedule of old features.
For a balance of practicality and loyalty, I still like the old-fashioned way of doing things. You release version 1, 2, 3, ... of your product. Old versions are supported for a relevant period of time and get essential updates for security and the like, but each major version is its own offering with its own set of functionality, which potentially includes new features, breaking changes, or even removing something.
Users can then move to a new version if and when they want to. Ideally you have a system that migrates their data automatically, including converting to the new way of doing things as needed and warning if there is anything lossy in that process. However, if the user is happy with their old version, they can keep using it without unwanted changes.
Meanwhile, you minimise development costs for ongoing support of older versions. In general, there is no obligation or expectation to backport new functionality. You just release security and compatibility updates as appropriate. You probably also update your migration system regularly, to track whatever you're doing in your newer versions and keep the upgrade path open.
I don't see any inherent reason that the same approach can't be employed even if you're doing the whole cloud-hosted SaaS thing. You just keep the lights on for your existing customers, but direct new prospects to your latest and greatest. IIRC, Basecamp is one business that does something a lot like this.
Perhaps, but then a lot of other software doesn't either. For software that does, obviously a strategy of maintaining multiple versions with potentially conflicting data models isn't going to work.
It's kind of a fractalized representation of a dynamic that happens to businesses at every scale: they get good at one thing, so now the organization pushes back against changing that thing when conditions change. The business coasts along for some indefinite period, still profitable because of momentum, and then collapses when its market advantage has sufficiently eroded.
The success stories generally come with a combination of top-down strong leadership and bottom-up skunkworks teams and units that are given the leniency to take a chance on something new. Sometimes acquisitions take on this role (e.g. Instagram as Facebook's replacement product).
If we take the old Microsoftism of "It only gets good at version 3" and extend it with, say, "it starts getting worse after version 7", then every software product should have its prospective replacement start shipping alongside version 4.
Actually doing this takes attention to detail and finding an unaddressed niche that would support a different product, though. Compromising and letting too much be reused or too many old requirements pour into the new thing usually causes the effort to fail. It has to be really different, like the IBM team in Boca Raton that came up with the PC design by dispensing with the normal IBM checklist of the time and kludging together some commodity parts and third-party software instead. Most such ideas get squashed by political machinery when put into the context of existing products and teams, which is why the skunkworks approach has to be fastidious.
There’s a book about this called “Sacred cows make the best burgers”. I haven’t read it, but it was recommended to me by a colleague after I was complaining to him about having to support a legacy system because of a few “important” clients.
Where possible, I've found it effective to keep system components small and reusable (each component should have one purpose and do that thing very well) instead of building monolithic objects. Complexity of system components increases exponentially with size.
Keeping components small increases the resilience of the system. Also, from an OO perspective, shoehorning components into a strict hierarchy can lead to unnecessary coupling and rework. In practice, it may be better to fork a small component. YMMV
I work in the public sector in Denmark. We operate 300-500 systems from private suppliers, and none of them work; none of them are particularly cheap either.
Our medical software on life supporting machinery is about the only software that actually always does what it’s supposed to, but it goes decades without changes. Everything else is a broken mess, regardless of what principles of development the companies adhere to.
I think the only software that we operate which is high quality, stable, secure, and capable of adding/removing features when we ask is our dental software, and that's actually some of the cheapest software that we buy. It's not made by a tech/development house though; it's made by a couple of former dentists who do it as a side product of their main business, which is selling dentist equipment.
So maybe the real issue lies with the development houses? But our experiences are obviously anecdotal so it’s hard to say.
Obviously anecdote != data, and you don't say what the software does. But your story would seem to imply that domain knowledge is more important than software development knowledge. There's a certain logic to it - someone who deeply intuits what the software is trying to model will naturally gravitate towards the correct abstractions, even if they're ugly and ad hoc; while a professional software developer with no domain experience will happily build a shiny tower of abstraction that doesn't fit the problem domain, and chaos will ensue.
There's a reason we emphasize nailing down requirements before committing code. But what if it goes a level deeper than that? Perhaps what we actually need to do is understand the mindset that is generating those requirements. Perhaps, for some types of software, that's equivalent to being a domain expert.
> But your story would seem to imply that domain knowledge is more important than software development knowledge.
I think the story suggests an alternative explanation: that the product is a side project for the people who make it, probably even considered a marketing expense. So they don't have the usual software-house incentive to fleece the government while delivering the worst possible product. I wouldn't be surprised to discover that these dentists don't consider it a high-pressure project, so they actually take the time to do it right and be proud of their work.
The real issue is that you pay them to deliver bad software.
I know it sounds like a weird thing to say. But had you as a customer demanded and were willing to pay for something different, you would get that.
Think about how the public sector buys a software development project; what the sort of process the supplier has to go through, how they qualify, how they bid, how the requirements are formed, how the software is tested, delivered and so on.
Had the public sector prioritised internal quality, it could have done so. But it chooses not to.
In a public sector IT project, the actual software development is only a small fraction of the cost. The other parts - sales, legal, management, testing, documentation ... - have a much bigger impact on the supplier's ability to make money. Thus those are the parts you get, and that is what drives the cost.
Yes! Some organizations are incapable of buying software.
Buying yet-to-be-developed software is easy with the right software company - you just need to provide your problems and priorities, and an open mind and let them manage the process. We do that for our customers, and we have happy customers.
But if you're incapable of choosing a good partner or you let your internal politics dominate the process, then it is extremely difficult. Even with a good development company, a dysfunctional buyer can easily be a factor of 200-500% in lost productivity.
Off-the-shelf software should be easier - you can just try it out. But the wrong organization can easily be incapable of that too, bundling everything up to save money without understanding how much more complex it makes everything and how ill-equipped they are to handle that complexity, never trying things out in practice, writing long spec lists instead, bikeshedding over unimportant implementation details, prioritizing development contract minutia over working systems, putting too many layers between the developers and actual users, going for a big bang.
Software is not about code. Software is a reflection of culture. The code is a means to a social end.
And if everything about a culture is broken - the relationships, the management insight, the goals (collective effectiveness and pride-in-professionalism vs individual ego and greed), the hiring and HR systems, the procurement, the sales - any software that crawls out of the swamp is going to reflect all of that.
When you buy big enterprise systems you enter contracts that aren’t easy to exit. You also bind so much money into those contracts that you don’t really want to leave them either, even if the company sucks at delivering. Maybe you’ll fight them in the courts for a few years and maybe they’ll compensate you a few hundred million, but once you enter these deals you’re basically in them until the law dictates that you have to do another round of bidding.
I've done this with a lot of different companies and a lot of different development and project management philosophies though, and they all fail.
We've gone full waterfall, we've gone full agile, and everything in between. We've done long, detailed requirement specifications, and we've invited companies into the heart of our business, letting them literally work inside our offices, sitting shoulder to shoulder with our domain experts. None of it produces high quality software.
The highest quality software we have, aside from a few small suppliers, is the software we build ourselves. It's anecdotal again, but it's the same story I hear in my network of digitalisation managers across the country's public sector and banking.
With the way government contracts work, you'll almost never get really high-quality software that way. The contractor simply does not have any incentive to do so, as it isn't in the contract. Instead, the contract usually gives them the incentive to drag things out as long as possible and make sure development costs are as high as they can get away with; "cost plus" contracts are notorious for this.
It's quite an interesting question. Our national tax ministry has had almost nothing but expensive scandals over the past 15 years.
Two years ago they set up a focused devops team inside their organisation. I don't know the exact details of it because my knowledge is from a 45-minute summit talk, but apparently this team managed to build a national-scale system in 3 months that actually works. That would have cost them billions on the private market, and would likely never have worked, yet they did it with a relatively small team.
Maybe the problem is scale. I mean, sometimes I wonder why our contracts include numerous product owners, key account managers, groups of business analysts, project managers, and God knows what else.
This is a little unrelated to buying big systems, but when we wanted to build an RPA setup, one of the consultant agencies had an offer which included 6 business-side people and one technician. I mention it because sometimes buying enterprise systems feels exactly like that.
Quality software is not just a matter of the customer demanding it.
Medical software usually comes with medical devices. You'd need a manufacturer that is good at developing both the devices and the accompanying software, and have a medical organisation that is good at their core business (being doctors) and knows enough about medical equipment and software to choose the manufacturer that has good quality in both. Even though another manufacturer may have superior or more affordable equipment and not be as good at the software side, etc (if that can even be judged before using the stuff for a while).
And all sides need to stay profitable while doing this.
Who says going for the manufacturer with the quality software is even worth it? Maybe it's better to go for the one with the better MRI scanner and make do with the crap software, etc.
The software market has strong information asymmetry: typically the seller knows far more about it than the buyer. Buyers struggle to assess quality and must rely on other signals ("nobody fired for buying IBM", "everyone uses Giant GloboConsultingCo", "I really like the font" etc).
Sounds like actually a team size issue. Two guys in their spare time can't create a codebase big enough to have severe agility or maintenance issues. But the moment you ask for a feature they can't handle in that very limited funding envelope, there's going to be an issue.
Or they have simple ways to get the data in and out of their software so the features don't need to be in there. The best software stays small and resists having all those features added.
When talking about quality, "good enough" is always good enough.
When I build a garden shed, I will not make strength calculations on the whole thing, and my foundation will be pretty basic. My "timbered some wood together" shed will stand for 50 years, just like one "made like an apartment building" would. Only the latter will take way more time and effort.
When building an apartment building, good luck doing it with the same effort as building a shed. You will have some nasty surprises once you start adding weight to the different floors. The whole thing will collapse.
So in the end, it makes no sense to build a garden shed as you would an apartment building, and it makes no sense to build an apartment building as a town shed. A lot of people forget this in the software world.
So the quality "support" depends on the project itself. Small projects need less, big projects more. Just like small companies need less process overhead, and big companies more.
Like everything in life, it all comes down to balance, and experience will teach you where the balance lies. Because sometimes you will go too far to the left, and after that you compensate and go too far to the right. But the balance will always be somewhere in the middle.
So no matter what project, "good enough" will always be good enough.
> So in the end, it makes no sense to build a garden shed as you would an apartment building, and it makes no sense to build an apartment building as a town shed. A lot of people forget this in the software world.
I agree, but I think that in the software world we're not even to the point where we can build sheds reliably well. We have neither historical knowledge that informs what the "ideal"[1] shed should look like, nor materials that won't suddenly change form the next day[2], nor tools that won't sometimes explode on us halfway through construction.
I understand the disdain for the "sufficiently smart compiler" argument, but I think that there's a long way to go in development of software tooling before we can get to the point of slapping together software like a shed. A pet peeve currently on my mind is throwing exceptions for invalid method parameters. For example, I genuinely appreciate the work that Microsoft has been putting into the .Net ecosystem, but out of all the recent changes I feel like non-nullable references is the only one that helps me write higher quality code instead of improving productivity a little bit (Now we just need enums that are actually type safe (one can dream)).
I'm excited for Rust, and I hope it finds success in the world dominated by C/C++. I'm hoping something similar comes along for the world dominated by Java/C#. Elixir looks really cool and in the vein of what I'd want, but I haven't used it at all, so I don't know how an "enterprise" Elixir development process would work.
I'm just hoping that "good enough" can get better in my lifetime.
[1] Not in the quality sense, but in the "Platonic ideal" sense
See: Who Builds a Skyscraper without Drawing Blueprints? [0]
I think we do have the tools to build Skyscrapers in software. It's not as expensive as it used to be, but it is certainly more expensive than the way most of us build now.
The reason I think we've seen so much innovation in the last couple of decades in software is that we're getting away with building Skyscrapers with little to no regulation or oversight.
Great for profits. Not so great for insurers, democracies, and regular people who have to deal with identity theft and fraud claims.
Fortunately it's not like JPL in the 90s. Writing reliable, critical infrastructure doesn't cost nearly as much with the advance of automated model checking and interactive theorem proving tools. The skills to use those tools are more difficult to acquire than the ability to write a valid C program but I don't think every software developer will need to. If enough people in senior positions start insisting on their use I think we'll do well.
In the meantime... when you need to know whether you're building a shed in a yard or a skyscraper you can look to your local professional engineering society for answers. In my area they have classifications available just as they would at your local city hall for determining if you need a permit to build your structure.
There's also a handy reference guide called the Software Engineering Body of Knowledge, or SWEBOK, which is regularly updated with the current state of the art. It's quite useful to be familiar with it, at least.
The with "good enough" is when it's not applied correctly. Simple script to move files around? That's fine. Business critical software your firm produces but doesn't directly use? No. I've seen the latter case many times.
> When talking about quality, "good enough" is always good enough.
Humans are poor estimators of "good enough", due to a number of biases (optimism bias / planning fallacy, the Dunning-Kruger effect, hyperbolic discounting, etc).
This is partly why your reference industry, building construction, is so thoroughly covered by regulation and caselaw. Apartments would be much more frequently half-arsed if there weren't unpalatable legal and financial consequences for doing so.
I work in QA and don't like the reduced focus on quality (selfishly, because I've got the skillset to get paid by assuring quality).
The article shows a graph that indicates that, over the long term, teams that attack cruft or spend time reducing it make a better product with more features.
To be cynical though, who cares? Who cares about the long term? Your goal as a startup is to crank out features fast enough to keep ahead of the competition and do so long enough to get bought out, IPO, or otherwise exit with a wheelbarrow of cash. Then the cruft is someone else's problem.
We're not exactly in a "long term focused environment". We're over here moving fast and breaking things. Bugs on production are fine, we'll just do a hotfix and then thank everyone for staying late and being rockstars.
Hell, half the S-1 documents I've seen flat-out state "we're losing a billion USD per year, our operating costs are definitely going up in the future; we may never be profitable", but it doesn't seem to matter one bit. "We're going to get big enough to raise our margins!" Neat; enter a scrappy competitor using VC funds to subsidize their overhead, undercutting you with the same business model you started with. That's not long term thinking.
Yes there are better ways to produce higher quality software, but who cares?
It's cheaper to develop high quality software, so let's all pay $500k+ a year to hire people capable of doing it for us, find a recruiting team good enough to find them, build a culture they actually want to join, and then hire enough of them that they have time to do regular code reviews, write tests, refactor code, etc.
High quality software takes a lot more than management telling everyone it's ok if they want to write high quality code.
False dichotomy. You don't need $500k engineers to write quality software. $150k/yr ones will do it just fine if you teach them how to do it and part ways with those who don't see the value in following basic quality standards.
Pretty sure the OP meant that only $500K+ per head FANG employees are capable of writing quality software. To be fair, you can't write any other software at FNG (don't know about A): you won't be allowed to check it in. Some people leave due to not being able to pass the readability review. But FANG employment is not a prerequisite. I have personally hired something like 20 engineers of varying seniority and within a month or two, they were cranking out impeccable work.
When Cleanroom did it, it was anywhere from a little cheaper to a little more expensive. Praxis's process had lower defects with higher, upfront costs. The one quote I heard on price was about 50% more than other suppliers. Praxis and some Cleanroom suppliers issued warranties for their code at specific defect rates. So, I imagine it's not as hard as you suggest to have high quality software. Heck, I think companies saying their stuff is higher quality could get away with less quality than those warrantied products given current state of software market.
I'm going to let you all in on a little secret weapon for high code quality and bug-free code called Design by Contract. Not only will contracts find bugs, you can set it up in such a way that it forces you to fix them or you don't have running software.
It's a myth that it's faster to put a bug in a bug list and deal with it later. If you find a bug, fix it immediately, most bugs take only a couple minutes to fix anyway. With DbC, you will find more bugs and it will reinforce the discipline to fix them then and there.
The graph that Martin Fowler showed where high-quality code allows for faster development is true. Where I would disagree is that there is an initial bump in time. Probably because most people will write tests as a sign of quality. Don't write those tests, go faster, use contracts.
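For anyone who hasn't seen it, here's a rough flavour of the idea, sketched in Kotlin with its built-in require/check helpers. The Account class and its rules are invented for illustration, and this is only the discipline, not full DbC tooling: Eiffel-style contracts are also inherited by subclasses and have invariants checked automatically around every public call.

    // Preconditions guard the caller's obligations; the postcondition and the
    // invariant guard the implementation's promises. A violation fails fast
    // instead of going on a bug list.
    class Account(initialBalance: Long) {
        var balance: Long = initialBalance
            private set

        init {
            require(initialBalance >= 0) { "precondition: opening balance must be non-negative" }
            checkInvariant()
        }

        fun withdraw(amount: Long): Long {
            require(amount > 0) { "precondition: amount must be positive" }   // caller's fault if violated
            require(amount <= balance) { "precondition: cannot overdraw" }
            val before = balance
            balance -= amount
            check(balance == before - amount) { "postcondition: balance reduced by exactly the amount" } // our fault if violated
            checkInvariant()
            return balance
        }

        private fun checkInvariant() =
            check(balance >= 0) { "invariant: balance is never negative" }
    }

Run with checks like these left on, a contract violation stops the program at the exact call that broke the rules, which is what reinforces the "fix it now" habit instead of filing the bug away for later.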
I'm currently consulting for a small startup where one of the two engineers just doesn't get why he needs to adhere to any sort of coding standard, write tests, or just write code that doesn't make your eyes bleed. To make matters worse, the founder thinks this engineer is "very capable" and just "set in his ways". As a consultant, it's not my place to fix such issues, so I'm thinking of dropping the client in question, for the first time in my career. The project is pretty cool, and I charge a steep rate, but this is just not worth the aggravation.
It seems to me that the software market is a "market for lemons"[0]. Consumers lacking a decent way to validate software quality don't believe you when you do in fact produce a high quality product, since other firms who produce low quality software are making the same claims you are. Everyone assumes that virtually all software quality is bad, and if it's going to be bad anyway, why not complete the work quickly as well?
The case is not being made particularly well, I feel, from the perspective of decision makers who incentivise quick-and-dirty tactics.
The cost/benefit of adding internal quality is only apparent over the entire lifetime of the product. If the product life is short, or only simple features are added, or not many of them, or the original design is a good fit for the feature scope in the future, you may never see sufficient benefit and the internal quality will be a net cost.
I'd grant that people tend to underestimate product lifetime and future complexity (perhaps wilfully, in some situations). A lot of people simply say "let's cut corners". I don't think there's a failure to explain to them that cutting corners can have downsides, as the article suggests. Everybody knows that. It is not unique to software, either.
Grant me the diligence to work out well architected systems with great naming and tests, the courage to churn out quick prototypes for testing if we're building the right thing and fast hacks for suffering coworkers — and the wisdom to tell the former from the latter.
It's Martin Fowler. He makes money off trying to help people write better code. So you get the answer of "mostly yes."
Ask management, or upper management, or a customer, to write something as lengthy and comprehensive from their perspective, and I bet they could be pretty convincing that high quality is not always the best choice.
> It's Martin Fowler. He makes money off trying to help people write better code. So you get the answer of "mostly yes."
Isn't that true of every expert in every industry? They all make money telling people (what they believe is) the right way to do something, and it often sounds obvious once someone has actually articulated it. And even more often there's someone else, whose job is to prioritize keeping costs down, telling people that actually it's not true because they can save a few bucks in the short term by doing things the quick and dirty way. You choose who to believe.
I'd draw a distinction between experts who make their money telling others how to write high quality software and experts who actually implement the (hopefully) high quality software.
As the latter, there are most definitely cases where the high-quality argument is not valid. It's not a matter of belief; it's more about constraints. If you have the time and expertise, quality is great, and that's when I like software development the most. But I've also decided to eschew quality to meet external demands. Then we're talking mitigation: "We'll farm out these 3 services to juniors/external devs to meet the client's/management's deadline, with the expectation they'll be chucked and rewritten later." We're making the decision to take on technical debt and hopefully keep it isolated enough that it's relatively easy to redo.
Martin Fowler and his ilk are frequently lacking a level of pragmatism that must be adopted to meet business needs.
“They all make money telling people (what they believe is) the right way to do something, and it often sounds obvious once someone has actually articulated it,”
Or it sounds obvious and clear until you actually try it and then things are not that clear anymore :)
Quality is only ever a proxy for more practical metrics. If poor quality code causes more bugs, more failures, slower feature development then you've got something to measure ROI. And contrary to Mr Fowler's claims, there's plenty of times where code can afford bugs and failures pretty easily while dev time is generally very expensive. It depends heavily on what the software is used for and how often it's used.
A million users transacting money? Quality first, for sure. A content migration script run once a month, where an intern can go fix all the mistakes by hand? Who cares if it sucks.
The author's distinction between internal and external quality is useful, but there is also an argument that internal quality has observable effects beyond development time. It is harder to reason about the correctness of software containing lots of accidental complexity, which means, in practice, that it is more likely to have problems that get through into the field. This is particularly so for security matters.
I really liked and found interesting the end of this sentence:
> "For several years they have used statistical analysis of surveys to tease out the practices of high performing software teams. Their work has shown that elite software teams update production code many times a day, pushing code changes from development to production in less than an hour. As they do this, their change failure rate is significantly lower than slower organizations they recover from errors much more quickly. Furthermore, such elite software delivery organizations are correlated with higher organizational performance."
Consumers kind of expect software issues, and reputational damage is par for the course, especially with hugely expensive government systems where the development is 100% offshored.
I agree that high quality software is usually more cost effective than lousily-put-together software, but not in all cases.
If you are building something new, as the article recognises, or in some cases you don't have a lot of experience, or you expect to grow fast, etc., what you build will have problems anyway, it might soon become obsolete, unmaintainable or ineffective... and then die. You should still have a decent plan, but in these cases it would be more effective to not bother much about high quality. You need to be really prepared in order to write high quality software in a business environment, and that's not something achievable through just will, in a reasonable amount of time. You need to understand sometimes you lack experience / definite direction / resources / ...
If you have the experience, a clear scope and goals, then high quality might indeed be the most effective way to go.
When building your own, small projects, high quality might be the way to go too, as you won't hate what you are doing and you will learn much more. Here effective would mean a very different thing.
But I think that considering the effectiveness of code in different contexts is a better perspective than talking about quality. It's always a good idea to spend some time considering the architecture; it's always a good idea to keep things modular, to keep code as easy to delete/replace as possible, and to write as little code as needed. But the quality? Well, it depends. What's even quality? (And I'm the kind of guy who can't stand writing lousy or ugly code :D)
His conclusion that "high quality software is cheaper to produce" neglects to mention any kind of time frame over which it actually becomes "cheaper".
Over the short term (e.g. the next several product features) it may in fact be cheaper for a team to focus on speed and not quality. The accumulated technical debt would only cause problems for future development, and that's why it's taken on. In most cases, I think everyone (including management) knows perfectly well that high quality software is cheaper in the long run, but they're willing to take on that debt in order to have some short term benefit.
He does mention that in talking to experienced developers, they find cruft slowing down their progress as early as a few weeks into a project. Additionally, his graph shows the cross-over as "a few weeks."
The thing is, do those costs surface to the other units of business that drive dev decisions? Product might feel the pain, but do marketing and business get it? Do they feel the perils of lost opportunity cost, unclear requirements, technical debt and slow work?
Often, dev and product are slaves to the business machine that only cares if the money keeps flowing. At many places, the dev team is not an equal partner with an equal seat at the table. Paying down technical debt is often an unpopular notion when "we could be making money".
You really need the entire company to understand what it means to develop software.
One premise this article is based on is that more features = a better product. However that isn't always the case. Having a codebase which makes it easier to add new features isn't necessarily going to make that product succeed over a competitor.
If Product A adds 10 bad/mediocre features it'll become bloated and hard to use. If Product B, in the meantime, adds 2 good/great features the market will recognise this. Now Product A is stuck with 10 features they don't want. And good luck trying to take away those features from your users!
My general sense is that high quality software always wins in the long run. I'd argue that is the primary reason Google has rapidly gained share in markets traditionally dominated by Microsoft and other enterprise software companies -- Google's software is noticeably better.
This article has the implicit assumption that you're holding the team constant. What if I really broke out the checkbook and brought in Martin Fowler to lay out a plan for technical debt cleanup? Or acquired a key data provider that has a terrible API in order to have direct access?
Certainly that's buying higher software quality yet that's not what is being discussed here (but I wish the author would discuss it!).
What this is saying is that having developers think and plan a little bit, instead of treating every day like it's the home stretch of the Kentucky Derby and you're half a length behind, pays off pretty quickly. I would agree with that! I also like the point about letting teams with high momentum move fast and make improvements; I often see such teams reined in and I tend to think that's a mistake.
The thing that ages software is changes. The irony with software is that the most successful pieces of software need to be changed frequently, because of changing requirements. It means that the software is very useful. The more useful it is, the faster it ages and the worse the code gets. The best written software that doesn't change means that its usefulness is very limited.
It is impossible to design software that can account for long term changes. It's 100% impossible. You need to design for the near future as best as you can, and realize that eventually you need to refactor.
So design your software with the best maintainability you can for the next few years, then try your best to refactor it as you go along with new changes, but don't beat yourself up when things like tech debt creep up.
"The irony with software is that the most successful pieces of software need to be changed frequently, because of changing requirements."
I don't think that this is true. Some software doesn't need to change much at all and yet remains quite useful. I'm thinking of common Unix utilities as a good example here. Some haven't changed much in decades and yet are still essential to many workflows.
I think what is closer to the truth is that many tasks that we write software to accomplish have rapidly changing or highly variable requirements. This is especially true of things that directly support business processes. Those kinds of software projects are either going to require a lot of changes or will need to be constructed in a highly flexible manner so as to accommodate different needs.
It depends on what your definition of "successful" is. I meant in the monetary term. Sure, there are old Unix utilities that haven't changed much but they haven't made a lot of money. Those with paying customers generally require a lot of changes to make their customers happy.
In many cases, especially in the early days of a startup, you can't even afford the week or two to increase code quality, because you might not have a business by then.
If you have a demo for a customer the next day, at that point software quality does not matter at all.
I mean, are there startups that have a demo a week or two after they are created? Because the article is saying that correct software is cheaper on every timescale past that.
What I meant is that the closer you get to a delivery date, the more stress there is on you to "just make stuff work".
Most software projects that: 1) create business value, 2) are not trivial, and 3) have time constraints, reach a point where you have to just finish it, no matter the cost to code quality. I see it where I work. We have pretty good programmers, but sometimes we have to create debt intentionally because we know that's the way we'll make the deadline, and therefore impress customers, and therefore buy more time to write new features, and fix that debt.
Anecdote, not data: in the last 3 years of our operation we've had 4 or 5 times where features that were very important to launch had to go out in less than a week.
If you have experience and follow best practices, you should produce decent quality code in the same amount of time as producing bad quality code.
In those cases, the gains from improving the code further are not always worth the costs, since you should already be taking care of the low hanging fruit. There is always more you can do, but oftentimes good enough is good enough.
It is often worth it for inexperienced developers to spend more time refactoring their code though. Besides the obvious improvements to the code base, it helps them gain the skills to do it "good enough" the first time.
There are exceptions to every rule, but when someone says that they are the exception to the rule they probably are not.
For example, we all know people who are lovable ###holes. However, in my experience people who think they are lovable ###holes are generally just ###holes.
My theory is that this is because people who are the exception to the rule are vigilant not to go too far, work to improve their flaws, or try to compensate. On the other hand, people who say they are the exception to the rule do so to use it as an excuse for not doing those things.
Applied to this topic; I'd argue that if someone says that for the software they are working on it isn't worth the cost to do refactoring/code reviews/write tests then they are probably wrong.
People are used to crappy software though, and in the enterprise world software regularly gets scrapped as it becomes unmaintainable.
Internal quality is not important at all, as it can't be put on a spreadsheet whereas cost can.
In the enterprise world, isn't the problem that unmaintainable software doesn't get scrapped, because it's so embedded at the core of the organisation, and maintenance snowballs? I'm thinking of sprawling ERP systems and 'core' banking in particular.
Quality can be put on a spreadsheet: Cost of maintenance, regulatory cost, transition cost. However, a lack of commitment to radical change exists due to lacking risk appetite.
For example, 'Challenger' banks in the EU with only a few million in VC funding and a couple of handfuls of developers are able to provide complete banking services and really good (instant response) customer service. The equivalent system in a F500 bank can cost hundreds of millions of dollars and simply applies a band-aid as another layer on decrepit systems, which still get supported.
As another example [1], a poster shared on HN a couple of days ago that Tencent have 6000 developers supporting QQ, yet WeChat has only 50. All in the same company, but siloed and with very different management philosophies for overlapping apps. I find that amazing but completely understand it.
'Innovation' is the fashionable enterprise-level replacement buzzword for 'creativity'. The enterprise world has lost its risk appetite, and is slowly being erased.
> For example, 'Challenger' banks in the EU with only a few million in VC funding and a couple of handfuls of developers are able to provide complete banking services and really good (instant response) customer service. The equivalent system in a F500 bank can cost hundreds of millions of dollars and simply applies a band-aid as another layer on decrepit systems, which still get supported.
Aren't they able to do this simply because they piggyback off existing financial infrastructure, limit their scope, and eschew doing anything at all in the meatspace? Some of the complexity of real banks comes from having a great many branch offices[0], handling ATMs, currencies, credits, all sizes of customers (from individuals to corporations), and running some of the backend financial services themselves.
--
[0] - or whatever you call the place you physically go to do your banking; not sure about the correct term.
[0] Indeed, 'branch' is the correct term. For retail business, a 'branch' is the bricks-and-mortar space you see on a high street for retail customers, including SMEs; for institutional business, the 'branch' is usually the head office for the country (though the only thing that really matters for institutional customers is the relationship manager who looks after them, with the handling of business done in a shared service centre somewhere offshore).
Challenger banks, which don't have branches (well, they do in the regulatory sense, but not in the customer service sense), all seem to use MasterCard (please correct me if Visa also serves them), so I'm sure there's a deal there somewhere, but I'm also sure they are required by regulators to run their own general ledger as independently licensed banks, and yes, they use existing infrastructure (it's actually quite simple to set up an ATM network of your own using existing protocols and networks). ATM withdrawals are free of transaction fees for the user. A challenger to payment systems is FasterPayments, providing RTGS payments at low cost, who are now expanding in Hong Kong/HKD (and perhaps more).
They (challenger banks in the EU) do have very low interest rates, but they seem targeted at low balances, and perhaps the business model is to take advantage of PSD2 in the future for brand and financial management; I don't know. N26 makes a big deal of travel insurance and value-added services for a monthly fee of 10-15 EUR. PSD2 destroys the traditional concept of a bank's brand, leaving only the brand of the service.
Part of my background is setting up and managing shared service centres for institutional businesses, so I'm looking somewhat from the outside in the retail space as a user, but an avid user.
Monzo, Starling Bank and Revolut all have their own backends. The latter two only outsource card processing (aka connecting to the MasterCard network).
The whole point of these banks is to not have branches (I believe that’s the word you were looking for) so it’s not that they rely on third-parties to do “meatspace” things, it’s that their whole point is not to do meatspace things (because it’s unnecessary for most customers anyway) and pass on the cost savings to the customer (that’s how they get away without charging bullshit fees for foreign card transactions or declined payments).
Code quality as discussed in the article is more of a global property (the architecture) than a local one (syntax/implementation). Code that is bad locally can be fixed, not so much for a bad design.
It's absolutely imperative that you don't let anybody claiming to be a programmer anywhere near a keyboard. Unplug them all now, unless you have all the safety precautions in place to catch any potential problem, no matter how small or insignificant it may first appear. They will ruin your business. 100 percent guaranteed. Absolutely critical word of advice. Nobody should ever claim to be capable of writing any function in any scripting language. Only AI can write programmes in today's world. The risk is too high.