What I wish I knew when I became CTO (medium.com/sketchdeck-developer-blog)
519 points by edmack on Feb 13, 2018 | 249 comments



> I’ve found it a real struggle to get our team to adopt writing tests.

If you're struggling to judge the engineering culture of a company that you're considering joining, consider this indicative of a poor one. It isn't definitive, but it's something you should ask about and probe further. Ask to see their CI dashboard and PR comments over the last few days. When they talk about Agile, ask what _engineering_ techniques (not process!) they leverage. These things will tell you if you're joining a GM or a Toyota; a company that sees quality and efficiency as opposing forces, or one that sees them as inseparable.

When it comes to tests, there are two types of people: those who know how to write tests, and those who think they're inefficient. If I had to guess what happened here, I'd say: the company had a lack of people who knew how to write effective tests combined with a lack of mentoring.

That's why you ask to see recent PR comments and find out if they do pair programming. Because these two things are decisive factors in a good engineering culture.


PR comments I agree with, but after believing in unit tests for years I'm drifting slowly into the "waste of time" camp.

I'm convinced that unit tests don't usually find bugs. IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.
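For illustration, here's roughly what that looks like in practice - a minimal property-based sketch in Python with hypothesis (the parse_query function and mylib module are hypothetical):

    from hypothesis import given, strategies as st

    from mylib import parse_query  # hypothetical function under test

    # Instead of hand-picking cases, let the framework generate thousands of
    # arbitrary strings and assert an invariant that must hold for all of them.
    @given(st.text())
    def test_parse_query_never_crashes(raw):
        try:
            result = parse_query(raw)
        except ValueError:
            return  # the documented failure mode is fine
        assert isinstance(result, dict)  # anything else is a bug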

At my current position I have the opportunity to work with two large code-bases, built by different teams in different offices. One of the projects has ~70% code coverage, the other doesn't have a single test. Exposure to both of these systems really bent my opinion on unit tests and it has not recovered.

The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.

The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up every exception, client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.

What's the difference between the teams that developed these vastly different applications? I've worked with both teams for a while, and honestly, the engineers that wrote no tests are of far higher caliber. They use Linux at home, have been programming since they can remember, hack assembler on the weekends and 3D print random useless objects they could easily buy. The other team went to school to program, and they do it because it pays the bills. Most of the bad programmers know what they're doing is wrong, but they do it anyways so they can pad their resume with more crap and tell the boss how great they are for adding machine learning to the help screen that nobody has ever opened.

If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you. Maybe there's some middle ground if you have a mixed team or a bunch of mediocre devs?


Tests help very much against regressions. And they help if you have a mix of people touching the code.

Anecdotal: I once helped out a team who was writing a Maven plugin for doing static tests on some js code during build. There was already a test suite with a bunch of test code. As my stuff was fairly complicated, and I have a habit of writing unit tests for such things, I added a bunch.

Fast forward a year and a half later: I was greeted with a mail saying there was a bug in it. I had to fight the better part of a day to nail it down: first, not being familiar with the code any longer, and secondly because a bunch of stuff had been added in the meantime. I fixed it and thought it would be a good idea to add a test, as it was a nasty corner case. I headed for the natural place where the test would go and found -- exactly the test I was going to write, nicely commented out. A quick check with Git revealed that I had added this test initially, and that it was commented out when the new feature causing the bug was added. Firing up git blame was next...

This is why I am fond of having tests: you get confronted with it immediately if you break something, at least if your test suite is worth its salt.


>Firing up git blame was next...

I like your story, but I find it amusing that `git blame` has such an appropriate name.


"git annotate" does the same thing, but "git blame" can be more fun / dramatic if you're looking into the cause of a problem.

Interestingly, "svn annotate" had 2 aliases: "svn blame" and "svn praise". But git didn't add a "praise" alias, just "blame". I actually almost submitted a PR to add "git praise" one time.


Linus Torvalds wrote it, so it's pretty typical of his style.


My sibling beat me to it, but I always thought that it was called this because of the need for it for exactly the reason I described in my post...


Let's break this down.

> I'm convinced that unit tests don't usually find bugs.

They don't; they test whether the API contract the developer had in mind is still valid.

> IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it.

You don't write tests to find bugs (in 98% of cases), but you can write tests for bugs you've found.
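For example, a regression test pinned to a specific fixed bug might look like this (a Python/pytest sketch; the function, module and bug ID are hypothetical):

    import pytest

    from billing import prorate  # hypothetical function where the bug lived

    def test_prorate_handles_leap_years():
        # Regression test for BUG-1234: proration divided by 365 even in
        # leap years. A yearly amount of 366.0 must give a daily rate of 1.0.
        assert prorate(amount=366.0, year=2016) == pytest.approx(1.0)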

> Fuzzing is a much better approach.

If you're writing an I/O-intensive thing, such as a JSON parser, then yes. For the 80% of software that is CRUD, probably not.

> The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.

You are blaming tests for bad design choices. With the patterns you mention, unit tests only get you so far; integration tests are what help you prevent bad deployments.

> The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up every exception, client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.

So how many exceptions were raised due to bad deploys? Code review only gets you so far.

> If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you.

Failing tests don't have to do with devs being "great" or not. Developers must have the capability of quickly testing the system without manual work, in order to be more effective and ship new features faster. If the tests are one-sided (only unit tests, or only integration tests), then this will get you only so far, but it still gets you that far.

Don't abandon good development practices only because you saw a terrible Java EE application.


There's a much easier way to break it down.

Tests are a pattern. And patterns are the bread and butter of the mediocre. That's not to say that patterns or tests are bad, but high-calibre guys know when to use which tool. As a tool, unit testing is almost useless.

Low calibre guys don't have any feel for what they're doing. They just use the tools and patterns they were taught to use. All the time. This goes from engineers to managers to other disciplines.

I've seen people on a factory floor treating my test instructions for a device I built as gospel. I had a new girl who had no idea I designed said gadget telling me off for not doing the testing exactly the way the instruction manual I wrote says.

The same thing happened with patterns and unit tests. You have hordes of stupid people following the mantra to the letter because they don't actually understand the intent. Any workplace where testing is part of the 'culture' signals to me that it's full of mediocre devs who were whipped into some kind of productivity by overbearing use of patterns. It's a good way to get work done with mediocre devs, but good devs are just stifled by it and avoid places that force it.


I find unit tests to be _most_ useful in very particular cases: when a given function I'm writing has a set of inputs/outputs that I'm going for. Things like parsing a URL into its components, or writing a class that must match a particular interface. I need to make sure the function works anyway, so I can either test it manually, or I can take a few extra moments and commit those test cases to version control.
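Concretely, that kind of input/output table commits naturally to version control as a parametrized test - a sketch in Python/pytest against the standard library's URL splitter:

    import pytest
    from urllib.parse import urlsplit

    # Each row is an input URL and the components we expect back.
    @pytest.mark.parametrize("url, scheme, host, path", [
        ("https://example.com/a/b", "https", "example.com", "/a/b"),
        ("ftp://files.example.com/", "ftp", "files.example.com", "/"),
    ])
    def test_urlsplit_components(url, scheme, host, path):
        parts = urlsplit(url)
        assert (parts.scheme, parts.hostname, parts.path) == (scheme, host, path)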

For more complex items, I'm much more interested in higher level black-box integration tests.


That's a great example of why unit testing is mostly useless.

Having an input/expected-output set when writing something like a parser is standard practice. Turning that set into unit tests is worthless for a few reasons.

1: You will design your code to make them all pass. A unit test is useless if it always passes. When your test framework comes back saying x/x (100%) of tests have passed, you are receiving ZERO information as to the validity of your system.

2: You wrote the unit tests with the same assumptions, biases, and limitations as the code they're testing. If you have a fundamental misunderstanding of what the system should do, it will manifest in both the code and the test. This is true of most unit tests - they are tautological.

3: While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that lends itself to unit testing. More than likely, that's not the most readable or understandable way said code could have been written. You sacrificed clarity for unit testability. Metrics like test code coverage unintentionally steer developers toward writing unreadable, tangled code.

The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point i'm just talking about regression testing, and there are many ways to do that other than unit testing.


Your complaint about always passing only makes sense if you ignore negative tests. Good tests will also check that bad/incorrect input results in predictable behaviour - e.g. invalid input into a parser doesn't parse.
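A minimal negative test for the parser case (Python/pytest sketch; the parse function and mylib module are hypothetical):

    import pytest

    from mylib import parse  # hypothetical parser under test

    def test_malformed_input_is_rejected():
        # Negative test: invalid input must raise the documented error,
        # not silently return a partial parse.
        with pytest.raises(ValueError):
            parse("{this is not valid")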

> While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that lends itself to unit testing

Another way to consider it is that unit testing forces you to structure your code to be more composable, which is a win. The amount of intrusive access/change you need to grant test code is language-dependent.

> The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point i'm just talking about regression testing, and there are many ways to do that other than unit testing.

And yet successful large-scale projects like LLVM use unit-testing. Not exclusively but it's a part of their overall strategy to ensure code quality. Sure, for very small-scale projects with a few constant team members it can be overkill. Those aren't particularly interesting scenarios because you're facing fewer organizational challenges. The fact of the matter is that despite all the hand-wringing about how it's not useful, unit tests are the inevitable addition to any codebase that has a non-trivial number of contributors, changes and/or lines of code.


The applicability of unit testing to your particular cases varies greatly across languages & runtimes.

For URL parsing, some runtimes/frameworks already have that implemented. E.g. in .NET the Uri class allows getting scheme/host/port/path/segments, and there's a separate ParseQueryString utility method to parse the query part of the URI.

To ensure a class conforms to an interface, the majority of strongly-typed OO languages have interfaces in their type systems. If you use that but fail to implement an interface or some part of it, your code just won't compile.
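The same guarantee exists outside compiled languages too - a sketch of the Python analogue using typing.Protocol, where a static checker such as mypy (rather than a compiler) flags a class that doesn't satisfy the interface:

    from typing import Protocol

    class Storage(Protocol):
        def save(self, key: str, blob: bytes) -> None: ...

    class DiskStorage:
        def save(self, key: str, blob: bytes) -> None:
            with open(key, "wb") as f:
                f.write(blob)

    def backup(store: Storage) -> None:
        store.save("backup.bin", b"\x00")

    # mypy verifies structurally that DiskStorage satisfies Storage;
    # a class missing save() would be rejected before the code ever runs.
    backup(DiskStorage())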


Indeed. Tests allow new members of the team to confidently make changes. I've seen codebases that had near zero tests and were also a total mess, with one change somewhere breaking a hundred things 30 levels down the stack. We'd find the issue only in production, along with an enraged customer.

Tests are not a replacement for good developers, they are just a tool for contract validation and a regression safety net.


> Developers must have the capability of quickly testing the system without manual work

Running unit tests is hardly quick, especially if you have to compile them. End-to-end tests are even worse in this regard.

> They don't; they test whether the API contract the developer had in mind is still valid.

If you're always breaking the API, then that's a sign that the API is too complex and poorly designed. The API should be the closest thing you have to being set in stone. Linus Torvalds has many rants on breaking the Linux kernel's API (which also has no real unit tests).

It's also really easy to tell if you're breaking the API. Are you touching that API code path at this time? Then yes, you're probably breaking the API. Unless there was a preexisting bug that you are fixing (in which case, the unit test failed) then you are, by definition, breaking the API, assuming your API truly is doing one logical, self-contained thing at a time as any good API should.

edit: As an aside, I'd like to point out that POSIX C/C11/jQuery/etc. are littered with deprecated API calls, such as sprintf(). This is almost always the correct thing to do. Deprecate broken interfaces and create new interfaces that fix the issues. Attempting to fix broken APIs by introducing optional "modes" or parameters to an interface, or altering the response is certain to cause bugs in the consumer of the interface.

> Don't abandon good development practices

Unit tests are a tool. There are cases where they make sense, where they are trivial to implement and benefit you greatly at the same time. Then there are cases where implementing a unit test will take an entire day with marginal benefit and the code will be entirely rewritten next year anyway (literally all web development everywhere). It doesn't make sense to spend man-months and man-years writing and maintaining unit tests when the app will get tossed out and rewritten in LatestFad Framework almost as soon as you write the test.


> implementing a unit test will take an entire day with marginal benefit

The benefit should be realizing that if you need an entire day to implement a unit test, you're doing something very, very wrong.


I think it probably depends on the complexity of the code as well. I can't count the number of times my unit tests on projects I alone maintain have saved my ass from releasing some unwanted bug into production due to a change I did.

Especially if the codebase evolves as new end-user requirements are discovered over the lifetime of the project, unit tests on various corner cases can be a lifesaver.

I'm not a bad dev, honestly. The complexity of the code I have to maintain just overwhelms my working memory. And yes, without any silly patterning. Sometimes domain requirements alone are sufficiently complex to confound a person without additional safeguards.

Another level of complexity comes from functionality that is only rational to include from third-party sources. The third-party sources must be updated frequently (because the domain is complex and later versions usually are subjectively of higher quality). The unit tests are about the only thing that can tell me in a timely manner if there was a breaking change somewhere.

Yes, there is smoke testing later on, but I much prefer dealing with a few unit tests telling me they don't work rather than dealing with all the ruckus from bugs caught closer to the end user.


On projects I alone maintain I prefer to only unit test the primary API. That usually gives me the information I need to triangulate issues and I move too slowly otherwise.


My tests are generally against the module interface as well. Unit tests don't need to be atomic, as long as they run sufficiently fast. Sometimes there is no 'correct' output, I just need to pinpoint if some change affected the output and is that a problem or not.

Dogmatic unit testing is silly. Testing should focus on pinning down the critical end-user constraints and covering as much as possible of the functionality visible to the end user. So, I would not necessarily focus on testing individual methods unless they are obviously tricky.

In an organization where everybody can code anywhere I would enforce per-method testing, though. Sometimes a succinct and lucid unit test is the best possible documentation.


Thanks. I also have a history with tests and I continue to struggle to find the right balance not just between coverage, but also unit vs integration (and within unit, between behavioural and implementation). I think this uncertainty based on experience is in a whole other class than "I can't get my employees to write tests."

Two quick points.

1 - I've added fuzz testing to my arsenal and find it a good value, especially if you're blasting through a new project.

2 - Good monitoring (logging, metrics) trumps tests when it comes to quality, both for being _much_ easier to do and in terms of absolute worth.

That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it because you inherently start to program better (especially true if you're working with statically typed languages, where coupling tends to be more problematic). There are secondary benefits though: regression, documentation, onboarding.

I think one key difference between what you describe and my own situation is that my small team manages a lot of different projects. Maybe their total size is the same as your two, but our ability to confidently alter a project we haven't looked at in months is largely due to our test coverage. I agree that then and there, I get less benefit from tests for the project that I'm currently focused on.

The nut I haven't cracked yet is real integration testing between systems. This seems like something everyone is struggling with, and it's both increasingly critical (as we move to more and more services) and increasingly difficult. My "solution" to this hasn't been testing, but rather: use typed messaging (protocol buffers) and use Elixir (the process isolation lets you have a lot of the same wins as SOA without the drawbacks, but it isn't always a solution, obviously).

If you're interested, I've written more about this: http://openmymind.net/A-Decade-Of-Unit-Testing/


>That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it because you inherently start to program better

Unit tests "identify" tight coupling because they are themselves a form of tight coupling.


Huh? My interpretation is, it's harder to write shitty code (e.g. hard-coding the database server IP) if you write unittests (where you'll need to abstract the database interface to be able to mock it). In this manner, unittests promote clean, separated interfaces and work against tight coupling.
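In code, that abstraction might look like this (a minimal Python sketch with unittest.mock; the repository interface and function are hypothetical):

    from unittest.mock import Mock

    # The code under test depends on a repository interface instead of a
    # hard-coded database connection, so the test can swap in a Mock.
    def fetch_display_name(user_id, repo):
        row = repo.get_user(user_id)
        return row["name"].title() if row else "<unknown>"

    def test_fetch_display_name_formats_name():
        repo = Mock()
        repo.get_user.return_value = {"name": "ada lovelace"}
        assert fetch_display_name(1, repo) == "Ada Lovelace"
        repo.get_user.assert_called_once_with(1)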


Tests that couple to database connection code and mock or stub it out are more tightly coupled to the code than tests that just couple to, say, REST end points.

I'm not denying that the pain of having one form of tight coupling interact with another can be used to "guide" the design of the code. It can be. I've done it.

I'm simply pointing out that you're using tight couplings (between unit test and code) to highlight other tight couplings (between code and code).

I use my eyes to detect tight couplings in code that deserve a refactor because that's cheaper than writing 10,000 lines of unit test code to "guide design". Each to their own, though. I know my view isn't a popular one: https://news.ycombinator.com/item?id=16374624


I've never liked unittests. They make it harder to refactor, which you have to do because all good designs come from iterating many times, and the types of mistakes unittests catch tend to be easy to spot (when the error occurs) and fix anyway. In general I feel like there's a disturbing tendency of programmers to avoid reading code and to think only in terms of inputs/outputs. This is often a nice abstraction to make, but not always. See all the comments saying something like "unittests let someone new contribute easily"; I disagree with this. I believe that before you start making any changes you should know where and how a function is being used, instead of relying on unittests. You're saying 'this code may not work, let's write some more code to check it' - but what if the test code does not work? The idea is rotten to the core.


Unit tests are not meant to find bugs, they are meant to keep it together as you add features and refactor the system.

They are also completely orthogonal to patterns and layers and aspects and J2EE and what not. All that has nothing to do with tests at all.


I think it can be related in a certain type of poor team that cargo cults strict rules and patterns and mechanistically writes tests for every little getter and setter of all the useless layers of useless glue classes whose real purpose is to mask their lack of understanding.


> If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.

I routinely call myself a proponent of BDT (Bug Driven Tests) over TDD for much the same reason. That said, tests are HUGELY beneficial for guarding against regressions and ensuring effective refactors. Anecdotally, on my current project tests helped us:

* Catch (new) bugs/changes in behavior when upgrading libraries.

* Rewrite core components from the ground up with a high degree of confidence in backwards compatibility.

* Refactor our Object model completely for better async programming patterns.

I don't think tests are particularly good at guarding against future bugs in new features, as your comment about fuzzing squarely notes.

But I DO think tests are good at catching regressions and improving confidence in the effectiveness of fundamental changes in design or underlying utilities version to version.


Unit tests are there to make the code less fragile, so that it can be modified with confidence. But if you need tests to make your code robust, it's likely a mess underneath; probably better to spend the time refactoring.

Personally, I say write tests when it makes development quicker or serves as a good example / spec.


I think unit tests will die one day, and that day is probably not too far away.

These days I follow three "good practice" rules, all of which are violated when you follow common unit testing practise:

* Only put tests on the edge of a project. If you feel like you need lower level test than that then either a) you don't or b) architecturally, you need to break that code off into a different project.

* Test as realistically as possible. That means if your app uses a database, test against the real database - same version as in prod, data that's as close to prod as possible. Where speed conflicts with realism, bias toward realism.

* Isolate and mock everything in your tests that has caused test indeterminism - e.g. date/time, calls to external servers that go down, etc.
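For the third rule, a sketch of what isolating date/time looks like in Python with unittest.mock (the invoicing module and is_overdue function are hypothetical):

    from datetime import datetime, timezone
    from unittest.mock import patch

    import invoicing  # hypothetical module whose code calls datetime.now()

    def test_overdue_is_deterministic():
        # Pin "now" so the test can't flake as the real clock moves.
        frozen = datetime(2018, 2, 13, tzinfo=timezone.utc)
        with patch("invoicing.datetime") as dt:
            dt.now.return_value = frozen
            due = datetime(2018, 1, 1, tzinfo=timezone.utc)
            assert invoicing.is_overdue(due)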


> Only put tests on the edge of a project.

I mostly agree with your point, but I think this is too much. Projects should be made up of decoupled modules (units ?) with well-defined interfaces. These should be stable and tested, and mostly without mocking required.

The larger your project the more important this is.


>Projects should be made up of decoupled modules (units ?) with well-defined interfaces.

That goes without saying.

Nonetheless, if it's a very decoupled module with a very well defined, relatively unchanging interface which you could surround with tests with the bare minimum of mocking - to me that says that it probably makes sense to put it in a project by itself.

>The larger your project the more important this is.

The larger a project gets the more I want to break it up.


While I personally believe in BDD and unit testing, I'm interested in some commentary here from down-voters - always keen to learn from examples


To be clear, while I'm a big fan of BDD the practise, I strongly dislike cucumber and other gherkin tools. I consider a large part of the relative unpopularity of BDD to be attributable to their problems.

I think the downvotes are largely dogma driven - people are quite attached to unit testing, especially for the situations where it worked for them and largely see them in opposition to "no testing" not a "different kind of test".


> I'm convinced that unit tests don't usually find bugs.

At work, I've rejected many merge requests with the comment "this unit test is equivalent to verifying a checksum of the method source". It's so frustrating that people still think it's necessary to write things like this (a literal, real example):

    expect(User::MAX_PASSWORD_RESET_ATTEMPTS).to eq(3)


That's an education problem, and not an intractable one to solve over time by mentoring the code author.


Tests are by definition recursive: you need to know what to test for before you can test it.


Unit tests aren't supposed to find all bugs. Moreover, if you're not enforcing that the tests pass before being merged/pushed into a shared branch, they are beyond useless: they age, and more importantly the pain of broken tests is multiplied as it escapes the developer making the change and blocks the entire team.

To understand how unit tests are useful, you look at how code is developed. Typically there's a write/compile/run cycle that you iterate on as you write code (or you do it in that order if you're a coding god). Then you test it with some sample inputs to validate that it works correctly. The "test it with some sample inputs" is simply what a unit test is. This is frequently a simpler environment in which to do so, as you can control the inputs in a more fine-grained manner than you might otherwise.

If you submit this, then at the very least reviewers can have more confidence in the quality of your code, or perhaps see some corner cases that may have been missed in your testing; devs in my experience are horrible at communicating exactly what was tested in a change (moreover, they tend to give high-level descriptions that omit implicit information, whereas unit tests do not). Once you get it in, pre-submit validation enforces that someone else can't break the assumptions you've made. This is a double-edged sword, because sometimes you have to rewrite sections of code that can invalidate a lot of unit tests.

However, the true value-add of unit tests is much longer-term. When you fix a bug, you write a regression test so that the bug won't resurface as you keep developing. Importantly, you provide a comment that links to the bug in your bug-tracking system, which can provide further contextual information about the bug.

Unit tests aren't free: they can be over-engineered to the point of basically being another parallel codebase to maintain, or they can be over-specified and duplicated so that minor alterations cause a cascading sequence of failures. However, for complex projects with lots of moving parts they can be used to obtain the super useful result of always being able to guarantee a minimum level of quality before you hand off to a manual QA process.

Moreover, unit tests can serve a very useful role in on-boarding less experienced engineers more quickly (even quality coders take time to ramp up), or in handing off the software to less motivated/inexperienced/lower-quality contractors if the software has transitioned into maintenance mode. Additionally, code reviews can be hit or miss with respect to catching issues, so automated tests ensure that developers can focus on other, higher-level discussions rather than figuring out if the code works.

Sure, unit tests can go insane, with mocks/stubs everywhere. I prefer to keep test code minimal and only use mocks/stubs when absolutely necessary because the test environment has different needs (e.g. not sending e-mails, "shutdown" meaning something else, etc). There's no free lunch, but I have yet to see a decent combination of well-thought-out automation and unit tests fail to maintain quality over time (the pre-submit automation part is a 100% prerequisite for unit tests to be useful).


"This is frequently a simpler environment to do so as you can control the inputs in a more fine-grained manner than you might otherwise"

One of the things that really sold me on unit tests for Django development was realising that it was quicker to write a test than to open a shell, import what I was working on and run the code manually.
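That workflow is cheap precisely because the test is barely more code than the shell session would be - a sketch (the Article model and its slug behavior are hypothetical):

    from django.test import TestCase

    from myapp.models import Article  # hypothetical model

    class SlugTests(TestCase):
        def test_slug_is_generated_on_save(self):
            # The same check you'd do by hand in the shell, kept forever.
            article = Article.objects.create(title="Hello World")
            self.assertEqual(article.slug, "hello-world")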


This stems from an unwillingness to make it a job requirement.

There are several things you as a software engineer are expected to do as a part of your job: write code, write tests, participate in code reviews, ensure successful deployment, work effectively with various groups, etc.

It's really simple: state the job requirements up front in the position description and during the hiring process. Make testing part of the code review process, and use it as an opportunity to educate about what makes a good test. Make it part of the performance review and tie raises to it (and, if it goes on long enough, continued employment).

Need to write tests for existing untested areas of the code? Have the team create a developer rotation so they dedicate someone to it for part of each sprint.


I couldn't agree more. I've worked at places where some of the engineers were conscientious about writing tests and having excellent test coverage. Guess what? Our services were still unreliable because the engineers who didn't write tests brought poor quality into the codebase, so we had constant problems.

Even a few engineers on the team who don't write tests can make the product as unreliable, from the customer's point of view, as it would be if none of the engineers wrote tests.

At my current company, test coverage is taken seriously as a job requirement, and it is considered during performance reviews. Consequently, the test coverage is pretty darn good.


Per your previous note on reliability, does reliability at the current company match the test coverage?


I'm in 100% agreement with you up until the point of tying your test coverage and writing of tests to your employment. In my eyes that promotes a culture of writing bogus tests that provide no value other than to make more green check marks. You should be encouraged to write tests by your colleagues and be in a culture that sees the benefits, rather than forcing people to do it.

I'm also unsure if sitting one developer down in a corner for a segment of each sprint and dedicating them exclusively to testing legacy code with no purpose is valuable. You should be testing legacy code as you come across it: make sure you harness it properly, make your modifications, and continue to the next task. If you are spending time doing something that doesn't complete a bug or a feature, you're spending valuable time testing something that may be completely removed in the future.


If a PR has bogus tests that provide no value other than to make more green check marks, how do they pass code reviews? That indicates that your code review process is kinda broken--tests should support the code review process by indicating what edge cases the writer of a PR has thought of and then prompting the reviewer to ask what hasn't been thought of.


Bogus tests have to be caught in code review. When I talk about educating the team that's what I mean.

I've only ever had to do a test rotation once or twice, and it was like pulling the rip cord on a lawnmower. Requires effort at first and then it becomes self-sustaining over time. It establishes or affirms a culture of testing. The rotation doesn't even need to last long.

You should know which portions of the code are here to stay and which are nearing their end of life. Naturally, you want to spend your time where it will have maximum payoff.


If you are a company trying to introduce unit testing as a new concept, I guess this practice could be acceptable. I think context really matters. I've put a lot of thought into this throughout the day, and I'm completely torn. On one hand I see the benefits of tying compensation to it, but I also just see it creating more problems than solutions. Especially if later on it becomes a cultural standard in the office, how on Earth are you going to remove that benefit (because you no longer need to encourage it) without pissing people off?

Also, for the latter point, I guess that also depends on context. If you work for a consulting company you may not have full knowledge of what the code base is, or even have direction to be touching some things. If you are developing software for your own company, I do agree you need to figure these things out, and maybe having a developer dedicated to it each sprint isn't a bad idea. I overstepped my bounds on that comment, as I have never worked for a company that sells its own software; I've only ever done consulting and I sometimes forget about alternative perspectives, so sorry about that.


No worries. Note that I am not saying you get rewarded for doing the bare minimum (writing tests). You get rewarded for going above and beyond. You are not performing the minimum requirements of the job if you do not include tests.

Of course you combine this with managerial support and coaching around task planning and messaging to other groups.

I've been a consultant, too, and I agree that it can sometimes (for some clients) be difficult to make the case for testing in that environment.


> a company that sees quality and efficiency as opposing forces, or one that sees them as inseparable.

I just wanted to say that this was beautifully stated. I've been looking for better words to explain this concept to the people around me.


I imagine it is thinking along these lines...

https://totalqualitymanagement.wordpress.com/2008/09/12/cost...

Definition of cost of quality: It's a term that's widely used - and widely misunderstood. The "cost of quality" isn't the price of creating a quality product or service. It's the cost of NOT creating a quality product or service. Every time work is redone, the cost of quality increases. Obvious examples include: ...


To be fair, testing is to some extent an unsolved problem. The joys of testing were being extolled long before test frameworks were actually usable. Now that they are, and you can glue Travis, GitHub and the test lib of your choice together pretty easily you have solved about 30% of what needs to be tested. If, say, you are developing an Office add-in on a Mac, and you want to test it on Word 2013 on Windows 7, there is no easy way to automate this task, and certainly no "write once, run everywhere" solution.

In my GitHuby life, I write tests obsessively. In my enterprisey-softwarey life I don't, because there is no sensible way to do it.


Well, it happens that a lot of techniques are still not well understood.

I mean, we develop database-heavy code. Should we never test the code running against the database? That would be a poor choice, since we would lose a lot of coverage. What we did instead were transactional tests: in PostgreSQL terms, we use SAVEPOINTs to wrap each test inside a savepoint and then roll back to that sane state, never committing anything to the database. With DI this is fairly easy, since we can just replace the database pool with a single connection using the pg JDBC driver, which can insert these savepoints.

The test suite runs in ~4 minutes in the best case (full Scala compile + a big test suite, ~65%+ coverage - we started late) and can be slow if we have cache misses (dependencies need to be resolved, which is awkwardly slow with Scala/sbt).
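The commenter's stack is Scala/JDBC, but the savepoint trick is portable - a minimal sketch of the same idea as pytest fixtures with psycopg2 (the connection string and fixture names are assumptions):

    import psycopg2
    import pytest

    # One never-committed transaction for the whole session; each test runs
    # inside a SAVEPOINT that is rolled back afterwards.
    @pytest.fixture(scope="session")
    def conn():
        c = psycopg2.connect("dbname=app_test")  # hypothetical test database
        c.autocommit = False
        yield c
        c.rollback()
        c.close()

    @pytest.fixture
    def cur(conn):
        cur = conn.cursor()
        cur.execute("SAVEPOINT test_case")
        yield cur
        cur.execute("ROLLBACK TO SAVEPOINT test_case")
        cur.close()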


Databases are ridiculously testable because your inputs are just text. What's hard is when your inputs are platform environments and versions and hardware and racy events and...


Our tests take about 10 seconds to run (mainly the tests which need to hit a lot of endpoints; our domain is around 1.4s), with the compile being the slow part, which brings CI to around 1min30s-1min50s on average.

We use Elixir, so we get nice features like the Ecto sandbox with async tests out of the box.


I'd say that is an architecture problem up to a point. A test does not need any framework. Simply a defined output for a defined input, and then check whether the output matches expectations.


In this sort of scenario, the bugs lie in the expectations themselves. Tests that don’t account for that are dead weight.


Can you expand on that? Because I don't see how a test can account for the faulty expectations of the person writing it.


Mostly agree, save one thing: pair programming is deeply divisive, and by itself provides no signal about an engineering culture.


My experience is that what you're saying is true if it's being done dogmatically. I've been a subject (victim?) of this before.

But strategically applied? I think it's a pretty big win. Specifically, I'm talking about onboarding people (onto the company or a project) and working with interns and juniors. It doesn't even have to be a senior and a junior; two juniors working together is significant. And it isn't just about information flowing from SR -> JR; the benefits are tangible the other way too.

I'd say at a minimum, having everyone spend 4 hours a week pair programming should be a goal to try.


I've paired a lot and see a lot of value in it but I disagree with most of this, honestly. Agreed on onboarding, but only if that's what works best for the incoming engineer. Two junior engineers pairing rarely increases productivity in my experience and putting some arbitrary number on it like "4 hrs/week" seems dogmatic.

Horses for courses - pairing works really well for some teams and is painful for others. The presence (or lack) of pairing in a company wouldn't be a signal to me, rather I'd take it as a good sign if the team is fine with pairing whenever it makes sense but doesn't have any specific rules about it.


> I'd say at a minimum, having everyone spend 4 hours a week pair programming should be a goal to try.

Pair programming is like nails on a chalkboard to me and at least a plurality of developers generally, based on what I’ve experienced personally and read online. An expectation that I’d do 4 hours a week of it would have me hunting for a new job immediately.

It’s different in kind to other practices like mandatory code reviews or minimum test coverage. Organizations are free to select for compatible employees, of course, but it’s totally unrelated to the health of the engineering org in any dimension.


Ok, so that assertion was pretty controversial, but, honest question, what mechanism do you use for mentoring/learning/growth? Code review is the only other activity that I've seen that can have the same type of impact, but I see them as complementary.

I'm old, I learned about sql injection and hash salts and coupling and testing by being awful at it for decades. How do I transfer that knowledge so that a 26 year old can be where I was at 32 if not working closely with them, using their code as the foundation for that knowledge transfer?


I like embedding, joint design sessions, and thorough code review. For explicit junior dev mentor relationships, I like frequent one-on-ones (I’ve even done 2x per week) and quite detailed, joint ticket/work planning. I’m also happy to do pair code analysis/review for areas that I’m familiar with and the junior isn’t.

What I’m not happy to do, and what pair programming is, as far as I have seen, is to sit down with another engineer and figure out a problem together. In addition to being simply incompatible with my mind’s problem-solving faculties, it in my experience produces lowest-common-denominator software that strips any sense of self-satisfaction or -actualization from my work. No thank you.


Thanks, I appreciate the perspective.


> I'm old, I learned about sql injection and hash salts and coupling and testing by being awful at it for decades.

You pick up a goddamn book, man!

> How do I transfer that knowledge so that a 26 year old can be where I was at 32

You tell them to pick up a goddamn book, man!

Sorry for being curt. But it's the professional responsibility of developers to educate themselves. Some people think they can cram on binary trees in CS, use that limited knowledge to BS the interview, and then coast into working on the transaction system at a bank or whatever.

If a company wants to pay you to mentor a junior, that's one thing. And should be explicitly stated as such. I'm willing to help just about anyone that asks. But if I find myself showing a developer how the compiler works (or a compiler works), or the syntax of our programming language, or basic things that Google knows, I'm going to walk away from that flaming wreck of a company. I've worked with developers that hunt-and-peck typed before. You ever have to explain syntax to a guy that can barely work a keyboard? Let's just say, my threshold for putting up with BS is dramatically lower now.


My belief is that the mentorship comes from the code[1]. Juniors (+ new hires) copy the existing code.

They don't avoid sql injection because they think it's bad, they avoid it because they're adapting your code. When they're asked to make a page that does X, they just copy a page that almost does X somewhere else in the system. Maybe one day they read a list of the top 10 vulnerabilities and realize why you did it that way.

It's why loads of developers can add new functionality just fine, but ask them to build a whole new app from scratch and you will get an incomprehensible mess.

Of course, this doesn't work too well when your code base is a mess of competing styles, etc.

[1] Not that I'm saying some additional help wouldn't be good, but that the significant amount can be learnt alone, with no guidance, from the code base.


Strategic Pair Programming is the perfect way to describe the right answer here.

Essential for onboarding and cross skilling. But mandatory for everything ? Awful idea.

Not everyone learns or benefits by watching someone else type.


Tests or unit tests? When people refer to writing tests, they usually mean unit tests. In the last four years, our unit tests have caught maybe one or two bugs. The time and resources spent on fixing those bugs after they were in production would have been a fraction of the time and resources spent on writing unit tests. Unit tests simply don't catch bugs and spending time on them is time wasted. Judging a company as having a bad engineering culture because they don't do pointless, unnecessary, and superfluous work that doesn't benefit them seems to be more a reflection of your ideas than the company itself. I'd say that reflects quite well on the company and its engineering culture. If you're talking about other automated tests, they may or may not make sense depending on your team size and product.


> Ask to see their CI dashboard and PR comments over the last few days

This is fantastic advice.


>> [article] I’ve hugely appreciated the succinct functional syntax of CoffeeScript and believe it’s helped me achieve greater personal productivity over the years.

Ends justify the means. This resonates with me - multiple teams I've left had irrational exuberance about technologies like CoffeeScript / MongoDB / etc. Anyone who has played with a functional language / "nosql" / etc. on the weekend can experience this euphoria without the toxicity of churn to their company. It's patronizing to people who understand the importance of where things are headed. This is one of the signals that I look for.

>> [article] I’ve found it a real struggle to get our team to adopt writing tests.

> If you're struggling to judge the engineering culture of a company that you're considering joining, consider this indicative of a poor one. It isn't definitive, but it's something you should ask about and probe further.

After reading the article, parent's comment is spot on in multiple dimensions... this article is full of red flags to look for when joining a team. The depressing thing is: if your manager's manager doesn't care... and your manager doesn't care... and you care, well.. then.. nobody cares.


It's aside from your point, but would that Toyota's embedded software were as good as its mechanical engineering...


The genesis for The Mythical Man Month:

"In particular, I wanted to explain the quite different management experiences encountered in System/360 hardware development and OS/360 software development. This book is a belated answer to Tom Watson's probing questions as to why programming is hard to manage."


Anecdote time. My former company outsourced embedded development to the company that does firmware for Toyota. It was a complete disaster, and a year of work had to be scrapped. Code was rife with cut and paste, badly reimplemented mutexes when they could have used the ones supplied with the RTOS, and other nonsense. I suspect the Japanese company put all their deadweight engineers on the project.


From my understanding of the "unintended acceleration" lawsuit, you could very well have had the exact same engineers who implemented the Toyota firmware :).


If there is never enough time to refactor and new features are always being pushed, when does anyone have time to write new tests?

TDD helps some developers focus and structure their work, but rarely does it save time. In situations where everyone is being pushed too hard for too long, saving time is more important. I would bet documentation is also a low priority.


> If there is never enough time to refactor and new features are always being pushed

That is a sign of bad culture, both in engineering and product. Whether it's starting a green-field project in a scrappy startup or building yet-another-feature for an established product, if the estimates are constantly redlining everyone's available time and never giving thought to maintenance, QA, testing, and code review, then of course it will always feel like that. When estimates include that stuff and you show product that you can ship features more reliably, more often, in the long run, they buy into it. If they don't buy into that, they are either very delusional, have only worked with absolutely perfect people, or utterly do not care how much extra time / stress the lack of quality causes you / the team / the company.


And in a bad culture other priorities can be more important. As an employee your job is to adapt and support the business. Writing tests in a culture that doesn't value the time spent is not helpful.


The problem is that the "bad culture" wastes more time and mental strain on addressing the consequences of the lack of tests than it would spend maintaining a proper test suite.

This waste exhausts morale.


Tests don't cost time.

Not true 100% of the time, but it's the right "default" mindset, because it's true the majority of the time.


tests do cost time, but an investment in an automated test can save orders of magnitude more time than it takes to write it... in the end, automated tests save a lot of time, as long as they (A) cost relatively little to maintain and (B) provide a reasonably useful guarantee of quality


From a pure number-of-keystrokes standpoint, tests add time.

If you know exactly what to write because you have done this 100s of times before, TDD will slow you down.

If you are unsure of the outcome of what you write, TDD will give you training wheels and help guide you. That may make some people quicker for a little while.


Pay attention, because it’ll be a while before anyone tells you this again: You are the sort of person people in this thread are warning others about.

Nobody who ‘needs training wheels’ is going to get them from/do TDD.

I’m more concerned about people who think they can fly without learning to walk.

Most of us can type 60-80 wpm. Have you ever gotten close to that while coding? Typing is incidental. The very easiest part of your day. You’re right, they type less, because they’re so into the smell of their own farts that they refuse to believe their bugs are bugs, and they make other people clean up after them.

Humans are fallible. We all have bad days. We all get distracted. We all misunderstand. We all change our minds. Don’t be so sure you got it right the first time. Even if the evidence supports you. You’ll be looking at a broken piece of code soon enough that you can’t figure out how it ever worked. Sooner or later it’ll be yours.


>If you know exactly what to write because you have done this 100s of times before, TDD will slow you down.

I once had this attitude. Then, I worked with other people. It makes all the difference. My perspective shifted when I was bitten by something small when making a small patch to a foreign system because someone else didn't leave good test cases behind.


I find that when working in a team, tests help save time by preventing people from stepping on each others' toes and breaking existing functionality.


> If you are unsure of the outcome of what you write, TDD will give you training wheels and help guide you. That may make some people quicker for a little while.

In my experience, if you are unsure of how you are going to solve a problem, writing tests only makes you slower. When you are coding exploratively, you will likely have to delete and completely rework what you did several times before you find what works. If you write tests for all but the highest level, those will just be scrapped along with the rest of the code.


Of course tests cost time. You are often writing twice the amount of code, and there is the time it takes for most CI systems to run all of the tests (often tens of minutes; I've worked on projects where it was hours).

But the reason we do it is because it increases quality.


I think the theory is that increasing quality will save time in the long run via fewer regressions and bug reports.

Making that math work, though, seems to depend on the idea of some sort of future crisis state, where normal development is slowed way down. (You'd need to avoid a big slow down in the future, in order to balance out the continuous extra time given to testing.)

Does such a crisis lurk in the future of every development effort? Hard to know. It's certainly not the only way technology projects fail. Plenty of products have passing tests but fail to find customers.


Exactly. It's likely that for some projects every line of code you write will in the end be a complete waste. TDD is only worth the trouble if the project succeeds; if not, it may just be more wasted effort. For this reason I think a case can be made to skip TDD on MVP traction tests in some cases.

A friend of mine just had his startup acquired, so his startup was an above average success and he told me 80% of what they built ended up getting scrapped.


The problem is you don't know which 80% will get scrapped.


I don’t see this as a problem as long as you make time to add tests later once you are convinced the code will be kept active through user traction.


Tests absolutely cost time, inversely proportionate to the raw ability of the programmer. A 95th percentile engineer can write cowboy code with zero tests that largely works. Enforcing tests could cause up to a 50% slowdown. It’s probably worth it in the long run, but for a time and cash strapped startup its a legitimate cost/benefit analysis.


God save me from code that "largely works".


Testing is absolutely critical, don't get me wrong, but you can't test what you can't predict, so there needs to be a distinction between tests that really stress the system in unknown ways vs verifying your ADD function did indeed add N consecutive numbers correctly.
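The distinction is visible even on the toy ADD example - the hand-written check pins one known case, while a generated-input test stresses it in ways you didn't predict (a Python sketch with hypothesis):

    from hypothesis import given, strategies as st

    def add_consecutive(n):
        # Function under test: sum of 1..n in closed form.
        return n * (n + 1) // 2

    def test_single_example():
        # The "verify it adds N consecutive numbers" style: one known case.
        assert add_consecutive(3) == 6

    @given(st.integers(min_value=0, max_value=10_000))
    def test_matches_naive_sum(n):
        # The stress style: compare against a naive oracle on generated inputs.
        assert add_consecutive(n) == sum(range(1, n + 1))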


> verifying your ADD function did indeed add N consecutive numbers correctly

This kinda depends on if it is a public or private (or `__private(self):`) method. If it's private, no need to test it. But suppose that rather than using something in an existing library, you are bothering to write your own ADD function and expose it to the rest of your codebase. Wouldn't that indicate that your function is special enough that it should be tested?


I mean yea it should be tested, but making a code change that propagates through the rest of the repository is really poor abstraction design. At that point, you might as well have one class called tester that holds pointers to everything and is about 100K lines long.


> I’ve found it a real struggle to get our team to adopt writing tests.

I find this hard to believe. Do others CTOs / team leads find this to be the case?

I've been a CTO of two small startups with 3-7 developers. We've had resistance to tests at some points (myself included). We've solved it fairly simply: all pull requests (PRs) require tests. PRs are rejected immediately without tests. If a PR doesn't have tests and it is critical to get in, we open a new ticket to track adding tests. It isn't foolproof, but it does result in a high degree of test coverage.

And once developers understand how and where to write tests, they usually see the benefit quickly and want to write more tests.


I'm not a CTO but I do lead the dev team at our agency (was previously 16 devs, but we've slimmed down to 7 currently). I want to preface this by saying that at an agency, your biggest enemy is always time; sales teams have to sell projects for the absolute minimum in order to get a contract, so you can't waste time on non-essentials for most projects.

That said, the biggest resistance I have found is "this feature is due in three days, I need two and a half to finish, and then we have another half to review and find bugs." In the end, the biggest issue is that we have time to test on the spot or write tests, but not both. You can scrape by with just manual testing, but I don't think anyone would ever rely on automated tests 100%.

Our larger projects are test-backed, and our largest even reaches 90% coverage, but the only reason we wrote tests for those was that we knew we would be working on them for 2-3 years and it was worth the time in that case. I wish this wasn't the case, but I've found it's always the argument against automated tests in my corner of the market.


In my previous agency life, this was something that I experienced as well. A short lived product that was due in less time than any sane dev would estimate. We all knew that we "should" write tests, but there just wasn't time. And in 6 weeks the project would be relegated to living in source control because the campaign was over.

It made hiring devs fun. Trying to explain to people why it was that way, and their insistence that software development doesn't work that way.


> A short lived product that was due in less time than any sane dev would estimate.

> And in 6 weeks the project would be relegated to living in source control because the campaign was over.

That is exactly it for 90% of agency projects. Underquoted to get the deal, a rapid development cycle that leaves the devs feeling dead, and then once that first release is out, you have maybe 1 or 2 small updates and the project is never touched again, or at least not for a year or two.

There is no world where it makes sense to write tests for these projects.


What does 'agency' refer to in this subthread?

Agency for what?


Advertising and marketing agencies: places where technology takes a back seat to marketing/promotions, and where projects number in the thousands across websites, apps, games, many systems, clients, new technologies, etc.

Every developer/engineer should work in an agency for a while: the sheer amount of work is huge, the lifeline of said work is short, and projects are primarily promotions and one-and-dones in many cases.

What we did at the agency I worked at was try to harvest systems from common work. Landing pages became a landing-page system with base code that was testable and common across all of them, and then a content management system that supports agency specifics. Promotions/instant-win systems with common code that could outlive the 3-week promotion became a prize/promotions system that ran all future promotions and improved AFTER most promotions, due to time constraints. Game systems for promotional games / advergaming appeared after new games and types became common or re-usable, etc.

Many times you have to take an after-the-ship approach and harvest the systems that make sense from the sheer amount of work you are doing across hundreds of projects. That is where good engineering really comes along: on subsequent systems, after initial promotions, projects, or games/apps proved a need or served as a prototype for how to do future projects quicker and with more robust systems.

Testing and code written specifically for one campaign may be re-usable or not, but later you can harvest the good ideas and try to formulate a time-saving system for the next one, including better testing and backbone/baseline libs/tools/tests.

I have worked in agencies for 5+ years and game studios for 5+ years, and both are extremely fast paced; the harvesting approach is usually the one that is workable in very pressurized project environments like that. Initial projects/games/apps are harvested for good ideas, and the first one might be more prototype-like, where testing/continuous integration might not fit in the schedule the first time around; it might not even be clear what to standardize and test until multiple projects of that type have shipped. Starting out with verbose development on new systems/prototypes/promotions/campaigns/games might not fit the budget or the time allowed on the first versions, as they might be throwaway after just a few weeks or months. There is a delicate balance in agencies/game studios like that, where the product and shipping on time matter more on the first go-around because the project timeline and lifeline may be short. Subsequent projects that fit that form are where improvements can be made.


I remember my agency days.

Nowadays I work on a single long-running legacy project where tests make sense. Back then, I read a lot about how testing was the "right thing to do." But I also realized that most of the time (a) the client wasn't going to be willing to pay for the tests and (b) odds are that once we launch the product, that will be the last time I ever look at it.

Maintenance will occur in five years when sales talks the client into scrapping the entire thing and rewriting it -- the client won't be willing to pay for maintenance or automated tests, but somehow sales could always sell them on a total rewrite.


It's a very interesting setup: all prod code is "throwaway".

I wonder if each such project is built completely from scratch. If not, the reusable parts can be improved over time, and covered by tests.


> I wonder if each such project is built completely from scratch.

For us, it's a mixed bag. We have a CMS we use for most projects that we did not develop, but we have developed our own packages/blocks for it that are included in every project that bootstrap and whitelabel the hell out of the CMS to provide the functionality we need in every project. From a data standpoint, one of our packages replaces several dozen classes and hundreds, if not thousands, of lines of custom code in every project.

When it comes to more custom projects, specifically ones that never see public use (like a custom CRM, admin dashboard, CRUD-based system, API backend, etc.), we build using the Laravel framework, which bootstraps away all of the authentication, permissions, routing, middleware, etc. and gives us a very good blank slate to work with. For these, everything is mostly from scratch, minus what we can use third-party packages for (such as the awesome Bouncer ACL). We have a front-end library that I wrote to abstract away common tasks into single-line instantiations, but it's our experience that these projects are being built on a blank slate for a reason. These are also the projects that may actually see tests written, although not all will.


The typical stuff an agency can reuse is all covered by frameworks and libraries anyway.

You take an existing CMS or shop software and customize it, or take a web framework and build a very customer-specific service on top of it. Most everything you can share between CMS projects is already part of that CMS.


I find this view (and the replies) interesting. One thing that I've experienced after writing a lot of tests is that once you know the patterns, implementing TDD becomes effortless. E.g. (Python):

Need to test interfacing with an SDK correctly?

Sure, patch the SDK methods and ensure they are called with the proper parameters

Also, for extra coverage, add a long running test that makes actual calls using the SDK. Run these only when lines that directly call the SDK change (and ideally there should only be a few of those).

Need to mock a system class?

Sure - Here's the saved snippet on how to do that
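
To make the first pattern concrete, a self-contained sketch (fetch_weather is a made-up unit under test, and requests.get stands in for "the SDK method"):

    from unittest import TestCase, main
    from unittest.mock import patch

    import requests

    def fetch_weather(city):
        # Hypothetical unit under test: a thin wrapper around the "SDK".
        resp = requests.get("https://api.example.com/weather", params={"q": city})
        return resp.json()

    class FetchWeatherTest(TestCase):
        @patch("requests.get")  # swap the real network call for a mock
        def test_calls_sdk_with_proper_parameters(self, mock_get):
            mock_get.return_value.json.return_value = {"temp": 21}
            self.assertEqual(fetch_weather("Oslo"), {"temp": 21})
            mock_get.assert_called_once_with(
                "https://api.example.com/weather", params={"q": "Oslo"})

    if __name__ == "__main__":
        main()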

---

This of course applies only if you repeatedly work on projects that use the same stack. If you don't, then I understand that it can be pretty hard. But basically, over time, writing tests must become easier, or else that's a sign that something in the process is not working correctly: knowledge isn't being transferred, or things aren't being done uniformly.

Ideally once you get past a certain point, testing should be just a selection of patterns from which you can choose your desired solution to implement against a given scenario.

I accept that I could be missing something here so please take what I say within the context that my thinking applies to work that can be described as technologically similar.


I always find code coverage such a useless metric: if you have two independent ifs next to each other, and one test goes into one if and another test into the other, you have 100% coverage. Congratulations. But you've never tested what happens when you go into both.
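
A minimal sketch of that trap (hypothetical function, pytest-style tests):

    def apply_discounts(price, is_member, has_coupon):
        if is_member:
            price *= 0.9   # 10% member discount
        if has_coupon:
            price -= 5     # flat coupon
        return price

    # Each test exercises one `if`; together they touch every line: 100% coverage.
    def test_member_discount():
        assert apply_discounts(100, True, False) == 90.0

    def test_coupon_discount():
        assert apply_discounts(100, False, True) == 95

    # Never tested: both branches at once. apply_discounts(4, True, True)
    # happily returns a negative price, and the suite stays green.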


I agree that it is a useless statistic, especially when comparing unit vs integration vs functional vs smoke testing. There are different types of tests and just because you are reaching 90% of your code does not mean you are thoroughly testing it.

The only reason I brought it up was to show that we don't skip test writing entirely and the projects where we do write them, it isn't like we just wrote a test to check that "Project Name" is returned on the homepage and called it a day.


A few years back, a person I worked with was tasked with implementing code coverage. Part of that task was that they also had to get our code base up to 80% coverage or so.

They wrote stupid test after stupid test after stupid test. Hundreds of them. Oh em gee. It was like that story of Mr. T, where the army sergeant punished him by telling him to go chop trees down, only to come back and find Mr. T had cut down the whole forest.


That's just the basic coverage metrics, there's more than that: https://en.wikipedia.org/wiki/Code_coverage


There are different kinds of coverage metrics. If line coverage is not enough for your liking, you can always go for full path coverage. You'd have to write an exponential number of tests, though: ten independent branches already means 2^10 = 1,024 paths.


I find that tests pay off pretty quickly in terms of productivity -- somewhere around a week. There are a couple of caveats, though. First, you have to have a team that's already good at TDD (and not just test-after-the-fact). What I mean by TDD is hard to describe succinctly and especially since I said it's not test-after-the-fact, it's easy to think that I mean test first. I don't. To me TDD is a way of structuring the code so that it is easy to insert "probes" into places where you want to see what the values will be. You can think of it a bit like having a whole set of predefined debugger watch points.

With good TDD (or at least my definition of it :-) ), the programmer is constantly thinking about branch complexity and defining "units" that have very low branch complexity. In that way you minimise the number of tests that you have to write (every branch multiplies the number of tests you need by 2). The common idea that a "unit" is a class and "unit tests" are tests that test a class in isolation is pretty poor in practice, IMHO. Rather it's the other way around (hence test driven design, not design driven tests). Classes fall out of the units that you discover. I wish I could explain it better, but after a few years of thinking about it I'm still falling short. Maybe in a few more years :-)

In any case, my experience is that good TDD practitioners can write code faster than non-TDD practitioners. That's because they can use their tests to reason about the system. It's very similar to the way that Haskell programmers can use the type system to reason about their code. There is an upfront cost, but the ability to reduce overall complexity by logically deducing how it goes together more than pays off the up front cost.

But that leads us to our second caveat. If you already have code in place that wasn't TDDed, the return can be much lower. Good test systems will run in seconds because you are relying on the tests to remove a large part of the reasoning that you would otherwise have to do. You need to have it check your assumptions regularly -- normally I like to run the tests every 1-2 minutes. Clearly if it takes even 1 minute to run the tests, then I'm in big trouble. IMHO good TDD practitioners are absolutely anal about their build systems and how efficient they are. If you don't have that all set up, it's going to be a problem. On a new project, it's not a big deal for an experienced team. On legacy projects -- it will almost certainly be a big deal. Whether or not you can jury rig something to get you most of the way there will depend a lot on the situation.

So, if I were doing agency work on a legacy system... Probably I wouldn't be doing TDD either. I might still write some tests in areas where I think there is a payoff, but I would be pretty careful about picking and choosing my targets. On a greenfield project of non-toy size, though, I would definitely be doing TDD (if my teammates were also on board).


TDD can be faster for some, but you are forced into a funnel that involves another step.

If you know exactly what you are writing, it is quicker to add your changes, jump to the next file, and add your changes there. If you are constantly checking the browser to see whether what you wrote works, TDD can help.


> I find that tests pay off pretty quickly in terms of productivity -- somewhere around a week.

I think you overestimate the agency project life cycle. Most of our projects are built and ready for client review in 2-3 weeks total. Once the client makes a few days worth of changes, the project is shipped and we likely do not look at it again for another year or three.

That said, there are always long-running projects and those are the ones you try to include tests in.


Interesting. I worked very briefly in an agency a long time ago. Our projects were on the 2-3 month time frame. I suppose it depends on what you are doing.


We have plenty of those as well, but the overwhelming majority of them are about 2-3 weeks of work once we get started


It's much easier to convince people that tests are necessary if you're starting a new project. Problems arise when you're working on a legacy codebase that never had tests to begin with. Often, the code isn't testable at all.

IME, there are far too many "senior" devs (who absolutely should know better) who never worked on any testing-heavy teams that just don't see the point. After all, there's QA, and it's not like THIS code should break THAT code in a seemingly-unrelated part of the codebase...


I'm CTO at a bootstrapped agency that now has 30+ people, and I couldn't agree more with the author.

Sure, you can use your authority to force people, but should you?

Smart people are hard to come by, but once you have them you should let them work; when you tell them how to do their job, you're implicitly assuming that you know better. Besides, if you force them, you achieve nothing but some brain-dead tests that are going to haunt you later, and getting a budget to "rewrite tests" is a fairytale.

The art here is to build a culture that embraces tests as a powerful tool, so newcomers quickly see the benefits and start to write tests in the right places, not for the sake of an artificial metric.

Besides, there are plenty of places where having high coverage is going to be a waste of time:

- throwaway prototypes,

- heavy UI code full of animations (it needs to look right, which is hard to test),

- infrastructure code, if you have just a few servers of a particular type,

- customer projects with unreasonable deadlines that are not going to be touched again.

So getting your team to write tests is a hard job, and a PR policy alone won't help much.

The things that worked for me were:

- write tests that make sense yourself in the early stages of the project,

- pair with your employees and write tests with them,

- do peer reviews and suggest what could be tested and why it makes sense.


Getting a team to write tests is change management 101:

People are resistant to change when they don't know how it benefits them directly and immediately.

My suggestions have been:

- Giving developers slight nudges every time they get frustrated with developing when tests aren't present is a good way to help them see the benefit: "Imagine how much easier it would be to write this piece of code if you had tests in places where this function calls other things."

- Enforcing it during commits (as you suggest, using PRs)

- Reminding your whole organization that, while you migrate to more testing, the velocity of development will be impacted. This is really important, because it means people outside of the dev team also need to see the benefit.

- Eliminating "deadline cultures" first and then implementing unit testing


At my old telecom job, we had weekly MOP (method of procedure) meetings, basically the gate process from lab to production.

One of the MOP checkoff boxes: test results.

So many times you could tank someone by asking "Where are the test results?" and they would have to reschedule their maintenance window. If you pissed some ops engineer off, expect the question "Where are the test results?" at every MOP meeting.

Good times.


Team lead/manager here. I frequently have the opposite problem: people who think code coverage (quantity) is as important as, or worse, more important than test quality. I'd much rather have low-percentage, high-quality coverage than high-percentage, low-quality coverage.


I completely agree. I've met a few developers who think that 100% coverage is required, and that a complete set of tests will save them from all bugs. Perhaps unsurprisingly, they wrote pretty crap code. Passed the tests, but it was unpleasant to work with.

I like to see good coverage (say, 85%) because the act of trying to cover that much has led us to discovering some bugs that would otherwise have gone unnoticed until someone ran into them in production. But 100% line coverage is still a tiny, tiny fraction of covering all permutations of how that code is used, so I feel like trying to hit some kind of holy grail perfect coverage target over-emphasizes the value of tests. While tests can absolutely be very useful, it's the actual running code that needs to be high quality, the tests are just helpers.


How many tests does a PR need? One? Five? When would you not write tests for something (that something not necessarily being a PR, but maybe a unit or feature)?

Tests, like any process, should be serving you and your goals. You shouldn't be serving your processes or testing practices. This sort of un-nuanced thinking isn't indicative of a high-performing startup or CTO, IMHO. Perhaps your policies are not directly indicative of your real thoughts on the matter?


I guess it might sometimes be fine to be a relativist and write off the need for tests as a result of "nuanced thinking," but I think you have to accept that you are running a risk by shipping untested code into your product.

As others have said, line coverage is a misleading metric. Ideally, your tests would fully cover all _program states_, and even 100% line coverage doesn't guarantee full state coverage. If you have untested states, then the following facts are true:

- You don't have a formalized way of modeling or understanding what will happen to your program when it enters an untested state.

- You have no way to detect when a different change causes that state to behave in an undesired way.

So the answer to how many tests a PR needs is: as many as needed to reduce your software's risk of failure to a minimal level. And that means failure right now and in the future, because you will likely be stuck with this code for a while. Since it's difficult to know how much a future failure will cost your company, IMO I always err on the side of testing as much as possible. Plus, good comprehensive tests have other benefits, such as making other changes/cleanups safer by reducing the risk that they unintentionally side-effect other code.


Those facts are untrue. If I am using a sound static type system, I have a formal way of modeling and understanding what will happen to my program, even without tests.

If a function has been statically proven to return an int, I know it will either return an int or not return at all. It can't suddenly return a hashmap at runtime, no matter what untested state it enters.
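
A small illustration of that guarantee, using Python type hints plus a static checker like mypy (gradual rather than sound, and parse_port is a made-up example, but the idea is the same):

    def parse_port(raw: str) -> int:
        if not raw.isdigit():
            # A static checker rejects this line before anything runs:
            # the annotation promises an int, not a dict.
            return {"error": raw}
        return int(raw)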


Code without test code doesn't mean untested code. And vice-versa.

Unless you're actually writing complex tools - no, you're probably not getting a "formalized way of modeling" what happens to your program.

If somebody tells me "hey, I have to keep manually testing this and that, I'm losing a lot of time, how about I spend 2 days writing my test thing?" - I'll say Sure!

But if someone tries to convince me in the abstract - I'll be skeptical. Developer busy-work is real.


If you have any concurrency in your system then you aren't going to cover all the states using unit tests. You'll need some sort of formal model for that.


> How many tests does a PR need?

Enough tests to cover each of your specs. Adding new functionality to your product? Your tests should cover the cases you put in your specs. Correcting a bug? Your test should trigger it with the old code.

You can have 100% code coverage with unit testing, and it will do jack-shit for your users when they enter a negative number: if there was no code meant to handle that case, it was never tested.
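
To make that concrete (hypothetical function):

    def withdraw(balance, amount):
        return balance - amount  # the only line; one test covers it fully

    def test_withdraw():
        assert withdraw(100, 30) == 70

    # 100% coverage, yet withdraw(100, -30) silently *adds* money. The
    # missing validation branch contributes no uncovered lines, because
    # coverage can only measure code that was actually written.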


> How many tests does a PR need? One? Five?

Enough so the overall coverage doesn't go down.


Be careful: coverage is a proxy metric for good tests. Striving for high coverage can mislead you about the quality of your tests.


On the flip side, encouraging good coverage usually ends up uncovering some bugs that might otherwise have gone unnoticed until they bit someone.


On the other flip side, it encourages writing tests that have zero business value.


High coverage is necessary but not sufficient, sure. I don't think you can have a good test suite with low coverage (< 80%).


As an engineer, I would now be very wary of joining any company unless "Do they write tests for their code?" is part of the hiring criteria. If you want something to be part of your culture, it needs to be part of the judgement exercised by humans in the hiring/performance-evaluation process. I say "by humans" because you need someone exercising actual judgement rather than checking a box.

What you outline seems reasonable, at least in an environment where you sometimes have hard deadlines (e.g. ticket sales for this festival go live next week). Outside of that, I'm curious what cases there are where a PR is both critical to merge and doesn't need tests. When I review a PR, I look at the tests as one way of thinking through "what edge cases have already been accounted for here?"


I totally agree that requiring PRs to have tests is a good way to solve this - it's what we've adopted after trying a few approaches.


I've never found that developers resist reasonable unit tests, though teams may squabble about what _real_ unit tests are, or may have been burned by poor approaches to unit testing in the past. If you can find the cause of the resistance, all teams I've worked with have been happy (even excited) to get better testing in place. It makes them more productive and more successful when done properly.

What I find more common is for the business to be unprepared to make lateral changes to a product. Even rational unit tests are a medium-term investment. You need to spend time developing features customers don't see, and apply those tools for some time, to see quality differences. That can be difficult to justify in a number of fairly normal business scenarios (low cash flow/reserves, high tech debt/regret, etc.).

To help offset the cost (and the delayed benefits), I've always suggested phasing in unit tests strategically. Pick a module or cross-section of the product that is suffering from bugs that customers see (i.e., affecting revenue) and add the minimum viable tests to that. Repeat as needed, and within months/years you'll have coverage that fits the business needs well.


I haven't had this problem either. We also tend to hire more experienced folks, though. A lot of startups are hiring junior folks who might not know the best way to structure things. There aren't a ton of incentives to write tests at startups, and it's hard to get right. It's very different when you write an infra product, though :). It likely depends on the kind of product you're shipping.

Consumer might not need that as much as enterprise.


Most test writing feels unnatural (especially lower level unit tests). I think this is a tooling issue and an inherent problem with unit tests.


Can you recommend good resources (articles/PDFs, Lynda, etc.) on writing tests? Less about the how (though that's important too) and more about the when, why, and for what parts/functionality. I find that when I do write tests, some are so trivial that I end up stopping altogether.


I've actually had a bit of luck in the past with the idea that every PR required a test in some unrelated bit of code. This accomplished two things: coverage went up, and junior devs had to explore the code base and learn.


I have not had this problem as a CTO. I try to lead by example with comprehensive tests for everything I commit. We also track test coverage with CodeClimate and make checking tests a part of our human code reviews.


It's very hard to write tests in ES5, Angular 1, and Karma. I am testing way more with React (Jest with snapshots and Enzyme) and Redux; my team wouldn't have 100% coverage without easy-to-test code.


Now say I have a CTO who doesn't like (automated) tests because he thinks they're a waste of time and mostly something a type system should do for you (never mind that we're in dynamic, loosely-typed land). How do I introduce tests? I've tried many times; they always end up not getting adopted.


Some options:

- Educate your CTO.

- Just start writing tests. Consider whether you can pull this off.

- Wait for something to go wrong where tests would have caught the problem earlier. That would be a good time to bring testing up again.

- Find another job.


A and C, if not done tactfully, could lead to D.


Do it for yourself - tell the CTO you're doing TDD to make your own process sane. It would be weird for others to object to you adding tests to your own code. Then others will see your awesome code with great coverage and no bugs in PRs and it may even catch on. Convince one or two colleagues to do it with you.


Respectfully, I disagree. I've found that "talking about doing it" will cause problems if you aren't able to do it and hold the schedule with everything else. From personal experience, for the parent post: it's best to implement what unit tests you can and grow them yourself slowly, at whatever pace you are comfortable with, but set the bar high for yourself, beat it, and then maybe try to get others on board. This only holds if there is no mentor you could talk into helping you. Just personal experience. :-/ If the culture doesn't encourage tests now, announcing you will be writing them might only hinder the effort.


forgiveness > permission


Oh yeah, go ahead and do TDD to prove to your CTO that he is right.

TDD is a tool for specific needs; just doing TDD because you heard it's great will kill the team's productivity and whatever product they are working on.


Guessing because it's usually presented as something they should do without changing timelines or resource levels.

Sure, it will pay off, but not right away. Something needs to cover the interim.


So hiring is pretty hard but I kinda disagree with most of the points there..

* only hire when desperate

Strong talent is so hard to get you should probably always be hiring. If you're hiring too many people your bar is probably too low.

* only hire to keep up with growth

You need to be at least a little preemptive. The hiring process itself can take months, plus the time to train even good new hires is at least a few months, AND you need your most sr. engineers to help interview so that is time they aren't writing features when you're trying to hit that critical milestone.

* Don’t hire someone to do something you’ve not yet figured out

This is probably also a mistake, as software engineering has become pretty specialized. Specialized frontend, devops, or data engineers can bang out solutions that even a strong generalist would take ten times longer to approximate (and most likely anything they build will be throw-away). There is so much low-hanging fruit in engineering productivity / business value in getting at least a decent 80% solution for most of these areas that it's worth hiring at least one strong specialist to help shepherd development.


> Don’t hire someone to do something you’ve not yet figured out

I think this is not an indictment of hiring for something you do not know how to do, so much as it is of hiring someone before you have a defined job for them to do.

When you’re hiring an engineer, presumably you’ll be placing them onto a team that is responsible for some well-defined part of the stack. So you should know what skills you’re looking for when you’re interviewing. This should make interviewing easier; if you know what capabilities you need a new hire to have, then you know exactly what to test for in new candidates.

(This is yet another reason why generic whiteboard interviews make no sense. They’re optimizing for solving problems that could be wholly unrelated to the problems your company faces on a daily basis. I’m surprised more companies do not give interviews that focus more specifically on their relevant problem domains.)

If you don’t know what the new hire is going to do when he or she starts work, then you have no idea what skills to measure in the interview, and end up settling for the “least common denominator” of whiteboard coding ability.


The whiteboard is nothing more than a hazing ritual, testing marathon runners on their 100-yard dash. As CTO I opted for the following approach:

1) Give them a take-home project in an area relating to the position they want, to weed out the unqualified.

2) Bring them on site and speak with them in person, along with other members of the team they will be joining.

3) It's fairly easy to tell who's an impostor if you are knowledgeable yourself, and a group of engineers can identify a faker fairly quickly.

4) Always consult your team about the new hire and don't decide unilaterally, or their failures will reflect on you. Even their success won't make up for it if they turn out to be a nutjob and you vouched for them.


This is good advice when you have a lot of money to spare. Startups are sometimes more strained.


To be clear this advice is for small early stage startups that are pre-product market fit.


In startups, a specialised engineer would not necessarily add much value, given how fast things change and the general lack of proper specs. A specialised engineer can find a good solution within a proper structure, which startups lack.


Enjoyed reading this article; all valid points. However, the one thing that stood out to me was how light I was on effective principles of management and leadership. As a CTO of an organization of more than a handful of people, you eventually "get things done" largely via other people rather than being hands-on yourself. I had to read a lot of Harvard Business Review to gain the skills and confidence for that. Just like programming, there are indeed tangible skills to learn. It's not just common sense, and you're not just born with it.


It’s funny how, as you progress through a career and gain responsibility, those HBR articles go from seeming like a bunch of Markov chain corporate-speak to being on-target for that exact problem you had last month with the leadership team.


Are you sure they really have any more meaning, or are you just ascribing them meaning that exists more in your "evaluation context" than in the text itself?

Compare: the way meditation is usually taught. There is something "there" to communicate, but meditation teachers mostly fail to communicate it. To use an old phrase, they are "pointing at the moon"—but, to stretch the analogy a bit, they're doing this pointing indoors, where the sight-picture you get by following the tangent of their finger does not, in fact, contain a visible moon. You have to imagine taking the thing they're doing (pointing), and reframe it in a context where there is a hypothetical moon to see. Whether that helps you find the moon is more about what you know about the sky and fingers and angles, than it is about how well the meditation teacher can point. And this is why the teachers end up failing to communicate: they did not, themselves, figure out how to "reach enlightenment" by absorbing a verbalized lesson, but rather by pondering a gestalt mess of ideas that have little in the way of words associated—so they can't just turn that gestalt mess back into words.

So: are HBR writers pointing at a visible moon, or are their words Markov-chain-speak because they're trying to backwards-chain the gestalt mess of their own mostly wordless understanding into a verbal lesson?


What is up with the disrespect I constantly hear for wordless understanding? Not everything is best communicated verbally. There's a reason traditional education is often described as a series of falsehoods.


There's nothing wrong with wordless understanding per se; the thing that's "wrong" is thinking that you have words (i.e. a teaching) that can effectively, repeatably communicate a concept, when you actually just have a wordless understanding.

The problem of meditation teaching is false positives: people experience enlightenment while pondering some koan, so they think that that koan actually helped, and pass it on. It's superstition. Anything could have helped. Something that truly helps should help more people than average, more often than chance—and if you've got that, you've got words.


False dichotomy. Understandings aren't completely wordless or wordable. They fall along a scale.

> Anything could have helped.

If something helped a person, and they want to pass it along, even if it's difficult to communicate in a tangible fashion, I'm not going to stand in their way.


Sure, but if I want to learn a difficult-to-communicate lesson, I would hope that the people who have a wordless understanding would keep their communicating to themselves—unless and until they come up with some coherent words to match their thoughts, words that they can be sure can be used to reconstruct those thoughts without their brain there to help.

People don't yet know what they don't know, until they know it—so it can't be the learner's task to preemptively avoid vacuous lessons. That responsibility has to fall to the teacher.


Sometimes what sounds like nonsense hints at a higher truth.

http://m.nautil.us/issue/40/learning/teaching-me-softly-rp


Same goes for philosophy.

However, a good writer should be able to convey even the most advanced topics in accessible ways. Often when I see someone relying on jargon and insider language too much, they strike me as a poor writer, regardless of their grasp of the source material.


The risk you run here is internally over-emphasizing events that happened more recently as more important. That could make them more relevant to your recent experience, but not necessarily of more value than the problems you were solving earlier in your career.

I have also found increased risk of bikeshedding. The higher you go, the more likely you're working cross-disciplinary with ego-intellects. That also leads to suppressing dissent (hierarchy relationships more than experience-based), leading to worse decisions.

Please don't listen to the HBR articles; they're generally very terrible and can often be summed up as survivorship bias.


I loved "High Output Management" for a concrete handbook on a lot of these topics: https://www.goodreads.com/book/show/324750.High_Output_Manag...


I have quit jobs because we kept bad hires too long and then didn’t fight to keep good hires from walking away. I think grooming and retaining talent is just as important as providing technical leadership. You need to be strong in both areas.


I've seen exactly this in a local well-regarded startup. Incompetent hires with problematic behaviors thriving and being protected, and competent hires being unprotected, not cared about, and almost pushed out.

They would hire almost anyone, and then not take active action in maintaining a healthy staff. Needless to say, it's not going very well over there, regardless of the CTO being quite technically proficient.


Incompetent PMs can be an issue too, between inaccurate/incomplete feature planning and shoving their responsibilities onto unwitting developers. I'd argue that a great PM is worth as much as the much-vaunted 10x developer, if not much more.


What always stands out about startup reflections like these is how utterly undefined, freeform, and rapidly evolving the roles can be.

The old fire hose saying is true, but it’s not just that you’re drinking from a fire hose, it’s that you often don’t know what’s coming out of the hose next. One minute deep technical decisions, the next minute helping to establish hiring philosophy, and cashflow and growth always on a background thread.

After a few years of this, I think my experience is not uncommon. If you exit and, through whatever circumstance (success or failure), come back inside an F500 company, you find that the trial by fire has force-fed you a vast amount of new skills without your even realizing it.

On one hand, the realization is really empowering: you feel comfortable taking on, without much thought, various high-impact tasks that you could never have jumped right into before. On the other hand, it can feel limiting, because F500 companies tend not to encourage even the most talented technical people to cross roles and help define company-wide hiring practices.

It's an invaluable education, but I don't know if an MBA is quite the correct analogy; I'm not sure what a better comparison is.


Much of this is summed up to be:

CTO positions are much more about technology vision (e.g. choosing frameworks/technologies that can last + serve your needs today and tomorrow) and hiring/retaining talent. Everything else is gravy.


It depends on the company. For some, CTO is merely a business title for an outward-facing person, while the (S)VP of Engineering / Director of Engineering is more politically powerful, since he/she leads the engineering division and does not report to the CTO at all.

Many of the Fortune 500 companies have both a CIO and a CTO, and they are not necessarily peers. In recent years a bunch of new titles (big data chief, digital media/innovation chief, process and technology chief, etc.) have made the political scheme more confusing and toxic. Many of them end up reporting to the CEO directly. There's also the EVP rank, so go figure...

Again, it depends.


Is choosing frameworks and technologies really a thing that CTOs do? Picking the right tool for the job seems more like a tech lead/architect's job. I could see the CTO pushing back on those choices from time to time if something is being drastically over-engineered, but declaring what technology is to be used seems like a job far below a CTO.


In a word, no. But in startup-land, where the total number of people on the engineering team is, say, less than 10, chances are good that the CTO will also play a lead engineer and/or architect sort of role, in which case they will play a part in designing the architecture, selecting frameworks, and so forth.

CTO of, say, US Foods? No, of course not.


Why call the role a CTO then? If the role is closer to a tech lead or architect, just call them an architect.

This has always confused me in startup land. There will be a full C-suite in a company of 10 people, despite the fact that those C-suite folks' day-to-day would look nothing like the corresponding corporate position.

Just call it what it is instead of inflating titles.


> Why call the role a CTO then? If the role is closer to a tech lead or architect, just call them an architect.

I agree! How many times have I gotten job offers that read, basically: "CTO/sole developer". That's inflated and meaningless, like being one of 100 Senior VPs at a bank.


Because the two agents in this game both benefit from inflating titles, without any direct perceivable cost.

The CTO/lead gets a better title to inflate their ego and (possibly) future earnings.

The company gives the employee something the employee values but costs the employer nothing.


What do you call the person running the company of 10 people? I guess "team lead" would be a non-inflated title, or just "manager"? It would be pretty strange to have to explain that title to anyone outside of the company, though.


"founder"?


I think there are some semantics here. A CTO at any level needs to have an understanding of (1) the current stack of technologies and how it improves the business, and (2) a future stack of technologies and how it may improve the business.

Specifically, this could be as minuscule as "We're using RoR for our website" or as broad as "We need to have sensors in every food package we ship to manage our supply chain, and we use the IBM IoT platform to do this". The point is to have a defined vision with subsequent technology choices behind it. Whether or not you have a lieutenant who helps drive those decisions is moot.


This article seems focused towards startups, and the chances of most startups even having a tech lead/architect are pretty slim. So yes. Many CTOs do pretty much 'everything' for quite a while until the funding is there to build a real team.


And technical leadership: training your employees and building the right culture, one that delivers :). I would put that first, though.


> Don’t hire someone to do something you’ve not yet figured out

Hum. I would say the reverse. Bring in people that are smarter and know more than you.


I think that by "figure out" the OP did not mean "learned it to bits" but more like "learned enough to be able to hire for that task".

I was bitten before by hiring specialists when I did not know what the task at hand was, what metrics I should expect, what kind of timeline was reasonable, or what the potential gotchas were.

And for a person with the CTO title, I think that's a must-have.


Both points are true:

Don't hire a role you think you need until you're sure you need it. Sometimes startups think "we need an HR person" or "we need a marketing person" before those jobs are actually at the point where they require a full-time person.

But after your first few engineering hires, you will probably know well whether you need, say, a backend engineer. You will have people doing some of that work, and be able to look at your roadmap and estimate correctly.

But for first-of-their-type roles (like my marketing or HR examples), that's harder - often part of it is startup leadership thinking "we could be doing so much XYZ I don't know about", instead of "we're doing 10 hours of XYZ a week and I know we need 40".

Once you've decided you need the hire, you want to get a person as smart as possible.


You have to know enough about the thing to know if the person you are hiring is smarter than you at it.

I see this all the time in hiring and acquiring vendors. Management just wants to fill missing talent, but then can't tell they are getting mediocre work.


If you don't know how to do something yourself, you won't even be able to identify someone who is better than you in that field.

I've seen people who don't know how to market their product go out and try to hire a marketing guy. You might luck out and get someone perfect for you, but I've never seen it.

Usually they just end up wasting a lot of money and learning some hard lessons.


So how do you hire effectively as a CEO? There are too many areas for you to be knowledgeable in them all, yet you need to be able to hire top talent across a variety of areas.


The method I've found is to ask people I respect a lot, "Who is the best X that you know?" Then call/email them, saying, "I'm the CEO of Y, and I'm trying to find out what a good X looks like. So and so said you're the best she knows. Can I have 30 minutes of your time?"

Do 5 of these, and you'll have a good idea of what someone good looks like. (And those 5 may give you some candidates)

This is very difficult, though, because things like "organizes the team to hit quota every quarter" can come in many different forms.


You hire a CTO and let them do their job. How do you hire a CTO? You learn intimately how to hire for such positions, what their day-to-day jobs are, etc. Only then can you evaluate a candidate for the CTO position.


Didn't Steve Jobs say something about how A players hire A players, while B players hire C players? Because we all know that if you start hiring C players, they will just start hiring D players, and so forth. The Bozo Explosion, he called it.


There is a balance, I guess. You don't need to do all the shooting yourself, but you need to know where to point the gun, so to speak.


That isn't the reverse. You still need to figure out what you need yourself, then get a person to do that job 10 times better than you would.


"Only hire when you feel you’re completely desperate for the role". Maybe for a tiny, extremely lean startup. But for anyone else if you wait until you are desperate you will end up hiring the first person that you think might do the job. That doesn't sound like the right way to go to me. But maybe "desperate" is relative.


Although I'm not a founder, the rule I espoused in the early days (<20 people) was to not hire for a role unless 50% of a person's time was collectively being spent on that role across the company. This was a great way to be disciplined in determining what was actually a bottleneck for our growth.


> ...it’s a blessing that my predilection for hipster technologies has not caused any serious problems.

It's entirely possible that this was the primary source of his problems with hiring, firing, testing, and a lot more.

The technology you choose determines which technologists you attract. And it's not a superficial thing, it actually says everything about the CTO's own technical skill, judgement, and experience.


I was thinking that most of the problems he notes are the result of that litany of tech. How much of that was really necessary or appropriate?


Couldn't agree more. It does not seem like a wise thing to do.


As time goes on, the CTO becomes a pretty flexible position, somewhat analogous to that of a COO. This article was useful for me to figure out the kind of options I had as a CTO, in terms of specializing, as the company got progressively bigger: https://www.linkedin.com/pulse/five-flavors-being-cto-matt-t...

Early on, like the OP discovered, you pretty much have to do it all, but you slowly remove yourself from a lot of those tasks as you find better people to replace you in those areas.


I like reading posts like this one. Maybe they serve as a form of therapy for me, a reminder that I'm not in this alone. There are others in a similar boat, fighting the good fight, making similar mistakes, and having the same realizations.

Very well; now, I can go back to work with my head up high. :)


> "I appreciate now that technologies have a surprisingly short lifespan"

This fact alone makes me so glad that I stuck with older tech that has withstood the test of time for our own SaaS. I know that we have users from bleeding edge tech companies sign up for our service, then run away when they glean the 'ancient' tech that it runs on - but then again, I think we have outlasted many other new tech frameworks/languages that have rocketed on high, then fizzled out into obscurity in that same time.


What was the stack?


Front end is basically Bootstrap + jQuery (o_O). Back end is Ruby, but built using Padrino, which is based on the Sinatra framework instead of Rails. Not exactly 'old' tech there, but not nearly as cool or fast-moving as Rails, Go, Rust, etc.


> hence why cloud providers can offer $100,000 initial credit

Is this a thing? How can my company get $100,000 of AWS on credit?


That kind of offer is generally only available to startups that are in an accelerator program


Apply for AWS Activate.


Probably proof that you have over $1m in seed money.


> Don’t hire someone to do something you’ve not yet figured out (some exceptional candidates can bring new capabilities to companies, but often the most reliable route is for some “founder magic” to re-assemble the company until it can perform the new thing)

I'm curious what this "founder magic" bit means. Is this advice largely because of the difficulty of trying to find a qualified expert to bring new capabilities to your company when you personally aren't familiar with that area? E.g., it's hard to not get the wool pulled over your eyes by someone who talks well but can't deliver?


The problem is that a founder who hasn't already filled that role themselves doesn't know two things: 1) what the key parts of doing the job are and 2) which skills/personality the hire must possess.

There are many many ways to fail in a position and only a few key parts that matter. The "founder magic" is taking your unique perspective as a domain expert in your business and finding out what the role really needs. You do it by executing in that role for real. After you do that for a while (weeks/months) then you know what will make a candidate successful (and now you have a 50/50 chance of hiring the right person rather than a 10/90).


Unless you are hiring someone for a VP type role, your new hire likely isn't going to step in and know exactly what they should be doing everyday to achieve the goals you have laid out for them.

So you have to try it all out yourself first and figure out what makes someone in this role successful, what makes them not successful, and how to create a process or blueprint that your new hire can follow to success.


Specifically: in a company of five, don't hire a customer service person if you haven't done customer support yourself at least a bit. Don't hire a database person if you yourself (or somebody internal) haven't already tried [and presumably failed].

Experience and failure are important guideposts to help you look for the right person to fill that role. Where are they better than you? Then you have to mentor them so they get to be better than themselves, so they can make your next hire(s).


Yo, why are you doing BI queries on MySQL?


I'm no fan of MySQL either, but probably because they were already using MySQL and they had some Business questions they wanted answered Intelligently without setting up a bunch of new infrastructure. Sometimes at a startup you just need to get things done, and fix it later when (if) it becomes a pain point.


Definitely the kind of thing you use a dedicated replica for


We had a little MySQL db, and both the data and the different systems consuming it grew quite rapidly, faster than we could get ahead of given company priorities. We have a read-replica for the BI dashboarding system, and this keeps our world relatively stable and reliable.


Really, not a replica. That is a fine shim for the early days, but it means you are severely limited in what you can do with reporting, because you're tied to the data structure prod has. A better pattern is prodDB -> Kafka/Kinesis streams -> a reporting DB like Redshift/Snowflake/BigQuery. That way you can shape the data however you need, and it lets data teams avoid bogging down engineering.


That's putting the cart miles ahead of the horse at small-to-medium organizations. SQL scales a long way. It's not a shim; it's the best way to do business until the technology is a limit for you. Modern RDBMSes can go a long way.

Scale enough to need people dedicated to building and maintaining data lakes is a late-stage problem. Who's going to build and maintain that reshaping of data?


I guess I really value data, especially for early-stage companies trying to understand users and find fit. I don't think the DB needs to be a dedicated analytics DB; MySQL and especially Postgres work great for analytics. My issue is with read replicas: in most cases it doesn't make sense to force the prod DB to have an analytics-friendly schema for the replica to use. Making all those views, and iterating on them as important business questions come up, shouldn't require a production DB migration.

That said, I'm helping an early-stage company, and an AWS read replica plus Metabase is meeting most of our needs fine for today. We'll probably start pushing events to BigQuery soon so we can build some metrics that would otherwise take crazy joins and subqueries.


Most early stage companies will be writing queries directly against OLTP tables - which is why a read-only replica of your master DB is the safest/fastest option.


There are options though, yeah? Materialized views are a great fit for reshaping data for such things.


> prodDB->Kafka/kinesis streams to a reporting DB like redshift/snowflake/big query

Over-engineer much? I've worked at trading and advertising analytics firms that had less engineering.


Sounds like they hit a pain point.


Curious to hear whether they were running SQL queries through some sort of ORM layer, or whether they had people knowledgeable in SQL itself writing queries or creating views to extract the data they needed, as well as tuning the database and ensuring proper INDEXES were in place.

In my past experience, the two methods above can produce wildly different impacts on database performance.


>Of the list, AngularJS and MySQL have been the only ones to give us scaling problems. Our monolithic AngularJS code-bundle has got too big and the initial download takes quite a while and the application is a bit too slow. MySQL (in RDS) crashes and restarts due to growing BI query complexity and it’s been hard to fix this.

Maybe they should try TiDB (https://github.com/pingcap/tidb). It is a MySQL drop-in replacement that scales.


It's funny you mention that you had difficulty having your team write tests. At my company, the CTO has difficulty writing tests and the team has consistently written adequate test coverage.

I fixed this in a new project by starting with jest [1] and failing the CI if the test coverage wasn't at 100%.

[1] : https://facebook.github.io/jest/
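
(The same gate exists outside of JS land, for what it's worth; a sketch with pytest-cov, where "myapp" is a placeholder package name:)

    # pytest.ini -- the suite, and therefore CI, fails below the floor
    [pytest]
    addopts = --cov=myapp --cov-fail-under=100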


> failing the CI if the test coverage wasn't at 100%.

This is horrible advice and should never be followed.


Why? It's not hard to do if you start a fresh project.


Just because you have coverage doesn't necessarily mean that you have written good tests.

That being said, we do something similar where we require 80% coverage.


The difference between 80% coverage and 100% coverage is overrated. 80% is more than sufficient; I'd even go ahead and say 70% is better.

100% goes into "change-detecting test" territory. There's also the time aspect: going from 0 to 70 is not hard; 70 to 100 is extremely time consuming, and often not worth the effort.

Monitoring is a way more efficient tool at catching issues.


While I agree that 70% is about the sweet spot, it really depends on the tools you're using.

We've found that with Jest, just doing snapshots can get you to 70% without actually testing any of your other methods, hence the 80% coverage requirement.


> You accept long-term “technical debt” with the adoption of any technology.

how long have they been using perl5 over at craigslist?


Re: getting your team more interested in testing. This is not an easy thing to get momentum on if people aren't used to it. Yes to getting the test time down (and keeping it down).

Also, try defining (maybe in collaboration with the team) the tests you want people to write, rather than leaving it up to them or (hopefully not) expecting 100% coverage. I wrote up my thoughts on this a while back: https://getcorrello.com/blog/2015/11/20/how-much-automated-t... We had some success increasing testing using that plus code review, so others could check that tests were being written. Still not total buy-in, to be honest, but a big move in the right direction :)

One surprising thing was that after years of thinking I was encouraging my team to write tests, the main feedback on why they didn't was that they didn't have time. Making testing an explicit part of the process and, importantly, defining which tests didn't need to be maintained forever really helped.


I just write the Gherkin in comments, interleaved with the test code. No messing with regexes.


Why are so many self-appointed startup CTOs so anxious to share startup advice?


I recently interviewed with these guys. Was not impressed.


Great article!


If your engineers don't write tests, you hired the wrong people. Testing is vital. Make a rule: every change needs to be tested (you can even set up a pre-commit hook for this). If a class has no test, one has to be written. If tests cannot be written easily for a class, it has to be refactored.


It's a matter of cost. Adding good covering unit tests basically doubles your development time. You are paying now for dividends later.

In terms of business, you are trying to prove your business model. If your business model is bad, it doesn't matter how well your software is written. You need to prove your business model before you run out of funds.

It's a give and take. You really need to understand both the technical aspects and the business aspects to understand why entities might do certain things.

Also, people have been writing software without unit tests for decades.


> If your engineers don‘t write tests you hired the wrong people.

Disagree - if your engineers don't write tests, you need to clearly state to them that tests are table stakes, and create an environment conducive to the outcome you want (set up CI, make it fast, set aside time for test-writing hackathons).

If your engineers don't want to _follow_ that leadership after it's given, then yeah, you hired the wrong people - but don't demonize employees for not doing something they weren't told they need to do.


If your engineers don't write tests, it also probably means that they are not being rewarded for writing tests or punished for not writing tests; indeed, if they are rewarded for doing things that are not writing tests (such as pushing new features) and they can do those things without writing tests, they are being rewarded for not writing tests.

Just telling engineers, "write tests" and then promoting the ones that don't is bad leadership: you need to create an environment where the behaviors you desire are the ones that are promoted.


> If tests cannot be written easily for a class, it has to be refactored.

How do you make sure that the refactored class does the same thing as the old one? Rewriting old code that you don't have test coverage for is way riskier than whatever small change you were going to make to it.

I write a lot of code without tests because a lot of legacy codebases aren't set up to be testable, but they work, and it's important to the business that we're able to deliver small bug fixes and incremental improvements on the existing code while we write whatever replacement system we want to write. As I work on them they'll slowly get more testable, but if you're abandoning working code because it has no tests, you're usually making the wrong decision. (Which the author recognizes.)


Refactorings worthy of the name aren't scoped by class. In fact refactoring is one of the strongest arguments against unit tests; refactoring typically changes the split of responsibilities and alters the articulation points in the design, such that old tests are discarded and new kinds of tests need to be written.

Solid integration tests may work, but it's hard to get really good coverage from them in any reasonable running time.

These days I try to cover the happy path with a fairly integrated flavour of test, the edge cases around the tricky bits of code, and the authentication / authorization code paths fairly exhaustively, and not a whole lot more.
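
For the auth paths, a table-driven test is a cheap way to get that exhaustiveness. A sketch under made-up assumptions (the app, the routes, and the role header are all invented):

    // Hypothetical sketch of exhaustive authz coverage over role/route pairs.
    import request from "supertest";
    import { app } from "./app"; // made-up Express app

    const cases: Array<[string, string, number]> = [
      ["admin", "/admin/users", 200],
      ["member", "/admin/users", 403],
      ["anonymous", "/admin/users", 401],
    ];

    test.each(cases)("%s GET %s -> %i", async (role, path, expected) => {
      // "x-test-role" is an invented test-only auth shim, not a real header.
      const res = await request(app).get(path).set("x-test-role", role);
      expect(res.status).toBe(expected);
    });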


Characterization tests - Michael Feathers writes about strategies to build tests in "Working Effectively with Legacy Code".
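
The idea, roughly: before touching legacy code, write tests that pin down what it does today, right or wrong. A made-up sketch:

    // Hypothetical characterization test: expected values were captured by
    // running the existing code, not taken from any spec.
    import { formatPrice } from "./legacy/pricing"; // made-up legacy module

    test("formatPrice: current behavior, warts and all", () => {
      expect(formatPrice(10)).toBe("$10.00");
      expect(formatPrice(0)).toBe("$0.00");
      expect(formatPrice(-5)).toBe("$-5.00"); // looks wrong, but a caller may rely on it
    });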

If I wanted to be a consultant or contractor again, it would be walking into these situations and essentially building test systems for legacy code.

(And if anyone wants to pay me 8,000 a week for a few months...)


> How do you make sure that the refactored class does the same thing as the old one?

This. This is the problem. The answer: with tests.


What properties do you write tests for, though? Presumably you're touching the code because there's something wrong with it. How do you know how much of it is wrong? How do you know that all callers actually consider the current behavior wrong, instead of one caller misbehaving and another expecting it (possibly because someone noticed and worked around it, and now that workaround is going to break)?

Tests are simply the implementation of knowing what the code is expected to do. If you don't have any basis for that expectation, writing tests is meaningless - either you test the current behavior of the code, which doesn't help you change anything, or you test your imagined behavior of the code, which doesn't help you validate anything.


I agree, and commend how well you've noted the problems when code is written without tests. Such a codebase becomes mentally exhausting and expensive to probe into; much more expensive than the time originally saved by skipping testing altogether. Sure, a huge amount of legacy software has no tests, let alone comments, and then it's a question of where you even begin and how you gain any confidence in what you're testing for. But ignoring proper testing practices in new, modern codebases, especially in a business whose single product or service offering is software, is extremely risky and irresponsible. This is a little ranty, but why are devs still justifying not writing tests in 2018?!


> If your engineers don't write tests, you hired the wrong people.

That's great if you have the luxury of time.

Good test coverage will definitely save you time in the long run, no doubt about it. But it will cost you dearly in the short term.

And if your company's life or death hangs on getting a feature out a couple of days sooner, then skipping tests is a perfectly valid thing to do.


So basically you've never worked at an early stage start-up.


I'm not sure what kind of professional environment you work in, but I would argue that the activity of testing is separate from the activity of writing tests; writing tests only forms part of it.


> I appreciate now that technologies have a surprisingly short lifespan

That's pretty much only true in the JavaScript ecosystem. Every other area of the technology stack usually sees lifetimes measured in decades.

> Stepping aside from pure technical decisions, the life-blood of being a CTO is people management

Not really, no. That's the job of a CTO at a startup, not at a larger company. I'm not sure the author of the article has actually learned the right lessons from his experience.

At the end of the day, CTO of a startup is not really a CTO role, in my opinion. It's a technical co-founder role. You just happened to be the most senior person on the team at a point in time, and you inherited a few leadership responsibilities in the process.

I've seen a lot of startups fail because they failed to recognize that fact and didn't realize that after a few years they needed a different CTO than the co-founder: someone who understands that role at scale and the many tasks it implies that aren't necessarily relevant in the early years of the company.



