Are Tests Necessary in TypeScript? (executeprogram.com)
31 points by capableweb on April 14, 2020 | 61 comments



> In Execute Program, we always write tests around subsystems that either (1) are critical to the product or (2) contain a lot of details that can't be statically checked.

This is one of those things that sounds good when you're in college, and sounds, well, horrifying is too strong a word, but it will definitely set off the warning bells for anyone that's been programming for a decade or more.

My team has a simple standard: every single user-facing requirement is verified by automated testing. We don't have 100% coverage, but it's damn close, somewhere around 98%. The remaining untested code is generally weird branch paths that are hard to trigger and "shouldn't" be seen in production.

But this goes beyond code coverage or TDD: firing up the debugger and executing a unit test is the simplest way to add a feature, period. I cannot fathom building and starting our application so that I can hand test every time I change a line of code.

Unit and integration tests aren't punishment inflicted on inferior developers: they're an extremely useful tool that makes your life easier and your team more productive.


> This is one of those things that sounds good when you're in college, and sounds, well, horrifying is too strong a word, but it will definitely set off the warning bells for anyone that's been programming for a decade or more.

Speak for yourself :) I'm a huge TDD advocate, and I'm personally responsible for making testing a requirement of any dev work in my company. I am well over a decade in experience.

Gary's quote sounds pretty close to my criteria for what to test (my criteria would replace "a lot of details that can't be statically checked" with "meaningful details that can't be statically checked").

The type system cannot replace all testing, but there are certainly places where it can make testing superfluous. Increasingly I strive to make more and more of a codebase like that. It improves confidence in the system, with a significantly faster red-green cycle, and isolates complexity to layers of an application where complexity is expected.
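A minimal sketch of the kind of thing I mean (invented example): a discriminated union makes the "loading state has no data" invariant a compile-time fact, so no test for it is needed.

    // Invented example: the compiler enforces that 'loading' carries no
    // items and 'loaded' always does, so that invariant needs no test.
    type RemoteData =
      | { state: 'loading' }
      | { state: 'error'; message: string }
      | { state: 'loaded'; items: string[] };

    function describeData(data: RemoteData): string {
      switch (data.state) {
        case 'loading': return 'Spinner';
        case 'error': return `Error: ${data.message}`;
        case 'loaded': return data.items.join(', ');
      }
    }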

> But this goes beyond code coverage or TDD: firing up the debugger and executing a unit test is the simplest way to add a feature, period. I cannot fathom building and starting our application so that I can hand test every time I change a line of code.

In general, I agree with this, and in general this is my approach to programming. However, and this is a sincere question: do you work on web frontends? In my experience, some aspects of frontend work are absolutely easier to evaluate by simply running the code. You may want to have automated tests for certain interactions, but even then as regression tests, since writing the test first is often harder than getting it working first and making sure the test exercises it after.


> The type system cannot replace all testing, but there are certainly places where it can make testing superfluous.

I agree. We configure analysis to ignore boilerplate code, for example, so that doesn't count toward our coverage threshold.

And in this particular example, yeah, there is an entire class of tests that no longer need to be written: the "did he really pass me an integer, or is it JSON" style assertions. Typescript (and strong typing in general) is super helpful for eliminating that category of test.
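A before/after sketch (invented names) of the category being eliminated:

    // In untyped JS, the defensive runtime check and its test:
    //   if (typeof total !== 'number') throw new TypeError('got JSON?');
    // In TypeScript the signature makes the bad call unrepresentable:
    function addTax(total: number, rate: number): number {
      return total * (1 + rate);
    }
    // addTax('{"total": 10}', 0.2);  // compile error; no runtime assertion to test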

> However, and this is a sincere question: do you work on web frontends?

When I'm forced to. I'd say my work is 80% backend, 20% frontend, though that ratio changes based on what stories we pick up for the week.

But we use TDD for our frontend, too. I have selenium tests that bring up the backend, launch the web UI, and run through the required functionality.

It's a very different mentality, and we don't check things like CSS layouts at this level, but the general "the user clicks this button and gets this result" are all verified automatically.
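For a flavor of what one of those tests looks like (a sketch using selenium-webdriver for Node; the URL and element ids are invented):

    import { Builder, By, until } from 'selenium-webdriver';

    // "The user clicks this button and gets this result," verified end to end.
    async function userCanSubmitOrder(): Promise<void> {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('http://localhost:3000/cart');
        await driver.findElement(By.id('submit-order')).click();
        const message = await driver.wait(
          until.elementLocated(By.id('order-confirmation')), 5000);
        console.assert((await message.getText()).includes('Thank you'));
      } finally {
        await driver.quit();
      }
    }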


> When I'm forced to.

I hear you. And I think this may account for some of your frontend testing solutions. (Which I in no way mean to disparage.)

Given a set of technologies that allow it, another way is possible, significantly easier, with a faster red-green cycle and probably a higher degree of confidence. It looks a lot like what Gary described in TFA. Given:

- A good type system

- A component system with single-direction data flow

- A state abstraction that is testable outside of the DOM

(Edit: not sure why HN won't let me make a list without empty lines but there you go)

In quite a lot of cases the tests can be isolated to your state logic, and the mere presence of a component and a passing compilation is enough to ensure your frontend works. You can even, with some effort, ensure the presence with the type system.
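A minimal sketch of what I mean (names invented): the state logic is a pure function you can test with no DOM at all, and the compiler checks that the component wires it up with the right types.

    type State = { count: number };
    type Action = { type: 'increment' } | { type: 'reset' };

    // Pure state logic: testable without a browser or a DOM.
    function reducer(state: State, action: Action): State {
      switch (action.type) {
        case 'increment': return { count: state.count + 1 };
        case 'reset': return { count: 0 };
      }
    }

    // The test exercises only the reducer; the component is just a typed
    // projection of State, so compilation covers the wiring.
    console.assert(reducer({ count: 1 }, { type: 'increment' }).count === 2);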

It doesn't necessarily eliminate all end to end testing scenarios, but for a lot of sites/apps it may.


> (Edit: not sure why HN won't let me make a list without empty lines but there you go)

because hn formatting doesn't actually have a "list" concept https://news.ycombinator.com/formatdoc


It's not so much that I want it to render as a list (e.g. like Markdown), but that I want to produce separate lines of text without an additional blank line between them.


How do you deal with tests that end up self-referencing, testing only specific implementation details and adding no additional safety? This is the trap I always fall into when I try TDD. In order to add or change a feature, you now have to write the code twice: Once to implement the change, and once to re-implement the change to assert that it does what it's supposed to. And no safety has been added, because the logic itself can still be flawed. You've just written it twice instead of once.

Generally I find tests that test the controllers, where multiple units of work are attached together, to be much more useful. But TDD does not naturally lead to these kinds of tests.


> How do you deal with tests that end up self-referencing, testing only specific implementation details and adding no additional safety?

Generally, by describing tests in plain english that express "user" intent (where "user" may be the dev at the call site, or may be an end user) rather than expressing things like return values or state changes.

> In order to add or change a feature, you now have to write the code twice: Once to implement the change, and once to re-implement the change to assert that it does what it's supposed to.

From this description, it doesn't sound like you're doing TDD.

1. Write a failing test (where "test" describes user intent).

2. Make that test pass with an implementation.

3. Make that implementation meet your expectations for code quality, appropriate separation of concerns, appropriate abstraction, DRY, etc.

(This is commonly referred to as red-green-refactor.)
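An illustrative micro-example of the cycle (jest-style syntax assumed, names invented):

    // Red: this test states intent and fails because createCart doesn't exist.
    test('a new user starts with an empty cart', () => {
      expect(createCart().items).toHaveLength(0);
    });

    // Green: the minimal implementation that passes.
    function createCart(): { items: string[] } {
      return { items: [] };
    }

    // Refactor: now improve the implementation with the test as a safety net.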

> And no safety has been added, because the logic itself can still be flawed. You've just written it twice instead of once.

While you can always get the logic wrong, it's significantly less likely if what you're testing is intent. I really don't understand what you mean by "written it twice".

> Generally I find tests that test the controllers, where multiple units of work are attached together, to be much more useful.

I'm... honestly surprised by that. Those tests tend to be the most brittle, the least likely to cover all scenarios you'd like to test, the most error prone, and the hardest to understand and maintain.


> Those tests tend to be the most brittle

While true, it's far more likely that they will help you during a refactor. If you use TDD and have a test for every function, you will more often than not throw entire functions and their tests away. A million tests that test immutable functions are all useless. But tests that fall in between TDD style test-per-function and integration tests, tend to be re-used post refactor, and give certainty that the refactor worked as expected.


I think the size of a test shouldn't be defined by the size of the functionality you're testing, but rather a unit of "user intent", as expressed at whatever layer you're testing.


> This is the trap I always fall into when I try TDD. In order to add or change a feature, you now have to write the code twice: Once to implement the change, and once to re-implement the change to assert that it does what it's supposed to.

I think you have that backwards.

In TDD, the test is basically an assertive statement: when this test passes, the functionality is working as intended. You then write the implementation code, and when the tests pass, you're done.

This gives you two things: one, a sense of certainty that your code is meeting the requirements, and two, it bounds your work, preventing you from writing code that you don't need to make the test pass.

Now let's say we're changing a feature. Maybe if a user tried to add an out-of-stock item to their cart, the UI currently displays an error message, but now we want to add a note indicating when the item is expected back in stock.

We'd just go into the test for that error message, and add statements that look for this new `div`. We would probably have to add a few cases (for example, when we have an estimated date, when we don't have an estimate, and when we've been told the product won't be restocked), but we aren't fundamentally altering the test code.

These tests will initially fail, because there's no `div` with an `id` of `restock-message` or whatever. But one by one we can knock them out, adding messages for known or estimated restock dates, unknown restock dates, and permanently out-of-stock items.
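A sketch of one of those new cases (testing-library/jest syntax assumed; the component and its props are invented for illustration):

    import * as React from 'react';
    import { render, screen } from '@testing-library/react';

    // Invented component standing in for the real restock message.
    function RestockMessage({ restockDate }: { restockDate?: string }) {
      return (
        <div id="restock-message">
          {restockDate ? `Back in stock on ${restockDate}` : 'Restock date unknown'}
        </div>
      );
    }

    test('shows the estimated restock date when we have one', () => {
      render(<RestockMessage restockDate="2020-05-01" />);
      expect(screen.getByText(/back in stock on 2020-05-01/i)).toBeTruthy();
    });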

> And no safety has been added, because the logic itself can still be flawed.

That's true, but that would also be true if you were hand-testing. You can't verify functionality you don't understand, but you can't write functionality you don't understand, either.

TDD, and testing in general, isn't a magic bullet that solves all of your problems; at some point, you still have to be smart enough to write the code. But testing is still a necessary tool in the developer's toolbox, because without it, you simply cannot know that your code is doing what you think it's doing. And since you're going to be testing anyway, why not have the machine do as much of the heavy lifting for you as possible?


> Now let's say we're changing a feature.

This is where TDD really falls apart for me. It's pretty rare, at least in the code I work on, that you add a single little bit of functionality that fits cleanly into an existing TDD function. It's much more common that entire structures change, and the TDD tests are no longer relevant and have to be completely rewritten from scratch. So again, you're effectively just implementing the logic twice. Where you actually need it (during a refactor), it provides no safety because it just gets completely rewritten every time.


I think this is a sign that you are not designing maintainable systems in the first place. I work on reasonably complex systems where significant changes to a feature do not tend to cause huge structural changes like you describe, and where the vast majority of tests tend to remain unchanged and help detect regressions. It is entirely possible, with the right design. And in my experience, developing a comfort with TDD will help promote that kind of design.

(Edit: I want to clarify that this is not meant in any kind of a judgmental way about your work or your talents, I'm sincerely trying to help identify issues so that you can hopefully become more productive and more confident in the work you produce.)


> Unit and integration tests aren't punishment inflicted on inferior developers: they're an extremely useful tool that makes your life easier and your team more productive.

I can't agree with this more and also find the reverse is true: not having tests will punish you repeatedly. If we measure inferiority by unproductivity, adding tests also enables you to become the opposite of inferior (ferior?)


> become the opposite of inferior (ferior?)

"superior"


> This is one of those things that sounds good when you're in college, and sounds, well, horrifying is too strong a word, but it will definitely set off the warning bells for anyone that's been programming for a decade or more.

To someone that has programmed for a good 2 decades and more, the above sounds like what a college student or junior dev that just found out about TDD would say.

Unit tests and integration tests are (bad) substitutes for good and modular design, which should piggyback on a strong type system. I've seen it so often that whenever I read comments like the above I feel the need to create an HN account just to reply to them.

Junior devs in charge of major design decisions create damage that goes way beyond their limited reach. Test code is untested code. Test code is brittle code - especially integration tests are. You need tests because you're using a dynamically typed language? Well, stop using that language for anything that is even remotely complex. Your future senior dev self will thank you.


> Unit tests and integration tests are (bad) substitutes for good and modular design, which should piggyback on a strong type system

This sounds completely hand-wavy. If the design is in fact modular and built of autonomous components, then it should be relatively easy to test and should be tested. Types are no substitute for testing and are completely orthogonal to it.

> Test code is brittle code

If this is the case then the code being tested is more than likely brittle as well. Test code reflects the system under test more than anything else.

> You need tests because you're using a dynamically typed language

No you simply need tests in any language.


> Types are no substitute for testing.

I don't agree with the person you're replying to that types make tests redundant, but the claim that types can't substitute for a large amount of testing can only come from inexperience.

Whatever property you can model successfully through a closed type, is a property you never have to test ever again anywhere it's being used. If you have 500 functions that take a NonEmptyString, those are 499 fewer property checks in tests (because you should have one test for the type constructor).
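In TypeScript the usual way to get a closed type like that is a branded type with a single constructor (a sketch):

    // The brand makes it impossible to conjure a NonEmptyString without
    // going through the constructor below.
    type NonEmptyString = string & { readonly __brand: unique symbol };

    function nonEmptyString(s: string): NonEmptyString {
      if (s.length === 0) throw new Error('empty string');
      return s as NonEmptyString;
    }

    // Test the constructor once; the 500 call sites need no emptiness
    // checks, because a plain string won't compile here.
    function greet(name: NonEmptyString): string {
      return `Hello, ${name}`;
    }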


> the claim that types can't substitute for a large amount of testing can only come from inexperience

That is a very bold assertion I have seen no evidence for. I write very few tests in Ruby or Javascript aimed at validating types.


> You need tests because you're using a dynamically typed language?

I don't really understand the connection between a dynamically typed language and the need for (or lack of) testing. Can you explain what you mean? I've never once felt like the type of language I used defined how important tests were.

When I write a unit test, I'm not looking for syntax errors or a scenario where I'm passing the wrong type in; I'm verifying that the business rules of my domain are being properly enforced.

If your unit test is brittle, I'd say that's a pretty awful unit test. You should be isolating the component you're testing and eliminating all external dependencies like in the example below.

    class ItemRecommenderGuardTest extends TestCase
    {
        public function setUp()
        {
            // Replace the real dependency with a mock so the guard is
            // tested in isolation.
            $dependency = Phake::mock(Dependency::class);
            $this->recommenderGuard = new ItemRecommenderGuard(
                $dependency
            );
        }

        public function testGuardsInvalidItem()
        {
            // Business rules deem this item un-recommendable. This unit
            // test verifies that the guard class properly enforces them.
            $item = /* ... */;

            $this->assertFalse($this->recommenderGuard->isRecommendable($item));
        }
    }

This test isn't brittle, and it's asserting that an item which does not meet recommendable standards is never recommendable. If someone comes along and modifies the guard class incorrectly, these unit tests are here to maintain the integrity of the system.


To someone else that has programmed for a good 2 decades or more, the above sounds like cowboy programming that leads to unmaintainable systems.

Unit and integration tests are not "substitutes" for good and modular design. Good and modular design works hand in hand with tests to produce reliable, well understood, and maintainable code. The most testable code is often the most well-separated code.

A strong type system provides the third leg of the tripod. It's not replaced by or a replacement for either of the other two.

Stop thinking that you're too good for test code. Your future senior dev self will thank you.


If you don't have tests for everything, how do you check that everything's still working fine? Because the only alternative I see to not thoroughly test code is to check the app by hand to make sure nothing's broken, which is, well... Not as robust as having it checked by a machine.


I think the idea isn't that you don't thoroughly test, but that types are sufficient tests of certain functionality, and they're significantly more expressive and efficient than their runtime equivalents.


A "good and modular design" with a "strong type system" won't help you when one of your dependencies has an upgrade with a subtle bug that changes the behaviour of your system.


> Unit test and integration tests are (bad) substitutes for good and modular design...

That's a complete non-sequitur. You can have spaghetti code that produces a correct result, and you can have a beautiful architecture that produces an incorrect result.

Testing is the only way to verify whether or not your code is producing the correct result. You can test by hand or automatically, but the ultimate determination on the correctness of the code relies on a test.

It has nothing to do with design.


Testing is a superset of type checking.

And people forget the main thing about TDD: The main goal is to validate your design

You don't get that with type systems.


What language do you use? My experience in Scala is that tests can be replaced with types in a number of cases (but not all!) In these cases it is often not worth implementing a test, IMO.

Depending on the code, it is possible to construct the types in such a way that an incorrect implementation is very hard to write. This requires a modern type system and the knowledge to use it effectively. Type systems like those in Java and Go are not up to the task for the most part.


In Scala, you should test that the type system makes the guarantees you expect with compile time errors. Several frameworks provide tooling for this.
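The TypeScript analogue (an assumption on my part, not something from the article) is tooling like tsd or the @ts-expect-error pragma, which fails the build if a line unexpectedly *stops* being a type error:

    type NonEmptyString = string & { readonly __brand: unique symbol };
    declare function sendEmail(to: NonEmptyString): void;

    // @ts-expect-error: plain strings must be rejected at compile time
    sendEmail('alice@example.com');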


Unit tests tend to be hyper-specific. Even after 20 years of writing unit tests, I find that getting the right balance between not enough testing and too much can still be quite challenging in some cases.

For example, some things are a lot easier to test than others. Some things require a lot of setup that is tedious and error prone. In such cases, it's preferable to test at a lower level but it's not always possible to do that and still cover all relevant cases. Those are the cases where unit tests tend to be the most expensive.

100% test coverage may be desirable as a goal, but not at any cost, and tests absolutely do incur costs. So there is a trade-off, and it's worthy of consideration.


The title is not a very good one, because it's nearly a rhetorical question: no language I've ever seen -- not even Haskell -- has a type system that completely removes the need for tests. For example, even Haskell has partial functions over types like Int whose behavior can't necessarily be encoded in the type system (at least, not ergonomically/practically) and thus require actual tests to cover.

That said, I think a more accurate title for this post (based on the paragraphs beyond the first one) might be "are tests necessary for a view layer that doesn't have much business logic in it beyond passing properties when that view layer is written in Typescript?" At that point, it's really just a permutation of the question "are tests necessary for the view layer of an application?", which has highly-situational answers.


The title is a good one, because everyone has to learn this lesson sometime. How will they learn? Well, they will probably type "Are tests necessary in typescript?" and come across this post, which will explain it to them.

Nobody is typing "are tests necessary for a view layer that doesn't have much business logic in it beyond passing properties when that view layer is written in Typescript?" into Google.


>completely removing the need for tests

That would be formal verification.


Not really. You still need tests to check whether your application is actually achieving its targets in real-world usage, including feature completeness, usability, performance; and how all of these are impacted by the environment (e.g. network issues in a distributed application).

Formal verification only tells you that the application matches its spec, given a set of assumptions. Only testing can tell you whether the spec matches the business case, and whether the spec assumptions hold.

Paraphrasing Knuth, code that has only been proven correct but not actually tested should always be approached with caution.

Edit: Knuth, not Dijkstra


Theoretically, formal verification can only tell you as much as can be encoded in a formal spec.@ The problem is that no such complete specs are known to exist (yet, if ever). As a thought experiment, if the informal specs written in English/Chinese or trapped in our heads were as incomplete as a formal spec, we wouldn't even be able to reason about what to test in the first place.

@ Then there is the limitation of verification tech of course.


When I was working with CompCert, I had no problems producing a formally verified compiler that miscompiled its (meager) test suite. Tests are necessary for "formally verified" systems! You might mess up parts of the specs or the model you are verifying against. You can only catch these problems through testing.


Which from what I’ve seen is way harder and way less accessible than testing.

And one of the things I say to anxious coworkers who are struggling with testing is that it’s a lot harder than you think.


Short answer: yes.

Long answer: yesssssssssssssssssssssssssssssssssss


The only person who would (incorrectly) answer "no" to the title question, I assume, is someone who only ever wrote JavaScript and used tests as a way of preventing screwups with types, such as using undefined data or passing the wrong type of data to a function. Which is of course a pretty useless way of spending your day.


> We don't want to go down the esoteric and labor-intensive path of automated image capture and image diffing

Suck it up. Screenshot diffing solves a specific problem you can't address with unit/integration tests.


We're running an experiment on this with our small company.

Our core web application consists of a Java backend (40k LOC, plus 12K LOC of tests) and a Typescript/React SPA frontend (12k LOC, zero tests). LOC #s are 'real code' via CLOC. Not a simple CRUD app.

It's fine.

I can't say that this would be fine for everyone. We have a very small team (2 fullstack engineers) with good communication; we keep as much logic as possible in the backend; we have a robust test harness for the backend. Still, it works very well for us. Even without testing the frontend, it is very rare for bugs to hit users.

Our reasoning for skipping frontend tests: Most of the frontend code is visual. Tests can't tell you if the result is aesthetically satisfying, so you have to look at it no matter what. Since the backend diligently enforces data sanity, the frontend can't really get into too much trouble, and a page reload fixes almost anything. Frontend testing seems to be high-effort low-reward.

I don't think this would be possible without Typescript. TSX helps too since the compiler (really the IDE) catches what would otherwise be template errors. Large refactorings are pretty painless.

I would do this again.

I have also reverse engineered about a dozen other companies' web applications recently (to automate them), and noticed that there are other app design philosophies that would make this approach difficult. Some teams go with a "smart frontend, dumb backend" approach that can produce all sorts of nasty data corruption issues if the frontend screws up. I'm not a fan of this approach, but if you take it, I don't think Typescript alone will save you.


> Our reasoning for skipping frontend tests: Most of the frontend code is visual. Tests can't tell you if the result is aesthetically satisfying, so you have to look at it no matter what. Since the backend diligently enforces data sanity, the frontend can't really get into too much trouble, and a page reload fixes almost anything. Frontend testing seems to be high-effort low-reward.

This is an example front-end react component project that I wrote that benefits heavily from tests: https://github.com/lookfirst/mui-rff

Why? Because it is a wrapper around two other frameworks (React Final Form and Material-UI). Any time those two frameworks change something in a way that I didn't expect, I want test failures.


> Types can't (usually) tell us when we accidentally put a "!" in front of a conditional

critical point in any 'types vs tests' discussion. strongly typed langs can tell you that your program will run w/out crashing, but not that it'll be right

(even the 'run w/out crashing' promise turns out to be not always true)
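e.g. (invented, but it's the classic case) this type-checks in any strongly typed language and is still wrong:

    // Intended: warn only when the cart IS empty. The stray '!' inverts
    // the behavior, and the types are equally happy either way.
    function maybeWarn(cartIsEmpty: boolean): string | null {
      if (!cartIsEmpty) {          // bug: should be `if (cartIsEmpty)`
        return 'Your cart is empty';
      }
      return null;
    }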

I wonder what tools will evolve in the next 10 years to measure if code is 'semantically correct'. Some sort of function that maps from code to docstrings might be able to check this.


Semantically correct is important! What kind of semantics? I prefer denotational. Many programmers think in operational semantics. And how does one verify their code?

I'm trying to push our review guidelines to ensure developers submit their code/change with an argument as to why it is correct. I got this idea from E.W. Dijkstra who was having a hard time keeping up with his graduate students and colleagues. In order to save his time and sanity he asked them to submit an argument as to why their change was correct. Instead of validating their algorithms, Dijkstra was able to validate their argument and check their proof.

One can use unit tests to make this argument. The level of rigor depends on the task and on how difficult it would be to convince the reviewer that your change is correct. One can go beyond unit tests and write property tests, make arguments based on the type information, embed proofs in the types, or make more formal arguments by proof if they feel it's necessary.

Making unit tests required is a good minimum case I think but is often not sufficient.


I've certainly talked myself into 'correcting' unit tests that didn't agree with bad code


The "ideology" video linked at the end of the article was also instructive for me, by framing the issue in other terms. Both tests and types are proof of correctness; tests are proof by example, types are proof by category. The architectural analogy in my head is that types are vertical and horizontal beams, while tests are diagonal bracing. The interlocking of the two is what makes the overall structure rigid.


Yes. Types are a great way to communicate with the compiler and fellow humans, but they do not replace tests.


I liken a few different things to Zeno’s paradox. Types might save you from 10% of your failure modes. And the next tool, and every one after, fix 10% of what’s left. But you never arrive.

Or alternatively, every efficient method of code verification just results in more code. Each product is “worth” so many man-years and so anything that reduces effort just increases scope. Which may also explain program bloat; adding three libraries gives me time to... add two more libraries.


Good points. Thanks for the reminder of Zeno's Paradoxes. https://en.wikipedia.org/wiki/Zeno%27s_paradoxes +1


I think this is somewhat funny, since to think of this question at all really highlights the time people waste in JS/Python etc., testing things that can be automatically checked by a simple compiler.

I do find that as some of these people migrate to TypeScript they write too many tests, as they are just not used to having some checks already done for them.

But to others that come from compiled/typed languages that's a question you'd never ask :)


Take a non-trivial program written in a statically typed language, with 100% unit test coverage, pass it to American Fuzzy Lop and see what happens.


Yes, but not to validate your syntax or references to undefined stuff (except on the boundary where you interface with external libraries not written in Typescript). Still, tests are meant for logic testing, which is something the Typescript compiler won't help you with (nor will any compiler, probably).


> ...which is something the Typescript compiler won't help you with (or any compiler probably)

Not so - there're languages out there in which you're able to encode your business logic within your program's types, at least to a certain extent, e.g. dependently typed languages. This can definitely replace some of your logic tests.
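Even TypeScript, which is not dependently typed, can encode a slice of this. A sketch: "this list has at least one element" expressed in the type, so the empty case needs no logic test:

    type NonEmptyArray<T> = [T, ...T[]];

    function first<T>(xs: NonEmptyArray<T>): T {
      return xs[0];  // no runtime emptiness check, no test for the empty case
    }

    first([1, 2, 3]);  // ok
    // first([]);      // compile error: the empty input cannot occur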


Didn’t know Gary Bernhardt was working on something outside of DAS. Quality content as always.


Different question: what kinds of tests does TypeScript eliminate?

Also, does ReasonML eliminate other kinds of tests than TypeScript?


To quote Mr Babbage:

“I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”


Yes.


Yes. You're welcome.


> We ported our React frontend from JavaScript to TypeScript, then did the same for our Ruby backend.

Next year it will be an article on how they ported everything to rust and webassembly and how that is just so much better than everything.

Of course they come to the conclusion that types and tests aren't the same thing. Otherwise you would never need tests, not even in dynamically typed languages.


That's not really fair: they ported their stack so the frontend and backend share type definitions. It was not a fashion-driven decision.

Wrt types and tests, "types" means some verifiable annotations on the code. If a type system can encode enough information in them, perhaps with preconditions or other contracts, then would tests be superfluous? If so, I don't get your assertion about tests and dynamically typed languages.

On the other hand, since typescript is unsound, shouldn't you really test like a dynamically typed language?
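One concrete hole, for anyone who hasn't hit it (a sketch; this is the default-config behavior):

    // Indexing past the end of an array is typed as number, not
    // number | undefined, so this compiles and then crashes at runtime.
    const xs: number[] = [];
    const x: number = xs[0];
    console.log(x.toFixed(2));  // TypeError at runtime; the types were satisfied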


Gary has already checked out Rust and didn't particularly like it, so I doubt that will be happening.

Regardless, it was not about being some sort of hipster. It solved real problems: https://www.executeprogram.com/blog/porting-to-typescript-so... (this is linked from the part you quoted, of course.)


I agree that the answer is obviously yes, as types and tests do different work. However "types" in dynamically typed languages are not the same thing as types in statically typed languages. One difference is that one causes an error at compile time and one causes an error at run time. Types in statically typed languages can replace tests, but types in dynamically typed languages cannot (unless you believe program crashes are a desired response to code errors.)



