
One of the projects at a place where I worked was set up so that running the tests automatically and silently updated the expected values. Completely bonkers. The first time I contributed to the project I wrote the tests first and then started on the implementation, and while I was working on it I ran the tests, which at that point should have failed because I hadn’t finished writing the code. Instead, all the tests passed, because the test setup had helpfully overwritten the expected values I had prepared in my new tests with the bad data. Yeah, great, very helpful >:(
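To illustrate the pattern (a hypothetical sketch in Python, not the project's actual code), the test helper effectively did something like this:

  # Hypothetical sketch of a snapshot helper that silently rewrites its
  # own expected values on every run. Names and file layout are made up.
  import json
  from pathlib import Path

  def check_snapshot(name, actual):
      path = Path("snapshots") / f"{name}.json"
      path.parent.mkdir(exist_ok=True)
      # Silently overwrite whatever expectation was there before...
      path.write_text(json.dumps(actual, indent=2, sort_keys=True))
      expected = json.loads(path.read_text())
      # ...so this compares the output with itself and can never fail,
      # even against a half-finished implementation.
      assert actual == expected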

Oh yeah, and the whole test setup was also way too tied to the implementation rather than verifying behaviour. Complete trash, the whole thing.




I keep rereading this hoping I'm misunderstanding.

That is cargo cult level behaviour. They know that software with lots of tests tends to have few bugs, so let's automatically have lots of tests!

I just hope whatever you were building wasn't critical to human lives.

https://en.m.wikipedia.org/wiki/Cargo_cult


> That is cargo cult level behaviour.

One person's "cargo cult behavior" is another person's "best practices". :P

My favorite example is automatically generated documentation. The kind that merely repeats the name of the method, the names and types of the arguments, and the type of the return value. The ironic part is that this is later used as evidence that all documentation is useless. Uhm, how about documenting the methods where something is not obvious, and leaving the obvious ones (getters, setters) alone? But then the documentation coverage checker would return a number smaller than 100% and someone would freak out...
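A made-up illustration of the kind of generated comment I mean:

  # Made-up example: the generated docstring only restates the signature,
  # which is exactly the kind of "documentation" that later gets held up
  # as proof that documentation is useless.
  def get_user_name(user_id: int) -> str:
      """Gets the user name.
      :param user_id: The user id.
      :return: The user name.
      """
      ...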

This is just one of many examples, of course.


I hate to dwell on this, but I've also seen it in real life and it boggles the mind.

Like "give review feedback that this code isn't doing the right thing" -> "change the test to make it pass, not change the code to make it work". And it wasn't really a small case where you could plausibly do that and still understand what you were trying to do.

Coincidentally that was a few weeks after I saw a comment here on HN about someone who hired someone from Facebook, and the guy would change the tests so he could push to production, rather than fixing the bug that the tests pointed out ...

So yes it happens.


>Coincidentally that was a few weeks after I saw a comment here on HN about someone who hired someone from Facebook, and the guy would change the tests so he could push to production, rather than fixing the bug that the tests pointed out ...

Can't blame him, he moved fast and broke things /s


Perhaps he's a Buddhist? "If the software is going to break, then the software will be broken." Then he adds a little wabi-sabi for good measure. https://en.wikipedia.org/wiki/Wabi-sabi

I remember once using some in-house software which, for God knows what reason, could not log its errors back to the IT department. Instead, they relied on users to call up IT or email them with the error. To make it more fun for users, each error message contained a humorous haiku.

  Chaos reigns within.
  Reflect, repent, and reboot.
  Order shall return.
Edit: Just found this from 2001: https://www.gnu.org/fun/jokes/error-haiku.en.html (my experience with the haiku error messages at work was in '01 or '02).


Would it do this just the first time? It's still bad that it was doing this silently, but it's pretty common to test web APIs in a similar way manually: make a request, check that the response you get back looks right (important step), and then save it as the expected value. A sketch of that workflow follows below.

Edit: or, after reading the article, do it the way the article does.
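For contrast, a sketch of the non-silent version of that workflow (flag name and file layout are made up): the saved expectation is only rewritten when you explicitly ask for it, so the changed file shows up as a diff that can be reviewed before it is checked in.

  # Hypothetical golden-file check: compare by default, and only update
  # the stored expectation when explicitly requested via an env var.
  import json, os
  from pathlib import Path

  def check_golden(name, actual):
      path = Path("golden") / f"{name}.json"
      if os.environ.get("UPDATE_GOLDEN") == "1":
          path.parent.mkdir(exist_ok=True)
          path.write_text(json.dumps(actual, indent=2, sort_keys=True))
          return  # the changed file is reviewed like any other diff
      expected = json.loads(path.read_text())
      assert actual == expected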


It did this every time, not just the first time.


Well, you know what they say: Expect the unexpected!


I can somewhat understand, because this is kind of the goal of property-based testing: the actual values themselves matter so little to the test that you're willing to subject the inputs to randomness.

That said, this doesn't sound like a very good way to pull that off, because the developer has no control over that randomness, where control is greatly needed.
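For comparison, in actual property-based testing the inputs are random but the expectation is a property the developer states explicitly. A minimal sketch using the Hypothesis library (just one example of such a tool):

  # Minimal property-based test: random inputs, fixed property.
  from hypothesis import given, strategies as st

  @given(st.lists(st.integers()))
  def test_sorting_is_idempotent(xs):
      # Whatever list Hypothesis generates, sorting an already-sorted
      # list must not change it.
      assert sorted(sorted(xs)) == sorted(xs)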


So long as the diffs get reviewed and checked in, this is a great form of testing called "regression testing". It doesn't replace unit testing, but it can be super valuable.


What’s described in the OP (Jane Street) is regression testing.

What the commenter just described is tautology testing: whatever result of the computation I get is what I expected.
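In code terms, a made-up example of a tautology test: the "expected" value is recomputed by the very code under test, so the assertion can never fail.

  # Tautology: expected and actual come from the same code under test.
  def compute_total(prices):
      return sum(prices) * 1.2   # even if this multiplier is wrong...

  def test_total():
      # ...this always passes, because both sides share the bug.
      assert compute_total([10, 20]) == compute_total([10, 20])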



