The Epic Fail of Enforcing Unit Tests

8 points by patz 8 years ago | 5 comments
  • rstuart4133 8 years ago
    I don't agree with him.

    Everyone knows that when you are forced to write tests you code differently, because you have to make the lines under test visible to the testing code. In legacy code written without tests, no one bothered to do this, which is why adding tests to legacy code is nearly impossible.

    If 100% test coverage isn't enforced, you end up with lines that aren't visible to unit tests without refactoring. If a bug is found in one of those lines and you want to add a test to demonstrate the bug has been fixed, you can't just make a minimal change - you have to refactor the bloody thing, increasing the odds you'll introduce another bug.
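
    To make it concrete, here's a rough sketch of the kind of refactor I mean (made-up Python with made-up names, just to illustrate):

        # Before: the parsing logic is buried behind file I/O, so a unit test
        # can't reach those lines without a real file on disk.
        def load_config_untestable(path):
            with open(path) as f:
                raw = f.read()
            return {k.strip(): v.strip()
                    for k, v in (line.split("=", 1)
                                 for line in raw.splitlines() if "=" in line)}

        # After: the interesting lines live in a pure function a test can call
        # directly; the I/O wrapper becomes trivial.
        def parse_config(raw):
            return {k.strip(): v.strip()
                    for k, v in (line.split("=", 1)
                                 for line in raw.splitlines() if "=" in line)}

        def load_config(path):
            with open(path) as f:
                return parse_config(f.read())

        def test_parse_config_strips_whitespace():
            assert parse_config(" host = db1 \nport=5432") == {"host": "db1", "port": "5432"}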

    The bottom line is that ensuring you have 100% test coverage (or, even better, 100% branch coverage) brings its own rewards because of the discipline it enforces in code structure, and that is true even if those test lines don't do much in the way of useful testing.

    So for me his argument only works once you have 100% coverage. Thereafter, yeah, each test has to earn its keep.

    • patz 8 years ago
      Actually, this piece of code is meant to hit 80% test coverage or even lower. (It's the target of another remote team.)

      From what I've observed, "when you are forced to write tests you code differently" doesn't hold. If you want to make your code testable, TDD is a better approach (which means you have to write the tests first).

      But people who don't know how to make code testable, or, even worse, who treat the coverage goal as a burden, do the opposite: they code before they integrate, and they integrate before they test. Such stories happen a lot.

      Only if people believe in testing first, and are shown how to write good, maintainable tests first, can a test coverage goal make sense.

    • taylodl 8 years ago
      I addressed a lot of these concerns in my blog post on Maintenance Driven Development. https://taylodl.wordpress.com/2012/07/21/maintenance-driven-...

      Yes, fine-grained tests are problematic, so you want to create as few of them as you can get away with. But when bugs are discovered, it may be necessary to add fine-grained tests in the affected area. Essentially, you end up doing the most extensive testing in the area of your code that gives you the most problems. If you notice a particular area of your code requiring a lot of extensive unit tests, then perhaps it's time to consider refactoring that code.

      • Ace17 8 years ago
        I've seen lots of unit tests trying to reproduce the exact circumstances of a known bug, in an attempt to trigger it and thus prevent regressions.

        The problem is, legitimate changes to your production code can make such a test go blind. What I mean is that the test loses its ability to trigger the bug, while the potential for the bug to be re-introduced is still high.

        This is what happens if you're trying to trigger known bugs instead of asserting behaviours: your tests might stop playing their role and you won't even notice.
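
        For example (made-up Python, just to show the difference):

            # Hypothetical parser that once had an off-by-one bug dropping the
            # last field of a row.
            def parse_row(row):
                return row.split(",")

            # "Blind" regression test: pinned to the one magic input size that
            # happened to trigger the old bug. If the implementation changes
            # (say, a different buffer size), this input may no longer exercise
            # the risky path, and nobody notices the test has gone blind.
            def test_bug_1234_exact_repro():
                assert len(parse_row("a," * 255 + "z")) == 256

            # Behaviour-asserting test: states the contract (no field is ever
            # dropped), so it keeps its meaning however the code evolves.
            def test_no_field_is_ever_dropped():
                for n in (1, 2, 255, 256, 1000):
                    fields = [str(i) for i in range(n)]
                    assert parse_row(",".join(fields)) == fields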

      • Ace17 8 years ago
        Each unit test that doesn't end with an assertion is essentially a hack to locally silence your coverage tool, without actually improving the quality of your test suite.

        I call these tests "crash-tests", as a crash is nearly the only thing that they can detect. Crash-tests are indeed what you get if your devs are aiming at a high-coverage number without understanding the rationale behind code coverage.
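
        To illustrate (a made-up Python example):

            # Hypothetical function under test.
            def normalise(name):
                return " ".join(name.split()).title()

            # "Crash-test": it executes the lines, so coverage goes up, but it
            # asserts nothing - only an unhandled exception can make it fail.
            def test_normalise_crash_only():
                normalise("  ada   lovelace ")

            # Real test: same coverage, but it pins down the behaviour, so a
            # regression in whitespace handling or casing actually fails.
            def test_normalise_collapses_whitespace_and_title_cases():
                assert normalise("  ada   lovelace ") == "Ada Lovelace"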