
Fixed a high-priority bug today just prior to release. There was 100% test coverage. The tests passed both before and after the change. The product behavior is correct now where it wasn't before. Just one more reminder that test coverage does not equate to either quality or correctness. Tests are alarms (at best), and the quality of a test is no better than that of any other chunk of code. All tests have costs, but not all have value. All reasons why I am skeptical of the value of code coverage, TDD, or anything that posits that "all tests are good".
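
To illustrate the point, here is a minimal, hypothetical sketch (not the actual code or bug from this post): every line and branch of apply_discount() is executed, so line coverage reports 100% and the tests are green, yet the behavior at the boundary is still wrong.

    def apply_discount(order_total):
        # orders of 100 or more are supposed to get 10% off
        if order_total > 100:              # bug: should be >= 100
            return order_total * 0.9
        return order_total

    def test_apply_discount():
        assert apply_discount(200) == 180.0   # exercises the discount branch
        assert apply_discount(50) == 50       # exercises the no-discount branch
        # 100% line and branch coverage, tests pass, apply_discount(100) is still wrong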

Comments
  • 2
    I believe in a healthy amount of scepticism towards everything, so, imo, you should be sceptical about those things. Always try to find flaws in the tools you use so you can improve them, which includes tests and coverage.
    And be aware of the number of bugs that were discovered because of those practices.
  • 2
    Unit tests are generally good.

    If used for TDD, they can help you not forget some important consideration, and if created afterwards they are, as you say, more of an alarm to prevent regressions.

    For example, if you are going to refactor something, starting by building all the tests you can think of helps you fully understand what the current code does (along the way you might uncover undefined or unwanted behavior) and then make sure the new code does everything the same as the old, except for what you set out to change with the refactor.

    But tests are no magic: dumb tests don't add value, and incomplete tests can lure you into thinking something is done.

    One often overlooked type is testing for error conditions.

    If some combination of inputs should not be accepted, make sure you have tests that expect an error.

    Otherwise you can end up with undetected logical errors that are much more difficult to track down, because the place where the problem surfaces is rarely where it actually happened.
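
    For example, a rough pytest-style sketch (the function and values are hypothetical, just to show the pattern of expecting an error):

        import pytest

        def set_speed(value):
            # hypothetical validator: speeds outside 0..300 are not accepted
            if not 0 <= value <= 300:
                raise ValueError(f"speed out of range: {value}")
            return value

        def test_rejects_negative_speed():
            # passes only if the invalid input is actually rejected with an error
            with pytest.raises(ValueError):
                set_speed(-1)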
  • 1
    Test cases are a living thing. I'm assuming that you added a test for the thing you fixed?
  • 0
    @Codex404, the way you ask the question goes back to my original point. Why assume that I added a test? IMO, a test should be added using the same judgement as adding any code to your application: does it add value? How much? What's the cost? What's the priority, severity, etc.? This also reminds me of what I think is the wrongheaded insistence on including testing in the "definition of done" in Scrum. A code change and its associated testing may have very different levels of effort and relative priority. For example, you could have a critical change that takes 5 minutes, while creating the unit/automated tests for that change could take 5 months. Waiting to release the critical change is wrong, because you have prioritized testing of one change over N other possibly critical changes. IMO, the two should be separate priority calculations. Just another example of why I think testing should be a considered, case-by-case approach, not a blanket mandate.
  • 0
    Unit tests are small tests that normally have one or two inputs, and thus should take a minute or five to set up.

    I'm assuming you added it because the tests were not complete (nor will they ever be), but adding the test will avoid this issue in the future.
  • 0
    Tests mostly drive your refactoring; they tell you where your edit is breaking the code.

    To me this is the most valuable part of TDD, and this is true almost independently of your coding style.

    To get the "testing" value from TDD, having many short, side-effect-free functions helps A LOT. But still, that alone is not enough.
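
    As a hypothetical sketch of that last point: a small, side-effect-free function is trivial to test, while the same logic tangled with I/O needs mocks or fixtures before you can even start.

        def total_with_tax(subtotal, rate):
            # pure: the result depends only on the inputs
            return round(subtotal * (1 + rate), 2)

        def test_total_with_tax():
            assert total_with_tax(100.0, 0.2) == 120.0

        # compare with a hypothetical charge_customer(order_id) that reads the order
        # from a database and calls a payment API: the same arithmetic is buried
        # behind side effects and is much harder to pin down in a short test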