Nobody understands testing?

Testing is one of the most misunderstood things in software development. I've seen this many times, and poor understanding of testing often affects projects negatively. Let's look at some common flaws:

  1. Lack of unit tests and proper discipline around code acceptance - I worked for a game company where our games were tested only by QA. And there were bugs. Every morning I joined the daily standup, and almost every day half of it was "what is going on in which version of the game". We had many similar games, tailored and modified for different countries (regulations etc.) and seasons (e.g. a Halloween edition).

    Moreover, programmers ignored bugs during development and delayed fixing them.

  2. Testing the implementation.

    It's a common mistake to test everything directly. People often conflate two things:

    • 100% testing coverage
    • creating a separate unit test for every class/function/method

    Many things can (and should) be tested indirectly. Testing internal implementation details usually results in tests that need to be rewritten every time you modify the implementation. So they render themselves useless.
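To make the indirect-testing idea concrete, here is a minimal sketch (all names are hypothetical, not from any project mentioned here). The internal helper has no test of its own; it is covered through the public function that uses it, so it can be renamed or inlined without the tests noticing:

```typescript
// Hypothetical module; in real code only totalWithTax would be exported.
// roundCents is an internal detail with no dedicated test.
function roundCents(amount: number): number {
  return Math.round(amount * 100) / 100;
}

function totalWithTax(net: number, taxRate: number): number {
  return roundCents(net * (1 + taxRate));
}

// The test exercises only the public behaviour; refactoring roundCents
// (renaming, inlining, replacing with a library) leaves it untouched.
console.log(totalWithTax(100, 0.23)); // 123
```

If the helper grows complex behaviour of its own, that is usually the moment to promote it to a public, directly tested unit.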
  3. Complicated setup. This was a big Node.js project built with DDD, CQRS, and event sourcing. It was heavily overengineered, with DDD artefacts like entities, aggregate roots, etc. It had some kind of DDD runtime which managed e.g. permissions for aggregates.

    And when you made a new module, you couldn't just write a unit test for it; you had to manually wire all the needed moving parts to start this "DDD runtime", mock all the permission roles, aggregate metadata, etc. It was especially hard if you didn't know the project very well.

    How could this be solved? In two ways, actually:

    • if we're going the unit test route, then it should be possible to test separate modules without any wiring. Some architectural changes might be needed for that.
    • maybe we do need this wiring to actually test; if so, we need to automate it and prepare a special test environment for integration tests. But this test environment shouldn't require manual wiring. It should be easy to create a new module and test it immediately.
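The second option could be sketched as a single factory that hides all the wiring. Everything below (`createTestEnv`, the `Module` shape) is a hypothetical stand-in for the real DDD runtime setup, not code from that project:

```typescript
// Sketch: one factory does all the wiring, so individual tests
// never assemble the runtime by hand.
type Module = { name: string; handle: (cmd: string) => string };

interface TestEnv {
  register: (m: Module) => void;
  dispatch: (name: string, cmd: string) => string;
}

function createTestEnv(): TestEnv {
  // In a real project, permissions, aggregate metadata, event store
  // stubs, etc. would be wired automatically here, once.
  const modules = new Map<string, Module>();
  return {
    register: (m) => modules.set(m.name, m),
    dispatch: (name, cmd) => {
      const m = modules.get(name);
      if (!m) throw new Error(`unknown module: ${name}`);
      return m.handle(cmd);
    },
  };
}

// A new module is testable immediately, with no manual wiring:
const env = createTestEnv();
env.register({ name: "orders", handle: (cmd) => `handled ${cmd}` });
console.log(env.dispatch("orders", "create")); // "handled create"
```

The point is that the cost of the wiring is paid once, inside the factory, instead of in every test file.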

  4. Slow tests. Even if some kinds of tests (e.g. end-to-end) can be slow, there should also be enough coverage from fast tests to give quick feedback while coding (which is important). Why tests are slow depends on the specific setup, so I won't delve into it right now; I'm just flagging the problem.

  5. Random tests - tests which sometimes fail and sometimes pass. Either they are poorly written (e.g. relying on shared mutable state between tests, or on asynchronous code with a race condition), or they depend on external factors like responses from a server on the internet. In the latter case they can still be useful as e2e tests, but they shouldn't be the only/main tests. I remember working at a company where tests depended on a special server which often failed. Sometimes you just had to postpone writing your test until the server came back up.
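A tiny sketch of the shared-mutable-state variety (hypothetical code, just to show the mechanism): the first function's result depends on what ran before it, while the isolated version always behaves the same regardless of ordering:

```typescript
// Shared mutable state: a counter that leaks between tests.
let globalCounter = 0;

function flakyCheck(): number {
  globalCounter += 1; // leaks state into whatever runs next
  return globalCounter;
}

// The fix: each test builds its own fresh state, so run order
// and repetition no longer matter.
function isolatedCheck(): number {
  let counter = 0;
  counter += 1;
  return counter;
}

const first = flakyCheck();  // 1 only because nothing ran before it
const second = flakyCheck(); // 2: the result depends on test ordering
```

Race conditions in asynchronous code produce the same symptom by a different route: the test passes or fails depending on which callback happens to finish first.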

  6. Bullshit tests - old tests that are failing and everybody on the team just ignores them (I've seen this in connection with the previous point - some tests sometimes failed, sometimes passed, and I was told that's just the way it is).

    Such tests should be either removed or skipped. Or they could be kept separate and run on their own if we really need them.
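One way to sketch the "run them separately" option: tag the unreliable tests and exclude the tag in the main run. The `TaggedTest` type and `runTests` helper below are invented for illustration; real runners like Jest and Mocha have their own skip and filter mechanisms:

```typescript
// Hypothetical micro-runner that filters tests by tag.
type TaggedTest = { name: string; tags: string[]; run: () => boolean };

function runTests(tests: TaggedTest[], exclude: string[]): string[] {
  const failures: string[] = [];
  for (const t of tests) {
    // Tests carrying an excluded tag are skipped entirely.
    if (t.tags.some((tag) => exclude.includes(tag))) continue;
    if (!t.run()) failures.push(t.name);
  }
  return failures;
}

const suite: TaggedTest[] = [
  { name: "fast and reliable", tags: [], run: () => true },
  { name: "talks to flaky server", tags: ["flaky"], run: () => false },
];

// The main CI run excludes flaky tests and stays green;
// a separate, non-blocking job can still run them.
console.log(runTests(suite, ["flaky"]));
```

This keeps a failing main suite meaningful: a red build signals a real problem instead of background noise everyone has learned to ignore.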
