Updated 2020-09-03: Statement on coupling
I was recently blown away by watching some of Uncle Bob's Clean Code presentations (make sure to check out his blog). His demo of Test-Driven Development stood out in particular: done right, TDD reduces the defects, development cost, and maintenance cost of software.
In addition, it might even nudge the software architecture into a better one, by exposing awkward-to-handle points and letting you easily refactor.
The rules are as follows:
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
- Only use as many brain cells as necessary each round. Circle around the gold standard (the final implementation) instead of going straight for it: test everything outside it first.
- This results in comprehensive coverage that catches anything that may go wrong.
- At each TDD increment, make the tests more specific and the implementation more general.
- Failing to do so might result in coupling your tests to the implementation. This is bad because it makes for brittle tests: any time you change the implementation, you need to change the tests. Make sure to test through interfaces - such as the rarely-changing public method signatures - and not through private methods or fields of objects.
- You will often find that as you implement the "trivial" tests, you'd already done a lot of the work.
- If a component is hard to test, you might find that it's also badly designed. Try to split it up, make sure to use small, testable units right up to the untestable UI / DB / Web / other I/O endpoints.
- The DB and front-ends are notoriously hard to test. To work around this, make sure you push all the logic into testable chunks, and use the endpoints only as dumb display/storage/input devices.
- Uncle Bob gives an example about UIs: have "Presenters" that are testable, and whose only role is to format the data into "Response Models" to be given to a front-end view (plain objects with mostly strings to display, but also enabled/disabled flags, colors if they change, or coordinates if you're in a game) - so the view has so little code in it that it does not need automated testing.
- Use unit tests instead of integration tests.
- Integration tests are notoriously slow, requiring servers, a database, heavy browser automation (such as Selenium), or even a whole web framework (routing, middleware, complex models). Slow tests lead to you running them much less often - which makes them much less useful. Unit tests, which run in a few seconds at most and give you enough confidence in the code, let you constantly refactor the code as needed - every TDD round.
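The "tests more specific, implementation more general" increments above can be sketched with a deliberately tiny toy of my own (not from the talks): each new, more specific test forces the naive implementation to generalize.

```python
# Round 1: the first test passes with the most naive implementation.
#   assert add(0, 0) == 0   ->   def add(a, b): return 0
# Round 2: a second, more specific test forces the general version.
#   assert add(2, 3) == 5   ->   def add(a, b): return a + b

def add(a, b):
    # The general implementation, forced by the accumulated tests.
    return a + b

assert add(0, 0) == 0
assert add(2, 3) == 5
```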
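The Presenter idea above might look like this minimal sketch (the names `OrderPresenter` and `OrderResponseModel` are my own, not from the talk): the presenter formats domain data into a plain response model, so the view has nothing left to test.

```python
from dataclasses import dataclass

# Response model: plain strings and flags, ready for a dumb view.
@dataclass
class OrderResponseModel:
    total_text: str
    submit_enabled: bool

class OrderPresenter:
    # All formatting decisions live here, where unit tests can reach them.
    def present(self, total_cents: int) -> OrderResponseModel:
        return OrderResponseModel(
            total_text=f"${total_cents / 100:.2f}",
            submit_enabled=total_cents > 0,
        )

# The presenter is trivially unit-testable - no UI involved:
model = OrderPresenter().present(1999)
assert model.total_text == "$19.99"
assert model.submit_enabled
```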
If you use TDD and get enough coverage, you can fearlessly refactor any part of your system. "Testable" is a synonym for "decoupled", and decoupled code is easy to adapt and maintain.
Only test the behavior of stable APIs, not methods and classes that are implementation details. Otherwise, you get brittle tests that need to be changed whenever the implementation needs changing.
When your tests are brittle, it's because they are coupled to the implementation. Coupling kills software.
I remembered from my last job that we tested lots of things. The tests that came out covered only the (tiny) methods that did actual computation. Sure enough, we later needed to replace them. This made me look more into what a "unit" is and where the tests should be.
Testing like this is wrong, and methods and classes are not "units". A (business) use case is a unit. A module is a unit. A high-level class expressed in almost-English might be a unit.
You should test for the presence of detailed behavior that does the right thing in depth, but only through stable interfaces. This way, the tests will need modification only when the stable interfaces also need it - that is, rarely.
Still, as per "unit tests instead of integration tests", you should not hit the slow DB, Web, framework middleware, or UI during unit tests. You should isolate your code from DB/UI/Web frameworks, so that the machine can execute it quickly.
This isolation is not easy, especially in today's framework-does-all environments, but Uncle Bob's tips above can help: a framework that does everything is also costly for the machine to run.
Test as far as you can reach on the edge of your system, while keeping tests fast and isolated. This would be almost like integration tests, but don't touch the endpoints (or touch them very lightly, such as through an in-memory DB that can be easily reset between cases, instead of a real DB server).
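As one concrete way to "touch the endpoints very lightly", Python's standard sqlite3 module can run your persistence code against an in-memory database; the `UserRepository` below is a hypothetical sketch of the one thin layer that talks to the DB.

```python
import sqlite3

class UserRepository:
    # Hypothetical repository: the only code that touches the database.
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name: str) -> None:
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self) -> int:
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# In tests, ":memory:" gives each case a fresh, fast database -
# no server to start, nothing to clean up afterwards.
repo = UserRepository(sqlite3.connect(":memory:"))
repo.add("alice")
assert repo.count() == 1
```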
Update 2020-09-05: Still, if you find that the tests are not local enough to the errors, and half of your tests fail on a bug in a component, making it harder to find the component, you may need to reduce their depth. Gary Bernhardt considers a unit to be at most a 100-line class.
Unit test speed is paramount. If your tests are too slow (more than, say, a few seconds), then you will not be able to use them to make decisions while adding features and refactoring.
Ian Cooper says more about this in his talk TDD, Where Did It All Go Wrong?. The talk is inspired by Kent Beck's 2002 book on TDD - and Ian claims this book is all you need to understand TDD.
All this excitement led me to try out various testing frameworks. The first one I tried was `unittest`, which comes with the Python standard library:
```python
import unittest

class StackTest(unittest.TestCase):
    def setUp(self):
        self.s = Stack()

    def test_new_stack_is_empty(self):
        self.assertTrue(self.s.is_empty())

    def test_pushed_stack_is_not_empty(self):
        self.s.push(0)
        self.assertFalse(self.s.is_empty())

    def test_empty_pop_error(self):
        self.assertRaises(Stack.Underflow, self.s.pop)
```
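For reference, these tests assume a `Stack` along these lines - my own minimal sketch, not code from the presentations:

```python
class Stack:
    class Underflow(Exception):
        """Raised when popping an empty stack."""

    def __init__(self):
        self._items = []

    def is_empty(self):
        return not self._items

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise Stack.Underflow()
        return self._items.pop()
```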
I did not enjoy the verbosity of the `self.assertFalse` calls; there must be a better way. Also, while it does provide a `setUp` method, I then had to use `self.s` everywhere, because it was an instance variable.
I then tried out another framework, pytest:

```shell
$ pip install -U pytest
$ pytest stack.py
```

```python
import pytest

def test_new_stack_is_empty():
    assert Stack().is_empty()

def test_stack_not_empty_after_push():
    s = Stack()
    s.push(3)
    assert not s.is_empty()

def test_stack_empty_after_push_pop():
    s = Stack()
    s.push(3)
    s.pop()
    assert s.is_empty()

def test_can_not_pop_empty():
    s = Stack()
    with pytest.raises(Stack.Underflow):
        s.pop()
```
For this particular trial, I gave up on the `setUp` method. I preferred saying `s = Stack()` in each test case to creating a class (like `StackTest`) with a constructor and then using `self.s` instead of just `s`. I could also have avoided the `self.` prefixes in `unittest`, but I would still need to create the class (for the `assert*` methods).
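Pytest also has fixtures, which fill `setUp`'s role without a class or `self.` prefixes - here is a sketch, with a minimal inline `Stack` stand-in so the example is self-contained:

```python
import pytest

class Stack:
    # Minimal stand-in so the example runs on its own.
    def __init__(self):
        self._items = []

    def is_empty(self):
        return not self._items

    def push(self, item):
        self._items.append(item)

@pytest.fixture
def stack():
    # A fresh Stack is built for each test that names this fixture.
    return Stack()

def test_new_stack_is_empty(stack):
    assert stack.is_empty()

def test_pushed_stack_is_not_empty(stack):
    stack.push(3)
    assert not stack.is_empty()
```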
In any case, now I can just say `assert X` instead of `self.assertTrue(X)`. It prints out magnificently, showing you the exact source line and its evaluated assertion values.
I vastly prefer pytest and warmly recommend it to you, if you use Python!