05. Testing
We carry out automated and manual testing of features and interfaces throughout development, always aiming for the right balance between rigour, time and cost.
What sorts of testing do we do?
We carry out two types of testing: automated and manual. We build automated tests for every important feature of our software. We also carry out our own manual testing, and expect clients to do their own user-acceptance tests before deployment.
Tests are planned at the same time as features, during the research phase. When we write the specification and use case for a feature, the business rules governing the operation of that feature form the basis for developing tests.
In addition, all our code is peer-reviewed in-house by another developer to make sure it’s clean, elegantly structured and economical. We like our code to be as close to natural language as possible. We use real-world terms, or ‘domain language’, whenever we can.
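To illustrate what we mean by domain language, here is a minimal sketch in Python built around a hypothetical invoicing feature (the names and the pence-based pricing are illustrative assumptions, not code from a real project). The class and method names come straight from the business vocabulary, so the code reads almost like the specification it implements:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class LineItem:
    description: str
    unit_price: int  # in pence, to avoid floating-point rounding errors
    quantity: int


@dataclass
class Invoice:
    due_date: date
    line_items: list[LineItem] = field(default_factory=list)

    def total(self) -> int:
        """The invoice total is the sum of every line item, in pence."""
        return sum(item.unit_price * item.quantity for item in self.line_items)

    def is_overdue(self, today: date) -> bool:
        """An invoice is overdue once today is past its due date."""
        return today > self.due_date
```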
How much do we test?
Our testing philosophy is ‘just enough’. In other words, we test enough to address all the major risks, but not so much that testing overshadows development, or slows it down.
To decide how much to test, we look at each feature and assess its criticality. If the feature relates to money, for example, or if it’s so complex that important problems could easily be missed, we write an automated test in parallel with the actual code.
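As a sketch of how a business rule becomes an automated test, here is what tests for the hypothetical Invoice model above might look like (the billing module name and the pytest-style layout are assumptions for illustration, not our actual codebase):

```python
from datetime import date

from billing import Invoice, LineItem  # hypothetical module from the sketch above


def test_invoice_total_sums_all_line_items():
    # Business rule: the total is unit price x quantity, summed over every line item.
    invoice = Invoice(
        due_date=date(2024, 1, 31),
        line_items=[
            LineItem("Design", unit_price=5000, quantity=2),
            LineItem("Development", unit_price=8000, quantity=3),
        ],
    )
    assert invoice.total() == 34000  # 10,000 + 24,000 pence


def test_invoice_is_overdue_only_after_due_date():
    # Business rule: an invoice becomes overdue the day after its due date.
    invoice = Invoice(due_date=date(2024, 1, 31))
    assert not invoice.is_overdue(today=date(2024, 1, 31))
    assert invoice.is_overdue(today=date(2024, 2, 1))
```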
Why don’t we write an automated test for everything?
There is a school of thought that tests should be written to explore every conceivable scenario for every individual feature. But we’ve found, through experience, that this approach is time-consuming, wasteful and expensive.
It’s time-consuming because every conceivable scenario is another test to write and maintain, and it’s wasteful because clients usually want to change features after they’re built – which means discarding part of both the main code and the relevant test code. As a result, the project ends up slower and more expensive.
We believe our approach offers the best balance of quality, time and cost.
"Be humble about what tests can achieve. Tests don’t improve quality: developers do."
James O Coplien, Why Most Unit Testing is Waste
"Every line of code you write has a cost. It takes time to write it, it takes time to update it, and it takes time to read and understand it. Thus it follows that the benefit derived must be greater than the cost to make it. In the case of over-testing, that’s by definition not the case."
David Heinemeier Hansson, Testing like the TSA
How does our automated testing work?
We use a process of continuous integration. Every time one of our developers makes a change, the test suite runs automatically, re-checking each feature against its tests. This keeps the code clean and manageable, ensuring there are no surprises later on.
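By way of illustration only, the step a continuous-integration server performs on each change amounts to something like the following sketch (the pytest command and the script itself are assumptions, not our actual tooling): run the whole suite, then accept or reject the change based on the result.

```python
import subprocess
import sys


def run_test_suite() -> int:
    """Run the full automated test suite and return its exit code."""
    result = subprocess.run(["pytest", "--quiet"])
    return result.returncode


if __name__ == "__main__":
    exit_code = run_test_suite()
    if exit_code == 0:
        print("All tests passed: the change is safe to integrate.")
    else:
        print("Tests failed: the change is held back until they pass.")
    sys.exit(exit_code)
```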
How do we approach user-acceptance testing?
Clients usually test at several points: after we finish an individual feature, after we complete a correction, and before a set of features goes into production.
Testing complex features can sometimes be time-consuming – setting up data, logging in as a user, logging in as an administrator and so on. If that is the case, we also produce brief demo screencasts showing our own user tests being conducted. These allow clients to review progress quickly and easily without the hassle of carrying out lengthy tests themselves.
Read Step 06. Deployment