This article is about something so natural to us that we never thought about writing it down, until more and more clients asked us this simple yet important question: "What is your quality process?"
What this question really means is: how do you make sure your product works? Does it work as intended? Will an emergency bugfix break something unrelated?
Software development is a complex process involving teams of people isolating problems, designing solutions, writing lines of code, and releasing new versions.
This basic list of tasks is pretty much universal and roughly the same when implementing new features, enhancing existing ones, or fixing bugs.
What's missing from this list is quality assurance, because the amount of QA varies from team to team and from company to company. It can be as little as sprinkling tests here and there to check that new developments work as expected, all the way to making quality an inherent aspect of the whole software development process.
Needless to say, we went for the "inherent quality" end of the spectrum.
Quality begins with how a team works
This topic alone would need entire books to be discussed properly, because it is ultimately about how human interactions are the most important aspect of complex intellectual creations.
In my experience, this aspect is often overlooked when a company attempts to add quality processes. I've seen new procedures introduced, dedicated QA teams built, pages and pages of documentation written explaining how to test things, but rarely have I seen anything about collaboration, cooperation, and feedback.
Ironically, the most efficient way to introduce quality to software development is to make developers work together. There is no magical formula for this to happen, but one important step to me is to encourage feedback, both positive and negative, and most importantly: in a non-violent, assertive way.
Once everyone understands that good code is praised, that killing a bad idea doesn't call its author's competence into question, and that it is no big deal to rewrite parts of a pull request because something has been overlooked, forgotten, or simply badly written, then everyone feels empowered to bring new ideas, to propose improvements, to challenge or constructively criticize decisions… in short, everyone feels responsible for the product they develop, and that means the whole team pays extra attention to making it work properly.
At Kuzzle, we have only one immovable practice, which is the retrospective: every 2 weeks, we freely and constructively discuss what worked and what didn't, the pain points and the good things that happened, and we propose new ideas about how we could improve things in the near future. Processes come and go (sprint planning for internal tasks), others are now in our very DNA (code reviews, continuous integration); we test, reject, or adopt practices, all in pursuit of a single goal: improving the quality of our work.
Continuous integration (CI) plays a crucial role in quality assurance: everyone works on the same code base, not in isolation but sharing interactions between parts of the code, and between developers themselves, ensuring that the whole product still complies with its specifications even after being modified in several places, by different people, in near simultaneity.
Fortunately, this seemingly overwhelming task has been tackled for decades now, and we have a few good tools at our disposal. Here are the ones used by the Kuzzle team: Git, gitflow (with a twist), and Travis CI. This covers nearly all our needs and is pretty much standard nowadays.
Basically, it works like this: when a developer makes a change, they propose a patch (a pull request, or PR). The PR undergoes automated tests. If all tests pass, the PR can then be reviewed and approved by at least two other developers and, only then, can it be merged into the development branch.
On a regular basis we then release a new product version, which basically is a set of well-identified pull requests, each one tested, reviewed and documented.
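To give an idea of what such a pipeline can look like, here is a minimal, illustrative Travis CI configuration. The stages and npm script names are assumptions for the sake of the example, not Kuzzle's actual setup:

```yaml
# .travis.yml — illustrative sketch only
language: node_js
node_js:
  - "14"
cache: npm
jobs:
  include:
    - stage: lint
      script: npm run lint
    - stage: test
      script: npm run test:unit
    - stage: test
      script: npm run test:functional
```

Every pull request triggers such a pipeline, and a failed stage blocks the merge until the PR is fixed.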
Tests are not optional
At Kuzzle, we have very strict policies about testing. By "very strict", I mean this: all that can be tested, must be tested. No exception.
And for good reason: we like to be productive. Fixing the same bugs again and again is, by definition, a waste of time, both for us and for our clients.
There are still a lot of people out there who think that tests are costly and often try to cut them out of projects to save money. Unless the project in question is meant to be (very) short-lived, this can be a painful mistake: projects tend to grow in complexity, and the more you add, the more interactions there are in your code base. The more untested interactions you have, the more bugs can (and will) appear.
Most importantly: the cost of tests is absorbed rapidly, because they guarantee that what worked yesterday still works today.
I've known both extremes: in a previous company, I once had to deliver an emergency patch in the middle of the night, with no way of verifying that everything would still function other than by hastily performing a few checks by hand. And now that I work on Kuzzle, I'm able to bring large changes to our code with confidence, because everything is so exhaustively tested that unwanted side effects are almost always detected by our CI.
I largely prefer working on a well-tested product.
Our most tested project, obviously, is Kuzzle Core, which goes through these five layers of control:
- lint: the code must follow a strict set of writing guidelines, ensuring that the code is homogeneous. Bugs and bad practices can be caught that way
- unit tests: test functions in isolation, ensuring that they perform according to specifications
- functional tests: test the product from a client viewpoint, ensuring that entire parts of the code continue to interact correctly
- static code analysis: test code against heuristics to attempt to catch problems and vulnerabilities (tools used: SonarCloud, LGTM)
- human code review: make sure that the code is understandable, maintainable, documented, and usable
Test-Driven and Document-Driven development
Test-Driven Development (TDD) is the act of writing tests first, and code second.
Document-Driven Development (DDD) is the act of writing the documentation first, tests second, and code third.
The Kuzzle team strongly encourages both practices. My personal preference goes to TDD for bug fixing, and DDD for improvements and new features. Here is why.
For bug fixes, first writing a test reproducing the bug means that it has been correctly isolated and reproduced. And this means that once the code is fixed, this particular bug can never happen again, because a test now guarantees it.
Tests written this way can be unit tests, functional tests, or both, depending on the nature of the problem.
For improvements and new features, I like starting with the documentation, because this forces us, developers, to explain to our users what they will be able to do. And this has a strong impact on how a feature or an improvement must be architected. If you're able to explain something in simple terms, chances are that your design is sound and that you know exactly how it should be implemented under the hood. This is also a way of catching bad ideas (or bad implementation proposals) before a single line of code has been written.
Once the documentation has been produced, functional tests can be written, implementing what the documentation describes. And finally, code can follow, with additional tests (unit and/or functional) if need be.
Products are meant to be used. To be used, they need to be properly and exhaustively documented.
This is obvious, but at first we didn't realize that this wasn't enough. Here is a short story: a few years ago, a potentially important client went to our site to evaluate our product. He went straight to the getting started section, copy-pasted the code there, ran it and… it failed. The client told us about it, about how this didn't make a good impression on him, and left… forever.
We learnt something important that day: code examples contained in documentation pages can become obsolete or erroneous, so they must be tested too.
After some trial and error, our documentation system is now stable and closely tied to the products it documents. All our code snippets are verified automatically by our continuous integration process.
Under the hood, each code snippet is isolated in its own file, with a small YAML configuration telling our documentation builder how the snippet can be injected, and tested. Code snippets are then executed by our CI, and their output verified against an expected value.
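As an example, such a descriptor could look like this (the field names and values are purely illustrative, not our documentation builder's actual schema):

```yaml
# create-document.yaml — hypothetical descriptor sitting next to the snippet
name: create-document
snippet: create-document.js   # the isolated code snippet to inject
runner: node                  # how the CI should execute the snippet
template: sdk-init            # boilerplate the snippet is injected into
expected: document created    # output the CI compares against
```

The CI runs each snippet through its runner and fails the build if the output doesn't match the expected value, which is how obsolete examples are caught before they reach the published documentation.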
Kuzzle aims to be an application backend: it needs to be fast, stable, reliable, and understandable.
This requires quality policies present in every aspect of our product's development cycle: from how developers interact, challenge ideas, and provide feedback, all the way to how bugs are fixed and how new features are documented.
There cannot be product development on one side and QA on the other: the two are closely related and intertwined.