The role of a tester in an XP team


XP's greatest contributions have been the solutions it provided to problems that plagued the "Waterfall" development approach. And TFD/TDD may well be the most important practice in XP. The whole disciplined, deliberate approach at the heart of XP rests on ensuring quality through unit testing.

Generally, XP developers write unit tests before writing the code. This test-first workflow enables refactoring, which is unsafe unless 100% of the unit tests pass.

As I mentioned in another article, the word "Extreme" in XP comes from the discipline with which developers who adopt this methodology carry out their practices. In the case of testing, this means that a 100% pass rate in unit tests doesn't mean 87% while the project is incomplete. It means 100%, all the time. Without exception. That's how extreme extreme programming is.

One consequence of this is the adoption of TDD. Programmers on an XP team write their unit tests before writing the code. They may certainly need to add new unit tests as they discover the need for them, but the important thing is that before writing any code, they strive to write the test first. And that is what guarantees a 100% pass rate all the time.
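The test-first workflow described above can be sketched in a few lines. This is a minimal illustration, not taken from any real project: the `discount` function and its pricing rules are invented for the example. The tests are written first (and fail, "red"); the production code below them is only enough to make them pass ("green").

```python
import unittest

# Tests written FIRST: they specify the behavior of a hypothetical
# discount() function before any production code exists.
class TestDiscount(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        # Orders under 100 are charged in full.
        self.assertEqual(discount(total=50), 50)

    def test_ten_percent_discount_at_threshold(self):
        # Orders of 100 or more get 10% off.
        self.assertEqual(discount(total=100), 90)

# Minimal production code, written only after the tests above
# failed: just enough logic to turn them green.
def discount(total):
    if total >= 100:
        return total * 0.9
    return total
```

Saved as a module, this runs with `python -m unittest <file>`; the cycle then repeats — add a failing test for the next requirement, make it pass, refactor with the suite at 100%.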

This diligence from developers in applying XP practices solves numerous problems. For example, it definitively puts an end to the costly and inefficient process of error localization by traditional code testers. There is perhaps nothing that delays deliveries or the completion of acceptance testing more than this. And it's extremely challenging to avoid without the key practices of XP (not just TDD). Pair Programming, for example, allows integration and unit errors to be detected and corrected during coding sessions. So, when the code reaches acceptance testing, it does what the programmers intended. At that point, acceptance testing can focus on determining how that intention aligns with the client's expectations.

Moreover, this approach is impossible to achieve without abandoning old practices that hinder XP. For example, heavy and inflexible requirements documents quickly become obsolete, leading to inaccurate, incomplete, or outdated test suites. They represent a great waste of overproduction (thanks for the lesson, Toyota). If we don't use something like user stories, our agile framework crumbles.

In essence, XP constitutes a cohesive and coherent set of practices that must be properly orchestrated, complementing and enriching its elements. The most important element, serving as the backbone of the others, is TDD.

Yes, TDD. Without it, aspiring to an agile development team will be very challenging. It is precisely the lack of proper integration and unit tests, outdated requirements, and the gap between client expectations and the final product that demand small, frequent cycles of delivering value: shipping exactly those features the client has most recently prioritized. This requires QA professionals to be directly involved in the development team, no longer seen as the enemy when project quality control becomes unmanageable due to external factors.

But then, do XP teams need testers/QA?

I have often read, as part of the culture that demonizes QA or testers as if they were to blame for certain software delivery issues, that their role is unnecessary since unit and acceptance testing are already automated. While this may be true in certain teams, it is not always the case.

Firstly, we should have a clear understanding of what to expect from a testing team. Without delving into the nuances that differentiate a tester from a QA professional (distinctions that blur in XP), these teams should not only develop and execute tests but also possess quality assurance skills and be involved in product development. I'm talking about combining functions that guarantee both quality assurance (QA) and quality control (QC). One is process-oriented, the other product-oriented. However, this distinction is not truly useful for the final product. So, I believe the best approach is to integrate all their activities and avoid over-classifying the individuals involved in the development team.

When developing software, missing requirements are often discovered late; frequently they go undetected until a specific version of the product is delivered to the client. Developers tend to focus on the elements they consider most important or interesting in the current cycle, and unfortunately they don't always accurately anticipate what the client expects.

In this context, a tester can be valuable as they develop acceptance tests with the client's involvement and tend to think about the system from the perspective of those who will have to live with the solution. At the same time, they understand the details, possibilities, and limitations of the construction process. Therefore, I believe that a tester can bring great value to many projects when properly integrated.
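A tester-authored acceptance test can read almost like the user story it came from. The sketch below is hypothetical: the free-shipping rule, the `shipping_cost` function, and the thresholds are invented for illustration, standing in for behavior a tester would specify together with the client.

```python
# User story (agreed with the client): "As a returning customer,
# I want free shipping once my cart total reaches 50."

def shipping_cost(cart_total):
    # Illustrative production code: flat fee of 5 below the
    # free-shipping threshold, free at 50 or above.
    return 0 if cart_total >= 50 else 5

# Acceptance tests named in the client's vocabulary, runnable
# with a test runner such as pytest.
def test_customer_gets_free_shipping_at_threshold():
    assert shipping_cost(cart_total=50) == 0

def test_small_order_pays_flat_shipping_fee():
    assert shipping_cost(cart_total=20) == 5
```

Because the test names and assertions mirror the client's own words, the client can review them directly — which is exactly the bridge between construction details and user expectations that the tester provides.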