Probe testing, the pathfinder

Introduction

A few days ago I was talking with a co-worker about a problem that his team is having.

The case is that this team consumes some APIs whose contracts are changed without notifying the team and without any backward compatibility, so, in the worst case, they end up with errors in production.

In an ideal world, the problem would be solved by applying contract testing, or by the API provider notifying us about changes and always developing with backward compatibility. But what can we do when this is not happening?

Probe testing

We talked about the concept of using tests in the form of a probe or a pathfinder, so that the tests inform us of any change in the API. In this case, I can think of several ways to probe:

Keep a scheduled test run, so that when the tests fail we know that the contract has changed. The problem is that, since it is scheduled, by the time it is executed and the probe detects the change, the problem could already be in production.

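To give an idea of what I mean, here is a minimal sketch of such a scheduled probe in Python. The endpoint URL and the expected fields are made up; the real probe would check whatever part of the contract we actually consume. It is meant to be launched by a scheduler (cron, Jenkins, a CI pipeline...) that only looks at the exit code:

```python
# probe_contract.py -- a minimal scheduled probe (run it from cron or a CI schedule).
# The endpoint and the expected fields are hypothetical; adapt them to the real API.
import sys

import requests

API_URL = "https://api.example.com/v1/users/42"   # assumed endpoint
EXPECTED_FIELDS = {"id", "name", "email"}          # the part of the contract we depend on


def probe() -> bool:
    """Return True if the response still matches the contract we consume."""
    response = requests.get(API_URL, timeout=10)
    if response.status_code != 200:
        print(f"Probe failed: unexpected status {response.status_code}")
        return False

    payload = response.json()
    missing = EXPECTED_FIELDS - payload.keys()
    if missing:
        print(f"Probe failed: contract changed, missing fields {missing}")
        return False
    return True


if __name__ == "__main__":
    # A non-zero exit code lets the scheduler mark the run as failed.
    sys.exit(0 if probe() else 1)
```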

Keep the tests active, that is, have them running continuously. The problem I see is that we would end up with a very extensive report and could lose real visibility of the status of the probe target. If, for example, we add notifications, which would be the most logical thing to do, a communication failure with the API could start generating false positive notifications. We would also have to handle those notifications.
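
One way to soften those false positives is to notify only after several consecutive failures. A rough sketch of the idea, again with a made-up endpoint and a placeholder notify() function:

```python
# continuous_probe.py -- a sketch of a continuously running probe.
# It only notifies after several consecutive failures, to reduce false positives
# caused by transient communication problems. The endpoint is hypothetical.
import time

import requests

API_URL = "https://api.example.com/v1/users/42"   # assumed endpoint
CHECK_INTERVAL_SECONDS = 60
FAILURES_BEFORE_ALERT = 3                          # debounce threshold


def contract_ok() -> bool:
    try:
        response = requests.get(API_URL, timeout=10)
        return response.status_code == 200 and {"id", "name", "email"} <= response.json().keys()
    except (requests.RequestException, ValueError):
        # Network errors and broken payloads count as failures too, hence the debounce below.
        return False


def notify(message: str) -> None:
    print(message)  # placeholder: plug in Slack, email, a dashboard, etc.


if __name__ == "__main__":
    consecutive_failures = 0
    while True:
        if contract_ok():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == FAILURES_BEFORE_ALERT:
                notify("Probe: the API contract seems to have changed (or the API is down).")
        time.sleep(CHECK_INTERVAL_SECONDS)
```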

Have a listener. Imagine that changes are made in the repository of the API that we consume: if we have a service listening to it and we are notified of a change, the probe is launched to check the changes. Logically, we need read access to the repository, which is the case here, as it belongs to the same company, or the repository must be public.

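As a sketch, assuming the provider's repository can send a push webhook (GitHub-style) to a URL we control, the listener could be as small as this. The payload fields and the probe script name are hypothetical; the probe it launches is the scheduled script sketched above:

```python
# webhook_listener.py -- a sketch of a listener that launches the probe when the
# provider's repository changes. It assumes the repository can send push webhooks
# to this service; the payload fields are hypothetical.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")

        # A push to the provider's repo may mean the contract changed: run the probe.
        repo = event.get("repository", {}).get("name", "unknown repo")
        print(f"Change detected in {repo}, launching probe...")
        result = subprocess.run(["python", "probe_contract.py"])
        if result.returncode != 0:
            print("Probe failed: the contract no longer matches what we consume.")

        self.send_response(202)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```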

The tests should be minimal and as simple as possible, focused on the problem, trying to avoid false positives. As these are probes, we should not spend much time or many resources on their maintenance. It would also be interesting to manage the probes through a dashboard: see results, activate or deactivate them, etc.

Another interesting idea: if we have enough knowledge and resources, we can develop more intelligent probes that manage their own notification system, their own execution, etc.

Testing as a sewage treatment plant

Imagine that an issue in development is like water that has to be purified, and our testing architecture or techniques are the purification machine.

If we do not purify or filter it, the water will arrive with a large amount of waste. In order to have clean water to consume, it must go through a "filtering" process, step by step.


The same happens with the issue that we are developing. If we do not apply testing techniques and clean code, we will obtain a product with many residues (bugs, unhandled corner cases, dirty code, etc.).

But we have to apply them from an early stage, so it is not right to do the testing only once the issue is developed. Testing starts from the very beginning: defining the acceptance criteria and applying the different testing techniques throughout the development process.

In this way we detect the "imperfections" in advance, and the delivery and deployment will be safer and faster, avoiding the testing bottleneck.

A bit of dirty water can contaminate the entire pool in production; after that, how could you find the drop that spoiled it all?