There is a Twitter discussion going on regarding “Testers do not break the product.” I am not so sure this is true in all contexts. Consider, for example, embedded products. Our organization will almost certainly take a multiple-perspective approach to testing such a product. One of those perspectives is to test the product against the requirements. Testing to requirements is one means of learning the product’s ability to meet the expectations placed upon it via specifications, and the results should feed into the contract closure phase of the project. Did we get what was specified? This holds whether the product was developed externally (by a supplier) or internally by our organization. To close the project, it is incumbent upon us to determine whether the project’s expected outcome has been met.
However, there is more to know about the product than whether it passes requirements testing. Consider the consequences of an “inadvertent” power down during an EEPROM or other memory write cycle. What happens to the product, and how will it respond on a subsequent power-up sequence? Our product requirements may not address such a scenario, but our test engineers may know it can happen. It would be good to understand the consequences for product performance and customer satisfaction before we launch. The end result of such a test may be that the product is now “broken” (fails to perform), even if that was not our intent. We will learn from this and may choose to add requirements that mitigate this failure mechanism.
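To make the scenario concrete, here is a minimal sketch of how an interrupted-write test might be simulated in software. The EEPROM model, the checksum scheme, and all names here are illustrative assumptions, not any particular product’s design; a real embedded test would cut power to actual hardware mid-write and then observe the power-up sequence.

```python
# Sketch: simulating an interrupted EEPROM write and the power-up check.
# The EEPROM model, checksum scheme, and names are illustrative assumptions.

def checksum(data: bytes) -> int:
    """Simple additive checksum stored alongside the record."""
    return sum(data) & 0xFF

class SimulatedEEPROM:
    def __init__(self, size: int = 16):
        self.cells = bytearray([0xFF] * size)  # erased state

    def write_record(self, data: bytes, fail_after=None):
        """Write data plus a checksum byte; optionally 'lose power' mid-write."""
        record = data + bytes([checksum(data)])
        for i, byte in enumerate(record):
            if fail_after is not None and i >= fail_after:
                return  # power lost: remaining bytes are never written
            self.cells[i] = byte

    def read_record(self, length: int):
        """Power-up read: return the data only if the checksum validates."""
        data = bytes(self.cells[:length])
        if checksum(data) == self.cells[length]:
            return data
        return None  # corrupt record: caller falls back to safe defaults

# A completed write survives the power-up check...
eeprom = SimulatedEEPROM()
eeprom.write_record(b"CFG1")
assert eeprom.read_record(4) == b"CFG1"

# ...but a write interrupted mid-cycle is detected as corrupt on power up.
eeprom = SimulatedEEPROM()
eeprom.write_record(b"CFG1", fail_after=2)
assert eeprom.read_record(4) is None
```

The point of the test is the second case: without the checksum (or some equivalent integrity mechanism), the product would power up on a half-written record and misbehave, and no requirements-based test would ever have exercised that path.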
Testers, Test Failure and Consequences
Depending upon the risks and costs associated with product failure, we may in fact strive to determine the breaking point of the product. It can be argued that a prudent organizational approach to testing includes knowing where and how the product can fail. This is especially valid for products in which a failure may have a catastrophic outcome, such as cars, planes and medical systems. If your system is of such complexity that failure may result in loss of life or serious property damage, it may behoove the organization to take a more drastic approach to testing. In such cases we will endeavor to find the breaking point of the product. This will inform us of the margins, the gap between a working product and a failed product, and of the specific failure impact. Where these margins are narrow, that is, where a small increase in stimulus beyond the requirements produces a failed product, we likewise learn about the product’s capability. We can then make rational decisions regarding the product rather than assume all is well. For example, we may decide that the present design limit is not acceptable and alter some aspect of the product to increase the margin before failure.
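The margin-finding idea above can be sketched as a step-stress loop: start at the specified limit, raise the stimulus in small steps until the unit fails, and report the gap. The unit model, limits, and names below are invented for illustration; a real rig would be driving hardware, not a Python function.

```python
# Sketch: stepping a stimulus past the specified limit to find the breaking
# point and the margin. SPEC_LIMIT and the unit model are assumptions.

SPEC_LIMIT = 5.0  # maximum stimulus the requirements demand (assumed)

def unit_under_test(stimulus: float) -> bool:
    """Stand-in for the real unit: passes until an (unknown) breaking point."""
    BREAKING_POINT = 6.5  # hidden property the test is trying to discover
    return stimulus < BREAKING_POINT

def find_margin(step: float = 0.1, ceiling: float = 20.0):
    """Raise the stimulus from the spec limit until the unit fails."""
    stimulus = SPEC_LIMIT
    while stimulus <= ceiling:
        if not unit_under_test(stimulus):
            return stimulus, stimulus - SPEC_LIMIT  # breaking point, margin
        stimulus += step
    return None, None  # never failed within the tested range

breaking_point, margin = find_margin()
```

A narrow reported margin is the actionable output: it tells the organization that a small excursion beyond the specified conditions breaks the product, which is exactly the rational-decision input the paragraph above describes.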
Testers and Design Limits
To further understand the design limits of the product, we may employ HALT (Highly Accelerated Life Testing). Using this technique we actively and deliberately find the product’s weaknesses (which usually means making the product fail). HALT is directed at product reliability, the ability of the product to perform over time and across a variety of stimuli. The product is pushed beyond the stresses expected during use. In that way we learn about the product’s potential weaknesses and can make a conscious decision to address (remediate) or neglect (ignore) each one. Not all failures are created equal, and not all require attention.
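In spirit, a HALT campaign is a step-stress loop run over several stimuli at once, recording the first level at which each stress breaks the unit. The stress types, step sizes, and failure model below are illustrative assumptions; real HALT drives environmental chambers and shaker tables, not code.

```python
# Sketch: a HALT-style step-stress loop over several stimuli. All values
# here are assumptions for the demo, not any real product's limits.

# Hidden failure thresholds the campaign is trying to discover (assumed).
FAILURE_AT = {"temperature_C": 110.0, "vibration_g": 30.0}

# Start each stress at the worst case expected in service, then step upward.
SERVICE_LIMIT = {"temperature_C": 70.0, "vibration_g": 10.0}
STEP = {"temperature_C": 10.0, "vibration_g": 5.0}

def survives(stress: str, level: float) -> bool:
    """Stand-in for exercising the unit at a given stress level."""
    return level < FAILURE_AT[stress]

def halt(max_steps: int = 20) -> dict:
    """Step each stress past its service limit until the unit fails."""
    weaknesses = {}
    for stress, level in SERVICE_LIMIT.items():
        for _ in range(max_steps):
            level += STEP[stress]
            if not survives(stress, level):
                weaknesses[stress] = level  # first level at which it failed
                break
    return weaknesses

found = halt()
```

The output is a map of weaknesses found well beyond service conditions; for each one the organization then makes the remediate-or-ignore decision the paragraph above describes, since a failure far outside any plausible use may legitimately be left alone.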
I submit that testers do not always break the product. But to say that testers, software or hardware, never break the product is not entirely accurate. The answer is: it depends. It depends upon the company. It depends upon the risk. It depends upon the industry. If we define “break” as a failure to perform as expected unless there is technical intervention (reconfigure, reset parameters, download the software again), then sometimes testers, even software testers, do break the product.