Testing and numbers

We take a time out from the configuration management discussion. The news shows numerous companies with field quality problems, and we cannot help but think of the discussions we have had with colleagues about how many organizations handle their product testing. Testing, done right, is a leading indicator of product quality: with some effort, it is possible to “predict” the quality of the product in the field. A lagging indicator, the opposite of a leading indicator, means you learn about product quality only after you launch the product and have production volumes with the associated problems.
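As a rough illustration of how test results can serve as a leading indicator, the sketch below (our own, not from the post) projects a per-unit failure rate measured in testing onto a planned production volume, and uses the classic “rule of three” upper bound for zero-failure tests. All sample sizes, failure counts, and volumes are hypothetical.

```python
# Sketch: using test results as a leading indicator of field quality.
# Assumes failures are independent and the test sample is representative
# of production; all counts below are hypothetical.

def failure_rate_estimate(failures: int, units_tested: int) -> float:
    """Point estimate of the per-unit failure probability."""
    return failures / units_tested

def rule_of_three_upper_bound(units_tested: int) -> float:
    """Approximate 95% upper bound on the failure rate when a test of
    n units observed zero failures (the 'rule of three': 3/n)."""
    return 3.0 / units_tested

def predicted_field_failures(rate: float, production_volume: int) -> float:
    """Project a tested failure rate onto the planned production volume."""
    return rate * production_volume

# Hypothetical test campaign: 4 failures in 200 units tested.
rate = failure_rate_estimate(4, 200)           # 0.02
print(predicted_field_failures(rate, 50_000))  # 1000.0 predicted field failures

# A hypothetical zero-failure test of 150 units still only bounds the
# rate at 3/150 = 2%, so "no failures in test" is weak evidence alone.
print(rule_of_three_upper_bound(150))          # 0.02
```

The point of the sketch is the direction of inference: test data gathered before launch, not warranty claims gathered after it, drives the prediction.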

A few of the ways the testing process can fail to deliver are listed below:

1. Insufficient time and resources spent testing
2. Informal handling of fault reports (Excel sheets hidden on somebody’s laptop)
3. Disbelief in the testing results (that could never happen)
4. Testing results treated like opinion (even specific measured data)
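The second item above has a straightforward remedy: fault reports belong in a shared, structured log, not in a private spreadsheet. The sketch below is a minimal version of such a record; the field names are our own hypothetical choices, not a standard schema.

```python
# Sketch of the alternative to fault reports hidden in spreadsheets:
# a minimal, shared, append-only fault log in CSV form.
# Field names are hypothetical, not a standard schema.

import csv
import os
from dataclasses import dataclass, asdict, fields

@dataclass
class FaultReport:
    report_id: str
    date_found: str    # ISO date, e.g. "2024-05-01"
    test_phase: str    # e.g. "design verification"
    failure_mode: str
    severity: str      # e.g. "critical", "major", "minor"
    status: str        # e.g. "open", "verified", "closed"

def append_fault(path: str, report: FaultReport) -> None:
    """Append one fault record to a shared CSV log, writing the
    header row the first time the file is created."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(FaultReport)]
        )
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(report))
```

Even a log this simple, kept somewhere the whole project can see it, lets fault trends be counted and tracked rather than lost on somebody’s laptop.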

One (seemingly) ubiquitous way the process fails is insufficient time and resources applied to this critical set of activities. We see projects that allow late deliveries of the requirements and the design iterations, yet the testing must be on time. The late deliveries mean the time allowed for testing has been reduced from the original plan, and the end date, of course, does not change. Projects that make this decision should have significant quality contingency money reserved.

Another common failure is to deny that the failure would ever happen in the field. Invariably, we will find the failure in the field—with the same failure modes and volumes as our testing results suggest.

My favorite failure is to dispute the test results as if they were opinions and not the results of a thorough and well-executed test suite. Even when the failure information includes sound and relevant mathematical analyses, we see people act as if the results were personal opinions. This scenario is not the same as the previous failure, where we at least acknowledge the failure is possible. In this case we refute the test results by calling them opinion, which is a form of psychological denial.

Sometimes it is not possible to test every permutation of the product—to do so would take so much time that the product would be obsolete at completion of testing. That does not mean sidestepping the testing process or spontaneously repudiating the test results. Such a mentality sets your project and your business up for failure and potential litigation. Testing is not the last thing you do for your project; you should be conducting your testing during the whole of the product development cycle and learning something about the capabilities of the organization as well as the product. We do well for our customers and for ourselves when we take the time to do things right!
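The impossibility of testing every permutation is easy to see with a little arithmetic. The sketch below (our own illustration, with hypothetical numbers) counts the full-factorial test cases for a modestly configurable product.

```python
# Sketch of why exhaustive testing is infeasible: the number of full
# configurations grows exponentially with the number of options.
# The option counts below are hypothetical.

from math import prod

# Say the product has 10 independent settings with 4 values each.
option_values = [4] * 10
full_factorial = prod(option_values)  # 4**10 test cases

print(full_factorial)  # 1048576

# At even one minute per test case, running around the clock:
minutes = full_factorial * 1
print(minutes / (60 * 24 * 365))  # just under 2 years of testing
```

Ten settings is a small product; a few more options or values pushes the count past any feasible schedule, which is why test selection (not test avoidance) is the real engineering problem.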

Post by admin

2 Responses to Testing and numbers

  1. PeterB

    It is great to see someone from outside your own field acknowledge that problems can occur at many levels and add support to their efforts. I will return the favour by saying configuration management is a major cause of defects and other problems with testing when it isn’t done right and mainly for the same reasons quoted in your blog. Let’s stick together because some people fail to realise a project is a team. Cheers.

  2. Prahlad S Nagarkoti

It is fair to say that many do not value the testing of a product and take the test results as mere opinion about the product's performance. If we think about this, we find some aspects that drive people to deny test results.
    1. Is the management process capable enough to act on a fault (test result) at the right time? If the declaration of a failing (NG) test result is delayed, there is no meaning to the testing done.
    2. Is the manufacturing process designed with enough traceability to detect the affected product range? Much management thinks it is not good to reject a whole lot on the basis of test results, and so ignores the test results.