Creating a separate software test group has pluses and minuses. At a minimum, we may have more to manage.
Some of the minuses are a product of human nature. When we know our work will be inspected, we often assume the inspector will catch any issues and pay less attention to them ourselves. We may also see contention between the software development group and the test group, provoking competition rather than cooperation (test-driven development being part of the cure for that). Often, project schedules will try to compress the software product testing, leading to burnout and frustration on the part of the test team.
The primary plus is that we have a quasi-independent method for verifying the quality of our software. The test group must be under separate management from the development group (in fact, the management split should occur as high in the organization as practical). The goal of the test group is the elicitation of software defects through intense testing.
If the testing is sufficiently intense, the software will begin to show defects that the development people think are specious; for example, “This will never happen in the field.” Such comments miss the point. The point is that we have revealed a flaw in the software, regardless of the eliciting cause. In our experience, letting these defects remain in the code means we have just allowed a time bomb to remain in our product—a time bomb whose explosion date is often unknown. As test people, we must have the fortitude to say: “Your software sucks!” Of course, we follow our rude statement with an explanation of why we think the product is defective.