Archive for April, 2013

Fifth Dimension Testing

Posted on: April 30th, 2013 by admin No Comments

With fifth dimension testing, we use techniques more commonly employed by the “bad guys.” For example, we may execute:

      • Fuzzing
      • Response modification using genetic algorithms
      • Input breakdown
      • Overflows
      • Underflows

This approach allows for evolution of our test collection. We can automate a significant number of these tests if we have a comprehensive, programmed means for discerning that a failure has occurred.
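
As a minimal sketch of what an automated fuzzing pass might look like, assuming the product exposes some message interface: the send_message and unit_is_healthy helpers below are hypothetical stand-ins for whatever transport and failure oracle a real fixture would provide, not part of any specific product.

```python
import random

# Hypothetical helpers -- stand-ins for whatever interface the product exposes.
def send_message(payload: bytes) -> None:
    """Transmit a raw payload to the unit under test (stub)."""
    pass

def unit_is_healthy() -> bool:
    """Programmed failure oracle: returns False if the unit has faulted (stub)."""
    return True

def fuzz_campaign(iterations: int = 10_000, max_len: int = 64, seed: int = 1) -> list:
    """Throw random payloads (including deliberately oversized ones) at the unit
    and record every payload that the failure oracle flags."""
    rng = random.Random(seed)                  # fixed seed so any failure can be reproduced
    failures = []
    for _ in range(iterations):
        length = rng.randint(0, max_len * 2)   # allow lengths beyond the nominal maximum
        payload = bytes(rng.randrange(256) for _ in range(length))
        send_message(payload)
        if not unit_is_healthy():
            failures.append(payload)           # keep the payload so the test can be replayed
    return failures

if __name__ == "__main__":
    found = fuzz_campaign()
    print(f"{len(found)} failure-inducing payloads recorded")
```

The failure oracle is the critical piece: without a programmed means of deciding that something has gone wrong, the campaign generates stimuli but no information.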

Extreme Testing

Posted on: April 29th, 2013 by admin No Comments

            Extreme testing occurs when we deliberately “torture” both the hardware and software to see what happens under undesirable conditions. Some examples of extreme testing include:

  • Random voltages within the allowable voltage boundaries
  • Voltage slews
  • Deliberately introduced random noise on the data bus
  • Extremely high bus loading (over 80% and sometimes over 90%) to see how the embedded software handles message dropping
  • Cold and heat because variation in temperature can affect component performance, especially with LCD displays
  • Rapid switching of hardware switches
  • Slow voltage decay across the voltage boundary (which will sometimes cause a microprocessor “latch up,” wherein the micro ceases to function).

Extreme testing is another discipline where the test suite greatly benefits from the active imagination of the test engineer. This is where we tap into the creativity that can never be automated. Our list is only a subset of the possibilities. We have applied these types of tests for both hardware and software, especially in the embedded environment. In one clear case, we applied extreme cold to some instrumentation and discovered the LCD display did not meet the customer’s requirements. Since this event occurred fairly early in the product development life cycle, we were able to recover by adding a display heater. Yes, we think it unlikely a truck cab will be operated at -40 °C, but we had a requirement to meet and the test clearly indicated a product weakness. In another instance, we operated a hand drill that produced sparks when switched on and off directly on top of the product to see whether the product was immune to EMC transients.

We have had software managers complain that our tests have been so extreme that the defect would never be seen in the field. That is not the point. The point is that somewhere in that software we have a weakness, and the sooner we eliminate the weakness, the better our code. With software, it does not really matter how realistic our test scenario is; what matters is that we have found another failure mode. This is analogous to the approach we take with the highly accelerated life testing (HALT) we use in pure hardware testing.
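
For illustration, here is a minimal sketch of how a couple of the stimuli from the list above might be automated against a programmable bench supply. The set_supply_voltage and unit_is_healthy helpers are hypothetical placeholders for whatever bench equipment and fault detection are actually in use.

```python
import random
import time

def set_supply_voltage(volts: float) -> None:
    """Command the programmable bench supply (stub)."""
    pass

def unit_is_healthy() -> bool:
    """Return False if the unit under test has latched up or reset (stub)."""
    return True

def random_voltage_soak(v_min: float, v_max: float, steps: int, dwell_s: float, seed: int = 1):
    """Hold random voltages within the allowable boundaries and log any fault."""
    rng = random.Random(seed)
    faults = []
    for _ in range(steps):
        volts = rng.uniform(v_min, v_max)
        set_supply_voltage(volts)
        time.sleep(dwell_s)
        if not unit_is_healthy():
            faults.append(volts)
    return faults

def slow_decay(v_start: float, v_stop: float, step_v: float, dwell_s: float):
    """Slowly walk the voltage down across the boundary, hunting for latch-up."""
    volts = v_start
    while volts >= v_stop:
        set_supply_voltage(volts)
        time.sleep(dwell_s)
        if not unit_is_healthy():
            return volts          # the voltage at which the micro stopped responding
        volts -= step_v
    return None
```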

Stochastic (Exploratory) Testing

Posted on: April 28th, 2013 by admin No Comments

Stochastic testing occurs when we allow a reasonably well-seasoned test engineer to go with their “gut” and feel their way around the product’s performance. During the development of numerous embedded automotive products, we have seen stochastic testing elicit roughly the same number of test failures as combinatorial testing. We are not recommending that stochastic testing supplant combinatorial testing, but rather that both have a place in our overall test strategy and complement each other.

      It is important that we have a means for recording what we do during stochastic testing. Spectacularly successful tests should be added to the existing suite of test cases and reused. We have seen some situations where program managers, customers, and software engineers have been aghast as the original suite of test cases metamorphosed into the ultimate horror show, sometimes adding thousands of test cases. They may say “the product will never see that level of stimuli.” Then the test engineer walks them out to the field application of the product and makes the fault happen in a non-laboratory environment—demonstrating that it can and will indeed happen.  However, we expect and hope the test suite grows as the test engineers learn more about the behavior of the product. Certainly the code is not static as we proceed through the development cycle and we should not expect the test suite to remain static either. The same concept applies to hardware testing; in most cases, our first attempts at testing are more of a learning experience than anything else.

We can use automation to help us with some of our stochastic testing. Consider an automated test fixture that exercises the features (requirements) of a product. Typically we have the automated testing go through a defined sequence of test cases; in other words, we execute these test cases in a single sequence, with test case A followed by test case B. If our test cases are numerically identified, we can use a random number generator to arbitrarily sequence the order in which the test cases are executed. In this way we find connections within the modules that may cause failures in our product. When these tests are automated, we can record the sequence and any resulting erroneous performance.
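
A minimal sketch of that idea, assuming the test cases are already numbered and callable; run_test_case is a hypothetical hook into the fixture.

```python
import random

def run_test_case(case_id: int) -> bool:
    """Execute one numbered test case on the fixture; True means it passed (stub)."""
    return True

def run_randomized_sequence(case_ids, seed: int) -> dict:
    """Shuffle the test-case order, run it, and record the sequence plus any
    failures so an interesting run can be replayed exactly."""
    rng = random.Random(seed)            # recording the seed reproduces the sequence
    order = list(case_ids)
    rng.shuffle(order)
    failures = [cid for cid in order if not run_test_case(cid)]
    return {"seed": seed, "order": order, "failures": failures}

if __name__ == "__main__":
    result = run_randomized_sequence(range(1, 101), seed=42)
    print(result["failures"])
```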

We consider it normal that a suite of tests continues to grow during the product development life cycle. Often, the specification and the requirements change; it makes sense to expect the test suite to adapt to product changes as well as to our evolving understanding. We would be more concerned if the test cases did not grow!

Combinatorial Testing

Posted on: April 27th, 2013 by admin No Comments

            We know that a very simple product or system can generate a vast number of potential test cases. With more complex systems, this number becomes astronomical. This is the result of a factorial calculation! One technique we use to get around this problem originates with designed experiments. Many designed experiments are based on orthogonal arrays of test values. Two-level testing for each input and output is often represented with a matrix of zeros and ones or of pluses and minuses to indicate the levels. More levels are possible, but increasing the levels increases the number of test cases exponentially. With digital inputs, two levels are adequate.

In essence, we will stimulate the inputs according to the recipe from any given row of our matrix and observe the behavior of the outputs. This means we must understand what the correct outputs are and be able to measure them. Furthermore, for each set of specified inputs, we must also understand which behaviors we expect to see; any deviation from these is an anomaly that prudence dictates we explore.
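
As a toy illustration of the recipe idea, consider three two-level inputs exercised with the classic L4 orthogonal array (four rows instead of the eight a full factorial would require). The apply_inputs and read_outputs hooks are hypothetical, and the expected-output table would come from the specification.

```python
# L4(2^3) orthogonal array: every pair of columns contains each of the
# four combinations (0,0), (0,1), (1,0), (1,1) exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def apply_inputs(levels):
    """Drive the three inputs to the commanded levels (stub)."""
    pass

def read_outputs():
    """Measure the outputs we care about (stub)."""
    return ()

def run_array(expected_by_row):
    """Stimulate the inputs row by row and flag any deviation from expectation.
    expected_by_row maps each L4 row to the outputs the specification predicts."""
    anomalies = []
    for row in L4:
        apply_inputs(row)
        observed = read_outputs()
        if observed != expected_by_row[row]:
            anomalies.append((row, observed))
    return anomalies
```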

One alternative to the orthogonal array is pairwise testing, perhaps best exemplified by James Bach’s gratis Allpairs program. Allpairs takes input from a test file, most commonly generated from a spreadsheet, and creates a sequence of test cases that will exercise every pair. The amazing thing about Allpairs is that it holds up fairly well against commercial pairwise programs while remaining completely free! The pairwise approach has some weaknesses, for example:

    • It will only detect two-way issues
    • Higher-order interactions are generally invisible
    • It is not complete

Even with the issues that pairwise testing exhibits, we consider it to be useful if for no other reason than it provides us with a rational basis for more test cases.
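
A small sketch of the underlying idea (not the Allpairs program itself): given a set of candidate test cases, check whether every pair of parameter values is exercised at least once. The parameters and values below are invented purely for illustration.

```python
from itertools import combinations, product

# Hypothetical parameters and values, purely for illustration.
parameters = {
    "ignition": ["off", "on"],
    "display": ["day", "night"],
    "speed_source": ["wheel", "gps", "none"],
}

def uncovered_pairs(test_cases):
    """Return every value pair (across any two parameters) not exercised by the suite."""
    names = list(parameters)
    missing = []
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            if not any(tc[a] == va and tc[b] == vb for tc in test_cases):
                missing.append(((a, va), (b, vb)))
    return missing

# Usage: feed in the proposed suite (each test case maps parameter -> value).
suite = [
    {"ignition": "on", "display": "day", "speed_source": "wheel"},
    {"ignition": "off", "display": "night", "speed_source": "gps"},
]
print(uncovered_pairs(suite))   # pairs still needing a test case
```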

Compliance Testing

Posted on: April 26th, 2013 by admin No Comments

We have discussed compliance testing earlier; it is also known as testing to requirement. These requirements can be taken directly from a customer specification (when we have one), derived internally from a requirements review, or both. Compliance testing is the primary method we use to ensure that we are meeting all specifications and any contractual obligations, and it is therefore an important part of the project management activities. In the automotive industry, for example, we will verify that all gauges in the instrument cluster (the gauges on our dashboards) meet the pointer sweep requirements.
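
As a sketch of testing to requirement, suppose a specification stated that commanding full scale shall drive the speedometer pointer through a sweep of 240 degrees, plus or minus 3 degrees. The numbers and the command_gauge and measure_pointer_angle helpers are hypothetical, standing in for the real specification values and fixture.

```python
def command_gauge(percent_of_full_scale: float) -> None:
    """Drive the gauge input to the commanded level (stub)."""
    pass

def measure_pointer_angle() -> float:
    """Read back the pointer angle, e.g. from a vision fixture (stub)."""
    return 0.0

def test_pointer_sweep(required_deg: float = 240.0, tol_deg: float = 3.0) -> bool:
    """Verify the full-scale sweep requirement: pass/fail against the specification."""
    command_gauge(0.0)
    zero = measure_pointer_angle()
    command_gauge(100.0)
    full = measure_pointer_angle()
    sweep = full - zero
    return abs(sweep - required_deg) <= tol_deg
```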

Compliance testing has the defect of only testing to specification requirements and may provide an inadequate measure of the true performance of the system. Compliance testing will not show us how atypical or non-specified interactions with the product will impact performance or product longevity. Additionally, it can be difficult to capture in a specification all of the unwanted behaviors the product must not exhibit. To do so would amount to documenting all the things the product must do as well as all the things the product must not do. It is not very efficient, for example, to take time to identify all of the scenarios in which the product must not lock up; it should not lock up under any conditions. We expect our test engineers to endeavor to discern where the product fails rather than trying to prove that the product works! This idea leads us to our next phase of testing.

Effective Product Testing NOT a One Trick Pony

Posted on: April 25th, 2013 by admin 4 Comments

To perform embedded software testing, we recommend five phases of testing. These phases may take place concurrently and are as follows:

    • Compliance testing
    • Combinatorial testing
    • Stochastic (or exploratory) testing
    • Extreme testing
    • Attack mode

Each of these methods provides some view of the product that the other methods do not. Of course, testing the product is not the only way we can ensure quality; we have already discussed some of those other tools and techniques in previous blog posts. We turn our attention toward effectively testing the product in the next blogs, each post describing one item from the list above.

Alternatives to Spot Checking

Posted on: April 24th, 2013 by admin No Comments

We do not need to suffer the risks of spot checking; we have a number of alternatives. For example:

      • Modeling and simulation
      • Iterative testing throughout the development
      • Launch AND continue test and verification
      • Launch and continue test and verification with a reduced volume of customers

By relying upon modeling and other forms of simulation, we can move away from product testing as our only verification activity. To be effective, though, we must have accurate models, not models based upon wishful thinking or hope.

We should have been delivering increments of the product to testing for evaluation prior to the launch date. Monitoring key attributes (faults found, severity of the faults found) helps clarify whether spot checking is ludicrous speculation or a sound wager.

Just because we have a hard date to deliver the product does not necessarily mean the testing must stop. Ideally, you should have a good idea of what you are delivering before you release it upon your customers. That is the ideal state. The less-than-ideal approach would be to keep testing even after launch to better ascertain the quality of your proposed delivery. In this case you may find quality problems before the product goes through the logistical channels and makes it to the customer.

We can couple the previous example with a delivery to only certain customers, reducing the product’s field exposure before we have concluded our testing. We can even choose these customers based upon attributes such as their understanding or a key relationship in which we treat them better than some other customers.

Any of these examples would be a better solution than spot checking when evidence indicates (as shown in the previous blog post) that spot checking is foolhardy.

Spot Checking Versus Testing

Posted on: April 23rd, 2013 by admin No Comments

We should guard against our organization becoming enamored of the idea of “spot checking”. Spot checking, if done at all, should be based upon facts, calculations, and historical record, not upon the logistical inconvenience of a hard date. Spot checking is when we perform a cursory review of the product. Some folks call this testing, but in my mind it is not. It is calculated risk-taking at best, and recklessness at worst. Let me explain via an example.

Suppose you have an update to the software that has nothing to do with the core or operating system portions, but rather with a minor function that tangentially touches other software modules. That may encourage you to test only the part that has changed and not perform any regression testing or test beyond the changes that were deliberately made to the software. The assumption is that we made no other errors in the build or the compile, that all appropriate software modules were included, and that nobody “fat-fingered” anything. So you can see that our decision to spot check in this case is fraught with some uncertainty. Maybe we will get lucky.

Let us take that example a step further. Suppose we are receiving this software from a supplier with which we have a historical record. Suppose this supplier has routinely and consistently delivered software to our organization that has problems. Suppose further that the software revision being proposed for a spot check is meant to address a warranty problem found in the present iteration of software. The problem we are correcting was found not in testing but in the field. This present iteration of software has had only three or so months of production or customer exposure when a warranty problem was found, necessitating this new “maintenance” release. Is this really a good time to spot check? We know our previous release had problems. We also know the supplier has sent us problematic software in the past, so much so that we need a new release to fix the last release.

The probability that this will be a successful release is looking fairly slim. Without large-scale changes at the supplier that would mitigate our previous track record, it seems ludicrous to think a spot check is sufficient. We have all of the risks from the first part of the example (the software handling, the build, the compile, and fat fingers) plus all of the “risks” we pretend not to know about from the past track record.

Spot checking is not a good technique. It may be a necessary strategy at times, but that strategy should be based upon calculated, well-reasoned, and sound arguments. In the first part of the example, spot checking could perhaps be a rational approach. What is at stake? What does our previous track record tell us? Spot checking should be the odd exception rather than the hard and fast rule. In the last example, you can see that the decision to spot check is fraught with potential problems. Our next blog will provide some alternatives to spot checking.


Project Management and Verification

Posted on: April 22nd, 2013 by admin No Comments

We have briefly discussed why verification is important to the product quality. Verification does not just address the product quality. Our project work requires verification as well. When we take on a project, we should have the scope articulated in a way that we can confirm that the project did indeed fulfill the objective.  As part of our project work we will compare the product as it is to this end objective.   Part of the termination activities of the project will be the confirmation (or refutation) that we have indeed met our contractual obligations.

Just as verifying the product solely at the end is insufficient, so too is it insufficient to verify only at the end of the project whether we have met the objective. The project manager and project team must develop metrics at the start of the project that will form the guiding light for our project actions. We provide a brief list of some possible metrics:

      • Product quality (ppm)
      • Product cost
      • Specific key product features or characteristics
      • Cost Performance Index
      • Schedule Performance Index
      • Schedule Variance
      • Cost Variance

We will know when we misstep by monitoring our present performance against the ideal. We will note any discrepancies that foretell failure and take controlling actions to put the project back on the trajectory to deliver as needed. If we cannot uncover a way to put the project back on track, we will then have to discuss the next course of action with the project sponsor and key stakeholders. We are in a position to have this discussion nearer to the problem rather than waiting until the end of the project to be surprised that we spent money, time, and talent without producing the expected results.
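
For illustration, the cost and schedule indices listed above reduce to a little arithmetic on planned value, earned value, and actual cost; the figures in the example below are invented.

```python
def earned_value_metrics(planned_value: float, earned_value: float, actual_cost: float) -> dict:
    """Standard earned-value calculations: indices above 1.0 (or variances above 0)
    mean we are ahead; below means trouble worth a controlling action."""
    return {
        "CPI": earned_value / actual_cost,       # Cost Performance Index
        "SPI": earned_value / planned_value,     # Schedule Performance Index
        "CV": earned_value - actual_cost,        # Cost Variance
        "SV": earned_value - planned_value,      # Schedule Variance
    }

# Invented numbers: $100k of work planned, $80k of work actually done, $90k spent.
print(earned_value_metrics(100_000, 80_000, 90_000))
# CPI ~ 0.89 and SPI = 0.80, both signaling the project is off trajectory.
```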

Verification is also important for project management, and it is not confined to the end of the project as part of contract closure.

Securing the Quality of the Product / Project

Posted on: April 22nd, 2013 by admin No Comments

Requirements management and configuration management are required for anything that even closely resembles effective testing.  Experience suggests failing in these two areas unnecessarily complicates the product verification activities, and we will show some of those traumas in the next few posts. 

An iterative and incremental product development process calls for reviews throughout the development process.  There have been times when people did not understand this concept and have attempted to test the product at the end – just prior to launch. Guess what? That rarely works. You will usually find problems (and sometimes very bad ones) just prior to the time you were going to launch the product. Think about the hundreds or thousands of interactions and interpretations that have to happen just right to launch the product like that! Not probable! 

Just as the project manager monitors the schedule to evaluate whether the project is on time, so too should there be monitors in place to see whether the product quality will meet the requirement. To do this we must have continuous or recurring assessments over time to ascertain the quality.

We can do many other things to help understand the product quality. In previous blogs we have talked about requirements reviews and FMEAs. Each of these fills a role in securing the product quality, as do our change management and requirements management.

We should constantly critique both the steps to achieving the product and the product itself to understand the quality. In that way we are actively learning as we progress with the work rather than learning something at the end, when there is no time remaining to meet any challenge. Contrary to what many may believe, testing is NOT the final activity we will conduct upon the product prior to launch. Prudent development ensures we have reviews, evaluations, and testing to determine the quality (ask about our TIEMPO plan). We do more than just test, and not solely at the end of the development cycle, to assure the quality of the product.
