Archive for April, 2014

More on Stochastic Testing

Posted on: April 30th, 2014 by admin

We have referred to Stochastic Testing in an earlier blog post as an exploratory technique.  We provide a Google definition of exploration below:

Exploration – the action of traveling in or through an unfamiliar area in order to learn about it.

Stochastic testing is not compliance testing, even though we are exercising the product features against the requirements.  It is not extreme testing, in that we are not pushing the limits of the stimuli applied to the product.  Neither do we apply combinations of stimuli to the product, so referring to it as combinatorial testing is inappropriate.  Nor is it attack-mode testing.  We are in fact exploring the software structure, probing the interactions between the software modules via multiple and various routes.  We are learning how the software module interactions, or the sequences of states the software product may undergo in the real world, impact performance.  We are looking for unintended consequences in our software, for example in semaphore handling, interrupt handling, or specific register handling (UART enabling and the like).  We are looking for anomalies, but we cannot name in advance the specific symptom, nor the specific causal events or dependencies.  We are therefore employing a technique best described as exploratory.  Even if we have automated this testing (and we should), we are exploring nevertheless, and in a very efficient manner.

In fact, we can make use of the automation from our compliance testing, recycling it to perform some of this stochastic testing.  With an existing array of automated test cases, we can assign each a numeric designator, employ a random number generator over those designators, and so produce random sequences of test cases.  The random number generator is set up to yield values from 1 to n (the total number of test cases automated).  Each test case has a numeric identifier, and the generated number is the pointer to the test case that will be conducted next.  In this way we randomly exercise the product using our regression suite as the core, freeing up time for our thinking talent to push the product as they see fit.
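A minimal sketch of this randomized sequencing appears below, in Python.  The test-case names and the tiny harness are hypothetical stand-ins for a real automated regression suite:

```python
import random

# Hypothetical stand-ins for automated regression test cases; in practice
# each of these would invoke a real automated test against the product.
def test_uart_enable():    print("exercising UART enable")
def test_interrupt_mask(): print("exercising interrupt masking")
def test_semaphore_take(): print("exercising semaphore handling")

# Each test case is assigned a numeric identifier, 1 through n.
test_cases = {
    1: test_uart_enable,
    2: test_interrupt_mask,
    3: test_semaphore_take,
}

def stochastic_run(iterations, seed=None):
    """Run randomly selected test cases from the regression suite."""
    rng = random.Random(seed)        # seeded so a sequence can be replayed
    n = len(test_cases)
    for _ in range(iterations):
        identifier = rng.randint(1, n)   # the RNG is the pointer to the test
        test_cases[identifier]()         # execute the selected test case

stochastic_run(iterations=10, seed=42)
```

Seeding the generator and recording the seed is worthwhile: any random sequence that uncovers an anomaly can then be replayed exactly while debugging.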

As an aside, hops, used to make beer, are transported in burlap bags.  In that context there is indeed a connection between burlap and beer.

Testing Need Not Be Elaborate and Sophisticated

Posted on: April 16th, 2014 by admin

The olden days…

A long time ago (seemingly) I graduated from university with my engineering degree.  I was lucky: my first job was with a small company, and I performed many roles in developing their new product line.  The product was an embedded strand process control unit.  This unit would control older electromechanical systems used in managing fibers like Kevlar, fiber optics, fiberglass, sutures, and Lycra.  I set about understanding the requirements and designing the embedded system.  I picked the Intel 8051 controller and designed the reset circuitry, oscillator circuitry, external memory, and input/output addressing mechanisms.  I drew the design up, and a technician wire-wrapped the first prototype (that should give you an idea of how old I may be).

Development Testing

While the technician wire-wrapped the prototype, I set about with my simulator to develop the first parts of the software.  Eventually I received the prototype and was then able to use an In-Circuit Emulator (ICE) to continue the software development with the prototype as the target system.  Over time the wire-wrap board moved to a prototype PC board, and eventually all of the features were developed, with debugging and periodic testing along the way.  I became confident that the product was close to launching and handed it over to a person for in-house use to help “ferret out” any remaining problems.

Testing

One day, the technician came to me with a reported failure: the output from the embedded product that turned off the machine mysteriously quit working, and would start working again when the power was removed and the system restarted.  I spent much time in the lab with the prototype and the machine to which it interfaced, measuring and exercising the product in the “real environment”.  I had the oscilloscope on select signals for scrutiny on the prototype.  One of these was the ground signal between the prototype and the industrial equipment to which it was tethered.  I noticed that sometimes there would be a spurious signal coupled onto one of the circuits (now I cannot recall which one), and it was at these times that the interface integrated circuit (of CMOS construction) would not recover.  The input to the integrated circuit would change, but the output driving the relay to turn the industrial machine off did not.

Not Elaborate – Improvised Learning

It turns out that the buffer circuit was susceptible to a condition known as “latchup”.  I will not attempt to describe this condition, but I will say this was a great learning opportunity (one that took considerable hours).  Eventually we reworked the design to eliminate the possibility of a spurious signal returning to the unit in a way that could perpetuate another such condition.  To test this theory, I found an old drill that sparked and arced through the motor vents on its side.  While the prototype unit was working and interfaced with the industrial equipment, I laid the drill with the vent side toward the product and would quickly press the start button on and off, causing a small lightning storm above the prototype unit.  Upon passing this level of electromagnetic stimulus, I returned the unit to the industrial system for testing.  I could see no negative impact from the lightning storm above the prototype, and I saw no further coupling from the industrial unit onto the prototype.

Testing

The story is not about CMOS latchup.  It is mostly about product development and testing.  I saw a failure due to a transient coupled back into the product.  I found a convenient way to apply similar stresses (arguably greater than those experienced in the real world) to the product, one that did not require a great expenditure or a highly contrived solution.  While it is true that I cannot mathematically quantify the stimuli to which the unit was subjected, it is equally true that there were no further reports of the product exhibiting this “latchup” phenomenon.

Press Release: Conventional and Agile Project Management Comparison

Posted on: April 14th, 2014 by admin

Press Release:

Conventional and Agile Project Management Comparison

The Metrolina Chapter of the Project Management Institute in conjunction with Value Transformation will present a comparison of conventional project management with agile project management.

Hickory, NC — The NC Metrolina Chapter of the Project Management Institute, together with Value Transformation, is pleased to announce a chapter meeting to be held Thursday, April 24, 2014, 6:00 PM to 7:00 PM, in Hickory, North Carolina, at the CVCC Corporate Development Center.

The event will provide an explanation of agile project management methods in a conventional project management context.  The event will associate conventional project management practices and artifacts with their agile analogues.

An understanding of how an agile project achieves the same objectives as a conventional project reduces the misunderstanding associated with agile.  At one time, agile was relegated to software projects, or to projects involving a high degree of technology that may not have been mastered.  However, the tools and techniques of agile can help any business.

QR Code for PMI Hickory Chapter Event

The event will take place on Thursday, April 24, 2014, 6:00 PM to 7:00 PM, at the Catawba Valley Community College Corporate Development Center, 2664 Hwy 70 SE, Hickory, NC 28602.

About NC Metrolina Chapter of the Project Management Institute

The NC Metrolina Chapter is a not-for-profit organization and chapter of PMI, Inc.  Our mission is to advance and promote the practice and profession of project management among our chapter members and throughout the region, supported through the instillation of globally recognized standards and professional development opportunities.

To learn more about the NC Metrolina Chapter of the Project Management Institute, visit:  http://www.pmi-metrolina.com/

About Jon M. Quigley

Jon M. Quigley, PMP, CTFL, is a North Carolina resident and a principal and founding member of Value Transformation, a product development training and cost improvement organization.  Jon has an engineering degree from the University of North Carolina at Charlotte and two master's degrees from City University of Seattle, where he teaches classes via distance learning.

Author bio snapshot.

Jon is a member of SAE, ASQ, AIAG, and PMI, and he holds CTFL certification from ISTQB.  He is also the co-author of seven books on topics ranging from project management (including agile) to product testing and software development management.

Visit:  http://www.valuetransform.com

Simulation and Models in Product Development

Posted on: April 4th, 2014 by admin

Modeling Is Not As New As You May Think.

Models are not new, and neither are models in the employ of product development.  Product development has always had some basis in discovery and always will.  If everything had a high degree of certainty, the product or endeavor has likely already been done.  Developing new things ceaselessly brings questions.  To be effective, we want to answer these questions as quickly and as certainly as possible.  Furthermore, in the course of answering the questions, we often discover more prudent questions to ask.  In the olden days, we would create the product, learn from it, and modify it until we asymptotically approached success or perfection.

Model-based development is not that new, though.  It was not ushered in when computers entered our post-industrial era.  In fact, anybody who lives in North Carolina likely knows the story of the Wright brothers (yes, I know they were from Ohio, a fact my wife will never let me forget).  The brothers had begun building their understanding of wing performance upon the work of Otto Lilienthal.  Lilienthal, by the way, died of injuries sustained in a glider accident.  No doubt this influenced how the Wright brothers proceeded in their product discovery exercises.

Kites and Gliders.

In the brothers' initial work with kites and gliders, they noticed a difference between the theoretical calculations and the actual performance of the model or kite.  That is, the kite did not react as predicted by calculation.  They knew then that something was amiss, and the doubts they had required exploration.  An example of the experiments they performed can be found here.  To that end, the brothers decided to measure and find the real answers for themselves.  They set about creating a wind tunnel, along with tools and techniques that would allow them to understand, on a smaller and more controllable scale, those things that matter to achieving flight.  This arrangement made it possible for the brothers not only to arrive at an understanding of lift and drag, but also to critique a variety of wing geometries to determine the best solution, all without risk to life and limb.  This was in the year 1901.

Learn Quickly

This was not meant to be a story about those who were part of the discovery of flight, but their story reflects a long-ago instance where models were used to understand how things work.  My guess is that as far back as we can measure, we would see similar instances.  Perhaps the first spears and nets followed a similar process of discovery.

Today, with high-powered computers and the ability to create mathematical models of components, not just physically but also from a performance perspective, we are able to learn much more, much faster.  We can model entire vehicle systems before we produce the first prototype parts.  We can gather environmental data through instrumentation of existing vehicles while they undergo typical or stressful stimulation.  We can even gather vehicle use information from telemetry systems on a product or customer vehicle, allowing us to modify or adapt future vehicle systems based upon real data from use.
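For the concrete-minded, here is a toy sketch in Python of such a performance model: a vehicle coasting down against aerodynamic drag and rolling resistance, integrated with a simple Euler step.  Every parameter value is an assumption chosen for illustration, not data from any actual vehicle program:

```python
# Illustrative longitudinal vehicle model: coast-down against aerodynamic
# drag and rolling resistance. All parameter values are assumptions.
def coast_down(v0=30.0, mass=1500.0, rho=1.225, cd=0.30, area=2.2,
               c_rr=0.012, g=9.81, dt=0.1, duration=30.0):
    v, t = v0, 0.0
    samples = []
    while t < duration and v > 0:
        drag = 0.5 * rho * cd * area * v * v   # aerodynamic drag force, N
        rolling = c_rr * mass * g              # rolling resistance force, N
        v += -(drag + rolling) / mass * dt     # Euler integration of a = F/m
        t += dt
        samples.append((t, v))
    return samples

for t, v in coast_down()[::50]:   # print roughly every 5 seconds
    print(f"t = {t:5.1f} s, v = {v:5.2f} m/s")
```

Measured coast-down data from an instrumented vehicle could then be compared against these predictions, adjusting the drag and rolling-resistance coefficients until model and measurement agree.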

Physical World and The Product

That is not to say that model-based development is without difficulty, or that it provides instantaneous understanding.  We must have a way to generate the models; in this context, we are referring to an understanding of the physical world the product will inhabit.  That requires an understanding of the variations possible, both in system configuration and in the external stimulation, or collection of stimuli.  We must understand those attributes that are connected to the desired outcome, so that we can accurately predict the outcome.  Below find a graphic that shows how this works[i]:

Simulation Process

Success is not due to randomly generating models via a plethora of assumptions.  It comes from learning and understanding the key variables in the environment (in the context of the Wrights' work: lift, drag, weight, wing shape, and angle of attack) and then mimicking those in some manner of simulation (the Wrights' wind tunnel and models of wings).  We will want to compare the real-world reaction to our model predictions and understand and resolve the difference, and we will want to learn as quickly and clearly as possible.  To accomplish this goal, we will measure these real-world attributes.  Our understanding of the real world will make it possible for us to design a product that performs how we want it to in the context of the environment in which it will reside.
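To make that compare-and-resolve step concrete, below is a small Python sketch using the standard lift equation, L = ½ρv²SC_L.  The coefficient values and the “measured” figure are invented for illustration; they are not the Wrights' data:

```python
# Compare a model's predicted lift against a (hypothetical) measured value.
# All numbers here are invented for illustration.
RHO = 1.225  # air density at sea level, kg/m^3

def predicted_lift(v, wing_area, c_lift):
    """Standard lift equation: L = 0.5 * rho * v^2 * S * C_L (newtons)."""
    return 0.5 * RHO * v**2 * wing_area * c_lift

prediction = predicted_lift(v=12.0, wing_area=4.8, c_lift=0.6)
measured = 240.0  # hypothetical wind-tunnel measurement, N

discrepancy = measured - prediction
print(f"predicted {prediction:.1f} N, measured {measured:.1f} N, "
      f"difference {discrepancy:+.1f} N")

# A large, consistent discrepancy is the signal to revisit the model's
# assumptions -- essentially what led the Wrights to re-measure for themselves.
```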

Learn more about simulation and prototype parts.


[i] Pries, K., & Quigley, J. (2011). Scrum project management. Boca Raton, FL: Taylor and Francis Group, page 134.
