by: Wally Stegall and Jon M Quigley
The reason for prototype parts is to learn something about the product before we spend larger amounts of money on future product development. We want to know things that are not readily knowable through our immediate engineering work. The longer it takes us to learn, the longer our project and product are at risk and the later we deliver. Eliminating the prototype critique without some other suitable substitute (such as modeling) is equally poor. Understanding and measuring key attributes or critical characteristics is essential; it is the entire point of the prototype. Accomplishing this critique quickly requires:
1. Identify objectives of the prototype (questions requiring answers)
2. Identify key personnel
3. Identify activities (tests and product critiques)
4. Plan those activities
a. Prioritize the activities
b. Map human talent to activities
c. Sequence the activities
5. Monitor and follow up
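The planning steps above can be sketched in a few lines of code. This is a minimal illustration, not a tool from the original text; the activity names, priorities, and skill assignments are all hypothetical.

```python
# Sketch of steps 4a-4c: prioritize prototype activities, map human
# talent to each one, and sequence the work by priority.
# All names, priorities, and skills below are hypothetical examples.

activities = [
    # (activity, priority: lower number = more urgent, required skill)
    ("EMC pre-scan",        2, "hardware"),
    ("Firmware boot check", 1, "software"),
    ("Thermal soak",        3, "hardware"),
]

talent = {"hardware": "Angie", "software": "Tom"}

# 4a prioritize / 4c sequence: order the activities by priority
plan = sorted(activities, key=lambda a: a[1])

# 4b: map human talent to each activity
schedule = [(name, talent[skill]) for name, _, skill in plan]

for name, owner in schedule:
    print(f"{owner}: {name}")
```

Even a simple ordered list like this makes the monitoring in step 5 concrete: each activity has an owner and a place in the sequence.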
Based on the critical characteristics and a frank assessment of what is known and unknown (and needs to be known), an outline of actions and tests can be defined to fill in the knowledge gaps. These become the objectives we wish to achieve with this prototype part.
The prototype review requires a dynamic technical interaction across the span of the development groups involved. To be effective, we require key individuals from the teams spanning concept, subsystems, hardware, software, product, and end user. This sounds simple, right? Not really; considerable amounts of money and time are lost every day to the vacuum of responsibility. The activities planned may require the prototype to be broken down into sub-assemblies before going to the final prototype. With planning comes reserving time and resources. The core team planning the program must allocate the time and resources required for the tests and analysis prior to the prototype's availability. We must include in this planning where any potentially destructive tests may occur. We want to learn all we can about function and performance before we schedule the tests that may evoke a failure. This is especially valid when few prototypes are available.
Once the prototype has been delivered, we must monitor who is conducting what test. We must remain aware of dependencies in the use of the part. For example: Tom cannot start until Angie has completed her critique. In this case the dependency is due to material availability and not some technical or project need. If we run out of time, Tom's review of the prototype may suffer, we may not learn what we need, and our product is at risk.
If you think this is project management, you are largely correct, and that is where we can see things fall apart. Failure to define and follow steps similar to those above wastes money and does not reduce risk. Allowing a next-phase release without resolving the identified critical items and performance issues is professional negligence, but we will talk more about that in a later blog.