Archive for January, 2013

CM of Configuration Management

Posted on: January 31st, 2013 by admin No Comments

We have rarely seen this issue discussed. We are painfully aware of how much configuration management we often require because, in the embedded environment, we have to work with tools called “cross-compilers.” These tools allow the developer to work in a familiar programming language such as C and then produce executable code for the target processor, a processor that is often wildly different in architecture from the one on which we do the development. We have seen cases where we needed to manage the configuration of the compiler version itself, because subsequent changes to that compiler would no longer produce code that worked with the target processor.

What happens if we change our configuration management (CM) support software? Aren’t we in the same situation as we are with the cross-compiler? We suggest that, indeed, we are in an analogous situation. Hence, any time we upgrade or change our CM support software, we need to manage that change as well. That suggests we must follow the standard procedure:

• Identify the configuration, including components
• Create a meta-control system to provide an orderly transition
• Ensure we can provide status reports during transitions (configuration status accounting)
• Plan the audits from the beginning of the transition project
o Physical configuration audits to ensure we have documentation appropriate to both the old version and the new
o Functional configuration audits to verify the new version works as desired
• We should also create some spot checks to verify the integrity of the CM support software by evaluating the results against files we know to be difficult (e.g., immense bills of material, massive and complex drawings, etc.); see the sketch after this list
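
Here is a minimal sketch of such a spot check, under our assumptions: export the same known-difficult items from the old and the new CM tools and confirm that the contents match. The file names and paths are hypothetical.

    # Spot check during a CM tool transition: export the same "difficult" items
    # from the old and new CM systems and confirm the contents match.
    # A minimal sketch; the file paths below are hypothetical examples.
    import hashlib
    from pathlib import Path

    CHECK_ITEMS = [
        # (item name, export from old CM tool, export from new CM tool)
        ("Large BOM",       "old_cm/bom_full_vehicle.csv", "new_cm/bom_full_vehicle.csv"),
        ("Complex drawing", "old_cm/drw_10472_rev_C.pdf",  "new_cm/drw_10472_rev_C.pdf"),
    ]

    def sha256(path):
        """Hash a file so large exports can be compared without opening them."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def spot_check(items):
        failures = []
        for name, old_file, new_file in items:
            if sha256(old_file) != sha256(new_file):
                failures.append(name)
        return failures

    if __name__ == "__main__":
        bad = spot_check(CHECK_ITEMS)
        print("Spot check passed" if not bad else f"Mismatched items: {bad}")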

To our knowledge as of this writing, no tools exist to help with these transitions. This situation makes some level of sense because we could easily end up with an infinite regression of tools to manage tools to manage tools ad nauseam. The path to success here is traditional:

• Plan
• Do
• Check
• Act

Configuration Management and Mass Customization

Posted on: January 29th, 2013 by admin No Comments

We have discussed some issues regarding configuration management already, and we will continue to discuss this underlying topic in this blog; it is that important! Mass customization presents specific issues. Mass customization occurs when we set up our systems such that customers have the ability to request substantial modifications.

Mass customization works when we take some sensible steps. First, we must design our product for mass customization from the very start of development; post hoc customization tends to be graceless. Second, we must provide a core product that does not change; we might call this the “substrate” for the rest of the customization. Third, we should constrain the allowable alterations while still allowing the customer some level of freedom.

We see mass customization in the automotive electronics business, for example, when we allow the customer to specify which gauges they want, instrument cluster by instrument cluster. We can even set up our production test equipment to read the work order and verify that the cluster has the appropriate equipment installed.
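
A minimal sketch of that end-of-line check: compare the gauge content called out on the work order with what the tester actually detects in the cluster. The gauge names are hypothetical; a real tester would read the work order from the plant system and interrogate the cluster over a vehicle bus.

    # End-of-line check: compare the gauge content called out on the work order
    # with what the tester actually detects in the instrument cluster.
    # A minimal sketch with hypothetical gauge names.

    def verify_cluster(work_order_gauges, detected_gauges):
        """Return (missing, unexpected) gauge sets for a single cluster."""
        ordered = set(work_order_gauges)
        found = set(detected_gauges)
        return ordered - found, found - ordered

    work_order = ["speedometer", "tachometer", "fuel", "air_pressure_primary"]
    detected   = ["speedometer", "tachometer", "fuel"]   # air pressure gauge not installed

    missing, unexpected = verify_cluster(work_order, detected)
    if missing or unexpected:
        print(f"FAIL: missing={sorted(missing)}, unexpected={sorted(unexpected)}")
    else:
        print("PASS: cluster matches work order")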

The use of “apps” on computers, pads (tablets), and phones is another way to introduce mass customization. The company provides the substrate in the form of hardware, an operating system, and a base of apps and the customer supplies the rest.

How do we maintain configuration management in this embarrassment of choices? Industrially, we will use a software tool called a Configurator to help us. Each item has its own configuration. The Configurator ties into the main manufacturing resource planning (MRP) software so that orders occur as they always have, albeit with a higher level of variation. Alternatively, we may use a pull system and lean manufacturing to avoid having stacks of material. Just as a publisher will often print on demand (POD), our manufacturers will now customize on demand.
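
As a rough illustration of the configurator’s core job, the sketch below holds a fixed substrate, constrains the allowable options, and turns a valid customer selection into the bill of materials handed to MRP. The part numbers, option codes, and the example constraint are hypothetical.

    # A minimal sketch of what a configurator does: a fixed substrate, a constrained
    # option catalog, and validation of a customer's selections into an orderable BOM.
    # Part numbers and options here are hypothetical.

    SUBSTRATE = ["CLUSTER-HOUSING-001", "CLUSTER-PCB-BASE-002"]   # never changes

    OPTIONS = {  # option code -> part number(s) added to the BOM
        "tachometer":   ["GAUGE-TACH-010"],
        "air_pressure": ["GAUGE-AIR-011", "SENSOR-AIR-012"],
        "ambient_temp": ["GAUGE-TEMP-013"],
    }

    MUTUALLY_EXCLUSIVE = [("air_pressure", "ambient_temp")]  # example constraint

    def configure(selected):
        for code in selected:
            if code not in OPTIONS:
                raise ValueError(f"Unknown option: {code}")
        for a, b in MUTUALLY_EXCLUSIVE:
            if a in selected and b in selected:
                raise ValueError(f"Options {a} and {b} cannot be combined")
        bom = list(SUBSTRATE)
        for code in selected:
            bom.extend(OPTIONS[code])
        return bom   # this line-item list is what gets handed to MRP as the order

    print(configure(["tachometer", "air_pressure"]))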

The key concepts are:
• Substrate
• Flexibility always
• Designed from the start

Testing and numbers

Posted on: January 28th, 2013 by admin 2 Comments

We take a time out from the configuration management discussion. We see in the news numerous companies with field quality problems, and we cannot help but think of the discussions we have had with colleagues about how many organizations handle their product testing. Testing, done right, is a leading indicator of product quality: with some effort, it is possible to “predict” the quality of the product in the field. A lagging indicator, the opposite of a leading indicator, means you learn about product quality only after you launch the product and have production volumes with the associated problems.

A few of the ways the testing process can fail to deliver are listed below:

1. Insufficient time and resources spent testing
2. Informal handling of fault reports (Excel sheets hidden on somebody’s laptop; see the sketch after this list)
3. Disbelief in the testing results (that could never happen)
4. Testing results treated like opinion (even specific measured data)
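
As a contrast to item 2, here is a minimal sketch of the fields a shared, structured fault report might carry so results stay visible and traceable rather than hidden in a spreadsheet. The fields and values are illustrative, not a prescription.

    # The minimum a shared fault report record might carry so test results stay
    # visible and traceable rather than hidden in a spreadsheet. Fields are
    # illustrative, not a prescription.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class FaultReport:
        report_id: str
        product: str
        hardware_rev: str
        software_version: str
        test_case: str
        description: str
        severity: str                  # e.g., "critical", "major", "minor"
        date_found: date
        status: str = "open"           # open -> analyzed -> corrected -> verified
        occurrences: int = 1
        attachments: list = field(default_factory=list)

    fr = FaultReport(
        report_id="FR-2013-0042",
        product="Instrument Cluster",
        hardware_rev="B2",
        software_version="1.7.3",
        test_case="EMC-04",
        description="Display resets during immunity testing at rated stress",
        severity="major",
        date_found=date(2013, 1, 28),
    )
    print(fr)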

One (seemingly) ubiquitous way the process fails is insufficient time and resources applied to this critical set of activities. We see projects that allow late deliveries of the requirements and the design iterations, yet the testing must still finish on time. The late deliveries mean the time allowed for testing has been reduced from the original plan, and, of course, the end date does not change. Projects that make this decision should have significant quality contingency money reserved.

Another common failure is to deny that the failure would ever happen in the field. Invariably, we will find the failure in the field—with the same failure modes and volumes as our testing results suggest.

My favorite failure is to dispute the test results as if they were opinions and not the output of a thorough and well-executed test suite. Even when the failure information includes sound and relevant mathematical analyses, we see the organization act as if the results were personal opinions. This scenario is not the same as the previous failure, where we at least acknowledge the failure is possible; in this case we refute the test results by calling them opinion, a form of psychological denial.

Sometimes it is not possible to test every permutation of the product—to do so would take so much time that the product would be obsolete at completion of testing. That does not mean sidestepping the testing process or spontaneously repudiating the test results. Such a mentality sets your project and your business up for failure and potential litigation. Testing is not the last thing you do for your project; you should be conducting your testing during the whole of the product development cycle and learning something about the capabilities of the organization as well as the product. We do well for our customers and for ourselves when we take the time to do things right!

Configuration and Systems Engineering

Posted on: January 28th, 2013 by admin No Comments

by Kim H Pries and Jon M Quigley

Configuration management quality will have a significant impact on the system. If configuration management is necessary for component development, it is even more important when producing a collection of parts that make up a system. Developing and delivering a workable system to test, and subsequently delivering it to the customer, requires close attention to the configuration. Consider, for example, developing an electrical system for a car. We have a number of electronic control units (ECUs) that comprise the system. We deliver multiple releases of the vehicle systems during development. The individual ECUs have dependencies on the other ECUs.

For each component of the system, we will need to attend to its configuration elements. Poor configuration management will have a severe impact on system development. So how do we model this situation? The configuration will be hierarchical in structure, much like an organization chart or an indented bill of materials. For the most part, this model should be sufficient to establish basic dependencies. If we have more significant concerns about some components, we can also take the time to do a system-level failure mode and effects analysis (SFMEA) to try to avert potential interaction catastrophes. We can also use orthogonal arrays, similar to those in designed experiments, to provide test scenarios for stimulating the component subsystems, as well as individual arrays for each component to stimulate output signals, digital or analog.
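
As a rough illustration of the hierarchical model just described, the sketch below treats the vehicle system as an indented bill of ECUs, each with a planned software version and declared dependencies on its peers, and checks that a planned system release is internally consistent. The ECU names, versions, and dependency rules are hypothetical.

    # A minimal sketch of the hierarchical model described above: the vehicle system
    # as a bill of ECUs, each with its own software version and declared dependencies,
    # plus a check that a planned release is internally consistent.
    # ECU names and version numbers are hypothetical.

    SYSTEM_RELEASE = {             # ECU -> software version planned for this release
        "engine_ecu":       "3.2",
        "transmission_ecu": "2.9",
        "cluster_ecu":      "1.7",
        "body_controller":  "4.0",
    }

    DEPENDENCIES = {               # ECU -> minimum versions it requires from peers
        "cluster_ecu":      {"engine_ecu": "3.1", "body_controller": "4.0"},
        "transmission_ecu": {"engine_ecu": "3.2"},
    }

    def version_tuple(v):
        return tuple(int(x) for x in v.split("."))

    def check_release(release, dependencies):
        problems = []
        for ecu, needs in dependencies.items():
            for peer, min_version in needs.items():
                have = release.get(peer)
                if have is None or version_tuple(have) < version_tuple(min_version):
                    problems.append(f"{ecu} needs {peer} >= {min_version}, release has {have}")
        return problems

    for line in check_release(SYSTEM_RELEASE, DEPENDENCIES) or ["Release is consistent"]:
        print(line)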

Lifecycle management software acts much like software configuration management software but on a much larger scale. In effect, we have a controlled “bucket,” which allows us to check in all documents related to a specific project. Can we still make mistakes? Of course! Like anything else, the proper use of our tools requires individual commitments as well as a strong management commitment to the configuration management process. Take a look at http://www.oracle.com/us/products/applications/agile/index.html for an example of lifecycle management software (we have no business involvement directly with Oracle, other than as users of some of their products).

Let’s review our tools, then:
• SFMEA
• Bill of materials at high level
• Lifecycle management software

Why did I learn nothing from my ONLY prototype?

Posted on: January 23rd, 2013 by admin 1 Comment

by Jon M Quigley and Wally Stegall

This post is a flashback to the earlier series about prototypes (http://www.valuetransform.com/planning-prototype-parts). A recent event reminded me of one other area we did not cover in this series. Such is the way of the blog.

Consider the organization that decides to limit the number of prototype parts used for the assorted verification activities as a cost-saving measure. This is not a bad idea; however, there is merit in the saying “penny wise and pound foolish,” hence this blog post. If you take the above approach, it is prudent to address the associated risks.

Consider an organization working a development project with limited attention to, and control of, prototype handling and testing. They start the prototype testing with a harsh bulk current injection (BCI) stimulus. Upon conclusion of one of the tests, we find the product is no longer functional. Thus the very start of testing ends any further learning from that prototype. We have learned something important (the failure); however, we now have to wait for another prototype to be built and delivered (delays and money), and we learned nothing from the non-destructive tests. For example, we could have executed performance tests on that prototype part first, exercising the basic functions of the product. That way we would have learned other things before we turned our prototype part into a nonfunctioning brick. We see similar problems when early prototype parts (parts unable to meet durability requirements) are pressed into durability testing, for example, a stereolithography part on a durability vehicle. The part cannot stand up to the vibration and mechanical shock of this test and is destroyed.
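
What follows is a minimal sketch, under our assumptions, of the sequencing discipline described above: rank the planned tests by how likely they are to destroy the prototype and run the benign ones first. The test names and risk rankings are illustrative, not a prescription.

    # Sequence prototype tests so non-destructive learning comes before potentially
    # destructive stimuli. A minimal sketch; test names and rankings are examples.

    TEST_PLAN = [
        # (test name, destructiveness: 0 = benign, 1 = stressing, 2 = potentially destructive)
        ("Bulk current injection (BCI)", 2),
        ("Basic functional check",       0),
        ("Performance at nominal load",  0),
        ("Thermal cycling",              1),
        ("Overvoltage / load dump",      2),
    ]

    def ordered_plan(tests):
        """Run the benign tests first so one harsh test cannot end all learning."""
        return sorted(tests, key=lambda t: t[1])

    for name, risk in ordered_plan(TEST_PLAN):
        print(f"risk {risk}: {name}")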

The solution, as in the earlier posts, is again people. The prototype part purchase and testing must consider the objective of the part as well as the prioritized steps to achieve that objective. There is nothing wrong with destroying the prototype in testing; this, too, teaches us something about the potential product. However, the best solution is not to destroy the product right at the start, thereby eliminating the ability to learn anything else the part has to tell us. You do not need more risk in your project!

When Configuration Management Goes Wrong

Posted on: January 23rd, 2013 by admin 1 Comment

We have seen situations where poor configuration management has led to embarrassing situations with customers. In one case, a supplier shipped parts to a relatively new customer in which neither the hardware revision nor the software version was known. The parts arrived at the customer location for a demonstration by a senior sales manager, and none of them worked! Consequently, the senior sales person was left blowing in the wind, and this new customer was less than impressed. Ultimately, the customer dropped the product, representing a waste of marketing, sales, and engineering development time, as well as early production runs.

When working with software, it is incumbent on the developers to assist the management process by including internal documentation of the code (by the way, we consider internal documentation to be significantly more important than external documentation). We have seen a developer come upon some historical code that was ripe for renascence and end up wasting days and weeks trying to resuscitate an unknown version with unknown capabilities! Again, we see wasted effort, wasted time, and wasted revenue. Development tools must also come under configuration management, especially when we are dealing with cross-compilers and other peculiarities of the embedded development world.

Configuration management also has the benefit of letting the test groups know what they are really testing. When a test group is unsure how to configure the software or the system, many hours are wasted. We test things that are not yet included in an iteration of the software. If this is part of a larger system, the problem gets much larger, because we are unsure which features the system supports and therefore do not know what should be tested. We waste time testing features not yet developed; we waste more time writing fault reports against them. Without configuration management, our test engineers may not know what to configure, or how to configure the product for the tests. Of course, the test group should also have their test tools and plans under configuration management. At the start of the test, the test engineer should note the relevant part numbers (hardware and software), providing traceability back to the configuration management plan. When all is well, the test group will be able to execute promptly and present a meaningful report upon conclusion.
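
A minimal sketch of the note the test engineer might capture at the start of the test: the hardware and software part numbers of the unit under test and the versions of the test tools, so the final report traces back to the configuration management plan. All identifiers below are hypothetical.

    # The note a test engineer might capture at the start of a test so the report
    # can be traced back to the configuration management plan. Identifiers are
    # hypothetical examples.
    from datetime import datetime

    def test_header(unit_hw_pn, unit_hw_rev, unit_sw_pn, unit_sw_version,
                    test_plan_id, tool_versions):
        return {
            "recorded":        datetime.now().isoformat(timespec="seconds"),
            "unit_hw_pn":      unit_hw_pn,
            "unit_hw_rev":     unit_hw_rev,
            "unit_sw_pn":      unit_sw_pn,
            "unit_sw_version": unit_sw_version,
            "test_plan_id":    test_plan_id,
            "tool_versions":   tool_versions,   # test tools are under CM as well
        }

    header = test_header(
        unit_hw_pn="20512345", unit_hw_rev="P03",
        unit_sw_pn="20898765", unit_sw_version="1.4.0",
        test_plan_id="TP-CLUSTER-EMC-002",
        tool_versions={"bus_analyzer_config": "rev 12", "test_script_lib": "0.9.2"},
    )
    print(header)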

Configuration Management

Posted on: January 22nd, 2013 by admin No Comments

by Kim H Pries

Our experience shows us that configuration management lies at the very heart of professional engineering and product growth. Just to be able to run an ERP or MRP system requires a standard for nomenclature and identification of parts (including software). We mark changes to parts and software with changes to part numbers. This allows us to track the effects of an engineering change; sometimes changes go awry, and it helps both the customer and the supplier to be able to identify, control, and account for the parts that are already out in the field in addition to those that are stacked up in the plant.

With software, we generally use a software configuration management tool that allows us to check software in and out of the system. These systems allow for version control, branching (multiple versions), and the application of customer version numbers. Typical examples over the last twenty-five years include Revision Control System (RCS), Concurrent Versions System (CVS), Source Code Control System (SCCS), Subversion, PTC Integrity, Microsoft Visual SourceSafe, and many others. All of these have their quirks and advantages; we would not develop software without one of these systems! Any system is better than no system at all.
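
As a rough illustration, the sketch below scripts the basic check-out, check-in, and branch cycle with Subversion, one of the tools named above, by calling the standard svn client. The repository URL, paths, and commit messages are hypothetical.

    # A minimal sketch of scripting the check-out / check-in / branch cycle with
    # Subversion. The repository URL, working copy paths, and messages are
    # hypothetical; a real setup would come from your CM plan.
    import subprocess

    REPO = "http://svn.example.com/product_x"   # hypothetical repository URL

    def run(*args):
        """Run an svn command and stop loudly if it fails."""
        subprocess.run(["svn", *args], check=True)

    # Check out a working copy of the mainline ("trunk").
    run("checkout", f"{REPO}/trunk", "product_x_trunk")

    # ... edit source files in product_x_trunk ...

    # Check the change back in with a message that ties it to a change request.
    run("commit", "product_x_trunk", "-m", "CR-1234: correct CAN timeout handling")

    # Create a branch for a customer release; in Subversion a branch is a cheap copy.
    run("copy", f"{REPO}/trunk", f"{REPO}/branches/customer_A_v2.1",
        "-m", "Branch for customer A version 2.1")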

As a thought experiment, our readers might visualize their organizations without configuration management (perhaps this does not require much imagination). What would happen? We suggest that configuration management would spontaneously arise in islands of developers (teams, individuals), and all of these would ultimately merge as the perspicacious individuals explain the benefits to their brothers and sisters in the business. Value Transformation LLC is an adamant advocate for configuration management at all levels of the enterprise!

Configuration and Change Management

Posted on: January 21st, 2013 by admin No Comments

by Kim H. Pries

Some people find terms such as configuration management and change management confusing; they are unsure what the terms mean and how they differ. We consider change management to be a higher-order concept that includes the idea of configuration management. Let’s discuss configuration management first!

Classical configuration management in the mode of the U.S. Department of Defense breaks down into configuration identification, configuration control, configuration status accounting, and configuration auditing. Configuration identification occurs when we specify a product or component sufficiently that it can be distinguished from other parts and components; we also usually use a well-specified nomenclature to avoid confusion. Configuration control occurs when we can modify what we have identified in a rational way (change of part number, change of nomenclature, change of drawings) such that we always know what we have. With hardware, configuration identification and control will also include labeling the product to avoid improper installation or ignorant deployment. Configuration status accounting allows us to inspect the condition of our configuration system as well as to provide reports on the progress of change initiatives involving parts or software. Configuration auditing includes functional configuration auditing (checking that the product works) and physical configuration auditing (checking that our documentation is up to standard).
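
As a rough illustration, the sketch below ties three of the four classical activities to data: identification (a uniquely numbered and named item), control (a change recorded against a change notice), and status accounting (a report of where each item stands). The audits would then compare these records with the physical product and its documentation. The part numbers and change notice are hypothetical.

    # A minimal sketch: identification (a uniquely numbered item), control (changes
    # recorded against a change notice), and status accounting (a report of where
    # each item stands). Item numbers and nomenclature are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationItem:
        part_number: str
        nomenclature: str
        revision: str
        change_history: list = field(default_factory=list)

        def apply_change(self, new_revision, change_notice):
            """Configuration control: changes happen only via a recorded change notice."""
            self.change_history.append((self.revision, new_revision, change_notice))
            self.revision = new_revision

    def status_accounting(items):
        """Configuration status accounting: report the current state of each item."""
        for ci in items:
            print(f"{ci.part_number}  rev {ci.revision}  "
                  f"({len(ci.change_history)} change(s))  {ci.nomenclature}")

    cluster = ConfigurationItem("20512345", "Instrument cluster, base", "A")
    cluster.apply_change("B", "ECN-0457: revised connector keying")
    status_accounting([cluster])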

Change management includes configuration management as a component. However, change management can also include the concepts of scope management, risk management, supplier management, customer management, and all the other component functions of the enterprise, including sales and marketing. Our risk management plan may also include configuration management as part of the toolset for effecting change without devolving into a random walk.

 

Transition Prototype to Production

Posted on: January 15th, 2013 by admin 2 Comments

by Kim H Pries

When we are engaged in prototype development during the early to late-middle phases of our new product delivery process, we usually purchase components through maintenance, repair, and operations (MRO) purchasing. This type of purchasing is managed on an as-needed basis and often is not automated. We purchase the parts we need in relatively small quantities because we are not yet in production. At this point in our process, this approach is reasonable and effective. The part cost is high, but we are not at risk of being left with parts we have to throw away.

As we move through the process, however, we reach a point where we begin to transition from prototypes to sellable products. For these products, we most commonly use manufacturing resource planning (MRP) purchasing, which is nearly always automated. As developers, we have seen huge discontinuities in delivery when shifting from MRO to MRP purchasing. MRP purchasing has some different characteristics:

• Lot sizes are usually based on some algorithm for economic order quantity (EOQ) to get the best purchase price
• Lead times are part of the database
• Reorder points are also part of the database

We run into trouble with our transition when we suddenly step from small part purchases to triggering orders for 10,000 pieces of some component. Our experience suggests that a better solution is to treat our initially launched product as if it were a final prototype: set the MRP quantities below the EOQ amount and use substantial safety lead time (but not large amounts of safety stock). We suggested a model for this problem in our book Project Management of Complex and Embedded Systems and we think it is still valid. The only likely bone of contention would be common parts, but if the parts are common, they will be used on another product eventually, so the risk is much less than with unique components. Customers will begin asking for changes within the first six weeks to six months, and we need to be prepared for this eventuality by proactively managing our parts control system. We may even see some initial negative margin, a situation of which our customer should be aware as we stun them with our ability to provide satisfaction.
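
For readers who want to see the arithmetic, the sketch below shows the classic economic order quantity calculation an MRP system typically uses, along with the transition tactic described above: cap the launch-phase order quantity at a fraction of the EOQ until demand is proven. The demand, cost figures, and the 25 percent launch fraction are illustrative assumptions, not recommendations.

    # The classic economic order quantity (EOQ) calculation an MRP system is likely
    # using, plus the transition tactic described above: cap the launch order
    # quantity below the EOQ until demand is proven. Numbers are illustrative.
    import math

    def eoq(annual_demand, order_cost, holding_cost_per_unit):
        """Wilson EOQ: sqrt(2 * D * S / H)."""
        return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

    def launch_order_quantity(annual_demand, order_cost, holding_cost_per_unit,
                              launch_fraction=0.25):
        """Treat the launched product as a final prototype: order a fraction of EOQ."""
        return max(1, round(launch_fraction * eoq(annual_demand, order_cost,
                                                  holding_cost_per_unit)))

    D, S, H = 40_000, 150.0, 0.60    # illustrative: units/year, $/order, $/unit/year
    print(f"EOQ            : {eoq(D, S, H):.0f} pieces")
    print(f"Launch quantity: {launch_order_quantity(D, S, H)} pieces")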

Rapid Prototyping

Posted on: January 14th, 2013 by admin 2 Comments

by Jon M Quigley
When we have a short project schedule, we need to learn from our prototype as quickly as possible. Rapid prototyping is a rational approach to a shortened schedule that does not carry the risk or cost of skipping prototypes, or of starting the next level of prototype before we have learned from the previous one, as we discussed in an earlier blog post.

Rapid prototyping is possible when we have access to equipment that enables us to deliver a usable product within a few days. With the advent of and improvements in three-dimensional printers (including dropping costs), we now have the ability to produce SLA (stereolithography) parts quickly and at relatively low cost. Prototype parts differ from models in that we are able to conduct tests on prototype parts. This provides us with the feedback, or learning, we have been writing about in the previous blog posts. Some things are easier to prototype than others; below is a brief list of the possibilities:
1. Printed circuit boards
2. Plastic parts via soft-tool injection molding (for low volumes, 5,000 pieces or less)
3. Mechanical parts via soft-tool injection molding (for low volumes, 5,000 pieces or less)

For example, the tools available for developing printed circuit boards have improved greatly over the years. Gone are the days of wire-wrapping a perforated board to produce a prototype, although this technique still works and can be performed relatively quickly. Today we have more sophisticated tools that reduce human error and deliver the prototype part even more quickly. There is computer-controlled etching, for example, and some organizations have pick-and-place manufacturing equipment set up solely to address prototyping demands.

The ability to exercise or test the prototype parts has a great impact on future product development activities. The sooner we start working with prototypes, the sooner we are able to learn about the product and improve the design. We should learn much from these early parts and not wait until the last minute, when the project is running out of options and launch is imminent.
