Archive for March, 2014

Risk Management Event

Posted on: March 17th, 2014 by admin No Comments

Risk Management the Best ‘Fire’ Insurance for Your Project

Risk management training day announced by the NC Piedmont Triad Chapter of the Project Management Institute in conjunction with Value Transformation.

Greensboro, NC — The NC Piedmont Triad Chapter of the Project Management Institute and Value Transformation are pleased to announce a risk management training event, which will be held on Saturday, March 22, 2014 in Greensboro, NC.

The “Risk Management the Best ‘Fire’ Insurance for Your Project” course will help those involved in risk management learn about the connection between a project’s scope, a project’s direction, and risk. It will enable managers at every level to save time and resources by moving beyond firefighting and understanding the principles of risk management.

A project can fail to meet its budgetary requirements and its deadline because of events that, although unpredictable, could have been assessed in terms of risk and probability at the beginning of the project.

Link to the Risk Management Event Registration site at NC Piedmont Triad Chapter of PMI.


The course will also explain the compounding nature of risk probability and how it can impact your project schedule, and will teach managers an agile technique that can lower conventional project management risk.
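
The compounding the course addresses can be sketched numerically. A minimal Python sketch (the probabilities are illustrative, not course material) showing how several modest, independent risks compound into a large chance that at least one of them occurs:

```python
def prob_any_event(probabilities):
    """Probability that at least one independent risk event occurs."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Five tasks, each with only a 10% chance of slipping:
risks = [0.10] * 5
print(round(prob_any_event(risks), 3))  # → 0.41
```

Even though no single risk looks alarming, the schedule as a whole is at roughly even odds of taking a hit.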

“Project leaders should not waste precious time and resources due to ineffective risk management,” said Jon M. Quigley, the course instructor. “It is possible for project managers to reduce their firefighting time by taking into account possible risks during the planning stages.”

Other sections of the course will train managers to use assessment and prioritization techniques in creating a risk management plan. The course will also cover ways to elicit potential risks from a team, and how to objectively assess their impact.

The training event will take place on Saturday, March 22, 2014 from 8:00 AM to 12:30 PM at 7900 Triad Center Drive, Greensboro, NC 27409. The course carries 4 hours of Professional Development Unit (PDU) credit.

About NC Piedmont Triad Chapter of the Project Management Institute

The NC Piedmont Triad Chapter is a not-for-profit organization and chapter of PMI, Inc., chartered in 1990. Its mission is to advance and promote the practice and profession of project management among its chapter members and throughout the Piedmont Triad, supported through the instilling of globally recognized standards and professional development opportunities.

To learn more about NC Piedmont Triad Chapter of Project Management Institute – Visit:

About Jon M. Quigley

Jon M. Quigley, PMP, CTFL, is a principal and founding member of Value Transformation LLC, a product development training and cost improvement organization. Jon has an engineering degree from the University of North Carolina at Charlotte and two master’s degrees from City University of Seattle.

Jon is a member of SAE, AIAG and PMI, and he holds CTFL certification from ISTQB. He is also the co-author of, or contributor to, nine books on topics ranging from project management and product testing to cost improvement in the context of software development life-cycle management.

Epic Project Management Battle: Retrospective vs. White Book

Posted on: March 13th, 2014 by admin 1 Comment

Retrospective versus white book: in conventional project management, it is called the white book; in agile, it is known as the retrospective. Both serve the same purpose, which is to learn from the past and improve the future. Though the objectives may be similar, the manner and perhaps the efficacy are quite different.

The White Book and Eventually

In the case of the white book, we are either learning at the end of each phase of the project (more desirable) or at the end of the project. At the end of the project, we are unable to make use of any learning for that project. At the end of a phase, we will have performed significant execution without a conscious and deliberate effort to improve. Additionally, what is recorded may not accurately reflect our total project experience. If we evaluate only at the phases, or at the end, we run the risk of forgetting something or dwelling on the most traumatic aspects. We may lose some objectivity or clarity.
My experience with the white book is that somebody performs interviews and records the information in a “digital book”. The interviews do not (necessarily) happen with the entire team. That suggests the myopic perspective of an individual rather than the comprehensive view often required to understand the real and often complex situation. Even if measurements were made during the project, they were undoubtedly defined in the absence of the project execution. There is no knowing whether the measurements identified at the beginning have any bearing on the resulting project; they may tell us nothing. It would be analogous to measuring the floor molding to understand the size of the backsplash on a sink.
The “book” (virtual as it is) is then shelved. There is usually no metadata to facilitate sorting or retrieval by future inquiries.

The Retrospective and Immediacy

Since the retrospective happens at the end of a sprint, and since each sprint is typically two to four weeks, our memories are not so taxed to reconstruct what happened. Additionally, since the sprint team is simultaneously engaged in the review, any difficulty in reconstruction has a chance of resolution as the team determines what actually happened. Further, since the retrospective happens at the end of the sprint, our learning can be more readily applied to the next sprint (assuming that we have more sprints to go).

When it comes to measurements, we can critique those of our present sprint (less than a month’s worth of work) while the context is still fresh, if we desire.

The retrospective, by process definition, requires the entire team to participate. In fact, the lead of the retrospective may be any team member, unlike the conventional white book approach, which is often led by the project manager. The retrospective addresses everything from human resources and team issues to process and product improvements.

Retrospective Graphic provided by Rick Edwards




Who says that conventional projects should wait until the end of a phase or of the project? Perhaps conventional project management would benefit from adopting a learning approach more like the retrospective. Why wait until the end of the project, when we can no longer improve the project outcome? Likewise, why wait until the end of a project phase to learn? Even those phases can be lengthy, and recovery of the information by posterity will still be difficult.

Check out some of the titles from The Pragmatic Bookshelf for additional information on the retrospective.

Watermelon Green?

Posted on: March 11th, 2014 by admin 1 Comment

What is “watermelon” green?

Watermelon: a great treat in the summer. The dark green rind, the yummy bright red center. Recently I had lunch with an IT friend named Phil. We were talking about checklists and determining project “status” when he mentioned the color watermelon green. I chuckled. Then I continued, a bit afraid to ask for elaboration, “What is watermelon green?” He then described the modality for me. Watermelon green is the description for actions or activities reported as green, yet everybody knows that behind that thin skin of green resides the red pulp of truth. That is, the task or objective is not really in a good, or green, state.

No doubt, many of you have seen the same phenomenon. In fact, we expressed it in a different, perhaps less creative, way in our blog post Truth or Obfuscation. Of course, this started us talking about all of the times we have seen this happen. What follows are a few of those stories.

Watermelon is only green on the outside.

The project manager reports the state of a project in a line management meeting. The report is via a checklist with smiley faces colored “red”, “yellow” and “green” juxtaposed against each task name. As the project manager walks through the list of items and explains, the managers owning some of those items look quizzically at each other as key items are reported with green smileys. It was all the managers could do to keep from having a paroxysm and working to correct the state of the project in the meeting.

Project Gantt charts can provide the same sort of feeling. Consider the example where “percent completed” is used to update the status. Projects that do not break the tasks down far enough, to what Kim Pries referred to as the atomic level, leave space for interpretation and, as such, a loss of objectivity. We have seen this same “watermelon green” effect as those responsible for delivering specific tasks and actions “estimate” their level of completion, only for us to find out, come delivery date, that they suddenly are unable to meet it and are in fact weeks behind.

The description of the task state via smiley faces, or “watermelon green”, does not change the outcome or the real state of the task. It delays, and causes problems with, dependencies. It seems the lack of courage or inability to perform is omnipresent, and my bet is that this is a significant source of our problem with projects being able to deliver.

Don’t Fear the Estimates

Posted on: March 10th, 2014 by admin No Comments

It comes as no surprise that I follow other bloggers. One of my favorites is the WordPress blog of Tisquirrel. She has recently posted “It seems that I hate estimations. Really?”, which I thought very telling.

The trouble with Estimates

The trials she describes happen very frequently. The truth is, estimates are just that: our best (hopefully informed) assessment of what is required to achieve the stated objective. Making the assessment from an informed perspective requires some time for discovery. We want to know: what are we trying to do? What is the best approach? What technical and schedule risks do we have in achieving the objective? This requires adequate time to arrive at a reasonable assessment.

Who Estimates Can Minimize Risk

Presumably, the technical personnel that will be required to deliver are closer to the work than the sales or executive staff. The people doing the work are typically in a better position to estimate, as their daily job is associated with the work. I have been on projects where the estimates and delivery date were known even before we understood the scope from a technical perspective. Sometimes this is unavoidable, for example on legal projects like EPA mandates, which have a firm fixed date for introduction. Missing that date will invoke heavy penalties and fines administered by the sanctioning body. These conditions are recognized and understood by the team. However, arbitrary dates imposed by those uninitiated in the wherewithal it takes to deliver are death marches; the team knows up front that success is not possible. I make the analogy of the basketball coach that asks his player to make a 25-foot vertical leap. It is pointless to attempt that “achievement”. The request is not just out of bounds of what might be achievable, the sort of stretch that is a motivational catalyst to achievement.

Estimates and Metrics

There is nothing wrong with estimates, but measurements should confirm or refute whether your estimates were valid. This is true whether we are estimating or whether our sales team or executives are providing the estimates. We should select key metrics that prove or refute those estimates and constantly review these metrics with the team and the project / product sponsor. Estimates, no matter how they are derived, should have a metric that informs us whether our estimates were right or whether our project and product costs and delivery date are compromised. That is one of the reasons for Earned Value Management techniques in conventional projects, and the burn down chart and sprint velocity in agile.
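
As a minimal sketch of the Earned Value checks mentioned above (the dollar figures are hypothetical), the two standard indices tell us at a glance whether our estimates are holding:

```python
def cpi(earned_value, actual_cost):
    """Cost Performance Index: > 1.0 means under budget."""
    return earned_value / actual_cost

def spi(earned_value, planned_value):
    """Schedule Performance Index: > 1.0 means ahead of schedule."""
    return earned_value / planned_value

# Hypothetical project status: $40k of work earned, $50k spent,
# $45k of work planned to date.
ev, ac, pv = 40_000, 50_000, 45_000
print(f"CPI = {cpi(ev, ac):.2f}")  # 0.80 → over budget
print(f"SPI = {spi(ev, pv):.2f}")  # 0.89 → behind schedule
```

Reviewed with the team at regular intervals, these two numbers are the metric that either confirms the original estimate or tells us the delivery date is compromised.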

Risk and Severity

Posted on: March 8th, 2014 by admin No Comments

What is severity?

We have been spending considerable time on risk in preparation for our upcoming Piedmont Triad PMI Risk Management event. In our last risk-specific blog post we discussed risk and probability. That is only part of the equation. We are interested in probability, but we also need to know the severity. Severity is the impact of the event coming to realization: how much money is at risk due to the event? Not all risks are equal in this regard. If the severity or impact is inconsequential, we will not spend much time on actions to mitigate it. As good stewards of our organization, we will want to apply our efforts to protect against the downside of our actions. We will still need to go through some measure of diligence to assess the impact upon our project rather than assume an event will have none.

How do we determine the severity?

We can learn some things to help us in the severity prediction. We can scrutinize our historical record to ascertain the sorts of risks we have experienced in the past. Our project white books are a good place for some of this information. If our organization keeps a risk register for each project, we can likewise learn from that documentation.

Sometimes we make risk assessment more difficult than it needs to be. Certainly, there is a time for a positive attitude. However, uncovering and assessing the potential implications of a risk coming to the fore is decidedly not the time for blind optimism. The project team can often provide a reasonable prediction of the risk severity. You just have to listen to the “negative news”, keeping in mind that it is what you need to hear. Teams and specific expertise in your organization can help you postulate risks and predict the consequences associated with them. That is the goal: theorize what can happen for each identified risk.

Specifically, we will want to associate a magnitude of impact with each risk event. For example, we can use three categories:

  • low impact
  • medium impact
  • severe impact

We can also use a scale or range of numeric values to express our assessment of severity; for example, from 1 to 100, with 100 being the most severe. This ranking will allow us to prioritize our plan of attack and focus our precious resources on the most critical and, from the last blog post, most probable events.
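
Combining the two parts of the equation, probability from the last post and the 1 to 100 severity scale above, gives a simple exposure ranking. A sketch with hypothetical risks (the names and numbers are made up for illustration):

```python
# Hypothetical risk register entries: probability (0-1) and severity (1-100).
risks = [
    {"name": "supplier delay",   "probability": 0.30, "severity": 80},
    {"name": "spec change",      "probability": 0.60, "severity": 50},
    {"name": "test rig failure", "probability": 0.10, "severity": 95},
]

# Exposure = probability x severity; the basis for prioritization.
for risk in risks:
    risk["exposure"] = risk["probability"] * risk["severity"]

# Highest exposure first: where we spend our mitigation effort.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]:16s} exposure = {risk["exposure"]:.1f}')
```

Note how the most severe risk (the test rig) ranks last once its low probability is factored in; severity alone is not enough to set priority.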

What will we do with this?

At the PMI Risk Management event, we will demonstrate how to connect all of these “dots” to produce a comprehensive approach to risk management. The event is going to be highly interactive. We will demonstrate techniques and processes, then you will have the opportunity to try the techniques yourself.

Test, Inspection, Evaluation Master Plan Organized

Posted on: March 7th, 2014 by admin No Comments

 TIEMPO – Test, Inspection, Evaluation Master Plan Organized

by Jon M. Quigley and Kim Robertson


Ensuring product quality is not accomplished solely through testing and verification activities. Testing is but a fraction of the techniques at an organization’s disposal to improve development quality. Good planning of the product incarnations, that is, a phased and incremental delivery of the feature content, makes it possible for an organization to employ test, inspection, and evaluation as tools for competitive advantage. To really improve (and prove) product quality, a more comprehensive approach is required.


No Single Bullet to Product Quality.



The Test, Inspection & Evaluation Master Plan Organized adds extra methods to the Test and Evaluation Master Plan [TEMP, specified in IEEE 1220 and MIL-STD-499B (Draft)] to support product quality. TIEMPO expands the concept of a Test and Evaluation Master Plan by focusing on staged deliveries, with each product/process release a superset of the previous release. We can consider these iterations our development baselines, closely linked to our configuration management activities. Each package is well defined; likewise, the test, inspection and evaluation demands are well defined for all iterations. Ultimately, the planned product releases are coordinated with evaluation methods for each delivery. Under TIEMPO, inspections cover:

  • Iterative software package contents
  • Iterative hardware packages
  • Software reviews
  • Design reviews
  • Mechanical
  • Embedded product
  • Design Failure Mode and Effects Analysis (DFMEA)
  • Process Failure Mode and Effects Analysis (PFMEA)
  • Schematic reviews
  • Software Requirements Specification reviews
  • Systems Requirements Reviews
  • Functional Requirements Reviews
  • Prototype part inspections
  • Production line process (designed)
  • Production line reviews
  • Project documentation
  • Schedule plans
  • Budget plans
  • Risk management plans


A.  Philosophy of the Master Plan

At its core, TIEMPO assists in coordinating our product functional growth. Each package has a defined set of contents, to which our entire battery of quality safeguarding techniques will be deployed. This approach, defined builds of moderate size under constant critique, has a very agile character, allowing for readily available reviews of the product. In addition, TIEMPO reduces risk by developing superset releases wherein each subset remains relatively untouched. Most defects will reside in the new portion (the previously developed part of the product or process is now a subset and has been proven largely defect-free). Should the previous iteration contain unresolved defects, we will have had the opportunity between iterations to correct them. Frequent critical reviews are used to guide design and to find faults. Frequent testing facilitates quality growth and reliability growth and gives us data from which we can assess the product readiness level (should we launch the product).

B.  Benefits of the Master Plan

Experience suggests the following benefits arise:

  • Well planned functional growth in iterative software and hardware packages
  • Ability to prepare for test (known build content), inspection and evaluation activities based upon clearly-identified packages
  • Linking test, inspection and evaluations to design iterations (eliminate testing or inspecting items that are not there)
  • Reduced risk
  • Identification of all activities to safeguard the quality—even before material availability and testing can take place
  • Ease of stakeholder assessment, including customer access for review of product evolution and appraisal activities

Experience also indicates that at least 15% of the time associated with downstream troubleshooting is wasted in unsuccessful searches for data, simply due to the lack of meaningful information associations with the developmental process baselines. The TIEMPO approach eliminates this waste by ferreting out issues earlier in the process, allowing more dollars for up-front product refinement.

C.  An overview of one approach:

How each of these pieces fits together is shown below. TIEMPO need not be restricted to phase-oriented product development; any incremental and iterative approach, including entrepreneurial activities, will benefit from its constant critique.

 D.  System Definition:

The TIEMPO document and processes begin with the system definition. Linked to the configuration management plan, it describes the iterative product development baselines that build up to the final product functionality. In other words, we describe each incarnation of the product in each package delivery. We are essentially describing our functional growth as we move from little content to the final product. Each package will have incremental feature content and bug fixes relative to the previous iteration. By defining this up front, we are able to link the testing, inspection and evaluation activities not only to an iteration but to specific attributes of that iteration, and capture this in an associative data map in our CM system. In the example of testing, we know the specific test cases we will conduct by mapping the product instantiation to the specifications and ultimately to test cases. We do this through our configuration management activities. We end up with a planned road map of the product development that our team can follow. Of course, as things change we will again update the TIEMPO document through our configuration management actions.

E.  Test (Verification):

Test, or verification, consists of those activities typically associated with determining whether the product meets the specification or original design criteria. If an incremental and iterative approach is applied, prototype parts are constantly compared against specifications. The parts will not be made entirely from production processes but will increase in level of production content as the project progresses and we approach our production intent. Though prototype parts may not represent production in terms of durability, they should represent some reasonable facsimile of the shape and feature content congruent with the final product. We use these parts to reduce risk by not jumping from idea to final product without learning along the way. We should learn something from this testing with which to weigh the future quality of the resultant product. It is obvious how testing fits into TIEMPO. However, there are some non-obvious opportunities as well. We could use inspection on the test plan itself: did we get the testing scope correct? We can also use this inspection technique on our test cases, analyzing whether we will indeed stress the product in a way valuable to our organization and project, not to mention that we can inspect software long before it is even executable. The feedback from this inspection process will allow us to refine the testing scope, test cases, or any proposed non-specification or exploratory-based testing. The testing relationships in a typical HW/SW release plan are shown below.


Our release plan will be linked to our configuration management.



F. Reliability testing

In the case of reliability testing, we assess the probable quality behavior of the product or process over some duration. Finding failures in the field is a costly proposition, with returned parts often costing the producer five to ten times the sales price, absorbed from profit, not to mention the intangible cost of customer dissatisfaction. For reliability testing, small sample sizes are used when a baseline exists (and we combine Weibull and Bayesian analytical techniques), or larger sample sizes without a baseline. Physical models are used for accelerated testing in order to compute probable product life. Inferior models will hamper our progress, especially when a baseline does not exist. Our approach is specified in the TIEMPO document, along with the specific packages (hardware / software) used to perform this activity. Thus our development and reliability testing are linked together via our configuration management work.
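
The Weibull analysis mentioned above reduces, at its simplest, to the two-parameter reliability function R(t) = exp(-(t/η)^β). A minimal sketch with illustrative parameters (not drawn from any real product data):

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability a unit survives to time t (beta: shape, eta: scale)."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical component: characteristic life 10,000 hours,
# beta > 1 indicating wear-out behavior.
beta, eta = 2.0, 10_000.0
for hours in (1_000, 5_000, 10_000):
    print(f"R({hours} h) = {weibull_reliability(hours, beta, eta):.3f}")
```

Fitting β and η from test data (and updating them with Bayesian priors when a baseline exists) is where the real work lies; this only shows how the fitted parameters translate into a survival prediction.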

G.  So, when do we start testing?

Many may believe that it is not possible to test a product without some measure of hardware or software samples available. It is possible to test if we have developed simulators that allow us to explore the product possibilities in advance of the material or software. This requires accurate models as well as simulation capability. To ensure accurate models, we will run tests comparing our model results with real-world results to determine the gap and make necessary adjustments to the models. We may even use these tools to develop our requirements, if they are sophisticated enough. These activities reduce the risk and cost of the end design because we have already performed some evaluation of the design proposal. As prototype parts become available, testing on these parts is done alone or in concert with our simulators. If we have staged or planned our function packages delivered via TIEMPO, we will test an incrementally improving product.
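
The model-versus-real-world check described above can be sketched as a simple gap calculation against an agreed tolerance. The measurements and the 5% threshold below are made up for illustration:

```python
def model_gap(predicted, measured):
    """Relative error between model output and real-world measurement."""
    return abs(predicted - measured) / abs(measured)

tolerance = 0.05  # 5% acceptance threshold (an assumption for this sketch)
samples = [(102.0, 100.0), (48.0, 50.0), (9.0, 10.0)]

for predicted, measured in samples:
    gap = model_gap(predicted, measured)
    status = "OK" if gap <= tolerance else "ADJUST MODEL"
    print(f"predicted={predicted} measured={measured} gap={gap:.1%} {status}")
```

Any sample outside tolerance sends us back to adjust the model before we trust it to stand in for hardware.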

 H. Types of tests during development

When we get into the heavy lifting of product or service testing, we have a variety of methods in our arsenal. At this stage we are trying to uncover any product maladies by which neither we nor our customer wish to be impacted. We will use approaches such as:

  • Compliance testing (testing to specifications)
  • Extreme testing (what does it take to destroy and how does the product fail)
  • Multi-stimuli or combinatorial testing
  • Stochastic (randomized exploratory)


I.  Inspections

Reviews are analogous to inspections. The goal of reviews is to find problems in our effort as early as we can. There may be plenty of assumptions that are not documented or voiced in the creation of these products. The act of reviewing can ferret out the erroneous or deleterious ones, allowing us to adjust. We can employ a variety of review techniques on our project and product, such as:

  • Concept reviews
  • Product requirements reviews
  • Specification reviews
  • System design reviews
  • Software design reviews
  • Hardware design reviews
  • Bill of Materials reviews
  • Project and Product Pricing reviews
  • Test plan reviews
  • Test case reviews
  • Prototype inspections
  • Technical and Users Manuals
  • Failure Mode Effects Analysis (see immediately below)

Design Failure Mode Effects Analysis (DFMEA) and Process Failure Mode Effects Analysis (PFMEA), employed by the automotive industry, can be applied to any industry. These tools represent a formal and well-structured review of the product and the production processes. The method forces us to consider the failure mechanism and its impact. If we have a historical record, we can take advantage of that record or even previous FMEA exercises. There are two advantages, the first of which is prioritization. The priority is a calculated number known as the Risk Priority Number (RPN), the product of:

  • Severity (ranked 1-10)
  • Probability (ranked 1-10)
  • Detectability (ranked 1-10)

The larger the resulting RPN, the higher the priority; we address those concerns first. The second advantage fits with the testing portion of TIEMPO. The FMEA approach links testing to the identified areas of risk as well. We may alter our approach, or we may choose to explore via testing as an assessment of our prediction. For example, let’s say we have a design that we believe may allow water intrusion into a critical area. We may then elect to perform some sort of moisture exposure test to see if we are right about this event and the subsequent failure we predict.
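
The RPN calculation described above is a straight product of the three 1-10 rankings. A sketch with hypothetical failure modes:

```python
# Hypothetical FMEA entries: severity, occurrence (probability), and
# detectability, each ranked 1-10.
failure_modes = [
    {"mode": "seal leak",       "severity": 8, "occurrence": 4, "detect": 6},
    {"mode": "connector fret",  "severity": 6, "occurrence": 7, "detect": 3},
    {"mode": "firmware lockup", "severity": 9, "occurrence": 2, "detect": 8},
]

# RPN = severity x occurrence x detectability.
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detect"]

# Largest RPN first: these concerns get our attention (and our tests) first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["mode"]:15s} RPN = {fm["rpn"]}')
```

Note that the highest-severity item does not automatically top the list; a hard-to-detect, moderately likely failure can outrank it, which is exactly the prioritization insight FMEA provides.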

J. Evaluation (Validation)

Evaluation can be associated with validation. With these activities we are determining the suitability of our product to meet the customer’s need. We are using the product (likely a prototype) as our customer would. If the prototype part is of sufficient durability, and the risk or severity due to malfunction is low, we may supply some of our closest customers with the product for evaluation. Their feedback is used to guide the remaining design elements. This facilitates our analysis of the proposed end product. There are other ways to employ evaluation. Consider the supplier or manufacturer of a product to a customer for subsequent resale. We may perform a run-at-rate assessment on the manufacturing line to measure probable production performance under nearly realistic conditions. Now we are evaluating the line under the stresses that would be there during production. We may use the pieces produced during the run-at-rate assessment to perform our testing. This approach is reasonable, since the resulting parts will be off the manufacturing line and built under comparable stresses to those in full production. In fact, we may choose to use these parts in the aforementioned early customer evaluations.

K. Inspection caveats

By definition, an inspection is a form of quality containment, which means trapping potential escapes of defective products or processes. The function of inspection, then, is to capture substandard material and to stimulate meaningful remediation. Inspection for such items as specification fiascoes prevents the defects from rippling through impending development activities, where correction is much more costly. The containment consists of updating the specification and incrementing the revision number while capturing a “lesson learned.” Reviews take time, attention to detail and an analytical assessment of whatever is under critique; anything less will likely result in a waste of time and resources.

L. Product Development Phases

Usually there are industry specific product development processes.  One hypothetical, generic model for such a launch process might look like the following:


For the process outlined above, we could expect to see a test (T), inspection (I) and evaluation (E) per phase as indicated in the chart below:


Example of phases in a project and what activities we can undertake.


The design aspects will apply to process design just as much as to product or service design.

M.  Conclusion

There is no one silver bullet.  Test, inspection and evaluation are instrumental to the successful launch of a new product. This is also true for a major modification of a previously released product or service.  We need not limit these to the product but can also employ the techniques on a new process or even a developing a service. Both testing and inspection provide for verification and validation that the product and the process are functioning as desired. We learn as we progress through the development.  If all goes well, we can expect a successful launch. The automotive approach has been modified and used in the food and drug industry as the hazard analysis and critical control point system. Critical control points are often inspections for temperature, cleanliness, and other industry-specific requirements.

Below is an outline for a TIEMPO document[1]:

1.         Systems Introduction

1.1.      Mission Description

1.2.      System Threat Assessment

1.3.      Min. Acceptable Operational Performance Requirements

1.4.      System Description

1.5.      Testing Objectives

1.6.      Inspection Objectives

1.7.      Evaluation Objectives

1.8.      Critical Technical Parameters

2.         Integrated Test, Inspection, Evaluation Program Summary

2.1.      Inspection Areas (documents, code, models, material)

2.2.      Inspection Schedule

2.3.      Integrated Test Program Schedule

2.4.      Management

3.         Developmental Test and Evaluation Outline

3.1.      Simulation

3.2.      Developmental Test and Evaluation Overview

3.3.      Component Test and Evaluation

3.4.      Subsystem Test and Evaluation

3.5.      Developmental Test and Evaluation to Date

3.6.      Future Developmental Test and Evaluation

3.7.      Live Use Test and Evaluation

4.         Inspection

4.1.      Inspection of models (model characteristics, model vs. real world)

4.2.      Inspection of Material (Physical) Parts

4.3.      Prototype Inspections

4.4.      Post Test Inspections

4.5.      Inspection Philosophy

4.6.      Inspection Documentation

4.7.      Inspection of Software

4.8.      Design Reviews

5.         Operational Test and Evaluation Outline

5.1.      Operational Test and Evaluation Overview

5.2.      Operational Test and Evaluation to Date

5.3.      Features / Function delivery

5.4.      Future Operational Test and Evaluation

6.         Test and Evaluation Resource Summary

6.1.      Test Articles

6.2.      Test Sites and Instrumentation

6.3.      Test Support

6.4.      Inspection (Requirements and Design Documentation) resource requirements

6.5.      Inspection (source code) resource requirements

6.6.      Inspection prototype resource requirements

6.7.      Threat Systems / Simulators

6.8.      Test Targets and Expendables

6.9.      Operational Force Test Support

6.10.    Simulations, Models and Test beds

6.11.    Special Requirements

6.12.    T&E Funding Requirements

6.13.    Manpower/Personnel Training


[1] Pries, Kim H. and Quigley, Jon M., Total Quality Management for Project Management, Boca Raton, FL: CRC Press, 2013.

Projects and Distractions

Posted on: March 6th, 2014 by admin No Comments

Cell phones and Laptops, Tools – or the Distraction to Success

Ever think you are not getting the most out of your team due to distraction?  Perhaps the greatest invention is the smartphone.  It is now easy to check all of our email accounts, text our friends, post on Facebook, blurt on Twitter, connect on LinkedIn, and, best of all, play those seemingly innocuous games.  Oh, and we should not forget that we can actually use it as a phone.

Many organizations issue laptops to their employees, and the laptops go everywhere the employees go.  That can be efficient and certainly convenient.  However, when meetings are held at a distance with no video, we cannot see our people “multitasking” through the meeting or discussion.  “I apologize, I had my microphone on mute” can be the cover for trying to get oriented to the discussion.  Multitasking feels like we are accomplishing more, but we may not be. In fact, there are downsides to this way of working: we may find we have many actions partially underway but few concluded, and the lack of progress can have a serious impact upon motivation.

Distributed Teams and Communications

In conference call meetings without video, we run the risk of multitasking essentially eroding any possible performance objective.  Consider document or specification reviews conducted virtually while the members are reading emails or working on tasks for other projects.   Even if you have the right people in the right place, focus upon the task is vitally important to success, and the PM must be aware of these distractions.  This problem is not limited to virtual teams, though experience suggests that it is quite pervasive there.

Project Team Interaction

Direct interaction with the other team members makes it more difficult to ignore them or to allow your time to be divided by distractions.  The closer the connection to the other team members, the more difficult it is to disappoint them.  Our team behavior rules may include aspects that restrict these tools.  Peer pressure, when a team member's seeming inattentiveness is visible, can help.  As the group of people assigned to the project works together, we will want to find ways to turn them into a team.  How we handle these distractions sets the stage for our team's development by defining the behavior expected of the individuals.

Studies Say…

We lose considerable productivity in task switching or attempting to multitask. We can use our team to establish norms of behavior that will help keep these distractions in check.  We can also make sure our team is less distracted by keeping focus on the prioritized tasks, no matter how many projects the individuals or the project members may be undertaking.  I submit that one of the biggest positive impacts of an agile method of managing projects is the hyper-focus on the immediate and the reiteration of that focus throughout the project.
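The productivity loss can be made concrete with a back-of-the-envelope model. A commonly cited rule of thumb (often attributed to Gerald Weinberg) is that each project beyond the first costs roughly 20% of total capacity to context switching; the numbers and the function below are illustrative assumptions, not figures from this article:

```python
def capacity_per_project(num_projects, switch_loss=0.20):
    """Illustrative model: each project beyond the first burns a fixed
    fraction of total capacity on context switching. Returns the share
    of useful capacity each project actually receives."""
    if num_projects < 1:
        raise ValueError("need at least one project")
    lost = switch_loss * (num_projects - 1)   # capacity lost to switching
    working = max(0.0, 1.0 - lost)            # fraction of time truly working
    return working / num_projects             # useful share per project

for n in range(1, 6):
    print(f"{n} project(s): {capacity_per_project(n):.0%} capacity each")
```

Under these assumptions, a person on two projects gives each one only 40% of their capacity, not 50%, and by five projects each gets a mere 4%. The model is crude, but it shows why focusing the team on prioritized tasks pays off.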


Agile, Outsourcing and Risk Mitigation

Posted on: March 4th, 2014 by admin No Comments

Risk Reduction Via Outsourcing? Perhaps.

Many times it seems that companies believe they minimize risks to their project simply by outsourcing. There is a kernel of truth in that approach, with a caveat.  Consider the implications for the project if the outsourced work package is not delivered on time or at the right cost and quality.  If this outsourced work package is part of a larger package, and the success of the larger is contingent upon the success of the outsourced package, failure in the outsourced package will ripple into failure of the larger project. Often that is the very thing we were trying to prevent by outsourcing in the first place!
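The ripple effect follows directly from basic probability. If the larger project requires every work package to succeed, and we assume the packages are independent (an assumption; in practice failures are often correlated), the overall chance of success is the product of the individual probabilities. A short sketch with illustrative numbers:

```python
def overall_success(package_probabilities):
    """Probability the project succeeds when it depends on every work
    package succeeding, assuming the packages are independent."""
    result = 1.0
    for p in package_probabilities:
        result *= p
    return result

# Five work packages, each 90% likely to deliver on time, cost, and quality:
print(overall_success([0.9] * 5))  # ≈ 0.59
```

Even with individually strong 90% packages, the project as a whole is little better than a coin flip, which is why monitoring the outsourced work, rather than handing it off and hoping, matters.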

Monitoring and Participation are Key

Agile can help here via the same constant communication we have in our sprint meetings.  Outsourcing the package puts us in the role of the product owner. That means we are not allowed to sit on the sidelines after handing the definition of the work to our supplier.  We must prioritize the product backlog.  We have to answer questions from the sprint team all along the way.  At the end of each sprint we review the product to ascertain whether we are on the right path with the outsourced work.  We then review the product backlog for any adjustments and prioritize those into the next sprint. Instead of a “big bang” development (all of the product at once), we are constantly reviewing and critiquing the product.

Neither Agile nor Conventional

Our success can be seriously impaired if we say we are employing agile (minimized documentation) yet act like traditional or conventional project management, which at times seems to take a hands-off approach.  In those instances we will have neither a “comprehensive specification” (as if that were possible) nor the constant interaction with the supplier needed to drive the product toward our end goals.  In addition to these risks, we will likely have contract risks as we struggle with the supplier to find the appropriate contract type to address both our risk exposure and our supplying organization's risk exposure.

HOPE: A Method of Project Management?

Posted on: March 1st, 2014 by admin 1 Comment

by Kim H. Pries and Jon M. Quigley

When you step onto an airplane, you hope it will not crash. You, as a passenger, have no control over what happens during the flight. Statistics indicate flying is relatively safe, which is due to vehicle mechanics, pilot training and competence, flight crew and tower teamwork, and substantial planning.  Purchasing a lottery ticket or a spin on a roulette wheel is all about luck and hope, with little possibility of influencing the outcome. Sure, it is possible to purchase more lottery tickets or to buy more spots on the wheel for another spin, but the outcome is only incrementally improved.  Why, then, do we so often see the HOPE method used in software project management?

Scope Hope

The use of hope often starts at the beginning of the project. It is never too early to exercise the use of hope.  It begins with Hurriedly Overlooked Project Extent (HOPE). The project manager will not know exactly what is to be delivered or how to go about doing it.  The project manager will spend little time asking questions of the customer, never qualifying what is to be delivered, or what constitutes good quality for the deliverable item.  At best, any discussion will be around the very highest of levels of abstraction.  We hope we got the full scope.  This will have an impact when we estimate the project.

After the project manager spends his modicum of time understanding the targets of the project, he will transfer this knowledge to the project team. The project manager who employs hope will take whatever team is handed to him instead of identifying the resources needed.  Of course, as a project manager you have to make do with what the organization can provide, including team composition. However, that does not mean starting without trying to assemble the resources needed for a successful project.  The hope practitioner neglects resource procurement and takes what he gets rather than asking for what is needed for project success.

Once the team is assembled, the hope practitioner will let the project team magically know the demands upon them via Halfhearted Orientation to Project Expectations (HOPE).  Here we neglect identification of the areas of responsibility; better yet, everyone can be responsible for delivering the project.  The PM will not spend time identifying what constitutes success or how team member contributions will be measured: no need for a resource allocation matrix or some other way of defining team roles. If they are all accountable, then none of them are accountable.

With hope, there is no need for communications plans or project reviews; these just cloud the air like a bad odor.  With hope, those key communications channels are self-identifying and develop on their own.  People or groups depending upon information from other parts of the project get what they need, without any work or coordination required.  For product development, this means product specifications are updated and automatically routed to the test group.


With Highly Optimistic Project Estimation (HOPE), there is no need to review how other projects were executed in the past. With hope, we do not have to go through the due diligence of developing a schedule based on past performance.  We can eliminate duration estimates from the team and get them from our individual expert or line manager, or better still, have the project manager set the durations.  We can treat each duration estimate as a single-point source, no matter how much risk is associated with the task or how the task impacts subsequent, dependent tasks.  It is best not to even recognize dependencies.
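An alternative to the single-point estimates lampooned above is the standard three-point (PERT) estimate, which weights the most likely duration with optimistic and pessimistic bounds. The formula is the classic PERT expression, not something taken from this article, and the task numbers below are illustrative:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT expected duration: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Conventional PERT spread approximation: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# A task most likely to take 5 days, 3 days best case, 13 days worst case:
print(pert_estimate(3, 5, 13))  # 6.0 days expected
print(pert_std_dev(3, 13))      # ≈ 1.67 days of spread
```

The point is that the expected duration (6.0 days) is larger than the single most likely value (5 days) precisely because the downside risk is bigger than the upside, which a single-point estimate silently discards.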


Again, Hands Off Prevention Effort (HOPE) comes to the rescue. With hope, we can take only a cursory look at the risks and impediments to project success.  We may identify some very high-level catastrophic risks; however, these risks may not be very probable.  We will not generate backup plans, nor will we make sure the team is able to identify the symptoms of a risk that has come to fruition.  With our hope “armor,” we can sit on our fannies until heavy-duty effort is required to solve the problem.  We will not respond when a team member brings a risk our way; we will summarily dismiss the input as rumors or hearsay.  We choose to be reactive instead of understanding what could go wrong and developing a plan for handling the risk should it come to bear.

Project Execution and Control

The application of Hope Other People Excel (HOPE) to project execution and control typically begins with the previously mentioned abdication or deferment of responsibility.  Hope relies upon the project team to deliver.  The project manager becomes a hermit in a remote office and only emerges when there is pressure from the project team or from upper management.  There will be few, if any, project reviews or project team meetings and the ones that do occur will usually be the result of ignoring a small problem until it becomes a fiasco. Generally, we mark our schedule (if we have a schedule) with a line item called “…and here a miracle occurs.”

Communications and Project Status

Management may use doublespeak. It is not necessary to practice Hide / Obfuscate Project Errors (HOPE) project management to use this technique.  Using this technique makes it possible to downplay the severity of a problem and its root cause.  It is hard to say whether this is a function of self-preservation or an overly optimistic perspective.  However, in this mode, problems will not be called problems but, rather, “challenges” or “opportunities.”

When a problem is spoken of, it will be blamed on a non-team member getting frustrated or an unpredictable risk coming to fruition: “we could never have foreseen such a condition happening to our project.”  Those who earlier predicted the issue we will brand as naysayers, hypercritical, or just plain negative.  Their counsel having been disregarded, when the risk comes to pass the people who tried to alert the project manager will sarcastically say, “if only somebody could have predicted this.” We choose instead to rationalize away those things that are amiss with management spin.

Other ‘Opportunities’

We can employ Hands Off Prevention Effort (HOPE) to eliminate the tracking of metrics and avoid monitoring the project development schedule. We can delay actions with Hard Options Put at the End (HOPE), so that we do not have to deal with anything until it grows like an ugly cancer, potentially killing the host. And, of course, we have the Heartless and Obstinate Personnel Ecosystem (HOPE) driving employee apathy, resistance, and interpersonal futility. Our entire team works in a Half-Optimized Project Environment (HOPE), redolent with procrastination, misunderstandings, and blundering.


The hope method of project management eliminates the need for activity from the project manager.  Hope project management is not a repeatable method that secures success.  We think it is better to be HOPE-less than to be HOPE-ful.  Hope should not be the first, and certainly not the only, action taken by the project manager; hope only after all other actions and efforts to improve the situation have been taken.  People often misunderstand the story of Pandora’s box: when the ills that afflict mankind flew from the box, only HOPE remained. We shudder.
