Archive for October, 2018

Project Organization Structure

Posted on: October 30th, 2018 by admin No Comments

If you have been a project manager for any time at all, you have probably experienced competing demands from the sponsors of the project. The sponsor is the person (or people) who drives the scope of the project in conventional projects. In some instances, the project manager may find that there are in fact multiple sponsors and that the respective priorities of those sponsors are at odds. This is not the only difficulty to which the project manager must respond.

The CHAOS study by the Standish Group, which analyzed 23,000 projects in 1998[i], examined the factors that enable project success and failure. The top results are shown below.

Success Factor                 Influence
User involvement               20
Executive support              15
Clear business objectives      15
Experienced project manager    15
Small milestones               10

 

The top five factors from this study demonstrate the importance of the interface between the project, the executives supporting the project, and clear business objectives. The top three alone are significant exchanges between the project, the sponsor, and the user or client. When we are not sure who the sponsors are, or which sponsor has the highest priority, we spend considerable time wrestling with the balancing of inputs to determine which of the conflicting views is the most important.

In Scrum, we have a product owner who interfaces with our team, and this person brings the voice of the customer to the team. Not only do we have a single point of contact via the product owner, but the input from that product owner is groomed, that is, analyzed, understood, and prioritized, to generate the product backlog. The scrum master works with the product owner to make sure this work is sufficiently refined and prioritized for the team to undertake and produce the desired results. This prioritization and grooming ahead of the work reduces, if not eliminates, the wrestling match between sponsors once the work is underway.
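To make the grooming idea concrete, here is a minimal sketch (Python) of one way a product owner might order backlog items: by a simple value-per-effort score. The item names, values, and effort figures are hypothetical placeholders, and a real product owner weighs far more than two numbers; this is an illustration, not Scrum doctrine.

    # A minimal sketch (not Scrum doctrine) of one way a product owner might
    # order a backlog: by a simple value-per-effort score. The items, values,
    # and effort figures below are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class BacklogItem:
        name: str
        business_value: int   # relative value as judged by the product owner
        effort: int           # relative effort as estimated by the team

        @property
        def value_density(self) -> float:
            # Higher value for less effort floats toward the top of the backlog.
            return self.business_value / self.effort

    backlog = [
        BacklogItem("Export report to PDF", business_value=8, effort=5),
        BacklogItem("Single sign-on", business_value=13, effort=8),
        BacklogItem("Dark mode", business_value=3, effort=3),
        BacklogItem("Fix checkout defect", business_value=21, effort=2),
    ]

    # The groomed backlog: analyzed, understood, and ordered before sprint planning.
    for item in sorted(backlog, key=lambda i: i.value_density, reverse=True):
        print(f"{item.name:25s} value/effort = {item.value_density:.1f}")

The point of the sketch is only that prioritization happens before the sprint, by one accountable person, rather than being renegotiated among sponsors while the work is underway.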

Requirements come with a burden of their own: determining which are true requirements and which are things posing as requirements that do not belong in the software product.[ii]

Far too often the literature on software quality is passive and makes the incorrect assumption that users are going to be 100% effective in identifying requirements.  This is a dangerous assumption.  User requirements are never complete and they are often wrong.  For a software project to succeed, requirements need to be gathered and analyzed in a professional manner, and software engineering is the profession that should know how to do this well.

Software engineers have an ethical and professional obligation to caution clients about these problems and to assist clients in solving them, if possible.  In other words, software engineers need to play a role similar to the role of physicians.  We have an obligation to our clients to diagnose known requirements problems and to prescribe effective therapies.

Once user requirements have been collected and analyzed, then conformance to them should of course occur.  However, before conformance can be safe and effective, dangerous or toxic requirements have to be weeded out, excess and superfluous requirements should be pointed out to the users, and potential gaps that will cause creeping requirements should be identified and also quantified.   The users themselves will need professional assistance from the software engineering team, who should not be passive bystanders for requirements gathering and analysis.

While agile has a method outlined for ascertaining the veracity of the requirements and working only those that are understood, those who practice conventional approaches may not have this sort of process, which would include, among many things, requirements reviews, walkthroughs, mock-ups and prototypes, focus groups, and more. These are not project management doctrine, strategies, or tactics, but product and software development methods. Where an organization or a project manager does not know of these approaches, these activities are not included in the work. We end up with a long list of requirements, some of which are not even requirements, and if we do not take proactive steps at the beginning, we pay for it later in rework and change requests.

The product is built upon the requirements, and the quality of the requirements has implications for the final product. The agile approach reinforces this thinking by requiring that the product backlog be understood and prioritized before any work starts in the sprint. Conventional approaches can take this same approach, but it is not part of project doctrine; it is part of a product development methodology.

 

 

[i] Jim Johnson, et al. 1998. CHAOS: A Recipe for Success. Published report by the Standish Group.

[ii] Jones, Capers. July 24, 2015. Software Requirements and the Ethics of Software Engineering, pages 3-4. Namcook Analytics.

Value, Scope and Change Requests

Posted on: October 26th, 2018 by admin No Comments

Value, Scope and Change Requests

Change requests are part of any development project. Change requests are sometimes necessary as we learn by building and doing the work. In my experience, change requests are often born from requirements we thought we understood, only to learn by working with the product or system that we really did not have enough understanding to record it in the form of specifications. We think we are making things better when we spend an abundance of time documenting the requirements, at least those requirements about which there is uncertainty. That is not to say documentation is not a worthwhile endeavor; we have been in product development long enough to know there are downsides to delivering a product that has no associated documentation. The testing and manufacturing portions of the work will make use of this requirements documentation, and errant or missing documentation makes the work of these areas more difficult, or nigh impossible.

It should not come as a surprise to learn that as we generate more requirements, the risk of change requests also increases. In fact, a study from Capers Jones demonstrates that as the function points for the software increase, change requests increase as well, and the relationship is not linear.[i] Function points are a measure of the content of the (software) product that removes the need to consider the implementation language.

What is a function point?[ii]

Function Points are an internationally standardized unit of measure used to represent software size. The IFPUG functional size measurement method (referred to as IFPUG 4.3.1) quantifies software functionality provided to the user based solely on its logical design and functional requirements. The resultant number is called a Function Point Count.  With this in mind, the objectives of FP counting are to:

  • measure functionality that the user requests and receives;
  • measure software development and maintenance rates and size independently of the technology used for implementation;
  • provide a normalizing measure across projects and organizations;
  • perform IT portfolio management for C-suite executives;
  • perform benchmarking against ISBSG data (ISBSG is the acronym for the International Software Benchmarking Standards Group); and
  • collect and report on CMMI(R) capability maturity model data.
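As a rough illustration of how a count comes together, here is a minimal sketch of an unadjusted function point tally using the commonly published IFPUG component weights. The component counts in the example are hypothetical, and a real IFPUG 4.3.1 count follows the detailed complexity rules in the counting practices manual and may apply a value adjustment factor.

    # A rough sketch of an unadjusted IFPUG-style function point count.
    # The weights are the commonly published IFPUG component weights; the
    # component counts themselves are hypothetical. A real count follows the
    # detailed complexity rules in the IFPUG counting practices manual.

    WEIGHTS = {
        # component: (low, average, high)
        "EI":  (3, 4, 6),    # external inputs
        "EO":  (4, 5, 7),    # external outputs
        "EQ":  (3, 4, 6),    # external inquiries
        "ILF": (7, 10, 15),  # internal logical files
        "EIF": (5, 7, 10),   # external interface files
    }

    def unadjusted_fp(counts: dict) -> int:
        """counts maps component -> (n_low, n_avg, n_high) occurrences."""
        total = 0
        for component, (n_low, n_avg, n_high) in counts.items():
            w_low, w_avg, w_high = WEIGHTS[component]
            total += n_low * w_low + n_avg * w_avg + n_high * w_high
        return total

    # Hypothetical tally for a small application.
    example = {
        "EI":  (6, 4, 2),
        "EO":  (5, 3, 1),
        "EQ":  (4, 2, 0),
        "ILF": (2, 1, 0),
        "EIF": (1, 0, 0),
    }
    print("Unadjusted function points:", unadjusted_fp(example))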

Value, Requirements and Change

Understanding the past relationship between function points and project costs allows us to extrapolate, much like builders know the cost per square foot to build a house at a specific level of finish. We do not need to document all the requirements up front (generally a bad idea unless the scope of the work is well understood and very small).

There is little value in a change request. Any value there may be comes after the fact: it is better to build the correct thing than what the requirements originally indicated. This represents waste, or mura (definition below[iii]), via the overproduction of requirements that, as it turns out, were errant.

Mura means unevenness, non-uniformity, and irregularity. Mura is the reason for the existence of any of the seven wastes. In other words, Mura drives and leads to Muda. For example, in a manufacturing line, products need to pass through several workstations during the assembly process. When the capacity of one station is greater than the other stations, you will see an accumulation of waste in the form of overproduction, waiting, etc. The goal of a Lean production system is to level out the workload so that there is no unevenness or waste accumulation.

Mura can be avoided through the Just-In-Time ‘Kanban’ systems and other pull-based strategies that limit overproduction and excess inventory. The key concept of a Just-In-Time system is delivering and producing the right part, at the right amount, and at the right time.
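A minimal sketch of the pull idea follows: a work item enters the in-progress queue only when the station is below its WIP limit, so upstream demand cannot pile up as overproduction. The stations, the limit, and the work items are hypothetical.

    # A minimal sketch of a Kanban-style WIP limit: work is pulled into the
    # station only when it is below its limit, so upstream cannot overproduce.
    # The WIP limit and work items below are hypothetical.
    from collections import deque

    WIP_LIMIT = 3
    backlog = deque(["req-1", "req-2", "req-3", "req-4", "req-5", "req-6"])
    in_progress = deque()
    done = []

    def pull():
        """Pull the next item only if we are under the WIP limit."""
        if backlog and len(in_progress) < WIP_LIMIT:
            in_progress.append(backlog.popleft())
            return True
        return False  # at the limit: no overproduction, finish something first

    def finish_one():
        if in_progress:
            done.append(in_progress.popleft())

    # Simulate a few cycles: pull until the limit blocks us, then finish work.
    for cycle in range(6):
        while pull():
            pass
        finish_one()
        print(f"cycle {cycle}: in progress={list(in_progress)} done={done}")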

Excess Scope Does Not Necessarily Mean Value

It is possible to have an excess inventory of requirements, especially when those requirements are uncertain and may in fact end up representing rework. This is not just a software issue; it extends to hardware development as well as to our approach to planning the work, for example, spending too much time on detailed planning too far into the future (another example of mura).

[i] Jones, Capers. 1997. Applied Software Measurement. McGraw Hill.

[ii] http://www.ifpug.org/faqs-2/#One last accessed 10/25/2018

[iii] https://theleanway.net/muda-mura-muri last accessed 10/25/2018

Value Proposition

Posted on: October 25th, 2018 by admin No Comments

Value

Business is predicated on providing value to the customer, but it does not end there. The business itself needs to see value in the work, so this is really a value chain that is only as strong as its weakest link. If the value to the customer is too low or nonexistent, no customers will purchase the product. If the value to the business from the product is too low, there will be no investment.

What is value?

Value is the difference between the utility received and the cost of obtaining it.

Value = Utility – Cost

Value = Benefit – Cost

The utility may be customer dependent, but it must be understood, as it will drive the subsequent work. Not knowing what the customer values clouds how we approach the work. We will write more about this conundrum later. It suffices to say that no value to the customer means no value to the business, and therefore no work or project.

Traditional Organization’s Value Calculations

The organization's calculation of value may be a little easier to discern. This is essentially the business case for the work and can be one of the following:

  1. Internal Rate of Return (IRR)
  2. Return On Investment (ROI)
  3. Payback Period

To determine this value proposition, we will need the cost estimates, as this is the “cost” balanced against the potential income.
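As a sketch of how these three measures relate cost estimates to potential income, the following uses a purely hypothetical cash-flow stream (an up-front investment followed by yearly returns). Real business cases will apply the organization's own discounting conventions and data.

    # A minimal sketch of the three business-case measures listed above,
    # computed from purely hypothetical cash flows: an up-front investment
    # followed by yearly returns. Real business cases will be more involved.

    cash_flows = [-100_000, 30_000, 40_000, 45_000, 35_000]  # years 0..4 (hypothetical)

    def roi(flows):
        investment = -flows[0]
        net_gain = sum(flows[1:]) - investment
        return net_gain / investment

    def payback_period(flows):
        """Years until cumulative cash flow turns non-negative (simple, undiscounted)."""
        cumulative = 0.0
        for year, flow in enumerate(flows):
            cumulative += flow
            if cumulative >= 0:
                return year
        return None  # never pays back within the horizon

    def npv(rate, flows):
        return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

    def irr(flows, low=-0.99, high=10.0, tol=1e-6):
        """Internal rate of return by bisection on NPV(rate) = 0."""
        for _ in range(200):
            mid = (low + high) / 2
            if npv(mid, flows) > 0:
                low = mid
            else:
                high = mid
            if high - low < tol:
                break
        return (low + high) / 2

    print(f"ROI:            {roi(cash_flows):.1%}")
    print(f"Payback period: {payback_period(cash_flows)} years")
    print(f"IRR:            {irr(cash_flows):.1%}")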

In conventional projects, the prediction or estimation of the work happens early and on a large scale; specifically, there is an attempt to understand the potential total cost of the project. We know these estimates will be adapted over time, that the earlier estimates will be much further from the actual end result, and that over time, as we learn and review work results, the estimates become more certain, rather like the way a GPS system refines its ETA calculation.

The agile approach may still reserve a block of funds for the entire project, but the withdrawal from this bucket of funds is based upon the specific expenditures for that set of work. Rather than treating the sum of the project work as equal to the sum of the project budget, we build prioritized increments. This is important because, as you will note below from a study of software projects by Jim Johnson[i], less than half of the features delivered via a project are used; that means we have done work (expenditure and time) with no perceived value to the customer. This effort burdens the entire cost of the product as the organization attempts to recoup the investment for the entire project.

Figure: Features and Use

Value and Learning

Rather than attempting to build out each of the features, there is a focus on the prioritized features, until:

  • the most beneficial feature is completed sufficiently to present to the customer
  • the product is sufficiently complete to experiment with and generate more detailed requirements
  • the product is deemed to not be viable and the feature development is discontinued

This is somewhat true for conventional projects as well, at least in ideology; there is no doctrine that suggests delivering large increments of feature content in each software iteration. In practice, however, this hyper-focus on one feature or on short bursts of work is much more diffuse in conventional project management. This is not conventional project management doctrine, but how things are generally enacted.

So we see that this small-increment, iterative approach may be great for providing quick value, or at least opportunities for exploring value with the customer via increments, but there is still a need for the organization to understand how this project, upon conclusion, will impact the bottom line of the organization, and this falls more to the business case assessment methods previously noted.

Value at the Micro and the Macro Level

Just as focusing solely on the top-level monetary or value proposition comes at a cost in the conventional business approach, so too does a sole focus on the immediate value and cost in an agile approach. A focus solely on the minutiae will often come at the expense of understanding the sum of those details and how they impact the company's bottom line and the potential profit or value extracted from the proposed project over the course of the project life cycle. A focus on one or the other comes at some detriment. This can be especially complicated because product development includes more than just development personnel; it may include marketing and sales personnel, as well as manufacturing and post-manufacturing support organizations that are part of the organization's value chain. We will write more on value stream mapping later.

 

[i] Johnson, Jim. 2002. Keynote speech, XP 2002, Sardinia, Italy.

What works: team participation

Posted on: October 24th, 2018 by admin No Comments

The team

My first job out of university was with a small product development and manufacturing company. The company developed its own embedded products for sale all over the world. I do not know how this collection of technicians and engineers ended up as tight and as close as it was when it came to work. The group was a collection of characters. The other electrical engineer we will call Flicky (we had secret names for each of the team members). There was one technician we referred to occasionally as IR because of the unfortunate anagram his name made. There was a mechanical designer we referred to as BWI; I will explain that later. There was another assembler / technician we referred to as Wal.

Games with the Team

When it came to the electrical or embedded product idea generation and development, I can recall a game we played. We would read the specifications together, asking questions of the sales and marketing people where we could. We would then go to our respective work areas and work the design as a challenge. It was not a cut-throat competition but a friendly one.

This part was largely, but not exclusively, between Flicky (still friends with him today, actually) and me. The first one to sketch out a core design they thought would work would go find the other to see if the design withstood critique as a good start. It was not enough just to be first; the design had to withstand a review substantiated by mathematical and logical scrutiny.

The person presenting the concept would walk through the proposed or potential design solution with the other team members, or at least the other engineer. This was sort of like an early design review, and the presenter would accept alterations to improve the design. If the proposed solution stood up to this scrutiny, it would be further elaborated, tested, and critiqued. If it did not bear productive fruit, the learning from the design would be carried forward on a return to the drawing board.

There were plenty of other such impromptu exchanges as the product materialized, a continuous review of the design and the design objectives. The top-level requirements and design expectations were foremost in the work.

These design proposals were subject to earnest and vigorous critique across the disciplines. This is a good place to introduce you to BWI, or in the long hand, “better way incorporated”. This mechanical guy would review our work, and invariably he had a way to improve the design, even if his way of introducing this fact to us seemed a bit abrasive. This would have been annoying if he in fact did not have a better way.

IR was a patient and persistent technician. When we were at our wits' end building prototypes or first production parts, he would delicately perform the last steps in the assembly process, taking the early part from me just before I was ready to throw the product across the room. He would custom size the interconnecting wires, deliberately and delicately tucking the wires out of the way so the product could be closed for our testing in the lab. At this point the product had been tested in the development environment, but not in a realistic environment simulating the interaction with the operator, or in the settings to which the product would be subjected, the scenarios in which the product can and will be used. That is where Wal came into the picture.

Wal would take the product into the product lab which was configurable to a small scale of the customer’s manufacturing line. The range of equipment available was comparable to that of the various customer equipment, but not at the volume of the customer’s production line.  Without fail, he would come back to the engineering area, with some odd scenario that could happen in the field that would evoke a strange and unacceptable or unpredictable outcome. He would bring this to the engineering office and we would seek out why this happened and adjust either the hardware or software.

Figure: Layout of Facility

Team Proximity

Note the layout of the facility: the engineering office was connected to the manufacturing floor, and the prototype area was really part of the manufacturing floor. There was no wall around this area, just a small space reserved largely for prototype part assembly. The manufacturing floor could be noisy. However, being so close to the manufacturing line ensured the design and even the prototype development could be (and was) influenced by the manufacturing of the product. The test lab was in similar proximity to the engineering office; when failures or non-conformances were discovered, they were quickly brought to the engineers, and we would walk to the lab with the test personnel (Wal).

There were few functional areas, just manufacturing and engineering, and neither held its boundaries tightly; the lines were blurred between the two, and communication and work were easily and readily apportioned as development or manufacturing demands dictated.

In this group of strong personalities, each with their respective areas of competence, nothing was taken personally, and we considered input from each person, even when that feedback or information was delivered in less than an optimal way. We worked hard, and we played hard; the unique names for each other were part of that playing, along with the friendly competition aspects of the development work.

What does not work – Queuing Theory

Posted on: October 5th, 2018 by admin No Comments

Queuing Theory

Queuing theory is the study of waiting lines. In business it is associated with determining the resources needed to achieve service throughput objectives, but it does not apply only to services and material handling.

Queuing Theory and Billable Hours

I have worked at companies that had a target for billable hours well above 90%. That is, 90% or more of the hours an employee worked had to be assigned to specific project work. The organization treated the time an employee was at work as nearly 100% available for specific projects, so, for example, in a 40-hour work week, it was expected that 36 hours or more were dedicated to specific project activities. This was recorded in the project schedule.

Queuing Theory and Product Development

The impact of queues on product development and knowledge management in general is explained well in a Harvard Business Review article, a snippet of which is found below:[i]

In both our research and our consulting work, we’ve seen that the vast majority of companies strive to fully employ their product-development resources. (One of us, Donald, through surveys conducted in executive courses at the California Institute of Technology, has found that the average product-development manager keeps capacity utilization above 98%.) The logic seems obvious: Projects take longer when people are not working 100% of the time—and therefore, a busy development organization will be faster and more efficient than one that is not as good at utilizing its people.

But in practice that logic doesn’t hold up. We have seen that projects’ speed, efficiency, and output quality inevitably decrease when managers completely fill the plates of their product-development employees—no matter how skilled those managers may be. High utilization has serious negative side effects, which managers underestimate for three reasons:

1.) Variation of the development work

2.) Incomplete understanding of queues and economic performance

3.) Lack of clear visibility into product development work in progress

We have discussed variation in the work in many other posts. Variation is everywhere, and experience suggests it is rarely accounted for in project task estimates and in their recording via the project schedule. We will not go further into this here; please see our other blog posts on variation.

Figure 1: Harvard Business Review graphic

Queuing theory has been applied to operations, that is, to services and manufacturing, but based on observation it has not rippled down to product development and knowledge management. This is likely the reason management works to ensure that employees are booked nearly to capacity. On the surface, from a business perspective, management is there to ensure the best use of resources and the employment of our team's talents. It probably looks like we should keep our people focused on our work, using all of the time available out of the 40 hours. The problem is that this does not work the way we suppose; consider the Harvard Business Review graphic above.
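The shape of that curve can be sketched with the classic single-server (M/M/1) queue result, in which the average number of items queued or in progress grows as utilization divided by (1 minus utilization). Knowledge work is messier than this simple model, and the utilization levels below are just sample points, but the explosion near full capacity is the point.

    # A minimal sketch of why booking people near 100% backfires, using the
    # classic single-server (M/M/1) queue result: the average number of items
    # queued or in progress grows as utilization / (1 - utilization), so delay
    # explodes near full capacity. Knowledge work is messier than this model;
    # the shape of the curve is the point.

    def avg_items_in_system(utilization: float) -> float:
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1) for a stable queue")
        return utilization / (1.0 - utilization)

    for u in (0.50, 0.70, 0.80, 0.90, 0.95, 0.98):
        print(f"utilization {u:>4.0%}: ~{avg_items_in_system(u):5.1f} work items queued or in progress")

Going from 90% to 98% booked roughly quintuples the pile of waiting work, which is why the last few "free" hours are so expensive.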

In conventional projects, the specifics of the work can become obscured. This happens when we think we can plan months out, and when we do not spend time with our team consistently and deliberately to understand what is physically happening: what work is coming up, what is underway, and what progress is being made on those work items. In agile projects, the work is segmented or parsed into very controlled packages, and work items are pulled from the product backlog in a very controlled manner. In this way the queue of work items and the work in progress can be controlled and well managed.

Queuing Theory and Product Development Summary

The more we attempt to maximize the number of hours expected from our talent, the more problems we create for throughput. Variation in tasks, as well as how the work is staged and accomplished, is likewise important. We should study queuing theory and understand that we cannot book our team's hours as if we knew the task variation absolutely, along with the myriad things we need to learn along the way while doing the work. Time must be available for this unknown learning.

 

[i] Thomke, Stefan and Donald Reinertsen. Six Myths of Product Development. Harvard Business Review, May 2012 – https://hbr.org/2012/05/six-myths-of-product-development last accessed 10/3/2018

Poor Process or Poor Execution

Posted on: October 4th, 2018 by admin No Comments

Poor Process or Poor Execution

I have used both conventional approaches to projects as well as agile. In fact, I have used some of the agile techniques in conventional projects with success. I know, anecdotal, but perhaps an interesting anecdote.

Conventional projects have had considerably high failure rates reported (the Standish Group studies, for example). The problem becomes understanding why these conventional projects fail. For example, I have been on projects where the project manager is seldom seen and where conventional project processes are ignored or executed poorly. There can be many reasons for failure; poor process, poor execution, or poor strategy can all end in the same failure. It is like a play in football: if the offense executes it well, we may get a touchdown. If we have no play, or execute a good play poorly, we might fail in either case.

So the question becomes: what is the root cause of the failure? For example, I wonder how the strategy was selected for the failed conventional projects. A poor strategy could ultimately mean a poor outcome or failure, and in that case simply changing how we execute the project may not change the outcome.

In my experience with agile (Scrum), we have a dedicated team (dedicated in secured time, if not necessarily in attitude). Conversely, in my conventional project management experience, the project manager and team members may have other projects upon which they work. This is not conventional project management doctrine, but a unique incarnation or inclination of the individual project manager, management, or company. Would we then say this type of failure was due to the conventional project approach? Of course not. The point is, if conventional project failure rates are higher, it might not be the formal process but the often ad hoc approach or errant application of these project management processes.

Nowhere in conventional project management approaches is it recommended to distract the project manager and team members with an overload of work across a variety of projects.

Nowhere in conventional project management is the project manager instructed to sit in their office and not spend time with their team understanding the daily activities and trials that the project must endure.

Nowhere in conventional project doctrine will you find “counsel” to plan in detail months and even a year out into the future. These are blemishes born of poor project management knowledge and poor execution. If these sorts of things are the reason for the failures, then they should be corrected. I have seen conventional approaches work as well as Scrum; both are viable approaches, but done poorly, both will produce similarly poor outcomes.

Product Development – what does not work.

Posted on: October 3rd, 2018 by admin No Comments

What does not work – duration

Besides wasting time planning out many months into the future as if we could see and control that far ahead, there have been studies over the years that establish an inverse correlation between the length of time a project runs and the project success rate. Perhaps this does not sound so odd, given that the more tasks we have, or the more work we must do, the greater the risk. Consider a product and its opportunities for failure: the more parts, the more opportunities for failure, or failure points.

There is no silver bullet when it comes to product development. However, studies do tell us some things that do not work.

Duration

First, there is long-term planning, or treating the job as if we know the details 6 months to 3 years into the future. There are studies from the Standish Group illustrating that projects longer than 6 months in duration have a higher failure rate than projects of less than 6 months.[i]

Figure: Project Duration and Success

Function points

Like project duration, the number of function points the project is to deliver is another leading indicator, and it has an impact on the project success rate.

Function Point Analysis (FPA) is a sizing measure of clear business significance. First made public by Allan Albrecht of IBM in 1979, the FPA technique quantifies the functions contained within software in terms that are meaningful to the software users. The measure relates directly to the business requirements that the software is intended to address. It can therefore be readily applied across a wide range of development environments and throughout the life of a development project, from early requirements definition to full operational use. Other business measures, such as the productivity of the development process and the cost per unit to support the software, can also be readily derived.  The function point measure itself is derived in a number of stages. Using a standardized set of basic criteria, each of the business functions is assigned a numeric index according to its type and complexity. These indices are totaled to give an initial measure of size which is then normalized by incorporating a number of factors relating to the software as a whole. The end result is a single number called the Function Point index which measures the size and complexity of the software product.[ii]

In summary, the function point technique provides an objective, comparative measure that assists in the evaluation, planning, management, and control of software production. The more function points we have, the more change requests, and generally speaking change requests represent rework, which is waste. The graphic below illustrates this relationship.[iii] This waste has an impact on timely project delivery and cost.

 

Figure: Function Points and Requirements Changes
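One illustrative way to see why the relationship is nonlinear: if requirements change at some monthly rate, and larger (higher function point) projects also run longer, the accumulated change compounds over a longer schedule. The creep rate and the duration rule in the sketch below are hypothetical placeholders for illustration, not Jones's published figures.

    # An illustrative sketch (not Jones's published data) of why change requests
    # grow faster than linearly with size: if requirements change at some monthly
    # rate, and bigger projects (more function points) also run longer, the
    # accumulated change compounds over a longer schedule. The creep rate and the
    # duration rule below are hypothetical placeholders.

    MONTHLY_CREEP_RATE = 0.01  # hypothetical: 1% of the requirements base changes per month

    def schedule_months(function_points: int) -> float:
        # Hypothetical power-law rule of thumb: duration grows with size.
        return function_points ** 0.4

    def changed_function_points(function_points: int) -> float:
        months = schedule_months(function_points)
        # Compound growth of the requirements base over the schedule.
        return function_points * ((1 + MONTHLY_CREEP_RATE) ** months - 1)

    for size in (100, 1_000, 10_000):
        print(f"{size:>6} FP project: ~{changed_function_points(size):8.0f} FP of changed requirements")

Under these toy assumptions the changed portion grows from a few percent of a small project to nearly half of a very large one, which is the nonlinear pattern the graphic describes.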

 

[i] Jim Johnson, et al. 2000. CHAOS in the New Millennium. Published Report. The Standish Group

[ii] http://www.ifpug.org/about-function-point-analysis/ International Function Point User Group last accessed 10/2/2018

[iii] Jones, C. 1997. Applied Software Measurement. McGraw Hill.

Things that Secretly Sabotage Projects and Teams -Cognitive Bias

Posted on: October 2nd, 2018 by admin No Comments

(Lexington, NC, September 28, 2018)  – Cognitive biases are always at work, playing dirty tricks behind our perceptions. Jon M. Quigley, Founder and Principal, Value Transformation, will address this issue in his latest presentation, “Things that Secretly Sabotage your Project and Team” to be held on Thursday, October 4, 2018 from 07:00 pm to 07:50 pm at Turbine Hall, Winston-Salem, North Carolina.

Project managers and teams of all organizations have experienced this common conundrum: their team members are capable, intelligent, and well equipped, yet do things that appear to make no sense. The signs of bias at work are hard to see, but bias is most likely the cause.

In his talk, Jon Quigley will reveal how cognitive bias impacts work, the most common biases we are likely to be affected by, and how they influence selection of strategies, team development, and other areas. The takeaway from the session will include ways to transform a group of individuals into a whole that is larger than its parts.

Jon Quigley is a PMP and CTFL certified coach and mentor with over 20 years of experience in product development (from embedded hardware and software to project management). He has seven US patents to his name and several outside the US, and has won awards such as the Volvo-3P Technical Award (2005) and the Volvo Technology Award (2006). Jon has co-authored over 10 books on project management and agile topics.

Value Transformation offers flexible coaching and training to managers and key technical staff in specific product and project management areas. The company uses one-on-one mentoring aimed to grow talent, improve outcomes, and upgrade individual competencies. The team handles quick problem solving in case of process or product failures, and handles team development using a variety of approaches with concrete objectives.

Among its achievements, Value Transformation has enabled companies to save millions of dollars in quality and cost improvements related to products and processes. It has also assisted in developing intellectual property to improve revenue streams. The company’s expertise ranges across product development, project management, agile and lean, TQM, etc.

 

Event at a glance:

 

When: Thursday, October 4, 2018 from 07:00 pm to 07:50 pm

Where: Turbine Hall | Bailey Power Plant at 486 Patterson Avenue, Winston-Salem, North Carolina, United States, 27101

Cost: This is a free event!

Contact: Venture Café, Winston-Salem

——————————————————————————————————————

For more information, please visit: www.valuetransform.com

Media contact

Value Transformation LLC

Jon M Quigley

Lexington, NC 27292

Tel: 336-963-0119

Email: jon.quigley@valuetransform.com
