Author: Duke Okes

Duke Okes is an expert in Quality Management with 35 years of experience as a quality engineer, consultant and trainer. He has worked with dozens of companies in ten countries, and hundreds of organizations have attended his public workshops on auditing, quality systems, performance metrics and root cause analysis. He is an ASQ Fellow and certified by ASQ as a quality manager, engineer and auditor. He holds degrees in technology, business and education, and is a frequent conference speaker on quality management. He is the author of “Root Cause Analysis: The Core of Problem Solving and Corrective Action,” and has published dozens of articles on quality. He can be reached through his website.

Root Cause Analysis and Corrective Action for Project Managers

Project managers have the immense task of juggling requirements and resources that are often not under their direct control in order to produce the required project deliverables within the constraints to which they must adhere (scope, time, quality, etc.). Even if the perfect project plan could be designed and executed, it would not remove all of the risks that could ultimately impact a project. Plans must inevitably change for one reason or another.

During the phases of a project, there are three major activities focused on reducing project risk:

1. During project planning, a proactive risk assessment is conducted and the identified risks are either mitigated or avoided (e.g., by modifying the project plan), transferred (such as through insurance) or accepted (by doing nothing and accepting that "if it happens, it happens").
2. Throughout the project, risk is continually reassessed.
3. At the end of the project, a retrospective "lessons learned" is held, which will have the least impact on the current project but will serve to benefit others in the future.

However, for the unforeseen problems that occur throughout a project, risk management is too late, since it has already been completed, and lessons learned are too early, since they are gathered only at the conclusion of the project. Corrective action is therefore a critical process for dealing with ad-hoc problems encountered during projects. Unfortunately, actions taken to resolve an issue often address only the problem itself, not its underlying causes. Symptoms of the problem are addressed and project resources are adjusted to compensate, but true corrective action may not be taken. In other words, the causes of the problem remain unknown, meaning the problem may recur later in the project and/or in future projects.
Consider this example:

Problem: A design project to develop a new vehicle has come to a complete stop because one of its key work packages is on the critical path but is behind schedule.

Action taken: The work package behind schedule is deemed to be a low risk, so it is decided that it will proceed in parallel with other modules, changing the critical path. This means that if no major problems are found with the module, there will be no additional delay.

Note that while the action taken in this example may allow the project to proceed along a modified critical path, nothing was done to identify why the work package was behind schedule in the first place. That is, while the problem was resolved (corrected), no action was taken to ensure that the same problem would not occur in the future (corrective action). In our example, was the module behind due to inadequate capacity of the assigned resources, or for some other reason?

Corrective action consists of two major phases:

- Diagnosis: performing an investigation to find the root causes of the problem
- Solution: taking action to prevent the causes from recurring

To provide a more detailed breakdown of these phases, we put forward an example "10-step problem solving model" that we hope will be of use in guiding you through a corrective action process. Steps 1 through 5 are for problem diagnosis, and 6 through 10 for solution implementation.

1. Define the Problem: What occurred, where and when was it identified, when did it begin, and how significant is it?
2. Understand the Process: What were the process steps that should have been carried out before the problem was found?
3. Identify Possible Causes: Which of the process steps, if they did not occur as planned, could have caused the problem?
4. Collect Data: What information could indicate which of the possible causes actually occurred in a way that would create the problem?
5. Analyze Data: What does the data indicate about which of the possible causes did or did not contribute?
6. Identify Possible Solutions: What changes to the processes of project planning and execution might keep those processes from failing in the future?
7. Select Solutions: Which of the possible solutions identified are the most viable?
8. Implement Solutions: Plan and carry out the selected solutions.
9. Evaluate the Effects: Were the solutions implemented, and have they worked?
10. Institutionalize the Change: Update project management guidelines and tools to ensure that future projects are carried out in alignment with the improved processes.

Note that steps 1 through 5 are typically done iteratively, until the causes found are at a depth sufficient to prevent recurrence. For example, if testing delays on a software project are due to inadequate capacity of the testing software, the reason for the capacity problem would need to be determined in order to prevent such a failure in the future.

Of course, it is not necessary to carry out this level of investigation and action for every problem that occurs during a project, so an important component of the corrective action process is risk assessment and agreement on a sensible course of action. That is, for each problem that occurs, its relative magnitude and likelihood should be considered as part of a risk assessment before assuming root cause analysis is required.

There are many barriers that prevent corrective action from being carried out effectively. We have already alluded to one: the lack of a process for carrying it out, which is the purpose of steps 1 through 10. Other barriers, and the resulting imperatives for project managers, include:

There is often a tendency for a single individual to try to perform the investigation and solve the problem without help. However, project failures are often the result of incremental variations within multiple processes, and a single individual is unlikely to be sufficiently familiar with all processes to be able to evaluate them effectively and without bias.
Therefore, project managers must ensure that they involve multiple players in the diagnosis of complex problems, and encourage their team to "put their hand up for help."

In the rush to solve problems, people make assumptions and jump to causes or solutions without having data to back them up. This leads to tampering with processes, which can result in further problems. Project managers need to be certain that adequate information is available before deciding which actions to take.

Corrective action often has a negative connotation in organizations, which means people don't look forward to being involved. However, many studies have shown that humans and organizations learn more from their failures than from their successes, so corrective action needs to be viewed as simply the process of learning more about how processes actually operate. Project managers need to stay positive when assessing the need for corrective action and putting forward the case to do it.

Corrective action is often seen as something in addition to the "regular work," rather than as part of effective business management, as indicated by the Plan-Do-Check-Act cycle. Project managers who emphasize the PDCA cycle as part of day-to-day thinking, as well as during major milestone reviews, will help others see the more complete picture of their roles. It is certainly an embedded part of quality management.

Many organizations want to automatically assign the cause of all problems to human error. The problem with this is that it is insufficient for identifying solutions, since the cause of that human error would need to be known. Many of the causes of human error turn out to be deficiencies in information, equipment, and management processes. Project managers who focus on process deficiencies rather than blaming people will find that others are more willing to dig down to the real causes of problems.
There are also challenges specific to project management that make corrective action more difficult. These include:

- Many projects involve multiple organizations, each a separate legal entity contracted for its unique knowledge and skills. This means players may try to protect their own turf (think of the BP disaster in the Gulf, and how the various contractors blamed each other), making the truth hard to find.
- Project personnel may consider only the current project, rather than future projects, as the potential beneficiary of corrective action. The reality is that all players should be able to learn from investigations and often carry that knowledge into future projects.
- Similarly, because each project has an end-point, it may be difficult to do a full evaluation of effectiveness. The value of solutions may only be appreciated in the course of future projects.

Another significant advantage of developing better root cause analysis skills within the project team is that such thinking is fundamental for risk management, quality management and the creation of a "learning culture."
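The risk-based triage discussed above, weighing a problem's relative magnitude and likelihood before committing to a full root cause analysis, can be sketched in a few lines of Python. The 1-5 scales, the threshold value, and the example problems are illustrative assumptions, not part of any standard:

```python
# Illustrative triage: score each problem by relative magnitude and
# likelihood, and flag those that justify a full root cause analysis.
# The 1-5 scales and the threshold are assumptions for this sketch.

RCA_THRESHOLD = 12  # magnitude * likelihood at or above which RCA is warranted


def needs_rca(magnitude: int, likelihood: int, threshold: int = RCA_THRESHOLD) -> bool:
    """Return True when the risk score justifies a full investigation."""
    return magnitude * likelihood >= threshold


problems = [
    {"issue": "critical work package behind schedule", "magnitude": 5, "likelihood": 4},
    {"issue": "minor documentation defect", "magnitude": 1, "likelihood": 3},
]

for p in problems:
    p["rca"] = needs_rca(p["magnitude"], p["likelihood"])
```

Here the schedule problem (score 20) would be escalated to the 10-step process, while the minor defect (score 3) would simply be corrected and logged. In practice the scales and threshold would be agreed with stakeholders, not hard-coded.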

Measuring the Value of a Program Management Office

Many organizations have contemplated or implemented program management as a means of managing inter-related projects, with varying degrees of success. For the purposes of this article, we refer to a Program Management Office (PgMO) as a vehicle that can be used to manage the life-cycle of a specific program or, if a permanent body, to achieve strategic benefits that are not available by managing projects as separate efforts. Some organizations may refer to PgMOs (or other types of Program/Project Management Office) as Centres of Excellence.

Critical to the success of setting up a program management practice is gaining agreement from stakeholders on what constitutes that success. Think of it as the "why it will exist" factor. This step sounds obvious, but it is very easy to give it inadequate focus early on, and like many things, it can devolve bit by bit into providing "interesting but ultimately low-value advice or guidance." It should always be remembered that "people make projects," and experienced project campaigners know how to get their projects across the line. A PgMO should provide experienced practitioners with an appropriate service, just as it should address the slightly different needs of inexperienced practitioners. Industry standards offer a good source from which to define success; use them to specify what you will do to justify your existence.

Just as critical as having measures of success is ensuring you can capture them without adding needless bureaucracy. Capturing the wrong set of measures wastes time and could lead to the failure of the program office.

PgMOs have different purposes based on their longevity, the characteristics of the organization and the industry, the maturity of organizational processes and the scope of power with which they are endowed. Regardless of these dynamics, one of the primary goals of any PgMO should be to ensure benefit realization on behalf of the organization.
One thing is certain: they need to be seen by key stakeholders as a value-adding function, rather than as bureaucratic overhead. The specific actions undertaken will depend on the scope of the PgMO but may include:

- Integration of deliverables plans to ensure "just in time" availability of needed resources.
- Planning, taking and measuring actions to increase expected benefits.
- Establishing processes and procedures for the effective management of projects (and project resources) and, where necessary, standardizing routines and processes.

The level of success organizations ultimately attain through PgMOs will vary. Organizations that have efficient and mature project management processes, for example, will usually incur less risk in implementing a PgMO structure and returning benefits to the organization. Organizations new to project management or lacking mature project processes will often struggle with program management and with implementing a PgMO. Insufficient project management processes should serve as a warning sign of underlying issues, and it may therefore be inadvisable to attempt to implement a PgMO until the root causes of those issues are uncovered and addressed.

Given the complexities and variations in PgMOs, measuring the value of a PgMO is not always simple, but you should aim to make it so. The value measurement could be as unique as the projects/programs the organization manages, but several key measurement topics should be considered in all PgMO measurement strategies. These include:

Planning for Measurement: Just as with a new project, a PgMO should not be undertaken without a plan, so one of the first steps should be to create a strategy that identifies the mission, role and structure, and the measurements for evaluating success. The measurement strategy must consider stakeholder priorities.
That is, the measurement plan should be able to tell the story not only from the perspective of the PgMO, but also provide key metrics of interest to its primary stakeholders. Establishing the right measurement plan early is critical, as it will serve as the basis by which success is determined.

The measurement plan should allow for change. This doesn't mean modifying the metrics (what is measured) because they are not being met or are otherwise not casting a positive light, but it could include changes to the targets (e.g., what constitutes "acceptable") or the frequency for collecting and reviewing metrics. For example, the target for the percentage of projects reporting "green" (rather than troubled, in the standard red/amber/green reporting process) may initially be set at 95%. If, after a few reporting periods, the average is found to be 90%, consideration may be given to lowering the green target to 90%, as long as measurement continues and the threshold is raised again as processes and resources mature. A 95% target may simply have been too aggressive for a specific organization at the start.

Risk/Issue Management: Risk and issue management is a critical aspect of any PgMO and of any program or project, and your metrics should include these factors. Merely counting risks and issues is not an effective indicator. The number of issues escalated to the PgMO from the projects could be a useful indicator of either a poor working relationship between project managers and the Program Office or a sound understanding of risks, issues and inter-dependencies across projects. The PgMO is not designed to micro-manage project risks and issues, but metrics capturing, at the program level, the effective management of risks and issues at the project level, as well as those managed by the PgMO, should be considered. When issues arise, having a means to manage, track and report them is important.
An advanced PgMO may consider as a metric the percentage of issues with identified root causes and actions to rectify them (and the progress of those actions).

Detailing the Measurement Processes: The measurement plan should include key definitions for collecting and reporting metrics data, including what is meant by each metric (e.g., the operational definition, as well as any normalization or modification required), the source of the data, who is responsible for collecting and analyzing it, and to whom it must be reported. The communication plan should also detail how the metrics will be delivered to the various stakeholders. Delivering metrics to a stakeholder in a way that isn't properly understood, regardless of how positive the message may be, can alienate the PgMO. Stakeholders can suffer from data overload and lose the intended message associated with the measures.

Having the proper mix of metrics is important. For example, both outcome/results metrics and in-process metrics should be developed. The latter are useful predictors (leading indicators) of the results that will be achieved, and allow corrective action to be taken. Measurement should look at both effectiveness (meeting primary customer/stakeholder requirements) and efficiency (how well organizational resources were utilized in carrying out the program).

Example Metrics: Having looked at the considerations of a measurement strategy, let's now turn our attention to some specific metrics strategies based on the concept of PgMO maturity. As has been observed, there is no standard set of metrics that will work for every PgMO. When planning the metrics for the PgMO, do so with the understanding that the metrics need to be insightful, strategically focused, and able to drive decisions rather than "telling people what they already know." The right metrics for a PgMO will enable decisions that facilitate business strategy, increasing the value of the PgMO.
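The traffic-light threshold discussed above, where a "green" target might be relaxed from 95% to 90% while processes mature, amounts to a simple percentage check. The following Python sketch uses invented status data purely for illustration:

```python
# Compute the percentage of projects reporting "green" and compare it to
# the current target. The statuses and the 95% target are illustrative.

def percent_green(statuses):
    """Share of projects (0-100) whose RAG status is 'green'."""
    return 100.0 * sum(s == "green" for s in statuses) / len(statuses)


statuses = ["green"] * 9 + ["amber"]  # 9 of 10 projects on track
target = 95.0                         # initial, possibly aggressive, target

actual = percent_green(statuses)      # 90.0
meets_target = actual >= target       # False: consider resetting target to 90%
```

If several reporting periods consistently land around 90%, the plan could lower the target to 90% and raise it again as processes mature, exactly the kind of target change (as opposed to metric change) the measurement plan should allow for.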
Collecting and reporting the wrong metrics will make demonstrating the true value of the PgMO a challenge. While the exact metrics vary depending on the type of PgMO and other factors, the following are some example categories, stratified by the maturity of the PgMO.

For a newly formed (young) PgMO:

- The number of interactions between stakeholders, project managers, and other key players. Since these interactions are critical to effective program outcomes, the PgMO can help ensure that they occur, increasing the probability of success (a process metric).
- The rollup status of projects (e.g., variance in timelines and resource usage, projected outcomes), which should allow early detection of critical interface problems (process metrics).
- Benefits tracking. All programs, and therefore all PgMOs, should track and measure benefits, captured and reported based on the strategic objectives of the organization (outcome metrics).

For an experienced PgMO:

- More in-depth status of critical interfaces, such as the number of problems encountered and resolved, and estimates of the reliability/risks of program outcomes (process metrics).
- Alignment of the program to stakeholder interests, such as stakeholder feedback relative to concerns and satisfaction.
- Cost of operating the PgMO (an outcome metric, but one of efficiency rather than effectiveness).

For a mature PgMO:

- Value of the PgMO, such as value added and costs avoided, divided by the cost of the PgMO.
- Project, program, and/or system technology knowledge/skills developed and deployed across projects and the organization through PgMO efforts (outcomes).
- Comparison of the PgMO to benchmark PgMOs (could be outcome or process metrics).
- Percentage of issues for which root cause was determined, and the ongoing benefits resulting from resolving the root causes through changes in PgMO processes.

Determining the level of maturity of the PgMO is complex, as it may be related to the length of time the office has been in place, the level of standards and/or skills used by the office, the number or complexity of tools used for program and project management, and the number of successful program outcomes. A measurement system based on audits of the PgMO's processes, guided by a maturity rubric/matrix, can be useful when determining maturity level.

Conclusion

Far too often, measurement systems used in project/program management focus on what has been done, rather than on whether the strategic intent of an initiative is being or has been attained. A PgMO, if properly designed and aligned with your business strategy, should capture metrics that help the organization understand where it is on this trajectory and what ongoing actions will deliver success. Developing the right measurement system, and obtaining agreement on that system from key stakeholders, is a critical part of the PgMO, since it drives the way in which the office operates.
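The value metric suggested for a mature PgMO, value added plus costs avoided divided by the cost of operating the office, is a simple ratio. The Python sketch below makes the arithmetic explicit; all figures are invented for illustration:

```python
# Illustrative PgMO value ratio: (value added + costs avoided) per unit
# of PgMO operating cost. All monetary figures here are invented.

def pgmo_value_ratio(value_added: float, costs_avoided: float, pgmo_cost: float) -> float:
    """Benefit delivered per unit of PgMO operating cost."""
    if pgmo_cost <= 0:
        raise ValueError("PgMO operating cost must be positive")
    return (value_added + costs_avoided) / pgmo_cost


ratio = pgmo_value_ratio(value_added=400_000, costs_avoided=200_000, pgmo_cost=150_000)
# ratio == 4.0: each dollar spent on the PgMO returned four dollars of benefit
```

A ratio above 1.0 indicates the office is returning more than it costs; tracking the ratio over time, rather than a single snapshot, is what shows whether the PgMO's value is growing as it matures.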