Confusing QBD Baseline Changes with QBD EAC Changes



Quantified Backup Data

Quantified Backup Data (aka QBD) has become a requirement for contractors who make use of the “Percent Complete” Earned Value Technique (EVT). This requirement is actually a good thing because it helps eliminate the guesswork previously cited as a flaw in the percent complete EVT. Unfortunately, it has become a good idea gone bad through over-implementation.

The primary problem: Confusing “QBD Baseline Changes” with “QBD EAC Changes.” Many contractors are unnecessarily bogging down their change control process with requests to “change the QBD” when all that is really changing is the detail behind some of the steps in the QBD.

The Cake Example

Let’s take a simple, practical example of baking a cake to illustrate the difference. What follows is an approach found online from one baking company. The example weights have been added by the author.

 

10 Basic Steps to Making Any Cake (step weight as a percentage of the process)

  1. Select the recipe for the type of cake (2%)
  2. Select and prepare (grease) the pans (2%)
  3. Preheat the oven (2%)
  4. Prepare the ingredients (30%)
  5. Mix the ingredients (40%)
  6. Put batter in pan and bake the cake (5%)
  7. Remove the cake from the pan (2%)
  8. Let the cake cool (2%)
  9. Make the frosting (10%)
  10. Frost the cake (5%)

 

For the context of this blog, this bakery’s approach IS the Cake (baseline) QBD. Chocolate cake, pound cake, apple cake, one-layer cake, multi-tiered cake, birthday cake, or wedding cake: it does not matter. This bakery approaches making any type of cake with this QBD. Some cakes, however, are more complex than others – not all cakes are created equal.

Steps 1, 2, 3, 6, 7, and 8 are standard and would likely have the same “budget weight” in the Cake QBD, regardless of the type of cake. Steps 4, 5, 9, and 10, on the other hand, might be more involved for a complex cake. Remember: Complexity does not change the Cake (baseline) QBD!

Simple Cake vs Complex Cake

Let’s look at the Cake QBD to see how different types of cakes are handled by comparing a Simple Cake (vanilla with chocolate frosting) to a Complex Cake (apple walnut German chocolate).

Follow Steps 1, 2, and 3 (unchanged)

Step 4: Prepare the ingredients

Simple Cake: Gather eggs, milk, flour, and cake batter mix.

Complex Cake: Gather eggs, milk, flour, sour cream, lemon juice, pre-cooked apples, chopped walnuts, and shaved coconut.

Step 5: Mix the ingredients

Simple Cake: Blend together the eggs and milk. Then add in the flour and dry cake batter mix.

Complex Cake: Blend together the eggs and milk. When smooth, fold in sour cream. Then add in the flour and the cake batter mix. When thoroughly blended, add cooked apples, lemon juice, chopped walnuts, and shaved coconut. Blend until evenly mixed.

Follow Steps 6, 7, and 8 (unchanged)

Step 9: Make the frosting

Simple Cake: Use a pre-made chocolate frosting mix.

Complex Cake: Start with a German chocolate frosting mix, add coconut and chopped walnuts, and mix gently until the additions are evenly distributed throughout the mix.

Step 10:  Frost the cake

Simple Cake:  Frost between layers, frost top layer, frost side of cake.

Complex Cake: Frost between layers, frost top layer, frost side of cake. Apply apple wedges on top of the frosted layer, sprinkle on more chopped walnuts, apply frosting flowers around bottom with frosting tool.

As you can see, the 10 Step Cake QBD did not change throughout this process. What did change was the set of ingredients and some of the added lower level steps within QBD Steps 4, 5, 9, and 10. This might add some cost (EAC) for the added ingredients and for the additional labor of pre-cooking, chopping, and shaving. In the overall context of the cake baking process, however, the steps (QBDs) and the associated weighting of each QBD remained the same. In this bakery, all that would change is the forecast cost (EAC) for the complex cake over the simple one – i.e., no baseline change request (BCR) is needed to change the QBD!

QBD vs EAC

In this simple example, the QBD is Baking a Cake. It is not “Baking an Apple Walnut German Chocolate Cake.” If it were, this bakery would have hundreds of QBDs, depending on the different types and complexities of cakes the bakery could possibly make. For example, would you want a separate QBD for a birthday cake that says “Happy Birthday Johnny”? No – that is simply a cost (EAC) item within the frosting steps. The QBD EACs affected would be Step 9, Make the frosting, and Step 10, Frost the cake. Rather than hundreds of QBDs, this bakery has ONE! The cost (EAC) of a cake varies based on the complexity and content of the cake.
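To make the separation concrete, here is a minimal sketch (Python, with hypothetical dollar figures) of how the fixed baseline QBD weights drive percent complete, while complexity shows up only in the forecast:

```python
# Baseline QBD: ten step weights, fixed for every cake (they sum to 100%).
QBD_WEIGHTS = {
    "Select recipe": 0.02, "Prepare pans": 0.02, "Preheat oven": 0.02,
    "Prepare ingredients": 0.30, "Mix ingredients": 0.40, "Bake": 0.05,
    "Remove from pan": 0.02, "Cool": 0.02, "Make frosting": 0.10,
    "Frost the cake": 0.05,
}

def percent_complete(steps_done):
    """Percent Complete EVT: earned value is the sum of the baseline
    weights of the completed QBD steps -- the same for any cake."""
    return sum(QBD_WEIGHTS[step] for step in steps_done)

done = ["Select recipe", "Prepare pans", "Preheat oven", "Prepare ingredients"]
print(f"Percent complete: {percent_complete(done):.0%}")  # 36% for any cake

# Only the forecast differs: hypothetical EACs reflecting ingredients and labor.
simple_cake_eac = 12.00   # vanilla with chocolate frosting
complex_cake_eac = 31.50  # apple walnut German chocolate -- same QBD, higher EAC
print(f"EAC simple: ${simple_cake_eac:.2f}  EAC complex: ${complex_cake_eac:.2f}")
```

Either cake earns the same 36% after the same four steps; only the EAC differs, which is exactly why no BCR is needed.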

This QBD approach applies to any number of other processes. Here are a couple of others:

Figure: Basic idealized steps of the scientific method (left) and the engineering design process (right), showing their differences and similarities. (From sciencebuddies.org)

 

In this case, the complexity of the problem being addressed might affect the amount of research required or the number of experiments or brainstormed solutions needed. That, in turn, could dictate the number of times the yellow box on the right (the re-do box) is required. The approach, however, is exactly the same until the process achieves “Results Align with Hypothesis” (for the scientific method) or “Solution Meets Requirements” (for the engineering design process), and the results are communicated.

The House Example

Another practical hands-on example could be the steps in building a house:

  1. Grading and site preparation
  2. Foundation construction
  3. Framing
  4. Installation of windows and doors
  5. Roofing
  6. Siding
  7. Electrical Rough-In
  8. Plumbing Rough-In
  9. HVAC Rough-In
  10. Insulation
  11. Drywall
  12. Trim
  13. Painting
  14. Finish electrical
  15. Bathroom and kitchen counters and cabinets
  16. Finish plumbing
  17. Carpet and flooring
  18. Finish HVAC
  19. Hookup to water main, or well drilling
  20. Hookup to sewer or installation of a septic system
  21. Punch list

As with baking a cake, the size of the house will make a difference in how much it costs this builder, but the QBD approach to building each house is the same. Some of the lower level activities below each QBD step might be more involved. For example, Step 2, Foundation construction, might be more involved if the house is to have a basement. Other steps might be more or less involved: Step 17, Carpet and flooring, might stop at the concrete slab because the buyers want their own custom flooring and carpeting put in later. None of these examples changes the 21 QBDs this contractor follows when building a house. The lower level activities will simply cost more or less from one model to the next (the EAC – not the baseline QBD), but the overall weighting of the QBDs for “Building a House” would be the same.

The Scope Has Not Changed

The same approach can be used for engineering drawings, conducting inspection testing, developing a drug for FDA approval, a scientific approach to a health problem, or any other process that follows a standardized approach toward its end product. The key is segregating the EAC aspect from the baseline QBD aspect of the process. Don’t get mired in constantly trying to “change the QBD” when it is not needed.

If the basic steps do not change, the QBD is not changing! More or less granularity in the lower level details beneath each QBD step is handled in EAC space and will be reflected in the cost of the task. You do not need to change the budget. Why? Because the scope has not changed.

Repeat with emphasis. THE SCOPE HAS NOT CHANGED – you are still:

  • Doing an engineering drawing,
  • Resolving a scientific or engineering design problem,
  • Building a house, or even just
  • Baking a cake.

Let’s keep QBDs simple for the CAMs – and keep the change control process uncluttered.

 

Humphreys & Associates can help with your QBD planning and implementation. Contact us at (714) 685-1730 or email us.


Updates to the Compliance Review Series of Blogs



2020 Update

Humphreys & Associates has posted a 2020 update to the series of blogs discussing the DCMA Compliance Review (CR) process. “Compliance Review” is the term used for the formal EVM System review DCMA performs to determine a contractor’s compliance with the EIA-748 Standard for EVMS guidelines. This can also include, as applicable, Surveillance Reviews and Reviews for Cause (RFC).

DCMA used to follow a 16 Step compliance review process. This changed to an 8 Step process with the release of DCMA Instruction 208 (DCMA-INST 208), titled “Earned Value Management System Compliance Reviews Instruction.” That Instruction has since been rescinded and replaced with a set of DCMA Business Practices (BPs) that split out the topics the old Instruction covered in a single document. Whether you are a contractor new to the EVM contracting environment or a seasoned veteran, if the Earned Value Management System (EVMS) compliance and acceptance authority for your contracts is the Defense Contract Management Agency (DCMA), these new Business Practices apply to you.

The four updated blogs include:

  • EVMS Compliance Review Series #1 – Prep for the DCMA Compliance Review Process. This blog presents the set of DCMA Business Practices (BPs) that define the EVMS and review process and specifically discusses Business Practice 6, “Compliance Review Execution.” It also discusses what to expect should you need to complete the DCMA Compliance Review process through the 5 phases and 23 steps outlined in BP6. It is critical that you complete each step in the process successfully the first time through to prevent delays. The best way to make sure you are prepared is to conduct one or more internal EVMS Mock Reviews, the topic of the next blog.
  • EVMS Compliance Review Series #2 – Conducting Internal Mock Reviews (Self Assessments). This blog discusses the importance of conducting a thorough internal review of your EVMS. You may or may not have the in-house expertise to conduct this simulation of a Compliance Review; an independent third party can help you prepare for a DCMA compliance review. The objective is to conduct the EVMS Mock Review to simulate everything DCMA will do. DCMA also expects a thorough scrub of the schedule and cost data – data traceability and integrity are essential.
  • EVMS Compliance Review Series #3 – Using Storyboards to Depict the Entire EVMS. Do you need a refresher on the role of storyboards in a compliance review? Storyboards can make a difference in training your personnel and explaining to the DCMA personnel how your EVMS works. Storyboards can take many forms, and if you don’t have one in place, consider starting with the flow diagrams in your EVM System Description.
  • EVMS Compliance Review Series #4 – Training to Prepare for Interviews. This blog highlights the importance of conducting training for your personnel, particularly the control account managers (CAMs), so they are able to complete successful interviews with DCMA personnel. H&A recommends completing a three step training process to proactively address any issues.

Help Preparing for a Compliance Review

Do you need help preparing for a DCMA compliance or surveillance review? Download the set of DCMA Business Practices and read our updated blogs so you have an idea of what is ahead. Humphreys & Associates can help you conduct a Mock EVMS Review, perform a data quality assessment, create a storyboard, or conduct EVMS interview training and mentoring for your personnel. Call us today at (714) 685-1730 or email us.


Formal Reprogramming – What Happened?



A long time ago, in a galaxy far, far away… an Over Target Baseline (OTB) – by design – was a rare occurrence (and the OTS concept did not even exist as part of Formal Reprogramming). Formal Reprogramming was a very difficult and cumbersome process that most contractors (and the government) really did not want to consider. The government, in its 1969 Joint Implementation Guide, said:

“Reprogramming should not be done more frequently than annually and preferably no more frequently than once during the life of the contract.”

The Office of the Under Secretary of Defense (OUSD) Acquisition, Analytics and Policy (AAP) – formerly PARCA – states in its latest OTB/OTS guide that Formal Reprogramming has now expanded to include an Over Target Schedule (OTS). However, the same guide states in Paragraph 1.3.8:

“Ideally, formal reprogramming should be done no more than one time during the life of a contract. However, there may be instances where another formal reprogramming is warranted… When formal reprogramming is accomplished in accordance with the procedures in this guide, with a realistic cost and schedule estimate established for the remaining work, it should not be necessary to undergo formal reprogramming again.”

Today, though, whenever contractors incur a significant cost or schedule variance, instead of resolving the cause of the variance, the first words seem to be: “Let’s do an OTB or OTS.” The lure of “getting rid of cost and schedule variances” seems too good to pass up. Unfortunately, an OTB/OTS implementation has never been an instantaneous process. With AAP’s 12 step OTB/OTS process, it is obvious that a contractor cannot start today and incorporate the OTB/OTS in the next Integrated Program Management Data and Analysis Report (IPMDAR) dataset. In fact, AAP’s OTB/OTS guide states in paragraph 3.8:

“It may be difficult to ascertain the length of time it will take to implement a new baseline based on the scope of the effort. It is not uncommon for the entire process to take up to six months which would be too long of a period without basic cost reporting.”

The last line of the above cited paragraph was referencing the reporting requirements to the customer when an OTB/OTS is being implemented.

The IPMDAR Implementation and Tailoring Guide (5/21/2020) even recognizes the issues with timeliness of implementing an OTB/OTS:

“2.3.2.5.5 Formal Reprogramming Timeliness. Formal reprogramming can require more than one month to implement. During formal reprogramming, reporting shall continue, at a minimum, to include ACWP, and the latest reported cumulative BCWS and BCWP will be maintained until the OTB/OTS is implemented.”

So why does it take so long to implement an OTB/OTS? Can the contractor just adjust the bottom line variances and move on? Actually, no – nothing is really that simple. This is one of the reasons that implementing an OTB and OTS should not be taken lightly. The AAP OTB/OTS Guide addresses adjustments this way:

“3.5.6.2 Adjusting Variances: A key consideration in implementing an OTB is to determine what to do with the variances against the pre-OTB baseline. There are essentially five basic options. This is a far more detailed effort than these simple descriptions imply, as these adjustments have to be made at the detail level (control account or work package).”
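As an illustration only – not the AAP guide’s procedure – here is a simplified sketch (Python, hypothetical numbers, a single control account) of the “eliminate all variances” option and why the recognized budget ends up above the original:

```python
# Hypothetical single control account, dollars in thousands.
bcws_cum, bcwp_cum, acwp_cum = 900.0, 700.0, 1000.0
bac = 1200.0
etc = 800.0  # realistic estimate for the remaining work

# Pre-OTB variances (what the first, unadjusted report would show):
sv = bcwp_cum - bcws_cum   # -200: behind schedule
cv = bcwp_cum - acwp_cum   # -300: overrun

# One of the five options: eliminate both cumulative variances by setting
# BCWS and BCWP equal to ACWP. The offsets are reported as the IPMDAR
# reprogramming adjustments (the old Column 12).
adj_sv, adj_cv = -sv, -cv          # +200 and +300
bcws_cum = bcwp_cum = acwp_cum     # variances are now zero

# The new budget covers actuals to date plus the realistic ETC. Rolled up
# contract-wide, this is what pushes the TAB above the CBB -- the OTB.
new_bac = acwp_cum + etc
print(f"SV adj: {adj_sv:+.0f}  CV adj: {adj_cv:+.0f}  "
      f"New BAC: {new_bac:,.0f} (was {bac:,.0f})")
```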

When considering the number of control accounts and work packages involved in a major contract, a Formal Reprogramming can become a rather daunting task. The contractor also has to report the effects of the Formal Reprogramming in the IPMDAR Reprogramming Adjustments columns. These adjustment columns appear on both Format 1 and Format 2 of the IPMDAR dataset, which means the contractor must undertake the assessment for both the contract’s WBS and the OBS – for each WBS element and for each OBS element reported. This can be further complicated if the OTB/OTS exercise is flowed down to subcontractors for a given program. The AAP OTB/OTS Guide, paragraph 3.8, also states:

“The customer should be cognizant of the prime contractor’s coordination complexities and issues with its subcontractors. The time to implementation may be extended due to accounting calendar month overlaps, compressed reiterations of contractor ETC updates, internal reviews, subcontractor MR strategy negotiations, senior management approvals, etc., all while statusing the normal existing performance within a reporting cycle.”

In the early days, when implementing an OTB with variance adjustments, the company and the customer agreed on a month-end date for the data adjustments. The contractor then ran two CPRs or IPMRs (now the IPMDAR): (1) the first report as though no OTB had been implemented, to determine the amount of the adjustments to cost variance (CV) and schedule variance (SV) at all reporting levels; and (2) the second report, after the OTB implementation had been completed – no matter how long it took – showing the Column 12 adjustments plus whatever BAC changes were being implemented.

Under the current OTB/OTS Guide, it appears as though this process is done all at once. As stated in the AAP OTB/OTS Guide paragraph 3.8 above, implementation could take up to six months to complete, so lagging the second report until the OTB/OTS implementation is finished seems logical. The last sentence of paragraph 3.8 also stipulates that, regardless of how long implementation takes, the contractor and customer will agree on the interim reporting required, further stating that:

“In all cases, at least ACWP should continue to be reported.”

Perhaps this agreement with the customer should also specify the content of the first IPMDAR following OTB/OTS implementation.

All things taken into account, requesting and getting approval for an OTB or OTS can be a long and difficult process, especially if, at the end of it all, the contractor’s request is denied. Even if it is approved and the contractor implements and works to the newly recognized baseline, immediately doing another one is not a pleasant thought – and remember, it was not intended to be pleasant. Reprogramming was always supposed to be a last resort, used only when reporting against the current baseline was totally unrealistic.

Now, what about those cases where a contract has one or two elements reporting against totally unrealistic budget (or schedule) baselines? The AAP OTB/OTS Guide does cover a partial OTB, but reiterates that this is still an OTB because the Total Allocated Budget (TAB) will exceed the Contract Budget Base (CBB). In the early days, however, the government allowed what were called Internal Operating Budgets (IOBs) for lower level elements (control accounts, specific WBS elements, etc.) that were having problems resulting in an unrealistic baseline for the work remaining. The 1987 Joint Implementation Guide, paragraph 3-3.I(5), described IOBs as follows:

“(5) Internal Operating Budgets. Nothing in the criteria prevents the contractor from establishing an internal operating budget which is less than or more than the total allocated budget. However, there must be controls and procedures to ensure that the performance measurement baseline is not distorted.

(a) Operating budgets are sometimes used to establish internal targets for rework or added in-scope effort which is not significant enough to warrant formal reprogramming. Such budgets do not become a substitute for the [control] account budgets in the performance measurement baseline, but should be visible to all levels of management as appropriate. Control account managers should be able to evaluate performance in terms of both operating budgets and [control] account budgets to meet the requirements of internal management and reporting to the Government.

(b) Establishment and use of operating budgets should be done with caution.  Working against one plan and reporting progress against another is undesirable and the operating budget should not differ significantly from the [control] account budget in the performance measurement baseline. Operating budgets are intended to provide targets for specific elements of work where otherwise the targets would be unrealistic. They are not intended to serve as a completely separate work measurement plan for the contract as a whole.”

Current literature no longer specifically addresses Internal Operating Budgets (IOBs), but with the recent trend of contractors jumping to the OTB/OTS conclusion, IOBs might be the better alternative for individual instances of unrealistic budgets (or schedules) that do not otherwise push the total program to a complete OTB and/or OTS implementation.

These could be good discussion topics for future AAP and DCMA meetings with industry representatives, to determine whether there are ways to streamline the process, or at least to reduce the number of requests to implement Formal Reprogramming. Variances are, after all, performance measurement indicators that should not be routinely and artificially eliminated.


Along the IMS Time-Now Line


Recently one of our consultants was teaching a session on the Integrated Master Schedule (IMS) with a group of project personnel from one of our larger clients. The group was a mixture of beginners with no real experience in schedules and much more experienced practitioners, some with more than 10 years of experience. The mixture made the session somewhat challenging, but it also made for some interesting discussions that might have been missed in a more homogeneous group. One of those discussions concerned the usefulness and importance of the “time-now” line.

When the group was asked about the importance of the time-now line and what information could easily be gained from a look at the line, there was silence. The beginners did not have a clue, but none of the experienced people had a response either. What should have been a short discussion with a single slide as a visual turned into a longer and more informative session on the topic.

The time-now line goes by different names in different software tools, but it refers to the data date, or status date, of the schedule. It is also the first day of the remainder of the schedule. When a scheduler sorts tasks by date, the time-now line runs down the screen and forms a highly useful visual reference.

In the small example below [see Figure 1], you can see the time-now line and visually assess the situation. Time-now is shown by a vertical line at the beginning of September, so all remaining effort has been scheduled after that date. In other words, no work can be forecasted in the past. A walk down the line shows that Task 1 has both started and completed. Task 2 has started but not completed; in fact, the remaining work in Task 2 has been pushed out by the time-now line. The starts of Tasks 5 and 9 are also being pushed out by the time-now line. In most real project schedules, filters and other techniques may be needed to isolate information like this, but in our small example we can simply “eyeball” the time-now line and see valuable information. Task 9 starts the critical path, shown in red.

 

Figure 1: The project start date was August 1; the status date is September 1. Tasks 2, 5, and 9 show gaps from their predecessors to their starts. In the case of Task 2, the gap is to the start of the remaining work. These gaps are caused by time-now being set to September 1, with all remaining work starting after that date. The critical path is being pushed by time-now.

 

A slightly different setup for that same small example [see Figure 2] shows something interesting. The time-now line is still at the beginning of September. But now there is a gap between time-now and work on the critical path. This is an unusual situation and should be investigated for the root cause. It is possible this is an accurate portrayal of the situation, but regardless of the cause, it must be verified and explained.

 

Figure 2: Time-now is still at September 1. There is a gap on the critical path at the start of Task 9, which in this case is caused by a Start-No-Earlier-Than constraint.

 

In yet one more variation [see Figure 3], we see that a broken link results in Task 8 ending up on the time-now line. A task without a predecessor will be rescheduled to start at the earliest possible time (if the task is set to be “As Soon As Possible”). And the earliest possible time is the time-now line; the beginning of September. Just as broken things fall to the floor in real life, “broken things” fall to the time-now line in a schedule. Un-started work can land there. Un-finished work can land there. And un-linked work can land there.

It is also possible to see that Task 2 has had an increase in remaining duration that has driven it onto the critical path. Task 2 is, at this moment, the most important task on the entire project. A slip to Task 2 will push out the end date for the entire project. One question that needs answering: what is holding up Task 2?

If the display were sorted by increasing total float/slack, with the usual cascade by date, the critical path would start at the upper left-hand corner, as it does in this example. The action on the project is almost always on the time-now line, and the most important action, when sorted as described, will be at the upper left-hand corner.

 

Figure 3: Task 2 is now driving the critical path. Task 8 has fallen back to the time-now line. The constraint on Task 9 has been removed.

 

So, a walk down the time-now line can help us see the critical path action, find broken parts of the schedule, and locate unusual circumstances that need our attention. Our recommendation is to look at the time-now line any time data is being changed in the IMS. This will help you catch issues early and keep the schedule cleaner.
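For a large IMS where walking the line manually is tedious, this check can be automated. Below is a minimal sketch (Python, with hypothetical task records – any real scheduling tool can export equivalent fields) that flags unstarted, predecessor-less tasks sitting on the time-now line, the “broken things” that fall there:

```python
from datetime import date

TIME_NOW = date(2020, 9, 1)  # the status (data) date

# Hypothetical minimal task records; a real IMS export would supply these fields.
tasks = [
    {"name": "Task 2", "start": date(2020, 9, 1), "preds": ["Task 1"], "pct": 50},
    {"name": "Task 8", "start": date(2020, 9, 1), "preds": [],         "pct": 0},
    {"name": "Task 9", "start": date(2020, 9, 1), "preds": ["Task 5"], "pct": 0},
]

# An unstarted task with no predecessor that lands exactly on time-now is a
# likely missing or broken link and deserves a look.
for task in tasks:
    if task["pct"] == 0 and not task["preds"] and task["start"] == TIME_NOW:
        print(f'{task["name"]}: unstarted, no predecessor, on time-now -- check links')
```

Run against this sample, only Task 8 is flagged; Task 9 also sits on time-now, but it has a predecessor, so the gap question from Figure 2 applies instead.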


Humphreys and Assoc Reviews 7 Principles of Earned Value Management Tier 2 System Implementation Intent Guide


In this video we review the 7 Principles of Earned Value Management Tier 2 System Implementation Intent Guide published by the Assistant Secretary for Preparedness and Response, or ASPR.

This Guide is primarily used by the Biomedical Advanced Research and Development Authority, or BARDA, on countermeasure R&D contracts that have a total acquisition cost greater than $25 million and a Technical Readiness Level of less than 7.

7 Principles of Earned Value Management Tier 2 System Implementation Intent Guide -- EVM Cross Reference Guide


Agile/Scrum Ceremonies and Metrics Useful in EVMS Variance Analysis and Corrective Action


P. Bolinger, CSM October 2016
Humphreys & Associates

How can Agile/Scrum be used to support EVMS variance analysis and forecasting in a way that provides program managers with integrated cost and schedule information at no extra effort?

The discipline of EVMS and Agile/Scrum practices have several touch-points, covered in two major documents: the NDIA IPMD Agile Guide (March 2016) and the PARCA Agile and EVM PM Desk Guide. Neither document, as yet, drives to the level of specifics when it comes to best practices for using Agile to support EVM variance analysis and forecasting.

Looking at the literature for Agile/Scrum, we know that there are recommended ceremonies that are conducted at various levels of the product structure and at different times during the project life cycle. These ceremonies are supported by many discussions of the metrics that can be collected at each ceremony and their potential use in managing the technical work of development within the Agile/Scrum framework. But where do these ceremonies potentially support EVMS Variance Analysis and Forecasting?

Now suppose we are presented with a control account that has exceeded the EVMS thresholds for cumulative cost and schedule variances. Wouldn’t it be great to have the underlying process data at our fingertips? In this case we might find, for example, that velocity is less than needed to meet the end goal, story cycle time is longer than desired, the pass/fail ratio is unfavorable, too many team members were absent during the last Sprint, the number of disruptions has been excessive, and the work to accomplish the stories is higher than estimated. It is not difficult to surmise that the likely outcome would be a behind-schedule and overrun condition in the EVMS. These data measures provide the fodder for deep-diving to the root cause and impact statements.

Where would we get that information?

Let’s start with the Agile/Scrum ceremonies. The particular Agile/Scrum ceremonies that we find conducted during the project are:

Backlog Refinement. This ceremony can be nearly continuous. It involves redefining the backlog of development work (scope), the prioritization or re-prioritization of that work, and potentially the assignment of responsibilities for the backlog.

Release Planning. This is a recurring ceremony aligned with the release cadence for the project. It involves establishing the capabilities and features of the product and when they will be released.

Sprint. This is a short time-defined effort to accomplish the design, code, and test of some subset of the product. The Sprints are controlled by the self-managing teams.

Daily Scrum or Standup. This is a daily (recommended) team meeting to discuss what has happened, what roadblocks exist, what is planned for the day, and other necessary items.

Sprint Review. The session in which the team’s products are demonstrated to the owner and sell-off is accomplished.

Sprint Retrospective. A meeting of the stakeholders to discuss what went right (or wrong) during the Sprint and to define improvement actions that are needed.

The relationship of these Agile ceremonies with EVMS might look like this:

For each ceremony, consider its Agile purpose and its relationship to EVM variance analysis, root cause analysis, corrective action planning and follow-up, and forecasting.

Backlog Refinement
Agile purpose: Manage, estimate, prioritize, and organize the product backlog in an ongoing routine.
EVM relationship: Estimating impacts the EVMS ETC and EAC, as well as the durations of efforts. Prioritizing in response to issues is corrective action management. Organizing the backlog could be a form of corrective action effort.

Release Planning
Agile purpose: Establish the contents and timing for releases of the product.
EVM relationship: Updates could be part of corrective action planning in response to issues, including the creation of new work packages; changes to planning packages and SLPPs could also result.

Sprint
Agile purpose: A short, time-boxed performance unit; work is done in Sprints.
EVM relationship: Sprints sit below the work package level, so a short-span measurement period is possible.

Daily Scrum or Standup
Agile purpose: Make the short term plan, adjust to issues, discuss problems, clear roadblocks.
EVM relationship: Much of the daily action relates to root cause analysis and corrective action planning, although the time frame is very short and the issues may be too small to individually impact the feature work package or the Epic control account.

Sprint Review
Agile purpose: Demonstrate the product, update released work, make changes to the product.
EVM relationship: Relates to corrective action planning and follow-up. Issues found here would impact risks, ETCs, corrective actions, and performance metrics.

Sprint Retrospective
Agile purpose: Reflect on the project, progress, and people processes – what went well and what went badly – and take actions to improve.
EVM relationship: This should be the richest source of supporting information for EVMS root cause and corrective action within the VAR realm. It is very timely for variance analysis, as it potentially happens many times during a work package (feature) duration.

Feature Retrospective (not one of the basic ceremonies)
Agile purpose: Review the situation regarding any technical scope deficit; reflect on the project, progress, and people processes, and take actions to improve.
EVM relationship: Because this happens only at the end of the feature, it is limited in value for variance analysis timeliness; lessons learned can only be applied to future feature work.

But where is the meat? Where do we get actionable data or at least data we can analyze to decide what management efforts are required?

There are numerous potential metrics that can be collected during these ceremonies. These metrics can form the basic data set that is analyzed to define the root cause of cost and schedule variances. In addition to isolating the cause of an issue, within some of these ceremonies the impact of the issue on the Sprint, Feature, or team may be assessed. Certainly, these metrics can be used as the basis for projecting future workload and performance.

The total number of potential metrics is not known. In this paper, we looked at 17 metrics and considered what the data might mean. The results of this review are summarized below, with each metric’s type of measure in parentheses:

Sprint Burn Up/Burn Down (Backlog): Value of the backlog remaining for the Sprint. A decrease is expected as work is done; an increase means work increased. Burn up can include total completed plus remaining; a great metric.

Feature Burn Up/Burn Down (Backlog): Value of the backlog remaining for the Feature. A decrease is expected as work is done; an increase means work increased or shifted.

Customer support requests received (Disruption): Number of instances. Unplanned interruptions by the customer can lower the output of the team if excessive.

Disruption measures (Disruption): How many and what type (other than customer support requests). Higher disruptions impact team efficiency.

Estimate Accuracy, Sprint or Feature (Estimating): Measure of the budgeted (estimated) value for the Stories in the Sprint or Feature versus the actual (calculated) cost of the Stories when done. Related to team size.

Discovered work (Estimating): Emerging work discovered during the Sprint. Will translate to extra effort in the future if adopted into the backlog.

Exceeds WIP Limits (Management): If WIP limits are set on the team or individuals, then exceeding those limits will impact efficiency and output.

Retrospective Action Log (Management): Count of improvement actions listed in the Retrospective. An increasing count means issues are not being resolved.

Attendance (Management): Comparison of actual hours worked by the team to baseline expectations in the plan.

WIP (Productivity): Measure of the number of stories or points in WIP at any time. WIP growth can indicate bottlenecks and inefficiencies.

Velocity (Productivity): Measure of the amount of work (Stories or Points) accomplished during a time period. Higher velocity means greater throughput per person or team.

Stability measures (Productivity): Comparison of the basic measures in this list from Sprint to Sprint. High variability between Sprints makes the future unpredictable.

% Tests Automated (Productivity): More automated testing should increase efficiency and decrease cycle time.

Defects found by team (Quality): Number of bugs reported during the team effort. Measures quality of work. Higher bug incidence translates to lower output and higher costs.

Defects found by customer (Quality): Number of bugs reported by the user/customer. Measures quality of work delivered. Higher bug incidence translates to lower customer satisfaction and higher rework costs.

Pass/Fail (Re-do) measures (Quality): How does the rate of success in testing compare to the number of attempts? A high success rate should mean greater output and efficiency.

Cycle Time (Schedule): Time from the start of a story to its completion. A short cycle time is desired.

Let us continue the theme of the behind-schedule and overrun control account and look at what information would be available to support developing the estimate to complete. An updated and refined backlog would contain the scope of work remaining for the control account. The updated release plan would contain the timing for the deliveries to be made in the control account. The metrics collected on the effort expended per accomplished story or story point would provide a factor for projecting future real-work hours. Planned corrective actions and improvements would tell us how much improvement to expect in the quality, speed, or cost of the work. The insights available from a full set of metrics are impressive.
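As a rough sketch of that projection (Python, hypothetical numbers – a real implementation would pull these values from the team’s collected metrics), the refined backlog and the measured effort per story point combine directly into an ETC and EAC:

```python
# Hypothetical control account data drawn from the Agile metric set.
remaining_points = 120      # refined backlog scope left in the control account
points_done      = 160      # story points completed to date
hours_spent      = 2400.0   # actual hours charged to date
labor_rate       = 115.0    # $/hour, hypothetical

hours_per_point = hours_spent / points_done    # measured factor: 15 h/point
improvement     = 0.95      # planned corrective actions expected to cut effort 5%

etc_hours   = remaining_points * hours_per_point * improvement
etc_dollars = etc_hours * labor_rate
eac_dollars = hours_spent * labor_rate + etc_dollars   # ACWP + ETC

print(f"ETC: {etc_hours:,.0f} hours (${etc_dollars:,.0f}); EAC: ${eac_dollars:,.0f}")
```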

Does a project have to collect all of these metrics? If not, which ones are the right ones? Questions like these are answered by the project management team analyzing their prior experience and the particular challenges of the project. The team would establish a data collection plan, likely described in their Software Development Plan or Program Management Plan, that explains each metric, its meaning, its frequency, and its purpose. With a clear understanding of the technical data to be collected and analyzed, the Control Account Managers would not find it difficult to define how to use that data in developing variance analyses and generating well-considered forecasts. In fact, these tasks should be much simpler with the data in hand.

