Updates to the Compliance Review Series of Blogs



2020 Update

Humphreys & Associates has posted a 2020 update to the series of blogs discussing the DCMA Compliance Review (CR) process. “Compliance Review” is the term used for the formal EVM System review DCMA performs to determine a contractor’s compliance with the EIA-748 Standard for EVMS guidelines. This can also include, as applicable, Surveillance Reviews and Reviews for Cause (RFC).

DCMA used to follow a 16-step compliance review process. This changed to an 8-step process with the release of DCMA Instruction 208 (DCMA-INST 208), titled “Earned Value Management System Compliance Reviews Instruction.” That Instruction has since been rescinded and replaced with a set of DCMA Business Practices (BP). These Business Practices split out the topics the old DCMA Instruction 208 covered in a single document. Whether you are a contractor new to the EVM contracting environment or a seasoned veteran, if the Earned Value Management System (EVMS) compliance and acceptance authority is the Defense Contract Management Agency (DCMA), these new Business Practices apply to you.

The four updated blogs include:

  • EVMS Compliance Review Series #1 – Prep for the DCMA Compliance Review Process. This blog presents the set of DCMA Business Practices (BP) that define the EVMS and review process and specifically discusses Business Practice 6, “Compliance Review Execution.” It also discusses what you can expect should you need to complete the DCMA Compliance Review process through the 5 phases and 23 steps outlined in BP6. It is critical that you complete each step in the process successfully the first time through to prevent delays. The best way to make sure you are prepared is to conduct one or more internal EVMS Mock Reviews, the topic of the next blog.
  • EVMS Compliance Review Series #2 – Conducting Internal Mock Reviews (Self Assessments). This blog discusses the importance of conducting a thorough internal review of your EVMS. You may or may not have the expertise in-house to conduct this simulation of a Compliance Review; an independent third party can help you prepare for a DCMA compliance review. The objective of the EVMS Mock Review is to simulate everything DCMA will do. DCMA also expects a thorough scrub of the schedule and cost data – data traceability and integrity are essential.
  • EVMS Compliance Review Series #3 – Using Storyboards to Depict the Entire EVMS. Do you need a refresher on the role of storyboards in a compliance review? Storyboards can make a difference in training your personnel and explaining to the DCMA personnel how your EVMS works. Storyboards can take many forms, and if you don’t have one in place, consider starting with the flow diagrams in your EVM System Description.
  • EVMS Compliance Review Series #4 – Training to Prepare for Interviews. This blog highlights the importance of conducting training for your personnel, particularly the control account managers (CAMs), so they are able to complete successful interviews with DCMA personnel. H&A recommends completing a three step training process to proactively address any issues.

Help Preparing for a Compliance Review

Do you need help preparing for a DCMA compliance or surveillance review? Download the set of DCMA Business Practices and read our updated blogs so you have an idea of what is ahead. Humphreys & Associates can help you conduct a Mock EVMS Review, perform a data quality assessment, create a storyboard, or conduct EVMS interview training and mentoring for your personnel. Call us today at (714) 685-1730 or email us.


Formal Reprogramming – What Happened?



A long time ago, in a galaxy far, far away….an Over Target Baseline (OTB) – by design – was a rare occurrence (and the OTS concept did not even exist as part of Formal Reprogramming). Formal Reprogramming was a very difficult and cumbersome process that most contractors (and the government) really did not like to consider. The government, in its 1969 Joint Implementation Guide, said:

“Reprogramming should not be done more frequently than annually and preferably no more frequently than once during the life of the contract.”

The Office of the Under Secretary of Defense (OUSD) Acquisition, Analytics and Policy (AAP) – formerly PARCA – states in its latest OTB/OTS guide that Formal Reprogramming has now expanded to include an Over Target Schedule (OTS). However, that guide states in Paragraph 1.3.8:

“Ideally, formal reprogramming should be done no more than one time during the life of a contract. However, there may be instances where another formal reprogramming is warranted… When formal reprogramming is accomplished in accordance with the procedures in this guide, with a realistic cost and schedule estimate established for the remaining work, it should not be necessary to undergo formal reprogramming again.”

Today, though, whenever contractors incur a significant cost or schedule variance, instead of resolving the cause of the variance, the first words seem to be: “Let’s do an OTB or OTS.” The lure of “getting rid of cost and schedule variances” seems too good to pass up. Unfortunately, an OTB/OTS implementation has never been an instantaneous process. With AAP’s 12-step OTB/OTS process, it is obvious that the contractor will not be able to start today and incorporate the OTB/OTS in the next Integrated Program Management Data and Analysis Report (IPMDAR) dataset. In fact, AAP’s OTB/OTS guide states in paragraph 3.8:

“It may be difficult to ascertain the length of time it will take to implement a new baseline based on the scope of the effort. It is not uncommon for the entire process to take up to six months which would be too long of a period without basic cost reporting.”

The last sentence of the cited paragraph refers to the reporting requirements to the customer while an OTB/OTS is being implemented.

The IPMDAR Implementation and Tailoring Guide (5/21/2020) even recognizes the issues with timeliness of implementing an OTB/OTS:

2.3.2.5.5  Formal Reprogramming Timeliness. Formal reprogramming can require more than one month to implement. During formal reprogramming, reporting shall continue, at a minimum, to include ACWP, and the latest reported cumulative BCWS and BCWP will be maintained until the OTB/OTS is implemented. 

So why does it take so long to implement the OTB/OTS?  Can the contractor just adjust the bottom line variances and move on?  Actually no, nothing is really that simple.  This is one of the reasons that implementing an OTB and OTS should not be taken lightly.   The AAP OTB/OTS Guide addresses adjustments this way:

“3.5.6.2 Adjusting Variances: A key consideration in implementing an OTB is to determine what to do with the variances against the pre-OTB baseline. There are essentially five basic options. This is a far more detailed effort than these simple descriptions imply, as these adjustments have to be made at the detail level (control account or work package).”

When considering the number of control accounts and work packages involved in a major contract, a Formal Reprogramming can become a rather daunting task. The contractor also has to report the effects of the Formal Reprogramming in the IPMDAR Reprogramming Adjustments columns. These adjustment columns appear on both Format 1 and Format 2 of the IPMDAR dataset, which means the contractor must undertake the assessment for both the contract’s WBS and the OBS – for each WBS element and for each OBS element reported. This can be further complicated if the OTB/OTS exercise is flowed down to subcontractors for a given program. The AAP OTB/OTS Guide, paragraph 3.8, also states:

“The customer should be cognizant of the prime contractor’s coordination complexities and issues with its subcontractors. The time to implementation may be extended due to accounting calendar month overlaps, compressed reiterations of contractor ETC updates, internal reviews, subcontractor MR strategy negotiations, senior management approvals, etc., all while statusing the normal existing performance within a reporting cycle.”

In the early days, when implementing an OTB with variance adjustments, the company and the customer agreed on a month-end date to make the data adjustments.  Then the contractor ran two CPRs or IPMRs (now the IPMDAR): (1) the first report as though no OTB had been implemented [to determine the amount of adjustments to cost variance (CV) and schedule variance (SV) at all the reporting levels] and, (2) the second report [after the OTB implementation had been completed – no matter how long it took] showing the Column 12 adjustments plus whatever BAC changes were being implemented.

Under the current OTB/OTS Guide, it appears as though this process is being done all at once. As stated in the AAP OTB/OTS Guide paragraph 3.8 above, this implementation could take up to six months to complete, so lagging the second report until the OTB/OTS implementation is completed seems logical. The last sentence in paragraph 3.8 also stipulates that, regardless of how long implementation takes, the contractor and customer will agree on the interim reporting that will be required, further stating that:

“In all cases, at least ACWP should continue to be reported.”

Perhaps this agreement with the customer should also specify the content of the first IPMDAR following OTB/OTS implementation.

All things taken into account, the process of requesting and getting approval for an OTB or OTS can be a long and difficult process, especially if, at the end of it all, the contractor’s request is denied.  Even if it were approved and the contractor implements and works to the newly recognized baseline, immediately doing another one is not a pleasant thought – and remember, it was not intended to be pleasant. Reprogramming was always supposed to be a last resort action, when reporting to the current baseline was totally unrealistic.

Now, what about those cases where a contract has one or two elements reporting against totally unrealistic budget (or schedule) baselines? The AAP OTB/OTS Guide does cover a partial OTB, but reiterates that this is still an OTB because the Total Allocated Budget (TAB) will exceed the Contract Budget Base (CBB). In the early days, however, the government allowed what were called Internal Operating Budgets (IOBs) for lower level elements (control accounts, specific WBS elements, etc.) that were having problems resulting in an unrealistic baseline for the work remaining. The 1987 Joint Implementation Guide, paragraph 3-3.I(5), described IOBs as follows:

“(5) Internal Operating Budgets. Nothing in the criteria prevents the contractor from establishing an internal operating budget which is less than or more than the total allocated budget. However, there must be controls and procedures to ensure that the performance measurement baseline is not distorted.

(a) Operating budgets are sometimes used to establish internal targets for rework or added in-scope effort which is not significant enough to warrant formal reprogramming. Such budgets do not become a substitute for the [control] account budgets in the performance measurement baseline, but should be visible to all levels of management as appropriate. Control account managers should be able to evaluate performance in terms of both operating budgets and [control] account budgets to meet the requirements of internal management and reporting to the Government.

(b) Establishment and use of operating budgets should be done with caution.  Working against one plan and reporting progress against another is undesirable and the operating budget should not differ significantly from the [control] account budget in the performance measurement baseline. Operating budgets are intended to provide targets for specific elements of work where otherwise the targets would be unrealistic. They are not intended to serve as a completely separate work measurement plan for the contract as a whole.”

Current literature no longer specifically addresses Internal Operating Budgets (IOBs), but with the recent trend of contractors jumping to the OTB/OTS conclusion, IOBs might be a better alternative when individual instances of unrealistic budgets (or schedules) do not otherwise push the total program to the need for a complete OTB and/or OTS implementation.

These could be good discussion topics for future AAP and DCMA meetings with industry representatives, to determine if there are ways to streamline the process, or at least reduce the amount of requests to implement Formal Reprogramming.  Variances are, after all, performance measurement indicators that should not just be routinely and artificially eliminated.


Along the IMS Time-Now Line


Recently one of our consultants was instructing a session on the Integrated Master Schedule (IMS) with a group of project personnel from one of our larger clients. The group was a mixture of beginners with no real experience in schedules and much more experienced practitioners, some with more than 10 years of experience. The mixture made the session somewhat challenging, but it also made for some interesting discussions that might have been missed in a more homogeneous group. One of those topics was the usefulness, and importance, of the “time-now” line.

When the group was asked about the importance of the time-now line and what information could be easily gained from a look at the line, there was silence. The beginners did not have a clue, but none of the experienced people had a response either. What should have been a short discussion with just one “slide” as a visual turned into a longer and more informative session on the topic.

The time-now line has different names in different software tools but it refers to the data date, or status date, of the schedule. That also would be the first day of the remainder of the schedule. When a scheduler sorts tasks by date, the time-now line runs down the screen and forms a highly useful visible reference.

In the small example below [see Figure 1], you can see the time-now line and visually assess the situation. Time-now is shown by a vertical line at the beginning of September, so all remaining effort has been scheduled after that date. In other words, no work can be forecast in the past. A walk down the line shows Task 1 has both started and completed. Task 2 started but has not completed; in fact, the remaining work in Task 2 has been pushed out by the time-now line. The starts of Tasks 5 and 9 are also being pushed out by the time-now line. In most real project schedules, filters and other techniques may be needed to isolate information like this; but in our small example, we can simply “eyeball” the time-now line and see valuable information. Task 9 starts the critical path, shown in red.

 

Figure 1: The project start date was August 1; the status date is September 1. Tasks 2, 5, and 9 show gaps from their predecessors to their starts. In the case of Task 2, the gap is to the start of the remaining work. These gaps are caused by time-now being set to September 1, with all remaining work starting after that date. The critical path is being pushed by time-now.

 

A slightly different setup for that same small example [see Figure 2] shows something interesting. The time-now line is still at the beginning of September. But now there is a gap between time-now and work on the critical path. This is an unusual situation and should be investigated for the root cause. It is possible this is an accurate portrayal of the situation, but regardless of the cause, it must be verified and explained.

 

Figure 2: Time-now is still at September 1. There is a gap on the critical path at the start of Task 9 which, in this case, is caused by a Start-No-Earlier-Than constraint.

 

In yet one more variation [see Figure 3], we see that a broken link results in Task 8 ending up on the time-now line. A task without a predecessor will be rescheduled to start at the earliest possible time (if the task is set to be “As Soon As Possible”). And the earliest possible time is the time-now line; the beginning of September. Just as broken things fall to the floor in real life, “broken things” fall to the time-now line in a schedule. Un-started work can land there. Un-finished work can land there. And un-linked work can land there.

It is further possible to see that Task 2 has had an increase in the remaining duration that has driven it onto the critical path. Task 2 at this moment is the most important task on the entire project. A slip to Task 2 will drive out the end date for the entire project. One question that needs answering is what is holding up Task 2?

If the display had been sorted by increasing total float/slack, and then by the usual cascade by date, the critical path would start at the upper left-hand corner, like the critical path in this example. The action on the project is almost always on the time-now line, and the most important action, when sorted as described, will be at the upper left-hand corner.

 

Figure 3: Task 2 is now driving the critical path. Task 8 has fallen back to the time-now line. The constraint on Task 9 has been removed.

 

So, a walk down the time-now line can help us see the critical path action, find broken parts of the schedule, and locate unusual circumstances that need our attention. Our recommendation is to look at the time-now line any time there is data being changed in the IMS. This will help you catch issues early and keep the schedule cleaner.
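The checks described above can also be automated as a quick schedule health pass whenever the IMS is statused. The sketch below is a minimal illustration in Python, not tied to any particular scheduling tool; the `Task` structure, its field names, and the September 1 status date are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Task:
    """Hypothetical, tool-agnostic stand-in for an IMS task record."""
    name: str
    start: date
    finish: date
    percent_complete: int
    predecessors: list = field(default_factory=list)

def walk_time_now_line(tasks, time_now):
    """Flag the conditions a walk down the time-now line should catch."""
    findings = []
    for t in tasks:
        # Remaining work must never be scheduled in the past.
        if t.percent_complete < 100 and t.finish < time_now:
            findings.append((t.name, "remaining work scheduled before time-now"))
        # An unstarted task with no predecessor sitting on time-now
        # may be a broken link that "fell" to the line.
        if not t.predecessors and t.percent_complete == 0 and t.start == time_now:
            findings.append((t.name, "no predecessor; fell to the time-now line"))
    return findings
```

A real check would also look for gaps between time-now and the start of the critical path, as in Figure 2, but that requires total float data from the scheduling tool.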


Humphreys and Assoc Reviews 7 Principles of Earned Value Management Tier 2 System Implementation Intent Guide


In this video we review the 7 Principles of Earned Value Management Tier 2 System Implementation Intent Guide published by the Assistant Secretary for Preparedness and Response, or ASPR.

This Guide is primarily used by the Biomedical Advanced Research and Development Authority, or BARDA, on countermeasure R&D contracts that have a total acquisition cost greater than $25 million and a Technical Readiness Level of less than 7.

7 Principles of Earned Value Management Tier 2 System Implementation Intent Guide -- EVM Cross Reference Guide


Agile/Scrum Ceremonies and Metrics Useful in EVMS Variance Analysis and Corrective Action


P. Bolinger, CSM October 2016
Humphreys & Associates

How can Agile/Scrum be used to support EVMS variance analysis and forecasting in a way that provides program managers with integrated cost and schedule information at no extra effort?

The discipline of EVMS and the Agile/Scrum practices have several touch-points that are covered in two major documents: the NDIA IPMD Agile Guide (March 2016) and the PARCA Agile and EVM PM Desk Guide. Neither of these documents, as yet, drives to the level of specifics when it comes to best practices for using Agile to support EVM Variance Analysis and EVM Forecasting.

Looking at the literature for Agile/Scrum, we know that there are recommended ceremonies that are conducted at various levels of the product structure and at different times during the project life cycle. These ceremonies are supported by many discussions of the metrics that can be collected at each ceremony and their potential use in managing the technical work of development within the Agile/Scrum framework. But where do these ceremonies potentially support EVMS Variance Analysis and Forecasting?

Now suppose we are presented with a control account that has exceeded the EVMS thresholds for cumulative cost and schedule variances. Wouldn’t it be great to have at our fingertips the underlying data from the process? In this case we might find, for example, that the Velocity is less than that needed to meet the end goal, the story cycle time is longer than desired, the pass/fail ratio is not favorable, too many team members have been absent in the last Sprint, the number of disruptions has been excessive, and the work to accomplish the stories is higher than estimated. It is not difficult to surmise the outcome would likely be a behind schedule and overrun condition in the EVMS. These data measures would provide the fodder for deep diving to the root cause and impact statements.
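For reference, the cumulative variances that trip those thresholds come from the standard EVM relationships: CV = BCWP − ACWP and SV = BCWP − BCWS. A minimal sketch of the threshold check follows; the 10% percentage thresholds are assumptions for illustration, as actual thresholds are contract- and company-specific.

```python
def variance_analysis(bcws, bcwp, acwp, cv_threshold=0.10, sv_threshold=0.10):
    """Compute cumulative EVM variances for a control account and flag
    those that breach the (assumed) percentage thresholds.
    Inputs are cumulative values; all three must be nonzero."""
    cv = bcwp - acwp          # cost variance (negative = overrun)
    sv = bcwp - bcws          # schedule variance (negative = behind)
    cpi = bcwp / acwp         # cost performance index
    spi = bcwp / bcws         # schedule performance index
    breaches = []
    if abs(cv) / bcwp > cv_threshold:
        breaches.append("cost")
    if abs(sv) / bcws > sv_threshold:
        breaches.append("schedule")
    return {"CV": cv, "SV": sv, "CPI": round(cpi, 2), "SPI": round(spi, 2),
            "breaches": breaches}
```

For example, a control account with cumulative BCWS of 1,000, BCWP of 850, and ACWP of 1,100 would report CV = −250 and SV = −150, breaching both thresholds; that is the trigger that sends the analyst to the Agile metrics described above for root cause.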

Where would we get that information?

Let’s start with the Agile/Scrum ceremonies. The particular Agile/Scrum ceremonies that we find conducted during the project are:

Backlog Refinement. This ceremony can be nearly continuous. It involves redefining the backlog of development work (scope), the prioritization or re-prioritization of that work, and potentially the assignment of responsibilities for the backlog.

Release Planning. This is a recurring ceremony aligned with the release cadence for the project. It involves establishing the capabilities and features of the product and when they will be released.

Sprint. This is a short time-defined effort to accomplish the design, code, and test of some subset of the product. The Sprints are controlled by the self-managing teams.

Daily Scrum or Standup. This is a daily (recommended) team meeting to discuss what has happened, what roadblocks exist, what is planned for the day, and other necessary items.

Sprint Review. The session in which the team products are demonstrated to the owner and sell-off is accomplished.

Sprint Retrospective. A meeting of the stakeholders to discuss what went right (or wrong) during the Sprint and to define improvement actions that are needed.

The relationship of these Agile ceremonies with EVMS might look like this:

  • Backlog Refinement. Agile purpose: manage, estimate, prioritize, and organize the product backlog as an ongoing routine. EVM relationship: estimating impacts the EVMS ETC and EAC as well as the durations of efforts; prioritizing in response to issues is corrective action management; organizing the backlog could be a form of corrective action effort.
  • Release Planning. Agile purpose: establish the contents and timing for releases of product. EVM relationship: updates could be part of corrective action planning in response to issues, as could the creation of new work packages and changes to planning packages and summary level planning packages (SLPPs).
  • Sprint. Agile purpose: a short time-boxed performance unit; work is done in Sprints. EVM relationship: sits below the work package; a short span measurement period is possible.
  • Daily Scrum or Stand-up. Agile purpose: make the short term plan, adjust to issues, discuss problems, clear roadblocks. EVM relationship: much of the daily action would relate to root cause analysis and corrective action planning, although the time frame is very short and the issues may be too small to individually impact the feature work package or the Epic control account.
  • Sprint Review. Agile purpose: demonstrate the product, update released work, make changes to the product. EVM relationship: relates to corrective action planning and follow-up; issues found here would impact risks, ETCs, corrective actions, and performance metrics.
  • Sprint Retrospective. Agile purpose: reflect on the project, progress, and people processes – what was good and what was bad – and take actions to improve. EVM relationship: this should be the richest source of supporting information for EVMS root cause and corrective action within the VAR realm, and it is very timely for variance analysis because it can happen many times during a work package (feature) duration.
  • Feature Retrospective (not one of the basic ceremonies). Agile purpose: review the situation regarding technical scope deficit and reflect on the effort as above. EVM relationship: because this only happens at the end of the feature, it is limited in value for variance analysis timeliness; any lessons learned can only be applied to future feature work.

But where is the meat? Where do we get actionable data or at least data we can analyze to decide what management efforts are required?

There are numerous potential metrics that can be collected during these ceremonies. These metrics can form the basic data set that could be analyzed to define the root cause of cost and schedule variances. In addition to isolating the cause of issues, within some of these ceremonies the impact of the issue on the Sprint, Feature, or team can be assessed. Certainly, these metrics can be used as the basis for projecting future workload and performance.

The total number of potential metrics is not known. In this paper, we looked at 17 metrics and considered what the data might mean. The results of this review are contained in this matrix:

  • Sprint Burn Up/Burn Down (Backlog). Value of the backlog remaining for the Sprint. A decrease is expected as work is done; an increase means work increased. Burn up can include total completed plus remaining; a great metric.
  • Feature Burn Up/Burn Down (Backlog). Value of the backlog remaining for the Feature. A decrease is expected as work is done; an increase means work increased or shifted.
  • Customer support requests received (Disruption). Number of instances. Unplanned interruptions by the customer can lower the output of the team if excessive.
  • Disruption measures (Disruption). How many and what type (other than customer support requests). Higher disruptions impact team efficiency.
  • Estimate Accuracy, Sprint or Feature (Estimating). Measure of the budgeted (estimated) value for the Stories in the Sprint or Feature versus the actual cost (calculated cost) of the Stories when done. Related to team size.
  • Discovered work (Estimating). Emerging work discovered during the Sprint. Will translate to extra effort in the future if adopted into the backlog.
  • Exceeds WIP Limits (Management). If WIP limits are set on the team or individuals, then exceeding the set limits will impact efficiency and output.
  • Retrospective Action Log (Management). Count of improvement actions listed in the Retrospective. An increasing count means issues are not being resolved.
  • Attendance (Management). Comparison of actual hours worked by the team to the baseline expectations in the plan.
  • WIP (Productivity). Measure of the number of stories or points in WIP at any time. WIP growth can indicate bottlenecks and inefficiencies.
  • Velocity (Productivity). Measure of the amount of work (Stories or Points) accomplished during a time period. Higher velocity means greater throughput per person/team.
  • Stability measures (Productivity). Comparison of the basic measures from this list, Sprint by Sprint. High variability between Sprints means the future is unpredictable.
  • % Tests Automated (Productivity). More automated testing should increase efficiency and decrease cycle time.
  • Defects found by team (Quality). Number of bugs reported during the team effort. Measures quality of work. Higher bug incidence translates to lower output and higher costs.
  • Defects found by customer (Quality). Number of bugs reported by the user/customer. Measures quality of work delivered. Higher bug incidence translates to lower customer satisfaction and higher rework costs.
  • Pass/Fail (re-do) measures (Quality). How does the rate of success in testing compare to the number of attempts? A high success rate should mean greater output and efficiency.
  • Cycle Time (Schedule). Time from the start of a story to its completion. Short cycle time is desired.

Let us continue the theme of the behind schedule and overrun control account and look at what information would be available to support developing the estimate-to-complete. An updated and refined backlog would have the scope of work remaining for the control account. The updated release plan would have the timing for the deliveries to be made in the control account. The metrics collected about the effort expended per accomplished story or story point would provide a factor for projecting future real-work hours. Planned corrective actions and improvements would tell us what improvement to expect in the quality, speed, or cost of the work. The insights available from a full set of metrics are impressive.
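As a sketch of that last point, a velocity-based ETC can be assembled from exactly the metrics listed above. The function and its inputs below are hypothetical illustrations under simple averaging assumptions, not a prescribed method; a CAM would adjust the factors for the planned corrective actions and improvements.

```python
def agile_etc(remaining_points, velocity_history, hours_per_point_history,
              labor_rate):
    """Project an estimate-to-complete (ETC) for a control account from
    Agile metrics: backlog remaining (story points), recent Sprint
    velocity, and actual effort expended per accomplished story point."""
    avg_velocity = sum(velocity_history) / len(velocity_history)  # points/Sprint
    avg_hours_per_point = (sum(hours_per_point_history)
                           / len(hours_per_point_history))
    sprints_remaining = remaining_points / avg_velocity
    etc_hours = remaining_points * avg_hours_per_point
    return {"sprints_remaining": round(sprints_remaining, 1),
            "etc_hours": round(etc_hours, 1),
            "etc_dollars": round(etc_hours * labor_rate, 2)}
```

With 120 points remaining, recent velocities of 28, 32, and 30 points per Sprint, 10 to 12 actual hours per point, and an assumed $100/hour rate, the projection is four Sprints and 1,320 hours of remaining work; adding that ETC to cumulative ACWP yields the EAC.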

Does a project have to collect all of these metrics? If not all, then which ones are the right ones? Questions like that would be answered by the project management team analyzing their prior experience and the particular challenges of the project. The team would establish a data collection plan, likely described in their Software Development Plan or Program Management Plan, that would explain each metric, its meaning, and its collection frequency, along with its purpose. With a clear understanding of the technical data to be collected and analyzed, the Control Account Managers would not have a difficult task defining how to use that data in developing Variance Analyses and generating well-considered Forecasts. In fact, these tasks should be much simpler with the data in hand.


Who Owns EVM? Programs or Finance?

I have read several Earned Value Management (EVM) reports, papers, and articles that debate what company organization should “own” EVM and the company’s Earned Value Management System (EVMS). These debates most often mention the programs’ organization and the finance department as common EVM “owners.” The majority opinion seems to be that because EVM is a program management best practice it belongs in programs. A minority opinion is that because EVM is denominated in dollars, schedule included, and because EVM reports are financial in nature, EVM belongs in the finance department. Before we dive into this debate, a summary of the responsibilities of a Chief Financial Officer (CFO) and of the head of programs is useful. In our company A and company B examples to follow, both the CFO and the head of programs report to the company president.

WHAT ARE THE DUTIES OF A CHIEF FINANCIAL OFFICER (CFO)?

A CFO has three duties, each measured in the time domain. The first is that of the company’s controller: to report past company financial performance accurately and honestly. The CFO is also responsible for the current financial health of the company – to ensure that today’s decisions create rather than destroy value. Lastly, the CFO must protect the company’s future financial health and ensure that all expenditures of capital maximize it. Every business decision, especially those of the CFO, is either good (accretive – it increases shareowner value) or bad (dilutive – it destroys shareowner value).

WHAT ARE THE DUTIES OF THE HEAD OF PROGRAMS?

The head of programs is typically a Vice President or higher and all program and project managers report to him or her. The head of all programs has profit and loss responsibility for his or her portfolio of programs and projects. In addition, each program is responsible for achieving the technical, cost and schedule requirements of the contracts it is executing on behalf of its customers.

A TALE OF TWO COMPANIES:

I will now describe first-hand experiences with two companies and how each company decided who should “own” EVM.

Company “A” had EVM assigned to the finance department. All EVM employees were overhead, even those assigned to a program. A new CFO arrived and quickly decided to reduce indirect costs, declaring that he was “coin-operated.” The new CFO terminated the employment of all the EVM employees. Each program attempted to create an EVM branch office but failed. A Level 3 Corrective Action Request (CAR) enumerating EVM deficiencies was issued, and the CFO was fired. A second “new” CFO arrived and agreed to transfer EVM to the head of programs. The head of programs was instrumental in changing the disclosure statement, making EVM personnel assigned to a program a direct charge to that program/contract. The head of programs created a Program Planning and Control (PP&C) organization and demanded that all PMs and their program members quickly learn, use, and master EVM. A program control room was built with five screens. Daily 2:00 pm EVM data-driven reviews were held on short notice. These daily reviews became known as “CAM Bakes.” The EVM and program management culture changed quickly and dramatically at Company “A.”

Company “B” had EVM assigned to the CFO who was as “coin-operated” and unaware of EVM as the first “new” CFO of Company “A.” The culture of company “B” was very hostile to EVM, so it probably did not matter who “owned” EVM. The company failed 16 of 32 guidelines and was decertified. Significant withholdings were imposed and the company’s reputation was damaged. Several top managers hostile to EVM “sought employment elsewhere.” A new CFO arrived who was also “coin-operated” but expert in EVM. The new CFO formed a partnership with the head of programs. The new CFO was as much a PM as he was a CFO. The new CFO told his direct reports assigned to each program to “make the program managers successful.” And they did exactly that.

The new CFO understood that the company was the sum of all its contracts and that every dollar flowed from its customers. The EVM and program management culture at Company “B” changed rapidly.

Who Should “Own” EVM? Programs or Finance?

Returning to our original question of who should “own” EVM, the majority theory is that the Programs’ organization should “own” EVM. All else equal, I tend to agree with this theory.

However, while theory is suggestive, experience is conclusive. My experience at Company “A” proved that a strong programs’ leader could change the EVM and program management culture of a company rapidly. My experience at Company “B” proved that a CFO could “own” EVM and be successful at changing the company’s EVM and program management culture. The CFO and the head of programs must form an EVM partnership no matter who “owns” EVM.

Who “owns” EVM at your company?

Robert “Too Tall” Kenney
H&A Associate

