Ensuring CPI to TCPI Comparisons are Valid at the Total Contract Level

Have you been in a meeting where presenters show differing To-Complete Performance Index (TCPI) values at the total contract level for the same contract? In these situations, the presenters have made different assumptions about the inclusion of Undistributed Budget (UB) and Management Reserve (MR) in the TCPI calculations. So let’s use some sample values and show the different ways the TCPI can be calculated at the total contract level.

As a reminder, this is the formula for TCPI:

TCPI = (BAC – BCWP) / (EAC – ACWP)

Consider the following extract from the lower right portion of Format 1 of the Integrated Program Management Report (IPMR) (Contract Performance Report (CPR)).

[Table: extract from IPMR Format 1 showing BCWP, ACWP, BAC, and EAC for the rows “Distributed Budgets by WBS”, Undistributed Budget, “Subtotal” (PMB), Management Reserve, and “Total”]

When comparing the TCPI to the CPI at the total contract level, the most realistic approach is to calculate the TCPI at the level of the Distributed Budgets. Stated differently, the TCPI should be calculated without Undistributed Budget and Management Reserve. The Cost Performance Index (CPI), BCWP divided by ACWP, represents the cost efficiency for the work performed to date. Notice in the above table that the BCWP and ACWP values in the rows for “Distributed Budgets by WBS”, “Subtotal”, and “Total” are the same; therefore, the CPI calculation will be the same at any of these data levels. The TCPI represents the cost efficiency necessary to achieve the reported EAC. The “Distributed Budgets by WBS” contain approved budgets as well as performance data against those budgets, so the CPI and TCPI compared at this level of data provide a valid comparison of past performance to projected performance. The CPI for the above data is 0.73 while the TCPI is 0.92.
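The calculation above can be sketched in a few lines of code. The BCWP, ACWP, BAC, and EAC figures below are invented for illustration; they are chosen only so the resulting indices match the 0.73 and 0.92 quoted above, not taken from any actual report.

```python
# Hypothetical Distributed Budget totals (invented to reproduce the
# article's quoted indices of CPI = 0.73 and TCPI = 0.92).
bcwp = 730.0   # Budgeted Cost for Work Performed
acwp = 1000.0  # Actual Cost of Work Performed
bac = 2000.0   # Budget at Completion (distributed budgets only)
eac = 2380.0   # Estimate at Completion (distributed budgets only)

cpi = bcwp / acwp                   # cost efficiency achieved to date
tcpi = (bac - bcwp) / (eac - acwp)  # efficiency needed to achieve the EAC

print(f"CPI  = {cpi:.2f}")   # 0.73
print(f"TCPI = {tcpi:.2f}")  # 0.92
```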

Since the difference between the CPI and TCPI is greater than 0.10, the control account managers (CAMs) and the analysts should research the reasons that the future performance indicates improvement and provide EAC rationale.

Calculating the TCPI at the Performance Measurement Baseline level (i.e., including Undistributed Budget in the BAC and EAC) yields a different TCPI than at the Distributed Budget level. Mathematically, the TCPI will be the same for the Distributed Budgets and the PMB only if the value of the Estimate to Complete (EAC – ACWP) equals the budgeted value of the remaining work (BAC – BCWP). In that case, the TCPI will be 1.0. If the contract has an unfavorable cost variance and projects an overrun on future work, the TCPI at the PMB level (includes UB) will be higher than the TCPI calculated at the Distributed Budget level (does not include UB).

For the data in the above table, the Distributed Budget TCPI = 0.92 but increases to 0.94 if Undistributed Budget is included in the calculation. The Undistributed Budget, with the same value added to both BAC and EAC, represents a portion of the Estimate to Complete (ETC) that will be performed at an efficiency of 1.0. In an overrun situation at the distributed budget level, the disparity between the CPI and TCPI increases when Undistributed Budget is included in the TCPI because more work must be accomplished at a better efficiency to achieve the EAC. In the above data, the disparity between CPI and TCPI increased from 0.19 to 0.21.

Calculating the TCPI at the total contract level with Undistributed Budget and Management Reserve in both the BAC and EAC yields TCPI values very close to the TCPI calculated at the PMB level. The UB and MR values included in the BAC and EAC increase the proportion of the remaining work that is forecast to be completed at an efficiency of 1.0 and push the TCPI toward 1.0. The larger the values of UB and MR, the more the TCPI will diverge from the TCPI calculated at the Distributed Budgets level. Using this approach for the sample data above, the CPI is 0.73 and the TCPI is 0.94.

Calculating the TCPI at the total contract level, but not including Management Reserve in the EAC, creates a significant disparity between the CPI and TCPI. This situation represents the classic “apples to oranges” comparison: the work remaining in the numerator includes MR, but the estimated cost remaining in the denominator does not. With a larger numerator over the same denominator, the TCPI is higher than in any of the other approaches discussed above. Using this approach for the sample data above, the CPI is 0.73 and the TCPI is 1.06. While situations arise where exclusion of MR from the EAC makes sense, it is still important to review the project manager’s rationale with respect to MR application. Most EACs assume that MR will be depleted during contract performance; consequently, MR should be added to the EAC at the PMB level.
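As a rough sketch, the different inclusion conventions can be compared side by side. All the figures below (BCWP, ACWP, distributed BAC/EAC, UB, MR) are invented; with these particular values the UB-plus-MR case rounds to 0.95 rather than the article’s 0.94, since the exact result depends on the underlying report data.

```python
def tcpi(bac, bcwp, eac, acwp):
    """To-Complete Performance Index: work remaining / estimated cost remaining."""
    return (bac - bcwp) / (eac - acwp)

# Hypothetical figures, invented for illustration only.
bcwp, acwp = 730.0, 1000.0
bac_dist, eac_dist = 2000.0, 2380.0   # distributed budgets
ub, mr = 460.0, 220.0                 # Undistributed Budget, Management Reserve

print(tcpi(bac_dist, bcwp, eac_dist, acwp))                      # distributed level
print(tcpi(bac_dist + ub, bcwp, eac_dist + ub, acwp))            # PMB (includes UB)
print(tcpi(bac_dist + ub + mr, bcwp, eac_dist + ub + mr, acwp))  # total, MR in BAC and EAC
print(tcpi(bac_dist + ub + mr, bcwp, eac_dist + ub, acwp))       # total, MR in BAC only
```

Note how the last case, with MR in the numerator but not the denominator, is the only one that jumps above 1.0.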

In summary, be sure you understand what is included in the TCPI calculation before you make comparisons to the CPI at the total contract level. The following table summarizes the CPI and TCPI for the sample data in this article and highlights the differences in the TCPI when calculated at the various data summary levels.

Data Level for TCPI Calculation                        CPI     TCPI
Distributed Budgets (no UB or MR)                      0.73    0.92
PMB (UB in BAC and EAC)                                0.73    0.94
Total Contract (UB and MR in BAC and EAC)              0.73    0.94
Total Contract (UB and MR in BAC; MR not in EAC)       0.73    1.06

To ask about this topic or if you have questions, feel free to contact Humphreys & Associates.


Keeping Track of Budgets, Changes, and IPMR Data

For projects, the moment the baseline is established it is subject to change, and a disciplined change control process must be in effect. The source of project changes can be either external or internal. External changes frequently affect all aspects of a contractor’s internal planning and control system and are generally for effort that is out of scope to the contract. Contract changes impact the Contract Budget Base (CBB) and are distributed to the Performance Measurement Baseline (PMB), which includes the distributed budgets containing control accounts and Summary Level Planning Packages, and to the Undistributed Budget.

These changes may also impact the Management Reserve (MR) budget if the decision were made to withhold reserve from the budget for the change.  The Work Breakdown Structure (WBS) serves as the framework for integrating changes within the project’s structure.  Internal changes operate much the same, but they do not change the CBB. The most common reasons for internal changes are the allocation of MR for contractually in-scope effort, replanning of future work, and converting planning packages to work packages.


The Earned Value Management Systems Guidelines require that all changes, regardless of the source, be incorporated in a timely and disciplined manner. Consequently, the project needs to have a formal change process and procedures in place. Following these processes and procedures will also help minimize disruptions in the current effort while changes are being incorporated.  An undisciplined change control process has the potential to create timing or quality issues that will lessen the baseline’s effectiveness as a management tool.

Baseline changes must also be tracked to ensure baseline integrity. The most effective way to do this is to establish baseline logs to track all approved changes. These can include the Contract Budget Base (CBB) Log, as shown below, the Management Reserve (MR) Log, and the Undistributed Budget (UB) Log.  In addition, a log may be established to track all approved, unapproved and unresolved change requests.

[Figure: sample Contract Budget Base (CBB) Log]

Once established, these logs must be maintained and reconciled to the data reported in the Integrated Program Management Report (or Contract Performance Report) that is delivered to the customer on a monthly basis. This reconciliation helps validate that the PMB accurately represents the project’s technical plans and requirements.
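A minimal sketch of how one such log could be kept and reconciled in code. The entries, field names, and the reported MR value are all hypothetical; a real log would tie back to specific baseline change requests.

```python
# Hypothetical Management Reserve (MR) log: each entry records an approved
# change to MR (positive = addition to MR, negative = allocation out of MR).
mr_log = [
    {"date": "2024-01-15", "description": "Initial MR",     "amount": 500.0},
    {"date": "2024-03-02", "description": "MR to CA 1.2.3", "amount": -75.0},
    {"date": "2024-05-20", "description": "MR to CA 1.4.1", "amount": -40.0},
]

mr_balance = sum(entry["amount"] for entry in mr_log)

# Reconcile: the running log balance must match the MR value reported
# in the monthly IPMR/CPR Format 1.
reported_mr = 385.0  # hypothetical Format 1 value
assert mr_balance == reported_mr, "MR log does not reconcile to the IPMR"
print(f"MR balance: {mr_balance}")  # 385.0
```

The same pattern applies to the CBB and UB logs; the reconciliation check is what catches changes recorded in one place but not the other.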

To find out more about this topic or if you have questions, feel free to contact Humphreys & Associates.


Is it OTB/OTS Time or Just Address the Variances?


No project manager and project team ever wants to go through an Over Target Baseline (OTB) or Over Target Schedule (OTS). The idea of formally reprogramming the remaining work and adjusting variances at the lowest level can be daunting and extremely time consuming. As painful as an OTB/OTS is, a project manager must first determine whether the reprogramming is necessary. Several factors should be considered before an OTB/OTS is declared and implemented.

NOTE: This article treats formal reprogramming as including both an OTB and an OTS. If the Contract Performance Report is the CDRL requirement, an OTS is not part of a formal reprogramming; it is a separate action.

Performance Data

Projected successful execution of the remaining effort is the leading indicator of whether an OTB/OTS is needed. Significant projected cost overruns or the inability to meet scheduled milestones play a major role in determining the need for an OTB/OTS as these indicators can provide a clear determination that the baseline is no longer achievable.

Leading indicators also include significant differences between the Estimate to Complete (ETC) and the Budgeted Cost of Work Remaining (BCWR). This is also demonstrated by major differences between the Cost Performance Index (CPI) and the To Complete Performance Index (TCPI).  These differences are evidence that the projected cost performance required to meet the Estimate at Completion is not achievable, and may also indicate that the estimated completion costs do not include all risk considerations. Excessive use of Management Reserve (MR) early in the project could also be an indicator.

Schedule indicators include increased concurrency among remaining tasks, large amounts of negative float, significant slips in the critical path, questionable activity durations, and inadequate schedule margin for the remaining work scope. Any of these conditions may indicate that an OTB/OTS is necessary.

Quantified Factors

Various significant indicators in both cost and schedule can provide a clear picture that an OTB/OTS is warranted. The term “significant”, however, is subjective and varies from project to project. For further evidence, other more quantified indicators can supplement what has already been discussed.

Industry guidelines (such as the Over Target Baseline and Over Target Schedule Guide by the Performance Assessments and Root Cause Analyses (PARCA) Office) suggest the contract should be more than 20% complete before considering an OTB/OTS.  However, the same guidance also recommends against an OTB/OTS if the forecasted remaining duration is less than 18 months. Other indicators include comparing the Estimate to Complete with the remaining work to determine projected growth by using the following equation:

Projected Future Cost Overrun (%) = [(EAC_PMB – ACWP) / (BAC_PMB – BCWP) – 1] × 100

If the Projected Future Cost Overrun percentage were greater than 15%, then an OTB/OTS might be considered. Certainly the dollar magnitude must be considered as well.
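The equation above is simply the ratio of the Estimate to Complete to the Budgeted Cost of Work Remaining, expressed as a growth percentage. A short sketch, with report values invented for illustration:

```python
def projected_future_cost_overrun(eac_pmb, acwp, bac_pmb, bcwp):
    """Projected growth on remaining work: (ETC / BCWR - 1) * 100."""
    etc = eac_pmb - acwp    # Estimate to Complete
    bcwr = bac_pmb - bcwp   # Budgeted Cost of Work Remaining
    return (etc / bcwr - 1) * 100

# Hypothetical PMB-level report values.
overrun_pct = projected_future_cost_overrun(
    eac_pmb=1300.0, acwp=600.0, bac_pmb=1000.0, bcwp=500.0)
print(f"Projected future cost overrun: {overrun_pct:.0f}%")  # 40%
if overrun_pct > 15:
    print("Exceeds the 15% guideline -- an OTB/OTS may warrant consideration")
```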

Conclusion

There is no exact way to determine if an OTB/OTS is needed, and the project personnel must adequately assess all factors to make the determination. Going through an OTB/OTS is very time consuming, and the decision regarding that implementation should not be taken lightly.

After all factors are adequately analyzed, the project manager may ultimately deem it unnecessary and just manage to the variances being reported. This may be more cost effective and practical than initiating a formal reprogramming action.

If you have any questions about this article, contact Humphreys & Associates. Comments welcome.

We offer a workshop on this topic: EVMS and Project Management Training Over Target Baseline (OTB) and Over Target Schedule (OTS) Implementation.


Schedule Health Metrics

What are Schedule Health Metrics?

At the heart of every successful Earned Value Management System (EVMS) is a comprehensive Integrated Master Schedule (IMS) that aligns all discrete effort with a time-phased budget plan to complete the project. As such, the IMS must be complete and accurate to provide the necessary information to other EVMS process groups and users. The IMS may be a single file of information in an automated scheduling tool, or a set of files that also includes subcontractor schedules.

For any medium to large project, the IMS may contain thousands of activities and milestones interconnected with logical relationships and date constraints to portray the project plan.  Schedule Health Metrics provide insight into the IMS integrity and viability.

Why are Schedule Health Metrics important?

For a schedule to be useable, both as a standalone product and as a component of the EVMS, standards have been developed to reflect both general scheduling practices and contractual requirements.  Schedule Health Metrics contain checks designed to indicate potential IMS issues.  Each check has a tolerance established to help focus on particular areas of concern.  The individual metrics should not be considered as a pass or fail score, but should be used as a set of indicators to guide questions into specific areas of the IMS.

For example, if there is an unusually large number of tasks with high total float values, a review of the logic in the IMS is warranted. At the end of the analysis, if the Control Account Manager (CAM) responsible for the work, with the help of the Planner/Scheduler, can explain why the high float exists, then the issue is moot. Metrics are simply a method to help isolate issues in a large amount of data. In this example, the analysis will continue to flag this CAM’s data, but those flags are not indicative of failure.

What are the standards?

Since the beginning of automated scheduling systems in the 1980s, attempts have been made to take advantage of the scheduling databases for metrics analysis. The maturity of scheduling software tools has provided better access to metrics, both in open architecture databases and through export capabilities to tools such as Microsoft Excel and Access. With the availability of these tools, new analysis techniques were developed and implemented.

Several years ago, the Defense Contract Management Agency (DCMA) reviewed the various Schedule Health Metrics being used within the US Government and selected 14 tests they believed to be the best tests of an IMS.  Because they support a wide variety of customers from the DOD, NASA, and DOE, they have developed these checks with thresholds that should be common to all types of programs, but not specific or restrictive to a particular one. The thresholds help bring focus to the issues in the schedule under review.  With agreement between the customer, the DCMA and the contractor, they may be altered in some cases to reflect the unique nature of a project.

Unless otherwise indicated, the DCMA Health Metrics apply only to incomplete activities or tasks in the IMS, not milestones, with baseline durations of 1 day or longer. This set also excludes Level of Effort (LOE) and Summary tasks because they should not be driving the network.  The DCMA 14 point Schedule Health Metrics are:

1.  Missing Logic

The test: The percentage of incomplete activities that do not have a predecessor or successor.

The threshold: 5%.

For a schedule to function correctly, the tasks must be logically linked to produce a realistic mathematical model that sequences the work to be performed.
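A sketch of how this first check might be computed over a schedule export (the task records and field names below are hypothetical, not any particular tool’s schema):

```python
# Toy task list: the check counts incomplete tasks missing a
# predecessor or a successor, per the Missing Logic definition above.
tasks = [
    {"id": "A", "complete": False, "preds": ["Start"], "succs": ["B"]},
    {"id": "B", "complete": False, "preds": ["A"],     "succs": ["C"]},
    {"id": "C", "complete": False, "preds": ["B"],     "succs": ["End"]},
    {"id": "D", "complete": False, "preds": [],        "succs": []},  # dangling
    {"id": "E", "complete": True,  "preds": [],        "succs": []},  # complete: excluded
]

incomplete = [t for t in tasks if not t["complete"]]
missing = [t for t in incomplete if not t["preds"] or not t["succs"]]
pct = 100 * len(missing) / len(incomplete)
print(f"Missing logic: {pct:.0f}% of incomplete tasks (threshold 5%)")  # 25%
```

The remaining percentage-based checks follow the same pattern: filter to the population the check defines, count the offenders, and compare against the threshold.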

2.  Activities with Leads

The test: The percentage of relationships in the project with lags of negative 1 day or less.

The threshold: 0%.

The project schedule should flow in time from the beginning to the end.  Negative lags, or leads, are counter to that flow and can make it more difficult to analyze the Critical Path.  In many cases this may also indicate that the schedule does not contain a sufficient level of detail.

3.  Activities with Lags

The test: The percentage of incomplete activities that have schedule lags assigned to their relationships.

The threshold: 5%.

An excessive use of lags can distort an IMS and should be avoided.

4.  Relationship Types

The test: The percent of Finish to Start relationships to all relationships.

The threshold: 90%.

A project schedule should flow from the beginning of the program to the end. Finish to Start (FS) relationships are the easiest and most natural flow of work in the IMS, with the occasional Start to Start (SS) and Finish to Finish (FF) relationship as required. Start to Finish relationships should not be used because they represent a backward flow of time and can distort the IMS, as does the overuse of SS and FF relationships.

5.  Hard constraints

The test: The current definition includes any date constraint that affects both the forward and backward pass in the scheduling engine. These include ‘Must’ or ‘Mandatory’ ‘Start On’ or ‘Finish On’ constraints, and ‘Start’ or ‘Finish Not Later Than’ date constraints.

The threshold: 5%.

Hard constraints limit the flexibility of the IMS to produce reliable Driving Paths or a Program Critical Path.  Techniques using soft constraints and deadlines can allow the schedule to flow and identify more issues with float values.

6.  High Float

The test: Percentage of tasks with High Total Float values over 44 days.

The threshold: 5%.

A well-defined schedule should not have large numbers of tasks with high total float or slack values.  Schedules with this condition may have missing or incorrect logic, missing scope or other structural issues causing the high float condition.  The DCMA default threshold of 44 days was selected because it represents two months of effort.  Individual projects may wish to expand or contract that threshold based on the length of the project and the type of project being scheduled; however, any changes in thresholds should be coordinated with the customer first to confirm the viability of the alternate measurement.

7.  Negative Float

The test: The percentage of incomplete activities that have a total float (slack) value of less than zero (0) days.

The threshold: 0%.

When a schedule contains tasks with negative float, it indicates that the project is not able to meet one or more of its delivery goals. This is an alarm requiring redress with a corrective action plan.  Please see the Negative Float blog for additional discussion.

8.  High Duration

The test: The percentage of tasks in the current planning period with baseline durations greater than 44 days. This check excludes LOE, planning packages, and summary level planning packages.

The threshold: 5%.

Near term tasks should be broken down to a sufficient level of detail to define the project work and delivery requirements.  These tasks should be shorter and more detailed since more is known about the immediate scope and schedule requirements and resource availabilities.  For tasks beyond the rolling wave period, longer duration tasks in planning packages are acceptable, as long as the IMS can still be used to accurately develop Driving Paths to Event Milestones and a Program Critical Path to the end of the project.

9.  Invalid Dates

The test: The percentage of tasks with actual start or finish dates after the Data Date, or with forecast start or finish dates before the Data Date that have no corresponding actual dates.

The threshold: 0%.

This check is designed to ensure activities are statused with respect to the Data Date in the IMS. Claiming actual start or finish dates in the future is not acceptable from a scheduling perspective, and can also create distortions in the EVM System by erroneously claiming Earned Value in the current period for future effort. Alternately, if tasks are not statused with actual start or finish dates prior to the Data Date, then they cannot logically start or finish until at least the day of the Data Date, if not later. If the forecast dates are not moved to the Data Date or later, the schedule cannot be used to correctly calculate Driving Paths to an Event Milestone or the Program Critical Path.
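Both sides of this check can be expressed as a simple filter over task dates. The records below are hypothetical and consider finish dates only, for brevity:

```python
from datetime import date

data_date = date(2024, 6, 1)

# Hypothetical task records (finish dates only, for brevity).
tasks = [
    {"id": "A", "actual_finish": date(2024, 6, 10), "forecast_finish": None},             # actual in the future
    {"id": "B", "actual_finish": None,              "forecast_finish": date(2024, 5, 20)},# forecast in the past, no actual
    {"id": "C", "actual_finish": date(2024, 5, 15), "forecast_finish": None},             # valid
]

def has_invalid_dates(t):
    if t["actual_finish"] and t["actual_finish"] > data_date:
        return True  # claimed an actual beyond the Data Date
    if not t["actual_finish"] and t["forecast_finish"] and t["forecast_finish"] < data_date:
        return True  # unstatused work stranded before the Data Date
    return False

flagged = [t["id"] for t in tasks if has_invalid_dates(t)]
print(flagged)  # ['A', 'B']
```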

10.  No Assigned Resources

The test: Percentage of incomplete activities that do not have resources assigned to them.

The threshold: 0%.

This is a complex check because of two basic factors: 1) resources are not required to be loaded on tasks unless directed by the contractor’s internal management requirements, and 2) some tasks such as Schedule Visibility Tasks (SVTs) and Schedule Margin tasks should not be associated with work effort.  If the contractor chooses not to load resources into the schedule the options are:

  1. Associate basic quantities of work with tasks and define in a code field, transfer those quantities to the EVM cost system and verify the traceability between the IMS quantities and the associated budgets in the cost system.
  2. Maintain the budgets entirely in the EVM cost system and provide a trace point from the activities in the IMS to the associated budgets in the cost system.  The trace points are usually in the form of control account and work package/planning package code values.

In either case, care must be exercised so that Schedule Visibility Tasks are reviewed and confirmed to ensure that work is not misrepresented to either the contractor or the customer.

11.  Missed Activities

The test: The percentage of activities with baseline finish dates on or before the Data Date that either have not been completed or failed to finish by their baseline finish dates.

The threshold: 5%.

Many people view this as a performance metric.  That is true, but it is also used to review the quality of the baseline.  For example, if a project has a 50% failure rate to date, what level of confidence should the customer have in future progress?  Is the baseline a workable plan to successfully complete the project?  Does the EVM System reflect the same issues as the IMS?  If not, are they correctly and directly connected? These are questions that should be addressed by the contractor before the customer or other oversight entities ask them.

12.  Critical Path Test

The test: Select a task on the program Critical Path and add a large amount of duration to that task, typically 600 days.

The threshold: The end task or milestone should slip by as many days as the delay in the Critical Path task.

This is a test of the integrity of the schedule tool to correctly calculate a Critical Path.  If the end task or milestone does not slip by as many days as the artificial delay, there are structural issues inhibiting this slip.  These issues may be logic links, hard constraints or other impediments to the ability of the schedule to reflect the slip.  These issues should be addressed and corrected as the schedule data is to be relied upon to provide meaningful information to management.

13.  Critical Path Length Index (CPLI)

The test: (Critical Path Length + Total Float on the Critical Path) divided by the Critical Path Length. This formula provides a ratio that puts the Critical Path float in perspective with the Critical Path length.

The threshold: .95 or higher.

If the program is running with zero (0) Total Float on the Program Critical Path, then the ratio is 1.00.  If there is negative float on the Program Critical Path, then the ratio will fall below 1.00 which indicates that the schedule may not be realistic and that project milestones may not be met.
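The arithmetic is straightforward; the durations below are invented for illustration:

```python
def cpli(critical_path_length, total_float):
    """Critical Path Length Index: (CP length + CP total float) / CP length."""
    return (critical_path_length + total_float) / critical_path_length

print(round(cpli(200, 0), 2))    # 1.0  -- zero float on the Critical Path
print(round(cpli(200, -20), 2))  # 0.9  -- negative float: below the 0.95 threshold
```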

14.  Baseline Execution Index

The test: The number of completed activities divided by the number of activities that should have been completed based upon the baseline finish dates.

The threshold: .95 or higher.

This check measures the efficiency of the performance to the plan.  As such, some people also dismiss this as a simple performance metric, but as in the case of Metric #11 (Missed Tasks), this is also a measurement of the realism of the baseline plan.  As in Metric #11, if the schedule performance is consistently not to the plan, how viable is the plan?  How viable is the EVMS baseline?  How accurate is the information from the baseline that Management is using to make key decisions?  Metrics #11 and #14 may reflect the result of the effort being performed on the contract, but also represent the quality and realism of the baseline plan.
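The BEI computation is equally simple; the task counts below are invented for illustration:

```python
def bei(tasks_completed, tasks_baselined_complete):
    """Baseline Execution Index: completed tasks / tasks baselined to be complete."""
    return tasks_completed / tasks_baselined_complete

# 90 tasks finished out of 100 that the baseline said should be done by now.
print(round(bei(90, 100), 2))  # 0.9 -- below the 0.95 threshold
```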

What are additional metrics that help identify schedule quality issues?

The DCMA’s 14 point schedule assessment should be considered a basic check of a schedule’s health, but it is by no means the only check that should be used to analyze an IMS. More industry standard checks are identified in other documents, including the Planning and Scheduling Excellence Guide (PASEG) revision 2.0 (6/22/12). The PASEG is a National Defense Industrial Association (NDIA) product developed in cooperation between industry and the Department of Defense. Section 10.4, Schedule Execution Metrics, discusses in greater detail some of the Health Metrics identified above, as well as other metrics including the Current Execution Index (CEI) and the Total Float Consumption Index (TFCI).

In addition to these metrics, checks should be performed on activity descriptions, activity code field values, risk inputs, Earned Value Techniques, and other items to assure alignment of the IMS with its partner information systems. These systems include, but are not limited to, the MRP system, the cost system, program finance systems, and the risk management system. The IMS is an integral component of a company’s management system; therefore, issues with the IMS data will be reflected in the other components of the EVMS.

All of the above health checks can be performed manually with the use of filters and grouping functions within the scheduling tool; however, they may take too much time and effort to be sustained. The marketplace has tools available to perform these and other checks within seconds, saving time and cost and allowing schedule analysts and management to devote their time to addressing and resolving the issues. With the aid of these tools, a comprehensive schedule health check can be performed as part of the business rhythm instead of on an occasional, time-available basis.

Summary

Schedule Health Metrics are an important component of the schedule development and maintenance process.  While the DCMA has established some basic standards for schedule health assessments, the 14 metrics should not be considered the only checks, but just the beginning of the schedule quality process.

Schedule checks should be an integral part of the schedule business rhythm and when issues are identified, they should be addressed quickly and effectively. Significant numbers of tasks that trip the metrics, or persistent issues that are not resolved, may require a Root Cause Analysis (RCA) to identify the reasons for the problems and to develop a plan to address them.

Give Humphreys & Associates a call or send us an email if you have any questions about this article. 


Earned Value and Negative Float

Quick: what do bankers, ship captains, and program managers have in common? Answer: they all want to address negative float issues in a timely manner.

While those of us working in program management are not concerned so much with a ship’s ability to stay afloat or financial maneuvers, we should be concerned with earned value and negative float in the schedules.  It is an important warning sign that one or more of the Program’s schedule goals cannot be met with the current plan.

The term “negative float” has different meanings to different people, even within the project management community. To be precise, the term refers to a property assigned to each task or milestone in the schedule called Total Float, or Total Slack in Microsoft Project. The values in this property usually represent days and are assigned as a result of a scheduling analysis run. These values can be positive, zero, or negative:

  1. For tasks with positive numbers assigned to the Total Float property, the tasks can be slipped by that number of days before impacting a milestone or the end of the project.
  2. When the task Total Float value is zero, the task cannot slip at all.  Conditions 1 and 2 should be the norm, with all tasks having zero or higher total float values.  If the schedule is well constructed, has realistic task durations, and includes all discrete scope, it indicates the project has a good plan in place to achieve its goals, whether contractual or internal.
  3. When tasks have negative float values, the schedule is sounding an alarm.    Tasks with negative float values indicate probable failure to meet one or more completion goals.  These goals are represented in the schedule as date constraints assigned to tasks or more preferably, milestones. These date constraints represent necessary delivery deadlines in the schedule and if the current schedule construct is unable to meet those delivery deadlines, negative float is generated on every task that is linked in that potential failure.  The more tasks with negative float, and the larger the negative float values on those tasks, the more unrealistic the schedule has become.

If the schedule contains tasks with negative float, the first step is to quantify it. This can be performed in the tool using filters or grouping by float values.  Analysis tools, such as Deltek’s FUSE, Steelray or the DCMA’s new Compliance Interpretive Guide (CIG), are used to evaluate contractor delivered data and provide metrical analysis to Auditors prior to a review.  The tolerance threshold in the CIG (current nickname ‘Turbo’), as in all schedule analysis tools, is 0 (zero) percent of tasks with negative float.

Once identified, the next step is to determine the cause of the issue(s).  Because negative float is generated by a date constraint in the schedule, if the end point can be determined, then the predecessors can be identified that are forcing the slip to the end point.  One of the easiest ways to do this is to group the schedule by float and sort by finish date.  This is because most of the string of tasks that push a task/milestone with a delivery date constraint share the same float values; look for those groups of tasks with the same negative float values.
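The group-by-float technique can be sketched in a few lines. The task names and float values below are hypothetical; the point is that tasks sharing the same negative float value usually form the chain driving the missed deadline:

```python
from collections import defaultdict

# Hypothetical (task name, total float in days) pairs from a schedule export.
tasks = [("Fab bracket", -8), ("Assemble unit", -8), ("Test unit", -8),
         ("Write manual", 12), ("Ship spares", -3)]

# Group only the negative-float tasks by their float value.
by_float = defaultdict(list)
for name, tf in tasks:
    if tf < 0:
        by_float[tf].append(name)

# Worst float first: the largest shared group is the likely driving chain.
for tf, names in sorted(by_float.items()):
    print(tf, names)
```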

The final step is to take action.  Planners, CAMs and their managers should meet and collaborate to determine the cause and options available to solve the issues.  These meetings should result in a corrective action plan to solve the problem. In general, there are five options available to the program team:

  1. Change durations – if the negative float leading up to a delivery point is low, additional resources assigned to those tasks may reduce the durations of the activities and relieve the negative float issues.  It is important to understand that reducing durations just to avoid a bad metric reading for negative float only puts off the issue until the ultimate surprise is delivered: a delay in delivery, and all the pain associated with that delay (penalties, lost award fees, lost business if consistently late, etc.).
  2. Change relationships – perhaps some tasks may be run in parallel instead of in series. A review of all the logic contributing to the negative float condition should be performed and adjustments should be made only if they make sense.
  3. Review date constraints in the Integrated Master Schedule (IMS) – for example, if subcontractors could deliver product earlier, that could also help solve the issue. If waiting for customer-provided equipment or information, perhaps the effort can be accelerated to relieve the stress on the schedule.
  4. Consume Schedule Margin – if there is still negative float leading up to a major contract event or contract completion, and all of the above options have been exhausted, the PM has the option to use a portion of the Schedule Margin to relieve the negative float pressure leading up to the milestone.  If the Schedule Margin is represented by a bar, this means decrementing the forecast duration of the bar.  If the Schedule Margin is represented as a milestone, the date constraint on that milestone can be changed to a later point in time, but not later than the contractual delivery date assigned to it.
  5. Ask for relief – if, after all of the above steps have been completed, the schedule still has negative float indicating an inability to meet schedule deadlines, it is time to have a discussion with the customer. It is usually better to have these bad news discussions earlier rather than later, while there is still time to implement work-around or corrective action plans. The customer has been reading the same schedule and may have helpful suggestions to solve the problems or could potentially provide contractual relief for the delivery dates. As a last resort, the contractor can inform the customer and seek concurrence that an Over Target Schedule (OTS)* should be instituted to relieve the schedule condition and a more realistic schedule developed. This option should not be taken until all of the other options have been thoroughly explored. *See our blog: Is it OTB/OTS Time or Just Address the Variances?
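The corrective options above all respond to the same underlying arithmetic: total float is the difference between a task's latest allowable finish (driven backward from the delivery constraint) and its earliest possible finish (driven forward through the network logic). A minimal sketch of that calculation, using a hypothetical four-task chain whose names, durations, and deadline are illustrative only:

```python
# Hypothetical example: total float via a forward and backward pass.
# Task names, durations, and the deadline are illustrative, not real data.

# activity -> (duration in days, predecessors)
tasks = {
    "Design":  (10, []),
    "Build":   (15, ["Design"]),
    "Test":    (8,  ["Build"]),
    "Deliver": (2,  ["Test"]),
}
deadline = 30  # contractual delivery constraint; earliest possible finish is 35

# Forward pass: earliest finish for each task
early_finish = {}
def ef(name):
    if name not in early_finish:
        dur, preds = tasks[name]
        early_finish[name] = max((ef(p) for p in preds), default=0) + dur
    return early_finish[name]

# Backward pass: latest finish, anchored to the deadline constraint
successors = {t: [s for s, (_, ps) in tasks.items() if t in ps] for t in tasks}
late_finish = {}
def lf(name):
    if name not in late_finish:
        late_finish[name] = min((lf(s) - tasks[s][0] for s in successors[name]),
                                default=deadline)
    return late_finish[name]

# Total float = latest finish - earliest finish; a negative value means the
# deadline cannot be met with the current durations and network logic.
for t in tasks:
    print(f"{t}: total float = {lf(t) - ef(t)} days")  # each task shows -5
```

Here every task on the driving path carries -5 days of float, which is why options 1 through 3 attack durations, relationships, and constraints: each one changes an input to one of the two passes.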

Summary

The definition of a schedule is a time-phased plan that defines what work must be done, and when, in order to accomplish the project objectives on time. Negative float is a condition in the schedule indicating the project will be unable to meet one or more of those objectives. It should not be ignored, or worse, marginalized with slapdash tricks to get rid of it, such as deleting relationships or reducing durations to zero.

Instead, negative float should be quantified, analyzed, and addressed with a corrective action plan that includes steps and follow-up reviews to ensure adequate remediation of the problem. It is a zero-tolerance metric with most customers and, if not addressed internally, will most likely be identified by the customer for action.

Contact Humphreys & Associates, Inc. with questions or for information on how to set up a corrective action plan for earned value and negative float.


Reviewing Authority Data Call – Not Just a Wish List

One of the most important items needed to prepare for an Earned Value Management System (EVMS) review is the data call. This is not just a list of random data; the reviewing authorities have a defined set of data items they want to review so they can evaluate the EVMS implementation and compliance.

Required Artifacts

Over the years the reviewing authorities have fine-tuned the review process and created a very specific list of required artifacts. They use these items to pre-determine the review focus areas so they are prepared to get right to the soft spots in the system and processes.

Formal Review Notification

The process begins when the contractor receives a notification from the reviewing authority that it will conduct a formal review of a project. This could be a Compliance Review (CR), an Integrated Baseline Review (IBR), standard surveillance, or one of many other reviews conducted to determine the implementation or continued compliance of the EVMS processes and reports. Regardless of the type of review, one of the key items is the data call request. The data call is used to request project information and could consist of 12 or more reporting periods of data. This will vary by agency, type of program, and type of review. In most cases, a minimum of three months of project data will be required; typically, however, 6 to 12 months of data would be requested.

Basic Reports

Some of the basic reports requested are the Contract Performance Reports (CPRs), Integrated Program Management Reports (IPMRs), or similar time-phased project performance reports produced from the earned value (EV) cost tool database. The data call request includes the detailed source data from the EV cost tool as well as the Integrated Master Schedule (IMS) from the beginning of the program. This source data is often delivered electronically to a customer following the IPMR or Integrated Program Management Data and Analysis Report (IPMDAR) Data Item Description (DID) prescribed data formats. The Baseline Logs are often also requested.

Quality Data

It is essential to provide quality data in response to the Review Authority data call. The entire review process can be derailed when data call items are incomplete or inaccurate. Some of the things to consider are:

  1. Make sure the list of requested items is fully understood (differences in nomenclature can cause confusion).
  2. The data should be available in the format required in the call.
  3. Determine the best way to support the data call delivery if it is not specified in the request. The data can be provided using electronic media such as a thumb drive, as attachments to emails (the size of the files may prohibit this), or by establishing a secure access cloud server where the reviewing authority can retrieve the data.
  4. Contact the requesting reviewing authority to establish a meeting to discuss the data call. This meeting should be used to resolve or clarify any issues regarding the requested information, negotiate potential equivalents of the project data if it does not exactly match the requested information, and establish a method to transmit all data files.
  5. Develop an internal plan to monitor the progress of data collection. Be sure to have non-project personnel review the data for accuracy and compliance with the specifics in the data call.
  6. Submit the data call to the requesting authority, then follow up with a phone call or meeting to verify the reviewing authority received the data, can open all the files, and agrees the complete set of data has been provided.
  7. Follow up with another call a few weeks before the review to check if the reviewing authority has any issues or problems in evaluating and understanding the data call information. Be willing to work with them until the authority is comfortable with the data.

[NOTE: The number of items on the list depends on (1) the agency conducting the review and on (2) the type of review being conducted. The number of items requested could vary from around 30 to 100 or more.]

Typical Data Call

Some of the basic items typically requested in the data call are:

  1. Earned Value Management System Description including the matrix of the System Description and related system documentation mapped to the 32 guidelines in the EIA-748 Standard for Earned Value Management Systems as well as to the current version of the reviewing agency’s EVMS Cross Reference Checklist.
  2. EVMS related policies, processes, procedures, and desktop instructions. Examples include organizing the work, scheduling, budgeting, work authorization, details about earned value techniques and how each is applied, change control, material planning and control, subcontract management, and risk/opportunity management.
  3. Organization charts down to the Control Account Manager (CAM) level.
  4. Accounting calendar.
  5. Project directives including the Statement of Work (SOW) pertaining to Program Management or Statement of Objectives (SOO), EVM clauses, and EVM Contract Data Requirements List (CDRLs) or Subcontract Data Requirements List (SDRLs).
  6. Work Breakdown Structure (WBS) Index and Dictionary.
  7. Responsibility Assignment Matrix (RAM) including budget detail at the CAM level.
  8. Project and internal work authorization documents.
  9. Integrated Master Plan (IMP) or milestone dictionary.
  10. Contract Budget Base Log, Management Reserve Log, and Undistributed Budget Log.
  11. Risk/opportunity identification and assessments, risk/opportunity management plan.
  12. Cost performance reports (all applicable formats) or datasets. Provide the reports or dataset in the format provided to the customer such as PDF, Excel, UN/CEFACT XML, or JSON encoded data per the DID on contract such as the CPR, IPMR, or IPMDAR.
  13. Integrated Master Schedule (IMS) submissions and related native schedule file. This includes the IMS summary report if required.
  14. IMS Data Dictionary.
  15. Most recent Contract Funds Status Report (CFSR) or equivalent funding status report.
  16. Variance Analysis Reports (VARs) or equivalent progress narrative reports as well as the internal and external variance thresholds.
  17. List of subcontractors including value and type (such as cost reimbursable, firm fixed price, time and materials) including the applicable purchase orders. When EVM requirements are flowed down to the subcontractors, provide a copy of subcontractor EVM related contractual requirements (CDRLs and DIDs).
  18. Major subcontractor CPRs, IPMRs, or equivalent cost performance reports (all applicable formats) or IPMDAR datasets.
  19. Major subcontractor IMS submissions.
  20. Previous audit or surveillance findings, resulting reports, corrective action plans, and resolution and tracking logs.
  21. List of specific software toolsets used for accounting, scheduling, cost management, resource management, risk/opportunity management, or performance analysis.
  22. EVMS Storyboard and flowcharts.
  23. Chart of accounts, including cost element definition.
  24. Staffing plans or weekly/monthly labor reports.
  25. List or copy of contract modifications.
  26. Cost Accounting Standards (CAS) disclosure statement or equivalent internal corporate procedures.
  27. Baseline Change Requests.
  28. Any other data previously provided to the customer as part of a data call.
  29. Basis of Estimates (BOE) or historical data/productivity rates and efficiency factors.
  30. Estimate to Complete (ETC) and Estimate at Completion (EAC) documentation.
  31. Budget reports or control account plans by element of cost (labor hours and dollars, material dollars, and other direct cost dollars) and associated burdens or overhead costs.
  32. Actual cost reports.
  33. Open commitment reports.
  34. Bill of material including cost detail.
  35. Quantifiable Backup Data for percent complete work packages including MRP/ERP Reports for production work packages.
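The internal plan to monitor the progress of data collection mentioned earlier can be as simple as a script that compares the files gathered so far against the requested item list. A minimal sketch; the item names and file names below are hypothetical, not drawn from any agency's actual data call:

```python
# Hypothetical sketch of tracking data call collection status against the
# requested item list; item names and file names are illustrative only.
from pathlib import Path

requested_items = {
    "EVM System Description": "system_description.pdf",
    "Responsibility Assignment Matrix": "ram.xlsx",
    "IMS native schedule file": "ims_current.mpp",
    "Management Reserve Log": "mr_log.xlsx",
}

def collection_status(root, items):
    """Return (collected, missing) item-name lists for a data call folder."""
    root = Path(root)
    collected, missing = [], []
    for name, filename in items.items():
        (collected if (root / filename).exists() else missing).append(name)
    return collected, missing

collected, missing = collection_status("data_call", requested_items)
print(f"{len(collected)} of {len(requested_items)} items collected")
for name in missing:
    print("MISSING:", name)
```

Running a check like this before each internal review makes it easy to see which items still need an owner, and the missing list doubles as the agenda for the status meetings with the reviewing authority.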

Reacquaint Yourself

The list includes items that are used frequently, as well as items that are used only at specific times during the project and will probably be less familiar to the review team. As the collection of the data call items progresses, be sure to establish quick refresher sessions on the less frequently used documents and any other items where the review team might be having difficulty. As part of the process of gathering the data call items, be sure internal reviews are conducted to verify accuracy and traceability, to confirm the users of the data are familiar with the data content and prepared to answer questions, and to ensure current data are available to the review team.

NOTE: This Data Call List is intended for general guidance in preparation for any agency review (e.g., DCMA, DOE, FAA, etc.). For example, in the past, the DCMA Compliance Review Data Call item list contained 102 specific items, but this number varies from review to review and has changed over the years.  The number is not as important as the quality of the data items that are delivered to the review authority.

First Impressions

The data call items will provide the first look at the project’s EVM data and processes for many of the review team members. The review team members will have the data several weeks prior to the on-site review. They will be performing multiple validation checks using various analytical software tools as well as hands-on analysis of the information. If the data is incomplete, contains errors, or does not trace well, the review team will form a more negative opinion of the EVMS application.

Double Check the Data Call

The data analysis results will be a basis for where attention is focused during the on-site review, as the analysis emphasizes areas that contain anomalies or indicate a lack of system integrity. Significant emphasis should be devoted to the data call items to ensure accuracy and compliance with the review authority’s requests, as it is a very positive way to begin the review.

A Humphreys & Associates EVM specialist is always available to answer questions. Give us a call or send an email.
