Earned Value

EVM Consulting – Modeling & Simulation



Forewarned is Forearmed

Forewarned is forearmed. John Farmer, of New Hampshire, said that in a letter in 1685. But that advice is most likely biblical and very much older. No matter the source of the thought, we should take it as divine guidance if we are project managers. Maybe we should have it cut into a stone tablet, so we can share it with our team members.

Most of our work as project managers is spent in the “controlling” phase, which is made up of the three steps “measure, analyze, act.” Our EVMS and IMS exist to support this management function. The measuring part is done very well in our EVMS and our IMS; we know where we are and how we got there. The analyzing is equally well handled in the IMS and EVMS. Only the management task of acting is not well supported; generally, we lack decision-making support and tools.

EVM Consulting - Measure, Analyze, Act

Deterministic Path

No matter how well constructed and how healthy our IMS is, it has a deterministic path forward. The logic links between the activities are there because we expect them to be fulfilled. Indeed, if activity “B” is a finish-to-start successor to activity “A” we fully expect that at some point activity “A” will finish and will provide its output to activity “B”. That is a single path forward and it is a deterministic path. It is also a somewhat simplistic model.

EVM Consulting - Deterministic Relationships in EVMS

Multiple Outcomes

Our management system asks us to perform root cause analysis followed by corrective action. But what if there is more than one corrective action to be taken? And worse: what if the corrective actions can have multiple outcomes, each with its own probability? That means multiple choices and multiple outcomes. How would we show that in our plan? How would we analyze the multiple possible futures that such a situation presents?

Happily, there are ways to model a future without a set path. And once we have the future model, there are also ways to simulate the outcomes to give the probabilities we need to decide which actions to take. We are talking about probabilistic branching, and we are saying that we can build a probabilistic map of the future to use in making decisions; especially making decisions on corrective actions.

Take a simple example of running a test on the project. The expectation is that eventually we will pass the test. We will keep trying until we do. In the IMS deterministic model the test portion of the IMS might look like this:

EVM Consulting - Run the Test then Use the Product

Simulation

We can simulate this situation with different expected durations for the test. That is helpful information, but it does not explain or even capture what is going on during those different durations. It looks like we are simply taking longer to do the testing, but is that really what is happening? The deterministic model certainly does not show it.

In the real world, this simple model might have three potential outcomes. There might be three paths we can take to get to the point where we use the product, and each path has a time and money cost. We might run the test and find that we passed. Or we might have to stop the test for issues with the item or the test setup. We might even fail the test and have to correct something about the product to improve our chances of passing a rerun. Eventually we will get to a usable product. But what do we put in our estimate and our plan? What do we say about the resources we need? What do we tell the boss? The customer?
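The three-outcome test loop just described can be sketched as a small Monte Carlo simulation. The probabilities, durations, and costs below are illustrative assumptions, not figures from any real program:

```python
import random

# Each test attempt has three possible outcomes. The probabilities, rework
# durations, and rework costs here are illustrative assumptions only.
OUTCOMES = [
    # (name, probability, rework_days, rework_cost)
    ("pass",         0.60,  0,      0),  # test passes; use the product
    ("setup_issue",  0.25,  5, 20_000),  # stop the test, fix the setup, rerun
    ("product_fail", 0.15, 15, 75_000),  # fix the product itself, then rerun
]

TEST_DAYS, TEST_COST = 10, 50_000        # assumed time/cost of one test run

def run_test_cycle(rng):
    """Simulate repeated test attempts until one passes; return (days, cost)."""
    days, cost = TEST_DAYS, TEST_COST
    while True:
        r, cum = rng.random(), 0.0
        for name, p, rework_days, rework_cost in OUTCOMES:
            cum += p
            if r < cum:
                if name == "pass":
                    return days, cost
                # rework, then another full test run
                days += rework_days + TEST_DAYS
                cost += rework_cost + TEST_COST
                break

rng = random.Random(42)
results = [run_test_cycle(rng) for _ in range(10_000)]
avg_days = sum(d for d, _ in results) / len(results)
avg_cost = sum(c for _, c in results) / len(results)
print(f"mean test-phase duration: {avg_days:.1f} days, mean cost: ${avg_cost:,.0f}")
```

Running many trials gives a distribution of likely time and cost outcomes for the test phase, which is exactly the kind of probabilistic map of the future described here.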

EVM Consulting - Real World Testing

Full Future Model

We now have a much better understanding of the future and can explain the situation. We also can simulate the situation to find out the most likely time and cost outcomes, so we can explain the future without any histrionics or arm waving.
If the issue is important enough, we can build out the full future model and simulate it.

EVM Consulting - Full Future Model and Simulation

No matter how far we pursue the model of the future, having a valid model and being able to stand on solid ground are very valuable to us as project managers.

This is not to say that we should model out complex situations as a routine in the IMS. That would be impossible, or at least prohibitively costly. We are saying that when situations arise, we need to be able to use the IMS to help us make decisions.

This type of probabilistic modeling of the future is particularly useful in defining major decision points in our plan. When we reach a decision point, the IMS may have multiple branches as successors, but that implies we take every branch, which is not valid. Modeling each branch and its probabilities is valid. In the example below, where the milestone represents a decision point, we have shown three possible paths to take. If each were modeled out into the future with time and cost data, we would have the information we need to choose the path we wish to pursue. Without processes and tools like this, we would be flying blind.
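Once each branch has been modeled, comparing the candidate paths can be as simple as a probability-weighted roll-up. The path names, probabilities, and figures here are purely hypothetical:

```python
# Hypothetical decision point with three candidate paths. Each path's branch
# model yields a set of (probability, days, cost) outcomes; all figures are
# invented for illustration.
paths = {
    "redesign":  [(0.7, 40, 300_000), (0.3, 70, 550_000)],
    "rework":    [(0.5, 25, 150_000), (0.5, 45, 320_000)],
    "use_as_is": [(0.9,  5,  10_000), (0.1, 90, 800_000)],
}

for name, outcomes in paths.items():
    # Sanity check: each branch model's probabilities must sum to 1.
    assert abs(sum(p for p, _, _ in outcomes) - 1.0) < 1e-9
    exp_days = sum(p * d for p, d, _ in outcomes)
    exp_cost = sum(p * c for p, _, c in outcomes)
    print(f"{name}: expected {exp_days:.1f} days, ${exp_cost:,.0f}")
```

Note that expected values alone can hide risk: in this made-up example "use_as_is" has the best expectation but carries a 10% chance of a 90-day, $800,000 outcome, so the full distribution, not just the mean, should inform the decision.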

Future Blog Posts

This discussion will be continued in future blogs to develop a better foundational understanding of the process and power of probabilistic modeling in our EVMS.

EVM Consulting - Decision Point

Good information sets the stage for good decisions. The IMS and the EVMS have sufficient information to help us model the pathways ahead of our critical decisions. We just need to learn to take advantage of what we have available to us.

Find out how an experienced Humphreys & Associates EVM Consultant can help you create a full future model and simulation of your most vital EVMS Systems. Contact Humphreys & Associates at (714) 685-1730 or email us.


Agile/Scrum Ceremonies and Metrics Useful in EVMS Variance Analysis and Corrective Action

Agile Scrum EVMS

P. Bolinger, CSM, October 2016
Humphreys & Associates

How can Agile/Scrum be used to support EVMS variance analysis and forecasting in a way that provides program managers with integrated cost and schedule information at no extra effort?

The discipline of EVMS and the Agile/Scrum practices have several touch-points that are covered in two major documents: NDIA IPMD Agile Guide March 2016, and PARCA Agile and EVM PM Desk Guide. Neither of these documents, as yet, drives to the level of specifics when it comes to best practices for use of Agile to support EVM Variance Analysis and EVM Forecasting.

Looking at the literature for Agile/Scrum, we know that there are recommended ceremonies that are conducted at various levels of the product structure and at different times during the project life cycle. These ceremonies are supported by many discussions of the metrics that can be collected at each ceremony and their potential use in managing the technical work of development within the Agile/Scrum framework. But where do these ceremonies potentially support EVMS Variance Analysis and Forecasting?

Now suppose we are presented with a control account that has exceeded the EVMS thresholds for cumulative cost and schedule variances. Wouldn’t it be great to have at our fingertips the underlying data from the process? In this case we might find, for example, that the Velocity is less than that needed to meet the end goal, the story cycle time is longer than desired, the pass/fail ratio is not favorable, too many team members have been absent in the last Sprint, the number of disruptions has been excessive, and the work to accomplish the stories is higher than estimated. It is not difficult to surmise the outcome would likely be a behind schedule and overrun condition in the EVMS. These data measures would provide the fodder for deep diving to the root cause and impact statements.
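One way to have that underlying data at our fingertips is a simple threshold screen over the collected metrics. The metric names and limits below are illustrative assumptions, not prescribed values:

```python
# Hypothetical thresholds for flagging Agile metrics during a variance
# deep-dive. Names and limits are illustrative assumptions only.
THRESHOLDS = {
    "velocity":        ("below", 28),   # points/Sprint needed to meet the end goal
    "cycle_time_days": ("above", 5),
    "pass_fail_ratio": ("below", 0.8),
    "absences":        ("above", 2),
    "disruptions":     ("above", 4),
}

def flag_root_cause_candidates(metrics):
    """Return the metrics breaching their thresholds, as fodder for the
    root-cause and impact statements in a variance analysis report."""
    flags = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue
        breach = (direction == "below" and value < limit) or \
                 (direction == "above" and value > limit)
        if breach:
            flags.append((name, value, direction, limit))
    return flags

sprint_metrics = {"velocity": 22, "cycle_time_days": 7.5,
                  "pass_fail_ratio": 0.65, "absences": 3, "disruptions": 6}
for name, value, direction, limit in flag_root_cause_candidates(sprint_metrics):
    print(f"{name}: {value} ({direction} limit {limit})")
```

With all five sample metrics out of bounds, the screen surfaces exactly the conditions described above as likely drivers of a behind-schedule, overrun control account.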

Where would we get that information?

Let’s start with the Agile/Scrum ceremonies. The particular Agile/Scrum ceremonies that we find conducted during the project are:

Backlog Refinement. This ceremony can be nearly continuous. It involves redefining the backlog of development work (scope), the prioritization or re-prioritization of that work, and potentially the assignment of responsibilities for the backlog.

Release Planning. This is a recurring ceremony aligned with the release cadence for the project. It involves establishing the capabilities and features of the product and when they will be released.

Sprint. This is a short time-defined effort to accomplish the design, code, and test of some subset of the product. The Sprints are controlled by the self-managing teams.

Daily Scrum or Standup. This is a daily (recommended) team meeting to discuss what has happened, what roadblocks exist, what is planned for the day, and other necessary items.

Sprint Review. The session in which the team products are demonstrated to the product owner and sell-off is accomplished.

Sprint Retrospective. A meeting of the stakeholders to discuss what went right (or wrong) during the Sprint and to define improvement actions that are needed.

The relationship of these Agile ceremonies with EVMS might look like this:

For each ceremony, the Agile purpose is given first, followed by its relationship to EVM variance analysis, root cause analysis, corrective action planning & follow-up, and forecasting.

Backlog Refinement
  Agile purpose: Manage, estimate, prioritize, and organize the product backlog in an on-going routine.
  EVM relationship: Estimating impacts the EVMS ETC and EAC as well as the durations of efforts. Prioritizing in response to issues is corrective action management. Organizing the backlog could be a form of corrective action effort.

Release Planning
  Agile purpose: Establishing the contents and timing for releases of product.
  EVM relationship: Updates could be part of corrective action planning in response to issues, as could the creation of new work packages and changes to planning packages and SLPPs.

Sprint
  Agile purpose: Short time-boxed performance unit; work is done in Sprints.
  EVM relationship: Sits below the work package, so a short-span measurement period is possible.

Daily Scrum and Stand-up
  Agile purpose: Make the short-term plan, adjust to issues, discuss problems, clear roadblocks.
  EVM relationship: Much of the daily action would relate to root cause analysis and corrective action planning, although the time-frame is very short and the issues may be too small to individually impact the feature work package or the Epic control account.

Sprint Review
  Agile purpose: Demonstrate the product, update released work, make changes to the product.
  EVM relationship: Relates to corrective action planning and follow-up. Issues would be found here that would impact risks, ETCs, corrective actions, and performance metrics.

Sprint Retrospective
  Agile purpose: Reflect on the project, progress, people, processes, what was good, what was bad, and take actions to improve.
  EVM relationship: This should be the richest source of supporting information for EVMS root cause and corrective action within the VAR realm. Very timely for variance analysis, as it happens potentially many times during a work package (feature) duration.

Feature Retrospective (not one of the basic ceremonies)
  Agile purpose: Review the situation regarding any technical scope deficit; reflect on the project, progress, people, processes, what was good, what was bad, and take actions to improve.
  EVM relationship: Because this only happens at the end of the feature, it is limited in value for variance analysis timeliness. Any lessons learned can only be applied to future feature work.

But where is the meat? Where do we get actionable data or at least data we can analyze to decide what management efforts are required?

There are numerous potential metrics that can be collected during these ceremonies. These metrics can form the basic data set that could be analyzed to define the root cause of cost and schedule variances. In addition to isolating the cause of issues, within some of these ceremonies the impact of the issue on the Sprint, or Feature, or team may be made. Certainly, these metrics can be used as the basis for projecting future workload and performance.

The total number of potential metrics is not known. In this paper, we looked at 17 metrics and considered what the data might mean. The results of this review are contained in this matrix:

For each metric, the type of measure is given in parentheses, followed by a short discussion.

  • Sprint Burn Up/Burn Down (Backlog): Value of the backlog remaining for the Sprint. A decrease is expected when work is done; an increase means the work increased. Burn up can include total completed plus remaining; a great metric.
  • Feature Burn Up/Burn Down (Backlog): Value of the backlog remaining for the Feature. A decrease is expected when work is done; an increase means the work increased or shifted.
  • Customer support requests received (Disruption): Number of instances. Unplanned interruptions by the customer can lower the output of the team if excessive.
  • Disruption measures (Disruption): How many and what type (other than customer support requests). Higher disruptions impact team efficiency.
  • Estimate Accuracy, Sprint or Feature (Estimating): Measure of the budgeted value (estimated value) for the Stories in the Sprint or Feature versus the actual cost (calculated cost) of the Stories when done. Related to team size.
  • Discovered work (Estimating): Emerging work discovered during the Sprint. Will translate to extra effort in the future if adopted into the backlog.
  • Exceeds WIP Limits (Management): If WIP limits are set on the team or individuals, then exceeding those limits will impact efficiency and output.
  • Retrospective Action Log (Management): Count of improvement actions listed in the Retrospective. An increasing count means issues are not being resolved.
  • Attendance (Management): Comparison of actual hours worked by the team to the baseline expectations in the plan.
  • WIP (Productivity): Measure of the number of stories or points in WIP at any time. WIP growth can indicate bottlenecks and inefficiencies.
  • Velocity (Productivity): Measure of the amount of work (Stories or Points) accomplished during a time period. Higher velocity means greater throughput per person or team.
  • Stability measures (Productivity): Comparison of the basic measures on this list Sprint by Sprint. If there is high variability in the measures between Sprints, then the future is unpredictable.
  • % Tests Automated (Productivity): More automated testing should increase efficiency and decrease cycle time.
  • Defects found by team (Quality): Number of bugs reported during the team effort. Measures quality of work. Higher bug incidence translates to lower output and higher costs.
  • Defects found by customer (Quality): Number of bugs reported by the user/customer. Measures quality of the work delivered. Higher bug incidence translates to lower customer satisfaction and higher rework costs.
  • Pass/Fail (re-do) measures (Quality): How the rate of success in testing compares to the number of attempts. A high success rate should mean greater output and efficiency.
  • Cycle Time (Schedule): Time from the start of a story to its completion. A short cycle time is desired.

Let us continue the theme of the behind schedule and overrun control account and look at what information would be available to support developing the estimate-to-complete. An updated and refined backlog would have the scope of work remaining for the control account. The updated release plan would have the timing for the deliveries to be made in the control account. The metrics collected about the effort expended per accomplished story or story point would provide a factor for projecting future real-work hours. Planned corrective actions and improvements would tell us how much improvement we might expect in the quality, speed, or cost of the work. The insights available from a full set of metrics are impressive.
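As a sketch, the effort-per-story-point metric feeds the ETC directly. The Sprint history, backlog size, and labor rate below are made-up numbers used only to show the arithmetic:

```python
# A minimal sketch of using Agile metrics to support an EVMS estimate-to-
# complete. All numbers are illustrative assumptions, not data from the post.

sprint_history = [
    # (story_points_completed, actual_hours_expended) per finished Sprint
    (30, 520),
    (26, 510),
    (32, 545),
]

remaining_backlog_points = 140   # scope remaining, from the refined backlog
labor_rate = 125.0               # assumed $/hour

# Hours per story point, averaged over the completed Sprints.
total_points = sum(p for p, _ in sprint_history)
total_hours = sum(h for _, h in sprint_history)
hours_per_point = total_hours / total_points

etc_hours = remaining_backlog_points * hours_per_point
etc_dollars = etc_hours * labor_rate

print(f"hours/point: {hours_per_point:.1f}")
print(f"ETC: {etc_hours:,.0f} hours (~${etc_dollars:,.0f})")
```

The same factor can be adjusted for planned improvements (for example, applying an assumed efficiency gain from Retrospective actions) before the CAM commits it to the forecast.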

Does a project have to collect all of these metrics? If not all, then which ones would be the right ones? Questions like that would be answered by the project management team analyzing their prior experience and the particular challenges of the project. The team would establish a data collection plan, likely described in their Software Development Plan or Program Management Plan, that would explain the metrics, their meaning, and their collection frequency along with their purpose. With a clear understanding of the technical data to be collected and analyzed, the Control Account Managers would not have a difficult task to define how they would use that data in developing Variance Analyses and generating well-considered Forecasts. In fact, these tasks should be much simpler with the data in hand.


Earned Value Management: How Much Is Enough?


How Much EVMS Is Enough

I took the scenic route to selecting the theme of this blog. First, it was suggested that I write a blog on the benefits and costs of the earned value process as it applies to program management. Next it was suggested that I describe the harm of not using any of the elements of the earned value process.

In the case of the benefits and costs of the earned value management process, it would be difficult to improve upon Dr. Christensen’s 1998 paper on this subject or to attempt to improve other papers and studies done by Wayne Abba, Gary Humphreys, Gary Christle, Coopers & Lybrand and others. So I will not make citations to these past studies. Rather I will leave them undisturbed, as the monuments they have become.

This blog will summarize my observations of how companies have chosen “how much EVM is enough” for them and share my observations of the results of these decisions. Each company has selected an EVM implementation strategy and each company’s strategy falls along a bounded continuum.

I will describe this continuum of company EVM strategies with a left hand and a right hand goal post, and the space between as a cross bar. The “left hand goal post” represents companies that elect to be very poor at EVM or to not use EVM at all. The “right hand goal post” represents companies that have committed to being “best-in-class” practitioners of the EVM process and are the polar opposite of the companies at the left hand goal post. There are few companies at either the left or right hand goal posts. The “cross bar” represents the vast majority of companies that have selected an EVM strategy somewhere between the left and right goal posts.

Two Goal Posts and a Cross Bar: Recalcitrant, Merely Compliant, Efficiently Expert

There are as many strategies to earned value management as there are companies using EVM to manage their programs and projects.

Left Goal Post: The Recalcitrant

I have firsthand experience with a company that, at the time I initially joined them, had decided to ignore earned value management even though it was a requirement in several of its contracts. After many painful years of attempting to maintain this recalcitrant EVM strategy, this company decided that a better strategy would be to become “efficiently expert” at EVM.

Cross Bar: Merely Compliant at EVM

It has been my experience that most companies desire to “become EVM compliant,” which generally means being compliant to the 32 guidelines and not failing those guidelines so as to be de-certified. This is the vast middle ground between the two goal posts. I will now share five observations regarding companies in the “cross bar” majority.

Observation #1: Compliance As A Goal; Golf and EVM

Compliance should be a “given,” or a “pre-condition,” not a “goal.” Remaining merely compliant implies a status quo or static posture.

I will use the game of golf as an analogy. Golf is a game of honor and compliance to well established rules. All PGA professional tour golfers “comply” with the rules that govern golf. Although all PGA tour pro golfers comply with these rules, their performance on tour differs dramatically.

Fifty-three percent of all PGA golf pros, past and present, have no tour wins. That means only 47% of all PGA tour golf pros have won at least a single PGA tour. There are seven players in the history of the PGA that have fifty or more tour wins. If the bar is lowered to forty or more wins, only three players are added to the list. If the bar is lowered yet again to thirty or more tour wins, only eight more players are added to the list. Only 18 golfers have won 30 or more PGA tournaments.

Professional golfers do not confuse compliance with performance, nor do these professionals assume that “being compliant” will improve their performance.

Observation #2: “The Tyranny of The Status Quo”

With apologies to Milton Friedman and his book of the same name, companies that attempt to maintain mere guideline compliance will do no better than the status quo, and more often than not, regress toward non-compliance. Maintaining status quo is a myth – you either improve or regress.

All professionals, companies included, must compete in their markets and selected fields. To succeed in this competition requires constant improvement in areas critical to success. A company, organization, or individual without the means or the desire to improve will eventually fail and perhaps perish.

Observation #3: Blaming The Scoreboard

As a program manager, I considered EVM as my scoreboard. I reacted to the EVM data – the scoreboard – and made decisions based on that data (GL #26).

I recall the final score of the Super Bowl capping the 2014 season: Patriots 28, Seahawks 24. Did the scoreboard cause the Seahawks to lose the game, or did a poor decision by their coach cause the loss? Imagine a coach that cannot see the scoreboard. That coach does not know the score or how much time remains. That coach cannot react to the realities of the game.

Observation #4: EVM Causes Poor Program Performance

I have witnessed several company leaders assert that the use of EVM on a poorly performing program is the cause of that program’s poor cost and schedule performance. A correlation between two variables, or a sequence of two events (use of EVM, then poor performance), does not imply that one caused the other. This is the logical fallacy of post hoc ergo propter hoc: “X happened, then Y happened, therefore X caused Y.” Night follows day, but day does not cause night. Use of EVM does not cause poor program performance. Failing to react to EVM data and promptly take corrective action on your program’s cost and schedule performance, however, often leads to poor outcomes.

Observation #5: It Takes More Energy To Be Poor At EVM Than To Be Expert

Returning to the earlier golf analogy, professional golfers make very difficult shots appear easy. I played in one pro/am tournament years ago. The pro I was teamed with took me to the range hours before our tee time. He asked me how many balls I hit before each round. I told him sometimes none and sometimes 50. He hit 1,000 balls before our round. When we finished our round, he was ready for another 18 holes. I was not. Both of us “complied” with the rules of golf. His score was significantly lower than mine. His game was effortless and produced a below par score. My game was labored and produced a poor result.

And so it is with EVM or any other process. The better you are at a skill, the easier it becomes. Experts consume far fewer calories at their craft than ambivalent amateurs.

Right Goal Post: Efficiently Expert at EVM

The polar opposite of a recalcitrant strategy to EVM is a strategy to become “efficiently expert.” As I mentioned earlier, I joined a company that attempted to sustain a recalcitrant EVM strategy. Their recalcitrant EVM strategy led to de-certification, large dollar withholdings, and significant damage to their corporate reputation.

After the most ardent EVM recalcitrants in this company “sought employment elsewhere,” a new strategy was adopted. This company embraced a strategy to become “best-in-class” as expert practitioners of EVM. This company’s goal was EVM perfection. EVM perfection is an impossible ambition, but wiser than “mere compliance.” And as with the PGA tour golf pro, EVM became nearly effortless.

Which EVM strategy will your company choose?


Robert “Too Tall” Kenney
H&A Associate


Earned Value Management | Integrated Program Management Report (IPMR) XML Electronic Submittals


One of the major changes in the 2012 IPMR Data Item Description (DID) was the requirement to use the DoD-approved XML schemas and guidelines to electronically submit formats 1 through 4, 6, and 7. The DoD-approved XML schemas were developed under the auspices of the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT), a formal international organization for establishing electronic business standards.  The DoD-approved XML guidelines are the Data Exchange Instructions (DEIs) or business rules for using the UN/CEFACT XML schemas to support the data requirements in the IPMR DID.  This XML electronic submittal format replaces the ANSI X12 Electronic Data Interchange (EDI) transaction sets 839 and 806 found in the previous reporting DIDs: the 2005 DI-MGMT-81466A, Contract Performance Report (CPR), and the 2005 DI-MGMT-81650, Integrated Master Schedule (IMS).

The purpose of using a software vendor neutral international standard to submit data to the DoD was to eliminate the need for any specific toolset or proprietary database at either end.  Contractors can use their toolset of choice or internally developed applications to produce the XML instance files and electronically submit the data. The various DoD end-users, in turn, can use their toolset of choice or internally developed applications to read the XML data for their own use and analyses.

The business owner for the DoD IPMR Data Exchange Instructions is the OSD Office of Performance Assessments and Root Cause Analyses (PARCA), Earned Value Management (EVM) Division (https://www.acq.osd.mil/evm). The electronic submittals are designed to support the OSD EVM Central Repository (https://dcarc.cape.osd.mil/EVM/EVMOverview.aspx), a joint effort between the Defense Cost and Resource Center (DCARC) and OUSD/AT&L, managed by PARCA.  The EVM Central Repository provides a secure, centralized environment for the reporting, collection, and distribution of EVM data for the DoD acquisition community.

There are a number of UN/CEFACT XML related resources available to contractors, software vendors, and government users on the DCARC EVM Central Repository web site.

  • Select the UN/CEFACT XML navigation option to download the base UN/CEFACT XML schemas as well as the Data Exchange Instructions for the IPMR formats. There are three primary DEIs. One for Formats 1 through 4 (can include Format 5 data as an option), one for Format 6 (the IMS), and one for Format 7 (time phased historical data). Also on this web page is a link for a digital file signing tool; this works as an outer envelope that contractors can use to digitally sign and secure an XML instance file submission to the EVM Central Repository.
  • Select the EVM Tools navigation option to download the XML instance file IPMR Schema/DEI Checker or XML instance file viewers. The schema/DEI checker can be used to verify a given XML instance file conforms to the basic XML schema requirements as well as the business rules defined in the DEIs.  The XML instance file viewers can be used to read and display the XML data content in a more human friendly format.
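Before running a submission through the official schema/DEI checker, a contractor-side pipeline might begin with a trivial well-formedness screen. This sketch uses only the Python standard library; the element names are hypothetical and do not come from the UN/CEFACT schemas:

```python
import xml.etree.ElementTree as ET

def check_well_formed(xml_text):
    """First-pass sanity check before submitting an instance file: is the
    XML well formed at all? Full schema/DEI validation still requires the
    DoD-provided checker or an XSD-capable validator."""
    try:
        ET.fromstring(xml_text)
        return True, ""
    except ET.ParseError as err:
        return False, str(err)

good = "<IPMRInstance><Format>1</Format></IPMRInstance>"  # hypothetical element names
bad = "<IPMRInstance><Format>1</IPMRInstance>"            # mismatched closing tag

print(check_well_formed(good)[0])  # True
print(check_well_formed(bad)[0])   # False
```

Catching malformed files locally is cheap; it avoids a round trip through the Central Repository's submission validation process for errors that never needed to leave the building.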

A number of the commercial off the shelf (COTS) software vendors have submitted their IPMR outputs for testing to the EVM Central Repository to verify their XML outputs can pass the Central Repository data submission validation process. A number of contractors also tested outputs produced from their internal application systems (no COTS tool was used). This testing was part of the implementation verification process for completing the Data Exchange Instructions. To confirm a software vendor has successfully completed the process to verify their tool-set outputs can be successfully read and uploaded to the Central Repository, send an email to the EVM Contact for PARCA listed on the DCARC EVM Central Repository web site (Contact Us navigation link).

PARCA has also recently taken ownership of the XML schema and DEI Change Control Board (CCB) and related process. The intent is to use the PARCA Issue Resolution process (https://www.acq.osd.mil/evm/ir/index.shtml) for software vendors, contractors, or other end users to submit change requests for the base UN/CEFACT XML schemas or IPMR Data Exchange Instructions.


Earned Value and Negative Float

Quick: what do bankers, ship captains, and program managers have in common?  Answer: they all want to address negative float issues in a timely manner.

While those of us working in program management are not concerned so much with a ship’s ability to stay afloat or financial maneuvers, we should be concerned with earned value and negative float in the schedules.  It is an important warning sign that one or more of the Program’s schedule goals cannot be met with the current plan.

As described above, the term ‘negative float’ has different meanings to different people, even within the project management community.  To be precise, the term refers to a property assigned to each task or milestone in the schedule called Total Float, or Total Slack in Microsoft Project.  The values in this property usually represent days and are assigned as a result of a scheduling analysis run.  These numbers can be positive, zero, or negative:

  1. For tasks with positive numbers assigned to the Total Float property, the tasks can be slipped by that number of days before impacting a milestone or the end of the project.
  2. When the task Total Float value is zero, the task cannot slip at all.  Conditions 1 and 2 should be the norm, with all tasks having zero or higher total float values.  If the schedule is well constructed, has realistic task durations, and includes all discrete scope, these conditions indicate the project has a good plan in place to achieve its goals, whether contractual or internal.
  3. When tasks have negative float values, the schedule is sounding an alarm.  Tasks with negative float indicate probable failure to meet one or more completion goals.  These goals are represented in the schedule as date constraints assigned to tasks or, more preferably, milestones.  These date constraints represent necessary delivery deadlines in the schedule, and if the current schedule construct is unable to meet those deadlines, negative float is generated on every task that is linked in that potential failure.  The more tasks with negative float, and the larger the negative float values on those tasks, the more unrealistic the schedule has become.
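A tiny forward/backward-pass sketch shows how a date constraint generates negative float. The three-task network and the deadline are invented for illustration:

```python
# A minimal critical-path sketch showing how a date constraint produces
# negative float. Durations are in days; the network and deadline are made up.

tasks = {          # name: (duration, predecessors)
    "A": (10, []),
    "B": (15, ["A"]),
    "C": (5,  ["B"]),
}
deadline = 25      # date constraint on the finish of "C" (day number)

# Forward pass: earliest start/finish. (Insertion order here is already
# topological, so a plain dict iteration works for this small example.)
es, ef = {}, {}
for name in tasks:
    dur, preds = tasks[name]
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

# Backward pass: latest finish/start, capped by the deadline on the final task.
lf, ls = {}, {}
for name in reversed(list(tasks)):
    succs = [s for s in tasks if name in tasks[s][1]]
    lf[name] = min((ls[s] for s in succs), default=deadline)
    ls[name] = lf[name] - tasks[name][0]

for name in tasks:
    total_float = lf[name] - ef[name]   # negative => the deadline cannot be met
    print(name, total_float)
```

With 30 days of work against a 25-day deadline, every task in the chain carries a total float of -5, which illustrates why strings of tasks pushing a constrained milestone tend to share the same negative float value.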

If the schedule contains tasks with negative float, the first step is to quantify it. This can be performed in the tool using filters or by grouping by float values.  Analysis tools, such as Deltek’s FUSE, Steelray, or the DCMA’s new Compliance Interpretive Guide (CIG), are used to evaluate contractor-delivered data and provide metrics analysis to auditors prior to a review.  The tolerance threshold in the CIG (current nickname ‘Turbo’), as in all schedule analysis tools, is 0 (zero) percent of tasks with negative float.

Once identified, the next step is to determine the cause of the issue(s).  Because negative float is generated by a date constraint in the schedule, if the end point can be determined, then the predecessors can be identified that are forcing the slip to the end point.  One of the easiest ways to do this is to group the schedule by float and sort by finish date.  This is because most of the string of tasks that push a task/milestone with a delivery date constraint share the same float values; look for those groups of tasks with the same negative float values.

The final step is to take action.  Planners, CAMs and their managers should meet and collaborate to determine the cause and options available to solve the issues.  These meetings should result in a corrective action plan to solve the problem. In general, there are five options available to the program team:

  1. Change durations – if the negative float leading up to a delivery point is small, perhaps additional resources assigned to those tasks may help reduce the durations of the activities and relieve the negative float issues.  It is important to understand that reducing durations just to avoid a bad metric reading for negative float only puts off the issue until the ultimate surprise is delivered: a delay in delivery, and all the pain associated with that delay (penalties, lost award fees, lost business if consistently late, etc.).
  2. Change relationships – perhaps some tasks may be run in parallel instead of in series. A review of all the logic contributing to the negative float condition should be performed and adjustments should be made only if they make sense.
  3. Review date constraints in the Integrated Master Schedule (IMS) – for example, if subcontractors could deliver product earlier, that could also help solve the issue. If waiting for customer-provided equipment or information, perhaps the effort can be accelerated to relieve the stress on the schedule.
  4. Consume Schedule Margin – if there is still negative float leading up to a major contract event or contract completion, and if all of the above options have been exhausted, the PM has the option to use a portion of the Schedule Margin to relieve the negative float pressure leading up to the milestone.  If the Schedule Margin is represented by a bar, this means decrementing the forecast duration of the bar.  If the Schedule Margin is represented as a milestone, the date constraint on that milestone can be changed to a later point in time, but not later than the contractual delivery date assigned to it.
  5. Ask for relief – if, after all processes above have been completed and the schedule still has negative float indicating an inability to meet schedule deadlines, it is time to have a discussion with the customer.  It is usually better to have these bad news discussions earlier rather than later when there is still time to implement work-around or corrective action plans.  The customer has been reading the same schedule and may have helpful suggestions to solve the problems or could potentially provide contractual relief for the delivery dates.   As a last resort, the contractor can inform the customer and seek concurrence that an Over Target Schedule (OTS)* should be instituted to relieve the schedule condition and a more realistic schedule developed.  This is an option of last resort and should not be taken lightly unless all of the other options have been thoroughly explored. *See our blog: Is it OTB/OTS Time or Just Address the Variances?

Summary

The definition of a schedule is a time-phased plan that defines what work must be done, and when, in order to accomplish the project objectives on time. Negative float is a condition in the schedule indicating that the project will be unable to meet one or more of its objectives. It should not be ignored or, worse, marginalized with slap-dash tricks to get rid of it, such as deleting relationships or reducing durations to zero.

Instead, negative float should be quantified, analyzed and addressed with a corrective action plan which includes steps and follow-up reviews to ensure adequate remediation of the problem.  It is a zero tolerance metric with most customers and, if not addressed internally, will most likely be identified by the customer for action.

Contact Humphreys & Associates, Inc. with questions or for information on how to set up a corrective action plan for earned value and negative float.

