Prioritizing Functionally Diverse and/or Geographically Dispersed Facility Upgrades
Problem Statement
When prioritizing capital upgrade projects that are vastly different in function or geographically dispersed, multiple individuals must typically agree on the correct priority. Obviously, the facility manager of one site or system must think of the “greater good” and accept that their project may not be funded so that the needs of another site or system can be met. This is often more difficult than it seems.
This issue is complicated when the facility manager is held accountable for the aesthetics and performance of their site yet cannot obtain the funding to maintain it at the appropriate level. The problem is further compounded by the simple fact that, often, the upgrade would make the FM’s life easier. An example of this is a roof that currently leaks. We all know how difficult it can be to maintain a roof that desperately needs to be re-roofed. Every time it rains, we get out buckets to minimize the impact on operations. The problem arises when the only capital dollars allocated for upgrades go to another FM for a chiller upgrade.
Sometimes, it can be easy to understand prioritizing the “chiller versus the roof” once the specifics are understood: the chiller serves a data center that houses all of the company’s data, while the leaking roof is on a support building, like a warehouse, where it is not damaging the items stored there.
But what if the two scenarios were closer to the same? To illustrate this, let’s say that we have two buildings that need a new roof; both are 160,000 gsf office buildings. One is in the Philippines, the other in Malaysia. Let’s also assume that they both have the same climate. How do we prioritize between them?
Initial Analysis
When faced with these competing priorities, the prioritization process is often not well thought out. Typically, little upfront planning is done to eliminate subjectivity. Yet the number of variables that may need to be considered to make the appropriate decision can be more than the average person can process simultaneously. According to research, when humans try to process more than four variables at once, their ability to achieve accurate results is reduced to roughly “chance.”
So, when multiple variables must be considered to correctly prioritize facility upgrades, how is it best done? While working on this with various teams at different locations and companies, I have found that the best tool to use is a priority matrix. One of the most successful applications of a matrix to facility upgrade prioritization that I have seen resulted in a number called a PMV, or Prioritization Matrix Value, and that model is the basis of this presentation. The PMV results from scoring each project individually against set criteria, the company’s spending considerations, to reach a single representative score. Each spending consideration is weighted to ensure that the most important considerations carry the most influence on the final priority.
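To make the arithmetic concrete, here is a minimal sketch of a PMV calculation, assuming a matrix in which each project earns points in several weighted category columns and the column points are summed into a single score. The category names and point values are hypothetical illustrations, not the actual matrix used by any company described here.

```python
# A minimal sketch of a PMV calculation: each project earns points in several
# weighted category columns, and the column points are summed into a single
# Prioritization Matrix Value. Category names and point values are hypothetical.

def pmv(column_points: dict[str, int]) -> int:
    """Sum the points a project earned in each category column of the matrix."""
    return sum(column_points.values())

# Hypothetical scoring of the chiller and warehouse-roof examples from earlier.
data_center_chiller = {"safety/environment": 10, "business impact": 25,
                       "asset protection": 15, "aesthetics": 2}
warehouse_roof = {"safety/environment": 5, "business impact": 5,
                  "asset protection": 5, "aesthetics": 2}

print(pmv(data_center_chiller))  # 52
print(pmv(warehouse_roof))       # 17
```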
Building the Matrix
To strategically align around a tool that will shape how items are prioritized for some time to come, begin by building the right team to lay the foundation. Be sure to select the right mix of talent and responsibilities. Are representatives from Purchasing, Finance, Safety, Environmental, Security, Maintenance, Engineering, MIS, or any other department required for a balanced perspective? Are representatives from different geographies required as well? Does someone from Florida, Colorado, Japan, China, Ireland, or Argentina need to be on the team?
Once the right balance of team members has been recruited – spend time looking at company values, mission, vision, objectives, and behaviors. This will help to determine what areas each project should be scored against. Narrow the areas down to a select four to seven.
After selecting the four to seven that will be used, weight them to align with your company values, etc. For example, is safety more important than aesthetics? If so, ensure that it carries a greater total of available points.
Finally, develop a scale for the points within each category across a spectrum from “good to bad” or “high impact to no impact.” For Safety/Environment, for example, the scale might look something like the illustrative one sketched below. The most points are awarded for a project that will correct a potentially catastrophic safety or environmental issue. The points then decline as the safety impact of the issue decreases. Finally, at the bottom of the scale, a project with no impact on safety earns zero points in the safety column.
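As an illustration only, a Safety/Environment scale built this way might be recorded like the following sketch; the tier descriptions and point values are hypothetical and should be replaced with whatever the team agrees on.

```python
# Hypothetical Safety/Environment scoring scale, assuming this column is worth
# up to 25 points. The tiers mirror the description above: most points for
# correcting a potentially catastrophic issue, declining points as the safety
# impact decreases, and zero points when there is no safety impact at all.
SAFETY_ENVIRONMENT_SCALE = {
    "Corrects a potentially catastrophic safety or environmental issue": 25,
    "Corrects a serious safety or environmental risk":                   15,
    "Corrects a minor safety or environmental risk":                      5,
    "No impact on safety or the environment":                             0,
}
```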
This process is duplicated across all areas identified as prioritization categories. Be mindful that the points are scaled appropriately. For example, it is sometimes easy to argue subjectively that something with no safety impact should get more points, but as demonstrated in the example, when objectivity is applied, the scale holds.
The final step in developing the scale is to test it. The team should utilize some familiar projects and score them with the matrix. Based on the team’s experience, does the most critical project score the most points? What about the low-scoring project? Is it the one that should most likely be cut? If the matrix does not appropriately prioritize, adjust the weightings between columns or the scale within the columns until the expected result is achieved.
Using the Tool
Once the tool is developed, it is straightforward to prioritize projects against each other. Score each project, put them in a spreadsheet, and sort on the PMV in descending order. This will eliminate much of the subjectivity and give a relatively clear differentiation between priorities.
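In practice this step is usually done in a spreadsheet; the short sketch below shows the equivalent sort, using hypothetical project names, costs, and PMVs purely for illustration.

```python
# A minimal sketch of the sorting step. In practice this is usually done in a
# spreadsheet; the project names, costs, and PMVs below are hypothetical.
projects = [
    {"project": "Re-roof Building A",  "cost": 250_000, "pmv": 34},
    {"project": "Chiller replacement", "cost": 400_000, "pmv": 41},
    {"project": "Lobby refresh",       "cost": 120_000, "pmv": 18},
    {"project": "Re-roof Building B",  "cost": 230_000, "pmv": 34},
]

# Sort on the PMV in descending order so the highest priorities rise to the top.
for row in sorted(projects, key=lambda p: p["pmv"], reverse=True):
    print(f'{row["pmv"]:>3}  ${row["cost"]:>9,}  {row["project"]}')
```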
There will be cases where multiple projects have the same PMV. This becomes an issue when the available funding requires that only one or two projects be selected from a group tied at the same PMV. For example, suppose three projects score a PMV of “24” and the approved budget covers every project with a PMV higher than 24, but only two of the three that scored 24. How do you decide? At this point, focus on the tied projects (I typically also include projects one score up or down) and have the appropriate team review that short list and put it in explicit order. For instance, if $600,000 of funding is set aside to accomplish as many projects as possible, where is the line drawn when the top three projects on the list fit within the budget, but only one of the two projects tied at a PMV of 34 can be funded?
In that case, it is recommended to review both projects with a PMV of 34 at a minimum, and to consider reviewing the projects one score above and one score below as well. After all, the model has limited subjectivity, not eliminated it.
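One way to spot these tie situations systematically is sketched below. This is an illustrative helper, not part of the original tool: it assumes projects are funded in descending PMV order until the budget runs out, then returns the tie group that straddles the funding line so the team can put those projects in explicit order.

```python
# Illustrative helper (not part of the original tool): fund projects in
# descending PMV order until the budget runs out, then return the projects
# tied at the cutoff PMV for an explicit team review.
def funding_line(projects: list[dict], budget: int) -> tuple[list[dict], list[dict]]:
    ranked = sorted(projects, key=lambda p: p["pmv"], reverse=True)
    remaining = budget
    for i, p in enumerate(ranked):
        if p["cost"] > remaining:
            cutoff = p["pmv"]
            review = [q for q in ranked if q["pmv"] == cutoff]
            funded = [q for q in ranked[:i] if q["pmv"] != cutoff]
            return funded, review
        remaining -= p["cost"]
    return ranked, []  # the budget covers everything

# Hypothetical example with a $600,000 budget:
projects = [
    {"project": "Chiller replacement", "cost": 400_000, "pmv": 41},
    {"project": "Re-roof Building A",  "cost": 250_000, "pmv": 34},
    {"project": "Re-roof Building B",  "cost": 230_000, "pmv": 34},
    {"project": "Lobby refresh",       "cost": 120_000, "pmv": 18},
]
funded, review = funding_line(projects, 600_000)
# funded -> the chiller replacement; review -> the two roofs tied at a PMV of 34
```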
After the funded projects are completed and removed from the list, new projects are added, and those with lower PMVs move up the list. Over time, a project’s PMV can also change as the condition of the issue changes, or as priorities, the economy, the business, or people and management change. In this way, the process becomes a “living prioritization list.”
Results
As a result of implementing this program, several companies have achieved alignment around their priorities, with the ability to allocate funding against a prioritized list of projects that was created collaboratively. Ownership of the result was shared by all representatives, including those who did not receive funding during a particular cycle, because they helped to create the model that made the decision. Even for those who did not participate in the design of the process, the decision was at least documented and explainable.
One company eventually found that they could write a web-based tool to have each site worldwide enter their projects and score them by answering a series of questions. This tool reduced the manipulation of the system by hiding the actual scoring matrix behind a series of simple questions stated in layperson’s terms. This had the further benefit of enabling the initial requestor to engage in the prioritization process. As a result, they also had a greater degree of buy-in to the approved project list.
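As a rough illustration of that idea, the sketch below maps plain-language answers to hidden point values; the questions, answer options, and points are hypothetical and are not the company’s actual tool.

```python
# Illustrative sketch (not the company's actual tool) of hiding the scoring
# matrix behind plain-language questions: the requestor picks an answer, and
# the hidden point values roll up into the PMV. All values are hypothetical.
QUESTIONS = {
    "If this project is not done, could someone be hurt?": {
        "Yes, seriously": 25,
        "Yes, a minor injury is possible": 10,
        "No": 0,
    },
    "If this project is not done, will operations be interrupted?": {
        "Yes, across the company": 25,
        "Yes, but only at this site": 10,
        "No": 0,
    },
}

def score_request(answers: dict[str, str]) -> int:
    """Convert a requestor's answers into a PMV without exposing the matrix."""
    return sum(QUESTIONS[question][answer] for question, answer in answers.items())

# Example: a leaking roof over an office area.
print(score_request({
    "If this project is not done, could someone be hurt?": "Yes, a minor injury is possible",
    "If this project is not done, will operations be interrupted?": "Yes, but only at this site",
}))  # 20
```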
As this process and tool were implemented, it was found that a change control process was needed to ensure that the tool was kept current with the changing business environment. A spot audit process was also implemented to verify the accuracy of the data entered. However, these efforts required very little time (one occurrence of each every six months) and were vastly overshadowed by the benefits of using the matrix.
Conclusion
This is a straightforward, strategically oriented tool that can be easily deployed, and it has since been utilized in numerous other decision-making circumstances. Recently, it was used to support a hiring decision at the San Diego Community College District under the guidance of IFS Facilities, LLC, and at a major software company to prioritize facility process improvements.