This blog entry was originally written by Patrick Florer, I am just migrating the post to the new SIRA site.
(This is the first of three posts)
Most of the people in SIRA have heard of the PERT and BetaPERT distributions. Many of us use them on a daily basis in our modeling and risk analysis. For this reason, I think it is important that we understand as much as we can about where these distributions came from, what some of their limitations are, and how they match up to actual data.
The PERT Distribution:
The PERT approach was developed more than forty years ago to address project scheduling needs that arose during the development of the Polaris missile system. With regard to its use in scheduling, we can agree that the passage of time has a linear, understandable nature (leaving quantum mechanics out of the discussion, please) that might make it appropriate for estimates of task completion times. The Polaris missile program probably wasn’t the first big project that its originators (the DoD and Booz Allen Hamilton) had undertaken, so we can also assume that the creators of PERT had both experience and data to guide them when they constructed the function and created the math that they did.
The BetaPERT distribution was developed by David Vose in order to provide more flexibility in allowing for certainty or lack of certainty in PERT estimates. Vose added a fourth parameter, called gamma, that impacts the sampling around the most likely value and consequently controls the height of the probability density curve. As gamma increases, the height of the curve increases, and uncertainty decreases. As gamma decreases, the opposite happens. Some people use gamma as a proxy for confidence in the most likely estimate.
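To make gamma’s role concrete, here is a small Python sketch of the commonly published modified-PERT parameterization, in which the distribution is a Beta distribution rescaled to the [minimum, maximum] interval. The function names and formulas here are illustrative; any particular tool’s internals may differ.

```python
def pert_shape_params(minimum, mode, maximum, gamma=4.0):
    """Beta shape parameters for a modified PERT distribution.

    gamma is the extra "confidence" parameter; gamma = 4 recovers the
    classic PERT. (Illustrative parameterization, not the internals of
    any specific tool.)
    """
    spread = maximum - minimum
    a1 = 1 + gamma * (mode - minimum) / spread
    a2 = 1 + gamma * (maximum - mode) / spread
    return a1, a2


def pert_mean(minimum, mode, maximum, gamma=4.0):
    """Mean of the modified PERT: (min + gamma * mode + max) / (gamma + 2)."""
    return (minimum + gamma * mode + maximum) / (gamma + 2)


# Larger gamma -> larger shape parameters -> a taller, narrower peak
# around the most likely value, i.e. less uncertainty.
for g in (2, 4, 8):
    print(g, pert_shape_params(10, 50, 100, gamma=g))
```

Note that with gamma = 4 the mean formula collapses to the familiar PERT mean, (min + 4 × mode + max) / 6.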
For additional information about the PERT and BetaPERT distributions and how to use them, please see the excellent SIRA blog post that Kevin Thompson wrote a few weeks ago.
(In order to keep things simple, from this point forward, unless there is a reason to make the distinction, I will use PERT to mean both PERT and modified/BetaPERT.)
What’s the problem?
Most of us have been taught that PERT distributions are appropriate tools for taking estimates from subject matter experts (SMEs) and turning them into probability distributions using Monte Carlo simulation. As many of you know, this is very easy to do. The resulting graphics and tables look very nice, informative, and even a bit intimidating.
But how do we really know that the distributions we create have any validity?
Just because they may have worked in project scheduling, why should we believe that the distribution of loss magnitude, frequency, or anything else actually corresponds to the probability distribution that a PERT function can create? Even if these distributions are useful and informative, might there be circumstances where we would be better served by not using them? Or by using other distributions instead?
I will address three of these issues below.
In case you don’t feel like reading the whole post, I will tell you right now that:
- Yes, there are circumstances where PERT distributions do not yield good information.
- In a series of tests with a small data set, PERT distributions DID seem to correspond to reality – closely enough, in my opinion, to be useful, informative, and even predictive.
- Depending upon what we are trying to model, there are other distributions that might be even more useful than PERT.
- It’s a continual learning process – I want to encourage everyone to keep studying, experimenting, and sharing when possible.
When the PERT distribution doesn’t work
One of the assumptions of the PERT approach is that the standard deviation should represent approximately 1/6th of the spread between the minimum and maximum estimates. When I look at the PERT and BetaPERT math, I can see this at work. (For a full explanation, see http://vosesoftware.com/ModelRiskHelp/index.htm#Distributions/Continuous_distributions/PERT_distribution.htm) I have also read, and have demonstrated in my own experience, that PERT doesn’t return useful results when the minimum or maximum is a very large multiple of the most likely value.
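As a rough sanity check of that 1/6th rule, here is a Python sketch for a symmetric, well-behaved case (again assuming the modified-PERT-as-rescaled-Beta parameterization, which is illustrative rather than any specific tool’s internals):

```python
import numpy as np

# Symmetric example: min = 0, most likely = 50, max = 100, gamma = 4.
minimum, mode, maximum, gamma = 0.0, 50.0, 100.0, 4.0

# Modified-PERT shape parameters (illustrative parameterization).
a1 = 1 + gamma * (mode - minimum) / (maximum - minimum)
a2 = 1 + gamma * (maximum - mode) / (maximum - minimum)

rng = np.random.default_rng(42)
samples = minimum + (maximum - minimum) * rng.beta(a1, a2, size=1_000_000)

# For this symmetric case the exact sd is (max - min) / sqrt(28), about 18.9,
# versus the rule-of-thumb (max - min) / 6, about 16.7 -- hence "approximately
# 1/6th" of the spread.
print(samples.std(), (maximum - minimum) / 6)
```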
For example, try this with OpenPERT or any other tool you like:
Min = 1
Most Likely = 100
Max = 100,000,000
gamma/lambda = 4
Run it for 1,000,000 iterations of Monte Carlo simulation, just to be fanatical about sampling the tail values.
(BTW, this is not a theoretical example – these data were supplied by an SME as part of a very large risk analysis project I was involved with two years ago – some list members who were involved in that project may remember this scenario.)
I think that you will find that the mode (if your program calculates one), the median, and the mean are all so much greater than the initial most likely estimate as to be useless. In addition, I think that you will find that the maximum from the MC simulation is quite a bit lower than the initial maximum estimate.
Here are my results:
(Please note – if you do this yourself, you won’t get exactly the same results, but, if you run 1,000,000 iterations, your results should be close)
Results from the first tool:

Min = 10
Mode = 40,058,620
(This is very interesting – in a distribution this skewed, you would expect mode < median < mean: maybe this is an example of why Vose considers the mode to be uninformative?)
Median = 12,960,195
Mean = 16,675,377
Max = 94,255,562

Results from ModelRISK:

Min = 6
Mode = not calculated (ModelRISK doesn’t report a mode – see Vose’s book for the explanation of why not)
Median = 12,923,895
Mean = 16,654,354
Max = 93,479,781
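For what it’s worth, numbers in this neighborhood can be reproduced with a few lines of Python (again assuming the modified-PERT-as-rescaled-Beta parameterization; being a simulation, your exact figures will vary from run to run):

```python
import numpy as np

# The SME's estimates from the example above.
minimum, mode, maximum, gamma = 1.0, 100.0, 1e8, 4.0

# Modified-PERT shape parameters (illustrative parameterization).
a1 = 1 + gamma * (mode - minimum) / (maximum - minimum)   # barely above 1
a2 = 1 + gamma * (maximum - mode) / (maximum - minimum)   # roughly 5

rng = np.random.default_rng(7)
samples = minimum + (maximum - minimum) * rng.beta(a1, a2, size=1_000_000)

# The median and mean land in the tens of millions -- enormously larger
# than the most likely estimate of 100 -- and the simulated maximum
# falls well short of the 100,000,000 input.
print(f"median = {np.median(samples):,.0f}")
print(f"mean   = {samples.mean():,.0f}")
print(f"max    = {samples.max():,.0f}")
```

Because the most likely value is so tiny relative to the maximum, the first shape parameter collapses to nearly 1 and the distribution effectively forgets the 100 estimate altogether, which is exactly the failure mode described above.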
What’s the takeaway here?
That there are some sets of estimates that PERT distributions don’t handle very well. When we encounter large ranges that are highly skewed, we may need to re-think our approach, or ask the SME additional questions.
To be continued …