I recently came across this quote somewhere on the internet:
“Forecasting is the art of saying what will happen, and then explaining why it didn’t.”
A forecast, by its very nature, will never be perfect. It can be statistically sound, highly disciplined, and remarkably accurate, but it will never achieve 100% certainty.
The world is simply too volatile.
From localized supply disruptions and material shortages to “Black Swan” events like pandemics, geopolitical shifts, or sudden regulatory changes, there is always an unpredictable variable waiting to disrupt our models. Even something as simple as an unseasonable temperature spike or a late delivery can impact a finely-tuned plan.
Since we cannot eliminate error, our value as professionals lies in how we manage it.
To move from reactive firefighting to proactive planning, we must follow a three-step evolution:
- Accept imperfection – Let go of the “perfection trap” and accept that variance is an inherent part of the business.
- Detect and measure like crazy – We cannot manage what we do not measure. We must implement high-quality KPIs to identify where and when our predictions diverged from reality.
- Turn data into root cause analysis – Measuring the error is only half the battle. The real “art” is understanding the why behind the delta, allowing us to prepare better for the future.
So, if you are already past point No. 1 (seriously, accept that your forecast will never be perfect, otherwise change your profession), let’s focus on the analytical part: detecting and measuring forecasting errors.
Here are the most popular KPIs for measuring forecast errors in demand planning, supply chain, and business forecasting:
- Forecast BIAS
- MAD
- MAPE
- RMSE
I’ll explain each one in a simple, straightforward way to make you feel confident in this field.
1. Forecast Bias
The difference between the forecasted value and the actual value, averaged over time.
What it tells you in easy words:
It shows whether your forecasts are too high (over-forecasting) or too low (under-forecasting).
Positive bias = tending to overestimate
Negative bias = tending to underestimate
Advantages
- Very easy to interpret: immediately tells you the direction of the problem (are we consistently too optimistic or too pessimistic?)
- Critical for business decisions — e.g., chronic over-forecasting creates excess inventory, chronic under-forecasting causes stockouts and lost sales
Disadvantages
- Positive and negative errors cancel each other out → you can have a near-zero bias even with large errors in both directions (very misleading if used alone).
- Doesn’t tell you anything about the size or magnitude of the errors.
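A minimal sketch in Python of how bias is typically computed, using made-up illustrative numbers (not real data). Positive errors and negative errors partially cancel, which is exactly the weakness described above:

```python
def forecast_bias(forecasts, actuals):
    """Mean of (forecast - actual): positive = over-forecasting,
    negative = under-forecasting."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

# Illustrative example: one big over-forecast, one small under-forecast
forecasts = [110, 95, 120, 100]
actuals   = [100, 100, 100, 100]
print(forecast_bias(forecasts, actuals))  # 6.25 -> we tend to over-forecast
```

Note that the individual errors here are +10, −5, +20, and 0, yet the bias is only 6.25 because the −5 cancels part of the over-forecasting. That is why bias should never be read alone.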
2. MAD / MAE (Mean Absolute Deviation / Mean Absolute Error)
The absolute value of the error (ignoring whether it’s positive or negative).
It’s the average size of your errors, ignoring whether they were over- or under-. If MAD = 120 units, your forecasts are wrong by 120 units on average (in absolute terms).
Advantages
- No cancellation of positive/negative errors → gives an honest picture of typical error size.
- Super intuitive and expressed in the same units as your data (e.g., pieces, kg) → easy to explain to managers and planners.
- Treats every error equally (no extra punishment for big ones) → robust when you have occasional outliers or “crazy” values.
Disadvantages
- Doesn’t penalize large errors more than small ones → if big misses hurt your business much more (e.g., stockouts of high-value items), it underplays their importance.
- Not scaled → hard to compare accuracy across products with very different volumes (a 50-unit error on a 100-unit item is far worse than on a 10,000-unit item).
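A minimal MAD/MAE sketch, again with made-up numbers. Using the same data as the bias example above shows the contrast: bias would report 6.25, while MAD reports the true average miss of 8.75 units:

```python
def mad(forecasts, actuals):
    """Mean absolute deviation: average error size in the same
    units as the data, ignoring direction."""
    errors = [abs(f - a) for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

forecasts = [110, 95, 120, 100]
actuals   = [100, 100, 100, 100]
print(mad(forecasts, actuals))  # 8.75 -> wrong by ~9 units on average
```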
3. MAPE (Mean Absolute Percentage Error)
The average of absolute percentage errors across all data points.
It shows the average error as a percentage of the actual value. If MAPE = 12%, your forecasts are off by 12% on average (in relative terms).
Advantages
- Scale-independent → great for comparing accuracy across products, categories, or time periods with very different volumes (e.g., comparing a slow-moving spare part to a fast-moving SKU).
- Very intuitive for non-technical people → “we’re wrong by about 15% on average” is easy to understand and communicate.
- Widely used and expected in business reporting.
Disadvantages
- Becomes extremely large or even undefined when actual values are zero or very close to zero (division by zero or tiny numbers blows up the percentage; it is sometimes worth excluding such cases).
- Asymmetric: over-forecasting a small actual hurts MAPE much more than under-forecasting it → can bias models toward lower forecasts when optimized on MAPE.
4. RMSE (Root Mean Square Error)
The square root of the MSE (Mean Squared Error), giving a measure of the average magnitude of the forecast errors.
It’s like MAD but gives much more weight to your largest errors (because errors are squared before averaging, then square-rooted back). It still ends up in the same units as your data.
Advantages
- Strongly penalizes big errors → perfect when large misses are especially costly (e.g., under-forecasting peak demand)
- Same units as the original data → reasonably interpretable.
Disadvantages
- Very sensitive to outliers → one or two huge errors can make RMSE look dramatically worse, even if most forecasts are good.
- Harder to explain to non-technical stakeholders than MAD or MAPE (“what does an RMSE of 450 really mean?”).
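A minimal RMSE sketch with made-up numbers, built to show the outlier sensitivity: three small misses of 5 units plus one big miss of 45 units give a MAD of 15, but an RMSE of about 22.9, because squaring inflates the single large error:

```python
import math

def rmse(forecasts, actuals):
    """Root mean square error: squaring before averaging gives
    large errors much more weight than small ones."""
    squared = [(f - a) ** 2 for f, a in zip(forecasts, actuals)]
    return math.sqrt(sum(squared) / len(squared))

forecasts = [105, 95, 105, 145]  # errors: +5, -5, +5, +45
actuals   = [100, 100, 100, 100]
print(rmse(forecasts, actuals))  # ~22.9, vs a MAD of only 15
```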
Below you can find a quick recommendation table (when to prefer each):
| Metric | Best when you want to… | Avoid when… | Business-friendliness |
|---|---|---|---|
| Bias | Detect systematic over/under-forecasting | You only care about error magnitude | ★★★★★ |
| MAD/MAE | Robust average error size, no outlier drama | Large errors hurt much more than small ones | ★★★★☆ |
| MAPE | Compare across very different scales/volumes | Many zeros or very small actuals | ★★★★★ |
| RMSE | Heavily penalize big, expensive mistakes | You have outliers that aren’t meaningful | ★★★☆☆ |
In practice, most mature forecasting teams look at several of these together — especially Bias + one absolute measure (MAD or MAPE) + RMSE when big errors matter a lot.
Summary
I hope this short comparison of KPIs makes it easier for you to select exactly the ones you need! Remember, the audience to whom you want to show the data is key, so select your KPIs wisely!
Soon, I will prepare the next post for you, in which I will dive deep into Step No. 3: how to use those calculations to generate real insights. Stay tuned!