Discover how forecasting models can go awry due to incorrect assumptions or structural breaks, and learn practical ways to improve forecast accuracy and manage uncertainty.
Forecasting, especially in macroeconomics and capital market expectations, naturally involves some guesswork. On a personal level, I remember working on a macroeconomic model a few years back—proudly convincing myself that I’d nailed every detail—only to discover that a policy change suddenly made half my assumptions invalid. Ouch. That experience taught me just how sensitive forecasts can be when the economy (or the markets) decides to flip the script. In this section, we’re going to explore those tricky uncertainties that lurk behind our carefully designed models, give them a name—“model risk”—and discuss practical strategies for staying on top of it all while striving for better forecast accuracy.
Model risk refers to the possibility that a chosen forecasting method might be wrong or mis-specified. It’s a risk we encounter any time we rely on a mathematical or statistical structure to project the future. This can happen for several reasons:
• Incorrect functional forms (like using a linear model when the relationship is partly nonlinear).
• Missing variables (overlooking an influential factor like consumer sentiment or regulatory changes).
• Structural breaks (when relationships that held in the past simply disintegrate, often due to policy shifts or game-changing technologies).
As an illustration, consider how the classic Phillips curve—once a reliable mainstay for linking inflation and unemployment—has been less stable over time. It’s as if the curve sometimes goes “on vacation,” ignoring historical patterns and leaving monetary policy officials scratching their heads.
When used in portfolio management or setting capital market expectations, imperfect models can lead to suboptimal asset allocation decisions, incorrect hedging strategies, or misguided risk assessments. For CFA candidates, understanding and acknowledging model risk is critical. It’s not just about having a working knowledge of mathematical formulas; it’s also about questioning assumptions and anticipating potential pitfalls.
It’s tempting to assume that relationships among macro variables, such as GDP growth and interest rates, or inflation and unemployment, will stay stable. But in reality, globalization and rapid technological change have thrown curveballs at these traditional linkages. Policies can shift overnight, data can be revised, and new economic structures can emerge. That’s the nature of real-world complexity.
In practice, we often rely on historical data to estimate correlations or regression coefficients. Think about the historically negative correlation between bond yields and stock prices: it might hold in certain regimes but break down under extreme conditions or following major policy changes. Recognizing that these relationships are inherently time-varying—and that yesterday’s patterns may not hold tomorrow—is a must.
Forecast errors don’t just pop out of thin air. They come from specific, often interrelated sources:
When we only have limited data—particularly if it isn’t representative of the true population—parameter estimates can become biased or imprecise. For example, if we’re evaluating stock market volatility since 2009, we might be missing the extreme dislocations of 2008, leading us to systematically misestimate risk.
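To see how much a missing crisis period can matter, here is a minimal sketch using made-up annual return figures (not actual market data); dropping the single crisis-style year noticeably shrinks the estimated volatility:

import numpy as np

# Hypothetical annual equity returns; a 2008-style crash sits at the start of the sample
returns_with_crisis = np.array([-0.38, 0.26, 0.15, 0.02, 0.16, 0.32, 0.14, 0.01])
returns_post_crisis = returns_with_crisis[1:]   # same history with the crisis year dropped

# Sample standard deviation as a simple volatility estimate
print("Volatility incl. crisis year:", np.std(returns_with_crisis, ddof=1))
print("Volatility excl. crisis year:", np.std(returns_post_crisis, ddof=1))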
Coefficients in regression models can shift over time due to structural breaks or evolving economic conditions. If you used data from the 1980s to build an interest rate forecast model, you’d probably see significantly different relationships than if you used data from the post-2008 financial crisis era.
Unexpected events—like natural disasters, sudden geopolitical tensions, or global pandemics—can alter economic trajectories in a hurry. Even well-specified models might struggle to incorporate these “bolt from the blue” scenarios.
Markets aren’t just about data points; they’re about people, too. Emotions such as panic and euphoria lead to herding or contrarian behaviors that do not always match the neat assumptions of rational decision theory. This can magnify forecast errors, especially during bubbles or crises.
To manage model uncertainty, you first want to measure your forecast performance in a structured way.
MAE is the average of the absolute differences between predicted and actual outcomes. It puts every deviation on the same footing:
$$ \text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| $$
This measure is intuitive—“How far off were we, on average?”—but it doesn’t emphasize larger errors any more heavily than smaller ones.
RMSE also captures the average deviation between predictions and actuals, but it squares deviations before taking the average and then the square root:
$$ \text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} $$
This measure penalizes large errors more strongly than MAE, which can be beneficial in risk-averse contexts where large misses are particularly damaging.
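To make the contrast concrete, here is a small sketch with hypothetical forecast errors in which one large miss barely moves MAE but pulls RMSE up sharply:

import numpy as np

actual   = np.array([2.0, 2.1, 1.9, 2.0, 2.2])
forecast = np.array([2.1, 2.0, 2.0, 1.9, 3.2])   # the final forecast misses by a full point

errors = actual - forecast
mae  = np.mean(np.abs(errors))                   # every miss counts equally
rmse = np.sqrt(np.mean(errors**2))               # squaring lets the large miss dominate

print("MAE: ", mae)
print("RMSE:", rmse)                             # RMSE comes out well above MAE here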
It’s one thing to build a model that performs fantastically on historical data. But can it perform just as well on data it hasn’t seen before? That’s where out-of-sample testing comes in. You train (or calibrate) a model on part of the available data, then test its predictive power on a later (or otherwise different) data set. This helps highlight overfitting issues and ensures you aren’t simply memorizing historical noise.
A quick example in Python might look like this:
import numpy as np

def rmse(actual, predicted):
    # Root mean squared error: square the deviations, average them, then take the square root
    return np.sqrt(np.mean((actual - predicted)**2))

# In-sample data: the observations used to calibrate the model, and its fitted values
historical = np.array([100, 102, 105, 110, 115, 117])
forecasted_in_sample = np.array([99, 101, 104, 109, 114, 118])

# Out-of-sample data: later observations the model never saw during calibration
future_actual = np.array([120, 123])
future_forecasted = np.array([119, 126])

print("In-sample RMSE:", rmse(historical, forecasted_in_sample))
print("Out-of-sample RMSE:", rmse(future_actual, future_forecasted))
In practice, your real code might be more complex, but this snippet demonstrates how to verify whether your model “generalizes” to new data.
One proven way to mitigate model risk is to combine multiple forecasting approaches, also known as ensemble methods. Instead of placing all your bets on a single model, you gather forecasts from different models—like an ARIMA model, a vector autoregression (VAR), and perhaps a machine learning model—and then aggregate or weigh them. This tactic recognizes that each model has strengths and weaknesses. Averaging across them helps reduce the chance of large errors if one approach fails badly.
You might give more weight to a model that has historically performed better, or to one that you expect will do better in certain regimes. Sometimes these weights can be estimated dynamically, so the forecast can shift as new data becomes available.
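As a rough sketch of both ideas, the snippet below combines three hypothetical model forecasts, first with equal weights and then with weights derived from made-up historical RMSEs (better past accuracy earns a larger weight):

import numpy as np

# Hypothetical next-period GDP growth forecasts from three different models
forecasts = np.array([2.4, 1.9, 2.8])       # e.g., ARIMA, VAR, machine learning

# Equal-weight combination
equal_weight = forecasts.mean()

# Performance-based weights: lower historical RMSE earns a larger weight
past_rmse = np.array([0.5, 0.8, 1.2])       # made-up track records
weights = (1.0 / past_rmse) / np.sum(1.0 / past_rmse)
weighted_combo = np.dot(weights, forecasts)

print("Equal-weighted forecast:      ", round(equal_weight, 2))
print("Performance-weighted forecast:", round(weighted_combo, 2))

In practice the weights could be re-estimated each period as fresh performance data arrives.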
In more data-driven contexts, you can resample your dataset many times and build different models on each “bootstrapped” sample. Then you aggregate the forecasts. This approach harnesses the power of variance reduction—kind of like forming an investment portfolio with assets that have less-than-perfect correlation.
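A simplified sketch of the mechanics follows; to keep it short, each “model” is just the mean of a bootstrapped sample of hypothetical growth data, whereas a real application would refit a full forecasting model on every resample:

import numpy as np

rng = np.random.default_rng(42)
growth_data = np.array([2.1, 1.8, 2.5, 0.9, 3.0, 2.2, 1.5, 2.7])  # hypothetical history

boot_forecasts = []
for _ in range(1000):
    # Resample the history with replacement and "refit" a trivial model (the sample mean)
    sample = rng.choice(growth_data, size=growth_data.size, replace=True)
    boot_forecasts.append(sample.mean())

boot_forecasts = np.array(boot_forecasts)
print("Aggregated (bagged) forecast:", boot_forecasts.mean())
print("Dispersion across resamples: ", boot_forecasts.std())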
Forecasting methods need to be agile and able to learn from new events. Adaptive forecasting helps:
• Continuously re-estimate model parameters (e.g., a rolling-window regression; a brief sketch follows this list).
• Use Bayesian updating so prior beliefs about the model or its coefficients get revised based on incoming data.
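As a quick illustration of the rolling-window idea, the sketch below re-estimates a simple regression slope on successive windows of simulated data whose underlying relationship shifts halfway through the sample:

import numpy as np

# Simulated quarterly series: x drives y, but the slope drifts upward after period 10
x = np.arange(20, dtype=float)
y = 0.5 * x + np.where(x >= 10, 0.8 * (x - 10), 0.0) \
    + np.random.default_rng(0).normal(0, 0.3, 20)

window = 8
for start in range(0, len(x) - window + 1, 4):
    xs, ys = x[start:start + window], y[start:start + window]
    slope, intercept = np.polyfit(xs, ys, 1)      # re-estimate on each rolling window
    print(f"Window {start}-{start + window - 1}: slope = {slope:.2f}")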
Imagine you run a Bayesian vector autoregression—each new GDP or inflation data point updates the posterior distributions of your parameters. Over time, your model “learns” that certain relationships might be shifting. This dynamic approach can be particularly handy in volatile macroeconomic environments.
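A full Bayesian VAR is beyond a short snippet, but the sketch below shows the underlying updating logic for a single parameter: a conjugate normal update of a trend-growth estimate, using assumed prior and observation values chosen purely for illustration:

# Prior belief about trend GDP growth: mean 2.0%, with some uncertainty
prior_mean, prior_var = 2.0, 0.5**2

# Assumed observation noise; each new data point pulls the posterior toward the data
obs_var = 0.4**2
for new_obs in [1.6, 1.4, 1.5]:                   # hypothetical incoming growth readings
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + new_obs / obs_var)
    prior_mean, prior_var = post_mean, post_var   # today's posterior is tomorrow's prior
    print(f"After observing {new_obs}: posterior mean = {post_mean:.2f}")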
We can’t remove all uncertainty—like it or not, forecasting is about forging ahead with incomplete info. But we can manage the risk effectively:
• Margin of Error or Confidence Intervals: Present forecasts with an uncertainty band (e.g., a 90% or 95% range), as in the short sketch after this list. That way, decision-makers can gauge how much risk they’re taking if the forecast deviates from the base scenario.
• Communicate Uncertainty: You’ll often see investor reports with disclaimers like “these projections are subject to significant volatility factors.” While it might sound like legal boilerplate, it’s also a real reflection of the environment we live in.
• Ongoing Monitoring: Don’t set your forecasts on autopilot. Keep track of actual outcomes versus the forecast, and re-evaluate your approach if errors become too large or patterns shift. This helps catch model drift early.
• Scenario Analysis: Even if you have a single best guess, it’s useful to consider alternative scenarios—bull, base, and bear—to highlight how different assumptions about variables (like interest rates, inflation, or growth) can impact results.
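Here is the brief confidence-interval sketch promised above, assuming approximately normal forecast errors and using a made-up point forecast, with historical RMSE standing in for the error volatility:

point_forecast = 2.3   # hypothetical base-case GDP growth forecast, in percent
error_std = 0.9        # e.g., historical RMSE as a stand-in for forecast error volatility
z = 1.645              # normal z-value for a two-sided 90% interval

lower = point_forecast - z * error_std
upper = point_forecast + z * error_std
print(f"90% confidence band: {lower:.1f}% to {upper:.1f}%")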
Below is a simple flowchart illustrating how forecasting, evaluation, and combination strategies can be structured:
flowchart LR A["Forecasting Model <br/>Development"] B["Parameter <br/>Estimation"] C["Model <br/>Selection"] D["Forecast <br/>Evaluation"] E["Ensemble <br/>Combination"] F["Final <br/>Forecast"] A --> B B --> C C --> D D --> E E --> F
• Model Risk: The potential for a flawed or inappropriate model to generate erroneous results, possibly leading to misguided decisions.
• Structural Break: A substantial change in the underlying data-generating process, rendering prior relationships obsolete.
• Phillips Curve: Historically, a relationship suggesting inflation is inversely related to unemployment. This curve has been less reliable in recent decades.
• Out-of-Sample Testing: Evaluating model performance on data not used in model building, to reduce overfitting risk.
• MAE (Mean Absolute Error): The mean of absolute differences between forecasts and actual observations.
• RMSE (Root Mean Squared Error): The square root of the average of squared differences, penalizing larger misses.
• Bayesian Updating: A process of continuously adjusting beliefs or model parameters as new evidence becomes available.
• Confidence Interval: A probabilistic range around a forecast that indicates where the actual data might lie with a given level of certainty.
• Prepare for scenario-based questions that present a model’s forecasts and ask for interpretation or improvement strategies.
• Practice evaluating bias and variance trade-offs in forecasts—understanding how overfitting might exaggerate short-term accuracy but fail out-of-sample.
• Be ready to articulate the rationale for using multiple models (ensemble methods) and how to interpret confidence intervals.
• Study how to incorporate unexpected shocks into forecasts—examiners may give you hypothetical geopolitical or natural disaster scenarios.
• Always map your forecast discussion to potential portfolio implications. You might see a question about how forecast errors affect an asset allocation decision under uncertain inflation.