Explore in-depth how to employ t-tests and partial F-tests for validating regression coefficients, understand economic versus statistical significance, and master critical exam applications for CFA Level II.
Picture this: you’re sitting in an investment committee meeting, confidently presenting the results of your fancy multiple regression model that predicts, say, quarterly equity returns based on macroeconomic factors. Suddenly, one of your colleagues asks why you bothered including an interest rate variable because—according to them—there’s no evidence that it matters. Another colleague says, “I see your p‑values, but are you sure none of these factors are significant collectively?” And next thing you know, you’re knee‑deep in the world of individual t‑tests and partial F‑tests, trying to defend your model. If that sounds a little nerve-wracking, don’t worry: we’ve all been there.
In this section, we’ll dive into how to use hypothesis testing to determine which coefficients in your regression are statistically significant, both individually and as a group, and how to interpret these tests in a finance context. We’ll also chat about potential pitfalls, how to keep an eye on economic significance, and how you might see all of this show up in an exam vignette. Let’s dig in.
When we say “t‑test for individual coefficients,” we’re typically testing whether a given slope coefficient in the regression is zero (meaning the explanatory variable has no linear effect on the dependent variable). The standard setup is:

H₀: βᵢ = 0  vs.  H₁: βᵢ ≠ 0
Sometimes, we might be interested in the sign of the coefficient (positive or negative), in which case you might use a one‑sided test, such as H₀: βᵢ ≤ 0 vs. H₁: βᵢ > 0. But on the CFA exam, and in many textbook treatments, two‑sided tests are the norm unless otherwise indicated.
If β̂ᵢ is your estimated slope coefficient for variable Xᵢ, and SE(β̂ᵢ) is the standard error of that estimate, the t‑statistic is:

t = (β̂ᵢ − 0) / SE(β̂ᵢ)

(We put “0” in the numerator because that’s our hypothesized value under H₀.)
This statistic follows a t‑distribution with (n − k − 1) degrees of freedom, where n is the number of observations and k is the number of predictors. If the absolute value of t is greater than the critical t‑value (based on your chosen significance level and degrees of freedom), you reject H₀ and conclude that the coefficient is significantly different from zero.
• If p‑value < α (common α values are 0.01, 0.05, or 0.10), you reject the null hypothesis. Congratulations—your variable is statistically significant.
• If p‑value ≥ α, you fail to reject H₀. That doesn’t automatically mean the variable has zero effect; it just means you don’t have statistically convincing evidence that it’s nonzero.
Be careful, though. Failing to reject H₀ is not the same as “proving the coefficient is zero.” It merely suggests that, given your sample and method, you haven’t found strong enough evidence that βᵢ is different from zero.
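The decision rule above can be sketched in a few lines of Python. The coefficient estimate, standard error, and sample dimensions below are purely illustrative, not taken from any real regression output:

```python
from scipy import stats

# Hypothetical regression output for one slope coefficient
# (all numbers are illustrative assumptions).
beta_hat = 0.45      # estimated slope, beta-hat_i
se_beta = 0.18       # standard error SE(beta-hat_i)
n, k = 60, 3         # 60 observations, 3 predictors

t_stat = (beta_hat - 0) / se_beta                  # hypothesized value under H0 is 0
df = n - k - 1                                     # degrees of freedom = n - k - 1
p_value = 2 * (1 - stats.t.cdf(abs(t_stat), df))   # two-sided p-value
t_crit = stats.t.ppf(0.975, df)                    # critical value at alpha = 0.05

print(f"t = {t_stat:.2f}, df = {df}, p = {p_value:.4f}")
print("Reject H0" if abs(t_stat) > t_crit else "Fail to reject H0")
```

With these made-up inputs, t = 2.50 on 56 degrees of freedom, which exceeds the two-sided 5% critical value, so H₀ would be rejected.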
Let’s say you have a bunch of macroeconomic indicators in your model: inflation rate, GDP growth, unemployment rate… the works. You might individually test each coefficient’s significance with separate t‑tests. But maybe you suspect that these three macro factors together tell a cohesive story. Is there a scenario where they’re jointly significant even if individually they seem borderline?
This is exactly where partial F‑tests come into play. A partial F‑test lets you check whether a subset of coefficients (like the three macro variables) is jointly zero. Put more formally, for a subset of q coefficients:

H₀: all q coefficients equal zero  vs.  H₁: at least one of the q coefficients is nonzero
The partial F‑test compares two versions of your regression:
• Restricted Model (RM): The model without the subset of variables you’re testing (i.e., you restrict those coefficients to zero).
• Unrestricted Model (UM): The full model that includes the subset of variables.
You compute the residual sum of squares (RSS, or SSE) for each model. Then the partial F‑statistic is:

F = [(RSS_R − RSS_U) / q] / [RSS_U / (n − k − 1)]
Here,
• RSS_R = RSS of the restricted model,
• RSS_U = RSS of the unrestricted model,
• q = number of restrictions (i.e., the number of variables you’re jointly testing),
• n = total sample size,
• k = total number of predictors in the unrestricted model.
If the F value is sufficiently large, you reject H₀, suggesting the subset of variables in question is jointly significant.
If your partial F‑test indicates significance, it means at least one variable in the group matters—even if each individual t‑test did not clearly show significance. This often happens when variables within the subset are correlated with each other. So in practice, you might want to keep them in your model as a group instead of dropping them individually.
• Type I Error (False Positive): Rejecting a true null hypothesis. If you’re using a 5% significance level, that’s roughly a 5% risk of concluding something is significant when it’s actually not.
• Type II Error (False Negative): Failing to reject a false null hypothesis, missing a relationship that truly does exist.
In finance, Type I errors might lead you to include worthless factors (“noisy variables”) in your trading or asset allocation model. Type II errors might cause you to ignore genuinely price-relevant signals.
Sometimes we see a p‑value so tiny we want to dance with joy—like we found the Holy Grail of return predictors. But hold on. Let’s also look at the magnitude of that coefficient. Is it big enough to really matter in practice, or is it just barely shifting your portfolio’s risk‑return profile?
• If the coefficient is significant but the effect size is tiny, it might not move the needle in real investment decisions.
• On the other hand, some finance pros keep certain variables with borderline p‑values if domain knowledge strongly supports their inclusion.
I remember once running a regression that showed a small but statistically significant relationship between a specialized commodity index and stock returns. The effect was so minuscule that it wouldn’t cover trading costs. So sure, it was “statistically significant.” But from an economic perspective, it was a total snooze.
Let’s say you have the following model for monthly excess returns on a broad equity index:
Excess Return = α + β₁(Interest Rate) + β₂(Inflation) + β₃(Consumer Sentiment) + ε
• Individual t‑tests: suppose Interest Rate is clearly significant (p < 0.05), Consumer Sentiment is not, and Inflation is borderline, with a p‑value just above your chosen α.
• Partial F‑test: testing Interest Rate and Inflation jointly (q = 2), the F‑statistic exceeds its critical value, so you reject the hypothesis that both coefficients are zero.
Result? You keep both Interest Rate and Inflation in the model, given they jointly explain variation in returns, even though individually, Inflation was less impressive.
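To make the scenario concrete, the sketch below simulates data in the spirit of this model (all coefficients, correlations, and the random seed are assumptions for illustration) and runs the partial F‑test on Interest Rate and Inflation together, fitting each model by ordinary least squares with numpy:

```python
import numpy as np
from scipy import stats

# Simulated monthly data for the hypothetical three-factor model
# (effect sizes and the rate/inflation correlation are illustrative assumptions).
rng = np.random.default_rng(42)
n = 120
rate = rng.normal(size=n)                      # interest rate factor
infl = 0.8 * rate + 0.6 * rng.normal(size=n)   # inflation, correlated with rate
sent = rng.normal(size=n)                      # consumer sentiment
y = 0.5 * rate + 0.3 * infl + 0.1 * sent + rng.normal(size=n)

def rss(y, X):
    """Residual sum of squares from an OLS fit (intercept included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
X_full = np.column_stack([ones, rate, infl, sent])   # unrestricted model
X_rest = np.column_stack([ones, sent])               # rate and inflation dropped

q, k = 2, 3                                          # 2 restrictions, 3 predictors
f_stat = ((rss(y, X_rest) - rss(y, X_full)) / q) / (rss(y, X_full) / (n - k - 1))
p_value = 1 - stats.f.cdf(f_stat, q, n - k - 1)
print(f"Partial F = {f_stat:.2f}, p = {p_value:.4f}")
```

Because rate and inflation are correlated, each variable's individual t-stat can look shaky even while the joint test rejects decisively, which is exactly the situation in the vignette above.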
• Always articulate your hypotheses clearly. Are you doing a two‑sided or one‑sided test?
• Beware of data mining. Including too many variables can lead to false positives (Type I error).
• Check for multicollinearity. Highly correlated regressors may confuse your individual t‑tests while a partial F‑test might show they’re collectively important.
• Use both significance levels and effect sizes. Don’t get hypnotized by p‑values alone.
• In exam vignettes, watch for tables of regression output that present coefficient estimates, standard errors, or t‑statistics. They might ask you which variables are significant at a given α. Or they might show you two different versions of a model and nudge you to do the partial F‑test.
Below is a simple Mermaid diagram showing how you move from an unrestricted model to a restricted model by “dropping” certain coefficients to zero.
```mermaid
flowchart LR
A["Unrestricted Model <br/> Y = α + β₁X₁ + β₂X₂ + ... + βₖXₖ + ε"] --> B["Test significance of a subset <br/> (q variables)"]
B --> C["Restricted Model <br/> Y = α + β₁X₁ + ... + βⱼXⱼ <br/> (excluding the q dropped variables)"]
C --> D["Compute RSS_R vs. RSS_U <br/> Perform Partial F‑Test"]
```
You might find it helpful to revisit “3.1 Analyzing Goodness of Fit: R-Squared, Adjusted R-Squared” to see how overall model fit can complement your coefficient significance tests. Also, check “2.5 Identifying Violations from Residual Plots” for diagnosing assumptions that can affect your inference and significance tests.
• CFA Institute Level II Curriculum – Hypothesis Testing for Regression Coefficients
• Damodar N. Gujarati, “Basic Econometrics.” A classic guide covering advanced regression diagnostics and hypothesis testing.
• For deeper dives, see specialized academic papers on the interplay of partial F‑tests and multicollinearity in macroeconomic models.
• You’re likely to see multiple independent variables with partial significance. Be prepared to explain whether you’d keep or drop a variable.
• Know how to read a regression output table: watch the t‑stats, the standard errors, and the p‑values.
• Remember partial F‑tests for checking joint significance. This skill is especially important if you see “versions” of a regression in the item set.
• Manage your time: scanning the regression output for the relevant stats early in the vignette can save you from flipping pages back and forth.
Important Notice: FinancialAnalystGuide.com provides supplemental CFA study materials, including mock exams, sample exam questions, and other practice resources to aid your exam preparation. These resources are not affiliated with or endorsed by the CFA Institute. CFA® and Chartered Financial Analyst® are registered trademarks owned exclusively by CFA Institute. Our content is independent, and we do not guarantee exam success. CFA Institute does not endorse, promote, or warrant the accuracy or quality of our products.