Learn how to tackle limited and unreliable data in emerging and frontier markets. Explore alternative data sources, proxies, smoothing techniques, and advanced statistical tools to handle high volatility effectively.
So, you’re trying to value a company in an emerging or frontier market, but you’re stuck with lousy or limited data. Sound familiar? In many of these markets, official economic reports can lag by months or even years, annual statements don’t meet the same stringent requirements you might see in developed markets, and your typical forecast models often just go haywire because of all that volatility. Maybe you’ve tried the usual suspects—growth models, price multiples, or discounted cash flow—and ended up scratching your head because the figures are basically all over the place.
Today, we’re diving into some practical ways to deal with data scarcity and wild market swings. We’ll talk about everything from using proxy indicators and rolling averages to advanced statistical approaches like exponential smoothing or ARIMA. Even wavelet analysis (the fancy stuff!) can show up in sophisticated circles. But honestly, the main goal here is straightforward: help you make reasoned estimates in an environment where data is patchy, inconsistent, and often very volatile. Let’s roll right in.
You know, it’s one thing to deal with questionable data, but it’s another when data barely exists at all. In a typical frontier or emerging market, you might see:
• Limited Reporting Requirements: Some firms operate under a regulatory framework that isn’t as standardized. They might not publish quarterly financials, and sometimes they skip detailed notes in their annual reports.
• Patchy Operating Histories: Many companies are young or have only been listed for a few years. Historical financials might only cover one or two economic cycles—if that.
• Unreliable Macroeconomic Indicators: GDP growth figures could be revised multiple times, inflation data might be subject to political influences, and sometimes the local central bank’s stats are just incomplete.
The bottom line? You can’t rely on the same robust datasets you might expect in, say, the U.S. or Europe. And guess what happens when you try to apply typical ratio analysis or fundamental forecasting? You get questionable results that might not stand up to real-world conditions.
When the usual data well is dry, you’ve got to get resourceful. That often means tapping into alternative data sources or using proxies. Here are a few ideas:
Maybe the firm you’re analyzing won’t give you updated revenue figures. However, you can sometimes approximate their performance from broader regional trends. Let’s say you’re trying to figure out growth rates for a startup microfinance institution in sub-Saharan Africa—if you can at least find good data on regional loan growth or average microfinance penetration, you might infer a ballpark figure for the company’s expansion.
• Example: Using local GDP growth or consumer spending data from a bigger region as a stand-in for a single firm’s revenue trend.
• Caution: Always consider your proxy’s correlation to the specific firm. Some proxies may be misleading if the company’s business doesn’t align well with the regional average.
We live in an age of big data, so you can often gather info from unusual places. Satellite images of parking lots (popular in retail analysis), mobile phone usage data, or even local news scrapes can paint a picture of consumer behavior. I’ve personally come across scenarios where an analyst used freight shipping data to estimate a local port operator’s performance. Sounds a bit crazy, I know, but that’s the nature of frontier markets sometimes.
• Satellite Imagery: Track store traffic, farmland usage, or building development.
• Mobile Transactions: Estimate how many daily transactions a firm might be handling if it’s in fintech or e-payments.
• Interviews and Industry Journals: Sure, they might be a bit subjective, but talking to the actual management team or reading specialized local journals can help refine your assumptions.
Let’s face it: even if you scrounge up data, the numbers in emerging or frontier markets often bounce around like a hyperactive dog. Inflation can spike unexpectedly, currency exchange rates can go haywire, and political instability might cause huge capital inflows or outflows at the drop of a hat.
A common technique is applying rolling averages or trailing data windows. For instance, you might compute a 3-year rolling average for earnings instead of using last year’s earnings. This helps smooth out short-term noise, especially in sectors prone to cyclical peaks and troughs.
If you want to put it into a neat formula, you could represent a trailing rolling average of length \( m \) as follows:

\[
\bar{X}_{t} = \frac{1}{m} \sum_{i=0}^{m-1} X_{t-i}
\]

where \( X_{t} \) is the value of the variable (like earnings or cash flow) at time \( t \), and \( \bar{X}_{t} \) is the smoothed value.
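In code, a trailing rolling average is just a few lines. This sketch uses hypothetical annual earnings to show how a 3-year window damps the swings:

```python
# Trailing m-period rolling average (hypothetical earnings figures).
def rolling_average(values, m):
    """Entry i averages values[i-m+1 .. i], so each point is a trailing mean."""
    return [sum(values[i - m + 1 : i + 1]) / m
            for i in range(m - 1, len(values))]

earnings = [10.0, 18.0, 7.0, 15.0, 11.0]  # volatile annual earnings
smoothed = rolling_average(earnings, 3)
print(smoothed)  # three 3-year averages replace five noisy data points
```

Note the trade-off: you lose the first \( m - 1 \) observations, which matters when your history is only a few years long to begin with.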
All too often, I’ve seen analysts come up with just one “predictive” scenario, only to be left red-faced when actual results deviate drastically. In high-volatility markets, you really want to create multiple potential outcomes—like a best-case, worst-case, and base-case scenario. Then you can see how different assumptions (e.g., inflation at 5% vs. 15%) change the overall valuation.
• Sensitivity Analysis: Tweak one variable at a time (like discount rate or revenue growth) and see what happens.
• Scenario Analysis: Combine multiple uncertain inputs (e.g., inflation, commodity prices, regulatory changes) to build coherent pictures of the future.
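Here’s a compact sketch of the idea: a toy present-value function run under best-, base-, and worst-case assumptions. All the growth and discount numbers are hypothetical, chosen only to show how inflation assumptions swing the result:

```python
# Scenario sketch: value a growing cash-flow stream under different
# inflation-driven assumptions (all inputs hypothetical).
def present_value(cash_flow, growth, discount, years=5):
    """PV of a cash-flow stream growing at `growth`, over a finite horizon."""
    return sum(cash_flow * (1 + growth) ** t / (1 + discount) ** t
               for t in range(1, years + 1))

scenarios = {
    "best (inflation 3%)":   {"growth": 0.10, "discount": 0.10},
    "base (inflation 5%)":   {"growth": 0.08, "discount": 0.12},
    "worst (inflation 15%)": {"growth": 0.04, "discount": 0.22},
}

for name, s in scenarios.items():
    pv = present_value(100.0, s["growth"], s["discount"])
    print(f"{name}: PV = {pv:.1f}")
```

Even this toy version makes the point: the spread between the best and worst cases is often more informative than any single “predictive” number.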
When you can’t pin down a single approach, it sometimes works best to average multiple valuation techniques. Maybe a discounted cash flow (DCF) analysis suggests \( \$100 \) per share, a comparable P/E method suggests \( \$120 \), and a price-to-book approach yields \( \$90 \). You might decide to weight each approach differently (for example, by your confidence in the underlying data) and come up with a blended figure.
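The blend itself is just a weighted average. Using the three figures above, with illustrative confidence weights (the 50/30/20 split is an assumption, not a rule):

```python
# Confidence-weighted blend of three valuation estimates per share:
# DCF $100, comparable P/E $120, price-to-book $90 (weights are subjective).
estimates = {"dcf": 100.0, "pe": 120.0, "ptb": 90.0}
weights   = {"dcf": 0.5,  "pe": 0.3,  "ptb": 0.2}   # must sum to 1.0

blended = sum(estimates[k] * weights[k] for k in estimates)
print(f"Blended value: ${blended:.2f} per share")  # prints $104.00
```

The weights are where your judgment lives: the shakier a method’s inputs, the smaller its weight.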
If you’ve got a little more time (and the inclination), there are more advanced ways to incorporate and forecast messy data. Let’s take a quick stroll through some popular choices:
A simple moving average is basically a rolling average, but exponential smoothing adds an extra twist: it places more weight on recent data points. This makes sense in volatile environments where last month’s data might be more relevant than, say, data from two years ago.
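A minimal implementation makes the recursion clear: each smoothed value is a mix of the newest observation and the previous smoothed value. The series and the \( \alpha = 0.5 \) setting are illustrative (production work would typically reach for pandas’ `ewm` or a statsmodels smoother):

```python
# Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
# A higher alpha puts more weight on recent observations.
def exp_smooth(values, alpha=0.5):
    smoothed = [values[0]]           # initialize with the first observation
    for x in values[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

series = [10.0, 18.0, 7.0, 15.0, 11.0]  # hypothetical volatile series
print(exp_smooth(series, alpha=0.5))
```

Compare this with the rolling average: instead of dropping old data at a cliff edge, exponential smoothing fades it out gradually.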
Autoregressive Integrated Moving Average (ARIMA) models are a class of time-series forecasting tools that can handle different forms of stationarity. They can be particularly useful when you have enough historical data, but that’s the rub in emerging markets—often you don’t have a lot to play with. Still, if there is enough data, ARIMA can help produce forecasts that factor in autocorrelation (how past data influences future values), trends, and seasonality.
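To see the autoregressive idea without a full library, here’s a bare-bones AR(1) fit by least squares: tomorrow’s value is modeled as \( c + \phi \, x_{t-1} \). A real ARIMA(p, d, q) fit would normally use something like statsmodels; this sketch (with a made-up series) just shows how past values feed the forecast:

```python
# Minimal AR(1) sketch: x_t ~ c + phi * x_{t-1}, fit by ordinary least squares.
def fit_ar1(series):
    x_prev, x_next = series[:-1], series[1:]
    n = len(x_prev)
    mean_prev = sum(x_prev) / n
    mean_next = sum(x_next) / n
    cov = sum((a - mean_prev) * (b - mean_next) for a, b in zip(x_prev, x_next))
    var = sum((a - mean_prev) ** 2 for a in x_prev)
    phi = cov / var                      # slope: persistence of the series
    c = mean_next - phi * mean_prev      # intercept
    return c, phi

series = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0]  # hypothetical data
c, phi = fit_ar1(series)
forecast = c + phi * series[-1]                # one-step-ahead forecast
print(f"phi = {phi:.2f}, next-period forecast = {forecast:.2f}")
```

With only six observations this is fragile—which is exactly the point made above about data depth in emerging markets.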
OK, wavelets might sound a bit out there, but they can separate time series data into various frequency components—kind of like sifting out the shorter-term noise from the big-picture trend. This might be overkill in many practical settings, but in academic or advanced research roles, wavelet analysis can provide insights into how volatility changes over different timescales.
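The simplest possible illustration is one level of a Haar decomposition: split the series into pairwise averages (the trend) and pairwise differences (the noise). Real wavelet work iterates this over multiple levels with richer wavelets (e.g., via the PyWavelets library); the series here is hypothetical:

```python
# One level of a Haar wavelet decomposition: coarse trend (pairwise
# averages) plus detail (pairwise half-differences). Each pair can be
# reconstructed exactly: value = trend +/- detail.
def haar_level(values):
    trend  = [(a + b) / 2 for a, b in zip(values[0::2], values[1::2])]
    detail = [(a - b) / 2 for a, b in zip(values[0::2], values[1::2])]
    return trend, detail

series = [10.0, 18.0, 7.0, 15.0, 11.0, 13.0, 9.0, 17.0]
trend, detail = haar_level(series)
print("trend: ", trend)   # the big-picture component
print("detail:", detail)  # the short-term noise component
```

Notice the trend series is half as long and much calmer—that separation of timescales is what wavelet analysis formalizes.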
```mermaid
graph LR
  A["Identify Data Scarcity"] --> B["Use Proxies <br/> & Alternative Data"]
  B --> C["Employ Rolling Averages <br/> or Other Smoothing"]
  A --> D["Apply Scenario <br/> & Sensitivity Analyses"]
  D --> C
  C --> E["Blended Valuation <br/> Approaches"]
  E --> F["Advanced <br/> Statistical Techniques"]
```
In the mermaid diagram above, you can see a conceptual flow: first, you recognize data scarcity, then you gather proxies or alternative data. After that, you smooth volatility using rolling averages, run scenario analysis, blend your valuations, and only then consider advanced statistical models if you have enough data depth or expertise.
Let’s bring this to life with a brief scenario. Imagine you’re analyzing Amilli Agro, a small frontier market agricultural firm that’s only been public for three years:
• Historical Data (3 years)
• Government Crop Yield Stats (delayed by 6 months and frequently revised)
• International Agencies’ Country Reports (broad, not specific to this sub-district)
• Management Interviews (optimistic, with minimal supporting detail)
You could approach this exactly as we’ve discussed: use the regional crop statistics and agency reports as proxies for the firm’s own missing figures, smooth the three years of earnings with a rolling average, discount management’s optimism by cross-checking it against the external data, and run best-, base-, and worst-case scenarios before blending your valuation approaches.
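Putting two of those steps together, here’s a sketch for the (fictional) Amilli Agro: smooth the short earnings history, then probability-weight a few growth scenarios. Every number—earnings, growth rates, probabilities—is an assumption for illustration:

```python
# Hypothetical Amilli Agro sketch: smooth three volatile listed years,
# then weight growth scenarios informed by (revised) crop-yield data.
earnings = [4.0, 9.0, 5.0]                     # three listed years
base_earnings = sum(earnings) / len(earnings)  # 3-year average = 6.0

# (name, growth rate, subjective probability) -- all assumed figures.
scenarios = [("worst", -0.05, 0.25), ("base", 0.06, 0.50), ("best", 0.15, 0.25)]

expected_next = sum(base_earnings * (1 + g) * p for _, g, p in scenarios)
print(f"Probability-weighted next-year earnings: {expected_next:.2f}")
```

The smoothed base keeps one freak harvest year from dominating the forecast, and the scenario weights are where the qualitative inputs (interviews, agency reports) actually enter the model.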
• Over-Reliance on a Single Data Point: Don’t entirely trust the “official” government inflation figure if your own research indicates it’s understated.
• Failure to Adjust for Political Risk: Sometimes data looks good until a sudden political event changes rules for foreign investors or seizes assets. Incorporate that risk in your discount rate or scenario analysis.
• Excessive Complexity: It’s tempting to go nuts with advanced models. But if your data is super noisy or incomplete, simpler approaches—like rolling averages and scenario analysis—might actually produce more realistic outcomes.
• Show your work. On a vignette-based item set, you’ll often have to walk through how you arrived at certain assumptions, especially if the question is testing whether you know how to handle incomplete data.
• Expect “trick” references to data that might be out-of-date or inconsistent. The exam loves to see if you can figure out which data is actually reliable.
• Always, always consider qualitative inputs. If you see a question about an analyst visiting the site and noticing unreported expansions, that’s a clue you might need to adjust your model or shift your scenario weighting.
• Bodie, Z., Kane, A., & Marcus, A.J. (2017). Investments (Global Edition). Chapters on international investing.
• CEIC Data: https://www.ceicdata.com/ – Provides emerging market macroeconomic data.
• The Journal of Emerging Market Finance – For academic research on frontier market volatility and analysis.
• CFA Institute Code and Standards – Always relevant for ethical and professional considerations in research.
Important Notice: FinancialAnalystGuide.com provides supplemental CFA study materials, including mock exams, sample exam questions, and other practice resources to aid your exam preparation. These resources are not affiliated with or endorsed by the CFA Institute. CFA® and Chartered Financial Analyst® are registered trademarks owned exclusively by CFA Institute. Our content is independent, and we do not guarantee exam success. CFA Institute does not endorse, promote, or warrant the accuracy or quality of our products.