Reading a TuneMap: Stock Behavior, Market Context, and Validation
- Vincent D.

- Aug 9
I’ve seen many recent discussions on our forum about validating some of the strategies that appear in a stock’s TuneMap optimal region by testing them on periods outside the one used to create the map. The idea of checking how a given strategy might perform — or has performed — in different situations is excellent. However, it’s also challenging and requires a solid understanding of the nuances and caveats of optimization.
As I mentioned in my initial post about TuneMap: “Strategy optimization is a broad topic with several nuances. To keep things accessible, I’ll focus on the practical aspects mentioned above. For those who are more enthusiastic and eager to dive deeper into the underlying theory, I’ll share a dedicated post later on.”
Perhaps this post will serve as Part 1 of that “dedicated post later on.” Here, we’ll focus on gaining a clearer understanding of what the map is really showing us, the underlying factors that shape it, and the conclusions we can draw from it.
TuneMap Optimal Map
First, let’s agree on one thing: TuneMap is exactly what its name suggests — a map of the returns associated with all possible parameter configurations for simple technical strategies, calculated over the period from May 2020 to December 31, 2024. It allows us to visualize which configurations for different indicators would have performed best during that time.
Several of you have validated these results in TradingView (we even provided a strategy script for you), and the outcomes matched closely. I say “closely” because there are minor differences — notably in how we start counting buy-and-hold performance and how we stop counting it. Keep in mind that TradingView only starts its buy-and-hold calculation on a buy signal and never stops it until the very last calendar day.
Now, let’s move to a principle that always holds true in investing: past performance is not a guarantee of future returns. In practice, this means you should generally expect the return from a strategy in the optimal zone of a map to be lower than what you see in TuneMap. Yes, occasionally you might beat those returns, but more often you’ll underperform them. There are exceptions — for example, in difficult markets like bear markets, you might systematically outperform the TuneMap results. I’ll come back to that point later in this post.
So, does this mean that analyzing the best way to hold a stock in the past is useless for the future? Not at all — but its value depends on several factors.
Factors Shaping a Map
A map is the result of two combined influences: the stock’s own behavior and the broader market environment. Let’s start with what I mean by stock behavior.
Stock Volatility
Each stock has an intrinsic volatility, which is closely related to its beta. In other words, some stocks often swing in the 1–3% daily range, while others tend to move in a narrower 0.5–1% range. Think about the difference between holding a steady stock like Procter & Gamble versus a high-flyer like Tesla.
The more volatile a stock is, the more frequently it tends to alternate between uptrends and downtrends. In such cases, if we take a two-EMA crossover strategy as an example, the optimum often lies in a more defensive zone — meaning shorter fast and slow EMA periods that cross over and under more often, providing quicker protection. For instance, here’s Tesla over our study period: a highly volatile stock that went through massive stretches of both strength and weakness, with extreme moves in both directions.

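For reference, here is a minimal sketch of the kind of two-EMA crossover sweep that sits behind these maps. It is not the TuneMap implementation: the Series name `close`, the grid ranges, and the decision to ignore trading costs are all illustrative assumptions.

```python
import pandas as pd

def strategy_return(close: pd.Series, fast: int, slow: int) -> float:
    """Total return of a long/flat two-EMA crossover strategy (no trading costs)."""
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    # Long while the fast EMA is above the slow EMA; shift one day to avoid lookahead.
    in_market = (ema_fast > ema_slow).shift(1, fill_value=False)
    daily_ret = close.pct_change().fillna(0.0)
    return float((1.0 + daily_ret[in_market]).prod() - 1.0)

def build_map(close: pd.Series,
              fast_range=range(5, 55, 5),
              slow_range=range(20, 220, 20)) -> pd.DataFrame:
    """Grid of strategy growth relative to buy-and-hold, one cell per (fast, slow) pair."""
    bnh_growth = float(close.iloc[-1] / close.iloc[0])
    grid = pd.DataFrame(index=list(fast_range), columns=list(slow_range), dtype=float)
    for fast in fast_range:
        for slow in slow_range:
            if fast < slow:
                grid.loc[fast, slow] = (1.0 + strategy_return(close, fast, slow)) / bnh_growth
    return grid

# Example usage with your own data source:
# close = pd.read_csv("TSLA.csv", index_col=0, parse_dates=True)["Close"]
# print(build_map(close).round(2))   # values above 1.0 beat buy-and-hold
```

A value above 1.0 in a cell means that (fast, slow) pair would have ended the period with more capital than simply holding the stock — which is essentially what the colors on a TuneMap encode.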
By contrast, a stock that steadily climbs with little drama will usually have a map showing much longer fast and slow EMA periods. These settings trigger far fewer crossovers, only exiting the market during major downturns. Is there any stock that better embodies the “just hold it” approach over the last five years than NVDA? Here’s its map:

Stock behavior isn’t fixed over time. Generally, the younger the stock, the more speculative and volatile it is. Take Google’s map from its IPO in 2004 up to just before the Global Financial Crisis (GFC):

If you remember that period, it was one of the easiest markets in history — low volatility and a steady uptrend. Yet despite this calm environment, Google’s map was still somewhat defensive.
Now compare that to the 2020–2025 period, a good stretch for Google with strong price appreciation:

Even with a tougher year for the market in 2022, the map suggests simply HODLing the stock or being only minimally protective.
Market
How the overall market performs will also have a significant impact on a map’s shape and the returns it shows. Take a look at Google’s map during the Global Financial Crisis (GFC):

It shows that being extremely defensive during that period would have massively outperformed buy-and-hold (by a factor of about 1.8!). Is that surprising? Not really. In a market that fell nearly 50% — and with Google itself dropping around 68% during the GFC — getting out quickly saved investors from a lot of pain.
This is an extreme example where the map’s message is driven almost entirely by the market environment rather than the stock’s unique behavior. In fact, for that period, nearly every stock’s map looks the same. But in most periods you choose to build a map, you’ll get a combined effect of the stock’s own behavior and the broader market trend.
For example, look at Google’s map for January 2016 to April 2020:

Here, you still see the same middle optimum, suggesting a slightly defensive stance, but the “just hold it” zone from the 2020–2025 map is gone. Is that surprising? Not really. This period included a notable market drop in January 2018, a massive correction in fall 2018 that nearly pushed markets into bear territory, a sharp correction for cloud stocks in September 2019, and, of course, the infamous COVID crash. In such an environment, being somewhat defensive was appropriate. Still, choosing the optimum zone from the 2016–2020 map would have worked well in the 2020–2025 period, showing that Google’s underlying behavior stayed relatively consistent across both periods.
This 2016–2020 example had a mix of sharp downtrends and strong uptrends. But the most challenging market for building a strategy is a sideways market. These are less common, with the closest recent examples being 2011–2012 and 2015–2016:

In these years, the market mostly moved sideways, with mild corrections that failed to rebound beyond the sideways range. These environments are particularly tough because strategies can’t protect you from much downside, but also struggle to capture meaningful upside. The result is capital erosion as you repeatedly exit at lower prices than you re-enter. Even more complex strategies, such as our Hedge signal, don’t shine in these conditions.
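To put rough, purely hypothetical numbers on that erosion: if each whipsaw has you exiting about 2% below the price at which you later re-enter, ten such round trips compound to roughly 0.98^10 ≈ 0.82 of your capital — an ~18% drag even if the stock itself ends the period flat.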
For our Google example, here’s the map:

Notice how there’s no clearly defined optimal zone — and more importantly, how even the best-performing settings still significantly underperform buy-and-hold.
What Makes a Good Period?
Now that we understand the two factors shaping a map’s pattern, we can better define what makes a “good” period and what makes a “bad” one.
When it comes to a stock’s intrinsic behavior, going too far back in time usually doesn’t make sense. As we saw, Google from 2004–2007 was very different from the much more mature Google of today. On the other hand, Google from 2016–2020 wasn’t significantly different from the Google we know now.
Tesla is a good counterexample. Before it gained traction as the leader of the 2020 EV boom, Tesla in 2016–2020 was not the same company — in business profile or market behavior — as it is today. This leads us to an important point: there is no single “universal” truth for how far into the past or future the 2020–2025 historical map remains a good predictor. Instead, the right question is: Is the behavior of this stock during our historical period still relevant today, and how far back has it been consistent?
Answering that question is usually not too difficult. Studying the stock’s history often gives a good sense. One useful tool is to look at historical realized volatility. To help with this, I’ll be providing a script this week that calculates volatility using the well-regarded Yang–Zhang method. But sometimes, just knowing the company’s story is enough.
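In the meantime, here is a minimal sketch of the Yang–Zhang estimator — not the script we’ll be sharing, just a reference implementation. The column names, the 21-day window, and the 252-day annualization are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def yang_zhang_vol(df: pd.DataFrame, window: int = 21, trading_days: int = 252) -> pd.Series:
    """Rolling annualized Yang-Zhang realized volatility from daily OHLC data."""
    o, h, l, c = df["Open"], df["High"], df["Low"], df["Close"]

    log_oc = np.log(o / c.shift(1))          # overnight (close-to-open) return
    log_co = np.log(c / o)                   # open-to-close return
    # Rogers-Satchell term: drift-independent intraday variance
    rs = np.log(h / c) * np.log(h / o) + np.log(l / c) * np.log(l / o)

    sigma_o2 = log_oc.rolling(window).var()   # overnight variance
    sigma_c2 = log_co.rolling(window).var()   # open-to-close variance
    sigma_rs2 = rs.rolling(window).mean()     # Rogers-Satchell variance

    k = 0.34 / (1.34 + (window + 1) / (window - 1))
    sigma_yz2 = sigma_o2 + k * sigma_c2 + (1 - k) * sigma_rs2
    return np.sqrt(trading_days * sigma_yz2)

# Example usage:
# df = pd.read_csv("GOOG.csv", index_col=0, parse_dates=True)
# print(yang_zhang_vol(df).tail())
```

Plotting this series over the years gives a quick, quantitative read on whether a stock’s behavior today still resembles its behavior in the period you want to use for validation.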
For example, remembering how dominant Google was in 2016–2020 and how dominant it remains today tells me it’s essentially the same stock. Indeed, if we look at a map from 2023 to the end of 2024 — or extend it to last week — we see a very similar pattern:

Naturally, the map that extends to August of this year places less emphasis on the upper HODL zone. That’s expected, given that the April correction (absent in the original map) had a strong impact. Here’s a strategy associated with a random point from the optimum middle zone:

It performed reasonably well during that correction, even though the correction wasn’t part of the original historical period.
In summary: going far back in time for the historical period — or for “out-of-sample” validation — often doesn’t make sense, because the stock’s behavior has likely changed too much. This is why we don’t use the entire history of a stock to create the maps. Here’s what Google’s map from its IPO would look like if we did:

The highlighted zones correspond to strategies that were relevant in Google’s early years, when compounding amplified the effect of early gains. Back then, a successful trade could compound for 20 years. But note that the overall return relative to buy-and-hold is very low — normal, because while those early strategies worked well in Google’s infancy, they no longer fit its behavior over the last decade.
Beyond the evolution of the stock itself, we’ve also discussed how the underlying market trend shapes the map. Following the same logic:
- Using a sideways market as the historical period won’t yield good results.
- Using a strong downtrend period like the GFC will let the market trend overpower the stock’s own behavior, resulting in ultra-defensive strategies that won’t work well in bull years.
This is why I believe April 2020 to January 1, 2025 is a fantastic historical period. Few five-year spans have offered such diversity in market conditions:
- The euphoric, post-COVID melt-up, reminiscent of pre-GFC or dot-com bubble surges, ending around Christmas 2021 (or ~8 months earlier for riskier growth stocks).
- A solid one-year bear market — something that occurs roughly once a decade.
- A narrow recovery starting in late 2022, with leaders surging while much of the market stayed beaten down (low breadth).
- Gradual improvement in breadth through 2023–2024.
- A final six months of high volatility to close the period.
This mix is so varied that it’s hard to overfit to any single type of market behavior. My concern is that when we eventually have to update the map to keep pace with evolving stock behavior, we may not get a period as diverse as 2020–2025. That’s a challenge for the future — but one we already have some solutions in mind for.
Out-of-Sample Validation
Since we can’t go too far back in time — as older samples may no longer be relevant — how can we validate our strategy? This is a well-known problem in quantitative analysis, and one of the most widely accepted solutions is Monte Carlo simulation.
Before getting into that, let’s note that we are currently in a somewhat lucky situation. The sharp correction from February to April of this year gives us an excellent, highly relevant out-of-sample opportunity. After all, these are exactly the kinds of drops we want to be protected against. Looking at how the optimal zone performed during this decline can provide valuable insight. It can also help you decide between zones close to the optimum — perhaps choosing one that doesn’t deliver the absolute highest historical return but offers better protection in this type of downturn.
Monte Carlo simulation is another powerful validation tool. In this context, it’s a way of exploring many possible “futures” for a stock to see how a trading approach might perform under different market conditions. Using the GJR-GARCH model of a stock’s behavior — which mimics how real prices move and how volatility changes over time — we generate 1,000 different five-year price paths. Each path is like a parallel universe where the stock experiences its own ups and downs.
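As a rough illustration of that workflow — not our exact pipeline — here is a sketch using the `arch` package: fit a GJR-GARCH(1,1) model to daily log returns, then simulate five-year return paths from the fitted parameters. The path count and horizon mirror the description above; the Student-t distribution, the percent scaling, and the data loading are my assumptions.

```python
import numpy as np
import pandas as pd
from arch import arch_model

def simulate_paths(close: pd.Series, n_paths: int = 1000, n_days: int = 1260) -> list[pd.Series]:
    """Fit GJR-GARCH(1,1) to daily log returns and simulate ~5-year price paths."""
    # Percent log returns; scaling by 100 keeps the optimizer well-conditioned.
    rets = 100 * np.log(close / close.shift(1)).dropna()

    # o=1 adds the asymmetric (leverage) term that turns plain GARCH into GJR-GARCH.
    model = arch_model(rets, mean="Constant", vol="GARCH", p=1, o=1, q=1, dist="t")
    fitted = model.fit(disp="off")

    # A data-free model with the same specification, used purely for simulation.
    sim_model = arch_model(None, mean="Constant", vol="GARCH", p=1, o=1, q=1, dist="t")

    paths = []
    for _ in range(n_paths):
        sim = sim_model.simulate(fitted.params, nobs=n_days)
        sim_rets = sim["data"] / 100.0                      # back to plain log returns
        prices = float(close.iloc[-1]) * np.exp(sim_rets.cumsum())
        paths.append(prices)
    return paths

# Example usage with your own data source:
# close = pd.read_csv("GOOG.csv", index_col=0, parse_dates=True)["Close"]
# paths = simulate_paths(close)
```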
Here are two examples of the realized volatility over time for some simulated trajectories, showing how accurately the model reflects real Google behavior:

We then test the same technical trading parameters on all these scenarios to see how they would have performed on average, in the best cases, and in the worst cases. This gives a far more realistic sense of how robust a strategy might be, instead of relying only on the single historical path that actually happened.
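Continuing the sketch above, evaluating one parameter pair across all simulated paths might look like the following. The `paths` input is the output of the hypothetical `simulate_paths` helper, the strategy is the same two-EMA crossover as earlier, and the percentile choices are mine rather than TuneMap’s reporting format.

```python
import numpy as np
import pandas as pd

def evaluate_over_paths(paths: list[pd.Series], fast: int, slow: int) -> dict:
    """Distribution of strategy-vs-buy-and-hold outcomes across simulated price paths."""
    ratios = []
    for close in paths:
        ema_fast = close.ewm(span=fast, adjust=False).mean()
        ema_slow = close.ewm(span=slow, adjust=False).mean()
        in_market = (ema_fast > ema_slow).shift(1, fill_value=False)
        daily_ret = close.pct_change().fillna(0.0)
        strat_growth = float((1.0 + daily_ret[in_market]).prod())
        bnh_growth = float(close.iloc[-1] / close.iloc[0])
        ratios.append(strat_growth / bnh_growth)   # >1 means the strategy beat buy-and-hold
    r = np.array(ratios)
    return {
        "mean": r.mean(),
        "worst_5pct": np.percentile(r, 5),
        "best_5pct": np.percentile(r, 95),
        "share_beating_bnh": float((r > 1.0).mean()),
    }

# Example usage, given `paths` from the simulation sketch:
# print(evaluate_over_paths(paths, fast=20, slow=60))
```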
If a stock has historically had a strong uptrend, running hundreds or thousands of five-year simulations will naturally highlight that simply holding the stock is often the best choice. But if the model is sound, the zones that historically performed well should also remain strong, while “false positives” caused by unique quirks in the historical data (think of my Crowdstrike example) will tend to disappear.
For example, here’s a raw map for Google generated from 100 GJR-GARCH trajectories:

As expected, the middle optimum zone stands out clearly. This simulated map is then combined with the historical map using our proprietary algorithm to produce what we call our Simulation Filter.
Because we simulate many paths, the result naturally leans toward the overall long-term trend. In Google’s case, that’s been a broadly positive one. As I mentioned in my initial TuneMap post, we plan to add a stress test feature in the future. This would allow you to isolate and analyze only simulated bearish paths to see how strategies perform in bad markets, as well as paths that move mostly sideways. We’re also considering a mode that isolates historical downtrends to show how your strategy would have performed — with statistics calculated — in each case.
Until then, Monte Carlo simulation and the resulting Simulation Filter remain powerful tools to highlight which zones on the historical map are most likely to continue performing well in the future, assuming the stock’s intrinsic behavior stays similar.
Conclusion
It’s unlikely that the past historical return of the exact optimal point in TuneMap will repeat itself perfectly. However, if the stock’s behavior remains very similar, parameters within the optimal zone should continue to outperform those in other zones — especially if your choice also takes the Simulation Filter into account.
If you believe the stock was at roughly the same stage of maturity in the 2016–2020 period, you can use that timeframe for additional validation. It offers good market diversity. While you may see lower returns, you should still find that your chosen parameters are close to the best configuration for that period. However, be mindful of the starting date you choose. If it falls during a strong bull trend, the crossover may already have occurred before your start date, meaning you could go an entire year without a trading signal if the trend persists, which distorts the strategy’s measured performance.
In any case, reviewing how your chosen zone behaved during the recent February–April correction can also provide valuable insight.
Of course, we can’t know the future — and that uncertainty is something every investor must accept. That said, simple technical strategies like the two-EMA approach used in this post are harder to overfit. If your strategy disengaged from the market in a reasonable way during the recent correction, chances are it will behave similarly in the next market downturn. The bigger challenge is avoiding excessive disconnections during strong uptrends. Unless the stock’s volatility changes dramatically, recent historical performance in bull phases should still provide useful guidance.
I know being able to generate maps for every possible period, as I’ve shown in parts of this post, would be incredibly interesting. But at the moment, it would require more computing power than we currently have available.
Until then, choose a strategy that fits who you are as an investor. Personally, I have a high tolerance for market risk overall, but less so when holding individual stocks. I choose parameters that reflect that. I might only hold blindly if I ever see a pattern like this:

…with the hope that it eventually morphs into this:
