
Monopoly money: Beware investment strategies that haven't been tried with real money. (Photo: Alamy)

You probably know, because you've read the boilerplate disclaimer in mutual fund ads, that past performance of an investment strategy is no indicator of future results.

And yet, funnily enough, nearly everyone in the investment business cites past results, especially the good results. Evidence that an investment strategy actually worked is a powerful thing, even if one knows intellectually that yesterday's winners are more often than not tomorrow's losers. At the very least, it suggests that the strategy isn't merely a swell theory—it's been tested in the real world.

Except that sometimes you can't take the "real world" part for granted.

Just before Christmas, an investment adviser called F-Squared Investments settled with the Securities and Exchange Commission, agreeing to pay the government $35 million. According to the SEC, F-Squared had touted to would-be clients an impressive record for its "AlphaSector" strategy: a cumulative return of 135% from 2001 to 2008, compared with 28% for an S&P 500 index. Just two problems:

First, contrary to what some of F-Squared's marketing materials said, the AlphaSector numbers for this period were based solely on a hypothetical "backtest," and there was no real portfolio investing real dollars in the strategy. In other words, after the fact, F-Squared calculated how the strategy would have performed had someone had the foresight to implement it. Underscoring how abstract this was, the backtest record spliced together three sets of trading rules deployed (hypothetically) at different times. The third trading model, which was assumed to go into effect in 2008, was developed by someone who, the SEC noted in passing, would have been 14 years old at the beginning of the whole backtest period, in 2001. (The AlphaSector product was not launched until late 2008; its record since it went live is not in question.)

Second, even the hypothetical record was inflated, says the SEC. The F-Squared strategy was to trade in and out of exchange-traded funds based on "signals" from changes in the prices of the ETFs. But F-Squared's pre-2008 record incorrectly assumed the ETFs were bought or sold one week before those signals could possibly have flashed. The performance, in the SEC's words, "was based upon implementing signals to sell before price drops and to buy before price increases that had occurred a week earlier." Not surprisingly, a more accurate version of even the hypothetical strategy would have earned only 38% cumulatively over about seven years, not 135%.

Call it a woulda, shoulda—but not coulda—track record.
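
If you want to see how much a one-week head start is worth, here's a minimal Python sketch. It is not F-Squared's actual model: the simulated prices and the simple moving-average signal are placeholders of my own. It backtests the same rule twice, once honestly, trading only after the signal exists, and once "peeking," with trades placed a week before the signal flashes.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated weekly ETF returns over roughly a 2001-2008-sized window.
# Pure placeholder data -- not real market history.
weeks = 8 * 52
rets = pd.Series(rng.normal(0.001, 0.02, weeks))
prices = (1 + rets).cumprod()

# A hypothetical trend signal: hold the ETF while its price sits
# above its 10-week moving average, otherwise stay in cash.
signal = (prices > prices.rolling(10).mean()).astype(float)

# Honest timing: this week's position follows LAST week's signal,
# because you cannot trade on a signal before it exists.
honest = (signal.shift(1) * rets).fillna(0.0)

# Look-ahead timing: trades go on a week before the signal flashes,
# capturing the very price move that triggered it -- the flaw the
# SEC described in F-Squared's backtest.
peeking = (signal * rets).fillna(0.0)

for name, r in [("honest", honest), ("look-ahead", peeking)]:
    print(f"{name:>10}: {(1 + r).prod() - 1:+.0%} cumulative")
```

Run it and the peeking version typically shows a markedly fatter cumulative return, even though both lines trade the identical rule on identical data.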

Steve Gandel at Fortune has been following this story for some time and has a full breakdown of how it all happened. This kind of thing is (one hopes) an extreme case. But there's still a broader lesson to draw from this tale.

Although it's a no-no to say that a strategy is based on a real portfolio when it isn't, there's no blanket rule against citing hypothetical backtest results. In fact, backtesting is a routine part of the money management business. Stock pickers use it to develop their pet theories. Finance professors publish papers showing how this or that trading strategy could have beaten the market. Index companies use backtests to construct and market new "smart" indexes, which can then be tracked by ETFs. But even when everyone follows all the rules and discloses what they are doing, there's growing evidence that you should be skeptical of backtested strategies.

Here's why: In any large set of data—like, say, the history of the stock market—patterns will pop out. Some might point to something real. But a lot will just be random noise, destined to disappear as more time passes. According to Duke finance professor Campbell Harvey, the more you look, the more patterns, including spurious ones, you are bound to spot. (Harvey forwarded me an XKCD comic strip that elegantly explains the basic problem.) A lot of people in finance are combing through this data now. And if they haven't yet had to commit real money to an idea, they can test pattern after pattern after pattern until they find the one that "works." Plus, since they already know how history worked out—which stocks won, and which lost—they have a big head start in their search.
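
A quick simulation makes the point concrete. This sketch (my own illustration, not Harvey's calculation) generates one stretch of pure-noise "market history," then tries a thousand random trading rules against it. By construction, none of them has any real edge, yet the best of the bunch looks like a genius:

```python
import numpy as np

rng = np.random.default_rng(42)

# One shared "history": 10 years of monthly returns that are pure
# noise, with zero true edge baked in.
history = rng.normal(0.0, 0.04, 120)

# 1,000 random trading rules: each is just an arbitrary pattern of
# being in or out of the market each month.
n_rules = 1_000
in_market = rng.integers(0, 2, size=(n_rules, history.size))
cum = (1 + in_market * history).prod(axis=1) - 1

print(f"best of {n_rules} meaningless rules: {cum.max():+.0%}")
print(f"median rule: {np.median(cum):+.0%}")
```

Report only the winner's track record and you have a marketable "strategy"; run that same winning rule on fresh data and the edge evaporates.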

In truth, the problem doesn't go away entirely even when real money is involved. With thousands of professional money managers trying their hands, you'd expect many to succeed brilliantly just by fluke. (Chance predicts that about 300 out of 10,000 managers would beat the market over five consecutive years, according to a calculation by Harvey and Yan Liu.)
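
The coin-flip arithmetic behind that figure is easy to check. Assuming a no-skill manager has roughly a 50/50 shot at beating the market in any given year:

```python
p_one_year = 0.5                     # a no-skill manager's odds in any year
p_five_straight = p_one_year ** 5    # (1/2)^5 = 1/32, about 3.1%
print(10_000 * p_five_straight)      # ~312 managers, by luck alone
```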

So how do you sort out the random from the real? If you are considering a strategy based on historical data, ask yourself three questions:

1) Is there any reason besides the record to think this should work?

Robert Novy-Marx, a finance professor at the University of Rochester, has found that some patterns that seem to predict stock prices work better when Mars and Saturn are in conjunction, and that market manias and crashes may correlate with sunspots. His point is not that these are smart trading strategies, but that you should be very, very careful about what you do with statistical patterns.

There's no good reason to think Mars affects stock prices, so you can safely ignore astrology when putting together your 401(k). Likewise, if someone tells you that, say, a stock that rises in value in the first week of January will also rise in value in the third week of October, you might want to get them to explain their theory of why that would be.

2) What's stopping other investors from doing this?

If there's a pattern in stock prices that helps predict returns, other investors should be able to spot it. (Especially once the idea has been publicized.) And once they do, the advantage is very likely to go away. Investors will buy the stocks that ought to do well, driving up their price and reducing future returns. Or investors will sell the stocks that are supposed to do poorly, turning them into bargains.

That doesn't mean all patterns are meaningless. For example, Yale economist Robert Shiller has found that the stock market tends to do poorly after prices become very high relative to past earnings. It may be that prices get too high in part because fund managers risk losing their jobs if they refuse to ride a bull market. Then again, the same forces that affect fund managers will probably affect you too. Will you be willing to stay out of the market and accept low returns while your friends and neighbors are boasting of double-digit gains?

And even Shiller's pattern doesn't work all the time—stock prices can stay high for years before they come down. Betting that you can see something that's invisible to everyone else in the market is a risky proposition.

3) Does it work well enough to justify the expense?

Lots of strategies that look good on paper fade once you figure in real-world trading costs and management fees. A mutual fund based on the AlphaSector strategy, by the way, charges about 1.6% per year for its A-class shares. That's eight times what you'd pay for a plain-vanilla index fund, which is all but certain to deliver the market's return, minus that sliver of costs. And there's nothing hypothetical about that.
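
To put a rough number on that fee gap, here's a compounding sketch. The 6% gross annual return is a placeholder assumption; the 0.2% index-fund fee is the "sliver" implied by the one-eighth comparison above.

```python
def nest_egg(gross, fee, years, start=10_000):
    """Value of `start` dollars compounding at `gross` minus an annual fee."""
    return start * (1 + gross - fee) ** years

print(f"1.6% fee: ${nest_egg(0.06, 0.016, 20):,.0f}")  # about $23,700
print(f"0.2% fee: ${nest_egg(0.06, 0.002, 20):,.0f}")  # about $30,900
```

On those assumptions, the pricier fund has to beat the index by well over a percentage point a year, every year, just to break even.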