A Manager’s Past Performance Matters More Than Ever: Vasant Dhar
Bloomberg – A common disclaimer in the investment business is that “past performance is not indicative of future results.” This is consistent with the theory of finance, which argues that obvious advantages disappear quickly in a competitive market. Historical evidence largely supports this position: manager performance is mostly unpredictable from month to month.
The theory rests on some key assumptions, namely, that the money manager is a human whose basis or “investment philosophy” for decision making is discretionary, variable and opaque. But in today’s age of data and algorithms, the theory should be revisited. For starters, algorithms are driven by an explicit “objective function.” If the investment process is algorithmic and invariant then a track record demonstrating how the basis and the process performed under different regimes should provide an expectation of its performance in similar market regimes.
Let’s consider an example. Imagine a human investor who is plugged into markets and makes decisions based on the latest available information. Compare this with a scenario where decisions are generated by an algorithm that ingests similar information and follows a well-defined process without exception. If you observe both over a sufficient period, which one provides more useful information? The obvious answer is the latter.
Here’s a real-world example showing the two-year monthly return performance of a money manager (whose identity will be revealed later) in black bars relative to the performance of the S&P 500 Index in blue bars. The manager’s performance is impressive, with no losing months during a period when the market was in turmoil, as evidenced by the large negative blue bars. Perhaps the manager made the right call by shorting S&P 500 futures or options during market declines, but simultaneously making money in the strong positive months is nothing short of spectacular, requiring impeccable timing on a relatively frequent basis.
More impressive is the manager’s performance on a risk-adjusted basis, using the Information Ratio. The IR is calculated as the average of a set of returns divided by the realized risk taken to achieve them, usually measured as their standard deviation. By equalizing risk, the IR enables an apples-to-apples performance comparison across diverse strategies and markets. An average return of 1 percent with a volatility of 1 percent yields the same IR as a return of 2 percent with a volatility of 2 percent.
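The calculation described above can be sketched in a few lines of code. This is a minimal illustration, not any firm's actual methodology; the return streams below are hypothetical numbers chosen only to show that equal risk-adjusted performance produces equal IRs.

```python
def information_ratio(returns):
    """Average of a set of returns divided by their standard deviation,
    as described in the text. Uses population standard deviation."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n
    return mean / variance ** 0.5

# Hypothetical streams: 1% average return at 1% volatility looks
# identical, once risk is equalized, to 2% at 2% volatility.
a = [0.00, 0.02, 0.00, 0.02]   # mean 1%, standard deviation 1%
b = [0.00, 0.04, 0.00, 0.04]   # mean 2%, standard deviation 2%
print(information_ratio(a), information_ratio(b))  # both approximately 1.0
```

Scaled to the same risk level, the two streams are indistinguishable, which is exactly the apples-to-apples property the IR is designed to provide.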
The long-term IR of the S&P 500 has been roughly 0.4, while during the period above it was negative 0.59. And yet, the manager’s IR was a whopping 4.34! In the following year, the market tumbled about 25 percent while our manager notched another 8.2 percent gain! The manager’s IR remained solidly above 3.5 over the three-year span, a remarkable performance.
Ostensibly, such a track record should matter. However, one obvious question is whether the length of the track record is sufficient to create an expectation of future performance. Equally important is whether the past record was produced by a repeatable, verifiable process reflecting an investment philosophy that will be followed in the future.
The first question is easier to answer. Analytically, the amount of data needed to establish statistical significance depends on the frequency of decisions. All else being equal, a higher-frequency program’s return distribution will have lower variance than a lower-frequency program’s, because shorter holding periods mean it makes or loses less per decision. That combination of lower variance and a larger sample of decisions over a given period provides more information than a program with higher variance and fewer decisions.
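The intuition above can be made concrete with a standard back-of-the-envelope statistic. The t-statistic on a track record’s mean return is roughly the per-period IR times the square root of the number of observations, so more decisions per unit of time reach significance sooner. The specific IR and observation counts below are hypothetical, chosen only to illustrate the scaling.

```python
import math

def t_stat(per_period_ir, n_periods):
    """Approximate t-statistic of a track record's mean return:
    per-period Information Ratio times sqrt(number of observations)."""
    return per_period_ir * math.sqrt(n_periods)

# Hypothetical comparison: the same per-period IR of 0.3 observed
# monthly for 2 years (24 observations) vs. daily for 2 years
# (roughly 500 trading days).
monthly = t_stat(0.3, 24)    # about 1.47: not yet significant
daily = t_stat(0.3, 500)     # about 6.7: strongly significant
print(round(monthly, 2), round(daily, 2))
```

The same underlying skill, sampled more frequently, yields a far more convincing track record over the same calendar period, which is why high-frequency programs need less history to be evaluated.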
But it is the second question that fascinates market observers. What could have produced such steady performance in a difficult market? Evidence suggests that even the most experienced human managers find it impossible to time the market as our manager did and produce such steady performance. Rather, exceptional performance is more likely to result from a fluke or an outlier (a long-term hold on Apple or Amazon.com while they soared, for example), which is unwise to consider repeatable.
Absent an outlier or a fluke, the most probable explanation for a high IR is that there’s a process. For example, Warren Buffett’s investment philosophy of buying “ably-managed businesses, in whole or part, that possess favorable and durable economic characteristics at sensible prices” is coupled with a process for identifying “ably-managed” and “sensible prices” that is very likely to be followed going forward. This should provide an investor with an expectation for the future.
But even Buffett has an IR of under 0.7 in the stock market with several stumbles, including the recent Kraft Heinz investment, so investors should beware of performance that looks too good to be true. While programs with high IRs do exist, these tend to be high frequency systems that manage relatively small amounts of money or niche strategies that are not generally available to the public. The manager in our example was neither. It was Bernie Madoff!
The current trend in investing is toward algorithms that seek an edge in new kinds of data burgeoning from all manner of sources, including satellites, social media, government agencies and commercial enterprises. This trend, coupled with an explosion of instruments such as exchange-traded funds designed for specific kinds of risk exposure, provides investors with an increasing array of choices. It is therefore critical for them to be able to evaluate the basis of such programs and, equally important, whether there is a well-defined process that will be followed in the future to execute on that basis.