Why Benchmarks Are Overrated in Venture Capital Manager Selection
Follow me @samirkaji for my thoughts on the venture market, with a focus on the continued evolution of the VC landscape.
Venture Capital (VC) is a dynamic and rapidly evolving asset category.
For Limited Partners (LPs) that invest in venture fund managers, it's tempting to lean heavily on benchmarks. After all, benchmarks ostensibly provide a definitive measure of a manager's ability by assessing past performance. Given the reported higher persistence of manager performance in VC versus other asset categories, it's easy to understand why LPs rely on benchmarks so heavily when picking managers.
Benchmarks published by Cambridge, Pitchbook, and Preqin are all familiar data sources LPs often use when conducting comparative analysis on funds. These benchmarks typically provide useful performance metrics based on a sample set of managers for each vintage year by displaying the mean, median, top quartile, etc., for a given year.
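To make the mechanics concrete, here is a minimal sketch of how a vintage-year benchmark table is typically derived from a sample of reported fund multiples. The fund figures below are hypothetical, purely for illustration, not actual benchmark data from any provider.

```python
# Illustrative sketch: deriving vintage-year benchmark statistics.
# The TVPI multiples below are hypothetical, not real benchmark data.
from statistics import median, quantiles

# Hypothetical net TVPI multiples reported by funds in one vintage year
vintage_tvpi = [1.1, 1.4, 1.6, 1.8, 2.0, 2.3, 2.9, 3.5]

med = median(vintage_tvpi)
top_quartile_cutoff = quantiles(vintage_tvpi, n=4)[2]  # 75th percentile

print(f"Median TVPI: {med:.2f}x")                      # 1.90x
print(f"Top-quartile cutoff: {top_quartile_cutoff:.2f}x")
```

Note that everything downstream of a table like this inherits the sample's quirks: who reported, how they mark, and when in the year they started deploying.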
While benchmarks can provide an excellent comparative measure, too often they are significantly overweighted in manager selection, and many LPs don't account for the inherent flaws of VC benchmarks (especially for recent vintages).
Here are some of the things to consider about benchmarks:
1. Persistence of Returns:
In the first few decades of venture capital, returns showed stronger persistence, and past performance often served as a reasonably good predictor of future success. According to the National Bureau of Economic Research, venture capital from 1984-2014 exhibited a relatively strong level of persistence: nearly 70% of firms whose prior fund was in the top quartile had a successor fund that performed above median. However, persistence was shown to be weaker when measured using the predecessor fund's performance at the time the investor was evaluating the new fund's issuance (typically 2-3 years into the predecessor's life). While investors can go back several fund vintages to remedy this, older fund performance is a harder proxy to rely on today given the number of changes that can occur on a micro and macro basis — i.e., growing fund sizes, general partner team transitions, and changes in the competitive landscape.
2. Nature of early-stage investing and mark-ups
During the most recent decade, we have seen fund managers quickly raise and deploy capital. In extreme bull markets, benchmarks favor groups that deployed rapidly into more companies and, therefore, had more shots at portfolio mark-ups. For example, a fund that deployed capital in 14 months often saw far more marked-up companies than a fund that took 36 months to deploy.
Thus, benchmarks will be friendlier to funds that deployed quickly (especially into momentum areas) and saw rapid mark-ups, which can belie the actual quality and resiliency of the fund. Additionally, a manager who starts deploying in January of a given year falls into the same vintage-year benchmark as a manager who starts in November, even though the former has had nearly a year longer to build metrics such as MOIC.
It's also important to note that, in our experience, a fund's quartile ranking typically doesn't settle until year six or seven.
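The deployment-pace distortion above is simple arithmetic. A rough sketch, using entirely made-up figures, of how two same-vintage funds of equal size can show very different interim multiples purely because of pacing:

```python
# Hypothetical sketch of why deployment pace skews interim benchmarks.
# All dollar figures are illustrative, not drawn from any real fund.

def moic(paid_in: float, current_value: float) -> float:
    """Multiple on invested capital: current value / capital paid in."""
    return current_value / paid_in

# Two $50MM funds from the same vintage year, viewed three years in.
# Fund A deployed in 14 months; most positions have since re-priced upward.
fund_a = moic(paid_in=50.0, current_value=110.0)

# Fund B is deploying over 36 months; only $30MM is in the ground, and
# fewer positions have had a follow-on round to mark them up yet.
fund_b = moic(paid_in=30.0, current_value=36.0)

print(f"Fund A interim MOIC: {fund_a:.1f}x")  # 2.2x on paper
print(f"Fund B interim MOIC: {fund_b:.1f}x")  # 1.2x on paper
```

Nothing in this comparison says Fund A picked better companies; the gap is largely a timing artifact that the vintage-year benchmark cannot see.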
3. Discrepancies in Valuations:
It's not uncommon for different funds to hold the same company at vastly different valuations. When evaluating a fund, always inquire about its valuation methodology. For example, funds holding the same share class of a given company may carry the shares at entirely different marks. We recently saw a single company (same share class) held at valuations ranging from $600MM to $1.3B at the same point in time.
Funds that are quick to mark down are disadvantaged in benchmarks versus funds that don't mark down aggressively. It's essential to evaluate the fund manager's valuation methodology and understand the marks at which the manager holds their biggest portfolio drivers; for LPs evaluating managers, I always advise comparing those marks against the holding costs of the biggest portfolio drivers to better assess the quality of the marks.
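To see how much a carrying valuation alone can move a fund's headline multiple, here is a hypothetical sketch: two funds hold an identical stake (same share class) in the same company, and only the mark differs. The fund size, ownership percentage, and portfolio values are invented for illustration.

```python
# Hypothetical sketch: how valuation methodology alone moves a fund multiple.
# All figures below are illustrative assumptions, not real fund data.

PAID_IN = 40.0          # $MM of capital paid in, same for both funds
OTHER_HOLDINGS = 30.0   # $MM, value of the rest of each portfolio
OWNERSHIP = 0.02        # 2% stake in the shared company

for company_mark in (600.0, 1300.0):  # $MM, the range cited above
    stake_value = OWNERSHIP * company_mark
    tvpi = (OTHER_HOLDINGS + stake_value) / PAID_IN
    print(f"Company marked at ${company_mark:,.0f}MM -> fund TVPI {tvpi:.2f}x")
```

Under these assumptions, the identical position produces a 1.05x fund versus a 1.40x fund — enough to straddle a quartile boundary in many vintage tables with no difference in underlying quality.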
4. The "One-Size-Fits-All" Benchmarking Problem:
Benchmarks often treat VC as a singular class, lumping together funds of all sizes and stages, even though the common definition of venture today spans pre-seed to pre-IPO. Comparing a $3B VC fund to a $40MM pre-seed fund is comparing apples to oranges: they have different risk profiles, return expectations, and liquidity timelines. Most publicly available benchmarks do not offer this level of granularity, so using the same benchmark for both can be misleading and simply inaccurate.
5. The Survivorship Bias Trap:
Many benchmarks suffer from limited sample sizes and often rely on self-reported numbers. This can lead to survivorship bias, where only the successful funds report, skewing the results.
Investing as an LP is a long-term game. While past results and benchmarks can provide some insights, they shouldn't be the sole or even primary basis for decision-making. Instead, focus on evaluating the future potential of a manager on a go-forward basis by evaluating their thesis (and their team's fit to the thesis), and potential to outperform based on their comparative advantages related to sourcing, winning, picking, and portfolio management.