Have your funds' returns justified the risks they've taken? Could you have chosen funds with equal returns but lower risk? Risk-adjusted performance measures offer some insight, helping you see what you're getting in exchange for risk.
Two of the more common measures that deal with both performance and risk are alpha and the Sharpe ratio. You can find both of these on the Risk & Rating tab of each fund.
Alpha
The first measure, alpha, is the difference between a fund's actual returns and the returns you would expect given its beta (its volatility relative to a benchmark). Alpha is sometimes interpreted as the value that a portfolio manager adds, above and beyond a relevant index's risk/reward profile. If a fund returns more than its beta predicts, it has a positive alpha. If a fund returns less than its beta predicts, it has a negative alpha.
Because a fund's return and its risk both contribute to its alpha, two funds with the same returns could have different alphas. Further, a fund with a high beta can easily have a negative alpha: the higher a fund's risk level (beta), the greater the returns it must generate in order to produce a high alpha. Just as a teacher would expect students in an advanced class to work at a higher level than those in a less-advanced class, investors expect more return from their higher-risk investments.
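To make the arithmetic concrete, the sketch below compares two hypothetical funds with identical returns but different betas, using a simplified, CAPM-style expected return and made-up figures for the benchmark return and risk-free rate. The published alpha is derived from a regression of monthly excess returns, so treat this only as an illustration of the idea.

def simple_alpha(actual_return, beta, benchmark_return=0.10, risk_free=0.03):
    """Alpha: actual return minus the return expected given the fund's beta."""
    expected_return = risk_free + beta * (benchmark_return - risk_free)
    return actual_return - expected_return

# Two hypothetical funds with the same 11% return but different betas.
low_beta_fund = simple_alpha(actual_return=0.11, beta=0.9)    # expected 9.3%  -> alpha +1.7%
high_beta_fund = simple_alpha(actual_return=0.11, beta=1.4)   # expected 12.8% -> alpha -1.8%
print(f"Low-beta fund alpha:  {low_beta_fund:+.1%}")
print(f"High-beta fund alpha: {high_beta_fund:+.1%}")

With the same 11% return, the higher-beta fund ends up with a negative alpha simply because more was expected of it.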
Caveats--Alpha depends on the validity of the fund's beta measurement, since it gauges performance relative to beta. So, for example, if a fund's beta isn't meaningful because its R-squared is too low (below 75, meaning the fund is not being compared with a relevant benchmark), its alpha isn't valid, either.
Additionally, alpha fails to distinguish between underperformance caused by incompetence and underperformance caused by fees. For example, because managers of index funds don't select stocks, they don't add or subtract much value. Thus, in theory, index funds should carry alphas of zero. Yet many index funds have negative alphas. Here, alpha usually reflects the drag of the fund's expenses.
Finally, it's impossible to judge whether alpha reflects managerial skill or just plain old luck.
Sharpe Ratio
The second measure, the Sharpe ratio, uses standard deviation--how much returns have varied around the mean, or average--to measure a fund's risk-adjusted returns. The higher a fund's Sharpe ratio, the better its returns have been relative to the risk it has taken on.
Developed by its namesake, Nobel Laureate William Sharpe, this measure quantifies a fund's return in excess of our proxy for a risk-free, guaranteed investment (the 90-day Treasury bill) relative to its standard deviation.
The higher a fund's standard deviation, the larger the returns it needs to earn a high Sharpe ratio. Conversely, funds with modest standard deviations have a lower-return threshold to carry high Sharpe ratios.
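As a quick illustration (hypothetical, annualized figures, with a 3% T-bill return standing in for the risk-free rate; the published ratio is built from monthly data), the sketch below compares two funds with the same return but different volatility:

def sharpe_ratio(fund_return, risk_free, std_dev):
    """Excess return over the risk-free proxy, per unit of standard deviation."""
    return (fund_return - risk_free) / std_dev

# Same 12% return; the steadier fund earns the higher Sharpe ratio.
steady_fund = sharpe_ratio(fund_return=0.12, risk_free=0.03, std_dev=0.10)    # 0.90
volatile_fund = sharpe_ratio(fund_return=0.12, risk_free=0.03, std_dev=0.18)  # 0.50
print(f"Steady fund Sharpe ratio:   {steady_fund:.2f}")
print(f"Volatile fund Sharpe ratio: {volatile_fund:.2f}")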
The Sharpe ratio has a real advantage over alpha. Standard deviation measures the volatility of a fund's return in absolute terms, not relative to an index. So whereas a fund's R-squared must be high for alpha to be meaningful, Sharpe ratios are meaningful all the time. The fact that the Sharpe ratio is not relative to an index also means that it can be used to compare risk-adjusted returns across all fund categories.
Caveats--As with alpha, the main drawback of the Sharpe ratio is that it is expressed as a raw number. Of course, the higher the Sharpe ratio the better. But given no other information, you can't tell whether a Sharpe ratio of 1.5 is good or bad. Only when you compare one fund's Sharpe ratio with that of another fund (or group of funds) do you get a feel for its risk-adjusted return relative to other funds' returns.
Risk and Return in Context: The Morningstar Rating for Funds
Alpha and the Sharpe ratio both need a context to be useful. Who can say whether an alpha of 0.7 is good? Or whether a Sharpe ratio of 1.3 is good? That's where Morningstar's star rating comes in. Unlike alpha and the Sharpe ratio, the star rating puts data into context, making it more intuitive.
The star rating is a purely mathematical measure that shows how well a fund's past returns have compensated shareholders for the amount of risk it has taken on. It is a measure of a fund's risk-adjusted return, relative to that of similar funds. Funds are rated from 1 to 5 stars, with the best performers receiving 5 stars and the worst performers receiving a single star.
Morningstar gauges a fund's risk by calculating a risk penalty for each fund based on "expected utility theory," a commonly used method of economic analysis. It assumes that investors are more concerned about a possible poor outcome than an unexpectedly good one, and that they are willing to give up a small portion of an investment's expected return in exchange for greater certainty.
A risk penalty is subtracted from each fund's total return, based on the variation in its month-to-month return during the rating period, with an emphasis on downward variation. The greater the variation, the larger the penalty. If two funds have the exact same return, the one with more variation in its return is given the larger risk penalty.
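The sketch below shows, with hypothetical monthly returns, how a penalty of this kind can be computed: each month is run through a utility transform under which bad months hurt more than equally sized good months help, and the result is annualized. The particular formula and the gamma parameter are illustrative assumptions, not Morningstar's published methodology.

def annualized_return(monthly_returns):
    """Plain annualized (geometric) return, with no risk penalty."""
    total = 1.0
    for r in monthly_returns:
        total *= 1.0 + r
    return total ** (12.0 / len(monthly_returns)) - 1.0

def risk_adjusted_return(monthly_returns, gamma=2.0):
    """Expected-utility-style risk-adjusted return; a larger gamma means a larger penalty for variation."""
    avg_utility = sum((1.0 + r) ** -gamma for r in monthly_returns) / len(monthly_returns)
    return avg_utility ** (-12.0 / gamma) - 1.0

steady = [0.01] * 12            # 1% every month
volatile = [0.06, -0.04] * 6    # a similar pace, but a much bumpier ride

for name, returns in [("Steady", steady), ("Volatile", volatile)]:
    raw = annualized_return(returns)
    adjusted = risk_adjusted_return(returns)
    # The steady stream incurs essentially no penalty; the volatile one
    # gives back roughly three percentage points of return.
    print(f"{name:8s} return {raw:6.2%}   risk-adjusted {adjusted:6.2%}   penalty {raw - adjusted:5.2%}")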
Funds are ranked within their categories according to their risk-adjusted returns (after accounting for all sales charges and expenses). The 10% of funds in each category with the highest risk-adjusted return receive 5 stars, the next 22.5% receive 4 stars, the middle 35% receive 3 stars, the next 22.5% receive 2 stars, and the bottom 10% receive 1 star.
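The bucketing step itself is mechanical. The sketch below walks through it with a hypothetical ten-fund category (with so few funds, the breakpoints land only approximately); the actual rating also blends results over multiple time periods, so this shows just the final ranking-and-assignment piece.

def assign_stars(risk_adjusted_returns):
    """Map each fund in a category to 1-5 stars by percentile rank."""
    n = len(risk_adjusted_returns)
    ranked = sorted(risk_adjusted_returns, key=risk_adjusted_returns.get, reverse=True)
    stars = {}
    for i, fund in enumerate(ranked):
        pct = (i + 0.5) / n             # midpoint percentile; 0 = best
        if pct < 0.10:
            stars[fund] = 5
        elif pct < 0.325:               # 10% + 22.5%
            stars[fund] = 4
        elif pct < 0.675:               # + 35%
            stars[fund] = 3
        elif pct < 0.90:                # + 22.5%
            stars[fund] = 2
        else:
            stars[fund] = 1
    return stars

# A hypothetical ten-fund category, keyed by risk-adjusted return.
category = {
    "Fund A": 0.142, "Fund B": 0.118, "Fund C": 0.104, "Fund D": 0.097,
    "Fund E": 0.089, "Fund F": 0.081, "Fund G": 0.070, "Fund H": 0.062,
    "Fund I": 0.048, "Fund J": 0.021,
}
for fund, n_stars in sorted(assign_stars(category).items()):
    print(f"{fund}: {'*' * n_stars}")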
The star rating can help you get a sense of how a fund's returns compare with others in the category after accounting for the amount of risk taken on to achieve the returns. Still, like all backward-looking measures, the star rating has limitations. It is critical to remember that the rating is not a forward-looking, forecasting tool. The star rating is best used as an initial screen to identify funds worthy of further research--those that have performed well on a risk-adjusted basis relative to their peers.