This article is part of our MLB Observations series.
I've disparaged the idea of using projections to generate draft lists for various reasons, including projections not being geared toward specific league quirks -- in the NFBC, for example, there is an overall contest component that increases the price of players who produce in scarce categories. But their biggest flaw is that most are based on players' 50th percentile (median) seasons, which for most players roughly track their mean expected outputs.
A mean-output-based cheat sheet is a poor model of how fantasy baseball is actually played. For example, a non-star player who gets injured or loses playing time, i.e., one who performs at the bottom end of his outcome range, will simply be cut from one's roster rather than having his zeroes accumulate all year. But the risk of that happening is factored into his mean projection nonetheless, making it lower than it should be. Thus, players with playing-time risk are undervalued relative to their peers on a projections-based cheat sheet.
Here's a concrete example -- say two players are competing for the closer job for a good team whose manager tends to stick with a sole closer rather than a committee. Let's say each player has a 50/50 shot at winning it, so each has a mean projection of 20 saves. Let's also stipulate this is a 12-team mixed league where low-end closers are available via free agency most weeks. Based on the projections, both players will be ranked below a declared closer on a bad team projected for 25 saves. This is obviously a mistake.
If you draft one of the closers on the good team, you have a 50 percent chance of getting 40 saves and a 50 percent chance of getting a player to drop, whose slot you can use on someone else, presumably the next pitcher named a closer. (In deeper leagues where new closers are more scarce, this is a closer call.)
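To make the math explicit, here's a minimal sketch of that calculation in Python. All the numbers are the illustrative ones from the example above, plus an assumed waiver-wire replacement level of 15 saves; the point is that once you account for the drop-and-replace option, the "risky" candidate's expected saves beat the declared closer's 25:

```python
# Sketch: expected saves once you account for cutting a flop and
# streaming a replacement. All figures are illustrative assumptions.

REPLACEMENT_SAVES = 15  # assumed: low-end closer available on waivers

def expected_saves(outcomes):
    """outcomes: list of (probability, saves, droppable) tuples.
    If a flop is droppable, you bank replacement-level saves instead."""
    total = 0.0
    for prob, saves, droppable in outcomes:
        floor = REPLACEMENT_SAVES if droppable else saves
        total += prob * max(saves, floor)
    return total

# Good-team candidate: 50% he wins the job (40 saves), 50% he loses it
# (0 saves, but you cut him and stream a waiver-wire closer).
candidate = [(0.5, 40, True), (0.5, 0, True)]

# Bad-team declared closer: locked into ~25 saves.
declared = [(1.0, 25, False)]

print(expected_saves(candidate))  # 27.5 -- beats the "safe" 25
print(expected_saves(declared))   # 25.0
```

In a deeper league you'd lower `REPLACEMENT_SAVES`, and at some point the declared closer wins back the comparison -- which is exactly the "closer call" noted above.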
Another example is when, later in the draft, you're deciding between a toolsy prospect who might not make the team out of camp, and an established veteran whose contract and defense guarantee him playing time. Again, the mean projection will factor in the prospect's chances of starting in the minors, dragging down his overall output. But the veteran doesn't have such a drag on his mean projection, so if their mean outputs are similar, the prospect has the higher ceiling, and should be drafted earlier, especially if the league is shallow.
The problem with using mean projections is they treat outcomes that deviate from the mean symmetrically when they should be asymmetric. If my player has his 2010 Jose Bautista explosion (99th percentile), I get all 54 home runs. If he has his Carter Kieboom flop, he's benched by April 15 and cut no later than May 1. The Kieboom miss is almost irrelevant to my overall finish, whereas the Bautista hit is a huge contributor to my success. In other words, the 1st and 99th percentile outcomes do not affect my team equally.
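The same truncation logic can be shown numerically: because flops get cut and replaced, the outcome distribution your roster actually experiences is floored at replacement level, which pushes the effective mean above the raw projected mean. The percentile outcomes and replacement figure below are invented for illustration:

```python
# Sketch: why raw means mislead. A flop gets cut and replaced at
# replacement level, so the distribution you actually experience
# is floored from below. Numbers are made up for illustration.

REPLACEMENT_HR = 10  # assumed waiver-wire home-run output

# Hypothetical 1st/25th/50th/75th/99th percentile HR outcomes
# for a boom-or-bust hitter, equally weighted for simplicity.
raw_outcomes = [2, 14, 22, 30, 54]

raw_mean = sum(raw_outcomes) / len(raw_outcomes)
floored = [max(hr, REPLACEMENT_HR) for hr in raw_outcomes]
effective_mean = sum(floored) / len(floored)

print(raw_mean)        # 24.4 -- what a mean projection reports
print(effective_mean)  # 26.0 -- what your roster actually gets
```

Note the upside is untouched -- the 54-homer season counts in full -- while the downside is capped, which is the asymmetry the mean projection ignores.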
Moreover, most fantasy baseball leagues do not pay out symmetrically. Unlike your stock portfolio where finishing in the 10th percentile among all investors is much worse than having an average year, there is typically no difference in payout whether you finished sixth or 10th in your 12-team league. That means you need to beat the rest of the league, and the easiest way to do that is by drafting players whose ceilings dwarf their costs.
One might argue mean projections are still valuable, at least in the early rounds, where the players drafted are hard to replace, and I'd concede that to a point. It might actually be that mean projections are too aggressive in the first round, and 35th percentile projections would be more apt, i.e., take the highest-ranked player assuming somewhat disappointing seasons for all of them. But as we go deeper into drafts and players get more replaceable, downside matters less and less, until the very late rounds, where it disappears entirely.
So how could one design a better projections system to account for this? One would project (based on three-year averages, age, similarity scores, park effects, physical traits, etc. -- whatever is found to be most predictive) 99 player seasons for each player, from 1st percentile to 99th. Yes, each player would have 99 projections, each tailored to his particular skill set, as determined by his prior performance, comps and team/park/role/health/age/experience, etc.
I'd start by backtesting: use everyone's 1st percentile projections from prior seasons to make a draft list, and see how it fared. (You'd have to calculate and build in a replacement-level player, based on league depth, to fill in for all the picks who flopped.) Then repeat with their 2nd percentile rankings, then their 3rd, and so on up to the 99th. I'd figure out which list yielded the best results on average -- my guess is it would be somewhere between the 55th and 70th -- and set whichever one it was as the default. But I'd also experiment with custom per-round or per-pick-number settings, wherein picks 1-5 could be ranked by 30th percentile outcomes, picks 6-15 by 40th, picks 16-35 by 50th, etc. I'd then ask the system to optimize the percentile for each round or pick increment and spit out rankings based on those. In larger overall contests like the NFBC Online Championship, you could specify a more aggressive approach wherein the algo drew from higher percentiles; in one-off leagues, a more conservative one, more likely to beat the average but less likely to take down the overall.
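A rough sketch of that percentile sweep, assuming a hypothetical projections table (player mapped to 99 percentile values) and a deliberately simplified scoring rule that swaps replacement level in for flops -- a real backtest would simulate full drafts and category scoring:

```python
# Sketch of the percentile-sweep backtest: rank players by each of
# their 99 percentile projections, "draft" the top of each list, and
# keep the percentile whose rankings produced the best actual results.

def rank_by_percentile(projections, pct):
    """Rank players by their pct-th percentile projection (1-99).
    projections: dict of player -> list of 99 values (index 0 = 1st)."""
    return sorted(projections, key=lambda p: projections[p][pct - 1],
                  reverse=True)

def backtest(projections, actuals, roster_size, replacement_level):
    """Return the percentile whose rankings drafted the best roster.
    Flops are floored at replacement level, modeling the cut-and-
    replace behavior described above."""
    best_pct, best_score = None, float("-inf")
    for pct in range(1, 100):
        roster = rank_by_percentile(projections, pct)[:roster_size]
        score = sum(max(actuals[p], replacement_level) for p in roster)
        if score > best_score:
            best_pct, best_score = pct, score
    return best_pct
```

The same loop extends naturally to the per-round idea: instead of one percentile for the whole list, you'd search over a vector of percentiles, one per pick increment.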
In short, if you want a model that's useful for drafting purposes, you must take into account more than the mean output for each player. You must also figure out the range of outcomes that are pertinent to his draft slot and rank him accordingly.