2013 Ski Test: How It Works

We pull back the curtain on our industry-leading ski test.

We know how this sounds, but testing skis is challenging. It requires a racer's technical skills to differentiate between models, plus a writer's verbal skills to articulate those differences clearly, not to mention a shop guy's knowledge of the equipment. Does a ski that slings you across the fall line get its power from a stiff tail? A sharp tune? A sheet of titanium? Also required: quads of steel. At the 2012 test at Snowbird, we tested a total of 131 skis over five days, averaging upwards of 30,000 vert per day. Most of all, a tester needs testing experience. Even for an excellent skier, the sheer number of test models can be overwhelming at first. But with experience, the nuances between skis become more obvious.

Each winter, we discuss our categories with product managers from each major manufacturer, who then decide which of their models they think will compete well in each category. (We also run a separate ski test with smaller "indie" brands. Look for those results in a later issue.) We test only high-performance models because our readers are avid skiers. If you're not a ripper quite yet, look for skis with high scores in the Forgiveness criterion, or consider less expensive siblings of the models we review.

Each manufacturer is allowed a certain number of entries (some get as many as 11, some as few as six), which it can allocate to any categories it chooses. (Many enter two in Deep Snow, for instance, because that's the hot category these days.) How do we decide how many models to allot each brand? Our formula takes into account the manufacturer's market share (we want to evaluate skis that consumers can readily find) and its performance in the previous year's test (we want the best skis from brands that have already proven themselves). We shouldn't have to say this, but we do: We never make companies pay or buy advertisements to be included; a comparison of ad pages to gold-medal skis shows zero correlation.

Test team › We handpick a cadre of experienced, hard-charging testers: ex-racers, instructors, shop guys, retailers, and local rippers. For objectivity's sake, we avoid sponsored athletes. Any scorecards that betray company allegiances are thrown out before results are tallied.

Venue › We test late in the winter at Snowbird, Utah, because of its convenience, variety of terrain, quality and consistency of snow and, well, because it's Snowbird. Which is to say it's awesome.

Test protocol › We set up our test corral at the bottom of the Gadzoom Express quad (high-speed, 1,823 vertical feet), where the product managers hang out with diamond stones for between-run tune-ups. We lap the lift for five days straight, testing the category that best suits the conditions each day. Testers take each ski into every type of terrain, then fill out a card on the chairlift, scoring the ski on nine criteria and writing descriptive comments about its behavior on the back. The skis are then ranked according to their average score across all criteria.

Results › We medal a total of 81 skis, a little more than half of what we test, which, in turn, is only the top 10 percent of what's on the market. We do not review skis that don't make the cut.

Here are quick previews of our four test categories:

Hard Snow

Mixed Snow (West)

Mixed Snow (East)

Deep Snow