It's not every day you see seven computer scientists grinning awkwardly behind one of those goofy six-foot checks they give out to lottery winners.
That was the scene in Manhattan's Four Seasons Hotel this morning as Netflix announced the winners of its $1 million competition to improve the system underlying the movie recommendations it makes to customers. I wrote about these "recommender systems" and the Netflix Prize in the August issue of Communications of the Association for Computing Machinery. Just as that article was going to press, someone crossed the 10%-improvement threshold required to bring home the award.
But only today did Netflix announce that the winning team was BellKor's Pragmatic Chaos, a longtime leader, some of whose members appeared in my story. Although this team was the first to break the barrier, in late June, other teams subsequently passed them, and they submitted their winning entry only 20 minutes before the deadline (30 days after the barrier was first broken). In fact, another submission matched their 10.06% improvement--but because the Ensemble team submitted their entry ten minutes later, they spent the presentation clapping politely from the audience.
Most researchers will say the prize money is only part of the excitement of this competition--and in any case the winning members from AT&T Labs will be handing their winnings over to their corporate sponsor. A major draw for researchers was access to Netflix's enormous database of real-world ratings. The company also maintained an academic flavor by requiring that winners publish their findings in the open literature, and by hosting a discussion board where competitors compared results and strategy.
We don't know how many other companies have taken advantage of these open results, but Netflix certainly has. Netflix's Chief Product Officer Neil Hunt said the company has already incorporated the two or three most effective algorithms from the interim "progress prizes." Moreover, "we've measured a retention improvement" among customers, Hunt said. The company is still evaluating which of the several hundred algorithms that were blended together to win the prize will be incorporated into future recommendations, since the results need to be generated very rapidly. "We still have to assess the complexity of introducing additional algorithms," Hunt said.
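For readers wondering what "blending" means here: in its simplest form, each algorithm's output becomes one input to a combined prediction, and a weight is learned for each. Here's a minimal sketch in Python, with made-up numbers, of least-squares blending. This is my illustration of the general idea, not the winners' actual pipeline, which combined hundreds of models with far more sophisticated machinery:

```python
# Minimal sketch of linear blending: learn one weight per base model
# by least squares on a set of ratings whose true values are known.
import numpy as np

def fit_blend_weights(base_predictions, true_ratings):
    """base_predictions: (n_ratings, n_models) matrix, one column per model.
    true_ratings: (n_ratings,) vector of actual star ratings.
    Returns one weight per model, minimizing squared prediction error."""
    weights, *_ = np.linalg.lstsq(base_predictions, true_ratings, rcond=None)
    return weights

def blend(base_predictions, weights):
    """The blended prediction is just a weighted sum of the models' outputs."""
    return base_predictions @ weights

# Hypothetical example: three models scoring the same five user/movie pairs.
preds = np.array([[3.1, 3.4, 2.9],
                  [4.2, 4.0, 4.5],
                  [1.8, 2.2, 2.0],
                  [3.9, 3.6, 4.1],
                  [2.5, 2.8, 2.4]])
actual = np.array([3.0, 4.5, 2.0, 4.0, 2.5])
w = fit_blend_weights(preds, actual)
print(blend(preds, w))  # blended estimate for each rating
```

The punch line of the competition was that even weak models earn a nonzero weight if their errors differ from everyone else's, which is why blending hundreds of them kept paying off.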
At the ceremony, the company didn't talk much about the other features, beyond predicting "star" ratings, that define good recommendation systems. As discussed in my CACM story, these include aspects of the user interface, such as the way users are encouraged to enter data and the way results are presented. In addition, a good recommender needs to go beyond predictably satisfying choices to include serendipitous ones that a customer would not find on their own.
Rather than take on these more psychological challenges, the "Second Netflix Prize" will address the algorithmic challenge of making predictions for customers who haven't rated many movies, for example, those who just signed up or who don't feel like providing ratings. To augment this "sparse" data, Netflix will provide competitors with various other tidbits, including demographic information like zip code and data about prior movie orders. But not, Hunt hastened to add, names or credit-card numbers.
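To see concretely why sparse ratings are hard, consider this toy sketch (my illustration, not anything Netflix has specified): a user's own average rating is a noisy estimate when it rests on only a handful of movies, so a standard trick is to shrink it toward the overall average, trusting the user's own data only in proportion to how much of it exists:

```python
# Damped user mean: with few ratings, lean on the global average;
# with many ratings, lean on the user's own history.
def predict_user_mean(user_ratings, global_mean, damping=10.0):
    """Acts like 'damping' phantom ratings at the global mean were added
    to the user's history, so an empty history yields the global mean."""
    n = len(user_ratings)
    return (sum(user_ratings) + damping * global_mean) / (n + damping)

global_mean = 3.6  # hypothetical average over all ratings in the system
print(predict_user_mean([], global_mean))           # new user -> exactly 3.6
print(predict_user_mean([5.0, 4.5], global_mean))   # ~3.79, pulled toward 3.6
print(predict_user_mean([5.0] * 50, global_mean))   # ~4.77, mostly the user's own mean
```

The zero-ratings case is exactly the regime the new prize targets, which is why the extra demographic and ordering data matters: it gives the algorithm something to condition on before any stars arrive.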
As my earlier story discussed, "implicit" user information like those prior orders is of growing importance for recommender systems. For one thing, it's harder to distort this kind of input by pumping up certain products with fake ratings. In addition, although Netflix can easily cajole customers into taking the time to enter ratings, many commercial sites are more limited and have only implicit data to work with.
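As a rough illustration of getting by on implicit data alone (again my sketch, with invented rental histories, not any particular site's method), one of the simplest approaches is item-to-item co-occurrence: count how often two movies appear in the same customer's history, then recommend whatever co-occurs most with what a user has already watched:

```python
# Toy implicit-feedback recommender: no star ratings at all, just
# which movies were rented together by the same customers.
from collections import Counter
from itertools import combinations

# Hypothetical rental histories, one set of movie titles per customer.
histories = [
    {"Alien", "Aliens", "Blade Runner"},
    {"Alien", "Blade Runner", "Brazil"},
    {"Aliens", "Blade Runner"},
]

# Count co-occurrences over unordered movie pairs.
co_counts = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_counts[(a, b)] += 1

def recommend(seen, k=2):
    """Score every unseen movie by its total co-rentals with 'seen'."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a in seen and b not in seen:
            scores[b] += n
        elif b in seen and a not in seen:
            scores[a] += n
    return [movie for movie, _ in scores.most_common(k)]

print(recommend({"Alien"}))  # ['Blade Runner', 'Aliens']
```

Note also the fraud-resistance point: faking this signal requires generating actual rental histories, which is far costlier than submitting bogus five-star ratings.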
The new prize doesn't set any explicit performance goals. Instead, Netflix plans to award $500,000 each to the best performers as of April 2010 and April 2011. But most of the winners today weren't sure they were going to sign on to the new challenge. They were too tired.