Traditionally, the success of a recommender system is evaluated offline, by measuring the accuracy of its predictions against an existing dataset. Consider the million-dollar Netflix Prize, awarded for a mere 10% improvement over Netflix's own collaborative filtering algorithm: Netflix released roughly 100 million of its customers' movie ratings so that new algorithms could be trained and then tested against held-out ratings. In other words, an algorithm is judged more accurate the better it predicts ratings for movies the user has already seen. Recommendations optimized for this traditional accuracy metric are not the most useful to users.
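To make the offline protocol concrete, here is a minimal sketch of the hold-out evaluation behind the Netflix Prize: some known ratings are withheld from training, the model predicts them, and root-mean-square error (the Prize's metric) scores the result. The ratings data and the `predict` function here are hypothetical stand-ins for a real dataset and trained model. Notice that the model is scored only on items the user has already rated.

```python
import math

# Hypothetical held-out test set: (user, item) -> true rating,
# withheld from training.
held_out = {("alice", "heat"): 5.0, ("bob", "amelie"): 2.0}

def predict(user: str, item: str) -> float:
    """Stand-in for a trained recommender; returns a predicted rating."""
    return 3.5  # placeholder prediction

def rmse(test_set: dict) -> float:
    """Root-mean-square error over held-out ratings: lower = 'more accurate'."""
    sq_errors = [(predict(u, i) - r) ** 2 for (u, i), r in test_set.items()]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

print(f"RMSE on held-out ratings: {rmse(held_out):.3f}")
```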
Researchers know that the success of recommendations is better measured by recording user satisfaction - the positive emotional response to having discovered something new that one likes. But satisfaction is more difficult to measure, as it requires a community of users and a mechanism to compel (or at least strongly encourage) them to report their satisfaction, its strength, and perhaps its type. Satisfaction appears to increase across the following recommendation types, listed in ascending order (a rough decision-rule sketch follows the list):
- A low quality, low accuracy recommendation. Users obviously don't appreciate having their time wasted evaluating something the system should have known they would be unlikely to appreciate. These are "trust-busters": the user loses trust in the system.
- An accurate, but known recommendation. An item the user is already aware of. The user likes the item, but it is not novel. Trust is maintained because the system at least recommended something the user already likes. Too many of these recommendations, however, suggest an excess of false negatives, or "missed opportunities".
- A novel, but obvious recommendation. Something new and appreciated, but something the user would have discovered on his/her own; for example, a new song from a favorite musician, or a new movie from a favorite director. The user will have a positive, though muted, reaction. Many users will still suspect "missed opportunities", given the huge number of unfamiliar items in any domain.
- A serendipitous recommendation. Something new, non-obvious, and appreciated that the user would likely not have discovered on his/her own; for example, an unfamiliar song from an unfamiliar musician, or an unfamiliar movie from an unfamiliar director. The user will likely have a very positive reaction, though it has been argued that some users may see such recommendations as obscure and not appreciate them immediately.
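As a rough illustration of this taxonomy (an illustration only, not a method the post proposes), the four types can be expressed as a decision over three per-item signals: whether the user is predicted to like the item, whether the user already knows it, and whether they would likely have found it unaided. All three signals are hypothetical; a real system would have to estimate them from user history and item metadata.

```python
def classify_recommendation(liked: bool, known: bool, obvious: bool) -> str:
    """Rough decision rule for the four recommendation types above."""
    if not liked:
        return "low quality"         # a "trust-buster"
    if known:
        return "accurate but known"  # the user already likes this item
    if obvious:
        return "novel but obvious"   # e.g. a new album by a favorite artist
    return "serendipitous"           # new, non-obvious, and appreciated
```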
The serendipitous recommendation is obviously the ideal for most users. The problem is that collaborative filters tend to focus on what is commonly known and popular: items the user has already heard about, or items the user would eventually have experienced anyway because of their "blockbuster nature". Many of the items most interesting to a given user are buried in the "long tail", so some collaborative filtering systems tweak their algorithms to maximize serendipitous recommendations by demoting the more popular ones. Even so, recommendation diversity tends to remain low in collaborative filtering systems, leading to a large number of false negatives, or "missed opportunities".
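One common form of that tweak is to discount each candidate's collaborative-filtering score by a function of its popularity, pushing long-tail items up the ranking. The sketch below assumes a simple `(item, score, num_ratings)` representation and is not any particular system's algorithm:

```python
import math

def rerank_for_long_tail(candidates, alpha=0.5):
    """candidates: list of (item, cf_score, num_ratings) tuples.
    Discounts the collaborative-filtering score by log-popularity;
    alpha controls how aggressively popular items are demoted."""
    def adjusted(c):
        item, score, num_ratings = c
        return score - alpha * math.log1p(num_ratings)
    return sorted(candidates, key=adjusted, reverse=True)

# Hypothetical candidates: a blockbuster vs. a long-tail item.
print(rerank_for_long_tail([("blockbuster", 4.6, 500_000),
                            ("obscure gem", 4.4, 800)]))
```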
Recommendations based on a user's core identity do not focus on the popular, or on items from artists or directors the user likes, or on items the user's friends like. Instead, the user is recommended items from the entire item landscape that, by definition, he/she is most likely to appreciate given that core identity (their "preference engine"). The recommendation diversity (coverage of item space) within a domain such as music is thus as large as the diversity of items in that domain, leading to a large number of serendipitous recommendations - possibly the vast majority. Keep in mind that the number of domains in our community is also unlimited, and the same core identity can be used to recommend anything and everything in life.
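Here is a minimal sketch of what such a "preference engine" might look like, under the assumption (mine, not the post's) that both the core identity and the items can be represented as weights over shared attribute dimensions. Every item in every domain is scored directly against the one identity, with no dependence on popularity or on what similar users liked:

```python
def score(identity: dict, item_attrs: dict) -> float:
    """Dot product of the user's core-identity weights with an item's
    attribute weights; both dicts map attribute name -> strength."""
    return sum(w * item_attrs.get(attr, 0.0) for attr, w in identity.items())

def recommend(identity, catalog, k=3):
    """Rank the entire item landscape (any domain) against one identity."""
    return sorted(catalog, key=lambda item: score(identity, item["attrs"]),
                  reverse=True)[:k]

# Hypothetical identity and a cross-domain catalog.
identity = {"melancholy": 0.9, "minimalism": 0.7, "humor": 0.1}
catalog = [
    {"name": "ambient album", "domain": "music",
     "attrs": {"melancholy": 0.8, "minimalism": 0.9}},
    {"name": "slapstick film", "domain": "movies",
     "attrs": {"humor": 0.95}},
]
print(recommend(identity, catalog, k=1))
```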