Jan 5, 2008

Cause vs. Effect of Human Preference

"One crucial unsolved problem for recommender systems is how best to learn about a new user".
Getting to Know You: Learning New User Preferences in Recommender Systems
Rashid et al., 2002


"Success comes from understanding both data and people"
Open Issues in Recommender Systems
John Riedl, Bilbao Recommenders School, 2006


"The problem with recommendation systems is... it measures and acts upon the effect, not the cause".
Response to “UIEtips Article: Watch and Learn: Recommendation Systems are Redefining the Web”
Adam Smith, 2006

So far, the internet has been all about effect. What other people say they like, you might also like; what you liked in the past suggests what you may like in the future. Google does it with PageRank; Amazon.com and Netflix do it with their recommender systems. They act based on your, or others', past preferences (the effect) rather than the cause of those preferences. As you interact with the web, applications record your actions and choices in order to build a filter with which to formulate suggestions you might appreciate in the future.

But this is not the way the natural social process of recommendation seeking works. If you really want a good recommendation, you ask someone who knows you well, as an individual. This is the way good friends do it. We accept recommendations from good friends because they understand our core identity (hopefully) and have no ulterior motives (hopefully). For example, as a single guy, I will never again go on a blind date unless the intermediary is a good friend who understands my taste, attitudes, values, personality, etc., as well as those of the prospective date. One could make an assumption based on my past dates and relationships, but it would be an assumption based on insufficient (see below) and indirect data: the effect rather than the cause.

What is the cause? Preferences do not appear out of thin air; they are a result of your core identity: some combination of nature and nurture, your genes and your cultural and social influences, the configuration of your brain. This is the direct cause of your preferences: it is your preference engine. Unfortunately, it is a black box that we cannot really open. Possibly in the future there will be a scanning device that can capture and replicate your precise neural configuration. With this copy, and sufficient understanding of the human mind, we might be able to accurately predict your choices. In making a choice, the steps are:

  1. Core Identity + Exposure -> Preferences (i.e. Brazilian Supermodels)
  2. Preferences + Availability -> Choices      (Damn!)
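The two steps above can be sketched as a toy model. Everything here is invented for illustration: the genres, the affinity weights, and the sets are hypothetical, and real identity is of course not a lookup table. The point it demonstrates is structural: a taste can exist in the "engine" yet never surface, because exposure filters it out before choice ever happens.

```python
# Toy rendering of the two-step model. All names and weights are hypothetical.

# Hidden affinities: the "preference engine" we cannot actually open.
core_identity = {"samba": 0.9, "metal": 0.8, "pop": 0.2}

# What this person has actually encountered so far.
exposure = {"samba", "jazz", "pop"}

# Step 1: Core Identity + Exposure -> Preferences
# Only exposed items can become expressed preferences.
preferences = {item: w for item, w in core_identity.items() if item in exposure}

# Step 2: Preferences + Availability -> Choices
availability = {"pop", "samba"}
choice = max((i for i in preferences if i in availability), key=preferences.get)

print(choice)
# "metal" never surfaces as a choice, even though the affinity for it
# is strong, because it was never in the exposure set.
```

A system that observes only `choice` sees the effect; the cause (`core_identity`) stays hidden behind the exposure filter.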

Current recommender systems, such as collaborative filters, attempt to simulate a filter at the second stage. What we need is a way to accurately simulate your filter at the first: not quite a copy of your brain - but close.
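To make the second-stage idea concrete, here is a minimal sketch of user-based collaborative filtering, roughly the family of techniques Amazon- and Netflix-style systems were built on (this is a generic textbook version, not any company's actual algorithm; the users, songs, and ratings are invented). It predicts what you might like purely from past ratings, yours and your neighbors': effect in, effect out.

```python
# Minimal user-based collaborative filtering: recommend items a user has not
# rated, weighted by how similar other users' past ratings are to theirs.
from math import sqrt

# Hypothetical rating data (users x items, 1-5 scale).
ratings = {
    "alice": {"song_a": 5, "song_b": 3, "song_c": 4},
    "bob":   {"song_a": 4, "song_b": 3, "song_d": 5},
    "carol": {"song_b": 1, "song_c": 5, "song_d": 2},
}

def cosine_similarity(u, v):
    """Similarity between two users over the items both have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user, ratings):
    """Score unseen items as a similarity-weighted average of neighbors' ratings."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
                weights[item] = weights.get(item, 0.0) + sim
    predictions = ((i, scores[i] / weights[i]) for i in scores if weights[i] > 0)
    return sorted(predictions, key=lambda pair: -pair[1])

print(recommend("alice", ratings))
```

Note what the algorithm never touches: why alice rated anything the way she did, or what she would think of music no one in the rating matrix has heard. It can only interpolate within the recorded effects.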

Your preferences are also extremely limited by your limited exposure. Take music as an example. I love music, but I have only heard a tiny fraction of a percent of all music. So how the hell can my current favorites be expected to be entirely descriptive of my true taste or ultimate favorites? I have been exposed to that which is largely popular, better marketed, in English, etc. Music recommender applications suffer from this limitation: they consider only what I have already heard, and so they receive highly skewed data about my true taste. It would be great to have a good friend who is the ultimate "long tail" DJ and can match me to music based on his knowledge of my core identity and detailed knowledge about all music and musical tastes.

"Thus, the task is not so much to see what no one yet has seen, but to think what nobody yet has thought about that which everybody sees".
– Arthur Schopenhauer

It seems obvious that far better recommendations would result from an intimate knowledge of a person's core identity. But identity is mysterious and unapproachable; better left to the fantasies of pipe-smoking psychologists. In reality, it is the chain around the elephant's leg. We all have the tools to break free from the constraints of assumption, but smart people have not previously applied themselves to the task.

2 comments:

JTRiedl said...

There's some good insight in this post. I think the deepest observation is that current recommendation technology cannot recommend outside of the space of taste observations it has seen, so users don't get recommendations for things they have no idea even exist that they might like very much. My guess is that people won't find these recommendations very interesting if they're not "prepared" for them. That is, we can move in a trajectory from where we are in taste space to where we might go, but we cannot jump easily from one point to another without cognitive dissonance.

I think there are also mistaken ideas in this post, though. Practically there may be a big difference between a good friend who knows me well and can make recommendations to me, and a machine learning algorithm that has seen my behavior and is making recommendations based on that behavior. However, it's important to remember that philosophically, there is not a fundamental difference. Neither the person nor the machine has studied my computational machinery in any serious way; both are basing their recommendations entirely on externally visible actions. If we're not careful, we end up defining recommendations as things that can only happen if we have solved the problem of "strong AI", and can fully simulate a human. The existence of good friends who are good recommenders despite not having a strong AI model of how my brain works demonstrates that is not necessary.

(My arguments are based on the hypothesis that the human theory of mind is primarily based on observation (a posteriori), rather than based on reflection (my brain works like this, therefore your brain must be similar). It is possible that that hypothesis is incorrect, and that humans are able to reflect on their own computational mechanisms in a way that yields insight into the way others' brains work, in ways that are not available to a computer that cannot simulate a human brain. I don't have good scientific evidence one way or the other, but my own experience is that people are lousy at understanding other people's computation mechanisms.) (One of my friends, a psychology Ph.D., encodes this perspective as "don't reason based on your guesses about other people's motivations". Instead, he encourages reasoning based on their observed behavior.)

John

Steve Ruttenberg said...

John, I personally am thrilled when I discover something new that I really enjoy. Ideally this is the function of recommender systems; not simply to re-introduce you to something you already know exists. There may be people who need to be semi-familiar with something in order to appreciate it upon re-introduction, but that's not me or most of my friends. I think most people are prepared to evaluate new media, ideas, products and other people, though there may need to be a rumination period.

A good friend understands you better than simply the sum of a few of your past choices. As much as your choices are based on your attitudes, values and personality, a good friend has a real advantage (practically and philosophically). Your choices are only the effect of what a good friend already knows about you. Of course, a good friend may have severe limitations of exposure, perception, and natural ability to intuit a potential match between the person and the item.

Recommendations do not require 'strong AI', and our system does not attempt it. But a filter should become more accurate the better it 'understands' a person's way of thinking (the cause). A good friend understands your brain, often better than you do. I think people build a model of people they have relationships with. Often it is an idealized model, often highly simplified, often starting out identical to their self-model, but nonetheless constantly refined.