Jan 14, 2008

One Degree of Separation

Social networks rely on your primary network - your existing friends and contacts - to introduce you to THEIR friends and contacts. Each person in the network is called a 'node', and each node has one or more connections to other nodes. Each of those connections is sometimes called a degree of separation; a friend of a friend (FOAF) would then be two degrees of separation. The famous phrase "six degrees of separation" was based on work by psychologist Stanley Milgram, who determined that any two Americans, connected in the nationwide extended network, are separated by an average of five intermediaries, i.e. six steps or degrees.
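To make the counting concrete, here is a minimal sketch (the friendship graph and names are invented for illustration) that measures degrees of separation as the number of hops found by a breadth-first search over a contact graph:

```python
from collections import deque

# Hypothetical friendship graph: each person maps to their direct contacts.
friends = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob", "erin"},
    "erin":  {"dave"},
}

def degrees_of_separation(graph, start, target):
    """Return the number of hops (degrees) from start to target, or None if unreachable."""
    if start == target:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        for contact in graph.get(person, ()):
            if contact == target:
                return depth + 1
            if contact not in visited:
                visited.add(contact)
                queue.append((contact, depth + 1))
    return None  # not connected at all

print(degrees_of_separation(friends, "alice", "bob"))   # 1: a direct friend
print(degrees_of_separation(friends, "alice", "dave"))  # 2: a friend of a friend (FOAF)
print(degrees_of_separation(friends, "alice", "erin"))  # 3
```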

Despite their connectedness, two people separated by so long a chain are extremely unlikely to ever meet. In fact, we usually only ever meet the friends of our friends: an extremely small fraction of the larger network. Web services like LinkedIn, the business contact network, track your chain out to three degrees of separation - though I wonder how often the third degrees ever connect. [Friendster tracked the chain even further, and this pursuit has been credited with Friendster's downfall, as tracking long chains is computationally expensive and demands far more hardware.]

Online Social Networks are not really social, and the network - as degrees of separation - serves mostly to separate. So, if one really wants to 'kill' social nets, one needs to get rid of the 'net' (the multiple degrees of separation that separate people) in order to bring people together. Jyri Engeström argues that social networks should not be based on individual connections between people that can be counted and accumulated; rather, people must be connected by shared objects. We agree and take this to the next level by making everything in the virtual community an object, where each object is connected to every other object.

The New Paradigm

As proposed in the last post, what is lacking in the current data islands and the proposed schema solutions is a way of harnessing the true power of the collective to actually reduce information overload and increase discovery. This will require a revolution in content and relationship discovery that can only arise with a completely new kind of information filtration and recommender technology.

"The social web will be powered by recommender systems".
Open Issues in Recommender Systems
John Riedl, Bilbao Recommenders School, 2006

The true power of the collective will be realized with the proper integration of social media, new universal discovery techniques, and associated detailed portable identity and personalization info. The result is a Social Web based on one degree of separation: all people and things are related to each other directly, with each such relationship differing only in type and strength. The following graphic is a representation of such a "one degree" circle of people relationships, but keep in mind that each person is also similarly related to all items, ideas, endeavors, etc. in the system as well.

Critical to this new paradigm are the new universal discovery techniques that I've hinted at previously. Current recommender systems, including collaborative filters, are too primitive and limited to accomplish the task. Instead, we have applied certain bioinformatics concepts to solve the puzzle of simulating the human preference engine without requiring "strong AI". This starts with a quick determination of a person's "core identity": that internal mechanism responsible for generating appreciation, sifting through the chaos, and making choices.

Determining that "core identity" is a critical breakthrough as it allows us to quantify the relationship (strength and type) between all people, and between all people and all other things in the system. It also can yield portable data that can be used to quantify such relationships between users and items from multiple data islands, and can even be used in mobile devices and in real-world activity. This discovery system involves no collaborative filtering, psychological testing or interpretation, statistical or stochastic methods, etc.
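The post does not describe how the "core identity" is actually computed, so the following is only a toy sketch of the data structure this paradigm implies: a direct, typed, weighted relationship between every pair of entities, people and things alike. The feature vectors and the cosine-similarity scoring below are my own simplifying assumptions, not the actual discovery technique.

```python
from itertools import combinations
from math import sqrt

# Hypothetical entities - people and items alike - each reduced to a small "identity" vector.
# (The vectors are invented; the real "core identity" data is not described in the post.)
entities = {
    "anna":        ("person", [0.9, 0.1, 0.4]),
    "ben":         ("person", [0.2, 0.8, 0.5]),
    "jazz_album":  ("music",  [0.8, 0.2, 0.3]),
    "hiking_club": ("group",  [0.1, 0.9, 0.6]),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# One degree of separation: every pair of entities gets a direct relationship,
# differing only in type (what is being related) and strength (a similarity score).
relationships = {}
for (name_a, (kind_a, vec_a)), (name_b, (kind_b, vec_b)) in combinations(entities.items(), 2):
    relationships[(name_a, name_b)] = {
        "type": f"{kind_a}-{kind_b}",
        "strength": round(cosine(vec_a, vec_b), 3),
    }

for pair, rel in relationships.items():
    print(pair, rel)
```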

"But there is no go-to discovery engine - yet. Building a personalized discovery mechanism will mean tapping into all the manners of expression, categorization, and opinions that exist on the Web today. It's no easy feat, but if a company can pull it off and make the formula portable so it works on your mobile phone - well, such a tool could change not just marketing, but all of commerce."
The race to create a 'smart' Google
by Jeffrey M. O'Brien, writing for Fortune Magazine

In addition to the current benefits of the social web, the integration of these universal discovery techniques will allow:

  • A brief one-page registration with no need for private information. This solves the 'cold start' problem for people as well as for items, ideas, endeavors, etc.
  • Immediate access to promising relationships of all types, i.e. universal recommendations. These relationships are the predicted interest and affinity between a person and all other people, music, movies, books, recreation, groups, products, services, ads, travel destinations, vocations, jobs, teams, politics, religion, ideas, websites, articles, news items, games, etc.
  • Portable data that can be compared and relationships quantified. This portable data can be used between social and data islands, for mobile devices and in real-world activity.
  • No language or cultural barriers: no folksonomy or semantic constraints.
  • No need for existing relationships. Emphasis is on relationship discovery, though existing friends and contacts are revealing.
  • No need to observe history of actions and choices. A one-page registration is enough to provide significantly more information, and better information, than collaborative filters can accumulate.
  • The new system will act as a good friend who knows you well and delivers trusted recommendations of all types, both solicited and unsolicited.
  • Reduced privacy concerns as personal or demographic data is unnecessary.
  • Automatic person-level granularity. Each relationship has a strength and type.
  • Universal recommendations allow for highly successful affiliations of all types, direct sales and downloads, and highly targeted advertising as the diverse business model.
  • Ratio of discovery to effort is high. No need for constant messages, spam, requests, friend searches, etc.
  • Discovery is filtration, so 'information overload' and the 'tyranny of choice' are greatly reduced.
  • Enables highly personalized search engine functionality, news aggregation, and many other forms of person-level information filtration.
  • Constant excitement of discovery, so no "what's next?" reaction. No limit to novelty and interest, little boredom. No feeling of wasted time.
  • Highly useful and usable: the keys to success of any product or service.

Jan 8, 2008

Social Standardization and the Death of Social Networks

"...we’re reaching an inflection point where some fundamental conceptions of the web (and social networks) need to change".
from Stop building social networks, by Chris Messina

It seems that everybody is predicting the end of something due to something else, typically calling the latter a 'killer app'. Are VOIP and email replacing the phone and fax? Is social media replacing Google search, email, and communication in general? Is IM replacing email? Well, who would have predicted that the trusty typewriter would disappear in the span of a few years? It seems many are making another prediction: Social Nets are on their way out, at least in their current configuration. In this post, I'll talk about the problems and proposed solutions.

Social Nets are hugely popular and are obviously doing something right. They were clearly a revolution in online communication and information sharing. Let's first list why people enjoy them. They allow you to:

  1. express yourself and try to look cool
  2. people-watch / voyeurism / "gawk at strangers"
  3. 'collect friends' and compete to see who has more
  4. waste time doing semi-fun alone stuff with apps, etc.
  5. keep in touch with existing friends (the primary network)
  6. make new friends, dates and business contacts (the largely unfulfilled promise of the 'network')
  7. manage your personal data
  8. exchange knowledge and information
  9. re-connect with old friends and colleagues

As for the negatives, here are some of the points mentioned on the blogosphere:

  1. 'Friend collecting' is not 'social'. No real communication takes place, and no real friends are made. Checkmarking someone as a friend is not being social. Not much relationship building going on.
  2. Information Overload is not reduced, quite the opposite: too many people, messages, spam, etc. There is a limit to our ability to absorb information: our internal filters cannot handle it.
    "There isn’t enough time in the day for any person to find value in what a 1,000 people have to say - our internal filters just won’t allow it. At some point all that information; whether it be valuable or just fluff, becomes nothing more than white noise".
    from Enough with the social crap I think I’m gonna puke, by Steven Hodson
  3. "Massive waste of time" / "It takes too much time" / 'Social Net Fatigue'
  4. Privacy concerns / 'abuse of trust'. Services track user activity on and off the service, and post some of those activities to the "friends". Combining information from multiple sources may reveal private information.
  5. Social nets are 'Walled Gardens'. They are not portable - information is trapped within the bounds of each service. New users must re-enter profile information, must search and re-add network contacts, and must reset notification and privacy preferences for each new social net joined.
  6. Social nets are by definition 'network-centric'. Most users are exposed only to friends of friends (i.e. two degrees of separation). This presents an obstacle to discovering true friends and contacts, most of the potential being outside of your network.
  7. No Business Model beyond popularity and possibly advertising. Also, because new users on social networks often misrepresent themselves and enter false personal information, demographic data for advertisers is unreliable.
  8. The "superficial emptiness"
  9. The "what's next?" phenomenon (after exhausting the novelty of the site) / Lack of Innovation
  10. Not granular enough - no ability to group friends and contacts in categories, or indicate how close or trustworthy those relationships are.
  11. Tired of having to add friends or accept friend requests in all of these networks.
  12. Use a given service only because that's where your friends are.

Proposed Solutions:

Many feel that Identity/Info concepts like OpenID, OpenSocial, FOAF, the 'Semantic Web', and Microformats have great potential in solving a few of the above problems.

"a distributed, user-centric identity scheme would destroy almost every "walled garden" social software application on the web".
from Identity Management Will Destroy Social Software, by Brian 'Bex' Huff

The idea is that each internet user would have a single universal and portable profile that would be used and understood by all services, thereby eliminating the need to enter and configure the same information and connections on every new service. Ideally, this would have the effect of removing the walls between services, creating a single large community or 'cloud' where "relationships transcend networks/documents".

The social and data islands that dot the internet can clearly be helped by some kind of standardized profile that can be uploaded to (and modified by) each service. The burden of registration and establishing relationships would be greatly reduced. Such a profile could grow to include all the data that a person might share, including photos and information, music, movie, and web site favorites, etc. As long as all services agree on standardization, this should work pretty well. As an example, browser standardization is largely successful - though differences do exist and can be frustrating for developers and surfers alike.
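As a rough sketch of what such a portable profile could look like in practice (the field names below are hypothetical and do not reproduce the actual FOAF, OpenID, or Microformats vocabularies), a single structured document could be exported once and re-imported by every participating service:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical portable profile; real standards (FOAF, Microformats, OpenID)
# define their own vocabularies, which are not reproduced here.
@dataclass
class PortableProfile:
    handle: str
    display_name: str
    contacts: list = field(default_factory=list)     # handles of existing connections
    favorites: dict = field(default_factory=dict)    # e.g. {"music": [...], "movies": [...]}
    notification_prefs: dict = field(default_factory=dict)

profile = PortableProfile(
    handle="anna@example.org",
    display_name="Anna",
    contacts=["ben@example.org"],
    favorites={"music": ["jazz"], "movies": ["documentaries"]},
)

# Export from one service, then upload the same document to the next one,
# instead of re-entering profile data and re-adding contacts by hand.
exported = json.dumps(asdict(profile))
restored = PortableProfile(**json.loads(exported))
print(restored.display_name, restored.favorites)
```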

The Next Revolution:

Schemas, however, will not solve most of the issues mentioned above, and some are made worse (like privacy concerns). Some even argue that standardization and identity aggregation would not be entirely appreciated. However much schemas depend on FOAF information, most of the problems with social networks will remain. If one really wants to 'kill' social nets, one needs to get rid of the 'net' part, i.e. the degrees of separation. What is lacking in the current data islands and the proposed schema solutions is a way of harnessing the true power of the collective to actually reduce information overload and increase discovery. The next revolution in content and relationship discovery can only arise with a completely new kind of information filtration and recommender technology.

"The social web will be powered by recommender systems".
Open Issues in Recommender Systems
John Riedl, Bilbao Recommenders School, 2006

The true power of the collective can only be realized with the proper integration of social media, new universal discovery techniques, and associated detailed portable identity and personalization info. The result is a Social Web based on one degree of separation: all people and things are related to each other directly, with each such relationship differing only in type and strength. More on this new paradigm shortly.

Jan 5, 2008

Cause vs. Effect of Human Preference

"One crucial unsolved problem for recommender systems is how best to learn about a new user".
Getting to Know You: Learning New User Preferences in Recommender Systems
Rashid, et al, 2002


"Success comes from understanding both data and people"
Open Issues in Recommender Systems
John Riedl, Bilbao Recommenders School, 2006


"The problem with recommendation systems is... it measures and acts upon the effect, not the cause".
Response to “UIEtips Article: Watch and Learn: Recommendation Systems are Redefining the Web”
Adam Smith, 2006

So far, the internet has been all about effect. What other people say they like, you might also like; what you liked in the past suggests what you may like in the future. Google does it with PageRank; Amazon.com and Netflix do it with their recommender systems. They act based on your, or others', past preferences (the effect) rather than the cause of your past preferences. As you interact with the web, applications can record your actions and choices in order to create a filter with which to formulate suggestions that you might appreciate in the future.

But this is not the way the natural social process of recommendation seeking works. If you really want a good recommendation you ask someone who knows you well, as an individual. This is the way good friends do it. We accept recommendations from good friends because they understand our core identity (hopefully) and have no ulterior motives (hopefully). For example, as a single guy, I will never again go on a blind date unless the intermediary is a good friend who understands my taste and my attitudes, values, personality, etc., as well as that of the prospective date. One could make an assumption based on my past dates and relationships, but it would be an assumption based on insufficient (see below) and indirect data: the effect rather than the cause.

What is the cause? Preferences do not appear out of thin air; they are the result of your core identity: some combination of nature and nurture, your genes and your cultural and social influences, the configuration of your brain. This is the direct cause of your preferences: it is your preference engine. Unfortunately, it is a black box that we cannot really open. Possibly in the future there will be a scanning device that can capture and replicate your precise neural configuration. With this copy, and sufficient understanding of the human mind, we might be able to accurately predict your choices. In making a choice, the steps are:

  1. Core Identity + Exposure -> Preferences (i.e. Brazilian Supermodels)
  2. Preferences + Availability -> Choices      (Damn!)

Current recommender systems, such as collaborative filters, attempt to simulate a filter at the second stage. What we need is a way to accurately simulate your filter at the first: not quite a copy of your brain - but close.
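As a toy contrast only (the post treats the "core identity" as a black box, so everything below is an invented illustration rather than the actual method): a stage-two system infers future choices from past choices, while a stage-one system would score any item, seen or unseen, from a model of the person.

```python
# Toy contrast between the two stages; all data and weights are invented.

# Stage 2 (the effect): infer future choices from past choices.
past_choices = {"item_a", "item_c"}
co_chosen_with = {"item_a": ["item_b"], "item_c": ["item_d"]}  # learned from other users' histories

def effect_based_recommend(choices):
    # Suggest items that co-occurred with the user's past choices.
    return {rec for c in choices for rec in co_chosen_with.get(c, [])}

# Stage 1 (the cause): score any item directly from a model of the person.
core_identity = {"novelty": 0.9, "melancholy": 0.2}              # hypothetical identity features
item_features = {"item_e": {"novelty": 0.8, "melancholy": 0.1}}  # an item the user has never seen

def cause_based_score(identity, item):
    return sum(identity[k] * item.get(k, 0.0) for k in identity)

print(effect_based_recommend(past_choices))                       # item_b and item_d (set order may vary)
print(cause_based_score(core_identity, item_features["item_e"]))  # 0.74
```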

Your preferences are also severely constrained by your limited exposure. Take music as an example. I love music, but I have only heard a tiny fraction of a percent of all music. So how the hell can my current favorites be expected to be entirely descriptive of my true taste or ultimate favorites? I have been exposed to that which is largely popular, better marketed, in English, etc. Music recommender applications suffer from this limitation: they consider only what I have already heard, and so they receive highly skewed data about my true taste. It would be great to have a good friend who is the ultimate "long tail" DJ and can match me to music based on his knowledge of my core identity and detailed knowledge about all music and musical tastes.

"Thus, the task is not so much to see what no one yet has seen, but to think what nobody yet has thought about that which everybody sees".
– Arthur Schopenhauer

It seems obvious that far better recommendations would result from an intimate knowledge of a person's core identity. But identity is mysterious and unapproachable; better left to the fantasies of pipe-smoking psychologists. In reality, it is the chain around the elephant's leg. We all have the tools to break free from the constraints of assumption, but smart people have not previously applied themselves to the task.

Jan 2, 2008

Current Recommender Types

There are a number of types of recommender systems currently available. They vary significantly in their mode of action and ultimate user experience. In terms of results, recommender systems are expected to offer a sufficient number of good-quality recommendations ('New Favorites'). The quality of the results also depends on minimizing false positives ('Trust Busters') and false negatives ('Missed Opportunities'). In other words, users should not be shown inappropriate results and should not be denied appropriate results.
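In conventional evaluation terms, 'Trust Busters' are false positives and 'Missed Opportunities' are false negatives. A minimal sketch of how these would be counted against a (hypothetical) set of items a user actually likes:

```python
# Hypothetical evaluation data: what was recommended vs. what the user actually likes.
recommended    = {"song1", "song2", "song3", "song4"}
actually_liked = {"song2", "song4", "song5", "song6"}

new_favorites        = recommended & actually_liked   # true positives
trust_busters        = recommended - actually_liked   # false positives: inappropriate results shown
missed_opportunities = actually_liked - recommended   # false negatives: appropriate results denied

precision = len(new_favorites) / len(recommended)     # fraction of suggestions that were good
recall    = len(new_favorites) / len(actually_liked)  # fraction of good items that were found

print(sorted(new_favorites), sorted(trust_busters), sorted(missed_opportunities))
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.50 recall=0.50
```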

The quality of the user experience is also influenced by the time and effort required to give the recommender system enough information to make minimally reasonable recommendations. Users are sometimes asked to fill out lengthy questionnaires, or applications require that a user's history of choices or ratings be observed and recorded. It takes time and effort before things start working well. These days, users don't like to wait for anything and expect immediate gratification - delivering useful results immediately after a quick registration is known as a 'cold start'. However, existing applications that permit a 'cold start' lack anything close to sufficient information, explicit or implicit, required to make accurate, high-quality recommendations.

There are a number of strategies that recommender systems are taking today. These include:

  1. Non-personalized: "Web 1.0" technology offering the highest rated or most popular items to all users. No intrinsic personalization, poor quality results, but immediate.
  2. Demographic: Require some knowledge about the user in order to group similar users together (i.e. by age, gender, area code, other similar features). Poor quality recommendations, low personalization, though slightly better than the above. May require "private" information, and depending on the length of the questionnaire, registration can take time.
  3. Simple answer or ratings matching: Matches users based on explicit matching of answers, selections, ratings, etc. Makes recommendations with extremely limited scope, many missed opportunities, requires answers or observations.
  4. Heuristics, probabilistic models (Bayesian, Markov), decision trees, neural nets, etc.: An application must collect a large amount of user-item preferences, or user/item features, before quality recommendations are possible. This approach attempts to identify the underlying logic of a user's choices (or to apply certain assumptions, in the case of heuristics).
  5. User-based Collaborative Filtering: Similarity of historical choices or actions allows the application to find highly correlated users (see the sketch after this list). The assumption is that users who agreed in the past might tend to agree in the future. Limited immediate results; most items will not be rated/answered (sparsity). Users with non-typical opinions or taste (the 'long tail') may not get good recommendations.
  6. Item-based collaborative filtering: Finds items that tend to be preferred together. Limited immediate results, and users with non-typical opinions or taste may not get good recommendations.
  7. Content-Based: Find items with similar features (Keywords, author, genre, i.e. DNA) to known preferences of a user. Items must be properly and thoroughly represented as a set of features - this generally requires a large staff. Generally limited to a single domain as there may be few cross-domain features. Limited immediate results.
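As a minimal sketch of the user-based collaborative filtering idea in item 5 (the ratings and names are invented, and a simple agreement score stands in for the Pearson or cosine similarity a real system would use):

```python
# Toy user-based collaborative filtering; ratings are invented for illustration.
ratings = {
    "alice": {"matrix": 5, "alien": 4, "blade_runner": 5},
    "bob":   {"matrix": 5, "alien": 5},
    "carol": {"matrix": 1, "alien": 1, "amelie": 5},
}

def similarity(a, b):
    """Agreement score over the items both users rated: 1.0 means identical ratings."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    mean_diff = sum(abs(a[i] - b[i]) for i in common) / len(common)
    return 1.0 / (1.0 + mean_diff)

def recommend(user, k=1):
    """Suggest items that the k most similar users rated but `user` has not seen."""
    neighbours = sorted((similarity(ratings[user], ratings[o]), o)
                        for o in ratings if o != user)
    suggestions = {}
    for sim, other in neighbours[-k:]:                 # the top-k nearest neighbours
        for item, score in ratings[other].items():
            if item not in ratings[user]:
                suggestions[item] = max(suggestions.get(item, 0.0), sim * score)
    return sorted(suggestions, key=suggestions.get, reverse=True)

# bob agreed with alice on past ratings, so alice's other favorite is suggested to him.
print(recommend("bob"))  # ['blade_runner']
```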

There are many recommendation engines and recommender applications available on the internet and many more seem to be popping up all the time. Currently they all have severe limitations and offer mediocre to poor quality results when compared to, say, recommendations by a best friend. Examples of current applications include:

  • eHarmony requires a very lengthy questionnaire and uses a proprietary empirical heuristic to match people romantically. Its success depends on the quality of the questions and the heuristic, the person's willingness to answer truthfully, and the person's willingness to spend a few hours to register. Mixed results are reported, but there is certainly an advantage over matchmaking sites that allow daters to make their own bad choices.
  • Pandora and Last.fm both recommend music though they do so in different ways. Pandora's large staff must determine the separable features ("DNA") of a song and observe a user's choices in order to extract common features of a user's preference. Last.fm seems to work by grouping users of similar taste. Both suffer from reduced choice diversity for slightly different reasons. Both are mildly satisfactory, but also suffer from excessive false negatives and false positives, and require recording your existing preferences. Two roommates using the same account will likely see poor results.
  • Amazon.com's recommendations work by observing a user's choices and activity and grouping items (books, CDs, DVDs, etc.) that tend to be chosen or viewed by the same users. After viewing or choosing items, you are presented with: "users who liked X (the currently viewed item) also liked Y (a correlated item)" - a pattern sketched in the example after this list. As a typical failure case, users who buy for multiple people, such as children or friends, will likely see poor results.
  • Social DNA sounds like it works similarly to Pandora, but the granularity is significantly greater, and unlike eHarmony, there seems to be no heuristic - matching is all or nothing (i.e. explicit ratings and questions). This is expected to lead to extremely high false negatives and relatively few true positives, and - since matches will likely occur with only a tiny fraction of the possible DNA (highly limited explicit information yields a sparse matrix), considering the complexity of human beings - mostly false positives.
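The "users who liked X also liked Y" pattern described for Amazon above can be sketched as simple item co-occurrence counting (the purchase baskets are invented; Amazon's actual algorithm is not public in this detail):

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each set is one user's chosen items.
baskets = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "cd_c"},
    {"book_a", "book_b"},
    {"book_a", "cd_c"},
    {"book_b", "dvd_d"},
]

# Count how often each pair of items was chosen by the same user.
pair_counts = Counter()
for basket in baskets:
    for x, y in combinations(sorted(basket), 2):
        pair_counts[(x, y)] += 1

def also_liked(item, top_n=3):
    """Items most often chosen together with `item`: 'users who liked X also liked Y'."""
    scores = Counter()
    for (x, y), count in pair_counts.items():
        if x == item:
            scores[y] += count
        elif y == item:
            scores[x] += count
    return [other for other, _ in scores.most_common(top_n)]

print(also_liked("book_a"))  # ['book_b', 'cd_c'] - the items most often co-chosen with book_a
```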

In order to get relatively high quality and accurate recommendations, a large amount of explicit ratings/choices (and/or possibly implicit activity) must be recorded. This is extremely hard to do: users are less likely to maintain interest while the machine learns, and this will be increasingly true in the future. Currently, users must be content with mediocre results, but a trade-off will develop between accuracy/quality and user patience.

Another frequent limitation is that users can act maliciously or inappropriately to skew results. Due to the limitations of current applications, users may feel the need to modify or exaggerate their choices in order to get better results. On the other end, users who want to promote certain items to others may give or encourage false ratings, views or descriptions (called 'shilling') through manual or automated efforts or attacks. Also, privacy becomes an issue as users may explicitly or implicitly reveal private information about themselves: demographics, personal details, taste, ratings, opinions, etc. System administrators (and possibly hackers) will have free access to this data.

Accurate, high quality, robust and broad scope recommendations have been the holy grail for internet futurists for quite some time, though we are still a long way from that goal. The problem is largely technical: recommendations are a really tough problem. Mathematics/statistics, clever algorithms and artificial intelligence are stretching the results to the maximum, given the poor quality data available from users during registration or interaction with the application. The solution is to get high quality data about the user's identity or individuality and match based on that, rather than matching based on a user's history. The problem is that teaching the machine about the core identity of a person is science fiction. Or is it?