Archive

Posts Tagged ‘Social Web’

Kevin Marks on the Social Cloud

December 11, 2010 Leave a comment

This talk by Kevin Marks from Lift ’08 is over two years old now, but I really like the story of how the “younger generation” views the Web (or the Cloud) – it’s just there, it’s like oxygen:

As I dig deeper into the technologies of the open, social web it’s nice to be reminded that the whole point of these technologies is in many ways to make the infrastructure invisible.

glenn

Social Design Patterns – Crumlish and Malone presentation at IDEA 2009

September 22, 2010 Leave a comment

A nice presentation at the Information Architecture Institute’s IDEA 2009 conference by Christian Crumlish, curator of Yahoo!’s pattern library, and Erin Malone. A link to the presentation can be found here – scroll down to the Social Design Patterns Mini-Workshop presentation. The slide deck that accompanies the presentation is shown below.



Crumlish and Malone are also the authors of the book Designing Social Interfaces – which I would list as one of my top 3 books on social web design, along with Josh Porter’s Designing for the Social Web, and Gavin Bell’s Building Social Web Applications.

glenn

Hyperlocal – Key Technologies

February 14, 2010 3 comments

This is the fourth in a series of posts on key dimensions of Hyperlocal. Other posts in this series are:

In this post we consider key enabling technologies that many of the hyperlocal platforms mentioned in previous posts will leverage.

Key Enabling Technologies

The initial post in this series identified the following key enabling technologies for Hyperlocal solutions:

  1. Identity and Personalization
  2. Social Media/Social Web
  3. Real-time Web
  4. Geolocation
  5. Search
  6. Mobile
  7. Machine Learning
  8. Structured Data/Semantic Web

Let’s explore each in turn.

*** Update January 5 2010 ***

It looks like ReadWriteWeb concurs with my identification of key enabling technologies for emerging web-based applications. See ReadWriteWeb’s Top 5 Web Trends of 2009. I think leaving out Geolocation is a fairly important omission on RWW’s part. I didn’t make reference to the Internet of Things in my list, but have referred to Web Meets World (another name for the same thing), and its impact on HyperLocal, in previous posts.
*** End of Update ***

Identity and Personalization

Identity is a key part of any online platform these days. Not only does Identity represent one’s online presence, but it’s also the basis for relating to others in the context of one’s social graph.

Chris Messina has some great insights into the emergence of Identity as a platform – here’s video of his Identity is the Platform presentation from October 2009, and the slideshow accompanying his talk.

The two key players positioned to dominate the Identity Platform space are:

Identity forms the foundation by which to deliver and manage personalized content for a user. I’m not going to discuss Personalization strategies in detail here, but ReadWriteWeb has an excellent piece on the topic.

Social Media and Social Web

I’m not sure too much needs to be said here. Obviously, Social Media and Social Networks, or what’s often referred to as the Social Graph, is a key feature of the Web today. If you’re going to host and service a Community on your website, you won’t get very far if you don’t design your website for the social web.

Interestingly, the Identity Platforms mentioned in the previous section – OpenID and Facebook Connect – allow you to import the Social Graph from external platforms into your Community site. Alternatively, you may also want to promote your content on other sites on the Social Web – including Twitter and Facebook.

Another important concept to be aware of in the context of the Web and HyperLocal is that of the Social Object. The Social Object is any piece of Content or information that a community might potentially socialize around. So for example, Twitter posts, news articles, photos, business listings, videos, URLs, movies … all are potential social objects that a community might share and discuss.

Social Media is any form of publishing that facilitates social collaboration and sharing of information, content, and conversation. Social Networking sites, Blogs, Wikis, Microblogging platforms, etc. all fall under this category.

The following are just a few of the more popular platforms on the social web:

It’s important to enable key forms of social behavior on your website, including sharing and bookmarking content, commenting, rating and reviewing, and so on. These are features that any social website should support, and that the key community platform players – such as Jive, Pluck, and Lithium – all provide.

Real-time Web

With the viral adoption of Twitter, the real-time web has really taken off of late. To understand the state of the Real-time Web heading into 2010, see the following:

The Real-time Web can be viewed from a number of different angles. Three are:

Real-time Feeds/Streams

This is the core of the Real-time Web – the underlying real-time feed protocol. Please see:

Real-time Search

Here, see:

Real-time Geo, or Geo-streams

Here, see:

For more on real-time geo and geolocation trends, see the Geolocation section that follows.

Managing the Real-time Firehose of Information

With the Real-time Web, information bursts forth as a massive stream – or firehose – of information, which is then filtered or consumed according to one’s particular social filters and interests. It can be overwhelming at first, as Nova Spivack discusses here.
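As a toy illustration of that kind of social filtering, here’s a sketch that reduces a stream of updates to just the ones matching a user’s interests. The function name and sample data are my own invention, not anything from the articles linked above:

```python
def filter_stream(stream, interests):
    """Yield only the updates that mention one of the user's interests --
    a toy stand-in for the social filtering applied to the firehose."""
    for update in stream:
        text = update.lower()
        if any(term in text for term in interests):
            yield update

firehose = ["Breaking: quake near the coast",
            "New cafe opens downtown",
            "Hyperlocal news app launches"]
list(filter_stream(firehose, {"hyperlocal", "downtown"}))
```

Real firehose consumers do far more (ranking, deduplication, social-graph weighting), but the principle is the same: the stream is too big to read, so interest filters decide what surfaces.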

Geolocation

… This post is a work-in-progress. Please return later to view the completed post.

glenn

Influence is the Future of Media – Ross Dawson

February 12, 2010 3 comments

Media “futurist” Ross Dawson has been talking a lot about Influence lately – and Influence’s role in Media, Marketing, Social Organization, etc. He had a nice blog post titled Top blog posts of 2009: 8 Perspectives on Influence in December 2009 that listed his top blog posts on Influence during 2009.

There’s some interesting stuff here, though nothing I haven’t encountered before. But Dawson aggregates various Social Media related trends – across Media, the Economy, Social Media, and Organizational structure and processes – under the broad category of Influence, and for me, anyway, that produced some unique insights.

Here’s a list of some of Dawson’s Influence-related posts over 2009 that caught my eye:

Food for thought here.

glenn

Ambient Intimacy – insight into the real-time Social Web

December 21, 2009 Leave a comment

I always like coming across a term that serves as a “hook” around which to describe a shared concept. In the Social Web space, Social Object (coined by Jyri Engestrom) is one such concept (see also here).

Another concept that’s been around for a while, but which I just recently stumbled across, is that of ambient intimacy, coined by Leisa Reichelt back in 2007. Here’s how Leisa describes the notion of ambient intimacy:

Ambient intimacy is about being able to keep in touch with people with a level of regularity and intimacy that you wouldn’t usually have access to, because time and space conspire to make it impossible. Flickr lets me see what friends are eating for lunch, how they’ve redecorated their bedroom, their latest haircut. Twitter tells me when they’re hungry, what technology is currently frustrating them, who they’re having drinks with tonight.

Who cares? Who wants this level of detail? Isn’t this all just annoying noise? There are certainly many people who think this, but they tend to be not so noisy themselves. It seems to me that there are lots of people for whom being social is very much a ‘real life’ activity and technology is about getting stuff done.

There are a lot of us, though, who find great value in this ongoing noise. It helps us get to know people who would otherwise be just acquaintances. It makes us feel closer to people we care for but in whose lives we’re not able to participate as closely as we’d like.

Knowing these details creates intimacy. (It also saves a lot of time when you finally do get to catchup with these people in real life!) It’s not so much about meaning, it’s just about being in touch.

Since Reichelt’s original post, she’s continued to elaborate on the topic. See Ambient Exposure, and Ambient Intimacy for the Next Generation.

glenn

Google Social Search

December 19, 2009 1 comment

Google is moving into the Social Search space as well. First there was the launch in Google Labs of Google Social Search:

Then there was the recent announcement at the Google Search Event that Google will be partnering with Facebook and MySpace (in addition to Twitter) to accept real-time feeds from these applications.

Google is certainly making rapid moves of late to integrate both the real-time web and the social web into its products. It will be interesting to watch moving forward.

glenn

Collective Intelligence – Part 5: Extracting Intelligence from Tags

November 17, 2009 4 comments

This is the fifth of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:

Introduction

So far in this series of posts, we’ve been introduced to some basic algorithms in CI, looked at various forms of user interaction, and explored how term vectors and similarity matrices are used to calculate the similarity between users, between items, and between users and items. In this post, we’ll explore how to gather intelligence from tags.

Alag introduces the topic of gathering intelligence from tags as follows:

Users tagging items—adding keywords or phrases to items—is now ubiquitous on the web. This simple process of a user adding labels or tags to items, bookmarking items, sharing items, or simply viewing items provides a rich dataset that can translate into intelligence, for both the user and the items. This intelligence can be in the form of finding items related to the one tagged; connecting with other users who have similarly tagged items; or drawing the user to discover alternate tags that have been associated with an item of interest and through that finding other related items.

With that introduction, let’s begin.

Introduction to Tagging

Quoting Alag:

Tagging is the process of adding freeform text, either words or small phrases, to items. These keywords or tags can be attached to anything in your application—users, photos, articles, bookmarks, products, blog entries, podcasts, videos, and more.

[Previously] we looked at using term vectors to associate metadata with text. Each term or tag in the term vector represents a dimension. The collective set of terms or tags in your application defines the vocabulary for your application. When this same vocabulary is used to describe both the user and the items, we can compute the similarity of items with other items and the similarity of the item to the user’s metadata to find content that’s relevant to the user.

In this case, tags can be used to represent metadata. Using the context in which they appear and to whom they appear, they can serve as dynamic navigation links.

In essence, tags enable us to:

  1. Build a metadata model (term vector) for our users and items. The common terminology between users and items enables us to compute the similarity of an item to another item or to a user.
  2. Build dynamic navigation links in our application, for example, a tag cloud or hyperlinked phrases in the text displayed to the user.
  3. Use metadata to personalize and connect users with other users.
  4. Build a vocabulary for our application.
  5. Bookmark items, which can be shared with other users.
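As a concrete illustration of point 2, a tag cloud is just a mapping from tag frequencies to display sizes. Here’s a minimal sketch – the function and the log-scaling choice are my own, not code from Alag’s book:

```python
import math

def tag_cloud_weights(tag_counts, min_size=1, max_size=5):
    """Map raw tag counts to display sizes using log scaling, so one
    runaway-popular tag doesn't dwarf everything else."""
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    spread = (hi - lo) or 1.0  # all counts equal -> avoid dividing by zero
    return {tag: min_size + (max_size - min_size) * (math.log(n) - lo) / spread
            for tag, n in tag_counts.items()}

weights = tag_cloud_weights({"python": 40, "tagging": 12, "folksonomy": 3})
# the most frequent tag gets max_size, the least frequent gets min_size
```

Linear scaling works too, but since tag popularity tends to follow a power law, log scaling usually gives a more readable cloud.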

Content-based vs. Collaborative-based Metadata

Alag emphasizes the distinction between content-based and collaborative-based sources of metadata. Quoting Alag:

In the content-based approach, metadata associated with the item is developed by analyzing the item’s content. This is represented by a term vector, a set of tags with their relative weights. Similarly, metadata can be associated with the user by aggregating the metadata of all the items visited by the user within a window of time.

In the collaborative approach, user actions are used for deriving metadata. User tagging is an example of such an approach. Basically, the metadata associated with the item can be computed by computing the term vector from the tags—taking the relative frequency of the tags associated with the item and normalizing the counts.

When you think about metadata for a user and item using tags, think about a term vector with tags and their related weights.
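For instance, the collaborative term vector for an item can be computed by normalizing its raw tag counts. A minimal sketch of the idea – my own illustration, not code from the book:

```python
import math

def term_vector_from_tags(tag_counts):
    """Turn an item's raw tag frequencies into a unit-length term vector
    of (tag, weight) pairs."""
    norm = math.sqrt(sum(n * n for n in tag_counts.values()))
    return {tag: n / norm for tag, n in tag_counts.items()}

vec = term_vector_from_tags({"jazz": 3, "vinyl": 4})
# weights 0.6 and 0.8 -- the vector has unit length
```

Normalizing to unit length means two items tagged with the same relative mix of tags look identical, regardless of how many users tagged each one.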

Categorizing Tags based on how they are generated

We can categorize tags based on who generated them. There are three main types of tags: professionally generated, user-generated, and machine-generated.

Professionally generated Tags

Again quoting Alag:

There are a number of applications that are content rich and provide different kinds of content—articles, videos, photos, blogs—to their users. Vertical-centric medical sites, news sites, topic-focused group sites, or any site that has a professional editor generating content are examples of such sites.

In these kinds of sites, the professional editors are typically domain experts, familiar with the content domain, and are usually paid for their services. The first type of tags we cover is tags generated by such domain experts, which we call professionally generated tags.

Tags that are generated by domain experts have the following characteristics:

  • They bring out the concepts related to the text.
  • They capture the associated semantic value, using words that may not be found in the text.
  • They can be authored to be displayed on the user interface.
  • They can provide a view that isn’t centered around just the content of interest, but provides a more global overview.
  • They can leverage synonyms—similar words.
  • They can be multi-term phrases.
  • The set of words used can be controlled, with a controlled vocabulary.

Professionally generated tags require a lot of manpower and can be expensive, especially if a large amount of new content is being generated, perhaps by the users. These characteristics can be challenging for an automated algorithm.

User-generated Tags

Back to Alag:

It’s now common to allow users to tag items. Tags generated by the users fall into the category of user-generated tags, and the process of adding tags to items is commonly known as tagging.

Tagging enables a user to associate freeform text to an item, in a way that makes sense to him, rather than using a fixed terminology that may have been developed by the content owner or created professionally.

[For example, consider the tagging process] at del.icio.us. Here, a user can associate any tag or keyword with a URL. The system displays a list of recommended and popular tags to guide the user.

Having users create tags in your application is a great example of leveraging the collective power of your users. Items that are popular will tend to be frequently tagged. From an intelligence point of view, what matters most for a user is which items people similar to the user are tagging.

User-generated tags have the following characteristics:

  • They use terms that are familiar to the user.
  • They bring out the concepts related to the text.
  • They capture the associated semantic value, using words that may not be found in the text.
  • They can be multi-term phrases.
  • They provide valuable collaborative information about the user and the item.
  • They may include a wide variety of terms that are close in meaning.

User-generated tags will need to be stemmed to take care of plurals and filtered for obscenity. Since tags are freeform, variants of the same tag may appear. For example, collective intelligence and collectiveintelligence may appear as two tags.

[Additionally,] you may want to offer recommended tags to the user based on the dictionary of tags created in your application and the first few characters typed by the user.
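That suggestion feature can be as simple as a prefix match against the existing tag dictionary, ranked by popularity. A hypothetical sketch (the function name and sample tags are mine):

```python
def recommend_tags(prefix, tag_counts, limit=5):
    """Suggest existing tags that start with what the user has typed,
    most popular first."""
    matches = [t for t in tag_counts if t.startswith(prefix.lower())]
    matches.sort(key=lambda t: (-tag_counts[t], t))
    return matches[:limit]

tags = {"collective intelligence": 18, "collaboration": 9, "cooking": 2}
recommend_tags("co", tags)
# -> ['collective intelligence', 'collaboration', 'cooking']
```

Guiding users toward existing tags this way also reduces the variant-tag problem mentioned above, since people converge on one spelling instead of inventing new ones.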

Machine-generated Tags

Tags or terms generated through an automated algorithm are known as machine-generated tags. Alag provides several examples in his book of extracting tags using an automated algorithm – for example, generating tags by analyzing the textual content of a document.

Again from Alag:

An algorithm generates tags by parsing through text and detecting terms and phrases.

Machine-generated tags have the following characteristics:

  • They use terms that are contained in the text, with the exception of injected synonyms.
  • They’re usually single terms – multi-term phrases are more difficult to extract and are usually detected using a set of predefined phrases. These predefined phrases can be built using either professionally generated or user-generated tags.
  • They can generate a lot of noisy tags – tags that can have multiple meanings depending on the context (polysemy and homonyms). For example, the word gain can have a number of meanings – height gain, weight gain, stock price gain, capital gain, amplifier gain, and so on. Again, detecting multi-term phrases, which are a lot more specific than single terms, can help solve this problem.

In the absence of user-generated and professionally generated tags, machine-generated tags are the only alternative. This is especially true for analyzing user-generated content.
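A bare-bones version of such an algorithm – tokenize the text, drop stopwords, keep the most frequent terms – might look like the following. The stopword list and function are my own simplification, not Alag’s implementation:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "to", "and", "is", "in", "that", "it", "for"}

def machine_tags(text, k=3):
    """Extract the k most frequent non-stopword terms as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

doc = "Tagging is the process of adding tags to items. Tags describe items."
machine_tags(doc)  # the frequent terms "tags" and "items" rank first
```

Even this crude frequency counting surfaces reasonable tags; real implementations add stemming, phrase detection, and the synonym injection mentioned above.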

How to leverage Tags in your application

Alag leads off this section of his book with the following:

It’s useful to build metadata by analyzing the tags associated with an item and placed by a user. This metadata can then be used to find items and users of interest for the user. In addition to this, tagging can be useful to build dynamic navigation in your
application, to target search, and to build folksonomies. In this section, we briefly review these three use cases.

I’m not going to explore the specific use cases that Alag covers in his book. Again, you know where to find the details. :)

Other topics

Alag concludes his chapter on extracting intelligence from tagging with:

  1. An example that illustrates the process of extracting intelligence from user tagging, and
  2. Thoughts on building a scalable persistence architecture for tagging

Exploring the tagging example and Alag’s thoughts on a persistence architecture for tagging is beyond the introductory scope of this post. Please see Alag’s book for more information.

In Summary

Hopefully this post has given you a bit of a flavor of how Tags are used to surface collective intelligence in a social web application. In the final post in this series, I’ll be exploring extracting intelligence from textual content.

Also in this series

Collective Intelligence – Part 4: Calculating Similarity

November 17, 2009 4 comments

This is the fourth of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:

Determining Similarity using a Similarity Matrix

The essential task in developing collective intelligence is determining similarity between things – between users and items, between different items, and between groups of users.

In Collective Intelligence, this typically involves computing similarities in the form of a similarity matrix (or similarity table). A similarity matrix compares the values in two Term Vectors, and computes the relative similarity between comparable entries in each term vector. Please refer to this previous post, for a brief introduction to terms and term vectors.

In chapter 2 of his book, Alag calculates similarity tables using 3 basic approaches:

  1. Cosine-based similarity
  2. Correlation-based similarity
  3. Adjusted-cosine-based similarity

I’m not going to get into the specific differences between the different methods, but I will provide a general example (from Alag’s book) to illustrate the approach.

User similarity in rating Photos

The example Alag gives involves 3 different users rating 3 different photos. They express their ranking of a photo as a number between 1 and 5. These ratings are displayed in the table below:

If we were to calculate a similarity matrix (using the cosine-based approach) comparing how similar the photos are to each other, we’d get the following table:

This table tells us that Photo1 and Photo2 are very similar. The closer to 1 a value in the similarity table is, the more similar the items are to each other.

You can use the same approach to calculate the similarity between users’ preferences for the photos. If we do the calculations, we get the following results:

Here we see that Jane and Doe are very similar.

In his book, Alag details the specific algorithm for calculating each of the above similarity tables, and shows the different results obtained using the 3 methods listed above (i.e. the cosine-based, correlation-based, and adjusted-cosine-based methods). He also provides examples based on user ratings for photos, as well as user rankings of articles based on which articles they bookmarked. However, the basic approach is the same as illustrated above.
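To give a flavor of the cosine-based approach, here’s a small sketch that builds the item-item similarity table from a ratings matrix. The rating numbers are illustrative stand-ins of my own, not the actual figures from Alag’s tables:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: dot product over magnitudes."""
    dot = sum(a * b for a, b in zip(u, v))
    mag = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / mag

# rows are users, columns are Photo1..Photo3, rated 1-5
ratings = {
    "John": [3, 4, 2],
    "Jane": [2, 2, 4],
    "Doe":  [1, 1, 5],
}

# item-item similarity: compare the columns (each photo's rating vector)
photos = list(zip(*ratings.values()))
sim = [[cosine(p, q) for q in photos] for p in photos]
# the diagonal is 1.0 (every photo is identical to itself)
```

Applying the same cosine function to the rows instead of the columns yields the user-user similarity table.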

In Summary

In this post we looked at the basic task of calculating similarity between items and users. In the next post, we’ll look at the specific scenario of extracting intelligence from tags.

Also in this series

Collective Intelligence – Part 3: Gathering Intelligence from User Interaction

November 17, 2009 4 comments

This is the third of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:

Introduction – Applying CI in your Application

Alag states that there are three things that need to happen to apply collective intelligence in your application.

You need to:

  1. Allow users to interact with your site and with each other, learning about each user through their interactions and contributions.
  2. Aggregate what you learn about your users and their contributions using some useful models.
  3. Leverage those models to recommend relevant content to your users.

This post will focus on the first of these steps: specifically the different forms of user interaction that capture the raw data used to derive collective intelligence in social web applications.

In Alag’s book, he provides persistence models for capturing this user interaction data. In this post, however, I will not be discussing the specific persistence models that model these user interactions. Please pick up a copy of Alag’s book if you are interested in the details of how the data collected from these user interactions are captured in underlying persistence models.

Gathering Intelligence from User Interaction

Quoting Alag:

To extract intelligence from a user’s interaction in your application, it isn’t enough to know what content the user looked at or visited. You need to quantify the quality of the interaction. A user may like the article or may dislike it, these being two extremes. What one needs is a quantification of how much the user liked the item relative to other items.

Remember, we’re trying to ascertain what kind of information is of interest to the user. The user may provide this directly by rating or voting for an article, or it may need to be derived, for example, by looking at the content the user has consumed. We can also learn about the item that the user is interacting with in the process.

In this section, we look at how users provide quantifiable information through their interactions. … Some of the interactions such as ratings and voting are explicit in the user’s intent, while other interactions such as using clicks are noisy – the intent of the user isn’t perfectly known and is implicit.

Alag discusses 6 examples of user interaction from which collective intelligence data might be extracted. These are:

  1. Rating and Voting
  2. E-mailing or Forwarding a Link
  3. Bookmarking and Saving
  4. Purchasing Items
  5. Click-stream
  6. Reviews

I would generalize “e-mailing and forwarding a link” to the broader “forwarding and sharing content”, of which “e-mailing and forwarding a link” is a variation.

This post will provide a very light treatment of some of the forms of user interaction from which collective intelligence is derived. As mentioned above, I will not be exploring the persistence models that capture the user data from these interactions.

So, first up, rating and voting.

Rating and Voting

Quoting Alag:

Asking the user to rate an item of interest is an explicit way of getting feedback on how well the user liked the item. The advantage of a user rating content is that the information provided is quantifiable and can be used directly.

Alag has a very nice section on the specific data and persistence models that underlie the rating and voting data captured from user interaction. Please refer to his book for this additional detail.

Forwarding and Sharing Content

Forwarding and sharing is another activity that can be considered a positive vote for an item. Alag briefly discusses a variation of this activity in the form of a user e-mailing or forwarding a link.

Bookmarking and Saving

A few quick comments from Alag:

Online bookmarking services such as del.icio.us allow users to store and retrieve URLs, also known as bookmarks. Users can discover interesting links that other users have bookmarked through recommendations, hot lists, and other such features. By bookmarking URLs, a user is explicitly expressing interest in the material associated with the bookmark. URLs that are commonly bookmarked bubble up higher in the site.

The process of saving an item or adding it to a list is similar to bookmarking and provides similar information.

Bookmarking and saving is another user interaction activity for which Alag explores the underlying persistence model.

Purchasing Items

In an e-commerce site, when users purchase items, they’re casting an explicit vote of confidence in the item – unless the item is returned after purchase, in which case it’s a negative vote. Recommendation engines, for example the one used by Amazon, can be built by analyzing the purchase history of users. Users who buy similar items can be correlated, and items that have been bought by similar users can be recommended to a user.

Click-stream

Quoting Alag:

So far we’ve looked at fairly explicit ways of determining whether a user liked or disliked a particular item, through ratings, voting, forwarding, and purchasing items. When a list of items is presented to a user, there’s a good chance that the user will click on one of them based on the title and description. But after quickly scanning the item, the user may find the item to be not relevant and may browse back or search for other items.

A simple way to quantify an article’s relevance is to record a positive vote for any item clicked. This approach is used by Google News to personalize the site. To further filter out the noise, such as items the user didn’t really like, you could look at the amount of time the user spent on the article. Of course, this isn’t foolproof. For example, the user could have left the room to get some coffee or been interrupted when looking at the article. But on average, simply looking at whether an item was visited and the time spent on it provides useful information that can be mined later.

You can also gather useful statistics from this data:

  • What is the average time a user spends on a particular item?
  • For a user, what is the average time spent on any given article?
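Both statistics fall out of a simple aggregation over (user, item, time-spent) events. A hypothetical sketch, with invented event data:

```python
from collections import defaultdict

def average_times(events):
    """events: (user, item, seconds_spent) tuples from the click-stream.
    Returns (average seconds per item, average seconds per user)."""
    per_item, per_user = defaultdict(list), defaultdict(list)
    for user, item, secs in events:
        per_item[item].append(secs)
        per_user[user].append(secs)
    avg = lambda xs: sum(xs) / len(xs)
    return ({i: avg(v) for i, v in per_item.items()},
            {u: avg(v) for u, v in per_user.items()})

events = [("ann", "article-1", 120), ("bob", "article-1", 30),
          ("ann", "article-2", 60)]
by_item, by_user = average_times(events)
# article-1 averages 75s across users; ann averages 90s per article
```

Comparing a user’s time on an item against that user’s own average is one way to separate genuine interest from the coffee-break noise mentioned above.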

Reviews

Web 2.0 is all about connecting people with similar people. This similarity may be based on similar tastes, positions, opinions, or geographic location. Tastes and opinions are often expressed through reviews and recommendations. These have the greatest impact on other users when:

  • They’re unbiased
  • The reviews are from similar users
  • They’re from a person of influence

Depending on the application, the information provided by a user may be available to the entire population of users, or may be privately available only to a select group of users.

Perhaps the biggest reasons why people review items and share their experiences are to be discovered by others and for boasting rights. Reviewers enjoy the recognition, and typically like the site and want to contribute to it. Most of them enjoy doing it. A number of applications highlight the contributions made by users, by having a Top Reviewers list. Reviews from top reviewers are also typically placed toward the top and featured more prominently. Sites may also feature one of their top reviewers on the site as an incentive to contribute.

Here again, Alag provides additional commentary around the persistence model underlying Reviews. See the book for details.

In Summary

In this post, we (very) briefly explored forms of user interaction that provide the raw data that applications use to derive collective intelligence and provide useful and relevant content to their users. In future posts in this series, we’ll explore how collective intelligence algorithms are used to aggregate this content, and provide useful insight and information to the users of a social web application.

Also in this series

Collective Intelligence – Part 2: Basic Algorithms

November 16, 2009 5 comments

This is the second of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:

Introduction

Quoting Alag (which I’ll be doing a lot of!):

In order to correlate users with content and with each other, we need a common language to compute relevance between items [or Social Objects], between users, and between users and items. Content-based relevance is anchored in the content itself, as is done by information retrieval systems. Collaborative-based relevance leverages the user interaction to discern meaningful relationships. Also, since a lot of content is in the form of unstructured text, it’s helpful to understand how metadata can be developed from unstructured text. In this section, we cover these three fundamental concepts of learning algorithms.

We begin by abstracting the various types of content, so that the concepts and algorithms can be applied to all of them.

Users and Items

Quoting Alag:

As shown in [the figure below], most applications generally consist of users and items. Items may be articles (both user-generated and professionally developed), videos, photos, blog entries, questions and answers posted on message boards, or products and services sold in your application. If your application is a social-networking application, or if you’re looking to connect one user with another, then a user is also a type of item.

Alag continues:

Associated with each item is metadata, which may be in the form of professionally-developed keywords, user-generated tags, keywords extracted by an algorithm after analyzing the text, ratings, popularity ranking, or just about anything that provides a higher level of information about the item and can be used to correlate items together.

When an item is a user, in most applications there’s no content associated with a user (unless your application has a text-based descriptive profile of the user). In this case, metadata for a user will consist of profile-based data and user-action based data.

There are three main sources of developing metadata for an item: (i) attribute-based, (ii) content-based, and (iii) user-action based. Alag discusses these next.

Attribute-based

Quoting Alag:

Metadata can be generated by looking at the attributes of the user or the item. The user attribute information is typically dependent on the nature of the domain of the application. It may contain information such as age, sex, geographical location, profession, annual income, or education level. Similarly, most nonuser items have attributes associated with them. For example, a product may have a price, the name of the author or manufacturer, the geographical location where it’s available, and so on.

Content-based

Metadata can be generated by analyzing the contents of a document. As we see in the following sections, there’s been a lot of work done in the area of information retrieval and text mining to extract metadata associated with unstructured text. The title, subtitles, keywords, frequency counts of words in a document and across all documents of interest, and other data provide useful information that can then be converted into metadata for that item.

User-action based

Metadata can be generated by analyzing the interactions of users with items. User interactions provide valuable insight into preferences and interests. Some of the interactions are fairly explicit in terms of their intentions, such as purchasing an item, contributing content, rating an item, or voting. Other interactions are a lot more difficult to discern, such as a user clicking on an article and the system determining whether the user liked that item or not. This interaction can be used to build metadata about the user and the item.

Alag advises thinking about users and items having an associated vector of metadata attributes. The similarity or relevance between two users or two items or a user and item can be measured by looking at the similarity between the two vectors.
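Alag develops this in Java; as a rough, language-agnostic sketch (the attribute weights below are invented purely for illustration), the similarity between two such metadata vectors is commonly computed as the cosine of the angle between them:

```python
import math

def cosine_similarity(v1, v2):
    """Cosine similarity between two equal-length metadata attribute vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

# Two hypothetical items described by weights over the same three attributes
item_a = [0.8, 0.1, 0.0]
item_b = [0.7, 0.2, 0.1]
print(round(cosine_similarity(item_a, item_b), 3))  # 0.979
```

A value near 1 means the two vectors point in nearly the same direction, i.e. the items (or users) are highly similar.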

Content-based Analysis and Collaborative Filtering

Alag explains that User-centric applications aim to make the application more valuable for users by applying CI to personalize the site. There are two basic approaches to personalization: content-based and collaboration-based.

Content-based Analysis

Again, quoting Alag:

Content-based approaches analyze the content to build a representation for the content. Terms and phrases (multiple terms in a row) appearing in the document are typically used to build this representation. Terms are converted into their basic form by a process known as stemming. Terms with their associated weights, commonly known as term vectors, then represent the metadata associated with the text. Similarity between two content items is measured by measuring the similarity associated with their term vectors.

A user’s profile can also be developed by analyzing the set of content the user interacted with. In this case, the user’s profile will have the same set of terms as the items, enabling you to compute the similarities between a user and an item. Content-based recommendation systems do a good job of finding related items, but they can’t predict the quality of the item – how popular an item is or how a user will like the items. This is where collaborative-based methods come in.

Collaborative Filtering

A collaborative-based approach aims to use the information provided by the interactions of users to predict items of interest to a user. For example, in a system where users rate items, a collaborative-based approach will find patterns in the way items have been rated by the user and other users to find additional items of interest for a user. This approach aims to match a user’s metadata to that of other similar users and recommend items liked by them. Items that are liked by or popular with a certain segment of your user population will appear often in their interaction history – viewed often, purchased often, and so forth. The frequency or occurrence of ratings provided by users are indicative of the quality of the item or the appropriate segment of your user population. Sites that use collaborative filtering include Amazon, Google, and Netflix.

Continuing:

There are two main approaches in collaborative filtering: memory-based and model-based. In memory-based systems, a similarity measure is used to find similar users and then make a prediction using a weighted average of the ratings of the similar users. This approach can have scalability issues and is sensitive to data sparseness. A model-based approach aims to build a model for prediction using a variety of approaches: linear algebra, probabilistic methods, neural networks, clustering, latent classes, and so on. They normally have fast runtime predicting abilities.
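To make the memory-based approach concrete, here is a minimal, hypothetical sketch (not taken from Alag's book) of predicting a rating as the similarity-weighted average of ratings from similar users:

```python
def predict_rating(similarities, ratings):
    """Memory-based CF: predict a target user's rating for an item as the
    similarity-weighted average of ratings from users similar to the target."""
    numerator = sum(sim * r for sim, r in zip(similarities, ratings))
    denominator = sum(abs(sim) for sim in similarities)
    return numerator / denominator if denominator else 0.0

# Three similar users rated the item 4, 5, and 3;
# their similarities to the target user are 0.9, 0.6, and 0.3
print(round(predict_rating([0.9, 0.6, 0.3], [4, 5, 3]), 2))  # 4.17
```

Note how the most similar user (0.9) pulls the prediction toward her rating of 4 more strongly than the least similar user (0.3) does toward 3.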

Since a lot of information that we deal with is in the form of unstructured text, Alag proceeds to review basic concepts about how intelligence is extracted from unstructured text.

Representing Intelligence from Unstructured Text

Alag begins this section as follows:

This section deals with developing a representation for unstructured text by using the content of the text. Fortunately, we can leverage a lot of work that’s been done in the area of information retrieval. This section introduces you to terms and term vectors, used to represent metadata associated with text.

Continuing:

Let’s consider an example where the text being analyzed is the phrase “Collective Intelligence in Action.”

In its most basic form, a text document consists of terms—words that appear in the text. In our example, there are four terms: Collective, Intelligence, in, and Action. When terms are joined together, they form phrases. Collective Intelligence and Collective Intelligence in Action are two useful phrases in our document.

The Vector Space Model representation is one of the most commonly used methods for representing a document. A document is represented by a term vector, which consists of terms appearing in the document and a relative weight for each of the terms. The term vector is one representation of metadata associated with an item. The weight associated with each term is a product of two computations: term frequency and inverse document frequency.

Term frequency (TF) is a count of how often a term appears. Words that appear often may be more relevant to the topic of interest. Given a particular domain, some words appear more often than others. For example, in a set of books about Java, the word Java will appear often. We have to be more discriminating to find items that have these less-common terms: Spring, Hibernate, and Intelligence. This is the motivation behind inverse document frequency (IDF). IDF aims to boost terms that are less frequent.

Commonly occurring terms such as a, the, and in don’t add much value in representing the document. These are commonly known as stop words and are removed from the term vector. Terms are also converted to lowercase. Further, words are stemmed—brought to their root form—to handle plurals. For example, toy and toys will be stemmed to toi. The position of words, for example whether they appear in the title, keywords, abstract, or the body, can also influence the relative weights of the terms used to represent the document. Further, synonyms may be used to inject terms into the representation.

To recap, here are the four steps Alag presents for analyzing text:

  1. Tokenization – Parse the text to generate terms. Sophisticated analyzers can also extract phrases from text.
  2. Normalize – Convert them into a normalized form such as converting text into lower case.
  3. Eliminate stop words – Eliminate terms that appear very often.
  4. Stemming – Convert the terms into their stemmed form to handle plurals.
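The four steps above can be sketched in a few lines of Python. This is a toy pipeline with an invented stop-word list and a deliberately crude stemmer; a real system would use a proper analyzer and a Porter-style stemmer, as Alag's book does with Lucene:

```python
import re

STOP_WORDS = {"a", "an", "the", "in", "of", "and"}

def naive_stem(term):
    # Very crude stemmer for illustration only: strips a trailing "s"
    return term[:-1] if term.endswith("s") and len(term) > 3 else term

def analyze(text):
    # 1. Tokenization: parse the text to generate terms
    terms = re.findall(r"[a-zA-Z]+", text)
    # 2. Normalize: convert terms to lowercase
    terms = [t.lower() for t in terms]
    # 3. Eliminate stop words
    terms = [t for t in terms if t not in STOP_WORDS]
    # 4. Stemming: reduce terms to a root form
    return [naive_stem(t) for t in terms]

print(analyze("Collective Intelligence in Action"))
# ['collective', 'intelligence', 'action']
```

The surviving terms, with weights attached, become the document's term vector.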

Computing Similarities

Quoting Alag:

So far we’ve looked at what a term vector is and have some basic knowledge of how they’re computed. Let’s next look at how to compute similarities between them. An item that’s very similar to another item will have a high value for the computed similarity metric. An item whose term vector has a high computed similarity to that of a user’s will be very relevant to a user—chances are that if we can build a term vector to capture the likes of a user, then the user will like items that have a similar term vector.

A term vector is a vector whose direction and magnitude are determined by the weights for each of the terms. The term vector has multiple dimensions—thousands to possibly millions, depending on your application.

Multidimensional vectors are difficult to visualize, but the principles used can be illustrated by using a two-dimensional vector, as shown below.

Alag, again:

Given a vector representation, we normalize the vector such that its length is of size 1 and compare vectors by computing the similarity between them. Chapter 8 develops the Java classes for doing this computation. For now, just think of vectors as a means to represent information with a well-developed math to compute similarities between them.
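A hypothetical sketch of that normalization step (Alag implements it in Java; the term weights here are invented): once each term vector is scaled to unit length, a plain dot product yields the cosine similarity directly:

```python
import math

def normalize(vec):
    """Scale a sparse term vector (term -> weight) to unit length."""
    length = math.sqrt(sum(w * w for w in vec.values()))
    return {term: w / length for term, w in vec.items()} if length else vec

def dot(v1, v2):
    # With unit-length vectors, the dot product is the cosine similarity
    return sum(w * v2.get(term, 0.0) for term, w in v1.items())

doc1 = normalize({"collective": 2.0, "intelligence": 1.0})
doc2 = normalize({"collective": 1.0, "action": 1.0})
print(round(dot(doc1, doc2), 3))  # 0.632
```

Storing the vectors as dictionaries keyed by term also sidesteps the dimensionality problem: only terms that actually occur take up space.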

Types of Datasets

In this section of the book, Alag discusses the difference between densely- and sparsely-populated datasets. The difference?

  • A densely-populated dataset has more rows than columns, with a value for each cell. The classic example of a densely-populated dataset is a database table, where every record has an entry for every, or nearly every, field.
  • A sparsely-populated dataset is a dataset where each row has very few entries per column. For example, an Amazon customer may potentially be associated with any book in Amazon’s inventory. In this example, each book in Amazon’s universe would potentially be a field in the customer’s record (or vector). However, a record representing the books that a customer had viewed or bought would only contain entries for a very few of these many books. Thus, the table that associated all Amazon users with potentially all of Amazon’s books would be a “sparse” dataset.
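The contrast can be seen in a small, hypothetical sketch: a dense representation reserves a slot for every item in the catalog, while a sparse one stores only the entries that actually exist:

```python
# Dense: one slot per catalog item, almost all of them zero
num_items = 1_000_000
dense = [0] * num_items
dense[42] = 5        # user rated item 42
dense[100_000] = 3   # user rated item 100000

# Sparse: store only the nonzero entries
sparse = {42: 5, 100_000: 3}

# Both answer the same lookups, but the sparse form grows with the
# user's interactions rather than with the size of the catalog
assert dense[42] == sparse.get(42, 0)
assert dense[7] == sparse.get(7, 0)
print(len(sparse))  # 2
```

This is why collaborative filtering algorithms are typically designed around sparse data structures.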

Well, that about wraps it up for this blog post. In the next blog post in this series, we’ll look at the many forms of user interaction in a social application, and how they are converted into collective intelligence.

glenn

Also in this series
