Posts Tagged ‘Machine Learning’

Large-scale Machine Learning and Data Mining using Hadoop – Hadoop World 2010

March 11, 2011 Leave a comment

A couple of interesting videos on large-scale machine learning and data mining using Hadoop from Hadoop World 2010.

1 – Large-Scale Text Analytics at AOL

The first is a presentation on Text Analysis from AOL:

Slides for the presentation can be found here.

AOL’s high-level text analytics architecture – built on top of HDFS – is shown in the visual below:

Related presentations on AOL’s use of Hadoop for Content Analytics and Ad Targeting can be seen below:

The Text Analytics modules perform analysis that is then fed into important AOL applications. Two targeted advertising examples – shown below – are Location-Aware Contextual Advertising and User Aware Ad Targeting:

2 – Sentiment Analysis at GE

The second presentation is from GE on large-scale Sentiment Analysis using Hadoop:



Hyperlocal – Key Technologies

February 14, 2010 3 comments

This is the fourth in a series of posts on key dimensions of Hyperlocal. Other posts in this series are:

In this post we consider key enabling technologies that many of the hyperlocal platforms mentioned in previous posts will leverage.

Key Enabling Technologies

The initial post in this series identified the following key enabling technologies for Hyperlocal solutions:

  1. Identity and Personalization
  2. Social Media/Social Web
  3. Real-time Web
  4. Geolocation
  5. Search
  6. Mobile
  7. Machine Learning
  8. Structured Data/Semantic Web

Let’s explore each in turn.

*** Update January 5 2010 ***

It looks like ReadWriteWeb concurs with my identification of key enabling technologies for emerging web-based applications. See ReadWriteWeb’s Top 5 Web Trends of 2009. I think leaving out Geolocation is a fairly important omission on RWW’s part. I didn’t make reference to the Internet of Things in my list, but have referred to Web Meets World (another name for the same thing), and its impact on HyperLocal, in previous posts.
*** End of Update ***

Identity and Personalization

Identity is a key part of any online platform these days. Not only does Identity represent one’s online presence, but it’s the basis for relating to others in the context of one’s social graph.

Chris Messina has some great insights into the emergence of Identity as a platform – here’s video of his Identity is the Platform presentation from October 2009, and the slideshow accompanying his talk.

The two key players positioned to dominate the Identity Platform space are:

Identity forms the foundation by which to deliver and manage personalized content for a user. I’m not going to discuss Personalization strategies in detail here, but ReadWriteWeb has an excellent piece on the topic.

Social Media and Social Web

I’m not sure too much needs to be said here. Obviously, Social Media and Social Networks, or what’s often referred to as the Social Graph, are a key feature of the Web today. If you’re going to host and serve a Community on your website, you won’t get very far if you don’t design your website for the social web.

Interestingly, the Identity Platforms mentioned in the previous section – OpenID and Facebook Connect – allow you to import the Social Graph from external platforms into your Community site. Alternatively, you may also want to promote your content on other sites on the Social Web – including Twitter and Facebook.

Another important concept to be aware of in the context of the Web and HyperLocal is that of the Social Object. The Social Object is any piece of Content or information that a community might potentially socialize around. So for example, Twitter posts, news articles, photos, business listings, videos, URLs, movies … all are potential social objects that a community might share and discuss.

Social Media is any form of publishing that facilitates social collaboration and sharing of information, content, and conversation. Social Networking sites, Blogs, Wikis, Microblogging platforms etc. all fall under this category.

The following are just a few of the more popular platforms on the social web:

It’s important to enable key forms of social behavior on your website, including sharing and bookmarking content, commenting, rating and reviewing, and so on. These are features that any social website should support, and that the key community platform players, such as Jive, Pluck, and Lithium, all provide.

Real-time Web

With the viral adoption of Twitter, the real-time web has really taken off of late. To understand the state of the Real-time Web heading into 2010, see the following:

The Real-time Web can be viewed from a number of different angles. Three are:

Real-time Feeds/Streams

This is the core of the Real-time Web – the underlying real-time feed protocol. Please see:

Real-time Search

Here, see:

Real-time Geo, or Geo-streams

Here, see:

For more on real-time geo and geolocation trends, see the Geolocation section that follows.

Managing the Real-time Firehose of Information

With the Real-time Web, information bursts forth as a massive stream – or firehose – of information, which is then filtered or consumed according to one’s particular social filters and interests. It can be overwhelming at first, as Nova Spivack discusses here.


… This post is a work-in-progress. Please return later to view the completed post.


Algorithmic Journalism – a “deep trend”

January 3, 2010 Leave a comment

Thought I’d muse today about a topic I’m going to call Algorithmic Journalism. I’ve noticed a fair bit of discussion lately on the use of algorithms (typically machine-learning algorithms) to make sense of, understand the relevance of, aggregate, and distribute news.

First off, the use of machine-learning algorithms and collective intelligence to determine the relevance of search results and content is commonplace today. They form the basis of Google’s search algorithms, and are heavily used by Amazon, Netflix, etc. However, machine learning in Newsrooms is another matter. And it’s the discussion of machine learning in the context of the News Media business whose waves are starting to wash up against the shorelines of my personal information space (i.e. Twitter and the real-time Web!)

Here are some of the articles/blog posts from the past few months that speak to this topic:

Note these articles were all written in the past few months. So the topic appears to be only recently breaking into the broader consciousness of the Journalism community.

I’d also point out that the evolution of Algorithmic Journalism is highly dependent on Semantic Web technologies. So look for the influence of the Semantic Web to continue to penetrate the Journalism industry.

Anyway, a topic to keep an eye on in 2010.


Google Goggles – Visual Search technology from Google

December 18, 2009 1 comment

Man, those folks at Google are innovating at breakneck speed. In yet another cool application of artificial intelligence technology, check out Google’s new Google Goggles visual search applications:



Collective Intelligence – Part 5: Extracting Intelligence from Tags

November 17, 2009 4 comments

This is the fifth of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:


So far in this series of posts, we’ve been introduced to some basic algorithms in CI, looked at various forms of user interaction, and explored how we use term vectors and similarity matrices to calculate the similarity between users, between items, and between items and users. In this post, we’ll explore how to gather intelligence from tags.

Alag introduces the topic of gathering intelligence from tags as follows:

Users tagging items—adding keywords or phrases to items—is now ubiquitous on the web. This simple process of a user adding labels or tags to items, bookmarking items, sharing items, or simply viewing items provides a rich dataset that can translate into intelligence, for both the user and the items. This intelligence can be in the form of finding items related to the one tagged; connecting with other users who have similarly tagged items; or drawing the user to discover alternate tags that have been associated with an item of interest and through that finding other related items.

With that introduction, let’s begin.

Introduction to Tagging

Quoting Alag:

Tagging is the process of adding freeform text, either words or small phrases, to items. These keywords or tags can be attached to anything in your application—users, photos, articles, bookmarks, products, blog entries, podcasts, videos, and more.

[Previously] we looked at using term vectors to associate metadata with text. Each term or tag in the term vector represents a dimension. The collective set of terms or tags in your application defines the vocabulary for your application. When this same vocabulary is used to describe both the user and the items, we can compute the similarity of items with other items and the similarity of the item to the user’s metadata to find content that’s relevant to the user.

In this case, tags can be used to represent metadata. Using the context in which they appear and to whom they appear, they can serve as dynamic navigation links.

In essence, tags enable us to:

  1. Build a metadata model (term vector) for our users and items. The common terminology between users and items enables us to compute the similarity of an item to another item or to a user.
  2. Build dynamic navigation links in our application, for example, a tag cloud or hyperlinked phrases in the text displayed to the user.
  3. Use metadata to personalize and connect users with other users.
  4. Build a vocabulary for our application.
  5. Bookmark items, which can be shared with other users.

Content-based vs. Collaborative-based Metadata

Alag emphasizes the distinction between content-based and collaborative-based sources of metadata. Quoting Alag:

In the content-based approach, metadata associated with the item is developed by analyzing the item’s content. This is represented by a term vector, a set of tags with their relative weights. Similarly, metadata can be associated with the user by aggregating the metadata of all the items visited by the user within a window of time.

In the collaborative approach, user actions are used for deriving metadata. User tagging is an example of such an approach. Basically, the metadata associated with the item can be computed by computing the term vector from the tags—taking the relative frequency of the tags associated with the item and normalizing the counts.

When you think about metadata for a user and item using tags, think about a term vector with tags and their related weights.
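The collaborative approach described above – taking the relative frequency of the tags associated with an item and normalizing the counts – can be sketched in a few lines of Python. (Alag’s book uses Java; the tags below are made-up data for illustration.)

```python
from collections import Counter

def term_vector(tags):
    """Build a term vector from raw tags: relative tag
    frequencies, normalized so the weights sum to 1."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {tag: count / total for tag, count in counts.items()}

# Tags applied to one item by several users (hypothetical data)
tags = ["hadoop", "mapreduce", "hadoop", "bigdata"]
vector = term_vector(tags)
# "hadoop" carries twice the weight of the other tags
```

The resulting dictionary of tag-to-weight pairs is exactly the “term vector with tags and their related weights” that the quote above asks you to keep in mind.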

Categorizing Tags based on how they are generated

We can categorize tags based on who generated them. There are three main types of tags: professionally generated, user-generated, and machine-generated.

Professionally generated Tags

Again quoting Alag:

There are a number of applications that are content rich and provide different kinds of content—articles, videos, photos, blogs—to their users. Vertical-centric medical sites, news sites, topic-focused group sites, or any site that has a professional editor generating content are examples of such sites.

In these kinds of sites, the professional editors are typically domain experts, familiar with the content domain, and are usually paid for their services. The first type of tags we cover is tags generated by such domain experts, which we call professionally generated tags.

Tags that are generated by domain experts have the following characteristics:

  • They bring out the concepts related to the text.
  • They capture the associated semantic value, using words that may not be found in the text.
  • They can be authored to be displayed on the user interface.
  • They can provide a view that isn’t centered around just the content of interest, but provides a more global overview.
  • They can leverage synonyms—similar words.
  • They can be multi-term phrases.
  • The set of words used can be controlled, with a controlled vocabulary.

Professionally generated tags require a lot of manpower and can be expensive, especially if a large amount of new content is being generated, perhaps by the users. These characteristics can also be challenging for an automated algorithm to replicate.

User-generated Tags

Back to Alag:

It’s now common to allow users to tag items. Tags generated by the users fall into the category of user-generated tags, and the process of adding tags to items is commonly known as tagging.

Tagging enables a user to associate freeform text to an item, in a way that makes sense to him, rather than using a fixed terminology that may have been developed by the content owner or created professionally.

[For example, consider the tagging process at a social bookmarking service.] Here, a user can associate any tag or keyword with a URL. The system displays a list of recommended and popular tags to guide the user.

The use of users to create tags in your application is a great example of leveraging the collective power of your users. Items that are popular will tend to be frequently tagged. From an intelligence point of view, for a user, what matters most is which items people similar to the user are tagging.

User-generated tags have the following characteristics:

  • They use terms that are familiar to the user.
  • They bring out the concepts related to the text.
  • They capture the associated semantic value, using words that may not be found in the text.
  • They can be multi-term phrases.
  • They provide valuable collaborative information about the user and the item.
  • They may include a wide variety of terms that are close in meaning.

User-generated tags will need to be stemmed to take care of plurals and filtered for obscenity. Since tags are freeform, variants of the same tag may appear. For example, collective intelligence and collectiveintelligence may appear as two tags.

[Additionally,] you may want to offer recommended tags to the user based on the dictionary of tags created in your application and the first few characters typed by the user.
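The tag-recommendation idea just mentioned can be sketched simply: match the user’s first few typed characters against the application’s tag dictionary. This is a minimal Python illustration, not the book’s implementation; the dictionary contents and the whitespace-insensitive matching rule are assumptions. (Normalizing whitespace also surfaces freeform variants like collective intelligence vs. collectiveintelligence together.)

```python
def recommend_tags(prefix, tag_dictionary, limit=5):
    """Suggest tags from the application's tag dictionary whose
    normalized form starts with what the user has typed so far."""
    prefix = prefix.lower().replace(" ", "")
    matches = [t for t in tag_dictionary
               if t.lower().replace(" ", "").startswith(prefix)]
    return sorted(matches)[:limit]

tag_dictionary = ["collective intelligence", "collectiveintelligence",
                  "machine learning", "hadoop"]
suggestions = recommend_tags("collec", tag_dictionary)
# both variants of "collective intelligence" are suggested
```

A production system would typically rank suggestions by popularity rather than alphabetically, but the lookup shape is the same.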

Machine-generated Tags

Tags or terms generated through an automated algorithm are known as machine-generated tags. Alag provides several examples in his book of extracting tags using an automated algorithm – for example, generating tags by analyzing the textual content of a document.

Again from Alag:

An algorithm generates tags by parsing through text and detecting terms and phrases.

Machine-generated tags have the following characteristics:

  • They use terms that are contained in the text, with the exception of injected synonyms.
  • They’re usually single terms—multi-term phrases are more difficult to extract and are usually detected using a set of predefined phrases. These predefined phrases can be built using either professional or user-generated tags.
  • They can generate a lot of noisy tags—tags that can have multiple meanings depending on the context, including polysemy and homonyms. For example, the word gain can have a number of meanings—height gain, weight gain, stock price gain, capital gain, amplifier gain, and so on. Again, detecting multi-term phrases, which are a lot more specific than single terms, can help solve this problem.

In the absence of user-generated and professionally generated tags, machine-generated tags are the only alternative. This is especially true for analyzing user-generated content.
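To make machine-generated tagging concrete, here is a naive Python sketch that extracts candidate tags by tokenizing text, dropping stopwords, and keeping the most frequent remaining terms. The stopword list is a toy one, and this is only an illustration of the idea—real extractors (including those in Alag’s book) add stemming, phrase detection, and synonym injection.

```python
import re
from collections import Counter

# Toy stopword list for illustration only
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for"}

def extract_tags(text, top_n=5):
    """Naive machine tagging: tokenize, drop stopwords,
    and keep the most frequent remaining terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

text = ("Hadoop is a framework for large-scale data processing. "
        "Hadoop clusters process data in parallel.")
tags = extract_tags(text, top_n=2)
# the repeated terms "hadoop" and "data" surface as tags
```

Note how single-term extraction already shows the noise problem described above: without phrase detection, “large-scale data processing” dissolves into disconnected words.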

How to leverage Tags in your application

Alag leads off this section of his book with the following:

It’s useful to build metadata by analyzing the tags associated with an item and placed by a user. This metadata can then be used to find items and users of interest for the user. In addition to this, tagging can be useful to build dynamic navigation in your application, to target search, and to build folksonomies. In this section, we briefly review these three use cases.

I’m not going to explore the specific use cases that Alag covers in his book. Again, you know where to find the details. 🙂

Other topics

Alag concludes his chapter on extracting intelligence from tagging with:

  1. An example that illustrates the process of extracting intelligence from user tagging, and
  2. Thoughts on building a scalable persistence architecture for tagging

Exploring the tagging example and Alag’s thoughts on a persistence architecture for tagging is beyond the introductory scope of this post. Please see Alag’s book for more information.

In Summary

Hopefully this post has given you a bit of a flavor of how Tags are used to surface collective intelligence in a social web application. In the final post in this series, I’ll be exploring extracting intelligence from textual content.

Also in this series

Collective Intelligence – Part 4: Calculating Similarity

November 17, 2009 4 comments

This is the fourth of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:

Determining Similarity using a Similarity Matrix

The essential task in developing collective intelligence is determining similarity between things – between users and items, between different items, and between groups of users.

In Collective Intelligence, this typically involves computing similarities in the form of a similarity matrix (or similarity table). A similarity matrix compares the values in two Term Vectors, and computes the relative similarity between comparable entries in each term vector. Please refer to this previous post for a brief introduction to terms and term vectors.

In chapter 2 of his book, Alag calculates similarity tables using 3 basic approaches:

  1. Cosine-based similarity
  2. Correlation-based similarity
  3. Adjusted-cosine-based similarity

I’m not going to get into the specific differences between the different methods, but I will provide a general example (from Alag’s book) to illustrate the approach.

User similarity in rating Photos

The example Alag gives involves 3 different users rating 3 different photos. They express their rating of a photo as a number between 1 and 5. These ratings are displayed in the table below:

If we were to calculate a similarity matrix (using the cosine-based approach) comparing how similar the photos are to each other, we’d get the following table:

This table tells us that Photo1 and Photo2 are very similar. The closer to 1 a value in the similarity table is, the more similar the items are to each other.

You can use the same approach to calculate the similarity between users’ preferences for the photos. If we do the calculations, we get the following results:

Here we see that Jane and Doe are very similar.

In Alag’s book, he details the specific algorithm for calculating each of the above similarity tables, and shows the different results obtained using the 3 methods listed above (i.e. cosine-based, correlation-based, and adjusted cosine-based methods). He also provides examples based on user ratings for photos, as well as user ranking of articles based on which articles they bookmarked. However, the basic approach is the same as illustrated above.
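For readers who want to see the mechanics of the cosine-based approach, here is a minimal Python sketch. The ratings below are hypothetical, not the actual numbers from Alag’s example, and the book’s implementation is in Java.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two rating vectors:
    dot product divided by the product of the vector lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical ratings (1-5): rows are users, columns are Photo1..Photo3
ratings = {
    "John": [3, 4, 2],
    "Jane": [2, 2, 4],
    "Doe":  [1, 3, 5],
}

# Pairwise user-similarity table; values near 1 mean very similar
users = list(ratings)
for i, u in enumerate(users):
    for v in users[i + 1:]:
        sim = cosine_similarity(ratings[u], ratings[v])
        print(f"{u} vs {v}: {sim:.2f}")
# With these numbers, Jane and Doe come out most similar
```

The correlation-based and adjusted-cosine variants follow the same shape; they just center the vectors (by user mean or item mean) before computing the cosine.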

In Summary

In this post we looked at the basic task of calculating similarity between items and users. In the next post, we’ll look at the specific scenario of extracting intelligence from tags.

Also in this series

Collective Intelligence – Part 3: Gathering Intelligence from User Interaction

November 17, 2009 4 comments

This is the third of a series of posts on the topic of programming Collective Intelligence in web applications. This series of posts will draw heavily from Satnam Alag’s excellent book Collective Intelligence in Action.

These posts will present a conceptual overview of key strategies for programming CI, and will not delve into code examples. For that, I recommend picking up Alag’s book. You won’t be disappointed!

Click on the following links to access previous posts in this series:

Introduction – Applying CI in your Application

Alag states that there are three things that need to happen to apply collective intelligence in your application.

You need to:

  1. Allow users to interact with your site and with each other, learning about each user through their interactions and contributions.
  2. Aggregate what you learn about your users and their contributions using some useful models.
  3. Leverage those models to recommend relevant content to your users.

This post will focus on the first of these steps: specifically the different forms of user interaction that capture the raw data used to derive collective intelligence in social web applications.

In Alag’s book, he provides persistence models for capturing this user interaction data. In this post, however, I will not be discussing the specific persistence models that model these user interactions. Please pick up a copy of Alag’s book if you are interested in the details of how the data collected from these user interactions are captured in underlying persistence models.

Gathering Intelligence from User Interaction

Quoting Alag:

To extract intelligence from a user’s interaction in your application, it isn’t enough to know what content the user looked at or visited. You need to quantify the quality of the interaction. A user may like the article or may dislike it, these being two extremes. What one needs is a quantification of how much the user liked the item relative to other items.

Remember, we’re trying to ascertain what kind of information is of interest to the user. The user may provide this directly by rating or voting for an article, or it may need to be derived, for example, by looking at the content the user has consumed. We can also learn about the item that the user is interacting with in the process.

In this section, we look at how users provide quantifiable information through their interactions. … Some of the interactions such as ratings and voting are explicit in the user’s intent, while other interactions such as using clicks are noisy – the intent of the user isn’t perfectly known and is implicit.

Alag discusses 6 examples of user interaction from which collective intelligence data might be extracted. These are:

  1. Rating and Voting
  2. E-mailing or Forwarding a Link
  3. Bookmarking and Saving
  4. Purchasing Items
  5. Click-stream
  6. Reviews

I would generalize “e-mailing and forwarding a link” to “forwarding and sharing content”, of which “e-mailing and forwarding a link” is a variation.

This post will provide a very light treatment of some of the forms of user interaction from which collective intelligence is derived. As mentioned above, I will not be exploring the persistence models that capture the user data from these interactions.

So, first up, rating and voting.

Rating and Voting

Quoting Alag:

Asking the user to rate an item of interest is an explicit way of getting feedback on how well the user liked the item. The advantage with a user rating content is that the information provided is quantifiable and can be used directly.

Alag has a very nice section on the specific data and persistence models that underlie the rating and voting data captured from user interaction. Please refer to his book for this additional detail.

Forwarding and Sharing Content

Forwarding and sharing is another activity that can be considered a positive vote for an item. Alag briefly discusses a variation of this activity in the form of a user e-mailing or forwarding a link.

Bookmarking and Saving

A few quick comments from Alag:

Online bookmarking services allow users to store and retrieve URLs, also known as bookmarks. Users can discover interesting links that other users have bookmarked through recommendations, hot lists, and other such features. By bookmarking URLs, a user is explicitly expressing interest in the material associated with the bookmark. URLs that are commonly bookmarked bubble up higher in the site.

The process of saving an item or adding it to a list is similar to bookmarking and provides similar information.

Bookmarking and saving is another user interaction activity for which Alag explores the underlying persistence model.

Purchasing Items

In an e-commerce site, when users purchase items, they’re casting an explicit vote of confidence in the item – unless the item is returned after purchase, in which case it’s a negative vote. Recommendation engines, for example the one used by Amazon, can be built from analyzing the procurement history of users. Users that buy similar items can be correlated and items that have been bought by other users can be recommended to a user.


Click-stream

Quoting Alag:

So far we’ve looked at fairly explicit ways of determining whether a user liked or disliked a particular item, through ratings, voting, forwarding, and purchasing items. When a list of items is presented to a user, there’s a good chance that the user will click on one of them based on the title and description. But after quickly scanning the item, the user may find the item to be not relevant and may browse back or search for other items.

A simple way to quantify an article’s relevance is to record a positive vote for any item clicked. This approach is used by Google News to personalize the site. To further filter out the noise, such as items the user didn’t really like, you could look at the amount of time the user spent on the article. Of course, this isn’t foolproof. For example, the user could have left the room to get some coffee or been interrupted when looking at the article. But on average, simply looking at whether an item was visited and the time spent on it provides useful information that can be mined later.

You can also gather useful statistics from this data:

  • What is the average time a user spends on a particular item?
  • For a user, what is the average time spent on any given article?
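Both statistics are straightforward to compute from raw click-stream records. Here is a minimal Python sketch; the (user, item, seconds) record format and the numbers are assumptions for illustration, not a prescribed schema.

```python
from collections import defaultdict

# Hypothetical click-stream records: (user, item, seconds_spent)
clicks = [
    ("alice", "article-1", 120),
    ("bob",   "article-1", 30),
    ("alice", "article-2", 60),
]

def average_time_per_item(clicks):
    """Average time all users spend on each item."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, item, seconds in clicks:
        totals[item] += seconds
        counts[item] += 1
    return {item: totals[item] / counts[item] for item in totals}

def average_time_per_user(clicks):
    """Average time each user spends on any given article."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user, _, seconds in clicks:
        totals[user] += seconds
        counts[user] += 1
    return {user: totals[user] / counts[user] for user in totals}

average_time_per_item(clicks)  # article-1 averages 75 seconds
average_time_per_user(clicks)  # alice averages 90 seconds
```

In practice these aggregations run offline over logged events (which is precisely the kind of workload the Hadoop presentations at the top of this page address), but the arithmetic is the same.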


Reviews

Web 2.0 is all about connecting people with similar people. This similarity may be based on similar tastes, positions, opinions, or geographic location. Tastes and opinions are often expressed through reviews and recommendations. These have the greatest impact on other users when:

  • They’re unbiased
  • The reviews are from similar users
  • They’re from a person of influence

Depending on the application, the information provided by a user may be available to the entire population of users, or may be privately available only to a select group of users.

Perhaps the biggest reasons why people review items and share their experiences are to be discovered by others and for boasting rights. Reviewers enjoy the recognition, and typically like the site and want to contribute to it. Most of them enjoy doing it. A number of applications highlight the contributions made by users, by having a Top Reviewers list. Reviews from top reviewers are also typically placed toward the top and featured more prominently. Sites may also feature one of their top reviewers on the site as an incentive to contribute.

Here again, Alag provides additional commentary around the persistence model underlying Reviews. See the book for details.

In Summary

In this post, we (very) briefly explored forms of user interaction that provide the raw data that applications use to derive collective intelligence and provide useful and relevant content to their users. In future posts in this series, we’ll explore how collective intelligence algorithms are used to aggregate this content, and provide useful insight and information to the users of a social web application.

Also in this series