Archive

Posts Tagged ‘Facebook’

HBase in Production at Facebook – Jonathan Gray at Hadoop World 2010

March 9, 2011

Interesting presentation from Facebook’s Jonathan Gray at Hadoop World 2010 on Facebook’s current and future plans for using HBase in their data platform. Here’s the video:

A couple of slides that position the role of HBase within Facebook. First, Facebook’s core application/data platform – a LAMP stack with Memcache and Hadoop/Hive:

And then a slide that hints at the impact HBase has on various elements of the stack:

Note that HBase does not actually replace any of these elements of the stack, but rather plays an interesting intermediate role between online transactional and offline batch data processing.

In his presentation, Gray speaks to the advantages of HBase. Here are a few snippets:

And then on the Data Analysis side, HBase doesn’t actually do data analysis. And it doesn’t actually store data. But HDFS stores data, and Hive does the analysis. But with HBase in the middle you can do random access and you can do incremental updating.

You also have fast index writes. HBase is a sorted store. So every single table is sorted by row, every single row is sorted by column. And then columns are sorted by versions. That’s a really powerful thing that you can build inverted search indexes on, you can build secondary indexes on. So you can do a lot more than just what you can do with a key-value store. So it has a very powerful data structure.

And lastly, there’s real tight integration with Hadoop. And my favorite thing: It’s an interesting product that kind of bridges this gap between the online world and the offline world – the serving, transactional world and the offline, batch-processing world.
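To make the “sorted store” point concrete, here is a rough sketch of how a sorted table supports an inverted/secondary index. This is my own illustration, not Facebook’s code; it assumes the happybase Python client for HBase and a hypothetical link_index table with a single column family d:

```python
# A minimal sketch (not Facebook's code) of an inverted index on a sorted
# store: index rows are keyed as "<term>:<doc_id>", so all postings for a
# term sit next to each other and can be fetched with one prefix scan.
import happybase

connection = happybase.Connection('hbase-thrift-host')  # hypothetical host
index = connection.table('link_index')                  # hypothetical table

def add_posting(term, doc_id, score):
    # One put per posting; HBase keeps the rows sorted by key for us.
    index.put(f'{term}:{doc_id}'.encode(), {b'd:score': str(score).encode()})

def lookup(term):
    # A short range scan over the term's contiguous slice of the table.
    return list(index.scan(row_prefix=f'{term}:'.encode()))
```

Because the rows are stored in sorted order, the lookup is a single short range scan rather than a scatter of random reads, which is exactly the property Gray is pointing at.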

HBase Use Case #1: Near real-time Incremental updates to the Data Warehouse

Says Gray:

Right now (at Facebook), we’re doing nightly updates of UDBs into the data warehouse. And the reason we’re doing that is because HDFS doesn’t have incremental operations. I can only append to something. I can’t edit something, I can’t delete something. So merging in the changes of transactional data, you basically have to rewrite the entire thing.

But with HBase, what we’re able to do is, all of our MySQL data is already being replicated. So we already have existing replication streams. So we can actually hook directly into those replication streams, and then write them into HBase. And HBase then allows us to expose Hive tables, so we can actually have completely up-to-date UDB data in the data warehouse. So now we can have UDB data in our data warehouse in minutes, rather than in hours or a day.
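The mechanics of that first use case are easy to picture. Here is a hedged sketch of the pattern Gray describes: consume change events from a MySQL replication stream and apply each one to HBase as an incremental put or delete. The happybase client and the stub binlog tailer are my assumptions, not Facebook’s actual tooling:

```python
# A sketch, not Facebook's code: apply MySQL replication events to HBase
# incrementally, so the warehouse-facing table stays minutes fresh instead
# of being rewritten in a nightly batch. Assumes the happybase Thrift client.
import happybase

connection = happybase.Connection('hbase-thrift-host')  # hypothetical host
users = connection.table('udb_users')                   # illustrative table

def replication_events():
    # Stand-in for a real binlog tailer; yields decoded row-change events.
    yield {'op': 'update', 'primary_key': 1001,
           'columns': {'name': 'Alice', 'city': 'Palo Alto'}}
    yield {'op': 'delete', 'primary_key': 1002, 'columns': {}}

def apply_event(event):
    row_key = str(event['primary_key']).encode()
    if event['op'] in ('insert', 'update'):
        # Incremental upsert; no bulk rewrite of the dataset required.
        users.put(row_key, {f'd:{col}'.encode(): str(val).encode()
                            for col, val in event['columns'].items()})
    elif event['op'] == 'delete':
        users.delete(row_key)

for event in replication_events():
    apply_event(event)
```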

HBase Use Case #2: High Frequency Counters and Real-time Analytics

Again quoting Gray:

The second use case is around high-frequency counters, and then real-time analytics of those counters. This is something I think a lot of people have used HBase for, for a long time. …

It’s a really interesting use case. Counters aren’t writes, they’re read-modify-writes. So they’re actually a really expensive operation. In a relational database, I’d actually have to do a read lookup, and then write that data back. So it’s a really expensive thing. And if you’re talking about billions of counter updates per hour – or I think right now on one of our clusters it’s about 100,000 updates per second. So doing 100,000 increments a second on a SQL machine – it’s a cluster of machines now, a lot of machines.

And then the other part is, well now that I’m taking all this increment data, I want to be able to do analysis on it. If I’m taking click-stream data, I want to say, “What’s the most popular link today? And this past hour? And of all time?” So I want to be able to do all that stuff, and if I have all my data sitting in MySQL or HDFS, it’s not necessarily very efficient to compute these things.

So the way we do it now is with Scribe logs. So every time you click on something, for example, that’s going into Scribe as a log line saying this user clicked this link. That’s being fed into HDFS. And then periodically we’re saying, OK, once an hour, or once a day or whatever, let’s take all of our click data and do some analysis on it. Let’s do sums and max-mins and averages and group-bys, and different kinds of queries like that. And then once we have our computations, let’s feed it back into the UDBs so people can read it.

So looking at this flow here, we have Scribe going downstream into HDFS. And then once we’re in HDFS, we’re writing things as Hive tables so we can get the dimensions that we need. And then we’re doing these huge MapReduce joins to join everything by URL. So it takes a long time to do that job. It’s really, really inefficient. It uses lots and lots of I/O. And it’s not real-time. If this job takes an hour to run, we’ll always have at least an hour of stale data.

But with HBase what we’re doing is we’re going from Scribe directly into HBase. Which means that as soon as that edit comes into HBase, it’s available. You can read it. You can randomly read it. You can do MapReduce on it. You can do whatever you want. And like I was talking about before, you can do real-time reads of it. And I could say, “How many clicks have there been for newyorktimes.com today?” and I can just grab that data out of HBase.

Or, if I want to do things like “What are the top 10 links across all domains?” Well, the way we do that is kind of like through a trigger-based system. Because the increments are so efficient, I can actually increment 10 things for each increment. So I can say “Increment this domain. But then increment the link. Increment it for today. Increment it for these demographics.” Just do a whole bunch of increments because the increments are so efficient.

So you can actually pre-compute a lot of this stuff. And then when you want to do big aggregations, you can do a MapReduce directly on HBase. And then when you’re done and you have your results, rather than having to feed them back to the UDBs, you just put them in HBase and they’re there.

So it’s really, really cool – storage, serving, analysis as one system. And we’re able to basically keep up with huge, huge numbers of increments and operations, and at the same time do analytics on it.
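Here is a rough sketch of what that fan-out of increments might look like in code. Again, this is my own illustration using the happybase client’s atomic counters; the table name and row-key layout are invented for the example, not Facebook’s schema:

```python
# A sketch of the "increment many things per event" pattern, using HBase's
# atomic counters via happybase. The row keys pre-compute the aggregates we
# want to read back in real time.
import happybase
from datetime import date

connection = happybase.Connection('hbase-thrift-host')  # hypothetical host
counters = connection.table('click_counters')           # illustrative table

def record_click(domain, url, demographic):
    today = date.today().isoformat()
    # One click fans out into several cheap, atomic increments.
    counters.counter_inc(f'domain:{domain}'.encode(), b'c:total')
    counters.counter_inc(f'domain:{domain}:{today}'.encode(), b'c:total')
    counters.counter_inc(f'url:{url}:{today}'.encode(), b'c:total')
    counters.counter_inc(f'demo:{demographic}:{today}'.encode(), b'c:total')

record_click('newyorktimes.com', 'newyorktimes.com/some-article', '25-34')

# "How many clicks for newyorktimes.com today?" is then a single counter read:
today = date.today().isoformat()
total = counters.counter_get(f'domain:newyorktimes.com:{today}'.encode(), b'c:total')
```

Each click becomes a handful of cheap increments, and the real-time question becomes a single counter read rather than a MapReduce job.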

HBase Use Case #3: User-facing Database for Write-intensive workloads

And the third scenario:

The last case I want to talk about … is using HBase as a user-facing database – almost as a transactional database, and specifically for Write workloads. When you have lots and lots of Writes and very few Reads, or you have a huge amount of data like I talked about before. If I’m storing 500K, I don’t necessarily want to put that in my UDB. …

I’m not going to elaborate further on this use case. Please listen to the presentation for a full discussion.

HBase and Hive Integration

On production development of HBase at Facebook, Gray has this to say:

But the first thing we did was the Hive integration. … This unlocks a whole new potential, and not just the way I was describing it earlier that we can now randomly write into our data warehouse. You can also randomly read into the data warehouse. So for certain kinds of joins, for example, rather than having to stream the joins we can actually do point lookups into HBase tables. So it unlocks a whole new bunch of ways that we can potentially optimize Hive queries.

But the basis of the Hive integration is really that HBase tables become Hive tables. So you can map HBase tables into Hive. You can use that then as an ETL data target, meaning that we can write our data into it. It can also be a Query data source so we can read data from it. And like I was saying, the Hive integration supports different read and write patterns.

So on the Write side, it supports API random writing like we would do with UDBs. It also supports this bulk load facility through something called HFile output format. So HFile is the on-disk format that HBase uses. And it just looks like a sequence file or a Map file or anything else, but it has some special facilities for HBase.

And we’re doing this extensively – taking data and writing it out as HFiles, which basically means you’re writing into HBase at the same speed you write to HDFS. And then you just kind of hit a button, and HBase loads all those files in. And now you have really efficient random access to all that data.

Also, on the Read side, you can randomly read into stuff. Or you can do full table scans. Or you can do range scans. All that kind of stuff through Hive.
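For reference, the table mapping Gray describes looks roughly like the DDL below. The storage handler class and the hbase.columns.mapping property are standard Hive HBase-integration settings; the table and column names, and the use of the PyHive client to issue the statement, are just my illustration:

```python
# A sketch of mapping an existing HBase table into Hive so it can serve both
# as an ETL target and as a query source. PyHive is my choice of client here;
# the hive CLI or any other client would work just as well.
from pyhive import hive

conn = hive.connect(host='hive-server-host', port=10000)  # hypothetical host
cursor = conn.cursor()

cursor.execute("""
CREATE EXTERNAL TABLE udb_users_hive (row_key STRING, name STRING, city STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,d:name,d:city')
TBLPROPERTIES ('hbase.table.name' = 'udb_users')
""")
```

Once the mapping exists, the same HBase table can be written to through Hive (random or bulk-loaded writes) and read back through Hive queries, scans, and point lookups.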

In Summary

Another great presentation from Hadoop World.

glenn

Facebook’s Architectural Stack – designing for Big Data

March 6, 2011

This is the fourth in a series of posts exploring the topic of Big Data. The previous posts in this series are:

This post provides two videos in which Facebook’s David Recordon discusses Facebook’s architectural stack as a platform that must scale to massive amounts of data and traffic. The first is a short video from OSCON 2010 in which Recordon discusses Facebook’s use of the LAMP stack:

On Database Technology and NoSQL Databases at Facebook

In the first video, Recordon first addresses how Facebook implements database technology generally, and the topic of NoSQL databases. Says Recordon:

The primary way that we store data – all of our user data that you’re going and accessing when we’re working on the site, with the exception of some services like newsfeed – is actually stored in MySQL.

So we run thousands of nodes of a MySQL cluster – but we largely don’t care that MySQL is a relational database. We generally don’t use it for joins. We’re not going and running complex queries that are pulling multiple tables together inside a database using views or anything like that.

But the fundamental idea of a relational database from the ’70s hasn’t gone away. You still need those different components.

Recordon says that there are really three different layers Facebook thinks about when working with data, illustrated in the following visual:

Continues Recordon:

You have the database, which is your primary data store. We use MySQL because it’s extremely reliable. [Then you have] Memcache and our web servers.

So we’re going and getting the data from our database. We’re actually using our web server to combine the data and do joins. And this is some of where HipHop becomes so important, because our web server code is fairly CPU-intensive because we’re going and doing all these different sorts of things with data.

And then we use Memcache as our distributed secondary index.

These are all the components that you would traditionally use a relational database for:

Recordon continues:

[These are the same layers that were] talked about 30-40 years ago in terms of database technology, but they’re just happening in different places.

And so whether you’re going and using MySQL, or whether you’re using a NoSQL database, you’re not getting away from the fact that you have to go and combine data together, that you’re needing to have a way to look it up quickly, or any of those things that you would traditionally use a database for.
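To make the “web server does the joins, Memcache is the secondary index” idea concrete, here is a toy sketch of the pattern. It is entirely my own illustration; a plain dict stands in for Memcache so the example is self-contained:

```python
# A toy sketch (not Facebook's code) of application-level joins: simple key
# lookups against the database, the "join" done in web-tier code, and a cache
# acting as the distributed secondary index. A dict plays the role of Memcache.
memcache = {}                                       # stand-in for Memcache
friend_ids_by_user = {42: [7, 9, 13]}               # stand-in for an index table
users_table = {7: 'Alice', 9: 'Bob', 13: 'Carol'}   # stand-in for MySQL rows

def get_friend_names(user_id):
    cache_key = f'friends:{user_id}'
    if cache_key in memcache:                       # secondary-index lookup
        friend_ids = memcache[cache_key]
    else:
        friend_ids = friend_ids_by_user[user_id]    # fall back to the database
        memcache[cache_key] = friend_ids
    # The "join" happens here, in application code, not in SQL.
    return [users_table[fid] for fid in friend_ids]

print(get_friend_names(42))   # ['Alice', 'Bob', 'Carol']
```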

On the topic of NoSQL databases, Recordon says:

And then when you dig into the NoSQL technology stack, there are a number of different families of NoSQL databases which you can go and use. You have document stores, you have column family stores, you have graph databases, you have key-value pair databases.

And so the first question that you really have is, what problem am I trying to solve, and what family of NoSQL database do I want to go and use.

And then even when you dig into one of these categories – if we just go and look at Cassandra and HBase – there are a number of differences inside of this one category of database. Cassandra and HBase make a number of different tradeoffs from a consistency perspective, from a relationship perspective. And so overall you really go and think about what problem am I trying to solve; how can I pick the best database to do that, and use it.

While we store the majority of our user data inside of MySQL, we have about 150 terabytes of data inside Cassandra, which we use for Inbox search on the site. And over 36 petabytes of uncompressed data in Hadoop overall.

On the topic of Big Data

Recordon:

So that leads me into Big Data. We run a Hadoop cluster with a little over 2,200 servers, about 23,000 CPU cores inside of it. And we’ve seen the amount of data which we go and store and process growing rapidly – it’s increased about 70 times over the past 2 years. And by the end of the year, we expect to be storing over 50 petabytes of uncompressed information – which is more than all the works of mankind combined together.

And I think this is really both the combination of the increase in terms of user activity on Facebook … But also just in terms of how important data analysis has become to running large, successful websites.

The diagram below shows Facebook’s Big Data infrastructure:

Says Recordon:

So this is the infrastructure which we use. I’ll take a minute to walk through it.

With all our web servers, we use an open source technology we created called Scribe to go and take the data from tens of thousands of web servers and funnel it into HDFS and into our Hadoop warehouses. The problem that we originally ran into was too many web servers going and trying to send data to one place. And so Scribe really tries to break it out into a series of funnels collecting this data over time.

This data is pushed into our Platinum Hadoop Cluster about every 5-to-15 minutes. And then we’re also going and pulling in data from our MySQL clusters on about a daily basis. Our Platinum Hadoop Cluster is really what is vital to the business. It is the cluster where if it goes down, it directly affects the business. It’s highly maintained, it’s highly monitored. A lot of thought has gone into every query that’s being run across it.

We also then go and replicate this data to a second cluster which we call the Silver Cluster – which is where people can go and run ad-hoc queries. We have about 300 to 400 people who are running Hadoop and Hive jobs every single month, many of them outside of engineering. We’ve tried to make this sort of data analysis really accessible, to help people throughout the company make better product decisions.

And so that’s one of the other technologies which we use, Apache Hive, which gives you an SQL interface on top of Hadoop to go and do data analysis. And all of these components are open source.

So when Facebook thinks about how their stack has evolved over the past few years, it looks something like this:

Where the major new component is the Hadoop technology stack and its related components to manage massive amounts of data, and do data analysis over top of that data.

A deeper look at Scaling challenges at Facebook

The second video – a presentation delivered by David Recordon and Scott MacVicar, both Facebook software engineers, at FOSDEM in February 2010 – provides a deeper look into Facebook’s use of open source technology to provide a massively scalable infrastructure:

The question that I am interested in, and that isn’t answered in these videos, is how Facebook implements its Open Graph data model in its infrastructure. That would be very interesting to learn. For more on Facebook’s Open Graph technology specifically, please see Facebook’s Open Graph and the Semantic Web – from Facebook F8.

Very interesting stuff.

glenn

Mark Zuckerberg interview at Web 2.0 Summit 2010 … thoughts on Social Business Design

February 17, 2011

Here’s the video:

The comment that hit home with me comes at approximately 17:05 into the interview, where Zuckerberg says:

I think that over the next 5 years, most industries are going to get rethought to be social and designed around people. This is kind of the evolution we’ve seen at Facebook.

On that topic, off to a Gamestorming workshop with Alexendar Osterwalder in Berkeley this weekend. See also the work being done at the Dachis Group on Social Business Design.

BTW, my second favorite quote from the Zuckerberg interview is Zuckerberg’s response to Tim O’Reilly’s question about how important building the right cultural DNA is to Facebook. Here’s what he says:

We have these values that we write down, and there are 5 of them that we write down. The two I’d focus on right now that I really try to hammer home every day are “move fast” and “be bold and take risks”.

Technology companies … just tend to get slower, and then they get replaced by smaller companies that are more versatile. So one of the things that I think about every day is how can we make this company operate as quickly as possible. And often that’s encouraging people to move quickly, but a lot of it’s about building really good infrastructure that enables people to move quickly on top of solid abstractions that we built. And that’s a real big deal I think.

As a Business/Enterprise Architect, I really appreciate the value of “solid abstractions” to a flexible, agile, performant business operating platform. Working for a traditional media company, I think this appreciation and focus can sometimes be lacking. I think it’s understood in some general way by senior management, but I think the senior executives of traditional media companies have a long way to go to appreciate the role of Architecture in contributing to business operational agility.

M2CW.

glenn

Social Commerce – leveraging the Social Graph to facilitate commercial transactions (links)

January 10, 2011

There’s been a lot of buzz lately around the emerging category of Social Commerce – that is, eCommerce platforms that leverage Social Networks/Social Graphs. The following is a list of links that provide insight into this emerging space.

The List

Social Commerce – Wikipedia

Definition of Social Commerce – Social Commerce Today

Social Shopping – Wikipedia

The Rise of Social Commerce – Brian Solis, September 2010

The Rise Of Social Commerce – Charlene Li, September 2010

Social Commerce, How Brands are generating Revenue in Social Media – Jeremiah Owyang, November 2010

Speed Summary | Wired Feb 2011 Cover Story on Social Commerce – Social Commerce Today, January 2011

Social Commerce – Social Media Today, November 2010

Social Commerce 101: Leverage Word of Mouth to Boost Sales

Social Commerce Top 10 for 2010; Outlook for 2011 – Practical eCommerce, December 2010

The Future of Social Shopping – eMarketer, January 2011

14 Social Commerce Examples from Social Media Today – Social Commerce Today, November 2010

Book Review Social Commerce by Julien Chaumond – Social Commerce Today, November 2010

eBay and the socially layered business – Social Commerce Today, December 2010

Facebook Launches Big New Social Commerce Service for Local Businesses – Social Commerce Today, November 2010

Facebook Deals Guide [Download] – For Brands & Retailers – Social Commerce Today, November 2010

Groupon Focuses on Online, Not Local Deals – Social Commerce Today, November 2010

How Big Retail is Deploying Social Commerce [Presentation Download] – Social Commerce Today, November 2010

New Group Buy WordPress Plugin: Opportunity for Brands, Businesses & Bloggers?

Top Trends of 2010: Social Shopping – ReadWriteWeb, December 2010

Roundup of Social Commerce Predictions for 2011 Phase 3 (Sophistication) – Social Commerce Today, January 2011

Social Commerce Helping Products find People – Social Commerce Today, December 2010

Speed Summaries New Social Commerce Presentations from PayPal and Skive – Social Commerce Today, December 2010

Speed Summary OgilvyOne Report on The Future of [Social] Selling – Social Commerce Today, December 2010

The Future of Social Commerce Customisation, Curation, Personalisation – Social Commerce Today, December 2010

Three New Social Commerce Reports [Downloads] Hoffman, Altimeter & ATG – Social Commerce Today, November 2010

TNW’s 6 DOs And DON’Ts of F-Commerce – Social Commerce Today, November 2010

Did Amazon Miss The Boat On Social Commerce? – TechCrunch, May 2010

Why Amazon DOES Understand Social Commerce – Social Commerce Today, November 2010

LivingSocial Gets $175 Million From Amazon: Amazon’s Local Strategy – BIAKelsey, December 2010

Walmart’s Group-Buy App in Facebook 10 Ideas for Competitors – Social Commerce Today, November 2010

Why Ping Is the Future of Social Commerce – GigaOM, September 2010

In Summary

Umm, I think you could say this is a fairly hot space right now. Lots of reading and learning to do. :)

glenn

Facebook’s Open Graph and the Semantic Web – from Facebook F8

October 2, 2010

I’m a bit late finally getting around to watching this Facebook Developer Conference keynote presentation by Facebook’s CEO Mark Zuckerberg and CTO Bret Taylor, but I wanted to post it as a reminder-to-self to understand Facebook’s Open Graph and associated Social Plugins more deeply.

There’s a whack of very interesting stuff in this presentation. I’m going to focus on the Open Graph Protocol and the Graph API, as this is really the foundation of the whole thing.

The Open Graph Protocol

So what is the Open Graph Protocol? Here’s how Facebook’s CTO Bret Taylor sums it up:

At its core, the Open Graph Protocol is a specification for a set of meta tags you can use to mark up your pages to tell us what type of real-world objects your page represents.

Taylor provides an example using Pandora, and how they would mark up a web page on Pandora.com using the Open Graph Protocol to specify the Title (Green Day), Type (Band), Genre (Punk), and City (Berkeley) of a band.
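In markup terms, that Pandora example boils down to a few og: meta tags in the page’s head. Here is a small sketch that renders them from Python; og:title and og:type come straight from the description above, while og:url and og:site_name are standard Open Graph properties I’ve added for completeness (the URL itself is made up):

```python
# A sketch (mine, not from the talk) of rendering Open Graph meta tags for
# the Pandora / Green Day example described above.
def og_meta_tags(properties):
    """Render a dict of Open Graph properties as <meta> tags for <head>."""
    return "\n".join(
        f'<meta property="og:{name}" content="{value}" />'
        for name, value in properties.items()
    )

print(og_meta_tags({
    "title": "Green Day",                             # from the talk
    "type": "band",                                   # from the talk
    "url": "http://www.pandora.com/music/green-day",  # hypothetical URL
    "site_name": "Pandora",
}))
```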

Taylor continues:

So now this web page which was previously just a bunch of text has this semantic markup, so we know what it represents. And because of that, when a user clicks a Like button on one of these Open Graph pages, we’ll use that semantic knowledge to represent it in really deep ways on Facebook.com.

The easiest way to understand this is just to walk through an example. … If I visit the Godfather page [on movie site IMDb] and I click “Like”, it’s going to go to my Facebook stream just like Shares do today. But IMDb is using the Open Graph Protocol, so they’ve marked up their pages. We know this isn’t just any old web page. This represents a “Movie” with the name “The Godfather” made in “1972”. And because of that, this page is also going to go in the Movies section in my Profile.

And it’s a first-order object in the Facebook Graph. So if my friend visits my profile and hovers over that link, they’re going to see not only that it came from IMDb, but they’re going to be able to connect to that IMDb page with a Like button directly from Facebook.com.

And this object is represented in Facebook everywhere objects in Facebook are represented today – from the News Feed to the Profile Page, and even in Search Results.

That’s pretty cool. Of course the Open Graph Protocol is not restricted to just movies, but is designed to represent any real-world object represented on the Internet. Taylor describes an example from ESPN.com:

So just as easily as I can connect to a movie on IMDb, I can connect to an athlete on ESPN.com. And those connections have all the same features as connections on Facebook do today. …

I’m going to go to ESPN.com and click that Like button on Toby Gerhart. And because this connection has all the same features as connections on Facebook.com, ESPN can publish updates to all the Facebook users that have connected to this page. So tomorrow, when Toby Gerhart’s a surprise first-round draft pick for the Browns, ESPN can send that update to me and everyone else who’s connected with Toby Gerhart on ESPN.

So it’s not just that this object is represented on Facebook.com – the Open Graph Protocol represents a long-term relationship with that page. This is a really significant step for Facebook. For years we’ve been saying Facebook is an Open Platform. But for the first time, the Likes and Interests on my Profile link to pages off Facebook.com.

And I’m not just updating my profile when I join Facebook. I’m updating it with Like buttons all around the Internet. My Identity isn’t just defined by things on Facebook, it’s defined by things all around the Web.

The punchline – moving from a web of pages to a web of entities. Quoting Taylor:

To date, the Web has really been defined by hyperlinks connecting static pieces of content. We think over the next few years that the connections between people and the things they care about will play as big a part as hyperlinks do today in defining people’s internet experiences.

And our goal with the Open Graph protocol is to really accelerate building out that map of connections in a really significant way.

The Graph API

The Graph API, of course, is the developer interface into the Open Graph. Back to Taylor:

The Graph API is our attempt to re-imagine our core server-side API within the context of this new Graph structure we’re all building out together. We’ve essentially re-architected the Facebook Platform from the ground up with simplicity, stability, and this new graph structure in mind.

… In the Graph API, every object in Facebook has a unique ID – whether that object is a user profile, a group, an event, or a fan page. And to download the JSON representation of any object in Facebook, you just need to download graph.facebook.com/id. You can download the JSON representation of my user profile by downloading graph.facebook.com/btaylor.

And the connections between objects in the graph are represented equally elegantly. I’m connected to things all over Facebook. I’m attending this event. I’m friends with Zuck. I’m a member of many groups. To download the connections for an object, you just need to download graph.facebook.com/id/connection name.

So to download my Friends, you just download /btaylor/friends. To get the Groups I’m a member of, /btaylor/groups. To get all the Pages I’m connected with in the Open Graph, you download /btaylor/likes. And this applies to every single object in Facebook.
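That URL scheme is simple enough to sketch in a few lines. Here is a hedged example using the Python requests library; note that today’s Graph API also requires an access token and a version prefix, which the 2010-era description above predates:

```python
# A minimal sketch of the Graph API URL scheme described above, using the
# requests library. Real calls now need an access token; none is shown here.
import requests

def fetch_object(object_id):
    # JSON representation of any object: graph.facebook.com/<id>
    return requests.get(f'https://graph.facebook.com/{object_id}').json()

def fetch_connection(object_id, connection_name):
    # Connections of an object: graph.facebook.com/<id>/<connection name>
    return requests.get(f'https://graph.facebook.com/{object_id}/{connection_name}').json()

profile = fetch_object('btaylor')             # e.g. Bret Taylor's profile
likes = fetch_connection('btaylor', 'likes')  # the Pages he's connected to
```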

And More …

Taylor continues to talk about additional capabilities being introduced into the Facebook Platform, including a new Search capability over the Graph API. Says Taylor:

For the first time, via the Search capability of the Graph API, we’re giving developers the capability to search over all the public updates on Facebook.

This is a really significant thing. For the first time, if you’re making a web page for a brand, you can build a module that shows what people are saying about that brand.

Taylor briefly discusses a new real-time eventing model for the platform:

We’re also baking real-time directly into the API. Using a technology called Web Hooks, you can register a callback with Facebook, and we’ll ping you every time a user of your application updates their profile, adds a new connection, or posts a new wall post.

Now, you register a callback URL, and we’ll tell you when your users update their profiles. This is a huge win not just for developers, but for users. So now when I update my Profile, every application that I use that’s integrated with Facebook will be updated instantaneously.
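As a sketch, the receiving end of those callbacks is just an HTTP endpoint. The example below uses Flask (my choice, not anything Facebook prescribes), and the payload fields shown are illustrative; the real subscription flow also includes a GET-based verification handshake that I’ve left out:

```python
# A sketch of a callback endpoint for real-time updates. Flask and the
# payload shape are assumptions for illustration, not Facebook's spec.
from flask import Flask, request

app = Flask(__name__)

@app.route('/facebook/callback', methods=['POST'])
def receive_update():
    payload = request.get_json(force=True)
    # The payload lists which objects/fields changed; the app then fetches
    # fresh data via the Graph API as needed.
    for entry in payload.get('entry', []):
        print('changed:', entry.get('id'), entry.get('changed_fields'))
    return 'ok', 200

if __name__ == '__main__':
    app.run(port=8080)
```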

And finally, the adoption of OAuth 2.0.

The Big Picture – Facebook begins to embrace the Semantic Web

The really exciting thing for me is that, with the Open Graph, Facebook begins its embrace of the Semantic Web. The best post I’ve seen on this topic is from Alex Iskold: Facebook Open Graph: A new take on semantic web – from May 2010.

Facebook developer David Recordon also delivered an interesting presentation at Semtech 2010 on Facebook’s Open Graph, which can be viewed below:

All in all, pretty exciting stuff.

glenn

Epipheo Studios – brilliant use of visual thinking applied to video

October 2, 2010

A few weeks back I stumbled across this really cool video that illustrated the essence of the value proposition behind a semantic technology company called Metaweb (recently acquired by Google). When I first saw the video, I thought “Wow, what a great use of visual thinking to explain an underlying concept!”

Well, it turns out this video was one of many created by a company called Epipheo Studios. Epipheo, BTW, is short for “Epiphanies on Video”. Say what? Well, here’s how Epipheo tells the story:

As explained in the video, an Epipheo explains and enlightens: it presents new concepts and ideas in a simple and original way. If we are inspired and stimulated by the experience, we are likely to want to share it with others we feel might be interested. And if they are enlightened or inspired, they will share it with others too. And so on.

Oh, and it may also be an advertisement for a company or product. But if you learn something new, if it helps new ideas and concepts form in your mind, if it leads to an “epiphany”, you may not even notice. And you are unlikely to find the experience “intrusive”.

I love it. Here’s another video from Epipheo that I really liked, that explains Facebook’s Social Plugins:

To see other Epipheos, click here to see their entire portfolio of videos.

glenn

Hyperlocal – Core Dimensions (Part 2)

February 14, 2010

This is the third in a series of posts on key dimensions of Hyperlocal. Other posts in this series are:

In the previous post, we explored the dimensions of Hyperlocal News and Commerce. In this post, we will explore Local Advertising and Hyperlocal Community.

Local Advertising

Local Advertising is definitely a key part of Local Business/Commerce, which I explored in the previous post. But local advertising can also be embedded within Local News and Local Community portals. Thus I’ve chosen to deal with it as a separate topic.

Insights into Local/Hyperlocal Advertising

First off, I have a few favorite resources for keeping informed in the Local/Hyperlocal advertising space. These are:

Borrell Associates – headed up by CEO Gordon Borrell – also sponsors the Local Online Advertising Conference, which was held in New York City earlier this month.

Jeff Jarvis also frequently has compelling insights into Advertising strategies for Local News Media. For example, see his recent blog posts from February 2010: Stop selling scarcity and NewBizNews: What ad sales people hear.

Search Engine Marketing/SEO for SMEs

Obviously, SEM strategies are critical for any local online business on the web. My top go-to resources for local SEM/SEO insights are:

Big Ad Networks

On the solution provider front, you have the big ad networks around Search Engine marketing, some of which include:

Local Advertising Media/Platforms

A number of application/media providers – many with a mobile focus – are positioned to be significant players, including:

Niche/Regional-based Ad Networks and Services

Increasingly, however, you also have your niche/regional-based ad networks and service providers. Here’s some examples:

Bargains and Deals

Numerous vendors provide applications to notify consumers of bargains and deals in the local vicinity, including:

Additional Local Advertising Solution Providers

One more advertising solution provider I’ll mention:

So there you have it, a sampling of Local Advertising solution providers. Local Advertising should be a very interesting space to watch in 2010, particularly when it comes to mobile, location-based tools and technologies.

Local Community

The Local Community view of HyperLocal is about information and events of interest to the Community. Information and Events around the Local Community may be contributed by businesses, community organizations, or municipal government sources, or they may be user-generated content contributed by the Community.

When you talk Community, by definition you are talking about Social Networks. Therefore, you have to consider the various social networking platforms, and particularly those that host large social graphs. I’m thinking here most specifically of:

Many of the HyperLocal News platforms are also positioning themselves as Local Community platforms. For example:

You also have open city initiatives/discussions such as those initiated by:

For additional information on open city initiatives, see here.

Then there are do-it-yourself City initiatives and tools, for example:

You have Local Event platforms, such as:

And finally, organizational and community tools around local causes. See:

This is really just a very small sampling of possible ways/platforms for organizing people within a geographic community. I look for a lot of innovation in this space over the next several years.

HyperLocal Business Models

This viewpoint explores various ways to make a HyperLocal business commercially viable. There’s some great pioneering work being done by Jeff Jarvis and the folks at CUNY here – see the New Business Models for News Project at CUNY, and Jarvis’ overview of the work on HyperLocal business models here.

More on this to come.
