Machines like empty calories too, but they lack the taste to distinguish good from bad

December 13th, 2009 — Mark Littlewood

Lots of interesting writing at the moment about content farms: businesses that produce tons of crappy content so that they can be found on search engines, get people to click through to their sites and make money from advertising. Demand Media (whose main site is eHow.com) and Answers.com (who run WikiAnswers.com) come in for the most flak as they are the largest – both are in the top 20 most visited websites in the US. Demand Media is reportedly producing over 4,000 pages of ‘content’ a day, and both are very heavily reliant on Google AdWords for revenue. The main issue with content farms is that they fill up the internet with crap that is cheap and easy to produce, and that generates traffic to sites whose only USP seems to be that they have lots of content. These are the ‘empty calories’ of internet content.

McDonald's – a delicious burger

ReadWriteWeb have covered this better than anyone here and here for example.

Today Mike Arrington produced a very thought-provoking piece in TechCrunch which talks about the McDonaldization of content here.

“So what really scares me? It’s the rise of fast food content that will surely, over time, destroy the mom and pop operations that hand craft their content today. It’s the rise of cheap, disposable content on a mass scale, force fed to us by the portals and search engines.”

Since I started writing this I also got (like 19,000 other people) a note from Jason Calacanis talking about Facebook’s disgraceful behaviour with respect to privacy policies.

“So why is Facebook trying to trick their users?

“Simple: search results.

“Facebook is trying to dupe hundreds of millions of users they’ve spent years attracting into exposing their data for Facebook’s personal gain: page views. Yes, Facebook is tricking us into exposing all our items so that those personal items get indexed in search engines–including Facebook’s–in order to drive more traffic to Facebook.”

Subscribe to Jason’s newsletter here.

The basic problem here is simple: machines are not very good at working out what is good content, and now that Google rules the world, the more content you have, the more money you make, the more content you can produce…

The issue is likely to get worse as more sites start to produce more content as they realise what the game is. What can be done to prevent the web becoming a horrible sea of crappy link-baiting ‘content’ known to Google and thus to the rest of the world?

There is some hope, but it may be some time before machines can do the job of people.

Some people still value quality over quantity. Talking to some large UK publishers recently, it struck me that many of them were not in fact that concerned about the volume of visitors to their sites. In the words of one national newspaper publisher, “We could easily double the volume of visitors overnight, but we have taken a conscious decision to focus on increasing the engagement that we have with users that we can make money from”. (Or in other words: we aren’t too bothered about driving foreign traffic that we can only monetise through advertising when we can, and will, monetise our domestic traffic more effectively.)

Sardines with blackcurrant and eucalyptus from El Bulli

Content producers need to focus on selling the value of their audience rather than the number of consumers. The value to advertisers of 1,000 diners at McDonald's is lower than the value of 1,000 diners at El Bulli or The Fat Duck. We all need to be reminded of the difference sometimes.

Computers can’t help much – yet. For Google to distinguish between content generated for keywords and link bait, it needs to develop a sort of variation on the Turing Test, the test of a machine’s ability to demonstrate ‘intelligence’. Is there a way for a machine to identify when value is being created, rather than a series of keywords stitched together as link bait? Sadly, some of the most insightful analysts don’t get heard as they are drowned out by the noise that comes from the most popular sites. Despite the potential of the long tail of content to uncover hidden gems, it rarely does in practice. If Google’s algorithms could get beyond measuring the number of links and volume of traffic to being able to measure the true value of content (and not the relatively empty calories of links and views), then behaviour would change almost overnight.
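To see why machines struggle with this, consider how crude the obvious signals are. A minimal, purely illustrative sketch (not anything Google actually does): keyword-stuffed pages tend to repeat a handful of terms far more often than natural prose, so you could score a page by how much of it is taken up by its single most repeated word. The function name, the heuristic, and the example texts are all my own assumptions.

```python
from collections import Counter

def keyword_stuffing_score(text: str) -> float:
    """Fraction of the text accounted for by its single most repeated word.

    A crude heuristic: link-bait pages repeat target keywords heavily,
    so natural prose should score lower than stuffed copy. Real search
    engines use far richer signals; this only sketches the idea.
    """
    words = [w.lower().strip(".,!?") for w in text.split()]
    if not words:
        return 0.0
    most_common_count = Counter(words).most_common(1)[0][1]
    return most_common_count / len(words)

natural = "The chef hand-crafts each dish with seasonal ingredients and care."
stuffed = "cheap flights cheap hotels cheap flights deals cheap flights now"

print(keyword_stuffing_score(natural))  # every word unique: low score
print(keyword_stuffing_score(stuffed))  # 'cheap' dominates: high score
```

Of course, a heuristic this simple is trivially gamed – which is exactly the problem: any signal a machine can measure, a content farm can optimise for.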

Social Media may be able to help. Startups that can help people distinguish content that they are likely to trust may be able to help reduce our reliance on content for the masses. I am far more likely to trust the views of my friends and colleagues if they like an article than because it is the most read on a general website. This is an area where we will see significant innovation in the next few years and reading habits are likely to change.
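The intuition above – a friend’s endorsement is worth more than an anonymous page view – can be sketched as a simple weighted ranking. Everything here (function names, the 5x weight, the sample data) is a hypothetical illustration, not a description of any real product.

```python
def rank_articles(articles, friend_likes, global_reads, friend_weight=5.0):
    """Order articles by a blend of friend endorsements and raw popularity.

    friend_likes and global_reads map article title -> count. A like from
    someone you trust counts friend_weight times as much as an anonymous
    read, so a niche piece your peers value can outrank a mass-market hit.
    """
    def score(article):
        return (friend_weight * friend_likes.get(article, 0)
                + global_reads.get(article, 0))
    return sorted(articles, key=score, reverse=True)

articles = ["Deep analysis", "Top 10 listicle"]
friend_likes = {"Deep analysis": 3}                       # 3 friends liked it
global_reads = {"Top 10 listicle": 12, "Deep analysis": 2}

print(rank_articles(articles, friend_likes, global_reads))
# "Deep analysis" wins: 5*3 + 2 = 17 beats the listicle's 0 + 12
```

The hard part, naturally, is not the arithmetic but gathering honest engagement signals at scale without them being gamed like links were.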

If Google, or someone else, can harness the power of the social web to help me find valuable content, I might even be prepared to pay the privacy price involved.

2 Comments

We need to figure out how to get everyone who consumes content on the web to engage with it at some level (the lighter the better for most) and then capture that engagement to surface the best stuff.

fred wilson — December 14, 2009

One of the things that tech blogs miss out on is how current the information is – ultimately Demand Media are creating lots of “current” information, whereas sites which did this in the past, such as About.com, quite often have huge archives but the information has become dated.
Also, the sites are relatively well SEOed, though not perfect by any means.

About.com has a lot more content, but it is currently poorly indexed.

Lots of startups highlight peer-recommended content, such as StumbleUpon – the problem is that 99% of searches are not for something my peer group has previously stumbled, even with a huge (compared to most) social media network.

Andy Beard — December 14, 2009