Frequent thinker, occasional writer, constant smart-arse


How Google Reader can finally start making money

Today you may have heard that Newsgator, Bloglines, Me.dium, Peepel, Talis and Ma.gnolia have joined the APML workgroup, and are in discussions with workgroup members on how they can implement APML in their product lines. Bloglines made some news the other week with its intention to adopt it, and today's announcement about Newsgator means APML is fast becoming an industry standard.

Google, however, is still sitting on the sidelines. I really like using Google Reader, but if they don't announce support for APML soon, I will have to switch back to my old favourite Bloglines, which is doing some serious innovating. Seeing as Google Reader came out of beta recently, I thought I'd help them out by proposing a new feature (APML) that could see it generate some real revenue.

What a Google Reader APML file would look like
Read my previous post on what exactly APML is. If the Google Reader team were to support APML, they could add to my APML file a ranking of blogs, authors, and keywords. First an explanation, and then I will explain the consequences.

In terms of blogs I read, the percentage of posts I read from a particular blog would determine its relevancy score in my APML file. So if I read 89% of TechCrunch posts (information Google Reader already provides to users), it would convert this into a relevancy score for TechCrunch of 89%, or 0.89.
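To make the arithmetic concrete, here is a minimal sketch in Python. The feed names, read counts, and the printed fragment are my own illustration (APML's actual schema is richer than this), but the conversion itself is just the read ratio described above:

```python
# Sketch: turn Google Reader's "posts read" statistic into an APML-style
# relevancy score. Feed names and counts are made up, and the printed
# fragment only gestures at APML's schema rather than reproducing it.

def relevancy(items_read: int, items_published: int) -> float:
    """Fraction of a feed's items the user actually read, as a 0-1 score."""
    if items_published == 0:
        return 0.0
    return round(items_read / items_published, 2)

feeds = {
    "TechCrunch": (89, 100),     # read 89 of the last 100 posts -> 0.89
    "Read/WriteWeb": (45, 100),  # read 45 of the last 100 posts -> 0.45
}

for name, (read, published) in feeds.items():
    score = relevancy(read, published)
    print(f'<Source name="{name}" value="{score}" type="application/rss+xml" />')
```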

[Image: APML: pulling rank]

In terms of authors I read, it can extract who posted each entry from the individual blog postings I read and, like the blog ranking above, perform a similar procedure. I don't imagine it would be too hard to do this; however, given it's a small team running the product, I would put this at a lower priority to support.

In terms of keywords, Google could apply its contextual analysis technology to each of the postings I read and extract key words. By performing this on every post I read, the frequency of the extracted keywords determines the relevance scores for those concepts.
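A crude stand-in for that contextual analysis (Google's actual technology is obviously far more sophisticated) is plain term-frequency counting, normalised so the top concept scores 1.0:

```python
from collections import Counter

# Crude stand-in for contextual keyword extraction: count word frequencies
# across the posts a user has read, then normalise so the most frequent
# concept scores 1.0. Real analysis would add stemming, entity extraction,
# a proper stop-word list, and so on.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "it", "on"}

def concept_scores(read_posts: list[str]) -> dict[str, float]:
    counts = Counter(
        word
        for post in read_posts
        for word in post.lower().split()
        if word not in STOP_WORDS
    )
    if not counts:
        return {}
    top = counts.most_common(1)[0][1]
    return {word: round(n / top, 2) for word, n in counts.most_common(10)}

posts = [
    "Google Reader adds APML support",
    "APML workgroup adds six new members",
]
print(concept_scores(posts))  # {'apml': 1.0, 'adds': 1.0, 'google': 0.5, ...}
```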

So that would be the how. The APML file generated from Google Reader would simply rank these blogs, authors, and keywords, and the relevance scores would update over time. Periodically, the data would be re-indexed and recalculated from scratch, so that as concepts stop being viewed they diminish in value until they drop off.
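The drop-off could work in many ways; one simple sketch (the decay factor and floor are arbitrary choices of mine, not anything APML prescribes) is to shrink every old score on each recalculation and prune whatever falls below a threshold:

```python
# Illustrative decay pass: on each recalculation, old scores shrink and
# fresh evidence is blended in, so concepts the user has stopped reading
# fade until they fall below a floor and are dropped. DECAY and FLOOR are
# arbitrary values for the sketch.
DECAY = 0.9   # how much of the old score survives each pass
FLOOR = 0.05  # scores below this are pruned from the profile

def recalculate(old: dict[str, float], fresh: dict[str, float]) -> dict[str, float]:
    merged = {concept: score * DECAY for concept, score in old.items()}
    for concept, score in fresh.items():
        merged[concept] = min(1.0, merged.get(concept, 0.0) + score * (1 - DECAY))
    return {c: round(s, 2) for c, s in merged.items() if s >= FLOOR}

profile = {"apml": 1.0, "iphone": 0.3}
print(recalculate(profile, {"apml": 1.0}))  # 'iphone' decays toward the floor
```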

What Google Reader can do with that APML file
1. Ranking of content
One of the biggest issues facing consumers of RSS is information overload. I am quite confident that people would pay a premium for anything that helps rank the hundreds of items a day a user can face. By having an APML file, over time Google Reader can match postings to a user's ranked interests. So rather than presenting content in reverse chronology (most recent to oldest), it can instead organise content by relevancy (items of most interest to least), as sketched below.
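Here is a minimal sketch of that idea (the scoring rule and the toy data are my assumptions, not a description of how Google Reader works): score each unread item against the profile, then sort most-relevant first instead of most-recent first.

```python
# Sketch of "Sort by Personal Relevance": score each unread item by summing
# the profile scores of the concepts it mentions, then sort descending.
# Profile and items are toy data; real matching would reuse the contextual
# analysis that built the profile in the first place.

def item_score(title: str, profile: dict[str, float]) -> float:
    words = set(title.lower().split())
    return sum(score for concept, score in profile.items() if concept in words)

profile = {"apml": 1.0, "privacy": 0.6, "adsense": 0.4}

unread = [
    "Weekend open thread",
    "AdSense privacy changes explained",
    "APML workgroup adds six new members",
]

for title in sorted(unread, key=lambda t: item_score(t, profile), reverse=True):
    print(f"{item_score(title, profile):.2f}  {title}")
```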

This won't reduce the amount of RSS a user consumes, but it will help them allocate their attention across that content. There are a lot of innovative ways you can rank the content, down to the way you extract key words and rank concepts, so there is scope for competing vendors to have their own methods. The point, however, is that a 'Sort by Personal Relevance' feature would be highly sought after, and I am sure quite a few people would be willing to pay for this godsend.

I know Google seems to think contextual ads are everything, but maybe the Google Reader team can break from the mould and generate a different revenue stream through a value-add feature like this. Google could apply its contextual advertising technology to determine keywords for filtering rather than advertising, using pre-existing technology to open up a new revenue stream.

2. Enhancing its AdSense programme

[Image: Targeted advertising is still bloody annoying]

One of the great benefits of APML is that it creates an open database about a user. Contextual advertising, in my opinion, is actually a pretty sucky technology; its success to date is only because all the other targeted advertising models are flawed. As I explain above, the technology should instead be used to analyse what content a user consumes, through keyword analysis. Over time, a ranking of these concepts can be built up, and shared from other web services that are doing the same thing.

An APML file that ranks concepts is exactly what Google needs to enhance its AdWords technology. Don't use the analysis of a post to show ads; use it to rank concepts. Then, in aggregate, the contextual advertising will work with great precision, because it can be based on this APML file. Even better, a user can tweak it, which is the equivalent of tweaking what advertising they want to receive. The transparency of a user being able to see the 'concept ranking' you generate for them is powerful, because a user is likely to monitor it for accuracy.
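To illustrate that aggregate claim, here is a toy matcher (entirely my own invention, not AdWords) that serves the ad whose declared keywords overlap most heavily with the user's ranked concepts:

```python
# Toy ad matcher: choose the ad whose keywords best overlap the user's
# APML-style concept ranking. The ads, keywords, and weights are invented
# for the example; real ad serving adds bids, budgets, and much more.

def ad_relevance(ad_keywords: set[str], profile: dict[str, float]) -> float:
    return sum(profile.get(keyword, 0.0) for keyword in ad_keywords)

profile = {"apml": 1.0, "rss": 0.9, "travel": 0.1}

ads = {
    "Feed reader pro upgrade": {"rss", "reader"},
    "Cheap flights to Bali": {"travel", "flights"},
    "APML developer toolkit": {"apml", "rss"},
}

best = max(ads, key=lambda name: ad_relevance(ads[name], profile))
print(best)  # -> APML developer toolkit (relevance 1.9)
```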

APML is contextual advertising's biggest friend, because it profiles a user in a sensible way that can be shared across applications and monitored by the user. Allowing a user to tweak their APML file, with the motivation of more targeted content, aligns their self-interest with ensuring the targeted ads thrown at them, based on those ranked concepts, are in fact relevant.

3. Privacy credibility
Privacy is the inflation of the attention economy. You can't proceed to innovate with targeted advertising technology whilst ignoring privacy. Google has clearly realised this the hard way, having been labelled one of the worst privacy offenders in the world. By adopting APML, Google would go a long way towards gaining credibility on privacy rights. It would be creating open transparency around the information it collects to profile users, and it would allow a user to control that profiling of themselves.

APML is a very clever approach to dealing with privacy. It's not the only approach, but it is one of the most promising. Even if Google never uses an APML file as I describe above, the pure brand-enhancing value of giving users some control over their rightful attention data would alone benefit the Google Reader product (and Google's reputation) if they were to adopt it.

[Image: Privacy. Stop looking.]

Conclusion
Hey Google, can you hear me? Let's hope so, because you might be the market leader now, but so was Bloglines once upon a time.

The attention economy needs a consistent base

Okay, enough navel gazing. The journalist in me (by experience), the accountant in me (by education), and the businessman in me (by occupation) are going to synthesise my understanding of the world and propose a new metric for the attention economy. I don't know the answer yet, but I am going to use this blog to develop my thinking. I can't promise a solution; however, I am sure that breaking the issue down into the key requirements, assumptions, and needs of this magical metric will add value somewhere for someone.

So let's start with the most important assumption of all: what are we measuring? As Herbert Simon coined it, and smart guys like Umair, Scott and Chris have extended (at least for my conceptual understanding), it is called the attention economy. It is important to note, however, that the attention economy is an aspect of the Information Sector (see below). And as I described in a previous posting, the attention economy needs a metric for two reasons: monetisation and feedback.


What makes up the attention economy?
Well, this is a bit like a related problem I had when I first came to grips with what new media was. A few years back, I did some active research trying to understand how a book, a television, a newspaper, and a search engine could all somehow be classed as 'media'. I found my question answered by Vin Crosbie's manifesto (read this for a recent summary). Take note of what he considers the key element of new media (the technology aspect).

I am going to propose one of my key assumptions about the future, which will answer this question. It might not happen for another 5, 10 or even 20 years, but I am convinced this is where we are heading: the Internet will act as infrastructure.

I believe the unifying aspect, and the backbone of the attention economy, will be the Internet. All enterprise software, all consumer software, all (distributed) entertainment, all (distributed) communications and all information will be delivered digitally over the Internet. I think the people at the US Census Bureau have conceptually worked this out already by defining the information sector of the economy, which classes all of the above and more into one diverse category. The Internet is the enabler of the Information Age, just as the production line was for the Industrial Age.

I'm not saying we are going to live, sleep, and eat on computers in the future. However, just think: anything that runs on electricity can connect to the Internet. And look at the technologies being developed that enable the Internet to live beyond the computer screen, like electronic paper and dynamic interfaces. Even more powerful is the fact that the Internet has brought entire industries to their knees, like the newspaper and music industries, because it provides a more efficient way of delivering content. If it's information, communication or entertainment related, then it probably works better in digital format over the Internet. (Excluding, of course, things like theme parks, which are about physical entertainment rather than distributed entertainment like a television programme.)

I think it is important to recognise that the Internet will be the backbone of the attention economy. As the core back-end, it means that no matter the output device, whether a mobile phone, a computer, or a television, there is a consistent delivery mechanism for digital information. For a measurement system to work, it needs to be consistent. The Internet infrastructure will be that consistency. If you can recognise that, you have taken a big step towards solving the issue.