Frequent thinker, occasional writer, constant smart-arse

Tag: DataPortability (Page 2 of 3)

Information overload: we need a supply side solution

About a month ago, I went to a conference filled with journalists, and I couldn’t help but ask them what they thought about blogs and their impact on the profession. Predictably, they weren’t too happy. Unpredictable, however, were their reasons. It wasn’t just a rant, but a genuine care about journalism as a concept – and how the blogging “news industry” is digging a hole for everyone.

Bloggers and social media are replacing the newspaper industry as a source of breaking news. What they still lack is quality: there have been multiple examples of blogs breaking news that, in the rush to publish, turned out to be false. Personally, I think as blogging evolves (as a form of journalism) the checks and balances will be developed – such as big-name blogs with their brands effectively acting like a traditional masthead. And when a brand is developed, more care is put into quality.

Regardless, the infancy of blogging highlights the broader concern of “quality”. With the freedom for anyone to create, the Information Age has left us overloaded with information despite our finite ability to take it all in. The relationship between the producer and consumer of news is not only blurring – it is radically transforming dynamics that impact even the offline world.

Traditionally, the concept of “information overload” has been treated as a simple consequence of lower barriers to entry for producers of content (anyone can create a blog on wordpress.com and away you go). However, what I am starting to realise is that the issue isn’t so much the technological ability for anyone to create their own media empire, but instead the incentive system we’ve inherited from the offline world.

Whilst numerous companies have tried to solve the problem from the demand side with “personalisation” of content (on the desktop, as an aggregator, and about another 1000 different spins), what we really need are attempts on the supply side, from the actual content creators themselves.

Too much signal can make it all look like noise

Marshall Kirkpatrick, along with his boss Richard McManus, are two of the best thinkers in the industry. The fact they can write makes them not journalists in the traditional sense, but analysts with the ability to clearly communicate their thoughts. Add to the mix Techcrunch don Michael Arrington and his amazing team – they are analysts that give us amazing insight into the industry. I value what they write; but when they feel the pressure of their industry to write more, they are doing a disservice not only to themselves, but also to the humble reader they write for. Quality is not something you can automate – there is a limit to what a writer can produce, not because of their typing skills but because quality is a function of self-reflection and research.

The problem is that whilst they want to, can, and do write analysis – their incentive system is biased towards a numbers game driven by popularity. More people reading and more content created (which creates more potential to attract readers) means more pageviews, and therefore money in the bank, as advertisers pay on the number of impressions. The conflict for the leading blogs churning out content is that their incentive system is inherited from a flawed pre-digital model: what is known as circulation offline is now known as pageviews online.

A newspaper primarily makes money through its circulation: the number of physical newspapers it sells, but also the audited figures of how many people read it (readership can be up to three times the physical circulation). With the latter, a newspaper can sell space based on its proven circulation: the higher the readership, the higher the premium. The reason for this is that in the mass media world, the concept of advertising was about hitting as many people as possible. I liken it to the image of flying a plane over a piece of land and dropping leaflets, with the blind faith that of those 100,000 pamphlets, at least 1,000 people catch them.

It sounds stupid that an advertiser would blindly drop pamphlets, but they had to: it was the only way they could effectively advertise. To make sales, they needed the ability to target buyers and create exposure for the product. The only mechanism available was the mass media, as it was a captured audience; at best, an advertiser could place ads in specialist publications hoping to get a better return on their investment (dropping pamphlets about water bottles over a desert makes more sense than over a group of people in a tropical rainforest). Nevertheless, this advertising was done en masse – the technology limited the ability to target.

Advertising in the mass media: dropping messages, hoping the right person catches them

The Internet is a completely new way to publish. The technology enables a relationship between a consumer of content, a vendor, and a producer of content unlike anything else previously in the world. The end goal of a vendor’s advertising is sales, and they no longer need to drop pamphlets – they can now build a one-on-one relationship with that consumer. They can now knock on your door (after you’ve flagged you want them to), sit down with you, and have a meaningful conversation about buying the product.

“Pageviews” are pamphlets being dropped – a flawed system that we used purely due to technological limitations. We now have the opportunity for a new way of doing advertising, but we fail to recognise it – and so our new media content creators are being driven by an old media revenue model.

It’s not technology that holds us back, but perception
Vendor Relationship Management (VRM) is a fascinating new way of looking at advertising, where the above scenario is possible. A person maintains a bank of personal information about themselves, and flags their intention of what products they want to buy – and vendors no longer need to resort to advertising to sell their product, but can build a relationship with these potential buyers one on one. If an advertiser knows you are a potential customer (by virtue of knowing your personal information – which, might I add, under VRM is something the consumer controls), they can focus their efforts on you rather than blindly advertising at the other 80% of people who would never buy their product. In a world like this, advertising as we know it is dead, because we no longer need it.
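
To make that mechanism concrete, here is a minimal sketch of the idea as I understand it – the consumer holds their own data and flags a purchase intention, and a vendor only ever sees the intentions that match what it sells. All the names are illustrative; this is not any actual VRM specification.

    # A minimal sketch of the VRM idea: the consumer holds their own
    # profile and flags purchase intentions; a vendor only ever sees
    # the intentions that match its catalogue, instead of advertising
    # blindly at everyone. Illustrative names, not a real VRM spec.

    class PersonalDataStore:
        def __init__(self):
            self.profile = {}       # personal data, controlled by the consumer
            self.intentions = []    # purchases the consumer has flagged

        def flag_intention(self, product, details):
            self.intentions.append({"product": product, "details": details})

    def matching_leads(store, vendor_catalogue):
        # the vendor's view: only consumers with a matching intention
        return [i for i in store.intentions if i["product"] in vendor_catalogue]

    me = PersonalDataStore()
    me.flag_intention("water bottle", {"budget": 20})
    print(matching_leads(me, {"water bottle", "tent"}))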

VRM requires a cultural change in our world to understand a future like this. Key to this is companies recognising that a user controlling their personal data in fact opens up new opportunities for advertising. Companies currently believe that by accumulating data about a user, they are building a richer profile of someone and can therefore better ‘target’ advertising. But companies succeeding technologically on this front are being booed down in a big way by privacy advocates and the mainstream public. The cost of holding this rich data is too much. Privacy by obscurity is no longer possible, and people demand the right to privacy in an electronic age where disparate pieces of their life can be linked online.

One of the biggest things the DataPortability Project is doing is transforming the notion that a company somehow has a competitive advantage by controlling a user’s data. The political pressure, education, and advocacy of this group is going to allow things like VRM. When I spoke to a room of Australia’s leading technologists at BarCamp Sydney about DataPortability, what I realised is that they failed to recognise that what we are doing is not a technological transformation (we are advocating existing open standards, not new ones) but a cultural transformation of a user’s relationship with their data. We are changing perceptions, not building new technology.

To fix a problem, you need to look at the source that feeds the beast

How the content business will change with VRM
One day, when users control their data and have data portability, and we can have VRM – the content-generating business will find a way out of the hole currently being dug. Advertising on a “hits” model will no longer be relevant. The pageview will be dead.

Instead, what we may see is an evolution to a subscription model. Rather than content producers measuring success by how many people viewed their content, they can focus less on hits and more on quality, as their incentive system will not be driven by the pageview. Instead, consumers could build up ‘credits’ under a VRM system for participating (my independent view, not a VRM idea), and then use those credits to purchase access to content they come across online. Such a model allows content creators to be rewarded for quality, not numbers. They will need to focus on their brand, managing their audience’s expectations of what they create; in return, a user can subscribe with regular payments of credits earned in the VRM system.
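
As a toy sketch of that credits idea (again, my own speculation rather than anything proposed by the VRM project), the mechanics could be as simple as a ledger:

    # A toy sketch of the 'credits' model above: a consumer earns
    # credits for participating (e.g. sharing intent data), then spends
    # them subscribing to content they value. Purely speculative.

    class CreditLedger:
        def __init__(self):
            self.balance = 0

        def earn(self, amount):
            self.balance += amount

        def subscribe(self, publication, price):
            if self.balance < price:
                raise ValueError("not enough credits")
            self.balance -= price
            return "subscribed to " + publication

    ledger = CreditLedger()
    ledger.earn(10)                               # earned by participating
    print(ledger.subscribe("quality analysis blog", 3))
    print(ledger.balance)                         # 7 credits left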

Content producers can then follow whatever content strategy they want (news, analysis, entertainment) and will no longer be held captive by a legacy system that rewards the number of people, not the types of people.

Will this happen any time soon? With DataPortability, yes – but only once we all realise we need to work together towards a new future. Until we get that broad recognition, I’m just going to have to keep hitting “read all” in my feed reader because I can’t keep up with the amount of content being generated; whilst the poor content creators strain their lives working in a flawed system that doesn’t reward their brilliance.

February 2008 DataPortability project report

The DataPortability project has now released its February 2008 report, with a massive thank you to Mary Trigiani and Daniela Barbosa, our Italian and Portuguese glamour ladies in the evangelism action group! The delay this month was due to Mary’s family getting hit by a tornado (!), which kept her busy with other things, and the finalisation of our new wiki platform, with the new URI http://wiki.dataportability.org now live for the world to access.

Highlights include:

  • A new logo competition
  • A new collaboration platform
  • The announcement of the “investigation” phase of the DataPortability project
  • …and a lot more

Be sure to read the February 2008 report (and the January 2008 report if you missed it) to get the latest news about DataPortability, as we have committed to be open and transparent about what we are doing.

DataPortability is about user value, fool!

In a recent interview, VentureBeat asks Facebook creator and CEO Mark Zuckerberg the following:

VB: Facebook has recently joined DataPortability.org, a working group among web companies, that intends to develop common standards so users can access their data across sites. Is Facebook going to let users — and other companies — take Facebook data completely off Facebook?

MZ: I think that trend is worth watching.

It disappoints me to see that, because it seems like a quick journalist’s hit at a contentious issue. On the other hand, we have seen amazing news today with examples of exactly the type of thing we should expect in a data portability enabled world: the Google Contacts API, addressing something we have highlighted for months now as an issue for data security; and Google Analytics allowing benchmarking, a clear example of a company that understands that by linking different types of data you generate more information, and therefore value, for the user. The DataPortability project is about trying to advocate new ways of thinking, and indeed, we don’t have to formally produce a product so much as maintain the agenda in the industry.

However, the reason I write this is that it worries me a bit that we are throwing around the term “data portability” despite the fact the DataPortability Project has yet to formally define what it means. I can say this because, as a member of the policy action group and the steering action group, which are responsible for making this distinction, I know we have yet to formally decide.

Today, I offer an analysis of what the industry needs to be talking about, because the term is being thrown around like buggery. Whilst it may be weeks or months before we finalise this, it’s starting to bother me that people seem to think the concept means solving the rest of the world’s problems or disrupting the status quo. It’s time for some focus!

Value creation
First of all, we need to determine why the hell we want data portability. DataPortability (note the distinction of the term with that of ‘data portability’ – the latter represents the philosophy whilst the former is the implementation of that philosophy by DataPortability.org) is not a new utopian ideal; it’s a new way of thinking about things that will generate value in the entire Information sector. So to genuinely create value for consumers and businesses alike, we need to apply the thinking we use in the rest of the business world.

A company should be centred on generating value for its customers. Whilst a company has obligations to generate returns for its shareholders, the only long-term way to do so is to fund the growth of the business through increased customer utility (leaving aside acquisitions and operational efficiency, which are other ways companies generate value, but short-term measures). Therefore an analysis of what value DataPortability creates should be done with the customer in mind.

The economic value of a user having some sort of control over their data is that they can generate more value through their transactions within the Information economy. This means better insights (ie, greater interoperability allowing the connection of data to create more information), less redundancy (being able to reuse the same data), and more security (which includes better privacy – something that can compromise a consumer’s existence if not managed).

Secondly, what does it mean for a consumer to have data portability? Since we have realised that the purpose of such an exercise is to generate value, questions about data like “control”, “access” and “ownership” need to be re-evaluated, because on face value the way they are applied may have either beneficial or detrimental effects for new business models. The international accounting standards state that you can legally “own” an asset but not necessarily receive the economic benefits associated with that asset. The concept of ownership to achieve benefit is something we really need to clarify, because quite frankly, ownership does not translate into economic benefit – and economic benefit is what we are seeking to achieve.

Privacy is a concept that has legal implications, and regardless of what we discuss with DataPortability, it still needs to be considered, because business operates within the frameworks of law. Specifically, the human rights of individuals (who are the consumers) need to be given greater priority than any other factor. So although we should be focused on how we can generate value, we also need to be mindful that certain types of data, like personally identifiable data, need to be considered in a different light, as there are social implications in addition to the economic aspects.

The use cases
The technical action group within the DataPortability project has been attempting to create a list of scenarios that constitute use cases for DataPortability enablement. This is crucial because to develop the blueprint, we also need to know what exactly the blueprint applies to.

I think it’s time, however, that we recognise this isn’t merely a technical issue, but an industry issue. So now that we have begun the research phase of the DataPortability Project, I ask you and everyone else to join me as we discuss what exactly is the economic benefit that DataPortability creates. Rather than asking if Facebook is going to give up its users’ data to other applications, we need to be thinking about the end value we strive to achieve by having DataPortability.

Portability in context, not location
When the media discuss DataPortability, please understand that a user simply being able to export their data is quite irrelevant to the discussion, as I have outlined in my previous posting. What truly matters is “access”. The ability for a user to command the economic benefits of their data is the ability to determine who else can access it. Companies need to be thinking that value creation comes from generating information – which is simply relationships between different data ‘objects’. If a user is to get the economic benefits of using their data from other repositories, companies simply need to allow the user to delegate permission for others to access that data. Such a thing does not compromise a company’s competitive advantage, as they won’t necessarily have to delete the data they hold about a user; rather, it requires them to realise that holding a user’s data (or parts of it) in custody is itself an advantage – complete access they can use to come up with innovative new information products for the user.
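
A rough sketch of what that delegated access could look like in practice – the data never moves; the user grants and revokes permission for services to read it. The names here are hypothetical, not an actual DataPortability interface:

    # A sketch of portability-as-access: the user's data stays with the
    # custodian; the user delegates (and can revoke) permission for
    # other services to read it. Hypothetical interface.

    class DataCustodian:
        def __init__(self):
            self._data = {}          # the user's data stays hosted here
            self._grants = set()     # (user, service) pairs the user approved

        def store(self, user, key, value):
            self._data.setdefault(user, {})[key] = value

        def grant(self, user, service):
            self._grants.add((user, service))

        def revoke(self, user, service):
            self._grants.discard((user, service))

        def read(self, user, service):
            if (user, service) not in self._grants:
                raise PermissionError(service + " has no access to " + user)
            return self._data[user]

    host = DataCustodian()
    host.store("elias", "genre", "electro/pop/house")
    host.grant("elias", "movie-recommender")
    print(host.read("elias", "movie-recommender"))   # allowed
    host.revoke("elias", "movie-recommender")        # the user changes their mind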

So what’s my point? When discussing DataPortability, let’s focus on the value to the user. And the next time the top tech blogs confront the companies that are supporting the movement with a simplistic “when are you going to let users take their data completely off?”, I am going to burn my bra in protest.

Disclosure: I’m a heterosexual male that doesn’t cross-dress

Update: I didn’t mean to scapegoat Eric from VentureBeat, who is a brilliant writer. However, I used him as an example of the language being used across the entire community, which now needs to change. With the DP research phase now officially underway for the next few months, the questions we ask should be more open-ended, as we at the DataPortability project have realised these issues are complex and we need the entire community to come to a consensus. DataPortability is no longer just about exporting your social graph – it’s an entirely new approach to how we will be doing business on the net, and as such, it requires us to fundamentally re-examine a lot more than we originally thought.

Can you answer my question?

We at the DataPortability project have kick-started a research phase, because we’ve realised we need to spend more time consulting with the community to work out issues which don’t quite have one answer.

As Chris Saad and I are also experimenting with a new type of social organisation as we incubate the DataPortability project – which I call wikiocracy (Chris calls it participant democracy) – I thought I might post these issues on my blog, in line with the decentralised ethos we are encouraging with DataPortability. This is something the entire world should be questioning.

So below are some thoughts I have had. They’ve changed a lot since I first thought about what a user’s data rights are, and no doubt they will change again. But hopefully my thoughts can act as a catalyst for what people think data rights really are, and focus on the issue at stake, which I conclude with as my question. I think the bill of rights for users on the social web is not quite adequate, and we need a more careful analysis of the issues.

It’s the data, stupid
Data is essentially an object. Standalone, it’s useless – take for example the name “Elias”. In the absence of anything else, that piece of datum means nothing. However, when you associate that name with my identity (ie, appending my surname Bizannes or linking it to my Facebook profile), it suddenly becomes “information”. Data is an object, and information is generated when you create linkages between different types of data – the ‘relationships’.

Take this data definition from DMReview which defines data (and information):

Items representing facts, text, graphics, bit-mapped images, sound, analog or digital live-video segments. Data is the raw material of a system supplied by data producers and is used by information consumers to create information.

Data is an object and information is a relationship between data – I’ve studied enough database theory at university to be authoritative on that! But since I didn’t do philosophy, what then is knowledge?

Knowledge can be considered as the distillation of information that has been collected, classified, organized, integrated, abstracted and value added
(source)

Relationships, facts, assumptions, heuristics and models derived through the formal and informal analysis or interpretation of data
(source)

So in other words, knowledge is the application of information to a scenario. Whilst I apologise if it appears I am splitting hairs, I think clarifying these terms is fundamental to the implementation of DataPortability. Why this is relevant will be seen below, but now we need to move on to what the second concept means.

Portability
On first interpretation, portability means the ability to move something – exporting and importing. I think we shouldn’t take the ability to move data around as the sole definition of portability; it should also mean being able to port the context in which that data is used. After all, information and knowledge are based on the manipulation of data, and you don’t need to move data per se, but merely change the context, to do that. A vendor can add value to a consumer by building unique relationships between data and giving unique application to other scenarios – where the original data is stored is irrelevant, as long as it’s accessible.

Portability to me means a person needs to have the ability to determine where their data is used. But to do that, they need control over that data – which means determining how it is used. Yet there is little point being able to determine how your data is used if you can’t determine who can access it. Therefore, the concept of portability invokes an understanding of what exactly control and accessibility mean.

So discussing portability requires us to also understand what data control and data accessibility really mean. You can’t “port” something unless you control it; and you can’t “control” something if you can’t determine who can “access” it. As I said, as long as the data is accessible, it can be located on the moon for all I care: for the concept of portability by context to exist, we must ensure as a condition that the data is open to access.

Ownership
Now here is where it gets complicated: who owns what? Maybe the conversation should come to who owns the information and knowledge generated from that data. Data on its own potentially doesn’t belong to anyone. My name “Elias” is shared by millions of other people in the world. Whilst I may own my identity, of which my name is a representation, is it fair to say I own the name “Elias”? On the flip side, if a picture I took is considered data – I think it’s fair to say I “own” that piece of data.

Information, on the other hand, requires a bit of work to create. Therefore, the generator of that information should get ownership. However, when we start applying this concept to something like a social relationship, it gets a bit tricky. If I add a friend on Facebook, and they accept me, who “owns” that relationship? Effectively both of us – we become joint partners in ownership of that piece of information. If I was to add someone as a friend on MySpace, they don’t necessarily have to reciprocate – it’s a one-way relationship. Does that mean I own that information?

This is when the concept of privacy comes in. If I am generating information about someone, am I entitled to it? If someone owns the underlying data I used to generate that information, then it would be fair to say I am “licensing” usage of that data to generate information which de facto is owned by them. But privacy as a concept, and in the legislation of many countries, doesn’t work like that. Privacy is even a right alongside other basic rights like freedom of expression and religion in the constitution of Iraq (Article 17). So what is privacy in the context of information that relates to someone’s identity?

Perhaps we should define privacy as the right to control information that represents an entity’s identity (being a person or legal body). Such a definition ties in with defamation law, for example, and the principle of privacy: you have control over what’s said about you, as a fundamental human right. But yet again, I’ve just opened up a can of worms: what is “identity”? Maybe the Identity Commons people can answer that? Would it be fair to say that, in the context of an “identity”, an entity like a person ‘owns’ it? So when it comes to information relating to someone’s identity, do we override who owns that information with this human right to privacy, regardless of who generated it?

This posting is a question, rather than an answer. When we say we want “data portability”, we need to be clear what exactly this means. Companies, I believe, are slightly afraid of DataPortability, because they think they will lose something – which is not true. Companies’ commercial interests are something I am very mindful of when we have these discussions, and I will ensure with my involvement that DataPortability pioneers not some unrealistic ideal but a genuine move forward in business thinking. It needs to be clear what constitutes ownership, and of what, so we can design a blueprint that accounts for users’ data rights without ruining the business models of companies that rely on our data.

Which brings me to my question – “who owns what”?

Control doesn’t necessarily mean access

I was approached by multiple people – PR professionals and journalists alike – after I gave my presentation at the Kickstart forum yesterday. Whilst I doubt DataPortability is something they will pick up for feature stories, given the product focus these journalists have, the conversations with them were extremely encouraging and I am thankful for their feedback.

One conversation particularly stood out for me, with John Hepworth – a former engineer who has been freelance writing for over 20 years – and it was in the context of the ability to port your health information. I’ve been thinking a lot about the scenario whereby consumers can move their health records from clinics, and with Google Health launching and the discussions in the DataPortability forums, I am certainly not alone. Something that caught my attention was Deepak Singh, who recently posted an interesting perspective: we shouldn’t give users access to their health records, because they will make uninformed judgments if they have control of them. That’s an excellent point, but one which opens up the whole issue of not just who owns your data, but who should have access to it (including yourself).

Hepworth provided a simple but extremely insightful position to this issue: you don’t need to give users the ability to see their data, for them to control it. Brilliant!

The benefits of controlling your data need to be looked at not just in the context of the laws of a country, but in terms of the net benefit to the individual. Comments provided by your physicians in your medical history, whilst they deserve to be owned by the individual they are about, also need to be restricted to people who are qualified to make educated judgments. In other words, you should have the right to port your data to another doctor, but you should only have access to it in the presence of a qualified doctor.

DataPortability should not equate to you seeing your data all the time – rather, it should be about determining how it gets used by others.

My presentation at Kickstart forum

I’m currently at Kickstart forum (along with the Mickster), and I just gave a presentation on DataPortability to a bunch of Aussie journalists. I didn’t write a speech, but I did jot down some points on paper before I spoke, so I thought I might share them here given I had a good response.

My presentation had three aspects: background, explanation, and implications of DataPortability. Below is a summary of what I said.

Background

  • Started by a bunch of Australians and a few other people overseas in November 2007 out of a chatroom. We formed a workgroup to explore the concept of social network data portability
  • In January 2008, Robert Scoble had an incident, which directed a lot of attention to us. As a consequence, we’ve seen major companies such as Google, Microsoft, Yahoo, Facebook, Six Apart, LinkedIn, Digg, and a host of others pledge support for the project.
  • We now have over 1000 people contributing, and have the support of a lot of influential people in the industry who want us to succeed.

Explanation

  • The goal is not to invent anything new. Rather, it’s to synthesise existing standards and technologies into one blueprint – and then we push it out to the world under the DataPortability brand
  • When consumers see the DataPortability brand, they will know it represents certain things – similar to how users recognise the Centrino brand represents Intel, mobility, wireless internet, and a long battery life. The brand communicates some fundamental things about a web service, allowing a user to recognise that a supporting site respects its users’ data rights and offers certain functionality.
  • Analogy of zero-networking: before the zeroconf initiative, it was difficult to connect to the internet wirelessly. Due to the standardisation of policies, we can now connect to the internet wirelessly at the click of a button. The consequence is not just a better consumer experience, but the enablement of future opportunities, such as what we are seeing with the mobile phone. Likewise, with DataPortability we will be able to connect to new applications and things will just “work” – and it will create new opportunities for us
  • Analogy of the bank: I stated how in the attention economy we give our attention – we put up with advertising, and in return we get content – and that the currency of the attention economy is data. With DataPortability, we can store our data in a bank and, via “electronic transfer”, interact with various services, controlling the use of that data in a centralised manner. We update our data at the bank, and it automatically synchronises with the services we use, ie, automatically updating your Facebook and MySpace profiles (see the sketch after this list)
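
To make the bank analogy concrete, here is a rough sketch of that synchronisation – update your data once at the “bank”, and every connected service is updated automatically. This is purely illustrative; no such API exists yet:

    # A sketch of the 'data bank' analogy: the user updates their data
    # once, and the bank pushes the change to every connected service
    # (the 'electronic transfer'). Illustrative only.

    class Service:
        def __init__(self, name):
            self.name, self.profile = name, {}

        def sync(self, key, value):
            self.profile[key] = value
            print(self.name + " updated: " + key + " = " + value)

    class DataBank:
        def __init__(self):
            self.profile = {}
            self.services = []       # services the user has connected

        def connect(self, service):
            self.services.append(service)

        def update(self, key, value):
            self.profile[key] = value
            for service in self.services:
                service.sync(key, value)

    bank = DataBank()
    bank.connect(Service("Facebook"))
    bank.connect(Service("MySpace"))
    bank.update("city", "Sydney")    # both profiles update at once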

Implications

  1. Interoperability: when diverse systems and organisations work together. A DataPortability world will allow you to use your data generated on other sites, ie, if you buy books about penguins on Amazon, you can get recommendations for penguin movies in your pay TV catalogue. Things like the ability to log in across the web with one sign-on create a self-supporting ecosystem where everyone benefits.
  2. Semantic web: I gave an explanation of the semantic web (which generated a lot of interest afterwards in chats), and then I proceeded to explain that the problem for the semantic web is that there hasn’t been uptake of its standards and technologies. I said that when a company adopts the DataPortability blueprint, they will effectively be supporting the semantic web – and hence enabling the next phase of computing history
  3. Data rights: I claimed the DataPortability project is putting data rights in the spotlight, and it’s an issue that has generated interest from other industries like the health and legal sectors, not just the Internet sector. Things like: what is privacy, and what exactly does my “data” mean? DataPortability is creating a discussion on what this actually means
  4. Wikiocracy: I briefly explained how we are running a social experiment with a new type of governance model, which can be regarded as an evolution of the open source model. “Decentralised” and “non-hierarchical” – with time it will be more evident what we are trying to do

Something that amused me in the one-on-one sessions I had with the journalists afterwards: one woman asked, “So why are you doing all of this?”. I said it was an amazing opportunity to meet people and build my profile in the tech industry, to which she concluded: “you’re doing this to make history, aren’t you?”. I smiled 🙂

How Google reader can finally start making money

Today, you would have heard that Newsgator, Bloglines, Me.dium, Peepel, Talis and Ma.gnolia have joined the APML workgroup and are in discussions with workgroup members on how they can implement APML into their product lines. Bloglines created some news the other week on their intention to adopt it, and the announcement today about Newsgator means APML is now fast becoming an industry standard.

Google, however, is still sitting on the sidelines. I really like using Google Reader, but if they don’t announce support for APML soon, I will have to switch back to my old favourite Bloglines, which is doing some serious innovating. Seeing as Google Reader came out of beta recently, I thought I’d help them out with a new feature (APML) that will see it generate some real revenue.

What a Google reader APML file would look like
Read my previous post on what exactly APML is. If the Google Reader team were to support APML, they could add to my APML file a ranking of blogs, authors, and key words. First an explanation, and then I will explain the consequences.

In terms of blogs I read, the percentage of postings I read from a particular blog would determine the relevancy score in my APML file. So if I was to read 89% of Techcrunch posts – which is information already provided to users – it would convert this into a relevancy score for Techcrunch of 89%, or 0.89.

APML: pulling rank

In terms of authors I read, it can extract who posted the entry from the individual blog postings I read and, like the blog ranking above, perform a similar procedure. I don’t imagine it would be too hard to do this; however, given it’s a small team running the product, I would put this at a lower priority to support.

In terms of key words, Google could employ its contextual analysis technology on each of the postings I read to extract key words. The frequency of extracted key words then determines the relevance score for those concepts.

So that would be the how. The APML file generated from Google Reader would simply rank these blogs, authors, and key words – and the relevance scores would update over time: the data is re-indexed and recalculated from scratch, so as concepts stop being viewed they diminish in value until they drop off.
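
As a rough sketch of the maths involved (read-frequency only – the keyword extraction and the actual APML serialisation are left out, and the numbers are made up):

    # A sketch of deriving APML-style relevancy scores from reading
    # behaviour: the share of a blog's posts you actually read becomes
    # the relevancy score, e.g. 89 of 100 TechCrunch posts -> 0.89.

    def relevancy_scores(read_counts, published_counts):
        return {blog: round(read_counts.get(blog, 0) / published_counts[blog], 2)
                for blog in published_counts}

    read = {"TechCrunch": 89, "ReadWriteWeb": 40}
    published = {"TechCrunch": 100, "ReadWriteWeb": 50}
    print(relevancy_scores(read, published))
    # {'TechCrunch': 0.89, 'ReadWriteWeb': 0.8}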

What Google reader can do with that APML file
1. Ranking of content
One of the biggest issues facing consumers of RSS is information overload. I am quite confident that people would pay a premium for any attempt to help rank the hundreds of items per day that a user needs to read. By having an APML file, over time Google Reader can match postings to a user’s ranked interests. So rather than presenting content in reverse chronology (most recent to oldest), it can instead organise content by relevancy (items of most interest to least).

This won’t reduce the amount of RSS consumption by a user, but it will enable them to know how to allocate their attention to content. There are a lot of innovative ways you can rank the content, down to the way you extract key words and rank concepts, so there is scope for competing vendors to have their own methods. The point is, however, that a feature to ‘Sort by Personal Relevance’ would be highly sought after, and I am sure quite a few people would be willing to pay for this godsend.
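
A minimal sketch of what such a ‘Sort by Personal Relevance’ could look like (the scoring scheme here is my own illustration, not anything Google or the APML spec defines):

    # A sketch of sorting feed items by the reader's APML concept
    # scores instead of reverse chronology: each item is scored by the
    # concepts it matches, highest first.

    def sort_by_relevance(items, concept_scores):
        def score(item):
            return sum(concept_scores.get(c, 0) for c in item["concepts"])
        return sorted(items, key=score, reverse=True)

    scores = {"data portability": 0.9, "celebrity gossip": -0.8}
    feed = [
        {"title": "Who wore what", "concepts": ["celebrity gossip"]},
        {"title": "OpenID explained", "concepts": ["data portability"]},
    ]
    for item in sort_by_relevance(feed, scores):
        print(item["title"])    # "OpenID explained" comes first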

I know Google seems to think contextual ads are everything, but maybe the Google Reader team can break from the mould and generate a different revenue stream through a value-add feature like this. Google should apply its contextual advertising technology to determine key words for filtering, not advertising – using pre-existing technology to generate a different revenue stream.

2. Enhancing its AdSense programme

Targeted advertising is still bloody annoying

One of the great benefits of APML is that it creates an open database about a user. Contextual advertising, in my opinion, is actually a pretty sucky technology, and its success to date is only because all the other targeted advertising models are flawed. As I explain above, the technology should instead be used to better analyse what content a user consumes, through keyword analysis. Over time, a ranking of these concepts can occur – as well as being shared from other web services that are doing the same thing.

An APML file that ranks concepts is exactly what Google needs to enhance its AdWords technology. Don’t use contextual analysis of a post to show ads; use it to rank concepts. Then, in aggregate, the contextual advertising will work, because it can be based off this APML file with great precision. And even better, a user can tweak it – the equivalent of tweaking what advertising the user wants to get. The transparency of a user being able to see the ‘concept ranking’ you generate for them is powerful, because a user is likely to monitor it for accuracy.

APML is contextual advertising’s biggest friend, because it profiles a user in a sensible way that can be shared across applications and monitored by the user. Allowing a user to tweak their APML file, motivated by more relevant content, aligns their self-interest with ensuring the targeted ads thrown at them, based on those ranked concepts, are in fact relevant.

3. Privacy credibility
Privacy is the inflation of the attention economy. You can’t proceed to innovate with targeted advertising technology whilst ignoring privacy. Google has clearly realised this the hard way, being labeled one of the worst privacy offenders in the world. By adopting APML, Google will go a long way towards gaining credibility on privacy. It will be creating open transparency around the information it collects to profile users, and it will allow users to control that profiling of themselves.

APML is a very clever approach to dealing with privacy. It’s not the only approach, but it is one of the most promising. Even if Google never uses an APML file as I describe above, the pure brand-enhancing value of giving users some control over their rightful attention data is something that alone would benefit the Google Reader product (and Google’s reputation itself) if they were to adopt it.

Privacy. Stop looking.

Conclusion
Hey Google – can you hear me? Let’s hope so, because you might be the market leader now, but so was Bloglines once upon a time.

Explaining APML: what it is & why you want it

Lately there has been a lot of chatter about APML. As a member of the workgroup advocating this standard, I thought I might help answer some of the questions on people’s minds – primarily, “what is an APML file?” and “why do I want one?”. If you haven’t already done so, I suggest you read the excellent article Marjolein Hoekstra recently wrote as an introduction to attention profiling. This article will focus on explaining the technical side of an APML file and what can be done with it. Hopefully, by understanding what APML actually is, you’ll understand how it can benefit you as a user.

APML – the specification
APML stands for Attention Profile Markup Language. It’s an attention economy concept, based on the XML technical standard. I am going to assume you don’t know what attention means, nor what XML is, so here is a quick explanation to get you on board.

Attention
There is this concept floating around on the web called the attention economy. It means that as a consumer, you consume web services – e-mail, RSS readers, social networking sites – and you generate value through your attention. For example, if I am on the MySpace band page for Sneaky Sound System, I am giving attention to that band. Newscorp (the company that owns MySpace) is capturing that implicit data about me (ie, it knows I like electro/pop/house music). By giving my attention, I have let Newscorp collect information about me. Implicit data is what you give away about yourself without saying it – like how people can determine what type of person you are purely off the clothes you wear. This is in contrast to explicit data – information you deliberately give up about yourself (like your gender when you signed up to MySpace).

I know what you did last Summer

XML
XML is one of the core standards on the web. The web pages you access are probably using a form of XML to provide the content to you (xHTML). If you use an RSS reader, it pulls a version of XML to deliver that content to you. I am not going to get into a discussion about XML, because there are plenty of other places that can do that. However, I just want to make sure you understand that XML is a very flexible way of structuring data. Think of it like a street directory: a map with no street names is useless if you are trying to find a house, but with the street names, it suddenly becomes a lot more useful because you can make sense of the houses (the content). XML is a way of describing a piece of content.

APML – the specification
So all APML is, is a way of converting your attention into a structured format. The way APML does this is by storing your implicit and explicit data – and scoring it. Lost? Keep reading.

Continuing with my example about Sneaky Sound System: if MySpace supported APML, it would identify that I like pop music. But just because someone gives attention to something, that doesn’t mean they really like it; the thing about implicit data is that companies are guessing, because you haven’t actually said it. So MySpace might say I like pop music, but with a score of 0.2, or 20% positive – meaning they’re not too confident. Now let’s say directly after that, I go onto the Britney Spears music space. Okay, there’s no doubting it now: I definitely do like pop music. So my score against “pop” is now 0.5 (50%). And if I visited the Christina Aguilera page: forget about it – my APML rank just blew out to 1.0! (Note that the scoring system is a percentage, with a range from -1.0 to +1.0, or -100% to +100%.)
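
A quick sketch of that accumulation (the increments are made up for illustration – the APML spec defines the -1.0 to +1.0 range, not how applications arrive at a score):

    # A sketch of implicit attention nudging a concept's score upward
    # with each visit, clamped to APML's -1.0..+1.0 range. A user (or
    # application) could equally set a score directly, e.g. to -1.0.

    def update_score(scores, concept, delta):
        new = scores.get(concept, 0.0) + delta
        scores[concept] = max(-1.0, min(1.0, new))
        return scores[concept]

    scores = {}
    update_score(scores, "pop", 0.2)   # visited Sneaky Sound System -> 0.2
    update_score(scores, "pop", 0.3)   # then Britney Spears -> roughly 0.5
    update_score(scores, "pop", 0.6)   # then Christina Aguilera -> capped
    print(scores["pop"])               # 1.0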

APML ranks things, and the concepts are not just ‘things’: it will also rank authors. In the case of Marjolein Hoekstra, who wrote that post I mention in my intro, because I read other things she has written, I have a high regard for her writing. Therefore, my APML file gives her a high score. On the other hand, I have an allergic reaction whenever I read something from Valleywag, because they have cooties. So Marjolein’s rank would be 1.0, but Valleywag’s -1.0.

Aside from the ranking of concepts (which is the core of what APML is), there are other things in an APML file that might confuse you when reviewing the spec. “From” means ‘from the place you gave your attention’. So with the Sneaky Sound System concept, it would be ‘from: MySpace’ – it simply describes the name of the application that added the implicit node. Another thing you may notice in an APML file is that you can create “profiles”. For example, the concepts about me in my “work” profile are not something I want to mix with my “personal” profile. This allows you to segment the ranked concepts in your APML file into different groups, giving applications access to only a particular profile.

Another thing to take note of is ‘implicit’ and ‘explicit’, which I touched on above – implicit being things you give attention to (ie, the clothes you wear: people guess, because of what you wear, that you are a certain personality type); explicit being things you gave away (the words you said – when you say “I’m a moron”, it’s quite obvious you are). APML categorises concepts based on whether you explicitly said it, or it was implicitly determined by an application.
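
To tie those pieces together, here is a small fragment in the shape of an APML file, with a few lines of Python to read it. I have loosely modelled it on the APML draft; the exact element names may differ from the final spec, so treat it as illustrative:

    # A minimal APML-like fragment: profiles, implicit vs explicit
    # concepts, a 'from' source and a -1.0..+1.0 value. Loosely
    # modelled on the APML draft; element names are illustrative.

    import xml.etree.ElementTree as ET

    APML = """
    <APML version="0.6">
      <Body defaultprofile="personal">
        <Profile name="personal">
          <ImplicitData>
            <Concept key="pop music" value="0.5" from="myspace.com"/>
          </ImplicitData>
          <ExplicitData>
            <Concept key="attention economy" value="1.0"/>
          </ExplicitData>
        </Profile>
      </Body>
    </APML>
    """

    root = ET.fromstring(APML)
    for profile in root.iter("Profile"):
        print("profile:", profile.get("name"))
        for concept in profile.iter("Concept"):
            print(" ", concept.get("key"), concept.get("value"),
                  concept.get("from") or "(stated by the user)")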

Okay, big whoop – what can APML do for me?
In my eyes, there are five main benefits of APML: filtering, accountability, privacy, shared data, and you being boss.

1) Filtering
If a company supports APML, they are using a smart standard that other companies also use to profile you. By ranking concepts and authors, for example, they can use your APML file in the future to filter things that might interest you. As I have such a high ranking for Marjolein, when Bloglines implements APML, it will be able to use this information to start prioritising content in my RSS reader. Meaning, of the 1,000 items in my Bloglines reader, all the blog postings from her will have more emphasis for me to read, whilst all the ones from Valleywag will sit at the bottom (with last night’s trash).

2) Accountability
If a company is collecting implicit data about me and trying to profile me, I would like to see that information, thank you very much. It’s a bit like me wearing a pink shirt at a party. You meet me, and think “Pink – the dude must be gay”. Now I am actually as straight as a doornail, and wearing that pink shirt is me trying to be trendy. What you have done, by observation, is profile me. Now imagine if that was a web application, where this happens all the time. By being able to access the data held about you – your APML file – you can change that. I’ve actually done this with Particls before, which supports APML. It had ranked a concept highly based on things I had read, which was wrong. So what I did was change the score to -1.0, so that Particls would never again show me content on something it had wrongly concluded I liked.

3) Privacy
I joined the APML workgroup for this reason: it was to me a smart way to deal with the growing privacy issue on the web. It fits my requirements for being privacy compliant:

  • who can see information about you
  • when can people see information about you
  • what information they can see about you

The way APML does that is by allowing me to create ‘profiles’ within my APML file; allowing me to export my APML file from a company; and by allowing me to access my APML file so I can see what profile I have.

Here is my APML, now let me in. Biatch.

4) Shared data
An APML file can, with your permission, share information between your web services. My concept rankings for books on Amazon.com can sit alongside my RSS feed rankings. What’s powerful about that is the unintended consequences of sharing that data. For example, if Amazon ranked my favourite book genres, this could be useful information to help me filter my RSS feeds by blog topic. The data generated in Amazon’s ecosystem can benefit me in another ecosystem, in a mutually beneficial way.

5) You’re the boss!
By being able to generate APML for the things you give attention to, you are recognising the value your attention has – something companies already place a lot of value on. Your browsing habits can reveal useful information about your personality, and the ability to control your profile is a very powerful concept. It’s like controlling the image people have of you: you don’t want the wrong things being said about you. 🙂

Want to know more?
Check the APML FAQ. Otherwise, post a comment if you still have no idea what APML is. I, or one of the other APML workgroup members, would be more than happy to answer your queries.
