Frequent thinker, occasional writer, constant smart-arse

Tag: rss (Page 1 of 2)

Data portability allows mashup for Australian bush fire crisis

Last night in Australia, the state of Victoria saw a series of bush fires that have ravaged communities – survivors describe it as “raining fire” that came out of nowhere. As I write this, up to 76 people have been killed.

Victorian AU Fires 2009
Dave Hollis says the sky looks like something out of the movie ‘Independence Day’

An important lesson has come out of this. First, the good stuff.

Googler Pamela Fox has created an invaluable tool to display the bush fires in real time. Using Google technologies like App Engine and the Maps API (which she is the support engineer for), she’s been able to create a mashup that helps the public.

She can do so because the Victorian fire department supports the open standard RSS. There are fires in my state of New South Wales as well, but unlike Victoria, there is no feed the map can readily pull the data from (which is why you won’t see any data from there). It appears states like NSW do support RSS for updates, but it would be more useful if there were some consistency – refer to the discussion of standards below.

For further information, you can read the Google blog post.

While the fire department’s RSS feed allows portability of the data, it doesn’t include geocodes or a clear licence for use. That may not sound like a big deal, but the ability to contextualise a piece of information in this case matters a hell of a lot.

As a workaround, Pamela ran the addresses through the Google geocoder to build a database of addresses with latitude and longitude.
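As a sketch of that workaround (the `geocode` stub and the coordinates below are placeholders, standing in for calls to a real geocoding service like Google’s):

```python
# Build a small address -> (lat, lng) database, skipping anything the
# geocoder can't resolve. geocode() here is a placeholder lookup; in
# practice it would call out to a real geocoding API.

def geocode(address):
    known = {
        "Kinglake VIC": (-37.53, 145.34),
        "Marysville VIC": (-37.51, 145.75),
    }
    return known.get(address)

def build_location_db(addresses):
    db = {}
    for addr in addresses:
        coords = geocode(addr)
        if coords is not None:
            db[addr] = coords  # only keep successfully geocoded addresses
    return db

db = build_location_db(["Kinglake VIC", "Marysville VIC", "No Such Rd"])
```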

GeoRSS and KML
In the geo standards world, two dominant standards enable the portability of data. One is GeoRSS, an extension that allows an RSS feed to carry geodata. The other is Keyhole Markup Language (KML), a standard developed by Google. GeoRSS simply modifies existing RSS feeds to be more useful, while KML is a standalone format for geographic data – more like what HTML is for web pages.
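To make the difference concrete, here’s roughly how the same alert might look in each format (titles and coordinates are invented for illustration; note the gotcha that GeoRSS writes “lat lon” while KML coordinates are “lon,lat,alt”):

```xml
<!-- GeoRSS: an ordinary RSS item, extended with one geo element -->
<item>
  <title>Grass fire near Kinglake</title>
  <georss:point>-37.53 145.34</georss:point>
</item>

<!-- KML: a standalone document built for geographic display -->
<Placemark>
  <name>Grass fire near Kinglake</name>
  <Point>
    <coordinates>145.34,-37.53,0</coordinates>
  </Point>
</Placemark>
```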

If the CFA and the other websites had supported either of these standards, it would have made life a lot easier. Pamela has access to Google resources to translate the information into geocodes, and even she had trouble. (Geocoding the location data was the most time-consuming part of the map-making process.)

The lessons
1) If you output data, output it in some standard structured format (like RSS, KML, etc).
2) If you want that data to be useful for visualisation, include both time and geographic (latitude/longitude) information. Otherwise you’re hindering the public’s ability to use it.
3) Let the public use your data. The Google team spent some time ensuring they were not violating anything by using this data. Websites should be clearer about usage rights, to let mashers work without fear.
4) Extend the standards. It would have helped a lot if the CFA site had extended their RSS with some custom elements (in their own namespace) for the structured data about the fires – for example, <cfa:State>Get the hell out of here</cfa:State>.
5) Having all the fire departments use the same standards would have made a world of difference – build the mashup once, using one method, and it is immediately useful next time.
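
As a sketch of lesson 4, a feed item could carry structured fire data in a custom namespace – every element name below is invented, not part of any real CFA feed:

```xml
<rss version="2.0"
     xmlns:cfa="http://www.cfa.vic.gov.au/ns/fires"
     xmlns:georss="http://www.georss.org/georss">
  <channel>
    <item>
      <title>Bunyip Ridge fire</title>
      <cfa:status>Going</cfa:status>
      <cfa:alertLevel>Urgent threat</cfa:alertLevel>
      <georss:point>-37.53 145.34</georss:point>
    </item>
  </channel>
</rss>
```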

Pamela tells me this is the fifth natural disaster she’s dealt with, and every time there has been an issue of where to get the data and how to syndicate it. Data portability matters most in natural disasters – people don’t have time to deal with scraping HTML (didn’t we learn this with Katrina?).

Let’s be prepared for the next time an unpredictable crisis like this occurs.

Social media and that whole “friend” thing

Social media is being killed not by fail whales, but by social awkwardness. Take Facebook as a simple example – is everyone you add there really your "friend"? What’s a "friend"? What ‘group’ do I put them in? It’s all very stressful. Bring social media services into the mix (sites where people collaborate, share content, discuss openly) and this stress becomes a real pain in the arse.
Twitter, for example – you get alerts when people post a message. What happens when there is someone you know in real life, and are friendly with, but whose Twitter stream is verbal diarrhoea? You force yourself to stay subscribed, because the social awkwardness matters more to you. Or FriendFeed, where people share links – it’s even worse. I would even go so far as to say it makes the service unusable.
Enter Google Reader, the tool I use to feed my online information habit. It has a feature that looks at who e-mails you; if those people use Google Reader and share links, their shared items come up alongside your other subscriptions. It’s become such a valuable thing for me that I now focus my attention on clearing items there ahead of my other few dozen subscriptions. The reason: it’s the benefit of social media services without the social awkwardness.
Take Chris Saad, who was on my list. I didn’t like the things he shared – movie reviews – so I hid him. Until now, when a Google blog search will notify him (I expect him to find this and respond within six hours of posting – watch!), he probably didn’t even know. However, if I were to unsubscribe from him on something like Twitter, he’d work it out and say "dude, what’s the deal?". An inherent value of social media is that it’s collaborative communication; it’s just that too much communication from too many people becomes more noise than signal.
This new age of mass collaboration is a massive thing – I don’t think even the early adopters driving it realise what’s happening. It’s the future of media: people I know and trust suggesting articles is the same human-powered recommendation the mass media has always done, but so much more efficient, relevant and better.
And yet Google Reader, in its simplicity, does it best – it’s almost like a secret. Mike Cannon-Brookes probably doesn’t even realise I track his shared links, but I love them because he reads a lot of RSS feeds on diverse subjects that interest me. Likewise, Kate Carruthers has such a diverse reading list that I feel I can whittle down my own RSS subscriptions – which stress me out, there are too many – and just get fed the good stuff from her.
Am I showing up in their feeds? Who knows. And quite frankly, who cares. I know I do for Brady Brim De-Forest, because he’s re-shared stuff I shared from feeds I doubt he subscribes to (at least not then). But even that doubt doesn’t matter. It’s a secret club – I go about clicking the "share" button on good content I come across, thinking perhaps someone who follows me would appreciate it. There’s no feedback mechanism, other than seeing other people encouraged to do the same. And this is the first time I’ve ever discussed the club openly. I think it exists. Maybe it doesn’t. But damn, it rocks.

Liako is everywhere…but not here

Life’s been busy, and this blog has been neglected. Not a bad thing – a bit of life-living, work-smacking, exposure to new experiences, and active osmosis from the things I am involved in – is what makes me generate the original perspectives I try to create on this blog.

However, to my subscribers (Hi Dad!), let this post make it up to you with some content I’ve created elsewhere.

You already know about the first podcast I did with the Perth baroness Bronwen Clune and the only guy I know who can pull off a mullet, Mike Cannon-Brookes of Atlassian. Here’s a recap of some other episodes I’ve done:

  • Episode two: ex-PwC boy Matthew Macfarlane talks to current PwC boy me and Bronwen about his new role as partner of a newly created investment fund, Yuuwa Capital. He told us what he’s looking for in startups, as he’s about to spend $40 million on innovative startups!
  • Episode three: marketing guru Steve Sammartino tells us about building a business and his current startup, Rentoid.com
  • Episode four: experienced entrepreneur Martin Hosking shares lessons and insights with us, whilst talking about his social commerce art service RedBubble.
  • Episode five: “oh-my-God-that-dude-from-TV!” Mark Pesce joins us to discuss that filthy government filter to censor the Internet
  • Episode six: ex-Fairfax Media strategist Rob Antulov tells us about 3eep – a social networking solution for the amateur and semi-professional sports world.

I’ve also put my data portability hat on beyond mailing list arguments and helped out a new social media service called SNOBS – a Social Network for Opportunistic Business women – with a beginner’s guide to RSS. You might see me contribute there in future, because I love seeing people pioneer New Media and I think Carlee Potter is doing an awesome job – so go support her!

Over and out – regular scheduling to resume after this…

How Google Reader can finally start making money

Today, you would have heard that NewsGator, Bloglines, Me.dium, Peepel, Talis and Ma.gnolia have joined the APML workgroup and are in discussions with workgroup members on how they can implement APML into their product lines. Bloglines made some news the other week with their intention to adopt it, and today’s announcement about NewsGator means APML is now fast becoming an industry standard.

Google, however, is still sitting on the sidelines. I really like using Google Reader, but if they don’t announce support for APML soon, I will have to switch back to my old favourite Bloglines, which is doing some serious innovating. Seeing as Google Reader came out of beta recently, I thought I’d help them out to finally add a new feature (APML) that will see it generate some real revenue.

What a Google Reader APML file would look like
Read my previous post on what exactly APML is. If the Google Reader team were to support APML, they could add to my APML file a ranking of blogs, authors, and key words. First an explanation, and then I will explain the consequences.

In terms of blogs I read, the percentage of posts I read from a particular blog will determine its relevancy score in my APML file. So if I were to read 89% of TechCrunch posts – which is information already provided to users – it would convert into a relevancy score for TechCrunch of 89%, or 0.89.
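As a sketch of that arithmetic (my own maths, not anything Google has published):

```python
def relevancy(items_read, items_published):
    """Fraction of a feed's posts actually read, as an APML-style score."""
    if items_published == 0:
        return 0.0
    return round(items_read / items_published, 2)

# Reading 89 of the last 100 TechCrunch posts gives a score of 0.89
techcrunch_score = relevancy(89, 100)
```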

ranking

APML: pulling rank

In terms of authors I read, it can extract who posted each entry I read and, like the blog ranking above, perform a similar procedure. I don’t imagine it would be too hard to do this; however, given it’s a small team running the product, I would put this at a lower priority.

In terms of key words, Google could employ its contextual analysis technology to extract key words from each of the postings I read. By performing this on each post, the frequency of extracted key words determines the relevance score for those concepts.

So that would be the how. The APML file generated from Google Reader would simply rank these blogs, authors, and key words – and the relevance scores would update over time. The data is periodically re-indexed and recalculated from scratch, so as concepts stop being viewed, they diminish in value until they drop off.
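One way to sketch that decay (the 0.9 multiplier and the drop-off threshold are my assumptions, not part of any spec):

```python
def decay_scores(scores, factor=0.9, floor=0.05):
    """Shrink every concept's score on each re-index; drop ones near zero."""
    decayed = {k: round(v * factor, 4) for k, v in scores.items()}
    return {k: v for k, v in decayed.items() if abs(v) >= floor}

scores = {"techcrunch": 0.89, "stale-topic": 0.05}
scores = decay_scores(scores)  # stale-topic decays below the floor and drops off
```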

What Google Reader can do with that APML file
1. Ranking of content
One of the biggest issues facing consumers of RSS is information overload. I am quite confident that people would pay a premium for any attempt to rank what can be hundreds of items per day that a user needs to read. With an APML file, Google Reader can over time match postings to a user’s ranked interests. Rather than presenting content in reverse chronology (most recent to oldest), it can instead organise content by relevancy (items of most interest to least).

This won’t reduce the amount of RSS a user consumes, but it will let them know how to allocate their attention. There are a lot of innovative ways you can rank the content, down to the way you extract key words and rank concepts, so there is scope for competing vendors to have their own methods. The point is, a feature to ‘Sort by Personal Relevance’ would be highly sought after, and I am sure quite a few people would be willing to pay for this godsend.

I know Google seems to think contextual ads are everything, but maybe the Google Reader team can break from the mould and generate a different revenue stream through a value-add feature like this: apply the pre-existing contextual advertising technology to determine key words for filtering, not advertising.

2. Enhancing its AdSense programme

blatant ads

Targeted advertising is still bloody annoying

One of the great benefits of APML is that it creates an open database about a user. Contextual advertising, in my opinion, is actually a pretty sucky technology, and its success to date is only because all the other types of targeted advertising models are flawed. As I explain above, the technology should instead be used to better analyse what content a user consumes, through keyword analysis. Over time, a ranking of these concepts can occur – as well as being shared from other web services that are doing the same thing.

An APML file that ranks concepts is exactly what Google needs to enhance its AdWords technology. Don’t use it to analyse a post to show ads; use it to analyse a post to rank concepts. Then, in aggregate, the contextual advertising will work with great precision, because it is based off this APML file. And even better, a user can tweak it – which is the equivalent of tweaking what advertising they want to get. The transparency of a user being able to see the ‘concept ranking’ you generate for them is powerful, because a user is likely to monitor it for accuracy.

APML is contextual advertising’s biggest friend, because it profiles a user in a sensible way that can be shared across applications and monitored by the user. Allowing a user to tweak their APML file, with the motivation of more targeted content, aligns their self-interest with ensuring the targeted ads thrown at them – based on those ranked concepts – are in fact relevant.

3. Privacy credibility
Privacy is the inflation of the attention economy. You can’t proceed to innovate with targeted advertising technology whilst ignoring privacy. Google has clearly realised this the hard way by being labelled one of the worst privacy offenders in the world. By adopting APML, Google will go a long way towards gaining credibility on privacy. It will be creating open transparency around the information it collects to profile users, and it will allow a user to control that profiling of themselves.

APML is a very clever approach to dealing with privacy. It’s not the only approach, but it is one of the most promising. Even if Google never uses an APML file as I describe above, the pure brand-enhancing value of giving users some control over their rightful attention data would alone benefit the Google Reader product (and Google’s reputation itself).

privacy

Privacy. Stop looking.

Conclusion
Hey Google – can you hear me? Let’s hope so, because you might be the market leader now, but so was Bloglines once upon a time.

Explaining APML: what it is & why you want it

Lately there has been a lot of chatter about APML. As a member of the workgroup advocating this standard, I thought I might help answer some of the questions on people’s minds. Primarily – “what is an APML file”, and “why do I want one”. I suggest you read the excellent article by Marjolein Hoekstra on attention profiling that she recently wrote, if you haven’t already done so, as an introduction to attention profiling. This article will focus on explaining what the technical side of an APML file is and what can be done with it. Hopefully by understanding what APML actually is, you’ll understand how it can benefit you as a user.

APML – the specification
APML stands for Attention Profile Markup Language. It’s an attention economy concept, based on the XML technical standard. I am going to assume you don’t know what attention means, nor what XML is, so here is a quick explanation to get you on board.

Attention
There is this concept floating around on the web about the attention economy. It means that as a consumer, you consume web services – e-mail, RSS readers, social networking sites – and you generate value through your attention. For example, if I am on the MySpace band page for Sneaky Sound System, I am giving attention to that band. Newscorp (the company that owns MySpace) is capturing that implicit data about me (ie, it knows I like electro/pop/house music). By giving my attention, Newscorp has collected information about me. Implicit data is what you give away about yourself without saying it – like how people can determine what type of person you are purely off the clothes you wear. It contrasts with explicit data – information you volunteer about yourself (like your gender when you signed up to MySpace).

Attention camera

I know what you did last Summer

XML
XML is one of the core standards of the web. The web pages you access are probably using a form of XML to provide the content to you (XHTML). If you use an RSS reader, it pulls a version of XML to deliver that content to you. I am not going to get into a discussion about XML, because there are plenty of other places that can do that. I just want you to understand that XML is a very flexible way of structuring data. Think of it like a street directory: a map with no street names is useless if you are trying to find a house, but add the street names and it suddenly becomes a lot more useful, because you can make sense of the houses (the content). It’s a way of describing a piece of content.

APML – the specification
So all APML is, is a way of converting your attention into a structured format. The way APML does this is by storing your implicit and explicit data – and scoring it. Lost? Keep reading.

Continuing with my example about Sneaky Sound System: if MySpace supported APML, they would identify that I like pop music. But just because someone gives attention to something doesn’t mean they really like it; the thing about implicit data is that companies are guessing, because you haven’t actually said it. So MySpace might say I like pop music, but with a score of 0.2, or 20% positive – meaning they’re not too confident. Now let’s say directly after that, I go onto the Britney Spears music space. Okay, there’s no doubting now: I definitely do like pop music. So my score against “pop” is now 0.5 (50%). And if I visited the Christina Aguilera page: forget about it – my APML rank just blew out to 1.0! (Note that the scoring system is a percentage, with a range from -1.0 to +1.0, or -100% to +100%.)
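The arithmetic in that story can be sketched like this – the size of each bump is invented to match the example; the only thing APML itself fixes is the -1.0 to +1.0 range:

```python
def bump(score, amount):
    """Add implicit evidence to a concept's score, clamped to APML's range."""
    return max(-1.0, min(1.0, round(score + amount, 2)))

pop = 0.2             # Sneaky Sound System page: a weak first guess
pop = bump(pop, 0.3)  # Britney Spears page: now 0.5
pop = bump(pop, 0.6)  # Christina Aguilera page: clamps at 1.0
```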

APML ranks things, but the concepts are not just things: it will also rank authors. Take Marjolein Hoekstra, who wrote the post I mention in my intro: because I read other things from her, I have a high regard for her writing, and therefore my APML file gives her a high score. On the other hand, I have an allergic reaction whenever I read something from Valleywag, because they have cooties. So Marjolein’s rank would be 1.0 and Valleywag’s -1.0.

Aside from the ranking of concepts (which is the core of what APML is), there are other things in an APML file that might confuse you when reviewing the spec. “From” means ‘from the place you gave your attention’. So with the Sneaky Sound System concept, it would be ‘from: MySpace’ – it simply names the application that added the implicit node. Another thing you may notice is that you can create “profiles”. For example, the concepts about me in my “work” profile are not something I want to mix with my “personal” profile. Profiles let you segment the ranked concepts in your APML file into different groups, giving applications access to only a particular profile.

Another thing to take note of is ‘implicit’ and ‘explicit’, which I touched on above: implicit being things you give attention to (ie, the clothes you wear – people guess, because of what you wear, that you are a certain personality type); explicit being things you gave away directly (the words you said – when you say “I’m a moron”, it’s quite obvious you are). APML categorises concepts based on whether you explicitly said it or it was implicitly determined by an application.
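Putting those pieces together, a fragment of an APML file might look like this (the values are invented, and the structure follows the 0.6 spec as best I can recall it – check the spec itself for the authoritative shape):

```xml
<APML version="0.6">
  <Body defaultprofile="personal">
    <Profile name="personal">
      <ImplicitData>
        <Concepts>
          <Concept key="pop music" value="1.0" from="MySpace"
                   updated="2008-11-30T12:00:00Z" />
        </Concepts>
      </ImplicitData>
      <ExplicitData>
        <Concepts>
          <Concept key="attention economy" value="0.9" />
        </Concepts>
      </ExplicitData>
    </Profile>
    <Profile name="work">
      <!-- work concepts kept separate from personal ones -->
    </Profile>
  </Body>
</APML>
```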

Okay, big whoop – what can APML do for me?
In my eyes, there are five main benefits of APML: filtering, accountability, privacy, shared data, and you being boss.

1) Filtering
If a company supports APML, they are using a smart standard that other companies use to profile you. By ranking concepts and authors, they can use your APML file in the future to filter things that might interest you. As I have such a high ranking for Marjolein, when Bloglines implements APML, it will be able to use this information to start prioritising content in my RSS reader. Meaning: of the 1000 items in my Bloglines reader, all the blog postings from her will get more emphasis, whilst all the ones from Valleywag will sit at the bottom (with last night’s trash).
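A sketch of that prioritisation, assuming the author scores have already been parsed out of the APML file into a dict (all the data here is invented):

```python
author_scores = {"Marjolein Hoekstra": 1.0, "Valleywag": -1.0}

def prioritise(items, scores):
    """Order feed items by the reader's APML score for each author.
    Unknown authors default to a neutral 0.0."""
    return sorted(items, key=lambda item: scores.get(item["author"], 0.0),
                  reverse=True)

inbox = [
    {"title": "Gossip", "author": "Valleywag"},
    {"title": "Attention profiling", "author": "Marjolein Hoekstra"},
    {"title": "Misc post", "author": "Someone Else"},
]
ranked = prioritise(inbox, author_scores)
```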

2) Accountability
If a company is collecting implicit data about me and trying to profile me, I would like to see that information, thank you very much. It’s a bit like me wearing a pink shirt at a party. You meet me, and think “pink – the dude must be gay”. Now, I am actually as straight as a doornail, and wearing that pink shirt is me trying to be trendy. What you have done, by observation, is profile me – wrongly. Now imagine a web application, where this happens all the time. By accessing your data – your APML file – you can change that. I’ve actually done this with Particls, which supports APML. It had ranked a concept highly based on things I had read, which was wrong. So what I did was change its score to -1.0, so that Particls would never show me content on that concept again.

3) Privacy
I joined the APML workgroup for this reason: it was to me a smart way to deal with the growing privacy issue on the web. It fits my requirements for being privacy compliant:

  • who can see information about you
  • when can people see information about you
  • what information they can see about you

The way APML does that is by allowing me to create ‘profiles’ within my APML file; allowing me to export my APML file from a company; and by allowing me to access my APML file so I can see what profile I have.

drivers

Here is my APML, now let me in. Biatch.

4) Shared data
An APML file can, with your permission, share information between your web services. My concept rankings for books on Amazon.com can sit alongside my RSS feed rankings. What’s powerful about that is the unintended consequences of sharing that data. For example, if Amazon ranked my favourite book genres, that could be useful information to help filter my RSS feeds by blog topic. The data generated in Amazon’s ecosystem can benefit me in another ecosystem, in a mutually beneficial way.

5) You’re the boss!
By generating APML for the things you give attention to, you are recognising the value your attention has – something companies already place a lot of value on. Your browsing habits can reveal useful information about your personality, and the ability to control your profile is a very powerful concept. It’s like controlling the image people have of you: you don’t want the wrong things being said about you. 🙂

Want to know more?
Check the APML FAQ. Otherwise, post a comment if you still have no idea what APML is. I or one of the other APML workgroup members would be more than happy to answer your queries.

Bloglines to support APML

Tucked away in a post by one of the leading RSS readers in the world, Bloglines announced that they will be investigating how they can implement APML into their service. The thing about standards is that as fantastic as they are, if no one uses them, they are not a standard. Over the last year, dozens of companies have implemented APML support, and this latest announcement – by a revitalised Bloglines team that is set to take back what Google took from them – means we are going to see a lot more innovation in an area that has largely gone unanswered.

The announcement has been covered by Read/WriteWeb and APML founders Faraday Media, and a thoughtful analysis has been done by Ross Dawson. Ben Metcalfe has also written a thought-provoking analysis of the merits of APML.

What this means

APML is about taking control of the data that companies collect about you. For example, if you are reading lots of articles about dogs, RSS readers can make a good guess that you like dogs – and will tick the “likes dogs” box on the profile they build of you, which they use to determine advertising. Your attention data is anything you give attention to – when you click on a link within Facebook, that’s attention data that reveals things about you implicitly.

The big thing about APML is that it solves a massive problem when it comes to privacy. If you look at my definition of what constitutes privacy, the ability APML gives you to control what data is collected completely fits the bill. I was so impressed when I first heard about it – it’s a problem I had been thinking about for years – that I immediately joined the APML workgroup.

Privacy is the inflation of the attention economy, and companies like Google are painfully learning about the natural tension between privacy and targeted advertising. (Targeted advertising being the thing that Google is counting on to fund its revenue.) The web has seen a lot of technological innovation, which has disrupted a lot of our culture and society. It’s time that the companies disrupting the world’s economies started innovating to answer the concerns of the humans using their services. Understanding how to deal with privacy is a key competitive advantage for any company in the Internet sector. It’s good to see some finally realising that.

Faraday Media – Particls

This series of blog posts – wizards of oz – is to highlight the innovation we have down under. So I begin with Faraday Media, a Brisbane-based start-up that launched their flagship product today.

Particls is an engine that learns what you are interested in, and alerts you when content on the internet becomes available – through a desktop ‘ticker’ or pop-up alerts.

Value
1) It’s targeted. Particls is an attention engine – it learns what you want to read, and then goes and finds relevant information. That’s a powerful tool for those of us drowning in information overload, who don’t have time to read.

2) It catches your attention. Particls is based on the concept of ‘alerts’ – information trickles across your screen seamlessly as you do your work, like a news ticker. For the things that matter, an alert pops up. The way you deal with information overload is not by shutting yourself out – it’s by turning up the volume on the things you value more than others.

3) The founders understand privacy. They started the APML standard – a workgroup I joined because it’s the best attempt I have seen yet that tackles the issue of privacy on the internet. For example, I can see what the Particls attention engine uses to determine my preferences – lists of people and subjects with “relevance scores”. And better yet – it’s stored on my hard-disk.

4) It’s simple. RSS is a huge innovation on the web that only a minority of internet users understand. The problem with RSS (Really Simple Syndication) is that it’s not simple. Particls makes it dead simple to add RSS feeds and track that content.

Conclusion

Why the hell doesn’t Fairfax acquire the start-up, rather than wasting time creating yet another publication (incidentally in the same city) that we don’t have time to read? In my usage of the product, I have been introduced to content that I am interested in, that I never would have realised existed on the web. In my trials, I have mainly used it to keep track of my research interests, and despite my skepticism about how ‘good’ the attention engine is, it has absolutely blown me away.

And it’s not just the consumer space – a colleague (who happens to hold a lot of influence over enterprise architecture at our 140,000-person firm) was blasting RSS one day on an internal blog, saying we don’t yet have the technology to ‘filter’ information. I told him about Particls – he’s now in love. If a guy like him, who shapes IT strategy for a $20 billion consulting firm, can get that excited, that’s got to tell you something.

How to become the next Google

During the industrial age, information was a scarce commodity. The flow of information was controlled by the mass media – books, newspapers, television were the sole distributors of information. The media during this age had a huge influence on society, because the mass media was effectively the "gatekeeper" of information. Supply – or rather, the distribution capacity to supply information – was limited.

These gatekeepers were criticised for the power they held over what information they distributed – a thing the internet changed. We now live in the Information Age, which has come on the back of the internet. This has opened up the distribution points of information – access to information is no longer dependent on the mass media, and availability is no longer confined by physical constraints (the internet has potentially infinite storage). No longer do the traditional gatekeepers control the flow of, and access to, information.

The consequence of losing the old gatekeepers is that information is now plentiful, and consumers face information overload. In an environment of limited distribution, the mass media of the Industrial Age by consequence filtered information for consumers. Now, with infinite information available, consumers are finding it difficult to filter information themselves: identifying quality information was a role that wasn’t fully appreciated before. The cost of consuming quality information is being borne by the consumer, who is forced to identify it themselves.

The attention economy has risen as an important factor, as consumers only have limited attention to view the now unlimited amount of information. The new scarce commodity is no longer information, but the attention of consumers. Demand for information is now limited – people only have so much time to sift through the abundance of information.

Why search and aggregation services are valuable to consumers
The 1990s saw the development of search engines as a solution to this problem. Search engines have now become the new gatekeepers of information, as they provide consumers a means of filtering information and returning only what is relevant. Search works as a filtering system because consumers identify what they want, and a search engine simply needs to associate pre-indexed information that best matches that request. Innovation in search is about increasing the relevance of information to that request.

Other technologies have also been developed that allow for the filtering of information. "Aggregation" services – similar in role to what newspapers, for example, traditionally did – help pull together information from disparate sources. The value of these aggregation services is in the value of relevant information to the consumer – a similar scenario to search. Search engines help consumers pull information; aggregation services push information they think a consumer would want.

With both these "pull" and "push" technologies, consumers are reverting to an industrial-age concept: trust in brands. Google’s search, for example, has impressive technology – but so do its rivals. User experience aside, the biggest advantage Google has is that users trust its brand more for the quality of information provided. Users trust Google to provide more relevant information – relevance is quality. The reason consumers used to trust a broadsheet newspaper over a student newspaper lies in the credibility of that brand to provide quality. The brand was, and still is, a way for consumers to filter information – or rather, trust others to provide information they can rely on.

The future

If you are looking to start a new search engine that will beat Google: don’t. If you think you have a brand new way of identifying quality information: spend your efforts there. Remember, the reason search, RSS, and profiling aggregators are important to consumers is that they help them find the best quality information in the shortest period of time.

You can’t beat Google at search. And even if you do, by the time you do, it will be a waste of time, because the industry will have evolved. Innovation on the internet in the Information Age is about understanding why the traditional gatekeepers were so effective at what they did. The last decade has seen some clever innovation – but we still have a long way to go.

« Older posts