Frequent thinker, occasional writer, constant smart-arse

Tag: information (Page 1 of 6)

The changing dynamics of news

Amid the recent controversy over the firing of Michael Arrington from TechCrunch, I believe we are watching the close of an era of innovation led by TechCrunch, one we're only starting to appreciate.

To start this thought experiment, consider how four years ago (and things haven't changed since) I wrote about the two kinds of content that exist: data, like breaking news or archived news; and culture, which includes analysis such as editorials and entertainment such as satire.

I argued that each content form has unique characteristics that need to be exploited in different ways. Think about that before digesting this blog post, because understanding the product (such as news) impacts the way the market will operate.

Some trends of the past
Over the last two decades, we’ve seen the form (and costs) of news be disrupted dramatically.

It started with hypertext systems that helped humans share knowledge (the most successful hypertext implementation, the World Wide Web, forever changed the world 20 years ago); search engines that helped us find information more easily (with Google transforming the world 10 years ago); and content management systems that reduced the cost of publishing to practically zero (with Movable Type and especially WordPress driving this).

While the sourcing of news still depends on unique relationships that journalists can draw on, even that has changed due to social media, which has created a distributed 'citizen journalism' world. Related to this is a movement Julian Assange calls "scientific journalism", where the sourcing of news is now democratised and exposed in its raw form.

Some observations of the present
With that, I've noticed two interesting things about the tech news ecosystem, which is helping shape the trends in news more broadly. First, tech bloggers kill themselves to break stories, to the point where blogs like TechCrunch have become cults for those who work there. Separately, the rise of news aggregators like Techmeme and Hacker News (or Slashdot and Digg before them) has built audiences who are overwhelmed by information overload and crave a filter from a quality editorial voice (the latter being why news personalisation technologies cannot work on their own).

The big secret (that’s not particularly secret due to the abundance of ‘share this’ buttons on webpages) about the news ecosystem is that it’s the aggregators who drive traffic to news outlets that report the news. When you understand that point, a lot of other things become clearer.

Content Aggregation infographic

On the other hand, tech entrepreneurs break their backs in the hope of getting written about on the tech blogs. The reasons vary: credibility so they can recruit talent, exposure so they can raise money, and a belief that they can acquire customers (the whole point of building a startup).

Which leads me to think that, despite all these random observations I've listed above, there is a fundamental efficiency evolving in news reporting that may give an insight into the future.

Let’s keep thinking. Other things to consider include:

  • The audience starts with the aggregators for news, and articles with better headlines tend to perform better.
  • News in its barest form is awareness of an event (data); anything additional is analysis (culture), which shapes understanding of the event.
  • The rise of 'scientific journalism' and social media allows society to discover and share information without a third party (thanks to technology tools).
  • Press releases are an invention to communicate a message for reporters to base their writing on; often they just copy and paste the words.

Some thinking about the future
News should be stripped to its barest form: a description of the event. It should be what we currently consider a "headline", preferably with a link to the source material. Professional journalists, bloggers, and the rest of the world should therefore compete to break news not on who can write the best prose but on who can share a one-line summary, based on their ability to extract that information (either by happening to be at the event or by having exclusive relationships with the event maker). The cost of breaking news should simply be a matter of who can share a link the quickest.

News Article - Wichita Falls Record News

Editorial, which is effectively analysis (or entertainment in some cases) and what blogging has become, should be left to what we now consider "comments". Readers get to have the "news" coloured for them, based on a managed curation of the top commentators.

Tying this together: imagine a world where anyone could submit "news" and anyone could provide "editorial". A rolling river of submitted headlines and links, with discussions roaring underneath each item, reflecting the interpretation of the masses.

You could argue Twitter has become the first true example of this: most content is in full public view but with a restricted output (140 characters); people can share links with their comments; and the top stories tend to get retweeted, which further gains exposure. Similar things could be said about Digg, Reddit, and Hacker News. But these services, along with Twitter (and Facebook), are simply an insight into a future that's already begun. I think they are just early pioneers before the real solution comes, similar to how Tim Berners-Lee created a hypertext system in a saturated market that then became the standard; Google created a search engine in a saturated market that then became the standard; and WordPress created a blogging platform in a saturated market that then became the standard. Lots of people have tried to innovate in the news ecosystem, but I still don't think the nut's been cracked.

News has a lot of value, but the value differs based on who breaks it and who interprets it. For example, when I fire up some of my favourite aggregators, I tend not to click on the original headline but on brands I like, so as to read their take on the event (though when I'm looking deeply into something, I dig for the source material). But the problem with news now is that the cost structures supporting it are being fundamentally disrupted: the economics favour those who break the news, while those who interpret it suffer, as traditionally both roles were considered one function. Something's going on, and the answer is cheaper production, faster distribution, and a more decentralised effort across society rather than the self-appointed curators.

While the newspaper industry is collapsing, something more fundamental is happening with news and we’re simply in the eye of the storm. Stay tuned.

Quora will give stock options to celebrities, reject a Google acquisition

It’s a private company so we may never know. But one thing that’s clear, is that the circumstances surrounding its growth are mimicking things we’ve seen in the last few years. The below are some specific thoughts on where I see Quora heading in 2011 and beyond.

1) Its growth will be driven by celebrities
A year ago, I asked if Twitter gave stock options to celebrities, which would explain the bizarre trend of celebrities embracing the service. I'm willing to bet money they did.

Steve Case, the billionaire co-founder of AOL, has recently been actively answering questions on Quora, and it is awesome to see the responses. Now imagine if this domain knowledge in tech were expanded to people asking questions about celebrities. Let's not forget Twitter started as a tech industry thing (I was told in May 2007, when I first started networking in the Sydney scene, that it was the 'thing' I had to have to have credibility): it was a way to network in tech and track interesting people. Thinking back, it was transformative because successful people in tech were suddenly accessible to new upstarts like me. In the years that followed, Ashton Kutcher's CNN race for one million followers, combined with the Oprah moment, suddenly made it mainstream, transforming it into a way to track your favourite celebrities, which is what drives its growth now.

So imagine if Quora gave stock options to all the interesting people of the world and they started answering questions? Imagine it being a direct way to interact with elected officials? Keep reading, this is not the first time we’ve seen this.

2) It will break news and information
Quora did something interesting a few months ago: it helped unravel some big news in the industry. It will do this again.

Its recognition in the mainstream (give it at least two years) will come if two things occur: a massive tragedy that uses Quora as a channel for distributed reporting and citizen journalism, and the 2012 presidential candidates using it to engage with voters. Who knows, maybe 2012 is too soon, but as with Twitter, it will be those two kinds of events that make it mainstream. (For context, read my post from two years ago, which explains the origins and rise of social media.) The service is perfectly set up to cater for both situations in a way that exceeds the abilities of Facebook and Twitter, its cousins in the social media world that are driving this broader trend.

3) Google will try to buy them
Quora's "social" competency complements Google's lack in that area. Ironically, this is because both of Quora's founders were early and senior employees of Facebook; for the same reason, I believe the Obama campaign led to him becoming the first social media president (another early employee and "co-founder" of Facebook, Chris Hughes, was responsible for Obama's Internet strategy).

Google is trying really hard to catch up on social, an area Facebook dominates and one that will lead to Google losing its leadership in the industry. Despite all the rumours of its internal social networking initiatives, the numerous products launched so far have all been ordinary. And for good reason: Google doesn't get social. It can't; it's not in its DNA.

Google has an engineering culture where decisions are made based on data. Google's former top designer quit because of "a design philosophy that lives or dies strictly by the sword of data". Rather than trust the talent of its designers, it would over-rule decisions based on user metrics, which, in a conversion business, makes sense. But the thing about user experience is that it's about shaping new behaviours rather than relying on existing patterns.

Which, interestingly, is what Quora is excelling at: its user experience is inspiring the entire industry (like the Angel List crew, who in turn are inspiring an entire industry). That's an impressive thing to do as a startup, and it shows innovation in an area that is key to engagement, engagement that Google can't seem to get.

4) They will decline a Google acquisition and do a licensing deal instead
Quora has very rich content, the stuff that makes Google searches a lot more interesting. Google validated its interest in the social search area with the $50 million acquisition of Aardvark. Quora, in my eyes, would be a perfect fit for the same goal via a different approach.

Google makes its money on specific types of searches: transactional searches, when you are looking to buy something (say a flight), as opposed to informational ones (like what's the capital of Australia). But it's always been the informational searches that drive usage of the Google search engine, as Google is a one-stop shop for answers. Quora is like the structured blogging equivalent of Wikipedia, which is gold in the eyes of Google.

Which is why I believe they will go down the path of Twitter, which successfully played Google and Bing (Microsoft) off each other with a licensing agreement to show tweets in searches, a feature that allowed the search engines to claim they were now "real time". The search engines will want to do this with Quora, because the questions on Quora mimic the searches people make, and the answers offer a treasure trove of curated responses written by real people.

Conclusion
I could be wrong. Regardless, even if it doesn't succeed the way I think it will, expect the startup to make a lot more noise in 2011, beyond the current cries of people saying this last week marked a tipping point. The big blogs will continue to talk about it, and new journalists are now discovering it, only compounding my original complaint of lazy journalism.

That's impressive, and it will guarantee the noise continues through 2011. All communication innovations tend to build this way, and Quora is the new kid on the block that will drive that disruption.

Why blogs are turning into newspapers and Quora is the future of journalism

MG Siegler wrote a post following our exchange on Twitter. I called him out because for the second time that day, I had logged into Quora only to see minutes later a TechCrunch post being Tweeted that was rehashing the original Quora discussion. Is this the future of journalism?

Blogging 3.0
Siegler wrote an eloquent post expanding on my original jibe that he was practising blogging 3.0 (I called it that because, over the years, Marshall Kirkpatrick would constantly joke that Twitter is what paid his rent). Now don't get me wrong: Quora is one of my favourite websites right now, and Siegler and Kirkpatrick are two of the more talented writers in the blogosphere. But it made me wonder: what's the role of the journalist in the world, and by implication, the news blogger?

For the bloggers out there who receive bonuses by getting headlines on Techmeme — what’s stopping Gabe Rivera (Techmeme’s founder) from simply importing the RSS feed of Quora posts and having its human editors headline the best answer? As Siegler points out, he (worryingly) already has. Given Quora responses are like blog posts and get aggregated into a community wiki-like answer summary, I can’t see why this won’t become a new input source for Techmeme, completely bypassing the traditional blogs.

And while we are on the topic: Julian Assange argues that Wikileaks is pioneering a new form of journalism, which he recently described in an editorial for The Australian as "scientific journalism". Scientific because you can read the source material in its naked form, or alongside an article that discusses it.

Source material is democratised
Journalists, it is said, are becoming curators of information. Siegler claims he retrieves information from an obscure source and amplifies it, which in turn gets broadcast by a bigger publisher like CNN. But if Quora democratises the source gathering (it's so obscure that everyone in Silicon Valley is on it, including billionaires like Steve Case, who co-founded AOL, and Mark Zuckerberg of Facebook), what's stopping me from "breaking" the apparent news? Or Rivera from doing a direct RSS import of the top answers, straight to his audience of thousands?

If the big blogs are so traffic-hungry that they rely on aggregators like Techmeme to feed their pageviews… and if this trend toward scientific journalism is being promoted, where journalistic bias adds colour to a source only if you want it (rather than the bias being the source of your information consumption), then one has to ponder: the evolution of journalism will come not from changes in journalistic style, but from changes in technology, an evolution where every single one of us can talk openly about the world in an applied way.

Siegler says this is business as usual for the bloggers, but I think it's business as usual for the disruption technology is generating in the news-making business. Disruption that will continue to favour those who tease out the source of news (as Quora, Twitter, and Wikileaks have) and those who curate it into an efficient form for consumption (aggregators such as Techmeme, Google News, and Digg).

The future of journalism resides with those that create the originating value: traffic or content
Before the Internet, newspapers were the sole source of information and so had an elevated role in society. Now they are being relegated to just one of many sources of news; once their disappearance was considered a horror, yet if they went bankrupt today they would not impact the world (as there are plenty of online mastheads to replace their value). As social media technologies continue to be refined, with participants curating the source material themselves, blogs will not disappear, just as newspapers won't entirely disappear. But their position in the world is far from guaranteed, as audience curation is being done better by the aggregators, and the source material is no longer proprietary to a journalist.

Billion dollar brainwaves

My Small Business | Tips & Advice For Small Business in Australia

David Wilson, a journalist for Fairfax, approached me the other month for my thoughts on Silicon Valley. The resulting interview appeared on Fairfax's online mastheads (which include The Age, the Sydney Morning Herald, and the Brisbane Times). The article focused on me more than I expected, but Wilson still captures some important lessons I've learned since moving here.

One lesson mentioned is the importance of not planning. I'm a big believer that you can't plan your life (or your business). To put it simply, using yesterday's information to make decisions about tomorrow is just not as effective as using the most recent information and reacting. Check out the very successful and intelligent Jason Fried of 37signals, who says something similar:

So it’s not about the big plan, it’s about a day by day by day by day and seeing where things go and just kind of making decisions as we go.

And the main reason why I think this is important is because people often make decisions with the wrong information. So they make decisions far into the future, based on information they have today. You're better off making decisions today based on information you have today because that's when you make your best decisions. You make your best decisions when you have the best information. That's always right now.

Another lesson I'm learning, but one I didn't mention, is that capitalism, effective capitalism, is brutal. It's something I've observed among successful business people in Australia and America, and I'm still trying to collect my thoughts about it. Take employees and their termination: several people have explained to me how difficult they've found doing it, even when it's for the good of the business.

To put this in context, loyalty and personality should not be confused with performance (which, really, is the point of employment). And if you're not performing (or the broader function you're part of isn't), there's the door. Brutal, I know, but what's more brutal is a business collapsing and everyone losing their jobs. Effective capitalism isn't about protecting an individual's 'entitlement' to a job; it's about evolving the entitlement of an enterprise so that it can continue to sustain itself.

My media consumption – three years on

I was reflecting on a conversation the other day in which I said I no longer read the news, a bizarre fact given that as a teenager and young adult I was a newspaper junkie. Certainly, things have changed, even since three years ago when I wrote about my media consumption.

And it’s true – I don’t read newspapers or many news sites anymore. But I’m actually better informed about the world now.

How so?

– My iPhone has improved my productivity. I'm constantly reading off it. It's an important distribution tool worth pointing out, and it's why I consume information the way I do now.
Current homescreen
– Like I did in 2007, I religiously check Techmeme every day, and increasingly Mediagazer. Both are icons on my iPhone's homescreen.
– Twitter and Facebook are a huge source of how I find out about things or come across interesting content. (Both are also on my phone's homescreen.)
– I am a subscriber to the geopolitical thinktank Stratfor, which tells me where the US navy is on a weekly basis, breaks major political news or dramatic calamities to me, and sends me essays full of perspective. I don't have time to read all the emails, but as with Techmeme, merely reading the headlines is enough to keep me on top of things. The interesting point is that this is premium analysis, the stuff the intelligence community and government policy makers subscribe to. It seems I've cut out the middleman (the newspaper journalists) and gone closer to the source of the original analysis. By implication, I've chosen the better analyser, and that has become my default news provider.
– I have the BNO News and Associated Press applications on my iPhone, which send me news alerts through the day via push notification. I also have the NY Times and WSJ apps, which I used religiously a year ago but for some reason no longer do. (Maybe because they are now buried in my iPhone's menu.)
– Recently, I changed my homepage from Techmeme to three homepages: my company's internal blog; OneRiot, which flags the top news shared through Twitter; and Techmeme. The addition of OneRiot has got me hooked these last few weeks: it's given me a great source of headline news and of 'useless' news, like celebrity gossip, that I don't normally seek. That's not to say I like celebrity gossip, but it fills my knowledge gaps about what's happening in the world and what other people are talking about.
– I no longer listen to the radio, the prime reason being that I don't have a car here in San Francisco. If the iPhone had a radio, I probably would – I usually have my headset in my ears all day at work, to help me focus.
– I am a paying subscriber to Pandora, the online music discovery service. (I’m listening to it right now as I write this post!) I prefer it not because my music collection is weak, but because I like being introduced to songs I might not normally know about.
– I have cable TV in my apartment (Comcast), but I never watch it. And when I do, it’s when I want to just switch off for a bit.

My current approach has gaps: for example, I am detached from Australian news. Regardless, it's proved an interesting point: I no longer have time to read newspapers like I did as a teenager. What's changed is the way I consume information, which allows me to consume more with less effort. I'm one of the busiest guys I know, but thanks to technology, I can be efficient with my time.

The best feature Facebook didn’t invent that it should invent now

Around 9.15pm last night, after my first rugby training for the year (and in America), I sat down at the bus stop right by the football field to catch a bus home. Playing on my iPhone, I noticed a woman walk past me and then run back. That's weird, I thought, and it raised my awareness levels. Then I noticed a hooded black kid approach the bus shelter from the back, entering from the left. I watched him turn and saw his arm rise, his jacket covering his hand. A second later, he pointed a gun right at the left temple of my head and mumbled: "ok man, hand it over".

Luckily, I got away with my wallet, phone – and life – intact. (I stood up, roared abuse at him, and he ran away – don't ask why I did what I did, but it worked!) Minutes later, I shared the news on my Facebook account:

Gun pulled to my head - status.

And I received a flood of comments, phone-calls and text messages over the next 24 hours. No ‘likes’ however.

The like feature
FriendFeed, a startup Facebook acquired last year, pioneered social media in the way people could collaborate and share information. One of its most brilliant innovations was the 'like' feature: the ability for a user reading something to acknowledge the content shared by another user. Rating systems are hard to get right; YouTube has said that the standard five-star rating system doesn't really work as intended. FriendFeed's simple but elegant approach took on a life of its own as a rating mechanism and more. Facebook implemented the feature, and I've been observing how my social circle has reacted to it; I've been startled at the way it's been used. Just as FriendFeed built a unique culture encouraged by this simple 'liking' activity, so too have Facebook's users developed unique kinds of behaviour. I'd argue it's become one of the key forms of activity on the site.

Australia trip like

So congrats, Facebook: you copied a feature and your users love it. Now how about you evolve this remarkably simple form of communication, which has become a powerful way for people to share information (it flags value, quantifies a kind of engagement, and adds an additional level of communication to the originating message)? How about a dislike feature? Do you think people would use that?

My friend Marty responded to my gun incident with the following:

Facebook | dislike button

And he wasn't the only one. My friend Kyle, who responded first, said:

Facebook | dislike by kyle

Despite being an engaging piece of content that popped up on my friends' homescreens, there were no 'likes'. It just didn't seem appropriate. It's like when you can't speak a foreign language fluently but want to communicate a message: the lack of this feature prevented additional communication.

Facebook | dislike button placed here

Social media is here to stay and is having a remarkable impact on our world. If, by definition, it's about connecting people and communicating with each other, let's evolve the way people can express their thoughts beyond simple text. It will lead to a more interactive, engaging, and far richer experience. This post may seem trivial, like advocating we create a new word to communicate a frivolous concept, but as with language, we gain richness from the diversity of ways we have to express ourselves.

The Information Value Chain and its Network

The Information Value Network is an economic theory for Internet businesses that incorporates my original thinking on the information value chain. It describes how data openness, interoperability, and data portability allow for greater value creation for both service providers and their users. I propose it here, inspired by two existing theories: David Ricardo's 1817 thesis of comparative advantage and Michael Porter's 1985 concept of the value chain.

The theory on information value-chains and networks
Information Value Chain
Figure 1: Information Value Chain

The information value chain recognises the value activities required in the use of information. It represents the cycle of a common information product, with the activities most likely undertaken by one entity.

The activities can be broken down into two components within the value chain.
1) Primary value activities relate to aspects of the chain that are the core information product. They are data creation, information generation, and knowledge application.
2) Supporting value activities relate to aspects of the chain that assist the core information product. They are storage, processing, and distribution.

As an example of the above, a photo can be a core information product, with a single image being "data". Adding EXIF data, titles, and tags creates information, as it unlocks additional value in the context of the core information product (the photo).

Knowledge is created when the photo is clustered with other similar photos, such as a collection of photos from the same event. Each information product may present its own information value, but in the context of each other they reveal a story of the period when the pictures were taken, unlocking additional value.

The secondary activities of storage, processing, and distribution are integral to the information product. However, they are merely processes that assist in the development of the product and as such are not to be considered core activities.

Another point to note is that these secondary processes can occur at any of the three stages of the information process. Computer processing is required when a photo is taken (data creation), when it is edited with additional information like a title (information), and when it is grouped with other photos with similar characteristics (knowledge). Similarly, cloud or local storage is required at any of those three stages of the information product, with distribution necessary at any stage as well.
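The chain can be sketched in a few lines of code. This is a minimal illustration with hypothetical names (nothing here is from the theory itself): raw pixels are the data, attached metadata is the information, and event clustering is the knowledge.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    pixels: bytes                             # data: a raw object, meaning-poor alone
    meta: dict = field(default_factory=dict)  # information: context added to the data

def add_information(photo: Photo, title: str, tags: list, exif: dict) -> Photo:
    """Information generation: attach context (title, tags, EXIF) to raw data."""
    photo.meta.update({"title": title, "tags": tags, **exif})
    return photo

def cluster_by_event(photos: list) -> dict:
    """Knowledge application: group related photos so together they tell a story."""
    knowledge = {}
    for p in photos:
        knowledge.setdefault(p.meta.get("event", "unknown"), []).append(p)
    return knowledge

# Two photos from the same (made-up) event combine into one knowledge product.
a = add_information(Photo(b"..."), "Keynote", ["conference"], {"event": "LaunchDay"})
b = add_information(Photo(b"..."), "Demo", ["conference"], {"event": "LaunchDay"})
albums = cluster_by_event([a, b])
print(len(albums["LaunchDay"]))  # 2
```

The supporting activities (storage, processing, distribution) would sit around each of these calls without changing what the core product is.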

Information Value Network
Figure 2: Information Value Network

Whereas the information value chain describes the activities of an information product, it does not acknowledge the full environment of an information product. Information is an intangible good that is utilised by humans (and with increased sophistication over time, by machines) to assist in their own internal thinking. It does not live in isolation, and its presence alongside other information products and their value development cycles can have a huge impact.

In the diagram above, the information value chain has been extended when looking at the context of multiple entities.

In the network, several entities may agree to exchange information products created through their own respective activities, in order to add value to each other. Information and knowledge both derive their value from having as many sources as possible, whether raw data sources or processed data in the form of information.

Extending the photo example used earlier, another entity may have created an information product relating to geolocation: it has acquired the geo-coordinates of regions, presented them in the appropriate geo standards, and placed them on a map. The owner of the activities that generated the photos can match their geodata to this other entity's product and have the photos mapped by location, as well as perform analysis or specific types of visualisation made possible by proximity to other photos.
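As a sketch of that exchange (with made-up coordinates and region names), matching one entity's photo geodata against another entity's region dataset unlocks value neither product had on its own:

```python
# Entity A's product: photos carrying geo-coordinates (information).
photos = [
    {"title": "Bridge", "lat": -33.85, "lon": 151.21},
    {"title": "Wharf",  "lat": -33.87, "lon": 151.20},
    {"title": "Pier",   "lat": 37.81,  "lon": -122.41},
]

# Entity B's product: named regions with (min, max) bounding boxes.
regions = [
    {"name": "Sydney",        "lat": (-34.1, -33.5), "lon": (150.5, 151.5)},
    {"name": "San Francisco", "lat": (37.6, 37.9),   "lon": (-122.6, -122.2)},
]

def map_by_region(photos, regions):
    """Combine the two products: place each photo inside a named region."""
    out = {}
    for p in photos:
        for r in regions:
            if (r["lat"][0] <= p["lat"] <= r["lat"][1]
                    and r["lon"][0] <= p["lon"] <= r["lon"][1]):
                out.setdefault(r["name"], []).append(p["title"])
    return out

print(map_by_region(photos, regions))
# {'Sydney': ['Bridge', 'Wharf'], 'San Francisco': ['Pier']}
```

Neither entity had to build the other's chain; the exchange alone creates the new knowledge product (photos by place).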

Background to the concepts supporting the theory
Comparative advantage
The law of comparative advantage in international trade states that if a country is more productive at producing one good than another country is, it should focus on allocating its resources to that production. Further, if a country has an absolute advantage in producing multiple goods, it should focus only on the one yielding the most productive capacity.

By specialising in the products where it has the higher comparative advantage, even if it is, across the board, the most efficient at producing them all, a country allows the world to expand total output from the same quantity of resources through specialisation.
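Ricardo's point can be checked with a few lines of arithmetic. The numbers below are the standard textbook wine-and-cloth illustration, not figures from this essay: Portugal needs fewer labour hours for both goods (absolute advantage in both), yet specialisation still yields more of each.

```python
# Labour hours needed to produce one unit of each good (textbook numbers).
hours = {
    "Portugal": {"wine": 80,  "cloth": 90},   # absolute advantage in both goods
    "England":  {"wine": 120, "cloth": 100},
}

# Labour available: enough for each country to make one unit of each good,
# so without trade the world total is 2 wine and 2 cloth.
labour = {country: sum(goods.values()) for country, goods in hours.items()}

# With specialisation, each country puts all its labour into the good
# where it forgoes the least (its comparative advantage):
wine  = labour["Portugal"] / hours["Portugal"]["wine"]   # 170 / 80  = 2.125
cloth = labour["England"]  / hours["England"]["cloth"]   # 220 / 100 = 2.2

print(wine, cloth)  # 2.125 2.2, more of both goods from the same resources
```

The same resources now produce 2.125 wine and 2.2 cloth instead of 2 and 2, which is the whole argument in miniature.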

Value chain
A value chain analysis describes the activities that take place in a business and relates them to an analysis of the business's competitive strength. It is one way of identifying which activities are best undertaken by a business and which are best provided by others (i.e., outsourced).

It helps a company look at what its core competitive advantage is, and segments the activities surrounding that advantage, in order to realise efficiencies and better value creation.

Data, information, and knowledge
Data can be defined as an object that represents something. Typically data lacks meaning, although it derives meaning when context is added.

Information, on the other hand, is what emerges when different data objects are connected; the linkages between data objects are the information. Meaning is derived through the context of the data.

Likewise, knowledge is the next extension in this chain of development: the application of information in the context of other information.

Comment on the economic incentive for firms
Industries that operate with the purpose of generating, managing or manipulating information products will benefit by working with other like organisations. Doing so reduces cost, increases engagement, and, more fundamentally, increases total value creation.

Cost
By focusing on what an entity has a comparative advantage in and identifying its true competitive advantage, it can direct its resources to the activities that ultimately maximise the entity’s own value.

Take as a case in point a photo-sharing website that aims to be both a storage facility (ie, “unlimited storage”) and a community site.

  • Feature development: Development resources will face competition to build functionality for the photo service, catering for two completely different purposes. This leads to opportunity cost in the short term, and potentially the long term if operating in a highly competitive market.
  • Money: Any resource acquisition, whether external spending or internal allocation, faces conflict as the company attempts to win at two different types of business.
  • Conflict of interest: The decision makers at the company do not have aligned self-interest and face conflict. For example, if a user puts their photos in a pure storage service, management will do what it can to maximise that core value. If the company also runs a community, management may trade storage value (such as privacy) for the benefit of building the other side of the business.

Engagement
In the context of web services, user engagement is a key priority. Economic value can be derived by a service through attention, conversion, or simply a satisfied customer through the experience offered.

If a service provider focuses on its core competency, value can be maximised both for a user’s engagement and for the provider’s margin.

A commerce site aims to convert users and make them customers through the purchase of goods. Commerce sites rely on identity services to validate the authenticity of a user, but it’s not part of their core value offering. In the case of one business, the web designers took away the Register button. In its place, they put a Continue button with a simple message: “You do not need to create an account to make purchases on our site. Simply click Continue to proceed to checkout. To make your future purchases even faster, you can create an account during checkout.”

The results: the number of customers purchasing went up by 45%. The extra purchases resulted in an extra $15 million in the first month. For the first year, the site saw an additional $300,000,000.

The significance is that by attempting to manage multiple aspects of their users’ experience, this business actually lost potential business. Had they integrated their commerce site with an identity service experienced in user login, they might have leveraged this expertise a lot earlier and minimised the opportunity cost.

Value creation
Continuing the example of a photo, let’s assume multiple services work together using the same photo, and that there is full peer-to-peer data portability between the services.

The popular social-networking site Facebook described a technique that sped up the time it took to serve photos to users. In a blog post, they state that by removing the EXIF data — metadata that describes the photo (like location, shutter speed, and more) — they were able to decrease the load on the servers that produce the photo.

This is fine in the context of Facebook, where the user experience is key and speed matters. But if a person uploaded their photos there as their only copy, and they wanted to use the same photos in a Flickr competition — whose community of passionate photographers applies different criteria to the photos — they would be at a loss.

In a world that has true data portability, the photos (say the RAW images) could be stored on a specialised storage solution like Amazon S3. The online version of Photoshop could edit the RAW images to give an enhanced-quality image for the Flickr community; whereas Google App Engine could be used for mass editing that is compute-intensive, in order to process the multiple RAW photos into EXIF-stripped images for distribution within Facebook. The desktop application Cooliris could access the newly edited photos that still retain their EXIF data, and have them visualised in its proprietary software, which gives a unique experience of viewing the information product.
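As a sketch of how this separation of concerns might look in code (the structure and names here are my own hypothetical illustration, not any of these services' actual systems): the original photo, EXIF and all, lives in a specialised archive, while a stripped copy is produced for the speed-focused service, without the original ever losing its metadata.

```python
def store_original(photo, archive):
    """Keep the full-fidelity copy in a specialised store (an S3-style role)."""
    archive[photo["id"]] = photo

def strip_exif(photo):
    """Produce a lean copy for fast serving (a Facebook-style role)."""
    return {k: v for k, v in photo.items() if k != "exif"}

photo = {
    "id": "p1",
    "pixels": b"\x89...",  # stand-in for the real image bytes
    "exif": {"gps": (51.5, -0.1), "shutter": "1/250"},
}
archive = {}
store_original(photo, archive)
serving_copy = strip_exif(photo)

assert "exif" not in serving_copy   # fast to serve
assert "exif" in archive["p1"]      # full original survives for Flickr et al.
```

Each service touches the same information product for a different purpose; none needs to hold the only copy.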

The significance of the above example is that each service is using the same core information product, but for a completely different purpose. On the surface, all services would appear to be competing for the information product and to “lock in” the user to only use it on their service. But the reality is that better value can be generated with peered data portability. And in some cases, greater efficiencies are realised — allowing the web services to focus on what their true comparative advantage is.

Comment on value-creation versus value-capture
This paper explicitly explains how value is generated. It does not, however, explain how that value can be captured by firms.

It is beyond the scope of this particular discussion to detail how value capture can occur, although it is an important issue that needs to be considered. Web businesses have repeatedly proven to struggle to monetise effectively.

This, however, is more an industry issue than one specific to openness, and this paper makes the case for firms to focus on their core competitive advantage rather than on how to monetise it. Instead it suggests that more firms can monetise, which causes total economic output to increase. How that output is shared amongst market participants is a matter of execution and specific market dynamics.

An invention that could transform online privacy and media

The University of Washington today announced an invention that allows digital information to expire and “self-destruct”. After a set time period, electronic communications such as e-mail, Facebook posts, Word documents, and chat messages would automatically be deleted and become irretrievable. Not even the sender would be able to retrieve them, and any copy of the message (like backup tapes) would also have the information unavailable.


Vanish is designed to give people control over the lifetime of personal data stored on the web or in the cloud. All copies of Vanish encrypted data — even archived or cached copies — will become permanently unreadable at a specific time, without any action on the part of a person, third party or centralised service.

As the New York Times notes, the ability to destroy digital data is nothing new. However, this particular implementation uses a novel approach that combines a time limit with, more uniquely, peer-to-peer file sharing that degrades a “key” over time. It’s been made available as open source for the Mozilla Firefox browser. Details of the technical implementation can be found in the team’s press release, which includes a demo video.
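The mechanism can be modelled with threshold secret sharing: the encryption key is split into many shares scattered across a peer-to-peer network, and once natural churn destroys enough shares that fewer than the threshold survive, the key (and with it the data) becomes unrecoverable. Below is a minimal sketch of Shamir's secret sharing in that spirit; it is my own illustration, not Vanish's actual implementation.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a ~126-bit secret

def split(secret, n, k):
    """Split `secret` into n shares; any k of them recover it."""
    # A random degree-(k-1) polynomial whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den.
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret = 123456789
shares = split(secret, n=8, k=3)   # scatter 8 shares; any 3 suffice
```

If the network "forgets" all but two shares, the secret is gone for good; while three or more survive, it is fully recoverable.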


Implications
Advances like this could have a huge impact on the world, from controlling unauthorised access to information to reinforcing content creators’ copyright. Scenarios where this technology could benefit:

  • Content. As I’ve argued in the past, news derives its value from how quickly it can be accessed. However, legacy news items can also have value as an archive. By controlling the distribution of unique content like news, publishers have a way of controlling usage of their product – so they can subsequently monetise the news if it is used for a different purpose (ie, companies researching the past for information, as opposed to being informed by the latest news for day-to-day decision making)
  • Identity. Over at the DataPortability Project, we are putting the finishing touches on our conceptual overview for a standard set of EULAs and ToS that companies can adopt. This means having companies respect your rights to your personal information in a standardised way – think of what Creative Commons has done for your content creations. An important conceptual decision we made is that a person should have the right to delete their personal information and content – as true portability of your data is more than just reusing it in a different context. Technologies like this allow consumers to control their personal information even though they may not have possession of it, as their data resides in the cloud.
  • Security. We communicate with each other to inform one another in the ‘now’. This new world, with the Internet capturing all of our conversations (such as chat logs and email threads), is causing us to lose control of our privacy. The ability to have chat transcripts and email discussions automatically expire is a big step forward. Better still, if a company’s internal documents are leaked (as was the case with Twitter recently), it has more avenues to limit damage beyond relying on court-issued injunctions.


There’s a lot more work to be done on technologies like this. Implementation issues aside, the inline encryption of the information doesn’t make this look sexy. But with a few user interface tweaks, it gives us a strong insight into real solutions for present-day problems of the digital age. Even if we simply get companies like Facebook, Google, Microsoft and Yahoo to agree on a common standard, it will transform the online world dramatically.

The information age is still filling up its rocket with fuel

Today, the Wall Street Journal published an article by a fund manager who suggested the Internet is now dead in terms of high growth. While I can respect the argument from the financial point of view (although he’s still wrong), it also shows how unsuspecting even the educated are of the transformation the Internet is preparing for us. Yes, ladies and gentlemen – we ain’t seen nothing yet.

But I won’t get into the trends right now that are banging around my head, making me willing to change careers, country and life to position myself for the future opportunities. Let’s instead start with his core thesis:

The days of infinite margins, 1,000% productivity gains, and growth of market throughout the universe are long over. Internet companies now should be treated, at best, like utility companies that get bought at about 10 times earnings and sold at 13 times earnings. Even then, I’m not sure I would give the Internet sector the same respect as the monopoly-protected utility sector.

I am glad that was said, because this points to a broader, worldwide problem that led us into the Global Financial Crisis (GFC). The ridiculous false economy generated over decades of speculative growth – where fundamental asset values were supported by unreal cash – is something we need to stop. The best thing the GFC has taught us is that valuations need to be supported by independent cash flows, with markets not manipulated to inflate their true value. And I can’t wait to see the technology sector (along with its partners in crime in banking and property) use some basic accounting skills and come to the rude awakening that, in the real world, that’s how things roll.

Where he is wrong, however, is in the innovation that is creating new ways of generating revenue. More importantly, what we are seeing is the stabilisation of technologies invented half a century ago. The Internet and hypertext (the web is an implementation of a hypertext system) have been in development for 50 years – and it’s only *now* that we are coming to grips with the change. So to say this is a fad that’s now over is really ignoring the longer-term trends at work.

As identified in the article, the biotech market will be massive, but the head of the PwC Technology park, Bo Parker, told me in March 2009 that it only just resembles information technology in the 1970s. However, when it comes to information, things are ramping up for a lot more, as the industry has had much more time to evolve.

Where do I see things going? Oh man, let’s get a beer and talk about it. Data portability, the Semantic Web, VRM, Project Natal, the sixth sense, augmented reality – try those to get your imagination started. I call it the age of ubiquity: ubiquitous connectivity, ubiquitous computing, ubiquitous information – where we have those separate things accessible anywhere and everywhere, and when combined they will change our lives. Information and communications, after all, are a fundamental aspect of being human that underlies everything we do – and so the impact will be more broadly applicable, obvious, and transformative.

Where’s the money in that? Are you kidding me?! The question is not how many dollars these changes can generate, but how many new industries they will spawn. We seriously don’t know what’s about to hit us in the next two decades of information technology, and clearly, neither do the fund managers.

The business model of APIs

Application Programming Interfaces – better known in the technology industry as APIs – have emerged as one of the most significant innovations in information technology. What at first appears to be a geeky technique for developers to play with is now evolving into something that will underpin our very society (assuming you accept that information has been, is, and will be the crux of our society). This post explores the API and what it means for business.


What is it?
In very simple terms, an API is a set of instructions a service declares that outsiders can use to interact with it. Google Maps has one of the most popular APIs on the Internet and provides a good example of their power. Google hosts terabytes of data relating to its mapping technology, and it allows developers not affiliated with Google to build applications on top of Google’s. For example, thousands of websites like NYTimes.com have integrated Google’s technology to enhance their own.
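To see what "a set of instructions a service declares" looks like in practice, here is a hedged sketch of building and parsing a geocoding-style call. The endpoint, parameters, and response fields below are hypothetical illustrations, not Google's actual API.

```python
import json
import urllib.parse

def build_geocode_request(address, base="https://maps.example.com/api/geocode"):
    """The API contract fixes the request shape callers must send."""
    query = urllib.parse.urlencode({"address": address, "format": "json"})
    return base + "?" + query

def parse_geocode_response(body):
    """...and the response shape callers can rely on."""
    data = json.loads(body)
    return data["lat"], data["lng"]

url = build_geocode_request("1 Main St, Sydney")
# A response of the shape the documented contract promises:
lat, lng = parse_geocode_response('{"lat": -33.86, "lng": 151.21}')
```

The declared contract is the whole point: any outsider who follows it can build on the service without ever seeing its internals.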

An example more familiar to ordinary consumers would be Facebook applications. Facebook allows developers, through an API, to create ‘apps’ that have become one of the main sources of entertainment on Facebook, the world’s most popular social networking site. Facebook’s API determines how developers can build apps that interact with Facebook and what commands they need to specify in order to pull out people’s data stored in Facebook. It’s a bit like a McDonald’s franchise – you are allowed to use McDonald’s branding, equipment and supplies, so long as you follow the rules of being a franchisee.

APIs have become the centre of the mashup culture permeating the web. Different websites can interact with each other – using each other’s technology and data – to create innovative products.


What incentive do companies have in releasing an API?
That’s the interesting question I want to explore here. It’s still early days in the world of APIs, and a lot of companies seem to offer them for free – which seems counter-intuitive. But on closer inspection, it might not be. Free or not, web businesses can create opportunity.

Free doesn’t mean losing
An API that’s free can generate real economic value for a new web service. For example, Search Engine Optimisation (SEO) has become a very real factor in business success. Ranking as a top result on the major search engines generates free marketing for new and established businesses.

In order to boost their SEO rankings, one of the things companies need is a lot of other websites linking to them. And therein lies the value of an open API. By allowing other people to interact with your service and requiring some sort of attribution, an open API enables a business to boost its SEO dramatically.

Scarcity is how you generate value
One of the fundamental laws of economics is that to create value, you need something to be scarce. (That’s why the supply of cash is tightly controlled by governments.) Twitter, the world’s most popular micro-blogging service, is famous for the applications that have been built on its API (with over 11,000 apps registered). And earlier this year, they really got some people’s knickers in a knot when they decided to limit usage of the API.

Which in my eyes was sheer brilliance by the team at Twitter.


By making their API free, they’ve had hundreds of businesses build on top of it. Once it became popular, they could never just shut the API off and start charging for access – but by introducing some scarcity, they’ve done two very important things: they are managing expectations for the future ability to charge for additional access to the API, and they are creating the ability to generate a market.

The first point is better known in the industry as the freemium model. It’s become one of the most popular and innovative revenue models of the last decade on the Internet: one where it’s free for people to use a service, but they need to pay for the premium features. Companies get you hooked on the free stuff, and then make you want the upgrade.

The second point I raised, about Twitter creating a market, is because they created an opportunity similar to the mass media approach. If an application dependent on the API needs better access to the data, it will need to pay for that access. Or why not pay someone else for the results it wants?

Imagine several Twitter applications that every day calculate a metric – one that eats their daily quota like there’s no tomorrow – but, given it’s a repetitive standard task, doesn’t require everyone to do it. If one application of, say, a dozen could generate the results, it could then sell them to the other 11 companies that want the same output. Or perhaps Twitter could monitor applications generating the same requests and sell the results in bulk.

That’s the mass media model: write once, distribute to many. And sure, developers can use up their credits within the limit… or they can instead pay $x per day to get the equivalent information pre-computed. By limiting the API, you create an economy based on requests (where value comes through scarcity) – either pay for a premium API that gives high-end shops more flexibility, or pay for shortcuts to pre-generated information.
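Rationing requests like this is commonly implemented with a token bucket: each app gets a bucket of request tokens that refills at a fixed rate, and calls beyond the available tokens are refused until the bucket refills. A minimal sketch (my own illustration, not Twitter's actual limiter):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter of the kind an API provider
    could use to ration requests per client."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; otherwise refuse the request."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With no refill, a capacity-3 bucket admits exactly three calls.
bucket = TokenBucket(rate=0.0, capacity=3)
results = [bucket.allow() for _ in range(4)]
```

The `rate` and `capacity` knobs are exactly where the scarcity (and hence the pricing tiers) lives: a premium plan is just a bigger, faster-filling bucket.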


APIs are part of the information value chain
An economic concept I proposed a year ago (and will revise over the coming year with some fresh thought) is the Information Value Chain. It takes an established economic theory that dictated business in the industrial age, and applies it in the context of businesses whose products are information or computing utility.

With reference to my model, an API offers a company the ability to specialise at one stage of the value chain. The processing of data can be a very intensive task, requiring computational resources or raw human effort (like a librarian’s taxonomy skills). Once this data is processed, a company can sell the output to other companies, who will generate information and knowledge that they in turn can sell.

I think this is one of the most promising opportunities for the newspaper industry. The New York Times last year announced a set of APIs (the first one being campaign finance data) that allows people to access data about a variety of issues. Developers can then query an API and generate unique information. It’s an interesting move, because the computer scientists might have found a future career path for journalists.

Journalists’ skills in accessing sources, determining the significance of information, and conveying it effectively are being threatened by the democratisation of information the Internet has brought. But what the NY Times API reflects is a new way of creating value – one that takes more of a librarian’s approach. Rather than journalism being story-centric, its future may be data-based, which is a lot more exciting than it sounds. Journalists yesterday were the custodians of information, and they can evolve that role into one of data instead. (Different data objects connected together is, by definition, what creates information.)
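That parenthetical point can be made concrete. Linking raw data objects is what yields information: with toy records (hypothetical, not the NY Times' actual schema), a simple join already answers a question no single record contains, such as which donors back multiple candidates.

```python
# Hypothetical records of the kind a campaign-finance API might return.
donations = [
    {"candidate": "A", "donor": "Acme Corp", "amount": 5000},
    {"candidate": "A", "donor": "Jane Doe", "amount": 200},
    {"candidate": "B", "donor": "Acme Corp", "amount": 5000},
]

# Link the data objects: group candidates by donor.
by_donor = {}
for d in donations:
    by_donor.setdefault(d["donor"], set()).add(d["candidate"])

# The linkage itself is the information: donors backing more than one candidate.
multi_candidate_donors = {donor for donor, cands in by_donor.items()
                          if len(cands) > 1}
```

No individual donation record says "Acme Corp hedges its bets across candidates"; only the connections between records do.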

A private version of the semantic web and a solution for data portability
The semantic web is a vision by the inventor of the World Wide Web which, if fully implemented, will make the advances of the Internet today look like prehistory. (I’ve written about the semantic web before as an introduction for those new to the subject or skeptical of it.) But for those who do know of it, you are probably aware of one problem and less aware of another.

The obvious problem is that the semantic web is taking a hell of a long time to happen. The not-so-obvious problem is that it pushes for all data and information to be public. The advocacy of open data has merit, but by constantly pushing this line, it gives companies no incentive to participate. Certainly, in the world of data portability, the public availability of your identity information is scary stuff for consumers.

Enter the API.

APIs offer companies the ability to release the data they have generated in a controlled way. They can create interoperability between different services in the same way the semantic web vision ultimately wants, but because access is controlled, they can overcome the barrier that all data must be open and freely accessible.

Concluding thoughts
This post only touches on the subject. But it hopefully makes you realise the opportunities created by this technological advance. It can help create value without needing to outlay cash; open new monetisation opportunities for business; add value to society through specialisation; and bootstrap the more significant trends in the Web’s evolution.
