Frequent thinker, occasional writer, constant smart-arse


The change brought by the Internet is a correction

I was sitting at a restaurant with Mick Liubinskas of Pollenizer the other week, whom I regard as one of the best minds in the Australian tech scene. In a previous life, Mick ran marketing at Kazaa, the music industry’s anti-Christ during the early 2000s. Kazaa was one of the higher-profile peer-to-peer technologies that made the distribution of music so widespread on the Internet.

I told Mick that one of the things that plagues my thinking is trying to work out the future business models for content. Naturally, we ended up talking about the music industry, and he explained to me the concept of Soft DRM, which he thought was one avenue for the future but which the record labels rejected at the time.

DRM

DRM, or Digital Rights Management, is the attempt by companies to control the distribution of digital content. Hard DRM places controls over access, copying and distribution, while Soft DRM does not prohibit unauthorised actions but merely monitors a user’s interaction with the content.

The basic difference is that Hard DRM protects copyrights by preventing unauthorised actions before the fact, while Soft DRM protects copyrights by giving copyright owners information about infringing uses after the fact.
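
As a rough illustration of that before-the-fact versus after-the-fact distinction, here is a conceptual sketch – not any real DRM system, and every name in it is invented. Hard DRM refuses the action up front; Soft DRM permits it but keeps a record for later enforcement.

```python
# Conceptual sketch only -- not any real DRM system. All names invented.

audit_log = []  # Soft DRM: a record to be reviewed after the fact

def hard_drm_play(track, licences):
    """Hard DRM: refuse the action up front unless a licence exists."""
    if track not in licences:
        raise PermissionError("no licence - playback refused")
    return "playing " + track

def soft_drm_play(track, user):
    """Soft DRM: always allow the action, but log who did what."""
    audit_log.append((user, track))
    return "playing " + track

print(hard_drm_play("song.mp3", {"song.mp3"}))  # allowed: licence held
print(soft_drm_play("song.mp3", "elias"))       # allowed regardless, but logged
```

The restaurant analogy below works the same way: nothing physically stops you leaving without paying, but there is a record of your visit.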

As I questioned Mick on this, he compared it to us sitting in that restaurant. What’s stopping either of us from getting up and not paying the bill? The restaurant lets us sit, serves us food – and only at the end do we pay for the service.

Hard DRM is not congruent with our society
Part of the music industry’s problem is that they’ve focused too much on Hard DRM. And that’s wrong. They could get away with it in the past, because that’s how the world worked with controlled distribution lines – but that world no longer exists now that the uncontrollable Internet is here.

In a restaurant, like any other service industry, the risk that you don’t get paid is real but not big enough to prevent it from operating. Our social conventions are what make us pay that bill, even though we have the ability to avoid it.

To insist on the Hard DRM approach is to go against how the rest of the western world works. Our society is philosophically based on the principle of innocent until proven guilty. Likewise, you pay after a service has been rendered – and you pay for something that has unique value (only scarcity is rewarded). The control the media world once had was unique among industries – but unique purely due to technological limitations, not because it was genuinely better.

The record companies (not the artists) are hurting
Artists practically sell their soul to get a record deal, and make little money from the actual albums themselves. This change in music is really a threat to the century-old record company model: the Internet has broken the record companies’ distribution power, and their marketing ability is now dwarfed by the potential of social media.

Instead of reinventing themselves, they wasted time by persisting with an old model that worked in the industrial age. They should have been reflecting on what value people will pay for, and working out the things that are better than free. Unfortunately, the entire content business – movies, television, radio, magazines, newspapers, books and the rest – have made similar mistakes.

The Internet is transforming our world, and every object in our lives will one day be connected. In some ways, the great change brought about by the Internet is actually a step back to how things used to be (as it is for music, where the record model was an anomaly in our history). Even the concept of a “nation state” is a 20th-century experiment pushed after the First World War; for our entire history prior to that, our world was governed by independent cities or empires that ruled over multiple ethnic nations. The Internet is breaking down the nation-state concept – and good riddance, because it has complicated our lives.

Future

We need to clear the whiteboard and start fresh. The Internet is only going to get more entrenched in our world, so we must re-engineer our views of the world to embrace it. With content, distribution was one of the biggest barriers to entry in those industries – and now it has been obliterated. Business models can no longer rely on that.

We should not let the old world drive our strategies for business because the dynamics have changed completely. If you are looking to defend yourself against an oncoming army – stop polishing the sword and start looking for the bullets to put in the machine gun.

The evolution of news and the bootstrapping of the Semantic Web

The other month (as in, one of those months where I am working 16-hour days and don’t have time to blog), I read in amazement about a stunning move made by the New York Times: the announcement of its first API, where you could query campaign finance data. It turns out this wasn’t an isolated incident, as evidenced by yet another API release, this time for movies, with plenty more to come.

Fake New York Times newspaper

That is massive! Basically, using the same data, people will be able to create completely different information products.

I doubt the journalists toiling away at the Times have any idea what this will do to their antiquated craft (validating that to get the future of media, you need to track technology). As the switched-on Marshall Kirkpatrick said in the above-linked article for ReadWriteWeb: "We believe that steps like this are going to prove key if big media is to thrive in the future."

Hell yeah. The web has now evolved beyond ‘destination’ sites as a business model. News organisations need to harness the two emerging business models – platforms and networks. Whilst we’ve seen lots of people trying the platform model (as aggregators – after all, that is what a traditional newspaper has been in society), this is the first real example I have seen of the heritage media doing the network model. The network model means your business thrives by people using *other* people’s sites and services. It sounds counterintuitive, but it’s the evolution of the information value chain.
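
To see why the network model matters: once data is exposed through an API, anyone can remix it into a different information product. A minimal sketch – the JSON shape below is hypothetical (the real NYT APIs return far richer records), and the names are made up:

```python
import json
from collections import defaultdict

# Hypothetical response shape -- the real API returns richer records.
response = json.loads("""
{"results": [
  {"candidate": "Smith", "amount": 500},
  {"candidate": "Jones", "amount": 1200},
  {"candidate": "Smith", "amount": 300}
]}
""")

# Same data, different information product: donation totals per candidate.
totals = defaultdict(int)
for record in response["results"]:
    totals[record["candidate"]] += record["amount"]

print(dict(totals))  # {'Smith': 800, 'Jones': 1200}
```

The point is that the aggregation happens on *someone else’s* site, yet the Times still sits at the centre of the value chain as the data source.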

This will certainly make Sir Tim Berners-Lee happy. The Semantic Web is a vision that information on the web is machine readable so that computers can truly unleash their power. However this vision is gaining traction very slowly. We will get there, but I am wondering whether the way we get there is not how we expect.

The New Improved Semantic Web: now with added meaning!

These APIs, which allow web services’ data to be reused in a structured way, may be just what the Semantic Web needs to bootstrap it. There’s an assumption in the vision that for it to work, all data needs to be open and publicly accessible. The economics are just not there yet for companies to unlock their data, and my work this year with the DataPortability Project has made me realise that to get value out of your data you simply need access to it (which doesn’t necessarily mean public data).

Either way, for me this was one of the biggest news events of the year – and one that has very quietly passed by. This will certainly be something worth tracking in 2009, as we see the evolution of not just the Semantic Web, but also Social Media.

The future of journalism and media

Last week, Deep Throat died. No, not the porn actress, but the guy who was effectively in operational control of the FBI during the Nixon years. Mark Felt was in line to run the FBI from his number-three position, but was passed over by Nixon, who brought in an outsider. Whilst people often remark that the Russian government is controlled by the intelligence services, it’s worth reflecting that the poster-child of the free world has its own domestic intelligence service wielding too much power over its presidents. Nixon broke tradition for the first time in 48 years, doing something other presidents couldn’t do: appointing an outsider to run the agency. And therein lie the roots of his downfall, in one of the most dramatic episodes in the mass media’s history – a newspaper brought about the downfall of one of the most powerful men in the world.

Felt’s identity was protected for decades, and was only made public three years ago – arguably because someone else was going to expose him and he beat them to it. From an interesting article by George Friedman at Stratfor:

Journalists have celebrated the Post’s role in bringing down the president for a generation. Even after the revelation of Deep Throat’s identity in 2005, there was no serious soul-searching on the omission from the historical record. Without understanding the role played by Felt and the FBI in bringing Nixon down, Watergate cannot be understood completely. Woodward, Bernstein and Bradlee were willingly used by Felt to destroy Nixon. The three acknowledged a secret source, but they did not reveal that the secret source was in operational control of the FBI. They did not reveal that the FBI was passing on the fruits of surveillance of the White House. They did not reveal the genesis of the fall of Nixon. They accepted the accolades while withholding an extraordinarily important fact, elevating their own role in the episode while distorting the actual dynamic of Nixon’s fall.

Absent any widespread reconsideration of the Post’s actions during Watergate in the three years since Felt’s identity became known, the press in Washington continues to serve as a conduit for leaks of secret information. They publish this information while protecting the leakers, and therefore the leakers’ motives. Rather than being a venue for the neutral reporting of events, journalism thus becomes the arena in which political power plays are executed. What appears to be enterprising journalism is in fact a symbiotic relationship between journalists and government factions. It may be the best path journalists have for acquiring secrets, but it creates a very partial record of events — especially since the origin of a leak frequently is much more important to the public than the leak itself.

Now consider my own experiences as an amateur journalist.

After several years of failed media experiments, my university enterprise at changing student media (I ran it as a society, not as a company, because I wanted to treat it as my "throw-away" startup – a chance to learn without being tied down when I left) suddenly hit the gold mine: we created an online weekly "news digest" that literally became the talk of the campus for those in the university administration and the people surrounding it. An elite audience – not the 40,000-strong University of Sydney crowd, but the several hundred people that theoretically represented the campus and ran the multi-million-dollar student infrastructure. Across the 23 editions we created that year, we literally had people hanging off their seats for the next edition: trying to predict the new URL, with e-mails quoting it sent out within hours of publishing.

The News Digest, October 29th 2004.

It was interesting to watch how the product evolved during its first year. I started it thinking it would be a cool thing to have a summary of the news, once a week, in a "digest" format. The news was split arbitrarily into student, Australian and international. However, within a few editions the student news segment was no longer just about the latest party, but about confidential information – and that became the core reason why people read it. In the second edition I wrote:

USYD UNION: Chris Farral has been hired as the Union’s new General Manager. Farral has a highly reputable background in the ABC and various community-based groups. It has been a decade since the Union’s last General Manager was appointed, and as such we hope Farral will bring a new flair and vitality to the position. Chris also happens to be the father of Honi Soit editor Sophie. Does this mean an end to critical analysis in Honi’s reporting of the traditionally stale and bitter Union? No. That would require there to have been critical analysis in the first place. (EB)

Cheekily written, but an innocent attempt to report news. Someone saw that, realised we had an audience, and in edition three we revealed:

SYDNEY UNIVERSITY UNION: Last week we reported that Chris Farrell was appointed the new General Manager of Sydney University’s student union. This week we can reveal that close to $50,000 was spent on external recruitment agencies to find Mr Farrell. Where was he hiding? The selection panel was evenly split for two candidates: Paul McJamett, the current Facilities Manager and previously expected next-in-line for the job, was supported by Vice-President Penny Crossley, Ex-President Ani Satchithanada, and Human Resources Manager Sandra Hardie. Meanwhile Farrell was supported by current President Toby Brennan, and the two senate reps (one of whom is new this year to the Board). Crossley is rumoured to have crossed the floor, and made the casting vote for Farrell. (Elias Bizannes)

And then we got threatened with a lawsuit (the first of many in my life, it would turn out) because we exposed some dirty secrets of a very politicised group of people. The reason I wanted to share that story was to show how we evolved from a “summary of the news” into a “tool for the politicians”. For the rest of that year, I had people in all the different factions developing relationships with me and breaking news. Yes, I knew I was being played for their own reasons. However, the using went both ways: I was getting access to confidential information from the insiders. Our little creation turned into a battleground for the local politicians – and so long as I could manage the players equally, I won just as much as they did, if not more.

Up until now, I never realised (or really thought about the fact) that my experience in student journalism was actually how the big players of the world operate. Forget the crap about what journalism is: at its core, it’s about creating relationships with insiders and being part of a game in politics that, as a by-product (not a function), also creates accountability and order in society.

On the future of journalism
For as long as we have politics, we will have “journalists”. In the tech industry, for example, the major blogs have become a tool for companies. I recently saw an example where I e-mailed the CEO of a prominent startup about an issue, and within days, two major blogs posted some old news to draw exposure to fixing the issue. This CEO used his media credits with the publishers of these blogs to help him with the issue. It’s the same dynamic described above: people who create news and people with the audience. Heck – we have an entire industry created to manage those two groups: the Public Relations industry.

So the question about the future of journalism needs a re-look. It’s a career path being disrupted by the Internet, which is breaking traditional business models – and the new innovations will have their bubble burst one day. Where we will find answers about the future is wherever we can see the dynamics of news creators and news distributors in play, as that is where journalism will evolve.

Personally, I’m still trying to work out if the captive audience has now left the building. But my 2004 experiment in student media – targeting the same Gen Ys that don’t read newspapers – is recent enough experience to prove the Internet hasn’t broken this relationship yet. If you are looking to see what the future of journalism, and especially the media, is – you need to follow where the audience is. But a word of caution: don’t measure the audience by its size, but by its type. One million people may read the blog TechCrunch, but it’s the same one million early adopters around the world that are asked by their Luddite families to fix the video recording machine. The indirect readers of a publication are just as much influenced, and can be reached, through the direct readers. Even though Michael Arrington, who started TechCrunch, was a corporate lawyer, his successful blog has now done what the mass media used to do. That’s something worth recognising as the core of his success, I think. Certainly, it validates that the future is just like the past – just slightly tweaked in its delivery.

The mobile 3D future – as clear as mud

I’ll be happy to admit that in the past, I never understood the hype behind the mobile web. That was, of course, before I bought the Nokia E61 – a brilliant phone I loved until it was stolen from me in April 2008. I had the phone from January 2007 with no regrets – only the hype surrounding the iPhone swayed me from (maybe) getting a new phone.

I used that phone for my news reading and e-mail (using the Gmail application, not the native phone installation), and I could understand how mobile was the future for technology innovation. Sure, it didn’t make me want to throw my hands in the air and shout in excitement, but it made me understand, amongst the naysayers of the mobile web, that there was potential. The defining thing for this realisation of mine was the fact it had a massive screen, which was not common in phones before that.

So when my phone was stolen, I faced the dilemma of buying a crappy new phone until the iPhone came out in Australia (potentially) several months later. But why bother, I thought – I ended up buying a first-generation iPhone over eBay. And let’s just say, ever since, I’ve been throwing my hands in the air in excitement.

Using the iPhone is truly a transformative experience. Quite frankly, it sucks as a phone – no support for MMS; it doesn’t synchronise with my corporate Lotus Notes calendar; and the call quality and hearing are consistently bad. But where it lacks as a phone, it makes up for it as a device. The fact I would use it over my laptop at home just reflects how perfect it was: a portable computer that makes mobile browsing enjoyable. The native e-mail client made the process fun!

My cracked iPhone

Naturally, of course, I smashed the screen of my phone (which made the iPhone “just” a phone, because the screen is its core value proposition), and yesterday I finally got around to buying a new phone. I faced an issue: do I upgrade to an iPhone 3G (and boy, did I miss 3G – I was willing to upgrade just for that), or go back to my beloved Nokias, which offer one of the most valuable features for me: syncing with my calendar. It turns out Nokia had just released the N96 in Australia – which is basically the N95, but better. I can recall Lachlan Hardy swearing by the N95 and convincing me it was the perfect phone. Indeed, his view was supported by anecdotal evidence I found: it was as if Nokia said “heck, let’s just chuck every single feature we have into the one phone and see what happens”.

So surely, I thought, the one-week-old N96 – the evolution of the sex-on-a-stick N95 – must be just as good, if not better? The experimenter in me went for it, because I knew I could get a better understanding of the mobile future.

From iPhone to Nokia
I absolutely hate it. I’ve barely had the phone for 24 hours, and yet I am dying to get rid of it – I much prefer my cracked-screen iPhone on GPRS to the N96. Going from the iPhone to the Nokia N96 is like going from Windows Vista to Windows 3.1 (the version BEFORE Windows 95). This phone, which has a market value of $1200 and every feature under the sun you could dream of in a phone, is something I am willing to sacrifice. The interface of the iPhone has me hooked like a heroin addict – I literally cannot force myself to get used to the degraded user experience.

And I just find it amazing how it’s provoking such a strong reaction in me. Before the iPhone, this phone would have been heaven for me. It has every feature I could dream of in a phone, yet the pixelated, text-driven graphical interface makes me agitated and angry that Nokia hasn’t focused on what really matters.

Playing with the N96 – using the browser, and even the Gmail app to try to make it a more seamless user experience – reminds me again why I never got the mobile web before. Realistically, it will be a few years before the standard mobile evolves to a richer interface, so consequently it’s too early to think the mobile web is about to take off now. However, the mobile and the 3D internet are both trends where I am willing to bet my life that it’s not a question of “if” but “when” – and whoever is in the right place at the right time at the emergence of the next upswing, post this financial crisis, will be among the new barons of the technology world.

The interface and user experience are the missing link connecting the vision of the early supporters to the excitement of the mainstream of society. Mark my words: just like social networking sites such as MySpace and Facebook appeared to come out of nowhere to dominate our world, so too will the mobile future and the 3D Internet.

Emerging trends? Nope – it’s been a long time coming

When I read the technology news, concepts like cloud computing still seem to be debated. I think to myself: you are kidding me, right? Then I take a step back and think maybe the future won’t be like the current mantra – but then again, trends take time to materialise.

Scanning through my hard disk, I could not help but laugh after finding a document I wrote to a friend in February 2006. As I said in the document: "Those six points, as rough as they are, form core elements in my thinking on how I approach business on the Internet …[I’ve been thinking about it] since November 2003"

So below is literally a copy-and-paste of that document, which has seeds from way back in 2003, when I submitted a grant application for a business idea (ahem, no response obviously…). The fact that nearly half a decade has passed since I first synthesised these ideas (no doubt from reading the thinkers of the day, not just me being imaginative) means they are not flaky predictions: they are real. Ready?

1. Digital future. All information – news reports, television shows, educational text books, radio shows – is being digitised, coexisting with its analogue versions. Whether the digital replicas replace their analogue counterparts is pure speculation. But one fact we cannot ignore is that the possibility is there – all content is now digital. And consumers will switch to the digital version if the value of the content consumed is better realised in digital form.

  • Quick case study. Many pundits believe newspapers will not exist in 15 years. I know they won’t exist in 15 years, and I have spent three years thinking about this very point. At first, I used to think digital replicas, as shown by http://www.newsstand.com, were what was going to transform the newspaper business. What I didn’t realise is that the current newspaper experience far exceeds the digital replica (I was hung up on the idea of electronic paper [www.eink.com] – which still remains a big possibility). But I knew the digital future was going to make the current newspaper business obsolete – there is more value in digital. It only hit me recently, by observing my own behaviour: traditional newspapers are not going to be replaced by digital versions – rather, the method by which people receive their news is going to change. And this fact is embodied by the recent acknowledgment by the world’s great newspapers that they are not in the newspaper business anymore, but in the information business. I used to read every single major newspaper, and several international newspapers, as I was a debater – I was a heavy news consumer, and I still am. Today, I still follow the news very closely – but I have not read a newspaper all year. Why? I receive all my information needs through websites, RSS feeds and blogs. A new method, made possible by the digital future. People’s means of consuming content will change because of digital.

2. Internet as infrastructure. It doesn’t take a genius to realise that the internet will be the core infrastructure of anything to do with information and communications. The power of the internet as infrastructure to communications and information unlocks opportunities that are transforming the world. Radio, TV, phone calls – you name it – can be done via the internet protocol now.


3. Content is king, distribution is queen – but advertising is what pays for the cost of that sting. Google now makes more revenue than the three prime-time television stations in the USA. In monetary terms, that’s about $10 billion a year. And yet, 99% of that revenue comes from one thing – Google’s click-through advertising (about 45% from Google results, the rest from the Google network of publishers through AdSense). HarperCollins announced last week that they are trialling a new business model of providing books for free, supported by advertising – the consumer book business was, until then, literally the only segment of media not reliant on advertising as a revenue model. Whilst broadcasting organisations make money from several sources, advertising is literally the backbone of their revenue. To make money out of any content, you place a huge reliance on advertising.
In short: if you want to make money out of content, you need to understand advertising.

4. One-to-one advertising is the superior form of advertising. Partly due to technological factors, the mass media could only advertise through a one-to-many medium – meaning one message to many. The digital-internet future has transformed that ability, by customising content on a one-to-one basis. If advertising and content can be targeted to an individual’s personality profile and preferences, the value of the content can be maximised, with 1-to-1 advertising returning a higher return on campaigns – far superior to any other form of advertising. Superior because it makes advertising more relevant for consumers (ie, a higher response rate); it increases advertising inventory (mass-media advertising is a bit like throwing pamphlets out of a plane, hoping the right people catch them – 1-to-1 means the right people get it at minimal cost); and best of all, it creates better accountability, which is what advertisers now demand.

5. The best business practice for one-to-one advertising is not there yet. The internet is the platform that enables one-to-one advertising, and yet this opportunity has still not been fully exploited. There is a massive need in the market for a means of providing personalised advertising far superior to the current technologies and methods. Google popularised an innovative form of advertising through click-throughs. However, internet click-throughs, despite providing more accountable and better-targeted advertising, still lack the ability to unleash the real power of one-to-one advertising. The power of the internet as a one-to-one advertising platform is still in its infancy.

6. Privacy matters. Privacy is the right to determine what information is available about you, when you want it to be available, and to whom. Companies currently gain as much information about you as they can through your sales history, your activity on the web, and the like – often without the full knowledge of the consumer. It is information collected by spying on a consumer, and whilst some people retaliate with various measures (ie, fake information, anonymous proxies), there is great public mistrust in providing personal information – or rather, too much of it to one organisation. If information about people is to be used, there needs to be proper approval – both for legal reasons (a business model cannot rely on consumer stupidity) and for the integrity of the data (ie, a cooperative consumer will provide more reliable data).

  • Companies like DoubleClick, who would collect your surfing history, relied on placing a cookie on your computer – what happens if you delete that cookie? And what happens if your dad, mum, and cousin from Brazil use the same computer as you? That creates a fairly inconsistent “profile” of the person being targeted.

I had totally forgotten I had written that. Reading it now, it’s a bit lame and I could probably extend on things a little – in fact, there are things here I have actually written blog posts about this last year. Better still, I can provide actual evidence that validates these trends as advancing: the existence of the VRM project for advertising, the big clash with Facebook and privacy (and let’s not forget the first time), and Microsoft’s recent announcement about moving away from software (to pick but a few examples).

If this is what I was seeing in November 2003, as a naive university student absorbing the industry trends of the day; in February 2006, when I wrote to my friend what I thought he needed to consider about the future; and given I still agree with it in May 2008 – I think things are beyond speculation: these are long-term trends that are entrenched.

It’s all still alpha in my eyes

The invention of hypertext has been the most revolutionary thing since two earlier technologies: the printing press and the alphabet. Combined with computing and the Internet, we have seen a new world represented by the World Wide Web, which has transformed entire industries in its mere 19-year existence.

The web caught our imagination in the nineties, which became the Dot-Com bubble. Several years after the bust, optimism reawakened when the Google machine listed on the stock exchange – heralding a new era dubbed “web2.0”. This era has now been recognised in the mainstream, elevated by the mass adoption of the social computing services, and has once again seen the web transform traditional ideas and generate excitement.

The web2.0 era is far from over – however, the recent global recession has flagged that the pioneers of the industry are looking for something new. As the mainstream is rejuvenated by web2.0, like the Valley was not that long ago, it’s time to look for what the next big thing will be. Innovation on the web is apparently flattening. Perhaps it is – but the seeds of the next generation of innovation on the web are already here.

Controversy over the meaning of web2.0 – and what its successor will be – should not distract us. We are seeing the web and its associated technologies evolve to new heights. So the question is not when web2.0 ends, but: what are we seeing now that will dominate the future?

My view:
• The mobile web. The mobile phone is evolving into a generic entertainment device – a new computing device that extends the reach of the internet. First with the desktop computer, and then with the laptop, new opportunities presented themselves in the way we could use computers. The use of this new computing platform will create new opportunities whose surface we have only scratched.
• The 3D web. Visit Second Life, the virtual world, and you will quickly note that the main driver of activity is sex, and that it’s just a game. However, porn and games have spearheaded a lot of technology innovation in the past. The 3D web is now emerging through four separate but related trends: virtual worlds, mirror worlds, augmented reality and lifelogging.
• The data web. Data has now become a focus in the industry. The semantic web will eventually allow a weak form of artificial intelligence, enabling computer agents to work in an automated fashion. Vendor Relationship Management is changing the fundamental assumptions of advertising, with a new way of transacting in our world. Those trends, combined with the drive for portability of people’s data, have us seeing the web in a new light, with new potential: not as a collection of documents, and not as a platform for computing, but as a database that can be queried.
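
The idea of the web as a database that can be queried can be shown in miniature. This toy triple store is purely illustrative – real systems use RDF and SPARQL, and every name below is invented:

```python
# A toy triple store: (subject, predicate, object) facts about the web.
# Real systems use RDF and SPARQL; all names here are invented.
triples = [
    ("nytimes.com", "publishes", "campaign-finance-data"),
    ("nytimes.com", "publishes", "movie-data"),
    ("campaign-finance-data", "format", "json"),
]

def query(s=None, p=None, o=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(p="publishes"))  # everything nytimes.com publishes
print(query(o="json"))       # everything available as JSON
```

The pattern-with-wildcards query is the essence of the data web: once facts are structured, the same store answers questions its publisher never anticipated.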

So to get some discussion, I thought I might ping some smart people I know in the industry on what they think: Chris Saad, Daniela Barbosa, Ben Metcalfe, Ross Dawson, Mick Liubinskas, Randal Leeb-du Toit, Stewart Mader, Tim Bull, Seth Yates, Richard Giles as well as you reading this now.
What do you think is currently in the landscape that will dominate the next generation of the web?

What is the DataPortability Project

When we created the DataPortability workgroup in November 2007, it was after discussion amongst a few of us to further explore an idea: a vision for the future of the social web. By working together, we thought we could make real change in the industry. What we didn’t realise was how quickly, and how much, attention this workgroup would generate. A press release has been issued that details the journey to date, which highlights some interesting tidbits. What I write below is how my own thoughts have evolved over the last few months, and what it is that I think DataPortability is.

1) Getting companies to adopt open, existing standards
RSS, OpenID, APML, oAuth, RDF, and the rest. These technologies exist, some of which have been around for many years. Everyone that understands what they are knows that they rock. If these standards are all so great – why hasn’t the entire technology industry adopted them yet? Now we just need awareness, education and, in some cases, pressure on the industry heavies to adopt them.

2) Create best practices of implementing these standards
When you are part of a community, you are in the know, and don’t realise how the outside world looks in. Let the standards communities focus their precious energies on creating and maintaining the technologies; and DataPortability can help provide resources for people to implement them. Is providing PHP4 support for oAuth really a priority? It isn’t for them – but by pooling the community with people that have diverse skillsets and are committed to the overall picture, it has a better chance of happening.

3) Synthesise these open standards to play nice with each other.
All these different communities working in isolation have been doing their own thing. An example is how Yadis-XRDS are working on service discovery and have a lacklustre catalogue. Do we just leave them to do their own thing? Does someone else in Bangalore create his own catalogue? (Which is highly likely given the under-exposure of this key aspect to groups needing it for the other standards, and the current state it’s in.) Thanks to Kaliya for mentioning that the XRDS guys have been more than proficient in working with other groups – "how do you think their spec is part of the OpenID spec?". Julian Bond goes on to say: "Yadis-XRDS is only months old and XRDS-Simple is literally days old… Having trouble thinking of a community that is working in isolation. And that isn’t likely to be hugely offended if you suggested it." So let me leave the examples there, and just say the DataPortability Project, when defining technical and policy blueprints, can identify issues and, from the bigger-picture perspective, focus attention where it’s needed. By embracing the broader community, and focusing our attention on weaknesses, we can ensure no one is reinventing wheels.

4) Communicate all the good things the existing communities are doing, under the one brand, to the end user.
RSS is by far the most recognised open standard. Have you ever tried explaining RSS to someone outside of the tech industry? I have. Multiple times. It’s like I’ve just told them about a future with flying cars and settlements on Mars. I’ve done it in the corporate world, to friends, family, girls I date, guys I weight train with and anyone else. Moving onto OpenID – does anyone apart from Scoble and the technorati who try every webservice they can, really care? Most people use Facebook, Hotmail (the cutting edge are using Gmail) and that’s it. On your next trip to Europe, ask a cultured French (wo)man if they know what OpenID is; why they need it; what they can do with it. Now try adding RSS to the mix. And APML. And oAuth. Bonus if you can explain RDF to yourself.

Wouldn’t it just be easier if you explained what DataPortability is, and the benefits that can be achieved by using all these standards? Standards are invisible things that consumers shouldn’t need to care about; they just care about the benefits. Do consumers care about the standards behind Wi-Fi, as defined by Zero-conf – or do they care about clicking "enable wireless" on their laptop and connecting to the Internet? If you go around evangelising the technical standards, the only audience you will get are the corporates in IT departments, who couldn’t care less. The corporate IT guys respond to their customer/client-facing guys, who in turn respond to consumers – and consumers couldn’t care less about how it’s done, just what they can do. Have the consumer channel their demand, and it benefits the whole ecosystem.


The new DataPortability trustmark

It has been said the average consumer doesn’t care about DataPortability. Of course they don’t – we are still in the investigation phase of the Project, which will later evolve into the design phase and then the evangelising phase. We know people would want RSS, oAuth, and the rest of the alphabet soup – so let’s use DataPortability as a brand through which we can communicate this. Sales is about creating demand – let’s coordinate our ‘selling’ to make it overwhelming, and make it easy for consumers to channel that want in a way they can relate to. You don’t say "oAuth"; you say "preventing password theft" to them instead.

5) Make the business case that a user should get open access to their data
Why should Facebook let other applications use the data it has on its servers? Why should google give up all this data they have about their users to a competitor? Why should a Fortune 500 adopt solutions that decentralise their control? Why should a user adopt RDF on their blog when they get no clear benefit from it? Is a self-trained PHP coder who can whack something together, going to be able to articulate that to the VC’s?

The tech industry has this obsession that nothing gets done unless the developers are on board. No surprises there – if we don’t have an engineer to build the bridge, we are going to have to keep jumping off the cliff hoping we make it to the other side. But if you don’t have people persuading those who would fund this bridge, or the broader population about how important it is for them to have this bridge – that engineer can build what he wants, but the end result is that no one will ever walk on it. Funny how web2.0 companies suck at the revenue model thing: overhype on the development innovation, with under-hype on the value proposition to the ordinary consumer who funds their business.

Developers need to be on board because they hassle their bosses, and sometimes that evangelising from within works; but imagine if we got the developers’ bosses’ bosses on board, because some old bear on the board of directors wants DataPortability after his daughter explained it to him (the same daughter who also told him about Facebook and Youtube). I can assure you, as I’ve seen first hand with the senior leadership at my own firm, this is exactly what is happening.

Intel is one of the best-selling computer-chip companies in the world. Do you really think as a consumer I care about what chip my computer runs on? Logically – no. But the "Intel Inside" marketing campaign gave them a monopoly, because end consumers would ask "does it have Intel inside?" and this pressure forced Intel’s customers (IBM and the rest) to actually use Intel. Steve Greenberg corrects me by saying "The Intel Inside campaign came a decade after Intel took over the world. It wasn’t what got them there. It was in response to Microsoft signaling that they liked AMD. Looked like AMD was going to take off… but then they didn’t". So my facts were slightly wrong, but the point still remains.
At the same time, it isn’t just political pressure – it’s also education. I genuinely believe opening up your data is a smart business strategy that will change the potential of web services.

You make people care by giving them an incentive (business opportunities; customer political pressure; peer pressure as individuals and an industry, which later evolves into industry norms). The semantic web communities, the VRM communities, the entire open standards communities – all have a common interest in doing this. DataPortability is culture change on an industry-wide level that will improve the entire ecosystem. Apparently innovation has died – I say it’s just beginning.

Information overload: we need a supply side solution

About a month ago, I went to a conference filled with journalists, and I couldn’t help but ask them what they thought about blogs and their impact on their profession. Predictably, they weren’t too happy. Unpredictable, however, were the reasons. It wasn’t just a rant, but a genuine care for journalism as a concept – and for how the blogging “news industry” is digging a hole for everyone.

Bloggers and social media are replacing the newspaper industry as a source of breaking news. What they still lack is quality – there have been multiple examples of blogs breaking news that, in the rush to publish, turned out to be false. Personally, I think as blogging evolves (as a form of journalism) the checks and balances will develop – such as big-name blogs with their brands effectively acting like a traditional masthead. And when a brand is developed, more care is put into quality.

Regardless, the infancy of blogging highlights the broader concern of “quality”. With the freedom for anyone to create, the Information Age has left us overloaded with information despite our finite ability to take it all in. The relationship between the producer and consumer of news is not only blurring – it’s radically transforming dynamics, with an impact even in the offline world.

Traditionally, “information overload” has been reduced to a simple analysis of the lower costs of entry for a producer of content (anyone can create a blog on wordpress.com and away you go). What I am starting to realise, however, is that the issue isn’t so much the technological ability for anyone to create their own media empire, but the incentive system we’ve inherited from the offline world.

Whilst numerous companies have tried to solve the problem from the demand side with “personalisation” of content (on the desktop, as an aggregator, and about another 1000 different spins), what we really need are attempts on the supply side, from the actual content creators themselves.


Too much signal can make it all look like noise

Marshall Kirkpatrick, along with his boss Richard McManus, are some of the best thinkers in the industry. The fact they can write makes them not journalists in the traditional sense, but analysts with the ability to clearly communicate their thoughts. Add to the mix Techcrunch don Michael Arrington and his team – they are analysts that give us amazing insight into the industry. I value what they write; but when they feel the pressure of their industry to write more, they are doing a disservice not only to themselves, but also to the humble reader they write for. Quality is not something you can automate – there’s a fixed amount a writer can do, not because of their typing skills, but because quality is a product of self-reflection and research.

The problem is that whilst they want to, can and do write analysis, their incentive system is biased towards a numbers game driven by popularity. More readers, and more content (which creates more potential to attract readers), mean more pageviews and therefore money in the bank, as advertisers pay per impression. The conflict of the leading blogs churning out content is that their incentive system is inherited from a flawed pre-digital model: what was known as circulation offline is now known as pageviews online.
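To see how blunt that incentive is, run the numbers. A back-of-envelope sketch (the CPM figure is a made-up assumption, not anyone's real rate card):

```python
CPM = 10.0  # dollars an advertiser pays per 1,000 impressions (assumed)

def monthly_revenue(posts, avg_views_per_post, cpm=CPM):
    """Revenue scales with the volume of posts, never with their quality."""
    return posts * avg_views_per_post / 1000 * cpm

# Ten considered essays versus a hundred quick news hits, same audience:
print(monthly_revenue(10, 20000))   # → 2000.0
print(monthly_revenue(100, 20000))  # → 20000.0
```

Quality never appears in the formula – which is exactly the flaw being described.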

A newspaper primarily makes money through its circulation: the number of physical newspapers it sells, but also the audited figures of how many people read it (readership can be up to three times the physical circulation). With the latter, a newspaper can sell space based on its proven circulation: the higher the readership, the higher the premium. The reason is that in the mass-media world, advertising was about hitting as many people as possible. I liken it to flying a plane over a piece of land and dropping leaflets, with the blind faith that of those 100,000 pamphlets, at least 1,000 people catch them.

It sounds stupid that an advertiser would blindly drop pamphlets, but they had to: it was the only way they could effectively advertise. To make sales, they need the ability to target buyers and create exposure for the product. The only mechanism available was the mass media, as it was a captive audience; at best, an advertiser could place ads in specialist publications hoping to get a better return on their investment (dropping pamphlets about water bottles over a desert makes more sense than over a group of people in a tropical rainforest). Nevertheless, this advertising was done en masse – the technology limited the ability to target.


Advertising in the mass media: dropping messages, hoping the right person catches them

The Internet is a completely new way to publish. The technology enables a relationship between a consumer of content, a vendor, and a producer of content unlike anything previously seen in the world. The end goal of a vendor’s advertising is sales, and they no longer need to drop pamphlets – they can now build a one-on-one relationship with that consumer. They can knock on your door (after you’ve flagged you want them to), sit down with you, and have a meaningful conversation about buying the product.

“Pageviews” are pamphlets being dropped – a flawed system that we used purely due to technological limitations. We now have the opportunity for a new way of doing advertising, but we fail to recognise it – and so our new media content creators are being driven by an old media revenue model.

It’s not technology that holds us back, but perception
Vendor Relationship Management (VRM) is a fascinating new way of looking at advertising, where the above scenario is possible. A person can maintain a bank of personal information about themselves, as well as flagging their intention of what products they want to buy – and vendors don’t need to resort to advertising to sell their product, but can build a relationship with these potential buyers one on one. If an advertiser knows you are a potential customer (by virtue of knowing your personal information – which, might I add, under VRM is something the consumer controls), they can focus their efforts on you rather than blindly advertising to the other 80% of people who would never buy their product. In a world like this, advertising as we know it is dead because we no longer need it.

VRM requires a cultural change in how we understand a future like this. Key to this is companies recognising that a user controlling their personal data in fact opens up new opportunities for advertising. Companies currently believe that by accumulating data about a user, they are building a richer profile and can therefore better ‘target’ advertising. But companies succeeding technologically on this front are being booed down in a big way by privacy advocates and the mainstream public. The cost of holding this rich data is too high. Privacy by obscurity is no longer possible, and people demand the right to privacy in an electronic age where disparate pieces of their life can be linked online.

One of the biggest things the DataPortability Project is doing is transforming the notion that a company somehow has a competitive advantage by controlling a user’s data. The political pressure, education, and advocacy of this group is going to enable things like VRM. When I spoke to a room of Australia’s leading technologists at BarCamp Sydney about DataPortability, what I realised is that they failed to recognise that what we are doing is not a technological transformation (we are advocating existing open standards, not new ones) but a cultural transformation of a user’s relationship with their data. We are changing perceptions, not building new technology.


To fix a problem, you need to look at the source that feeds the beast

How the content business will change with VRM
One day, when users control their data and have data portability, and we can have VRM – the content-generating business will find a light out of the hole currently being dug. Advertising on a “hits” model will no longer be relevant. The pageview will be dead.

Instead, what we may see is an evolution to a subscription model. Rather than measuring success by how many people viewed their content, content producers can focus on quality, as their incentive system will no longer be driven by the pageview. Consumers could build up ‘credits’ under a VRM system for participating (my independent view, not a VRM idea), and then use those credits to purchase access to content they come across online. Such a model allows content creators to be rewarded for quality, not numbers. They will need to focus on their brand, managing their audience’s expectations of what they create; in return, a user can subscribe with regular payments of credits they earned in the VRM system.

Content producers can then follow whatever content strategy they want (news, analysis, entertainment) and will no longer be held captive by a legacy system that rewards the number of people, not the type of people.

Will this happen any time soon? With DataPortability, yes – but only once we all realise we need to work together towards a new future. Until we get that broad recognition, I’m just going to have to keep hitting “read all” in my feed reader because I can’t keep up with the amount of content being generated; whilst the poor content creators strain their lives working in a flawed system that doesn’t reward their brilliance.

My presentation at Kickstart forum

I’m currently at Kickstart forum (along with the Mickster), and I just gave a presentation on DataPortability to a bunch of Aussie journalists. I didn’t write a speech, but I did jot down some points on paper before I spoke, so I thought I might share them here given I had a good response.

My presentation had three aspects: background, explanation, and implications of DataPortability. Below is a summary of what I said.

Background

  • Started by a bunch of Australians and a few other people overseas in November 2007 out of a chatroom. We formed a workgroup to explore the concept of social network data portability
  • In January 2008, Robert Scoble had an incident, which directed a lot of attention to us. As a consequence, we’ve seen major companies such as Google, Microsoft, Yahoo, Facebook, Six Apart, LinkedIn, Digg, and a host of others pledge support for the project.
  • We now have over 1000 people contributing, and have the support of a lot of influential people in the industry who want us to succeed.

Explanation

  • The goal is not to invent anything new. Rather, it’s to synthesise existing standards and technologies into one blueprint – and then push it out to the world under the DataPortability brand
  • When consumers see the DataPortability brand, they will know it represents certain things – similar to how users recognise the Centrino brand represents Intel, mobility, wireless internet, and long battery life. The brand communicates some fundamental things about a web service, allowing a user to recognise that a supporting site respects its users’ data rights and offers certain functionality.
  • Analogy of zero-configuration networking: before the zeroconf initiative it was difficult to connect to the internet wirelessly. Due to the standardisation of policies, we can now connect to the internet wirelessly at the click of a button. The consequence is not just a better consumer experience, but the enablement of future opportunities, such as what we are seeing with the mobile phone. Likewise, with DataPortability we will be able to connect to new applications and things will just “work” – opening new opportunities for us
  • Analogy of the bank: I stated how the attention economy is one where we pay with our attention – ie, we put up with advertising and in return we get content – and that the currency of the attention economy is data. With DataPortability, we can store our data in a bank, and via “electronic transfer” interact with various services, controlling the use of that data in a centralised manner. We update our data at the bank, and it automatically synchronises with the services we use – ie, automatically updating your Facebook and MySpace profiles

Implications

  1. Interoperability: when diverse systems and organisations work together. A DataPortability world will allow you to use your data generated from other sites – ie, if you buy books about penguins on Amazon, you can get penguin movie recommendations in your pay-TV movie catalogue. Things like the ability to log in across the web with one sign-on create a self-supporting ecosystem where everyone benefits.
  2. Semantic web: I gave an explanation of the semantic web (which generated a lot of interest afterwards in chats), and then I proceeded to explain that the problem for the semantic web is there hasn’t been this uptake of standards and technologies. I said that when a company adopts the DataPortability blueprint, they will effectively be supporting the semantic web – and hence enabling the next phase of computing history
  3. Data rights: I claimed the DataPortability project is putting data rights in the spotlight, and it’s an issue that has generated interest from other industries like the health and legal sectors, not just the Internet sector. Things like: what is privacy, and what exactly does my “data” mean? DataPortability is creating a discussion on what this actually means
  4. Wikiocracy: I briefly explained how we are doing a social experiment with a new type of governance model, which can be regarded as an evolution of the open source model. “Decentralised” and “non-hierarchical” – with time it will become more evident what we are trying to do

Something that amused me was in the sessions I had afterwards when the journalists had a one-on-one session with me, one woman asked: “So why are you doing all of this?”. I said it was an amazing opportunity to meet people and build my profile in the tech industry, to which she concluded: “you’re doing this to make history, aren’t you?”. I smiled 🙂

Explaining APML: what it is & why you want it

Lately there has been a lot of chatter about APML. As a member of the workgroup advocating this standard, I thought I might help answer some of the questions on people’s minds – primarily, “what is an APML file” and “why do I want one”. If you haven’t already done so, I suggest you read the excellent article on attention profiling that Marjolein Hoekstra recently wrote as an introduction. This article will focus on the technical side of an APML file and what can be done with it. Hopefully, by understanding what APML actually is, you’ll understand how it can benefit you as a user.

APML – the specification
APML stands for Attention Profile Markup Language. It’s an attention economy concept, based on the XML technical standard. I am going to assume you don’t know what attention means, nor what XML is, so here is a quick explanation to get you on board.

Attention
There is this concept floating around on the web called the attention economy. It means that as a consumer, you consume web services – e-mail, RSS readers, social networking sites – and you generate value through your attention. For example, if I am on the MySpace band page for Sneaky Sound System, I am giving attention to that band. Newscorp (the company that owns MySpace) is capturing that implicit data about me (ie, it knows I like Electro/Pop/House music). By giving my attention, I have let Newscorp collect information about me. Implicit data is what you give away about yourself without saying it – like how people can determine what type of person you are purely from the clothes you wear. This is in contrast to explicit data – information you give up about yourself directly (like your gender when you signed up to MySpace).


I know what you did last Summer

XML
XML is one of the core standards on the web. The web pages you access are probably using a form of XML to provide the content to you (xHTML). If you use an RSS reader, it pulls a version of XML to deliver that content to you. I am not going to get into a discussion about XML, because there are plenty of other places that do that. However, I just want you to understand that XML is a very flexible way of structuring data. Think of it like a street directory. A map with no street names is useless if you are trying to find a house; add the street names, and it suddenly becomes a lot more useful, because you can make sense of the houses (the content). XML is a way of describing a piece of content.
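The street-directory point in code: the same content, but because each piece is labelled, software can navigate it. The snippet below is a made-up example, parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A tiny, invented XML snippet. The tags are the "street names" that
# tell software what each piece of content means.
snippet = """
<band name="Sneaky Sound System">
  <genre>Electro</genre>
  <genre>Pop</genre>
  <genre>House</genre>
</band>
"""

band = ET.fromstring(snippet)
print(band.get("name"))                         # → Sneaky Sound System
print([g.text for g in band.findall("genre")])  # → ['Electro', 'Pop', 'House']
```

Strip the tags away and you have a bare list of words; with them, a program knows which word is the band and which are the genres.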

APML – how it works
So all APML is, is a way of converting your attention into a structured format. The way APML does this is by storing your implicit and explicit data – and scoring it. Lost? Keep reading.

Continuing with my example about Sneaky Sound System: if MySpace supported APML, it would identify that I like pop music. But just because someone gives attention to something doesn’t mean they really like it; the thing about implicit data is that companies are guessing, because you haven’t actually said it. So MySpace might say I like pop music, but with a score of 0.2 or 20% positive – meaning it’s not too confident. Now let’s say directly after that, I go onto the Britney Spears music page. Okay, there’s no doubting it now: I definitely do like pop music. So my score against “pop” is now 0.5 (50%). And if I visited the Christina Aguilera page – forget about it, my APML rank just blew out to 1.0! (Note that the scoring system is a percentage, ranging from -1.0 to +1.0, or -100% to +100%.)
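The scoring mechanics can be sketched in a few lines. This is my own toy illustration – the APML spec defines the -1.0 to +1.0 range, but the increment size and clamping below are made up for the example:

```python
scores = {}

def give_attention(concept, weight=0.25):
    """Nudge a concept's score each visit, clamped to APML's -1.0..+1.0 range.
    The 0.25 step is an arbitrary choice for this sketch."""
    new = scores.get(concept, 0.0) + weight
    scores[concept] = max(-1.0, min(1.0, new))

give_attention("pop")                      # visited Sneaky Sound System
give_attention("pop")                      # visited Britney Spears
give_attention("pop")                      # visited Christina Aguilera
give_attention("Valleywag", weight=-1.0)   # allergic reaction

print(scores)  # → {'pop': 0.75, 'Valleywag': -1.0}
```

The real question of *how much* each visit should count is up to the application doing the profiling; the spec only standardises how the resulting score is written down.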

APML ranks concepts, but concepts are not just things: it will also rank authors. In the case of Marjolein Hoekstra, who wrote the post I mention in my intro: because I read other things from her, I have a high regard for her writing. Therefore, my APML file gives her a high score. On the other hand, I have an allergic reaction whenever I read something from Valleywag, because they have cooties. So Marjolein’s rank would be 1.0 and Valleywag’s -1.0.

Aside from the ranking of concepts (which is the core of what APML is), there are other things in an APML file that might confuse you when reviewing the spec. “From” means ‘from the place you gave your attention’. So with the Sneaky Sound System concept, it would be ‘from: MySpace’ – it simply describes the name of the application that added the implicit node. Another thing you may notice in an APML file is that you can create “profiles”. For example, the concepts about me in my “work” profile are not something I want to mix with my “personal” profile. This allows you to segment the ranked concepts in your APML into different groups, allowing applications access to only a particular profile.

Another thing to take note of is ‘implicit’ and ‘explicit’, which I touched on above: implicit being things you give attention to (ie, the clothes you wear – people guess, from what you wear, that you are a certain personality type); explicit being things you gave away (the words you said – when you say “I’m a moron”, it’s quite obvious you are). APML categorises concepts based on whether you explicitly said it, or it was implicitly determined by an application.
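Putting those pieces together, here is roughly what such a file looks like when built programmatically. A hedged sketch: the element and attribute names follow my reading of the 0.6 draft and may not match the spec exactly, so check the spec before relying on them:

```python
import xml.etree.ElementTree as ET

# Build a minimal APML-shaped document: profiles hold implicit concepts,
# each concept carries a score ("value") and its source ("from").
apml = ET.Element("APML", version="0.6")
body = ET.SubElement(apml, "Body", defaultprofile="Personal")
profile = ET.SubElement(body, "Profile", name="Personal")
implicit = ET.SubElement(profile, "ImplicitData")
concepts = ET.SubElement(implicit, "Concepts")

# "from" is a Python keyword, so it is passed via dict unpacking.
ET.SubElement(concepts, "Concept", key="pop", value="0.75",
              **{"from": "myspace.com"})
ET.SubElement(concepts, "Concept", key="Valleywag", value="-1.0",
              **{"from": "bloglines.com"})

print(ET.tostring(apml, encoding="unicode"))
```

Note how the segmentation into profiles falls straight out of the structure: a second `Profile` element named "Work" would keep its concepts entirely separate from "Personal".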

Okay, big whoop – what can APML do for me?
In my eyes, there are five main benefits of APML: filtering, accountability, privacy, shared data, and you being boss.

1) Filtering
If a company supports APML, it is using a smart standard that other companies use to profile you. By ranking concepts and authors, for example, they can use your APML file in the future to filter things that might interest you. As I have such a high ranking for Marjolein, when Bloglines implements APML, it will be able to use this information to start prioritising content in my RSS reader. Meaning, of the 1,000 items in my Bloglines reader, all the blog postings from her will have more emphasis for me to read, whilst all the ones from Valleywag will sit at the bottom (with last night’s trash).
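A feed reader's side of this could be as simple as a sort. A minimal sketch, with made-up items and scores, of how Bloglines-style prioritising might use the author rankings in an APML file:

```python
# Author scores as they might appear in my APML file (illustrative values).
author_scores = {"Marjolein Hoekstra": 1.0, "Valleywag": -1.0}

unread = [
    {"title": "Celebrity gossip", "author": "Valleywag"},
    {"title": "Attention profiling 101", "author": "Marjolein Hoekstra"},
    {"title": "Some other post", "author": "Unknown blogger"},
]

# Unknown authors default to 0.0 - neither promoted nor buried.
ranked = sorted(unread,
                key=lambda item: author_scores.get(item["author"], 0.0),
                reverse=True)

print([item["title"] for item in ranked])
# → ['Attention profiling 101', 'Some other post', 'Celebrity gossip']
```

The reader never has to guess at my tastes from scratch – the profile travels with me, which is the whole point of the standard.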

2) Accountability
If a company is collecting implicit data about me and trying to profile me, I would like to see that information, thank you very much. It’s a bit like me wearing a pink shirt at a party. You meet me, and think “pink – the dude must be gay”. Now, I am actually as straight as a doornail, and wearing that pink shirt is me trying to be trendy. But by observation, you have profiled me. Now imagine if that was a web application, where this happens all the time. By letting you access your data – your APML file – they give you the power to change that. I’ve actually done this with Particls, which supports APML. It had ranked a concept highly based on things I had read, which was wrong. So what I did was change the score to -1.0 for one of them, so that Particls would never show me content on things it wrongly thought I would like.

3) Privacy
I joined the APML workgroup for this reason: it was, to me, a smart way to deal with the growing privacy issue on the web. It fits my requirements for being privacy compliant:

  • who can see information about you
  • when they can see information about you
  • what information they can see about you

APML does that by allowing me to create ‘profiles’ within my APML file; allowing me to export my APML file from a company; and allowing me to access my APML file so I can see what profile I have.


Here is my APML, now let me in. Biatch.

4) Shared data
An APML file can, with your permission, share information between your web services. My concept rankings for books on Amazon.com can sit alongside my RSS feed rankings. What’s powerful about that is the unintended consequences of sharing that data. For example, if Amazon ranked what my favourite book genres were, this could be useful information to help me filter my RSS feeds by blog topic. The data generated in Amazon’s ecosystem can benefit me in another ecosystem, in a mutually beneficial way.

5) You’re the boss!
By generating APML for the things you give attention to, you are recognising the value your attention has – something companies already place a lot of value on. Your browsing habits can reveal useful information about your personality, and the ability to control your profile is a very powerful concept. It’s like controlling the image people have of you: you don’t want the wrong things being said about you. 🙂

Want to know more?
Check the APML FAQ. Otherwise, post a comment if you still have no idea what APML is. Myself or one of the other APML workgroup members would be more than happy to answer your queries.
