Frequent thinker, occasional writer, constant smart-arse


Thank you 2008, you finally gave New Media a name

Earlier this year, Stephen Collins and Chris Saad flew to Sydney for the Future of Media summit and, in front of me, had heated discussions about why nobody had invited them to the Social Media Club in Australia. As they yapped away, I thought to myself: what the hell are they going on about? It turns out things I used to call "blogs", "comments" or "wikis" were now "social media". Flickr, Delicious, YouTube? No longer Web 2.0 innovations, but social media. Bulletin boards that you would dial up on your 14.4 kbps modem? Social media. Online forums discussing fetishes? Social media. Everything was now bloody social media (or Social Media: the tools are lowercase, the concept uppercase), and along with Dare Obasanjo I was asleep for the two hours when it suddenly happened.

[Image: the social media bandwagon]

However, it turns out this is a term that's been around for a lot longer than we give it credit for. It lay low for a while, and then, as some significant events occurred this year, it became a perfect fit to describe what was happening. It's a term I've been waiting years to emerge, as I knew the term "new media" would mature one day.

Ladies and gentlemen, welcome to our new world and the way of defining it: 2008 is when the Information Age’s "social media" finally displaced the Industrial Era’s "mass media". Below I document how, when and why.

Origins of the term and its evolution
Chris Shipley, executive producer of the Demo conference, is said to have coined the term during a keynote at the Demofall 2005 conference on 20 September 2005. As she said in her speech:

Ironically, perhaps, there is one other trend that would at first blush seem at odds with this movement toward individuality, and that is the counter movement toward sociability.

As one reporter pointed out to me the other day, the program book you have before you uses the term “social” a half-dozen times or more to describe software, computing, applications, networks and media.

I’m not surprised that as individuals are empowered by their communications and information environments, that we leverage that power to reach out to other people. In fact, blogs are as much about individual voice as they are about a community of readers.

The term gained greater currency over the next year, as Shipley used it in her work and various influencers like Steve Rubel popularised it. Brainjam, which popularised unConferences, first had the idea of a Social Media Club around the time of Shipley's keynote and eventually formed it in July of the following year, which created more energy behind the term. Other people started building awareness, like the Hotwire consultant Drew Benvie, who has been writing the Social Media Report since April 2006 (and created the Social media Wikipedia page on 9 July 2006). Benvie said to me in private correspondence: "When social media emerged as a category of the media landscape in 2005 / 2006 I noticed the PR and media industries looking for suitable names. The term social media came to be used at the same time of social networks becoming mainstream." Back then it was more a marketing word to conceptualise online tools and the strategies to deal with them, which is why there was a distaste for the term that slowed its adoption.

It was 2008, however, when several news incidents, innovations, and an election entrenched this term in our consciousness. Later on I will explain that, but first, a lesson.

[Image: Web 2.0 logos]

So what is Social Media?
A debate in August 2008 produced the following definition: "social media are primarily Internet and mobile-based tools for sharing and discussing information among human beings." I like that definition, but with it, you could arguably say "social media" existed when the first e-mail was sent in the 1970s. Perhaps it's going to suffer the fate of the term "globalisation": in the 1990s people didn't know the term existed, by 2001 in high school I was told it had been around since the 1980s, and by my final year of university in 2004 I was told "globalisation" started in the 1700s. Heaven forbid it turns into a term like "Web 2.0", where no one agrees on a definition but it somehow becomes a blanket term for everything after the Dot-Com bubble.

The definition is off-putting unless you have a fundamental understanding of what exactly media is. It might shock you to hear this, but a newspaper and a blog are not media. A television and a Twitter account are not media either. So if you've had trouble getting the term social media before, it's probably because you've been looking at it in the wrong way. Understand what media really is and you will recognise the brilliance of the term "social media".

Vin Crosbie answered a question I had been chasing for half a decade: what is new media? Crosbie's much-cited work has moved around the Internet, so I can't link to his original piece (update: found it on the Internet Archive), but this is what he argued, in summary.

  • Television, books and websites are wrongly classified as media. They are really media outputs. We define our world by the technology, not the process. Media is about the communication of messages.
  • There are three types of media in the world: interpersonal media, mass media, and new media.
  1. Interpersonal media, a term he coined for lack of an established one, is a one-on-one communications process. A person talking directly to another person is interpersonal media: one message, distributed from one person to one other person.
  2. Mass media is a one-to-many process: one entity or person communicates a single message to multiple people. If you are standing in front of a crowd giving a speech, you are conducting a mass media act. Likewise, a book is mass media, as it is one message distributed to many.
  3. New media, which is only possible due to the Internet, is many-to-many media.

I highly recommend you read his more recent analysis, which is an update of his 1998 essay (viewable on the Internet Archive).

That's a brilliant way of breaking it down, but I still didn't get what many-to-many meant. When the blogosphere tried to define social media it was a poor attempt (and as recently as November 2008, it still sucked). But hidden in the archives of the web is Stowe Boyd, who came up with the most accurate analysis I've seen yet.

  1. Social Media Is Not A Broadcast Medium: unlike traditional publishing — either online or off — social media are not organized around a one-to-many communications model.
  2. Social Media Is Many-To-Many: All social media experiments worthy of the name are conversational, and involve an open-ended discussion between author(s) and other participants, who may range from very active to relatively passive in their involvement. However, the sense of a discussion among a group of interested participants is quite distinct from the broadcast feel of the New York Times, CNN, or a corporate website circa 1995. Likewise, the cross linking that happens in the blogosphere is quite unlike what happens in conventional media.
  3. Social Media Is Open: The barriers to becoming a web publisher are amazingly low, and therefore anyone can become a publisher. And if you have something worth listening to, you can attract a large community of likeminded people who will join in the conversation you are having. [Although it is just as interesting in principle to converse with a small group of likeminded people. Social media doesn’t need to scale up to large communities to be viable or productive. The long tail is at work here.]
  4. Social Media Is Disruptive: The-people-formerly-known-as-the-audience (thank you, Jay Rosen!) are rapidly migrating away from the old-school mainstream media, away from the centrally controlled and managed model of broadcast media. They are crafting new connections between themselves, out at the edge, and are increasingly ignoring the metered and manipulated messages that centroid organizations — large media companies, multi national organizations, national governments — are pushing at them. We, the edglings, are having a conversation amongst ourselves, now; and if CNN, CEOs, or the presidential candidates want to participate they will have to put down the megaphone and sit down at the cracker barrel to have a chat. Now that millions are gathering their principal intelligence about the world and their place in it from the web, everything is going to change. And for the better.

So many-to-many is a whole lot of conversation? As it turns out, yes it is. Now you’re ready to find out how 2008 became the year Social Media came to maturity.

How 2008 gave the long overdue recognition that New Media is Social Media
The tools: enabling group conversations
MySpace's legacy on the world is something I think is under-recognised: the ability to post on people's profiles. It gave people an insight into public communication amongst friends, as people used it more for open messaging than for adding credentials, which is what the feature was originally intended for when developed on Friendster. Yes, I recognise public discussions have occurred for years on things like forums and blogs, but this curious aspect of MySpace's culture at its peak has a lot to answer for in what is ultimately Social Media. Facebook picked up on this feature and more appropriately renamed it "wall posts", and with the launch of the home screen, essentially an activity stream of your friends, it created a new form of group communication.

The image below shows a wall-to-wall conversation with a friend of mine in February 2007 on Facebook. You can't see it, but at the bottom I wrote a cheeky response to Beata's first message, about her being a cabbage-eating Ukrainian communist whose vodka is radioactive from Chernobyl. She responds as you can see, but more interestingly, our mutual friend Rina saw the conversation on her home screen and jumped in. This is a subtle example of how the mainstream non-technology community is using social media. I'm currently seeing how non-technology friends of mine share links that appear on the activity stream and jump into a conversation about them right there. It's like overhearing a conversation around the water cooler and joining in if you want.

[Image: Facebook wall-to-wall conversation between Elias, Beata and Rina]

This is what made Twitter what it is. What started as a status-update tool for friends turned into a chat room with your friends: you can see the messages posted by people you are mutually following, and you can join in on a conversation that you weren't originally a part of. Again, simple, but the impact we have seen it have on the technology community is unbelievable. For example, I noticed a few days ago that Gabe Rivera had a discussion with people about how he still doesn't get what social media is. I wasn't involved in that discussion originally, but it has partially inspired me to explore the issue with this blog post. These are subtle, anecdotal examples, but in sum they point to a broader transformation occurring in our society due to these tools that allow us to mass collaborate and communicate. The open conversation culture of Web 2.0 has helped create this phenomenon.

Another Internet start-up company that I think has contributed immensely to the evolution of Social Media is Friendfeed. It essentially copied the Facebook activity screen, but made it better – and in the process created the closest thing to a social media powerhouse. People share links there constantly and get into discussions inline. In the mass media, an editor would determine what you could read in a publication; in the Social Media world, you determine what you read based on the friends you want to receive information from. Collectively, we disseminate information and inform each other: it's decentralised media. Robert Scoble, a blogging and video superstar, is the central node of the technology industry. He consumes and produces more information than anyone else in this world; and if he is spending seven hours a day, seven days a week on Friendfeed, that's got to tell you something's up.

The events: what made these tools come to life in 2008
We've often heard about citizen journalism, with people posting pictures from their mobile phones to share with the broader Internet. Blogs have been a mainstay in politics this last decade. But it was 2008 that saw two big events that validated Social Media's impact and maturity.

  1. A new president: Barack Obama has been dubbed the world's first Social Media president. Thanks to an innovative use of technology (and the fact one of the co-founders of Facebook ran his technology team – 2008 is the year for Social Media due to cross-pollination), we've seen the most powerful man in the world get elected thanks to the use of the Internet in a specific way. Obama would post on Twitter where he was speaking; used Facebook in a record way; posted videos on YouTube (and is doing a weekly video address now as president-elect) – and a dozen other things, including his own custom-built social networking site.
  2. A new view of the news: In November, we saw a revolting event occur: the terrorist attacks in India (which have now put us on the path of a geopolitical nightmare in the region). However, the tragic events in Mumbai also gave tangible proof of the impact social media is having on the world.

What's significant about the above two events is that Social Media has usurped the role played by the Mass Media in the last century and beyond. Presidents of the past courted newspapers, radio and television personalities to get positive press, as Mass Media influenced public perception. Likewise, breaking news has been the domain of the internationally-resourced Mass Media. Social Media is a different but much better model.

What’s next?
It's said we need bubbles, as they fuel over-development that leaves something behind forever. The over-hyped Web 2.0 era has given us a positive externality that laid the basis for the many-to-many communications required for New Media to occur. Arguably, the culture of public sharing that first became big with the social bookmarking site Del.icio.us sparked this cultural wave that has come to define the era. The social networking sites created an infrastructure for us to communicate with people en masse, and to recognise the value of public discussions. Tools like wikis, both in public and in the enterprise, have made us realise the power of group collaboration – indeed, from my own experience rolling out social media technologies at my firm, the biggest impact a wiki has in a corporation is encouraging this culture of "open".

It has taken a long time to get to this point. The technologies have taken time to evolve (i.e., connectivity and a more interactive experience than the document web); our cultures and societies have also needed time to catch up with this massive transformation. Now that the infrastructure is there, we are busy refining the social model. Certainly, the DataPortability Project has a relevant role in ensuring the future of our media is safe – for example, monitoring the Open Standards we use to allow people to reuse their data. If my social graph is what filters my world, then my ability to access and control that graph is the equivalent of the Mass Media's cry for freedom of the press.

[Image: Elias Bizannes's social graph]
Over 700 people in my life – school friends, university contacts, workmates and the rest – are people I am willing to trust to filter my information consumption. It will be key for us to be able to control this graph.

Newspapers may be going bankrupt thanks to the Internet, but in 2008 we can finally, and confidently, identify what the future of media looks like.

The makings of a media mogul: Michael Arrington of TechCrunch

After recognising in my previous post that Michael Arrington has successfully captured the dynamic of the mass media to pioneer new media, I asked myself: how did this guy do it? With some time on my hands, I looked into what I think is one of the most remarkable stories of the recent tech boom that was Web 2.0 (yep, that's past tense – it's an innovation era that has now closed). How "a nobody – a former attorney and entrepreneur who, at 35, looked as if he might never hit it big" became one of Time's 100 most influential people in the world. I've never interacted with Arrington, although I know plenty of people that know him well (through the Aussie mafia that grace the Valley). So this is coming from an objective but aware view: an outside view, with purely the public record to track his success. Let's see what the evidence tells us.

The accidental start-up
Reading through the archives of his main blog TechCrunch.com and his companion blog CrunchNotes.com, I came to realise his success could be identified as early as the first five months after his first post. He launched TechCrunch.com on 11 June 2005, with posts released daily if not multiple times per day. The blog averaged five posts every two days in its first year, with 879 posts (it was actually more, but a half dozen or so have since been removed).

[Image: TechCrunch posts per day, year one]

His first post, which has since been removed (God bless the Internet Archive), gives an insight into his motivations for starting the blog.

TechCrunch is edited by Michael Arrington and Keith Teare, with frequent input from guest editors. It is part of the Archimedes Ventures network of companies.

Archimedes Ventures was at the time a two-partner firm that specialised in the "development of companies focused on Web 2.0 technologies and solutions." The fact the page listed Teare and was marked as part of the Archimedes Ventures network of companies suggests this was a conscious business development effort on Arrington's part. As he would later reveal, he was inspired by Dave Winer, who said: "if you are going to build a new company, go to the trouble of actually researching what other companies have already done." Several months later, in October, he posted an announcement that his startup Edgeio would be live soon, validating that TechCrunch wasn't so much a "hobby" as a need to understand his market. Indeed, it seems TechCrunch just became a more formalised affair, as he had been publicly posting research into potential competitors on his personal blog from March 2005 – and by the time he launched TechCrunch there were already four employees at Edgeio. No doubt exposure and networking, like for any smart businessman, were part of his agenda as well, which perhaps is why we saw a transition from a personal site to a TechCrunch brand (more on community building later).

In October 2005, TechCrunch was ranked the 566th blog by Technorati, based on the number of links it received from other websites. In December of that year, its ranking had climbed to 96th. One year on, in June 2006, it became the 4th most linked-to blog, and it has subsequently maintained its status as number 2 (unable to beat another new media mogul, Arianna Huffington, who dominates the table – but that's a story for another time).

[Image: TechCrunch subscriber growth]

The above graph shows an explosion, but it’s the first year that tells the story which forms the basis of this post:

Over that first year, 23,713 comments had been left, with around 1-2 million page views per month. However, as the figures show, it was the first six months where this research turned into a prospective business, with subsequent months and years simply consolidating his growth: by year two, there were 2,000 more posts (double the output of the previous year); 115,608 comments and trackbacks in total (an average of 40 per post); and 435,000 RSS subscribers. Page views in the month leading up to the 24th month in operation were 4.5 million, twice what they were the previous year. By September 2008, over a million people subscribed to the blog.
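
As a quick sanity check, those figures hang together. Here is a minimal back-of-the-envelope sketch using only the numbers quoted above; the derived averages are my own arithmetic, not additional data:

```python
# Back-of-the-envelope check of the TechCrunch figures quoted above.
# Inputs are the numbers from the post; the derived averages are mine.
year1_posts = 879
year2_extra_posts = 2_000        # "2,000 more posts" by year two
total_comments = 115_608         # comments and trackbacks by year two

posts_per_two_days = year1_posts / 365 * 2
comments_per_post = total_comments / (year1_posts + year2_extra_posts)

print(f"Year one: {posts_per_two_days:.1f} posts every two days")  # ~4.8, i.e. "5 every two days"
print(f"Year two: {comments_per_post:.0f} comments per post")      # ~40, matching the quoted average
```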

So how did he do it?
Compared to his peers/competitors, he joined the game quite late, and yet he is absolutely smashing them. Same software in some cases and same focus. The question is, what did Arrington do that others didn’t?

Whilst the metrics might track his growth, they don’t track how he did it, which has less to do with Search Engine Optimisation and more to do with hyping up a boom. Below I describe what I think are the Critical Success Factors that made TechCrunch what it is today.

1) Events.
TechCrunch wasn't just a blog; it was a host. Early on, there were events hosted at Arrington's house where people could network and mingle. It would be a mistake to think TechCrunch only got into the conference business later as an alternative revenue stream; the reality is, social networking was being organised in the real world in parallel with the blog from as early as August 2005. To create a new blog and have 63 people subscribed to it within a week indicates a lot of offline activity to get those subscribers. The social meet-ups reinforced his readership base.

2) Web2.0.
Arrington saw a tide building for a second tech boom and formed a loose group of allies promoting it. Add to the mix some existing high-profile personal brands like Dave Winer and Robert Scoble – and in the process, you build your own personal brand. To use his words, he saw a parade and got in front of it.

When Tim O'Reilly coined "Web 2.0", it was a buzz-speak marketing word. What Arrington did was successfully exploit this dynamic by recognising the rising investment trend. He built a community around Web 2.0 by being its tireless champion and channelling existing energies. And as the community grew, so did he. He realised that what goes down goes back up again – and by tapping into this growth, he could grow with it. If this second boom was anything like the first, being at the front of it would be such a good career move that it probably didn't even need to be said.

3) Excellent content.
Don't underestimate the difference quality content makes. Arrington has an analytical mind and is a clear communicator – he is a lawyer, after all. Intelligence and an ability to communicate will beat even the most experienced journalist. I've been told that Arrington doesn't understand tech, or at least makes a convincing show of not getting it, which probably explains why he writes in plain English – even in the conversational style of writing that blogging is associated with, good clear English is rare to find. More importantly, he understood what all publishers have long known: good content is not just about the words. As Scoble highlighted long ago, one of the things that made Arrington such a popular writer is the simple use of images to break up the text.

No doubt Arrington's previous staff writers – the ones I am familiar with being Nik Cubrilovic, Duncan Riley and Marshall Kirkpatrick – made a big difference to TechCrunch's growth: Kirkpatrick's ground-breaking RSS and research skills for finding news, Cubrilovic's Arrington-style writing ability, and Riley's industry relationships that often broke news are how they made compelling content. However, Arrington quite uniquely stands out, and that's why, when he tried to take a break and focus on the business side, he was pulled back in to raise the quality. TechCrunch is Mike Arrington: it's been proven you can't separate the two (at least, yet).

4) The media dynamic.
As I recently argued, the mass media at its core is about playing a game, but in the context of Web 2.0 it is about understanding the dynamics of a marketplace. He had access to Venture Capitalists (VCs), as he was a corporate lawyer as well as an entrepreneur with experience to boot – access that other entrepreneurs quite simply didn't have.

He was able to successfully take advantage of the VCs' paranoia that they might miss the next Google or Facebook. They were literally desperate to hear about the next big thing. For them, Arrington was a deal-type lawyer who would review things in plain English and present them with pretty pictures. On the flip side, you had entrepreneurs dying to get in front of these VCs, as well as general exposure for their start-ups. When Arrington decided to put advertising on the blog, it was a natural progression: entrepreneurs wanted exposure to VCs, future employees, and buzz amongst their peers. People, on the other hand, are willing to consume this content because it's free market research for them – catering to an audience of both the investor and the entrepreneur. Powerful stuff? God yeah – that's the kind of captive audience that's addicted to crack cocaine.

To give you an idea of the impact, I was told by an entrepreneur whose company was profiled in that first six months that they got something like 30 VC calls and e-mails over a holiday period. After less than three weeks, they had Kleiner Perkins Caufield & Byers email to say "Hi, just another VC here. Can we meet next Thursday?". They had a list of meetings that kept them going for weeks. Through the DataPortability Project this year, I saw first-hand what exposure and support from TechCrunch could do, and suffice to say, it's impressive. We had VCs wanting to talk to us about data portability, even though we're non-profit!

This offline social networking is key to what ultimately became an online social media business. What's very telling is a comment left by Valley legend Dave Winer, a man Arrington repeatedly showed admiration for – and I am sure that relationship is what gave him a boost at the start. It reflects several things, but foremost that Arrington had a lot of goodwill from the existing heavyweights of the community, who regarded him as a leader of the industry. He connected the various participants in what is ultimately a marketplace. Forget about Edgeio – this was the making of a new media business that would show the dying mass media what the future looks like for their industry. TechCrunch became the channel of choice for so many people to get their voice heard, for competitive, strategic and ego reasons.

Concluding thoughts
TechCrunch started as a hobby and research project to test a bunch of the stuff he'd been reading about in the Web 2.0 space. After the crash, he had pretty much dropped out and watched a lot of college football – he needed a way to get back into it. Arrington probably knew he could write well, but I don't think he realised how much of an impact his ability could have. The use of images in content and the frequency of his posts made TechCrunch in the first six months; combined with offline social networking, the positioning as a champion of the Web 2.0 community, and exploiting the dynamics of a marketplace, that is what made him what he is. By the end of 2006, I don't think Edgeio got much of Arrington's attention at all – he'd been hooked by the excitement of writing, leading opinion and eventually the power that attracts people to positions of note and influence, whether in media, celebrity, business or politics.

This post only scratches the surface, as the Critical Success Factors in that first year do not give a full picture. Arrington's involvement with the presidential primaries process, his disruptive influence on DEMO through the TC40/50, the Crunchies, and even the people who keep trying to take him down add a further dimension to the TechCrunch story. He's a man with more haters than Murdoch, but that doesn't make him any less brilliant.

Arrington can get right up the nose of people with massive vested interests, and he loves to stir the pot – as the traditional press has long practised, controversy sells. Living in a massive rented house with little but a big dog, he can pretty much operate without fear. If it all exploded tomorrow, he'd probably have a beer and enjoy a good long holiday and another season of college football. That's what makes a journalist fearless, and that, combined with his obvious passion for the sector and the power he wields, makes for a pretty dynamic combo.

He's made no secret of his desire to be bigger than C|Net (without having to cop the overheads of their business model). Take out download.com, and I think it's safe to say he's reached that: maybe it's time he set his eyes on something a bit bigger. Although I doubt he needs to be told that – he's already making history along with a select few who, through raw talent, are pioneering "new media", ready to replace the financially bankrupt mass media as the influencers in our society.

The future of journalism and media

Last week, Deep Throat died. No, not the porn film, but the guy who was effectively in operational control of the FBI during the Nixon years. Mark Felt was in line to run the FBI from his number-three position, but was passed over by Nixon, who brought in an outsider. Whilst people often remark that the Russian government is controlled by the intelligence services, it's worth reflecting that the poster child of the free world has its own domestic intelligence service wielding too much power over its presidents. Nixon broke tradition for the first time in 48 years, doing something other presidents couldn't do: appointing an outsider to run the agency. And therein lay the roots of his downfall, in one of the most dramatic episodes in the mass media's history – a newspaper brought about the downfall of one of the most powerful men in the world.

Felt's identity had been protected for decades and was only made public three years ago, arguably because someone else was going to expose him and he beat them to it. As George Friedman wrote in an interesting article at Stratfor:

Journalists have celebrated the Post’s role in bringing down the president for a generation. Even after the revelation of Deep Throat’s identity in 2005, there was no serious soul-searching on the omission from the historical record. Without understanding the role played by Felt and the FBI in bringing Nixon down, Watergate cannot be understood completely. Woodward, Bernstein and Bradlee were willingly used by Felt to destroy Nixon. The three acknowledged a secret source, but they did not reveal that the secret source was in operational control of the FBI. They did not reveal that the FBI was passing on the fruits of surveillance of the White House. They did not reveal the genesis of the fall of Nixon. They accepted the accolades while withholding an extraordinarily important fact, elevating their own role in the episode while distorting the actual dynamic of Nixon’s fall.

Absent any widespread reconsideration of the Post’s actions during Watergate in the three years since Felt’s identity became known, the press in Washington continues to serve as a conduit for leaks of secret information. They publish this information while protecting the leakers, and therefore the leakers’ motives. Rather than being a venue for the neutral reporting of events, journalism thus becomes the arena in which political power plays are executed. What appears to be enterprising journalism is in fact a symbiotic relationship between journalists and government factions. It may be the best path journalists have for acquiring secrets, but it creates a very partial record of events — especially since the origin of a leak frequently is much more important to the public than the leak itself.

Now consider my own experiences as an amateur journalist.

After several years of failed media experiments, my university enterprise at changing student media (I ran it as a society, not a company, because I wanted to treat it as my "throw-away" startup – something to learn from without being tied down when I left) suddenly hit the gold mine: we created an online weekly "news digest" that literally became the talk of the campus for those in the university administration and the people surrounding it. An elite audience (not the 40,000-strong University of Sydney crowd), but the several hundred people who theoretically represented the campus and ran the multi-million dollar student infrastructure. Across the 23 editions we created that year, we literally had people hanging off their seats for the next edition: trying to predict the new URL, with e-mails quoting it sent out within hours of publishing.

[Image: The News Digest, October 29th 2004]

It was interesting to see how the product evolved during its first year. I started it thinking it would be a cool thing to have a summary of the news, once a week, in a "digest" format. The news was split arbitrarily into student, Australian and international. However, within a few editions, the student news segment was no longer just about the latest party but about confidential information – and it became the core reason people read it. In the second edition I wrote:

USYD UNION: Chris Farral has been hired as the Union's new General Manager. Farral has a highly reputable background in the ABC and various community-based groups. It has been a decade since the Union's last General Manager was appointed, and as such we hope Farral will bring a new flair and vitality to the position. Chris also happens to be the father of Honi Soit editor Sophie. Does this mean an end to critical analysis in Honi's reporting of the traditionally stale and bitter Union? No. That would require there to have been critical analysis in the first place. (EB)

Cheekily written but an innocent attempt to report news. Someone saw that, realised we had an audience, and in edition three we revealed:

SYDNEY UNIVERSITY UNION: Last week we reported that Chris Farrell was appointed the new General Manager of Sydney University’s student union. This week we can reveal that close to $50,000 was spent on external recruitment agencies to find Mr Farrell. Where was he hiding? The selection panel was evenly split for two candidates: Paul McJamett, the current Facilities Manager and previously expected next-in-line for the job, was supported by Vice-President Penny Crossley, Ex-President Ani Satchithanada, and Human Resources Manager Sandra Hardie. Meanwhile Farrell was supported by current President Toby Brennan, and the two senate reps (one of whom is new this year to the Board). Crossley is rumoured to have crossed the floor, and made the casting vote for Farrell. (Elias Bizannes)

And then we got threatened with a lawsuit (the first of many in my life, it would turn out) because we exposed some dirty secrets of a very politicised group of people. The reason I wanted to share that story was to show how we evolved from a "summary of the news" to a "tool for the politicians". For the rest of that year, I had people in all the different factions developing relationships with me and breaking news. Yes, I knew I was being played, for their own reasons. However, the using went both ways: I was getting access to confidential information from the insiders. Our little creation turned into a battleground for the local politicians – and so long as I could manage the players equally, I won just as much as they did, if not more.

Up until now, I never realised (or really thought about the fact) that my experience in student journalism was actually how the big players of the world operate. Forget the crap about what journalism is: at its core, it's about creating relationships with insiders and being part of a game in politics that, as a by-product (not a function), also creates accountability and order in society.

On the future of journalism
For as long as we have politics, we will have "journalists". In the tech industry, for example, the major blogs have become a tool for companies. I recently saw an example where I e-mailed the CEO of a prominent startup about an issue, and within days, two major blogs reposted some old news to build exposure for fixing the issue. This CEO used his media credits with the publishers of these blogs to help him with the issue. It's the same dynamic described above: people who create news and people with the audience. Heck – we have an entire industry created to manage those two groups: the Public Relations industry.

So the question of the future of journalism needs a second look. It's a career path being disrupted by the Internet, which is breaking traditional business models – and the new innovations will have their own bubble burst one day. We will find answers about the future wherever we can see the dynamics of news creators and news distributors in play, because that is where journalism will evolve.

Personally, I'm still trying to work out if the captive audience has now left the building. But my 2004 experiment in student media – targeting the same Gen Ys that don't read newspapers – is recent enough to prove the Internet hasn't broken this relationship yet. If you are looking to see what the future of journalism, and especially the media, is – you need to follow where the audience is. But a word of caution: don't measure the audience by its size, but by its type. One million people may read the blog TechCrunch, but it's the same one million early adopters around the world who are asked by their Luddite families to fix the video recorder. There are indirect readers of a publication too; they are just as influenced, and can be reached, through the direct reader. Even though Michael Arrington, who started TechCrunch, was a corporate lawyer, his successful blog has now done what the mass media used to do. That's something worth recognising as the core of his success, I think. Certainly, it validates that the future is just like the past – only slightly tweaked in its delivery.

So open it’s closed

The DataPortability Project has successfully promoted the concept of "data portability" in 2008. However, it's become too successful – people now make announcements that claim to be "data portability" but misleadingly are not. Further, the term "Open" has become the new black. But really, when people say they are open – are they?

Status update on the DataPortability Project & context
The DataPortability Project has now developed a strong, transparent governance model for making decisions, one that embeds a process for achieving outcomes. We have also formulated our vision, which forms the core DNA of the Project and allows us to align our efforts. Organisationally, we are currently working on a legal entity to protect our online community, and we are doing this whilst also ensuring we work with others in the industry, such as through the discussions we've had on the IDTBD proposal with Liberty Alliance, Identity Commons and others.

Our brand communications are nearly finalised (this time, legally vetted), and a refreshed website with a new blog has been rolled out. We've put out calls for positions and have already finalised our agreement with a new community manager. (Positions for our analyst roles are now open, if you are interested.)

We have a Health Care task force that's just started, looking to broaden our work into another sector of the economy. We also have a Service Provider Grid task force finalising its work, which, via an online interface and API, will allow people to query which open standards various entities use. We also have a task force that will provide sample EULA and ToS documents that encourage data portability and further our vision.
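
To make the Grid idea concrete, here is a minimal sketch of what querying it might look like once the online interface and API ship. Everything here is hypothetical – the endpoint, the response fields and the example provider are my own invention, not the task force's actual design:

```python
# Hypothetical sketch of querying the Service Provider Grid.
# The URL and JSON fields are invented for illustration only.
import json
from urllib.request import urlopen

def standards_supported(provider: str) -> list[str]:
    """Return the open standards a provider declares (hypothetical API)."""
    url = f"https://grid.example.org/api/providers/{provider}"
    with urlopen(url) as resp:
        record = json.load(resp)
    return record.get("open_standards", [])

# e.g. standards_supported("some-photo-site") might return
# ["OpenID", "OAuth", "APML"]
```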

The DataPortability vision states that people should be able to reuse their data. Traditionally, people have said this means "physically" porting their data amongst web services. Whilst this applies in some cases, it is also about access, as I recently argued.

So, to synchronise with our work on the EULA/ToS task force, I believe we need a technology equivalent, one that will also give additional value to our Service Provider Grid. This is because Open Standards comply with our vision, and we need to ensure we only support efforts that we believe are worthy.

Hi, I’m open
Open Standards have been a core value that the DataPortability Project has advocated since its founding, to the point where they have even been confused for its core mission (they're not). For us, they are an enabler – and it has always been in our interest to see all of them work together.

Standards are important because they allow interoperability. For people to be able to access their data from multiple systems, we require systems that can easily communicate with each other. Likewise, for people to get value out of any data they export from a system, they need to be able to import it elsewhere – and this can only occur if the data is structured in a way that is compatible with the other system.
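
A toy sketch of that point, with made-up field names: two services can round-trip a contact only because both sides have agreed on one structure in advance. Change the structure on one side and the import breaks, which is exactly why standards matter:

```python
# Toy illustration of interoperability via one agreed structure.
# The field names are made up; any shared, documented schema would do.
import json

contact = {"name": "Beata K", "email": "beata@example.com"}

def export_contact(c: dict) -> str:
    """Service A serialises to the agreed format."""
    return json.dumps(c)

def import_contact(payload: str) -> dict:
    """Service B parses it, relying on the same agreed structure."""
    return json.loads(payload)

assert import_contact(export_contact(contact)) == contact
```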

We advocate "Open" because we want to minimise the cost to businesses of complying with our vision. However, during 2008 the term "Open Standards" has been overused, to the point of abuse.

An open standard is a standard that is publicly available and has various rights of use associated with it. But really, what’s open?
– its availability?
– the authority controlling the standard?
– the decision making process over the standard?

Liberty Alliance defines it as:

– The costs for the use of the standard are low.
– The standard has been published.
– The standard is adopted on the basis of an open decision-making procedure.
– The intellectual property rights to the standard are vested in a not-for-profit organisation, which operates a completely free access policy.
– There are no constraints on the re-use of the standard.

That, I believe, perfectly encapsulates what an Open Standard should be. However, as someone who spends his days applying international accounting standards to what companies report in their financials, I can assure you: simply listing the criteria is only half the fun. Interpreting them is a whole debate in itself.

In my eye, most of these "open" efforts don't fit those criteria. To illustrate, I am going to shame myself, as I am a member of a workgroup that claims to be open: the APML workgroup. The group fails the open test (see the sketch after this list) because:
– it has a closed workgroup that makes the decisions, without a clearly defined decision-making procedure
– it does not have a non-profit behind it, with the copyright owned by a company (although it's made clear there is no intention to issue patents)
– it has no clear rights attached to it
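
To make that concrete, here is a minimal sketch that encodes the Liberty Alliance criteria as a checklist. The APML answers are my own reading of the list above, not any official assessment:

```python
# The five Liberty Alliance criteria as a checklist; the APML answers
# reflect my reading of the failures listed above, not an official view.
CRITERIA = [
    "low cost of use",
    "published",
    "open decision-making procedure",
    "IP vested in a not-for-profit with free access",
    "no constraints on re-use",
]

apml = {
    "low cost of use": True,
    "published": True,
    "open decision-making procedure": False,   # closed workgroup, no defined procedure
    "IP vested in a not-for-profit with free access": False,  # copyright held by a company
    "no constraints on re-use": False,         # no clear rights attached
}

def is_open(answers: dict) -> bool:
    return all(answers.get(c, False) for c in CRITERIA)

print(is_open(apml))  # False: "open" in name, failing the test
```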

So does that mean every standards group needs to create a legal entity for it to be open? Thankfully no – the Open Web Foundation (OWF) will solve this problem. Or will it? Whilst the decision-making process is "open" (you can read the mailing list where the discussion occurs), what about the way it selects members? It's dependent on being invited. That's Open with a big But.

How about OpenID (of which I am also a member) – that poster child for "Open Standards"? On the face of it, it fits the bill. But did you know OpenID contains other standards as part of it? As my friend and intellectual mentor Steve Greenberg said:

[Image: Steve Greenberg on OpenID and XRDS]

Now thankfully, XRDS fits the bill as a safe standard. Well, kind of. It links to another standard, XRI, which is alleged to be subject to patent claims. Well, sort of. Kinda. Oh God, let's not get into a discussion about this again. But don't give poor APML, the OWF or OpenID too much grief – I could indeed raise some nastier questions, especially at other groups. However, this isn't about shaming – rather, it's about raising questions.

The standards communities are fraught with politics, they are murky, and now they are creeping into the infrastructure of our online world. As a proponent of these "Open Standards", I think it's time we start looking at them with a more critical eye. Yes, I recognise all these questions I'm raising are fixable, but that's why I want to raise the point: because they are currently being swept under the carpet, outside of the traditional authorities like the W3C.

It's time some boundaries were set on what is effectively the brand of Open. It's also time the term was defined, because quite frankly, it's lost all meaning now. I've listed some criteria – but what we really need is consensus on what 'the' criteria for Open should be.

Social media and that whole “friend” thing

Social media is being killed not by fail whales, but by social awkwardness. Take Facebook as a simple example – is everyone you add there really your "friend"? What's a "friend", what "group" do I put them in… it's all very stressful. Bring social media services into the mix (sites where people collaborate, share content, discuss openly) and this stress becomes a real pain in the arse.

Twitter, for example: you get alerts when people post a message. What happens when there is someone you know in real life, whom you are friendly with, but whose Twitter stream is verbal diarrhoea? You force yourself to subscribe to them, because the social awkwardness matters more to you. Or Friendfeed, where people share links – it's even worse. I would even go so far as to say it makes the service unusable.

Enter Google Reader, the tool I use to feed my online information habit. It has a feature that looks at who e-mails you, and if those people use Google Reader and share links, their shared items come up alongside your other subscriptions. It's become such a valuable thing for me that I now focus my attention on clearing items there ahead of my other few dozen subscriptions. The reason being: it's the benefit of social media services without the social awkwardness.

Take Chris Saad, who was on my list. I didn't like the things he shared – movie reviews – so I hid him. Until now, he probably didn't even know – though a Google blog search will notify him (I expect him to find this and respond within six hours of posting – watch!). However, if I were to unsubscribe from him on something like Twitter, he'd work it out and say "dude, what's the deal?". Because an inherent value of social media is that it's collaborative communication; it's just that too much communication from too many people can become more noise than signal.

This new age of mass collaboration is a massive thing; I don't think even the early adopters driving it realise what's happening. It's the future of media – the fact that people I know and trust will suggest articles is the same human-powered recommendation the mass media has been doing, but so much more efficient, relevant and better.

And yet Google Reader, in its simplicity, does it best – it's almost like a secret. Mike Cannon-Brookes probably doesn't even realise I track his shared links, but I love them because he reads a lot of RSS feeds on diverse subjects that interest me. Likewise, Kate Carruthers has such a diverse reading list that I feel I can whittle down my RSS subscriptions (having too many stresses me) and just get fed the good stuff from her.

Am I showing up in their feeds? Who knows. And quite frankly, who cares. I know I do for Brady Brim De-Forest, because he's re-shared stuff I shared from sources I doubt he subscribes to (at least, not then). But that doubt doesn't change the fact that it doesn't matter. It's a secret club – I go about clicking the "share" button on good content I come across, thinking perhaps someone follows my items and would appreciate it. There's no feedback mechanism, other than seeing other people encouraged to do the same. And this is the first time I've ever discussed the club openly. I think it exists. Maybe it doesn't. But damn, it rocks.

The broken business model of newspapers

About six weeks ago I took a week off work to catch up on life and do some research and testing of market opportunities. I had several hypotheses I wanted to test and sent content to a closed group of friends and colleagues. My goal was to watch how they reacted to it, to understand how time-poor people consume information…and it was an absolutely fascinating experience.

As part of this exercise, I took on the task of reading all the major newspapers every day. It has literally been years since I've given them that much attention – I used to read them daily, but my Gen-Y ways got the better of me and I moved online. Unfortunately, I still can't seem to manage my online rituals to consume information efficiently (hence the research I did – it turns out other people are struggling as well). Something I realised in the course of my research is that whilst newspapers are losing circulation to the Internet, there is a lot they could do to really improve their competitiveness.

Too much detail
I tried reading the main newspapers word for word, and it took me hours. I don't care how much people whine that they love the newspaper experience – the reality is, the people who read the news also work full-time. They barely have time to take out five minutes in their day; the reason people don't read newspapers is the complexity of life. Personally, I work through lunch; and if I don't work, I am trying to do things in my life so as to make more time for myself after work. The weekend is literally the only time I have a chance to take a time-out to read the newspaper – but given that I neglect people in my personal life during the week, and the myriad other things I am involved in outside of work, I don't even get that chance. I rarely sit down – that's why I read the news on my phone on the train.

Newspapers contain quality content, there is no doubt about that. However, if you are going to compete in the news business, you need to understand your audience: news is all they want. If you read any news item in a newspaper, it will be garnished with extra facts, background information, and endless perspectives to colour the central issue. For example, an article about the central bank in Australia dropping its cash rate by 1% had several paragraphs talking about the exchange rate. Yes, it's valid to talk about it – but another half dozen articles in the related coverage did the same thing, and quite frankly, it's a separate issue. Another article, about the impact of the rate change on local business, mentions that 50 million pizzas get sold through Domino's Australia. Interesting stuff – but is it relevant to the news?

A newspaper should have a headline and report just that news. I'm not saying they shouldn't report the extra stuff – quite the contrary, I love the extra stuff – but they fail to recognise that the problem with reading a newspaper is that it takes so long, and so people can only skim it. Report just the news, and let consumers follow up on the website for extra detail through links provided.

Newspapers can’t compete in news any more
I was able to get copies of the major newspapers between 11pm and 12.40am – that is, the night before people usually buy them. Those newspapers had been delivered by truck after being printed in a factory far away, with thousands of copies loaded and distributed earlier that evening. Of course, there is a staggered distribution, with some newsagents getting them through the night and early morning (about 5am), but it's still the same newspaper at 5am as the one delivered at midnight to the high-profile newsagents.

The timeline for reporting news is a joke. The only hope a newspaper has of reporting news uniquely is if it breaks the story. By breaking news, it has a chance to take its time and frame the flow of information. But is that common? Most newspapers use shared agencies to pool their resources for stories like international news. Newspapers are being ignored by consumers because they get news quicker on the Internet. Why must these media executives continue to ignore the reality that an online news organisation is much more efficient at distributing breaking news? That's why newspapers existed in the past, but they no longer fill that role in society – newspapers need to get out of that role (or become a "news brand" that no longer treats print as the prime distribution channel for its news).

The incentives and structures can’t compete with this new world
Journalists, especially freelancers, get paid by word count.
Readers, especially time poor ones, skim through the newspaper.

See a problem there? It's called friction. In case you are a mass media executive, let me spell it out for you: the economics of information have now changed. When your industry was created several hundred years ago, information was scarce and people had plenty of time. Today, it is people's time (or "attention") that is scarce, whereas information is abundant. Tradition, through the "art" and skill of journalism, seems to drive the industry more than these fundamental economic shifts. As I remarked at the Future of Media Summit several months back, after hearing a mass media journalist rant on to justify her existence: "The skill of journalism? It's just as relevant as the skill of sword makers. It's nice, but I prefer a gun."

A business that does not respond to its market will die one day. The cost structures of the newspaper (and magazine) industries are sustaining a structure that no longer suits the market for which it supposedly caters. Instead, the industry relies purely on the generational factors of a Luddite population to sustain its circulation, trying to make money on a model that is now broken.

What's so exciting about this? The traditional media don't get it, in the same way a bible-basher won't accept there is no God despite being presented with logic suggesting otherwise. I've heard this from friends in the industry, from people I've met at conferences, and from observing my own clients who are part of a broader media group.

Denial by a legacy industry can be a beautiful thing for an entrepreneur.

Online advertising – a bubble

I just recorded a podcast with Duncan Riley and Bronwen Clune – two New Media innovators I greatly admire – to discuss the future of media. Unfortunately, the recording came out battered, and my normal analytical mind wasn't in gear to add fruitfully to the discussion.

So Dunc and Bron, here I go: why I think advertising on the Internet has a future that will repeat the property bubble that fuelled the world's economic expansion these last few decades. (Y'know – the one that just burst.)

Advertising has been broken by the Internet
Let's think about this from the big picture first: why do people advertise? It's to get an outcome. Ignoring elections and government campaigns, the regular market economy has advertising so companies can make money. Pure and simple. Whether it be "brand" advertising, which is a way of shaping perceptions for future sales, or straight-off-the-bat advertising pushing a product, the incentive for companies is to get a response. That response, ultimately, is cash coming out of your wallet.

Now let's jump into the time machine and think about companies in the 1970s and 1980s – before this "Internet" thing became mainstream. How could companies get exposure for their products? Through the media, of course. The mass media had captive audiences, and they were able to monetise this powerful position in society by forcing people to consume advertising alongside the servings of information they actually wanted.

It worked in the past, because that’s how the world worked. That is of course, until the Internet and the Web completely transformed our world.

Companies jumped on the web thinking this was simply an extension of the mass media but so much better. And they were right to some extent – it was much better. A bit too good actually, because it now exposed the weaknesses of the concept of advertising.

Take, for example, one of the undergraduate students who works at my firm. Apparently, this 19-year-old never watches television – but he is on top of all the main shows. He does this through peer-to-peer technology, downloading his favourite shows. I asked him why he does that, and he responded quickly: "because I can avoid the ads". What's happening with the Internet is that consumers can now control the experience they have when consuming information, unlike the past, when they marched in line according to the programming schedule. The audience is no longer captive.

The Internet did another thing: it made advertising more accountable. In the past, savvy agencies would 'segment' the population and pick the mass media outlets best able to connect with the 'target market'. To measure, print used circulation and readership – working out how many people bought the publication, plus some number out of an actuary's head for how many people read that same copy (through statistical techniques of assessing patients in doctors' surgeries, no doubt). Broadcasters, on the other hand, would randomly call households and, using statistical methods, estimate the number of people who tuned in.

Perhaps the fact I took statistics for my undergraduate degree is why I am so skeptical. Even my stats lecturer admitted it was bullshit – albeit in an ‘educated’ way. In relation to the mass media, the bigger issue was that this educated bullshit was not disaggregated. What I mean is that when a newspaper has a readership of 100,000 people, there is a massive assumption that if you advertise in that publication, you will actually reach them. You might have bought a newspaper to read one article your friend mentioned – and yet your act of purchase enables the newspaper to justify all the other pages to advertisers with a simplistic metric.

The Internet completely changed this, because we no longer rely on statistics but on actual data collected. In the past, advertisers would get a plane, fly over an Amazonian forest they picked, and pay to drop one million pamphlets, hoping that at least 50,000 of their target market would catch them and respond. Of course, indirect sales activity could indicate the effectiveness of a campaign, but in reality it was all a guess. Now with the Internet, a lot of the guesswork is no longer required – and quite frankly, advertising on the Net looks bad, but the reality is that the truth has now been set free.
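To make the contrast concrete, here’s a toy sketch – every number in it is made up for illustration – of how a print ‘reach’ figure gets manufactured from assumptions, while an online campaign simply counts what happened:

```python
# Hypothetical numbers only: how a print "reach" claim is built from
# assumptions, versus the event-level counts an online campaign logs.

# Print: one hard number (copies sold), multiplied out by guesses.
circulation = 100_000        # copies sold
pass_along_rate = 2.5        # assumed readers per copy (the actuary's guess)
page_exposure_rate = 0.3     # assumed share of readers who see your ad's page

claimed_reach = circulation * pass_along_rate * page_exposure_rate
print(f"Print reach claimed to advertisers: {claimed_reach:,.0f}")

# Online: the ad server logs each event directly, no estimation needed.
ad_impressions = 61_000      # times the ad was actually served
clicks = 420                 # times someone actually responded

print(f"Measured impressions: {ad_impressions:,}")
print(f"Measured clicks: {clicks} ({clicks / ad_impressions:.2%} click-through)")
```

The first number is a story built on multipliers; the last two are logged facts – which is exactly why the Net makes advertising look worse while actually telling the truth.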

This is looking at it from an accountability point of view, but from a practical view there are issues as well. The holy grail of advertising is targeting. The reason being, if you can target an ad better, you are more likely to get a conversion. However, there is a natural friction with targeted advertising, and it’s called privacy. As I’ve said before, privacy is the speed hump for the attention economy.

Advertising on the net technologically offers a great ability to target, with marketers licking their lips at the opportunity. However, this comes with a complete misunderstanding: technology may be an enabler, but culture and society will be a breaker. People do not want better targeting. The thought that some company profiles you scares the crap out of people. Yes, I’ve even convinced myself that when advertising is relevant, it’s useful – but this is looking at it after the fact. The problem with targeted advertising is that whilst it may run a world-record 100 metre dash, it might not get the chance to actually get off the starting blocks. Just ask Facebook if you don’t believe me.

The structural ways the Internet has ruined advertising
The Internet is great for measuring – but there are a few too many measures. The lack of a consistent measurement system creates several problems. More significant is the fact that different types of Internet services compete based on whatever model works best for them. For example, advertisers love pay-per-action because they can see the follow-through and so get a better return on their investment. This works with contextual advertising of the kind Google uses – it’s actually in Google’s interest for you to click off their pages.

Contrast that with video sites, where a person is engaged with the content for ten minutes. An advertiser can’t easily compare ten minutes of engagement on a video site with click-actions on contextual advertising sites. What this creates is a vacuum, where the ad dollars will bias towards those that offer a better likelihood of making a sale. After all, why would you care about capturing someone’s attention for ten minutes, when you can simply pay for someone clicking on a link which is directly tied to an e-commerce sale on your site?

This creates a real problem, because it’s not a level playing field when competing for the advertising. Certain types of services do better under different models. Banner advertising will die, not just because people are realising the usability issues surrounding banner blindness, or the fact that banner advertising is simply a copy-and-paste model of the mass media days, but because competing advertising models that link better to final sales will become more popular. When we hear about the great growth rates in online advertising, don’t forget to dig a little deeper, because the real growth comes from search advertising, which makes up about half of that.
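To see why the dollars drift that way, here’s a minimal sketch – all rates are assumptions for illustration, not industry figures – comparing how a pay-per-impression banner and a pay-per-click search ad monetise the same thousand impressions:

```python
# Hypothetical rates: why advertisers bias towards models with a
# measurable link to the sale, starving attention-based inventory.

impressions = 1_000

# Banner model: pay per thousand views, no proof anyone responded.
banner_cpm = 5.00                 # assumed $ per 1,000 impressions
banner_cost = banner_cpm * impressions / 1_000

# Search model: pay only when someone clicks through to your shop.
click_through_rate = 0.02         # assumed 2% of impressions click
cost_per_click = 0.50             # assumed $ per click
clicks = impressions * click_through_rate
search_cost = clicks * cost_per_click

print(f"Banner: ${banner_cost:.2f} spent, responses unknown")
print(f"Search: ${search_cost:.2f} spent, {clicks:.0f} measurable responses")
# The search spend can be traced through to actual sales, so that's
# where the budget goes - even if the banner captured more attention.
```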

There’s another structural problem with the Internet: there’s too much competition. In the mass media days, the media had an established relationship as "the" information distribution outlets of society. With the Internet, anyone can create a blog and become their own publisher. Additionally, the Internet is seeing growth not just in New Media ventures, but in utility and commerce ventures as well. Theoretically it’s the same advertising pie (ignoring the long tail effect for a second, where small advertisers can now participate), but with a lot more "distributors". This creates a fragmentation, where advertising dollars are worn thin. It’s for this reason that the larger internet services tend to be the ones that manage to get by. Even on the face of it, though, you know there’s a longer-term problem for even the bigger players when you operate in such an environment.

It’s not just other Internet services to worry about, however: it’s the advertisers themselves. In a world of information democratised by search engines judging quality content, you as a publisher are on the same footing as the company paying for the ads. Why would Nike want to advertise on your website, when it can just improve its own search engine ranking? Companies can now create a more direct relationship with their customers and future customers – and they no longer need an intermediary (like the media) to facilitate that relationship. That’s a Big Deal. It’s not just search though – the VRM Project is doing exactly that: creating a system that will facilitate those relationships.

Concluding thoughts
I could just as easily put an argument in favour of online advertising, don’t get me wrong – there is still a lot of growth to occur. But what I want to highlight is that, taking a step back and looking at the facts, there is something seriously wrong with this model. If advertisers no longer need an intermediary to facilitate a relationship; if advertisers are chasing the industry down the tail of measurable ads that better link to a final sale; if the entire industry is inconsistent and competing with itself both in inventory and in methods, in an infinite battle; and if consumers are no longer captive to the content distribution experience – it makes you question the model, doesn’t it?

According to Nielsen over a year ago, about a third of all U.S. online advertising dollars spent in July came from the financial sector – with mortgage and credit reporting firms representing five of the top ten advertisers. Together, those companies spent nearly $200 million on search, display and other Web advertising, meaning that a slowdown would degrade fairly significant annual revenue streams. The writing was on the wall that long ago; analysts are only now calling these troubled times for online advertising.

Just like we knew a year ago about the credit crunch, before a drastic turn of events turned it into the most dramatic economic shift in our collective memories, so too will the advertising bubble burst. It may be years – perhaps decades – before this happens. However, one thing is for sure: the Internet has not only ruined the newspaper, music and traditional software industries, but it’s also ruining the world of advertising. Just as newspapers, music and software are evolving into new models whose end state we still can’t see, so too will advertising be transformed.

Mr Online Advertising and Ms Media Company relying on it as a revenue model – you are growing on the basis of some very shaky foundations.

Liako is everywhere…but not here

Life’s been busy, and this blog has been neglected. Not a bad thing – a bit of life-living, work-smacking, exposure to new experiences, and active osmosis from the things I am involved in is what generates the original perspectives I try to create on this blog.

However to my subscribers (Hi Dad!), let this post make it up to you with some content I’ve created elsewhere.

You already know about the first podcast I did with the Perth baroness Bronwen Clune and the only guy I know who can pull off a mullet, Mike Cannon-Brookes of Atlassian. Here’s a recap of some other episodes I’ve done:

  • Episode two: ex-PwC boy Matthew Macfarlane talks to current PwC boy (me) and Bronwen about his new role as partner of the newly created investment fund Yuuwa Capital. He told us what he’s looking for in startups, as he’s about to spend $40 million on innovative ones!
  • Episode three: marketing guru Steve Sammartino tells us about building a business and his current startup Rentoid.com
  • Episode four: experienced entrepreneur Martin Hosking shares lessons and insights, whilst talking about his social commerce art service Red Bubble
  • Episode five: “oh-my-God-that-dude-from-TV!” Mark Pesce joins us in discussing that filthy government filter to censor the Internet
  • Episode six: ex-Fairfax Media strategist Rob Antulov tells us about 3eep – a social networking solution for the amateur and semi-professional sports world.

I’ve also put my data portability hat on beyond mailing list arguments and helped out a new social media service called SNOBS – a Social Network for Opportunistic Businesswomen – with a beginner’s guide to RSS. You might see me contribute there in future, because I love seeing people pioneer New Media and think Carlee Potter is doing an awesome job – so go support her!

Over and out – regular scheduling to resume after this…

The Rudd Filter

This poor blog of mine has been neglected. So let me catch up on some of the things I’ve been doing.

Below is a letter I sent to every senator of the Australian parliament several weeks ago. Two key groups responded: the Greens (one of the parties holding the balance of power), who were encouraged by my letter, and independent Senator Nick Xenophon (one of the two key senators who will have an impact), whose office responded in a very positive way.

It relates to the Government’s attempt to censor the Internet for Australians.

Subject: The Rudd Filter

Attention: Senators of the Australian parliament

With all due respect, I believe my elected representatives as well as my fellow Australians misunderstand the issue of Internet censorship. Below I offer my perspective, which I hope can re-position the debate with a more complete understanding of the issues.

Background

The policy of the Australian Labor Party on its Internet filter was a reaction to the Howard Government’s family-based approach, which Labor said was a failure. The then Leader of the Opposition, Kim Beazley, announced in March 2006 (Internet Archive) that under Labor “all Internet Service Providers will be required to offer a filtered ‘clean feed’ Internet service to all households, and to schools and other public internet points accessible by kids.” The same press release states: “Through an opt-out system, adults who still want to view currently legal content would advise their Internet Service Provider (ISP) that they want to opt out of the ‘clean feed’, and would then face the same regulations which currently apply.”

During the 2007 Federal election campaign, led by Kevin Rudd, Labor pledged that “a Rudd Labor Government will require ISPs to offer a ‘clean feed’ Internet service to all homes, schools and public Internet points accessible by children, such as public libraries. Labor’s ISP policy will prevent Australian children from accessing any content that has been identified as prohibited by ACMA, including sites such as those containing child pornography and X-rated material.”

Following the election, the Minister for Broadband, Communications and the Digital Economy, Senator Stephen Conroy, clarified in December 2007 that anyone wanting uncensored access to the Internet would have to opt out of the service.

In October 2008, the policy had another subtle yet dramatic shift. When examined by a Senate Estimates committee, Senator Conroy stated that “we are looking at two tiers – mandatory of illegal material and an option for families to get a clean feed service if they wish.” Further, Conroy mentioned “We would be enforcing the existing laws. If investigated material is found to be prohibited content then ACMA may order it to be taken down if it is hosted in Australia. They are the existing laws at the moment.”

The interpretation of this – which has motivated this paper as well as sparked outrage by Australians nation-wide – is that all Internet connection points in Australia will be subjected to the filter, with only the option to opt out of the family tier but not the tier that classifies ‘illegal material’. While the term “mandatory” has been used as part of the policy in the past, it has always been used in the context of making it mandatory for ISPs to offer such a service. It was never used in the context of it being mandatory for Australians on the Internet to use it.

Not only is this a departure from the Rudd government’s election pledge, but there is little evidence to suggest it is truly representative of the wishes of the Australian community. Senator Conroy has himself presented evidence of the previous government’s NetAlert policy falling far below expectations. According to Conroy, 1.4 million families were expected to download the filter, but far fewer actually did – the estimated end usage is just 30,000, despite a $22 million advertising campaign. The attempt by this government to pursue this policy, therefore, is for its own ideological or political benefit. The Australian people never gave it a mandate, nor is there evidence to indicate majority support for pursuing this agenda. Further, the government trials to date have shown the technology to be ineffective.

On the 27th of October, some 9,000 people had signed a petition opposing a government filter. At the time of writing this letter on 2 November, that has climbed to 13,655 people. The government’s moves are being closely watched by the community, and activities are being planned to respond to the government should this policy continue in its current direction.

I write this to describe the impact such a policy will have if it goes ahead, to educate the government and the public.

Impacts on Australia

Context

The approach of the government to filtering is one-dimensional and does not take into account the converged world of the Internet. The Internet has transformed – and will continue to transform – our world. It has become a utility, forming the backbone of our economy and communications. Fast and wide-spread access to the Internet has been recognised globally as a priority policy for political and business leaders of the world.

The Internet typically allows three broad types of activities. The first is facilitating the exchange of goods and services. The Internet has become a means of creating a more efficient marketplace, and is well known to have driven demand in offline selling as well, as it creates better-informed consumers who reach richer purchasing decisions. Moreover, online marketplaces can exist with considerably less overhead – creating a more efficient marketplace than in the physical world, and enabling stronger niche markets through greater connections between buyers and sellers.

The second activity is communications. This has enabled a New Media, or Hypermedia, of many-to-many communications, with people now having a new way to communicate and propagate information. The core value of the World Wide Web can be seen in its founding purpose: created at CERN, it was meant to be a hypertext implementation that would allow better knowledge sharing among its global network of scientists. It was so transformative that the role of the media has forever changed. For example, newspapers that thrived as businesses in the Industrial Age now face challenges to their business models, as younger generations prefer to access their information through Internet services – objectively a more effective way to do so.
A third activity is utility. This is a growing area of the Internet, where it is creating new industries and better ways of doing things, now that we have a global community of people connected to share information. The traditional software industry is being changed into a service model where, instead of selling a licence, companies offer an annual subscription to use the software via the browser as the platform (as opposed to a PC’s Windows installation as the platform). Cloud computing is a trend pioneered by Google, and now an area of innovation by other major Internet companies like Amazon and Microsoft, that will allow people to have their data portable and accessible anywhere in the world. These are disruptive trends that will further embed the Internet into our world.

The Internet will be unnecessarily restricted

All three of the broad activities described above will be affected by a filter.
The impact on markets is that analysis-based filters will likely block access to sites due to the descriptions used in selling items. Suggestions by Senators have been that hardcore and fetish pornography be blocked – content that may be illegal for minors to view, but certainly not illegal for consenting adults. Legitimate businesses that use the web as their shopfront (such as adultshop.com.au) will be restricted from serving the general population in their pursuit of recreational activities. The filter’s restriction on information for Australians is thus a restriction on trade, and will impact individuals and their freedoms in their personal lives.
The impact on communications is large. The Internet has created a new form of media called “social media”. Weblogs, wikis, micro-blogging services like Twitter, forums like the Australian start-up business Tangler, and other forms of social media are likely to have their content – and thus their service – restricted. The free commentary of individuals on these services will lead to censorship and a restriction in the ability to use the services. “User generated content” is considered a central tenet of web 2.0, yet applying Industrial Era content controls to these businesses clashes with people’s public speech, as two concepts that were previously distinct in that era have now merged.
Furthermore, legitimate information services will be blocked by analysis-based filtering due to language that triggers the filter. As noted in the ACMA report, “the filters performed significantly better when blocking pornography and other adult content but performed less well when blocking other types of content”. As a case in point, a site containing the word “breast” would be filtered despite having legitimate value in providing breast cancer awareness.
Utility services could be adversely affected. The increasing trend of computing ‘in the cloud’ means that our computing infrastructure will require an efficient and open Internet. A filter will do nothing but disrupt this, with little ability to achieve the policy goal of preventing illegal material. As consumers and businesses move to the cloud, critical functions will rely on it, and any threat to the distribution and realisation of potential speeds will be a burden on the economy.
Common to all three classes above is the degradation of speeds and access. The ACMA report claims that all six filters tested scored an 88% effectiveness rate in terms of blocking the content the government hoped would be blocked. It also claims that over-blocking of acceptable content was 8% for all filters tested, with network degradation not nearly as big a problem during these tests as it was during previous trials, when performance degradation ranged from 75–98%. In this latest test, the ACMA said degradation was down, but still varied widely – from a low of just 2% for one product to a high of 87% for another. The fact that there is a degradation of even 0.1% is, in my eyes, a major concern. The Government has recognised, in the legislation it bases its regulatory authority on, that “whilst it takes seriously its responsibility to provide an effective regime to address the publication of illegal and offensive material online, it wishes to ensure that regulation does not place onerous or unjustifiable burdens on industry and inhibit the development of the online economy.”

The compliance costs alone will hinder the online economy. ISPs will need to constantly maintain the latest filtering technologies, businesses will need to monitor user generated content to ensure their web services are not automatically filtered, and administrative delays to unblock legal sites will hurt profitability – and for some start-up businesses, may even kill them.

And that’s just compliance; let’s not forget the actual impact on users. As Crikey has reported (Internet filters a success, if success = failure), even the best filter has a false-positive rate of 3% under ideal lab conditions. Mark Newton (the network engineer whom Senator Conroy’s office recently attacked) reckons that for a medium-sized ISP that’s 3,000 incorrect blocks every second. Another maths-heavy analysis says that every time the filter blocks something, there’s an 80% chance it was wrong.
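That last figure is just the base-rate fallacy at work. A back-of-envelope sketch – the 1% prevalence is my assumption for illustration, while the other two rates echo the ACMA-style figures quoted above – shows how a filter with a small false-positive rate still gets most of its blocks wrong:

```python
# Base-rate arithmetic (illustrative assumptions): even a filter with a
# low false-positive rate mostly blocks legitimate content, because
# legitimate content vastly outnumbers prohibited content.

prevalence = 0.01       # assumed: 1% of requested pages are prohibited
true_positive = 0.88    # filter catches 88% of prohibited pages
false_positive = 0.03   # filter wrongly blocks 3% of legitimate pages

blocked_bad = prevalence * true_positive            # correct blocks
blocked_good = (1 - prevalence) * false_positive    # wrongful blocks

share_wrong = blocked_good / (blocked_bad + blocked_good)
print(f"Share of all blocks that hit legitimate content: {share_wrong:.0%}")
# Prints roughly 77% - in the same ballpark as the "80% chance it was
# wrong" analysis cited above.
```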

The policy goal will not be met & this approach will be costly

The Labor party’s election policy document states that “Labor’s ISP policy will prevent Australian children from accessing any content that has been identified as prohibited by ACMA, including sites such as those containing child pornography and X-rated material.” Other than being a useful propaganda device, this misses the point: to my knowledge, children and people generally don’t actively seek out child pornography, and a filter does nothing to restrict the offline, real-world networks through which paedophiles operate.

What the government seems to misunderstand is that a filter regime will prove inadequate in achieving any of this, due to the reality of how information gets distributed on the Internet.
Composition of Internet traffic

Source: http://www.ipoque.com/userfiles/file/internet_study_2007.pdf
Peer-to-peer networking (P2P), a legal technology that has also proved impossible to control or filter, accounts for the majority of Internet traffic, with figures of between 48% in the Middle East and 80% in Eastern Europe. As noted earlier, the ACMA trials have confirmed that although filters can block P2P outright, they cannot actually analyse the content as being illegal. This is because P2P technologies like torrents are completely decentralised. Individual torrents cannot be identified and, along with encryption technologies, this makes such content impossible to filter or even identify.
However, whether blocked or filtered, this ignores the fact that access can be bypassed by individuals who wish to do so. Tor is a network of virtual tunnels used by people under authoritarian governments around the world – you can install the free software on a USB stick and have it working immediately. It is a sophisticated technology that allows people to bypass restrictions. More significantly, I wish to highlight that some Tor servers have been used for illegal purposes, including child pornography and P2P sharing of copyrighted files using the BitTorrent protocol. In September 2006, German authorities seized data centre equipment running Tor software during a child pornography crackdown, yet the Tor network managed to reassemble itself with no impact on its operation. This technology is but one of many options available for people to overcome an ISP-level filter.
For a filtering approach to be appropriate, it will require not just automated analysis-based technology, but human effort to maintain the censorship of the content. An expatriate Australian in China claims that a staff of 30,000 are employed by the Golden Shield Project (the official name for the Great Firewall) to select what to block, on top of whatever algorithm is used to automatically block sites. With legitimate online activities being blocked by automated software, a beefed-up ACMA will be required to handle requests from the public to investigate and unblock legitimate websites. Given the number of false positives proven in the ACMA trials, this is not to be taken lightly, and could cost hundreds of millions of dollars in direct taxpayer money and billions in opportunity cost for the online economy.

Inappropriate government regulation

The government’s approach to regulating the Internet has been one-dimensional, treating content online as the same type that was produced by the mass media in the Industrial Era. The Information Age recognises content not as a one-to-many broadcast, but as individuals communicating. Applying these previous-era provisions is actually a restraint beyond traditional publishing.
Regulation of the Internet is provided under the Broadcasting Services Amendment (Online Services) Act 1999 (Commonwealth). Schedules Five and Seven of the amendment claim the goal is to:

  • Provide a means of addressing complaints about certain Internet content
  • Restrict access to certain Internet content that is likely to cause offence to a reasonable adult
  • Protect children from exposure to Internet content that is unsuitable for them

Mandatorily restricting access can disrupt freedom of expression under Article 19 of the International Covenant on Civil and Political Rights, and disrupt fair trade of services under the Trade Practices Act.

It is wrong for the government to mandate restricted access; instead it should allow consumers the option to participate in a system that protects them. What a “reasonable adult” would find offensive is too subjective for a faceless authority to regulate, over the ability of an individual adult to determine it for themselves.

The Internet is not just content in the communications sense, but also in the market and utility senses. Restricting access to services – which may be done inappropriately due to proven weaknesses in filtering technology – would result in:

  • reduced consumer information about goods and services, as consumers will have less information due to sites being incorrectly blocked
  • violation of one of the WTO’s cardinal principles – the “national treatment” principle, which requires that imported goods and services be treated the same as those produced locally
  • preventing or hindering competition under the interpretation of section 4G of the Trade Practices Act. This means online businesses will be disadvantaged relative to physical-world shops, even where they create more accountability by allowing consumer discussion on forums that may trigger the filter through consumers’ free expression

Solution: an opt-in ISP filter that is optional for Australians

Senator Conroy’s crusade in the name of child pornography is not the issue. The issue, in addition to the points raised above, is that mandatorily restricting access to information is by nature a political process. If the Australian Family Association writes an article criticising homosexuals, is this grounds to make the content illegal to access and communicate because it incites discrimination? Perhaps the Catholic Church should have its website banned because of its stance on homosexuality?

If the Liberals win the next election because the Rudd government was voted out for pushing ahead with this filtering policy, and the Coalition repeats recent history by controlling both houses of parliament – what will stop them from banning access to the Labor party’s website?

Of course, these examples sound far-fetched, but they also sounded far-fetched in another vibrant democracy called the Weimar Republic. What I wish to highlight is that pushing ahead with this approach to regulating the Internet sets a dangerous precedent that cannot be downplayed. Australians should have the ability to access the Internet with government warnings and guidance on content that may cause offence to the reasonable person. The government should also prosecute people creating and distributing material like child pornography, which society universally agrees is a bad thing. But to mandate restricted access to information on the Internet, based on expensive, imperfect technology that can be routed around, is a Brave New World that will not be tolerated by the broader electorate once they realise their individual freedoms are being restricted.

This system of ISP filtering should not be mandatory for all Australians to use. Neither should it be an opt-out system by default. Individuals should have the right to opt into a system like this if there are children using the Internet connection, or if a household wishes to censor its Internet experience. To force all Australians to experience the Internet only under Government sanction is a mistake of the highest order. It cannot be assured technologically, and it poses a genuine threat to our democracy.

If the Ministry under Senator Conroy does not understand my concerns – responding with a template answer six months later, and clearly showing inadequate industry consultation despite my request – perhaps Chairman Rudd can step in. I recognise that with the looming financial recession, we need to look for ways to prop up our export markets. However, developing in-house expertise at restricting the population, which would set a precedent for the rest of the Western world, is something that’s funny only in a nervous-laughter kind of way.

Like many others in the industry, I wish to help the government develop a solution that protects children. But ultimately, I hope our elected representatives can understand the importance of this potential policy. I also hope they are aware of the anger that exists over the government’s actions to date – and whilst democracy can be slow to act, when it hits, it hits hard.
Kind regards,
Elias Bizannes
—-
Elias Bizannes works for a professional services firm and is a Chartered Accountant. He is a champion of the Australian Internet industry through the Silicon Beach Australia community and also currently serves as Vice-Chair of the DataPortability Project. The opinions in this letter are his own as an individual (and not his employer’s), with perspective developed in consultation with the Australian industry.
This letter may be redistributed freely. HTML version and PDF version.

You don’t – nor need to – own your data

One of the biggest questions the DataPortability Project has grappled with (and where the entire industry is not at consensus) is a fairly basic one with profound consequences: who owns your data? Well, I think I have an answer to the question now, which I’ve cross-validated across multiple domains. Given we live in the Information Age, this matters in every respect.

So who owns “your data”? Not you. Nor the other guy. Nor the government, nor the MicroGooHoo corporate monolith. Actually, no one does. And if anyone does, it doesn’t matter.

People like to conflate the concept of property ownership with that of data ownership. I mean, it’s you, right? You own your house, so surely you own your e-mail address, your name, your date-of-birth records, your identity. However, when you go into the details, at a conceptual level it doesn’t make sense.

Ownership of data
First of all, let’s define property ownership: “the ability to deny use of an asset by another entity”. The reason you can claim to own your house is because you can deny someone else access to your property. Most of us have a fence to separate our property from the public space; others, like the hillbillies, sit in their rocking chair with a shotgun ready to fire. Either way, it’s well understood when someone owns something, and if you trespass, the dogs will chase after you.


The characteristics of ownership can be described as follows:
1) You have legal title, recognised by your legal jurisdiction, that says you own it.
2) You have the ability to enforce your right of ownership in your legal jurisdiction.
3) You can get benefits from the property.

The third point is key. When people cry out “I own my data”, that’s essentially the reason (when you take the Neanderthal, emotionally-driven reasoning out of the equation). Where we get a little lost, though, is when we define those benefits. It could be said that you want to be able to control your data so that you can use it somewhere else, and so you can make sure someone else doesn’t use it in a way that causes you harm.

Whilst that might sound like ownership to you, that’s where the house of cards collapses. Unless you can prove the ability to deny use by another entity, you do not have ownership. It’s a trap, because data is not like a physical good, which cannot be easily copied. It’s like a butterfly locked in a safe: the moment you open the safe, you can say goodbye. If data can only satisfy the ownership definition when you hide it from the world, that means that when it’s public to the world, you no longer own it. And that sucks, because data by nature is used for public consumption. But what if you could get the same benefits of ownership – or rather, receive benefits of usage and regulate usage – without actually ‘owning’ it?

Property and data – same same, but different
Both property and data are assets. They create value for those who use them. But that’s where the similarities end.

Property gains value through scarcity: the more unique, the more valuable. Data, on the other hand, gains value through reuse. The more derivative works come off it, the more information is generated (as information is simply data connected with other data). The more information, the more knowledge, and the more value created – working its way along the information value chain. If data is isolated and not reused, it has little value. For example, if a company has a piece of data but is not allowed to ever use it, there is no value in it.

Data gains value through use, and additional value through reuse and derivative creations. If no one reads this blog, it’s a waste of space; if thousands of people read it, its value increases – as these ideas are disseminated. To give one perspective on this: when people create their own posts reusing the data I’ve created, I generate value through them linking back to me. No linking, no value realised. Of course, I get a lot more value out of it beyond PageRank juice, but hopefully you realise that if you “steal” my content (with at least some acknowledgement of me, the person), you are actually doing me a favour.

Ignore the above!
Talking about all this ownership stuff doesn’t actually matter; it’s not ownership that we want. Let’s take a step back, and look at this from a broader, philosophical view.

Property ownership is based on the concept that you get value from holding something for an extended period of time. But in an age of rapid change, do you still get value from that? Let’s say we lose the Holy War for people being able to ‘own’ their data. Facebook – you win – you now ‘own’ me. It owns the data about me – my identity, it would appear, is under the control of Facebook – it now owns the fact that “I am in a relationship”. The Holy War might have been lost, but I don’t care, because what Facebook owns is crap: six months ago I was in a relationship, but now I’m single and haven’t updated my status. The value for Facebook is not in owning me at a point in time: it’s in having access to me all the time – because one way it translates that data into value is advertising, and targeting ads is pointless if you have the wrong information to base your targeting on. Probably the only data in my profile that can be static is birth date and gender – but with some tampering and cosmetics, even those can be altered now!


Think about this point raised by Luk Vervenne (in response to my above thoughts on the VRM mailing list) by considering employability. A lot of your personal information is actually generated by interactions with third parties, such as the education institution you received your degree from. So do I own the fact that I have a Bachelor of Commerce from the University of Sydney? No I don’t, as that brand and its authenticity belong to the university. What I do have, however, is access and usage rights to it. Last time I checked, I didn’t own the university, but if someone quizzes me on my academic record, there’s a hotline ready to confirm it – and once validated, I get the recognition that translates into a benefit for me.

Our economy is now transitioning from a goods-producing economy to a service-performing and experience-generating one. It’s hard for us to imagine this new world, as our conceptual understanding is built on selling, buying and otherwise trading goods that ultimately end up owned by someone. But this market era of the exchange of goods is making way for “networks”, and the concept of owning property will diminish in importance, as our new world will now place value on access.

This is a broader shift. As a young man building his life, I cannot afford to buy a house in Sydney with its overinflated prices. But that’s fine – I am comfortable renting. All I want is ‘access’ to the property, not the legal title, which quite frankly would be a bad investment decision even aside from the current economic crisis. I did manage to buy myself a car, but I am cursing the fact that I wasted my money on that debt, which could have gone to more productive means – I could instead have just paid for access to public transport and taxis when I needed them. In other words, we now have an economy where you do not need to own something to get the value: you just need access.

That’s not to say property ownership is a dead concept – rather, it’s become less important. When we consider history, the concept of the masses “owning” property was foreign anyway – there was a class system, with a small but influential aristocracy that owned the land and serfs who worked on it. “Ownership”, really, is a newly ‘established’ concept in our world – and it’s now ready to go out of vogue again. We’ve reached a level of sophistication in our society where we no longer need the security of ownership to get the benefits in our life – and the property owners we get our benefits from may appear to wield power, but they also carry a lot of financial risk, government accountability and public scrutiny (unlike history’s aristocracy).


Take a look at companies and how they outsource a lot of their functions (or even simplify their businesses’ value-activities). Every single client of mine – multi-million dollar businesses at that – pays rent. They don’t own the office space they are in; to get the benefits, they simply need access, which they get through rental. “Owning” the property is not part of the core value of the business. Whilst security is needed – because not having ownership can put you at the mercy of the landlord – this doesn’t mean you can’t contract for protection, as my clients do in their lease agreements.

To bring it back to the topic: access to your data is what matters – but it also needs to be carefully understood. For example, open access to your health records might not be a good thing; rather, what you want is to control who has access to that data. Similarly, whilst no one might own your data, what you do have is the right to demand guidelines and principles – like those we are trying to develop at the DataPortability Project – on how “your” data can be used. Certainly, the various governmental privacy and data protection legislation around the world does exactly that: it governs how companies can use personally identifiable data.

Incomplete thoughts, but I hope I’ve made you think. I know I’m still thinking.
