Tag Archive for 'information'


Rethinking copyright and its scope creep

Modern copyright has been shaped by an array of older legal rights recognised throughout history, and that legacy may now be harming our future. Copyright traditionally protected the moral rights of the author who created a work, the economic rights of a benefactor who paid to have a copy made, the property rights of the individual owner of a copy, and a sovereign’s right to censor and to regulate the printing industry. I’m not going to say copyright is dead, but given the evolution of media it may now be irrelevant. As I’ve argued before, access to information is more valuable than ownership, and modern copyright law doesn’t fully recognise this.

Rethinking media
Media, content, and other related words all describe human expression. When you read a book or a newspaper article, another human being is expressing something to you. That expression, in turn, generates an experience for you – such as (but not limited to) interpretation, entertainment, and reflection. You can’t “capture” media and lock it in a jar – you can only remember it. No one can own the individual words in a body of text, because no one owns a language. But the ability to provoke emotions in other people is powerful – outright scary, if you truly grasp it.

We are now seeing a dramatic evolution in the media landscape. The disruptive influence of what was called “user generated content” yesterday and is called “social media” today is making us rethink the media in our world. The thing is, it’s the same as any media: human beings expressing themselves. The only difference now is that the means of expression has changed – less so technologically and more so in the actual process – to one that is many-to-many.

Rethinking value
Social media is about discussions rather than broadcasts. It’s about the producer and consumer of the information interacting. Everyone has an opportunity to respond to my blog post here, and sometimes I respond. Other times I don’t, but that doesn’t matter, because the comment still offers me – and future readers of the post – an alternative perspective.

What we are seeing now is a move away from the mass production of media and a growth in the mass socialisation of media. It’s not about how many people you can push your content to, but how engaging your content can be. The economic models supporting content are still evolving, but it’s engagement that pays the bills. An engaged reader is more receptive to advertising (increasingly important now that advertising performance is more accountable), and for the subscription model of content, engagement is what retains a customer. To retain a readership, you need compelling content.

However, this old media adage about compelling content is changing. The socialisation of content isn’t so much about pushing content as discussing it. It’s about building a community of people who are passionate and interested: an expertise network where value comes from being in the same place as other like-minded people. An example of this can be seen at Read Write Web, which has created an exclusive community for community managers. GigaOM, another innovator in the media space, has done the same thing with premium content that runs in parallel with its popular (free) blogs. Compelling content is now about building compelling communities. Copyright works well for static objects – but not so much for people interacting freely.

Rethinking copyright
This is a complex subject, and I am by no means being definitive here. I simply want to ask: what really is the value of copyright? If content is an expression that generates an experience in a human, then specific types of expression need specific treatment. And if compelling communities are a new form of engaging with content, locking down the content produced may actually hurt value. News, as another form of content, derives its value from immediacy: it is useless a day later, because the news is constantly evolving. Copyright adds no value there, because ‘time’, not long-term protection, is where news derives its value.

Should copyright be dead? No, that’s not what I am arguing – rather, it needs a rethink of what exactly it’s protecting, with its scope reduced. Like a parent protecting a child: protect them too much and they may never actually experience life, be happy, or learn to recognise future dangers – the things a parent is ultimately responsible for. The remarks I’ve heard from newspaper executives in the last month, about enforcing their copyright more aggressively, may not be the right solution. Protection of assets is valuable only when it enhances future value.

Let’s be careful we don’t lose sight of what we are protecting, and why.

The Internet, Iran, and Ubiquity

What’s happening right now in Iran is absolutely remarkable. It validates the impact of ubiquitous computing and ubiquitous connectivity to the Internet, and their potential to disrupt even the most tightly controlled police state in the world.

The public’s rejection of the election is creating chaos, finally giving the people a reason to revolt against a regime they’ve detested for decades. This situation has the potential to escalate into bigger things – or it may well settle down – but regardless, it gives us a real insight into the future: how these new technologies are transforming everything, and disgracing the mass media in the process.

What I saw in Iran
This blog of mine actually started as a travel blog, and one of the countries I wrote about was Iran. In my analysis of that beautiful country, I hypothesised that a revolution was brewing, based on societal discontent. What had prevented that revolution from ever occurring was the lack of a legitimate trigger – one that couldn’t be shut down by Islamic propaganda.
An interesting thing I noted was that the official messaging of the country was anti-American and very over the top – no surprises there. But when you talked to people one-on-one, you realised Iranians actually respect Americans – it was the establishment they detested. The regime had a tight grip on society, using Islam as a means of control in much the same way the Bush Administration used patriotism and the War on Terror to do what it wanted and silence criticism. By controlling the media (amongst other things), it essentially kept society from revolting.

How ubiquity has changed that
In my previously linked article, I talk about the rising trend of a ubiquitous world – one where connectivity, computing, and data are omnipresent. Separately, we are seeing a rising trend toward a “network” operating model for Internet businesses, as demonstrated by Facebook’s CEO recently saying he imagines Facebook’s future not to be a destination site.
The implication is that people are now connected, can share information and communicate without restraint, and – better yet – can do so in a decentralised manner. Using Twitter to share information with the world doesn’t rely on visiting Twitter.com – it’s simply a text message away. It’s hard to censor something that’s not centralised. And it’s even harder to control and influence a population that no longer needs the mass media for information, but can communicate directly with each other on a mass scale.

Take note
Social media is having a remarkable impact. Not only are we getting better quality reporting of events (with the mass media entirely failing us), but it’s enabling mass collaboration on a grand scale – one where even a government risks being toppled. I’m still waiting to hear from my Iranian friends to get their insight into the situation, but if there’s one lesson we should take note of, it’s that the Internet is transforming the world. Not only are industries being impacted, but society in the broadest sense. If a few picture-capable phones, a short-messaging communication service, and some patchy wireless Internet can rattle the most authoritarian state in the world, then all I can say is I’m gobsmacked at what else is on the horizon.

The future is webiquitous

In the half century since the Internet was created – and the 20 years since the web was invented – a lot has changed. More recently, we’ve seen the Dot Com bubble and the web2.0 craze drive new innovations forward. But as I’ve postulated before, those eras are now over. So what’s next?

Well, ubiquity of course.


Huh?
Let’s work backwards with some questions to help you understand.

Understanding why we now need ubiquity, and what exactly that means, requires us to think about two other questions. The changes brought by the Internet are not one big hit, but a gradual evolution. For example, “open” has existed in the Internet’s culture since its first days: it wasn’t a web2.0 invention. But “openness” was only recognised by the masses in web2.0 as a new way of doing things. This “open” culture had profound consequences: it led to the mass socialisation of content, and recognition of the power that is social media.

As the Internet’s permeation of our society continues, it will generate externalities that affect us (and that are not predictable). But the core trend is identifiable, and it is what I hope to explain in this post. By understanding this core trend, we can comfortably understand where things are heading.

So let’s look at these two questions:
1) What is the longer term trend that things like “open” are a part of?
2) What are aspects of this trend yet to be fully developed?

The longer term trend
The explanation can be found in why the Internet and the web were created in the first place. The short answer: interoperability and connectivity. The long answer – keep reading.

Without going deep into the history: the Internet was created to connect computers. Computers were machines that enabled better computation (hence the name). As they had better storage and querying capacities than humans, they became the way the US government (and large corporations) stored information. Clusters of these computers were created (called networks), and ARPANET was built by the US government as a way of making connections between these computers and networks. More specifically, in the event of a nuclear war, if one of these computing networks were eliminated, the decentralised design of the Internet would allow the US defense network to rebound easily (an important design decision to remember).

The web has a related but slightly different reason for its creation. Hypertext was conceptualised in the 1960s by a philosopher and a scientist, as a way of harnessing computers to better connect human knowledge. These men were partly inspired by an essay written in the 1940s called “As We May Think”, in which the chief scientist of the United States set out a vision whereby all knowledge could be stored on neatly categorised microfilm (the information storage technology of the time), and any piece of knowledge could be retrieved in moments. Several decades of experimentation in hypertext followed, until finally a renegade scientist created the World Wide Web. He broke some of the conventions of what the ideal hypertext system was supposed to look like, and created a functional system that solved his problem: connecting distributed scientists around the world, and their knowledge.

So, as is clearly evident, computers have been used as a way of storing and manipulating information. The Internet was invented to connect computing systems around the world, and the Web did the same thing for the people who use this network. Two parallel innovations (the Internet and hypertext) used a common modern marvel (the computer) to connect the communication and information-sharing abilities of machines and humans alike. With machines and the information they process, it’s called interoperability. With humans, it’s called being connected.


But before we move on, it’s worth noting that the inventor of the Web has now spent a decade advocating his complete vision: a Semantic Web. What’s that? Well, if we consider the Web as the sum of human knowledge accessible by humans, the Semantic Web is about allowing computers to understand what the humans are reading. Not quite a Terminator scenario – rather, computers become even more useful to humans (currently, computers are completely dependent on humans for interpretation).

What aspects of the trend haven’t happened yet?
Borders that previously restrained us have been broken down. The Internet and hypertext are enabling connectivity between humans, and interoperability between the computer systems that store information. Computers, in turn, are enabling humans to process tasks that could not be done before. If the longer term trend is connecting and bridging systems, then the demon to be demolished is the borders that create division.

So with that in mind, we can now ask another question: what borders exist that need to be broken down? It all comes down to “access” – more specifically, access to data, access to connectivity, and access to computing. Which brings us back to the word ubiquity: we now need to strive to bridge the gap in those three domains and make them omnipresent. Information accessible from anywhere, by anyone.

Let’s now look at this in a bit more detail
Ubiquitous data: We need a world where data can travel without borders. We need to unlock all the data in our world, and have it accessible by all where possible. Connecting data is how we create information: the more data at our hands, the more information we can generate. Data needs to break free – detached from the published form and atomised for reuse.

Ubiquitous connectivity: If the Internet is a global network that connects the world, we need to ensure we can connect to that network regardless of where we are. The value of our interconnected world can only reach its optimum if we can connect wherever, with whatever. At home on your laptop, at work on your desktop, on the streets with your mobile phone. No matter where you are, you should be able to connect to the Internet.

Ubiquitous computing: Computers need to become a direct tool available for our mind to use. They need to become an extension of ourselves, as a “sixth sense”. The border that prevents this, is the non-assimilation of computing into our lives (and bodies!). Information processing needs to become thoroughly integrated into everyday objects and activities.

Examples of when we have ubiquity
My good friend Andrew Aho showed me something over the weekend that he bought at the local office supplies shop: a special pen that, well, did everything.
– He wrote something on paper and, through a USB connection, could transfer an exact replica to his computer in his original handwriting.
– He could search on his computer to find a word in his digitised handwritten notes.
– He could pass the pen over a pre-written bit of text, and it would replay the sounds in the room from when he wrote that word (keyed to the position on the paper, not the time sequence).
– Passing the pen over a word also allowed it to be translated into several other languages.
– He could punch out a query on a drawn-out calculator to compute a function.
– And a lot more. The company has now created an open API on top of its platform, meaning anyone can create additional features that build on this technology. The opportunity is equivalent to when the Web was created as a platform and anyone was allowed to build on top of it.

The pen wasn’t all that bulky; it did all this simply with an attached camera, a microphone, and special dotted paper that let the pen recognise its position. Imagine if this pen could connect to the Internet, with access to any data and cloud computing resources for more advanced queries.

Now watch this TED video to the end, which shows the power when we allow computers to be our sixth sense. Let your imagination run wild as you watch it – and while it does, just think about ubiquitous data, connectivity, and computation which are the pillars for such a future.

Trends right now enabling ubiquity
So, from the 10,000-foot view I’ve just shown you, let’s now zoom down and look at trends occurring right now – trends heading toward this ever growing force that is ubiquity.

From the data standpoint – where I believe this next wave of innovation will centre – we need to see two things: syntactic interoperability and semantic interoperability. Syntactic interoperability is when two or more systems can communicate with each other – for example, Facebook being able to communicate with MySpace (say, with people sending messages to each other across the two). Semantic interoperability is the ability to automatically interpret the exchanged information meaningfully – so when I Google “Paris Hilton”, the search engine understands that I want a hotel in a European city, not a celebrity.
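To make the distinction concrete, here is a toy sketch in Python. Everything here is invented for illustration – the function names, the message format, and the tiny vocabulary are not real Facebook or MySpace APIs – but it shows the two layers: agreeing on a format (syntactic) versus agreeing on meaning (semantic).

```python
import json

# Syntactic interoperability: two systems agree on a common format (here, JSON),
# so a message produced by one can be parsed by the other.
def myspace_export(message):
    # Hypothetical MySpace-style export: serialise to the agreed format.
    return json.dumps({"from": message["sender"], "body": message["text"]})

def facebook_import(payload):
    # Hypothetical Facebook-style import: parse the agreed format back.
    data = json.loads(payload)
    return {"sender": data["from"], "text": data["body"]}

# Semantic interoperability: the systems also agree on what terms *mean*.
# A tiny vocabulary map disambiguates a query the way a semantic search might.
VOCABULARY = {
    "travel": {"type": "hotel", "city": "Paris"},   # "Paris" + "Hilton" = a hotel
    "gossip": {"type": "celebrity"},                # "Paris Hilton" = a person
}

def interpret(query, context):
    # Pick a meaning based on context rather than raw keyword matching.
    return VOCABULARY.get(context, VOCABULARY["gossip"])
```

Syntactic interoperability is the easy half; the semantic half is exactly what Linked Data formats try to standardise so the mapping isn't hand-built per site.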

The Semantic Web and Linked Data form one key trend enabling this: interlinking all the information out there in a way that makes it reusable by humans and machines alike. Data portability is another such trend (the one I focus my efforts on), where the industry is fast moving to let us take our identities, media, and other metadata wherever we want.

As Chris Messina recently said:

…the whole point of working on open building blocks for the social web is much bigger than just creating more social networks: our challenge is to build technologies that enhance the network and serve people so that they in turn can go and contribute to building better and richer societies…I can think of few other endeavors that might result in more lasting and widespread benefits than making the raw materials of human connection and knowledge sharing a basic and fundamental property of the web.

The DiSo Project that Chris leads is an umbrella effort spearheading a series of technologies that will lay the infrastructure for when social networking becomes “like air“, as Charlene Li has been saying for the last two years.

One of the most popular pieces of open source software (Drupal) has for a while now been innovating on the data side rather than on other features. More recently, Google announced it will cater better for websites that mark up their content in more structured formats, giving people an economic incentive to participate in the Semantic Web. APIs (ways for external entities to access a website’s data and technology) are now flourishing, providing a new basis for companies to innovate and enabling mashups (newspapers being one example).
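A mashup in this sense is just joining one provider's data against another's to make a new information product. Here is a minimal sketch; the event payload, the field names, and the transit data are all invented, and in practice the JSON would arrive over HTTP from a real API rather than being inlined.

```python
import json

# A hypothetical response from an events API (invented for illustration;
# in reality you would fetch this over HTTP from the provider's endpoint).
events_api_response = json.dumps([
    {"title": "Jazz Night", "suburb": "Newtown"},
    {"title": "Film Festival", "suburb": "Glebe"},
])

# A second, independent data source: how to reach each suburb.
transit_data = {"Newtown": "train", "Glebe": "bus"}

def mashup(api_payload, transit):
    # Join the two data sets into a derivative information product:
    # events annotated with how to get there.
    events = json.loads(api_payload)
    return [dict(e, transport=transit.get(e["suburb"], "unknown"))
            for e in events]
```

Neither source on its own tells you "events reachable by train"; the value appears only in the join, which is the economic argument for exposing an API at all.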

As for computing and connectivity: these are more hardware issues, and will see innovation at a different pace and scale to the data domain. Cloud computing has long been understood as a long-term shift, one that aligns with the move to ubiquitous computing. Theoretically, all you will need is an Internet connection, and with the cloud you will have computing resources at your disposal.


On the connectivity side, we are seeing governments around the world make broadband access a top priority (like the Australian government’s recent proposal to create a national broadband network unlike anything else in the world). The more evident trend in this area, however, will be the mobile phone – which, since the iPhone, has completely transformed our perception of what can be done with a portable computing device. The mobile phone, connected to the cloud carrying all that data, unleashes the power that is ubiquity.

And then?
Along this journey, we are going to see some unintended impacts, like how social media is currently replacing the need for a mass media. Spin-off trends will occur that no reasonable person can predict, and externalities (both positive and negative) will emerge as we drive towards this longer term trend of everything and everyone being connected. (The latest example being the real time web and the social distribution network powering it.)


It’s going to challenge conventions in our society and the way we go about our lives – something we can’t predict but should expect. For now, however, the trend points to how we get ubiquity. Once we reach it, we can ask what happens after – that is, what happens when everything is connected. But until then, we’ve got to work out how to get everything connected in the first place.

Information age companies losing out due to industrial age thinking

Last weekend, I participated in Startup Camp Sydney II, a straight 24-hour hackathon to build and launch a product (in my case, Activity Horizon). Ross Dawson has written a good post about the camp if you are interested.

It’s been a great experience (still going – send us your feedback!) and I’ve learned a lot. But something really struck me that I think should be shared: how little has changed since the last start-up camp, and how stupid companies are – but first, some background.

The product we launched is a service that helps people discover events and activities they would be interested in. We have a lot of thoughts on how to grow this – and I know for a fact that finding new things to do in a complex city environment, as time-poor adults, is a genuine issue people often complain about. As Mick Liubinskas said, “Matching events with motivation is one of the Holy Grails of online businesses”, and we’re building tools that let people filter events with minimal effort.


So, as “entrepreneurs” looking to create value in an artificial petri dish, we recognised that existing events services didn’t do enough to filter events with user experience in mind. By pulling data from other websites, we have created a derivative product that creates value without necessarily hurting anyone. Our value proposition comes from a user experience built on simplicity (with more in the works once the core technology is set up), and we are more than happy to access data from other providers in the information value chain on whatever terms they want.

The problem is that they have no terms! The concept of an API is one of the core aspects of the mashup world we live in, firmly entrenched in the web’s culture and ecosystem. It’s something I believe is a dramatic way forward for the evolution of the news media, and it’s a complementary trend building towards the vision of the Semantic Web. However, nearly all the data we used didn’t come through an API that could regulate how we use it; instead, we’ve had to scrape it.

Scraping is a method of telling a computer how data is structured on a web page, and then pulling (‘scraping’) data out of that template presentation – a bit like highlighting words with a certain characteristic in a word processor document and pulling everything you highlighted into your own database. Scraping has a negative connotation, as people are perceived to be stealing content and re-using it as their own. The truth of the matter is that additional value gets generated when people ‘steal’ information products: data is an object, and connecting it with other objects – those relationships – is what creates information. The potential to create unique relationships between different data sets means no two derivative information products are the same.
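To make the idea concrete, here is a minimal sketch of a scraper using only Python’s standard library. The page structure (an `<h2 class="event">` per event) is invented for illustration and is not the markup of any real ticketing site; the point is simply that once you describe the template, the data falls out.

```python
from html.parser import HTMLParser

class EventScraper(HTMLParser):
    """Pull event titles out of a page whose template marks them up as
    <h2 class="event">...</h2> (a hypothetical structure for illustration)."""

    def __init__(self):
        super().__init__()
        self.in_event = False   # are we inside an event heading right now?
        self.events = []        # titles collected so far

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "event") in attrs:
            self.in_event = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_event = False

    def handle_data(self, data):
        if self.in_event:
            self.events.append(data.strip())

# A toy page following the invented template.
page = '<h2 class="event">Jazz Night</h2><p>8pm</p><h2 class="event">Film Festival</h2>'
scraper = EventScraper()
scraper.feed(page)
# scraper.events now holds the extracted titles.
```

The fragility is obvious: change the template and the scraper breaks, which is exactly why a stable API would serve both sides better.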

So why are companies stupid
Take, for example, a site that sells tickets and lists information about them. If you are not versed in the economics of data portability (which we are trying to articulate with the DataPortability Project), you’d think that if Activity Horizon is scraping ‘their’ data, that’s a bad thing, because we are stealing their value.

WRONG!

Their revenue model is based on people buying tickets through their site. So by reusing their data and creating new information products, we are actually creating more traffic, more demand, more potential sales. By opening up their data silo, they actually open up more revenue for themselves. And by opening it up, they not only control the derivatives better but reduce the overall cost of business for everyone.

Let’s use another example: a site that aggregates ticket listings but doesn’t actually sell them (i.e., their revenue model is attention, not transactions). Activity Horizon could appear to be a competitor, right? Not really – because we are pulling information from them (just as they pull information from the ticket providers). We’ve extracted and created a derivative product that brings a potential audience to their own website. It’s repurposing information in another way, for a different audience.

The business case for open data is something I could spend hours talking about. But it all boils down to this: data is not like physical objects. Scarcity does not determine the value of data the way it does with physical goods. Value from data and information comes through reuse. The easier you make it for others to reuse your data, the more success you will have.

Facebook needs to be more like the Byzantines

Chris Saad wrote a good post on the DataPortability Project’s (DPP) blog about how the web works on a peering model. Something we do at the DPP is closely monitor the market’s evolution, and having done this actively for a year now as a formal organisation, I feel we are at the cusp of much more exciting times to come. These are my thoughts on why Facebook needs to alter its strategy to stay ahead of the game – and, by implication, so does everyone else trying to innovate in this sphere.

Let’s start by describing the assertion that owning data is useless, but access is priceless.

It’s a bold statement, and you might need to do some background reading (link above) to understand my point of view. Once you do, though, all the debates about who “owns” what data suddenly become irrelevant. Access, just like ownership, is possible thanks to a sophisticated society that recognises people’s rights. Our society has now got to the point where ownership matters less for the realisation of value, because we have things in place to do more through access.

Accessonomics: where access drives value
Let’s use an example to illustrate the point with data. I am on Facebook, MySpace, Bebo, hi5, Orkut, and dozens of other social networking sites that have a profile of me. Now, what happens if all of those social networking sites have different profiles of me? One from when I was single, one from when I was in a relationship, another engaged, and another “it’s complicated”.

If they are all different, who is correct? The profile I last updated, of course. With the exception of your birthdate, any data about you will change in the future. There is nothing ‘fixed’ about a person, and “owning” a snapshot of them at a particular point in time is exactly that: a snapshot. Our interests change, as do our closest friends and our careers.

Recognising the time dimension of information means that unless a company has the most recent data about you, it is effectively carrying dead weight and giving itself a false sense of security (and a false valuation). Facebook’s $3 billion market value is not the data it holds in June 2008, but the people it has access to – of whom that data is the latest version. Sure, it can sell advertisers specific information to target ads, but “single” in May is not as valuable as “single” in November (and even less valuable than single in both May and November, but not the months in between).
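The freshness point can be sketched in a few lines: the authoritative view of a person is just the most recently updated snapshot, and every older copy a site "owns" is dead weight. The sites, dates, and statuses below are hypothetical.

```python
from datetime import date

# Hypothetical snapshots of one person's profile held by different sites.
snapshots = [
    {"site": "MySpace",  "updated": date(2008, 5, 1),  "status": "single"},
    {"site": "Facebook", "updated": date(2008, 11, 3), "status": "it's complicated"},
    {"site": "Bebo",     "updated": date(2008, 2, 14), "status": "in a relationship"},
]

def current_profile(snaps):
    # The correct profile is simply the most recently updated one;
    # everything older is a stale snapshot, not an asset.
    return max(snaps, key=lambda s: s["updated"])
```

This is why access beats ownership: whoever the user keeps updating holds the only snapshot worth anything.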

Facebook Connect and the peering network model
Facebook’s announcement in the last month has been nothing short of brilliant (and when it’s the CEO announcing, it clearly flags a strategic move for their future, not just some web developer fun). What they have created with the Facebook Connect service is shaking up the industry, as they dance with Google following the announcement of OpenSocial in November 2007. What they are doing is creating a permanent relationship with the user, following them around the web in their activities. This network business model means constant access to the user. But it’s a mistake to equate access with ownership: ownership is a permanent state, while access depends on a positive relationship – and relationships are not permanent. When something is not permanent, you need strategies to ensure relevance.

When explaining data portability to people, I often use the example of data being like money. Storing your data in a bank gives you better security for that data (as opposed to keeping it under your mattress) and a better ability to reuse it (i.e., with a theoretical debit card you could, for example, use data about your friends to filter content on a third-party site). The Facebook Connect model very much follows this line of thinking: you securely store your data in one place and then roam the web with the ability to tap into it.

However, there is a problem with this: data isn’t the same as money. Money is valuable because of scarcity in the supply system, whilst data becomes valuable through reuse and the creation of derivatives. We generate new information by connecting different types of data together – which is, by definition, how information gets generated. Our information economy lets alchemists thrive: people who generate value through their creativity in meshing different (data) objects.

Thinking about the information value chain, Facebook would benefit more from being connected to other hubs than from having all activity go through it. Instead of data being stored in one bank, it’s actually stored across multiple banks (as a person, it probably scares you to store all your personal information with one company: you’d split it if you could). What you want as a company is access to this secure EFT ecosystem. Facebook can access data exchanged between other sites because it is party to the same secure transfer system, even if it had nothing to do with generating that information.

Facebook needs to stop being a central node and instead become a linked-up node. The node with the most relationships with other sites and hubs wins, because the more data at your hands, the more potential you have of connecting dots to create unique information.
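The "most linked-up node wins" claim can be sketched as a simple degree count over a graph of data-sharing relationships. The sites and edges below are hypothetical, chosen only to show the mechanic: whoever participates in the most peering relationships can mesh the most data.

```python
from collections import Counter

# A hypothetical web of data-sharing relationships (undirected edges).
links = [
    ("Facebook", "Twitter"),
    ("Facebook", "Flickr"),
    ("Twitter", "Flickr"),
    ("Twitter", "FriendFeed"),
]

def best_connected(edges):
    # Count how many relationships each node participates in (its degree);
    # the best-connected node has the most data within reach.
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return degree.most_common(1)[0][0]
```

In this toy graph the winner is the node with the most peering links, not the one everything is forced to route through – which is exactly the argument against playing central node.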

Facebook needs to think like the Byzantines
A lot more can be said on this, and I’m sure the testosterone within Facebook thinks it can colonise the web. What I will conclude with is that you can’t fight the inevitable, and this EFT system is effectively being built around Facebook with OpenSocial. The networked peer model will win out – the short history and inherent nature of the Internet prove that. Don’t mistake short term success (i.e., five years, in the context of the Internet) for the long term trends.

There was once a time when people thought MySpace was unstoppable. Microsoft unbeatable. IBM unbreakable. No empire in the history of the world has lasted forever. What we can do, however, is learn the lessons of those that lasted longer than most – like the forgotten Byzantine empire.

Also known as the eastern Roman empire, it’s been given a separate name by historians because it outlived its western counterpart by over 1000 years. How did they last that long? Through diplomacy, and by avoiding war as much as possible. Rather than buying weapons, they bought friends, and ensured they had relationships with those around them who had a self-interest in keeping the Byzantines in power.

Facebook needs to ensure it stays relevant in the entire ecosystem and doesn’t become a barrier. It is a cashed-up business in growth mode with the potential to be the next Google in terms of impact – but let’s put the emphasis on “potential”. Facebook has competitors that are cash-flow positive, have billions in the bank, and most importantly of all are united in their goals. Facebook can’t afford to fight a colonial war to capture people’s identities, and it shouldn’t think it needs to.

Trying to be the central node of the entire ecosystem by implementing proprietary methods is an expensive approach that will ultimately be beaten one day. Building a peered ecosystem where you can access all the data, however, is very powerful. Facebook just needs access: with its sheer resources it can create value by generating innovative information products. That, not lock-in, is what will keep it out in front.

Just because it’s a decentralised system doesn’t mean you can’t rule it. If all the kids on a track are wearing the same special shoes, that doesn’t mean everyone runs the same time in the 100-metre dash. They call the patriarch of Constantinople “first among equals” even to this day – an important figure who worked in parallel to the emperor’s authority during the empire’s reign. And it’s no coincidence that the Byzantines outlived nearly all empires known to date – an empire which, arguably, still exists in spirit.

Facebook’s not going to change its strategy, because its short-term success and perception of dominance blind it. But that doesn’t mean the rest of us need to make the same mistake. Pick your fights: realise that the business strategy of being a central node will create more heartache than gain.

It may sound counter-intuitive, but less control can actually mean more benefit. The value comes not from having everyone walk through your door, but from you having the keys to everyone else’s door. Follow the peered model: the entity with the most linkages to other data nodes will win.
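The “most linkages wins” idea can be put in concrete terms with a toy graph. This is only an illustrative sketch – the node names and link sets are hypothetical – but it shows why a node with many peer links can synthesise more information than a silo that forces everything through its own door:

```python
# Toy illustration: in a peered data ecosystem, the node with the most
# direct links to other data sources can draw on the most information,
# even though no traffic is forced through it. All names are hypothetical.

peer_links = {
    "facebook": {"flickr", "twitter", "blogs", "opensocial_hubs", "email"},
    "walled_garden": {"own_apps"},  # central-node strategy: few outward links
    "flickr": {"facebook", "blogs"},
}

def reach(node, links):
    """Data sources a node can draw on: itself plus every peer it links to."""
    return {node} | links.get(node, set())

# The peered node sees far more of the ecosystem than the silo does.
print(len(reach("facebook", peer_links)))       # sources visible to the peer
print(len(reach("walled_garden", peer_links)))  # sources visible to the silo
```

The point of the sketch: “winning” here is a function of how many doors you hold keys to, not of how many people walk through yours.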

Let’s kill the password anti-pattern before the next web cycle

Authenticity required: password?

I’ve just posted an explanation on the DataPortability Blog about delegated authentication and the open standard OAuth. I give poor Twitter a bit of attention by calling them irresponsible (which their password anti-pattern is – a generic example being sites that force people to give up the passwords to their e-mail accounts to get functionality like finding your friends on a social network), but with their leadership they will be a pin-up example we can promote going forward, well placed in this rapidly evolving data portability world. I thought the news would have calmed down by now, but new issues have come to light, further highlighting the importance of getting security right.

With the death of Web 2.0, the next wave of growth for the Web (beyond ‘faster, better, cheaper’ tech for our existing communications infrastructure) will come from innovation on the data side. Heaven forbid another blanket term for this next period – which I believe will rise when Facebook starts monetising and preparing for an IPO – but all existing trends outside of devices (mobile) and visual rendering (3D Internet) seem to point to this. That is, innovation in machine-to-machine technologies, as opposed to the people-to-machine and people-to-people technologies we have seen to date. The others have been done and are being refined; machine-to-machine is so big it’s a whole new world that we’ve barely scratched the surface of.

But enough about that, because this isn’t a post on the future – it’s on the present, and how pathetic current practices are. I caught up with Carlee Potter yesterday – she’s a young Old Media veteran who, inspired by the Huffington Post, wants to pioneer New Media (go support her!). Following on from our discussion, she writes in her post that she is pressured by her friends to add applications on services like Facebook. We started talking about this massive cultural issue that is now being exported to the mainstream, where people freely give up personal information – not just through apps accessing it under Facebook’s control, but by handing over their passwords to add friends.

I came to the realisation of how pathetic this password anti-pattern is. I’m very aware that I don’t like various social networking sites asking me for private information like my e-mail password, but I had forgotten how accustomed I’ve become to this situation that’s forced on us (ie, giving up our e-mail account password to get functionality).

Arguments that ‘make it ok’ claim these situations are low risk (ie, communication tools). I completely disagree: reputational risk is not easily measured (unlike financial risk, which has money to quantify it), but that’s not the point. It contributes to a broader cultural acceptance that if we have some trust in a service, we will give it personal information (like passwords to other services) to get increased utility out of it. That is just wrong, and whilst the data portability vision is about getting access to your data from other services, it needs to be done whilst respecting your own privacy and that of others.
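The difference between the anti-pattern and delegated authentication can be sketched in a few lines. This is a toy model, not real OAuth (the class, scope string, and token format are all illustrative): with the anti-pattern, the third party holds your actual password and can do anything; with delegation, it holds a revocable token limited to one capability:

```python
# Toy model contrasting the password anti-pattern with delegated tokens.
# Not a real OAuth implementation; all names here are illustrative.

class EmailAccount:
    def __init__(self, password):
        self._password = password
        self.contacts = ["alice@example.com", "bob@example.com"]
        self._tokens = {}  # token -> granted scope

    # Anti-pattern: a third party holding this password has FULL control.
    def login(self, password):
        return self if password == self._password else None

    # Delegated auth: issue a revocable token limited to one capability.
    def grant_token(self, scope):
        token = f"tok-{len(self._tokens)}"
        self._tokens[token] = scope
        return token

    def read_contacts(self, token):
        if self._tokens.get(token) == "read:contacts":
            return list(self.contacts)
        raise PermissionError("token revoked or wrong scope")

    def revoke(self, token):
        self._tokens.pop(token, None)

account = EmailAccount("hunter2")
token = account.grant_token("read:contacts")
print(account.read_contacts(token))  # the site gets your contacts, nothing more
account.revoke(token)                # access ends without a password change
```

The key property is the last line: you can cut off one service’s access without resetting your password and breaking everything else, which is exactly what the anti-pattern makes impossible.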

Inspired by Chris Messina, I would like to see us all agree on making 2009 the year we kill the password anti-pattern. As we now sow the seeds for a new evolution of the web and Internet services, let’s ensure we get things like this right. In a data web where everything is interoperable, the password anti-pattern is not a culture that bodes well for us.

They say privacy is dead. Well, it only is if we let it die – and this is certainly one simple thing we can do to control how our personal information gets used by others. So here’s to 2009: the year we seek the eradication of the password anti-pattern virus!

Thank you 2008, you finally gave New Media a name

Earlier this year Stephen Collins and Chris Saad flew to Sydney for the Future of Media summit, and in front of me had heated discussions about why nobody had invited them to the Social Media Club in Australia. As they were yapping away, I thought to myself: what the hell are they going on about? It turns out things I used to call "blogs", "comments" or "wikis" were now "social media". Flickr, Delicious, YouTube? No longer Web 2.0 innovations, but social media. Bulletin boards that you would dial up on your 14.4 kbps modem? Social media. Online forums discussing fetishes? Social media. Everything was now bloody social media (or Social Media: tools are lowercase, the concept uppercase), and along with Dare Obasanjo I was asleep for the two hours when it suddenly happened.

social media bandwagon

However, it turns out this term has been around for a lot longer than we give it credit for. It hung low for a while, and then, as some significant events occurred this year, it became a perfect fit for describing what was happening. It’s a term I’ve been waiting to emerge for years, as I knew the term "new media" would mature one day.

Ladies and gentlemen, welcome to our new world and the way of defining it: 2008 is when the Information Age’s "social media" finally displaced the Industrial Era’s "mass media". Below I document how, when and why.

Origins of the term and its evolution
The executive producer of the Demo conference, Chris Shipley, is said to have coined the term during a keynote at the Demofall conference on 20 September 2005. As she said in her speech:

Ironically, perhaps, there is one other trend that would at first blush seem at odds with this movement toward individuality, and that is the counter movement toward sociability.

As one reporter pointed out to me the other day, the program book you have before you uses the term “social” a half-dozen times or more to describe software, computing, applications, networks and media.

I’m not surprised that as individuals are empowered by their communications and information environments, that we leverage that power to reach out to other people. In fact, blogs are as much about individual voice as they are about a community of readers.

The term gained greater currency over the next year, as Shipley used it in her work and various influencers like Steve Rubel popularised it. Brainjam, which popularised unconferences, first had the idea of a Social Media Club around the time of Shipley’s keynote and eventually formed it in July the following year, creating more energy behind the term. Other people started building awareness, like the Hotwire consultant Drew Benvie, who has been writing the Social Media Report since April 2006 (and created the social media Wikipedia page on 9 July 2006). Benvie said to me in private correspondence: “When social media emerged as a category of the media landscape in 2005 / 2006 I noticed the PR and media industries looking for suitable names. The term social media came to be used at the same time of social networks becoming mainstream.” Back then it was more a marketing word to conceptualise online tools and the strategies for dealing with them, which is why there was a distaste for the term that prevented its adoption.

It was 2008, however, when several news incidents, innovations, and an election entrenched this term in our consciousness. I will explain that later, but first, a lesson.

web2_logos

So what is Social Media?
A debate in August 2008 produced the following definition: "social media are primarily Internet and mobile-based tools for sharing and discussing information among human beings." I like that definition, but with it you could arguably say "social media" existed when the first e-mail was sent in the 1970s. Perhaps it’s going to suffer the fate of the term “globalisation”: in the 1990s people didn’t know the term existed, by 2001 in high school I was told it had been around since the 1980s, and by my final year of university in 2004 I was told "globalisation" started in the 1700s. Heaven forbid it turns into a term like "Web 2.0", where no one agrees on what it means but it somehow becomes a blanket term for everything after the Dot-Com bubble.

The definition is off-putting unless you have a fundamental understanding of what exactly media is. It might shock you to hear this, but a newspaper and a blog are not media. A television and a Twitter account are not media either. So if you’ve had trouble with the term social media before, it’s probably because you’ve been looking at it the wrong way. Understand what media really is and you will recognise the brilliance of the term "social media".

Vin Crosbie many years ago answered a question I had been chasing for half a decade: what is new media? Crosbie’s much-cited work has moved around the Internet, so I can’t link to his original piece (update: found it on the Internet Archive), but this is what he argued, in summary.

  • Television, books and websites are wrongly classified as media. What they really are is media outputs. We are defining our world by the technology, not the process. Media is about the communication of messages.
  • There are three types of media in the world: interpersonal media, mass media, and new media.
  1. Interpersonal media, a term he coined for lack of an established one, is a one-to-one communications process. A person talking directly to another person is interpersonal media: one message, distributed from one person to one other person.
  2. Mass media is a one-to-many process. That means one entity or person is communicating one message to multiple people. So if you are standing in front of a crowd giving a speech, you are conducting a mass media act. Likewise, a book is mass media, as it’s one message distributed to many.
  3. New media, which is only possible due to the Internet, is many-to-many media.

I highly recommend you read his more recent analysis, an update of his 1998 essay (viewable here on the Internet Archive).

That’s a brilliant way of breaking it down, but I still didn’t get what many-to-many meant. When the blogosphere tried to define social media, it was a poor attempt (and as recently as November 2008, it still sucked). But hidden in the archives of the web we can read Stowe Boyd, who came up with the most accurate analysis I’ve seen yet.

  1. Social Media Is Not A Broadcast Medium: unlike traditional publishing — either online or off — social media are not organized around a one-to-many communications model.
  2. Social Media Is Many-To-Many: All social media experiments worthy of the name are conversational, and involve an open-ended discussion between author(s) and other participants, who may range from very active to relatively passive in their involvement. However, the sense of a discussion among a group of interested participants is quite distinct from the broadcast feel of the New York Times, CNN, or a corporate website circa 1995. Likewise, the cross linking that happens in the blogosphere is quite unlike what happens in conventional media.
  3. Social Media Is Open: The barriers to becoming a web publisher are amazingly low, and therefore anyone can become a publisher. And if you have something worth listening to, you can attract a large community of likeminded people who will join in the conversation you are having. [Although it is just as interesting in principle to converse with a small group of likeminded people. Social media doesn’t need to scale up to large communities to be viable or productive. The long tail is at work here.]
  4. Social Media Is Disruptive: The-people-formerly-known-as-the-audience (thank you, Jay Rosen!) are rapidly migrating away from the old-school mainstream media, away from the centrally controlled and managed model of broadcast media. They are crafting new connections between themselves, out at the edge, and are increasingly ignoring the metered and manipulated messages that centroid organizations — large media companies, multi national organizations, national governments — are pushing at them. We, the edglings, are having a conversation amongst ourselves, now; and if CNN, CEOs, or the presidential candidates want to participate they will have to put down the megaphone and sit down at the cracker barrel to have a chat. Now that millions are gathering their principal intelligence about the world and their place in it from the web, everything is going to change. And for the better.

So many-to-many is a whole lot of conversation? As it turns out, yes it is. Now you’re ready to find out how 2008 became the year Social Media came to maturity.

How 2008 gave the long overdue recognition that New Media is Social Media
The tools: enabling group conversations
MySpace’s legacy on the world is something I think is under-recognised: the ability to post on people’s profiles. It gave people an insight into public communication amongst friends, as people used it more for open messaging than for adding credentials, which is what the feature was originally intended for when developed on Friendster. Yes, I recognise public discussions have occurred for years on things like forums and blogs, but this curious aspect of MySpace’s culture at its peak has a lot to answer for in what is ultimately Social Media. Facebook picked up on this feature, more appropriately renamed it "wall posts", and with the launch of the home screen – essentially an activity stream of your friends – created a new form of group communication.

The image below shows a wall-to-wall conversation with a friend of mine in February 2007 on Facebook. You can’t see it, but I wrote a cheeky response at the bottom to Beata’s first message, about her being a cabbage-eating Ukrainian communist whose vodka is radioactive from Chernobyl. She responds as you can see but, more interestingly, our mutual friend Rina saw the conversation on her home screen and jumped in. This is a subtle example of how the mainstream, non-technology community is using social media. I’m currently seeing how non-technology friends of mine share links that appear on the activity stream and jump into a conversation about them right there. It’s like overhearing a conversation around the water cooler and joining in if you want.

Facebook | Elias, Beata, Rina

This is what made Twitter what it is. What started as a status-update tool for friends turned into a chat room with your friends: you can see the messages posted by people you mutually follow, and you can join a conversation you weren’t originally part of. Again, simple, but the impact we have seen it have on the technology community is unbelievable. For example, I noticed Gabe Rivera a few days ago having a discussion with people about how he still doesn’t get what social media is. I wasn’t involved in that discussion originally, but it partially inspired me to explore the issue with this blog post. These are subtle, anecdotal examples, but in sum they point to a broader transformation occurring in our society thanks to tools that allow us to mass collaborate and communicate. The open-conversation culture of Web 2.0 has helped create this phenomenon.

Another Internet start-up I think has contributed immensely to the evolution of Social Media is FriendFeed. It essentially copied the Facebook activity screen but made it better – and in the process created the closest thing to a social media powerhouse. People share links there constantly and get into discussions inline. In the mass media, an editor would determine what you could read in a publication; in the Social Media world, you determine what you read based on the friends you want to receive information from. Collectively, we disseminate information and inform each other: it’s decentralised media. Robert Scoble, a blogging and video superstar, is the central node of the technology industry. He consumes and produces more information than anyone else in this world, and if he is spending seven hours a day, seven days a week on FriendFeed, that’s got to tell you something’s up.
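The editor-versus-friends filtering model described above can be sketched in a few lines. This is a toy model (the authors, timestamps, and items are made up), but it captures the mechanism: a reader’s feed is simply the merged, newest-first stream of whatever the people they follow have shared:

```python
# Toy model of a friend-filtered activity stream: instead of an editor
# choosing stories, each reader sees only items shared by people they
# follow, merged into one time-ordered feed. All data is hypothetical.

posts = [
    {"author": "robert", "time": 3, "item": "link: future of media"},
    {"author": "beata",  "time": 1, "item": "photo from Kiev"},
    {"author": "editor", "time": 2, "item": "front-page story"},
]

def feed(following, posts):
    """Newest-first stream containing only authors this reader follows."""
    mine = [p for p in posts if p["author"] in following]
    return sorted(mine, key=lambda p: p["time"], reverse=True)

# This reader follows robert and beata, so the editor's pick never appears.
for post in feed({"robert", "beata"}, posts):
    print(post["item"])
```

Swap the `following` set and the same pool of posts produces a completely different front page, which is the decentralised-media point: the filter is the social graph, not the masthead.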

The events: what made these tools come to life in 2008
We’ve often heard about citizen journalism, with people posting pictures from their mobile phones to share with the broader Internet. Blogs have long been considered a mainstay of politics this last decade. But 2008 saw two big events that validated Social Media’s impact and maturity.

  1. A new president: Barack Obama has been dubbed the world’s first Social Media president. Thanks to an innovative use of technology (and the fact that one of the co-founders of Facebook ran his technology team – 2008 is the year for Social Media due to cross-pollination), we’ve seen the most powerful man in the world get elected through a specific use of the Internet. Obama would post on Twitter about where he was speaking; used Facebook in record-setting ways; posted videos on YouTube (and is giving weekly video addresses now as president-elect) – and a dozen other things, including his own custom-built social networking site.
  2. A new view of the news: In November, we saw a revolting event occur in the terrorist attacks in India (which have now put us on the path of a geopolitical nightmare in the region). However, the tragic events in Mumbai also gave tangible proof of the impact social media is having on the world.

What’s significant about the above two events is that Social Media has usurped the role played by the Mass Media over the last century and beyond. Presidents of the past courted newspaper, radio and television personalities to get positive press, as Mass Media influenced public perception. Likewise, breaking news has been the domain of the internationally resourced Mass Media. Social Media is a different, but much better, model.

What’s next?
It’s said we need bubbles because they fuel over-development that leaves something behind forever. The over-hyped Web 2.0 era has given us a positive externality that laid the basis for the many-to-many communications New Media requires. Arguably, the culture of public sharing that first became big with the social bookmarking site Del.icio.us sparked the cultural wave that has come to define the era. The social networking sites created an infrastructure for us to communicate with people en masse, and to recognise the value of public discussions. Tools like wikis, both on the public web and in the enterprise, have made us realise the power of group collaboration – indeed, the biggest impact a wiki has in a corporation, from my own experience rolling out social media technologies at my firm, is encouraging this culture of "open".

It has taken a long time to get to this point. The technologies have taken time to evolve (ie, connectivity and a more interactive experience than the document web); our cultures and societies have also needed time to catch up with this massive transformation. Now that the infrastructure is there, we are busy refining the social model. Certainly, the DataPortability Project has a relevant role in ensuring the future of our media is safe – for example, monitoring the Open Standards we use to allow people to reuse their data. If my social graph is what filters my world, then my ability to access and control that graph is the equivalent of the Mass Media’s cry for freedom of the press.

Elias Bizannes social graph
Over 700 people in my life – school friends, university contacts, workmates and the rest – are people I am willing to trust to filter my information consumption. It will be key for us to be able to control this graph.

Newspapers may be going bankrupt thanks to the Internet, but finally, in 2008, we can confidently identify what the future of media looks like.

The broken business model of newspapers

About six weeks ago I took a week off work to catch up on life and do some research and testing of market opportunities. I had several hypotheses I wanted to test, so I sent content to a closed group of friends and colleagues. My goal was to watch how they reacted to it, to understand how time-poor people consume information… and it was an absolutely fascinating experience.

As part of this exercise, I took on the task of reading all the major newspapers every day. It has literally been years since I’ve given them that much attention – I used to read them daily, but my Gen-Y ways got the better of me and I moved online. Unfortunately, I still can’t seem to manage my online rituals to consume information efficiently (hence the research; it turns out other people are struggling as well). Something I realised in the course of my research is that whilst newspapers are losing circulation to the Internet, there is a lot they could do to really improve their competitiveness.

Too much detail
I tried reading the main newspapers word for word, and it took me hours. I don’t care how much people whine that they love the newspaper experience – the reality is that the people who read the news also work full-time. They barely have five minutes to spare in their day; the reason people don’t read newspapers is the complexity of life. Personally, I work through lunch; and when I don’t work, I am trying to do things in my life so as to make more time for myself after work. The weekend is literally my only chance to take time out to read the newspaper – but given I neglect people in my personal life during the week, and the myriad other things I am involved in outside of work, I don’t even get that chance. I rarely sit down – that’s why I read the news on my phone on the train.

Newspapers contain quality content, there is no doubt about that. However, if you are going to compete in the news business, you need to understand your audience: the news is all they want. Any news item in a newspaper will be flowered with extra facts, background information, and endless perspectives to colour the central issue. For example, an article about Australia’s central bank dropping its cash rate by 1% had several paragraphs talking about the exchange rate. Yes, it’s valid to talk about it – but another half-dozen articles did the same thing in the related coverage, and quite frankly, it’s a separate issue. Another article, about the impact of the rate change on local business, mentions that 50 million pizzas are sold through Domino’s Australia. Interesting stuff – but is it relevant to the news?

A newspaper should have a headline and report just on that news. I’m not saying they shouldn’t report the extra stuff – quite the contrary, I love the extra stuff – but they fail to recognise that the problem with reading a newspaper is that it takes so long that people can only skim it. Report just the news, and let consumers follow up on the website for extra detail through special links provided.

Newspapers can’t compete in news any more
I was able to get copies of the major newspapers between 11pm and 12.40am – that is, the night before people usually buy them. Those newspapers had been delivered by a truck after being printed in a factory far away, with thousands of copies loaded and distributed earlier that evening. Of course, distribution is staggered, with some newsagents getting them through the night and early morning (about 5am), but it’s still the same newspaper delivered at 12am to the high-profile newsagents.

The timeline for reporting news is a joke. The only hope a newspaper has of reporting news uniquely is if it breaks it. By breaking news, it has a chance to take its time and frame the flow of information. But how common is that? Most newspapers use shared agencies to pool their resources on stories like international news. Newspapers are being ignored by consumers because they get news quicker on the Internet. Why must media executives continue to ignore the reality that an online news organisation is much more efficient at distributing breaking news? That’s why newspapers existed in the past, but they no longer fill that role in society – newspapers need to get out of that role (or become a “news brand”, no longer treating print as the prime distribution channel for that news).

The incentives and structures can’t compete with this new world
Journalists, especially freelancers, get paid by word count.
Readers, especially time poor ones, skim through the newspaper.

See a problem there? It’s called friction. In case you are a mass media executive, let me spell it out for you: the economics of information have changed. When your industry was created several hundred years ago, information was scarce and people had plenty of time. Today it is people’s time (or “attention”) that is scarce, whereas information is abundant. Tradition, through the “art” and skill of journalism, seems to drive the industry more than these fundamental economic shifts. As I remarked at the Future of Media Summit several months back, after hearing a mass media journalist rant to justify her existence: “The skill of journalism? It’s just as relevant as the skill of sword makers. It’s nice, but I prefer a gun.”

A business that does not respond to its market will die one day. The cost structures of the newspaper (and magazine) industries sustain a model that no longer suits the market it supposedly caters for. Instead, the industry relies purely on the generational habits of a Luddite population to sustain its circulation, trying to make money on a model that is now broken.

What’s so exciting about this? The traditional media don’t get it, in the same way a bible-basher won’t accept there is no God despite being presented with logic suggesting otherwise. I’ve heard this from friends in the industry, from people I’ve met at conferences, and from observing my own clients who are part of a broader media group.

Denial by a legacy industry can be a beautiful thing for an entrepreneur.

The Rudd Filter

This poor blog of mine has been neglected, so let me catch up on some of the things I’ve been doing.

Below is a letter I sent to every senator of the Australian parliament several weeks ago. Two key groups responded: the Greens (one of the parties holding the balance of power), who were encouraged by my letter, and the independent Nick Xenophon (one of the two key senators who will have an impact), whose office responded in a very positive way.

It relates to the Government’s attempt to censor the Internet for Australians.

Subject: The Rudd Filter

Attention: Senators of the Australian parliament

With all due respect, I believe my elected representatives as well as my fellow Australians misunderstand the issue of Internet censorship. Below I offer my perspective, which I hope can re-position the debate with a more complete understanding of the issues.

Background

The Australian Labor Party’s policy on its Internet filter was a reaction to the Howard Government’s family-based approach, which Labor said was a failure. The then leader of the Opposition, Kim Beazley, announced in March 2006 (Internet Archive) that under Labor “all Internet Service Providers will be required to offer a filtered ‘clean feed’ Internet service to all households, and to schools and other public internet points accessible by kids.” The same press release states: “Through an opt-out system, adults who still want to view currently legal content would advise their Internet Service Provider (ISP) that they want to opt out of the “clean feed”, and would then face the same regulations which currently apply.”

During the 2007 federal election campaign, Labor, led by Kevin Rudd, pledged that “a Rudd Labor Government will require ISPs to offer a ‘clean feed’ Internet service to all homes, schools and public Internet points accessible by children, such as public libraries. Labor’s ISP policy will prevent Australian children from accessing any content that has been identified as prohibited by ACMA, including sites such as those containing child pornography and X-rated material.”

Following the election, the Minister for Broadband, Communications and the Digital Economy, Senator Stephen Conroy, clarified in December 2007 that anyone wanting uncensored access to the Internet would have to opt out of the service.

In October 2008, the policy underwent another subtle yet dramatic shift. When examined by a Senate Estimates committee, Senator Conroy stated that “we are looking at two tiers – mandatory of illegal material and an option for families to get a clean feed service if they wish.” Further, Conroy said: “We would be enforcing the existing laws. If investigated material is found to be prohibited content then ACMA may order it to be taken down if it is hosted in Australia. They are the existing laws at the moment.”

The interpretation of this – which has motivated this paper and sparked outrage among Australians nationwide – is that all Internet connection points in Australia will be subjected to the filter, with the option to opt out of the family tier only, not the tier that blocks ‘illegal material’. While the term “mandatory” has been used in the policy in the past, it was always used in the context of making it mandatory for ISPs to offer such a service. It was never used in the context of making it mandatory for Australians on the Internet to use it.

Not only is this a departure from the Rudd government’s election pledge, but there is little evidence that it truly represents the wishes of the Australian community. Senator Conroy himself has shown evidence of the previous government’s NetAlert policy falling far below expectations. According to Conroy, 1.4 million families were expected to download the filter, but far fewer actually did – the estimated end usage is just 30,000, despite a $22 million advertising campaign. The attempt by this government to pursue this policy, therefore, is for its own ideological or political benefit. The Australian people never gave it a mandate, nor is there evidence of majority support for this agenda. Further, the government’s trials to date have shown the technology to be ineffective.

On 27 October, some 9,000 people had signed a petition opposing the government filter. At the time of writing this letter, 2 November, that number had climbed to 13,655. The government's moves are being closely watched by the community, and responses are being planned should this policy continue in its current direction.

I write this to describe the impact such a policy will have if it goes ahead, in order to inform both the government and the public.

Impacts on Australia

Context

The government's approach to filtering is one-dimensional and does not take into account the converged world of the Internet. The Internet has transformed – and will continue to transform – our world. It has become a utility that forms the backbone of our economy and communications. Fast and widespread access to the Internet has been recognised globally as a priority policy for political and business leaders.

The Internet typically allows three broad types of activities. The first is facilitating the exchange of goods and services. The Internet has become a means of creating a more efficient marketplace, and is well known to have driven demand in offline selling as well, as it creates better-informed consumers who can reach richer decisions. Moreover, online marketplaces can operate with considerably less overhead than their physical-world counterparts, enabling stronger niche markets through greater connections between buyers and sellers.

The second activity is communications. This has enabled a New Media, or Hypermedia, of many-to-many communications, giving people a new way to communicate and propagate information. The core value of the World Wide Web can be seen in its founding purpose: created at CERN, it was a hypertext implementation intended to allow better knowledge sharing among a global network of scientists. It was so transformative that the role of the media has changed forever. For example, newspapers that thrived as businesses in the Industrial Age now face challenges to their business models, as younger generations prefer to access their information over Internet services – objectively a more effective way to do so.
A third activity is utility. This is a growing area of the Internet, which is creating new industries and better ways of doing things now that we have a global community of people connected to share information. The traditional software industry is shifting to a service model: instead of selling a licence, companies offer an annual subscription to use the software via the browser as the platform (as opposed to a PC's Windows installation as the platform). Cloud computing is a trend pioneered by Google, and now an area of innovation for other major Internet companies like Amazon and Microsoft, that will allow people to have their data portable and accessible anywhere in the world. These are disruptive trends that will further embed the Internet into our world.

The Internet will be unnecessarily restricted

All three of the broad activities described above, will be affected by a filter.
The impact on markets of analysis-based filters is that they will likely block access to sites because of the descriptions used in selling items. Senators have suggested that hardcore and fetish pornography be blocked – content that may be illegal for minors to view, but certainly not illegal for consenting adults. Legitimate businesses that use the web as their shopfront (such as adultshop.com.au) will be cut off from the general population in their pursuit of recreational activities. The filter's restriction on information for Australians is thus a restriction on trade, and will impact individuals and their freedoms in their personal lives.
The impact on communications is large. The Internet has created a new form of media called "social media". Weblogs, wikis, micro-blogging services like Twitter, forums like the Australian start-up Tangler, and other forms of social media are likely to have their content – and thus their service – restricted. The free commentary of individuals on these services will be censored, restricting the ability to use the services at all. "User generated content" is considered a central tenet in the proliferation of web2.0, yet applying industrial-era content controls to these businesses now clashes with people's public speech, as two concepts that were previously distinct in that era have merged.
Furthermore, legitimate information services will be blocked by analysis-based filtering due to language that triggers the filter. As noted in the ACMA report, "the filters performed significantly better when blocking pornography and other adult content but performed less well when blocking other types of content". As a case in point, a site containing the word "breast" would be filtered despite having legitimate value in providing breast cancer awareness.
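The over-blocking problem can be seen with a toy keyword filter – a deliberately simplified sketch, not any vendor's actual product – which shows why word-matching cannot distinguish context:

```python
# A toy keyword filter (illustrative only, not a real product's logic).
# It blocks any page containing a listed word, regardless of context.
BLOCKLIST = {"breast"}

def is_blocked(page_text: str) -> bool:
    """Return True if any word on the page appears in the blocklist."""
    words = {w.strip(".,!?").lower() for w in page_text.split()}
    return bool(words & BLOCKLIST)

# A breast cancer awareness page is blocked just like adult content:
print(is_blocked("Breast cancer awareness month resources"))  # True
print(is_blocked("Community fundraiser this weekend"))        # False
```

No amount of tuning a word list fixes this: the word itself carries no information about whether the page is medical, educational, or pornographic.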
Utility services could be adversely affected. The increasing trend of computing 'in the cloud' means that our computing infrastructure will require an efficient and open Internet. A filter will do nothing but disrupt this, with little ability to achieve the policy goal of preventing illegal material. As consumers and businesses move to the cloud, critical functions will rely on it, and any threat to distribution, or under-realisation of potential speeds, will be a burden on the economy.
Common to all three classes above is the degradation of speeds and access. The ACMA report claims that all six filters tested scored an 88% effectiveness rate in blocking the content the government hoped would be blocked. It also claims that over-blocking of acceptable content was 8% across the filters tested, with network degradation not nearly as big a problem during these tests as it was during previous trials, when performance degradation ranged from 75-98%. In this latest test, the ACMA said degradation was down, but still varied widely, from a low of just 2% for one product to a high of 87% for another. The fact that there is degradation of even 0.1% is, in my eyes, a major concern. The Government has recognised, in the legislation from which it draws its regulatory authority, that "whilst it takes seriously its responsibility to provide an effective regime to address the publication of illegal and offensive material online, it wishes to ensure that regulation does not place onerous or unjustifiable burdens on industry and inhibit the development of the online economy."

The compliance costs alone will hinder the online economy. ISPs will need to constantly maintain the latest filtering technologies, businesses will need to monitor user-generated content to ensure their web services are not automatically filtered, and administrative delays in unblocking legal sites will hurt profitability – and for some start-up businesses, may even kill them.

And that's just compliance; let's not forget the actual impact on users. As Crikey has reported ("Internet filters a success, if success = failure"), even the best filter has a false-positive rate of 3% under ideal lab conditions. Mark Newton (the network engineer whom Senator Conroy's office recently attacked) reckons that for a medium-sized ISP that means 3,000 incorrect blocks every second. Another maths-heavy analysis concludes that every time the filter blocks something, there is an 80% chance it was wrong.
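That last figure is a consequence of base rates, and is easy to reproduce. The sketch below uses hypothetical numbers of my own choosing (prohibited content as 1% of requests, the 88% block rate and 3% false-positive rate quoted above) purely to illustrate the arithmetic, not as a claim about real traffic:

```python
# Base-rate arithmetic: when bad content is rare, most of what an
# accurate-sounding filter blocks is actually legitimate.
def false_discovery_rate(prevalence, true_positive_rate, false_positive_rate):
    """Of everything the filter blocks, what fraction was legitimate?"""
    blocked_bad = prevalence * true_positive_rate          # correct blocks
    blocked_good = (1 - prevalence) * false_positive_rate  # wrongful blocks
    return blocked_good / (blocked_bad + blocked_good)

# Hypothetical inputs: 1% of requested pages are prohibited; the filter
# catches 88% of those and wrongly blocks 3% of legitimate pages.
fdr = false_discovery_rate(0.01, 0.88, 0.03)
print(f"{fdr:.0%} of blocked pages were legitimate")  # 77% of blocked pages were legitimate
```

With those assumed inputs, roughly three out of four blocks are wrong – the same order of magnitude as the 80% figure cited above.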

The Policy goal will not be met & will be costly through this approach

The Labor party's election policy document states that Labor's ISP policy will prevent Australian children from accessing any content identified as prohibited by ACMA, including sites containing child pornography and X-rated material. Yet beyond serving as a useful propaganda device, this achieves little: to my knowledge, children and people generally do not actively seek out child pornography, and a filter does nothing to restrict the offline, real-world networks through which paedophiles operate.

What the government seems to misunderstand, is that a filter regime will prove inadequate in achieving any of this, due to the reality of how information gets distributed on the Internet.
Composition of Internet traffic

Source: http://www.ipoque.com/userfiles/file/internet_study_2007.pdf
Peer-to-peer networks (P2P), a legal technology that has also proven impossible to control or filter, account for the majority of Internet traffic, with figures ranging from 48% in the Middle East to 80% in Eastern Europe. As noted earlier, the ACMA trials confirmed that although filters can block P2P wholesale, they cannot actually analyse the content as being illegal. This is because P2P technologies like torrents are completely decentralised. Individual torrents cannot be identified, and encryption technologies make this type of content impossible to filter or identify.
Whether blocked or filtered, this also ignores the fact that access can be bypassed by individuals who wish to do so. Tor is a network of virtual tunnels used by people living under authoritarian governments around the world – the free software can be installed on a USB stick and works immediately. It is a sophisticated technology that allows people to bypass restrictions. More significantly, I wish to highlight that some Tor servers have been used for illegal purposes, including child pornography and P2P sharing of copyrighted files using the BitTorrent protocol. In September 2006, German authorities seized data centre equipment running Tor software during a child pornography crackdown, yet the Tor network reassembled itself with no impact to its operation. This technology is but one of many options available for people to overcome an ISP-level filter.
For a filtering approach to be effective, it will require not just automated analysis-based technology, but human effort to maintain the censorship of the content. An expatriate Australian in China claims that the Golden Shield Project (the official name for the Great Firewall) employs a staff of 30,000 to select what to block, alongside whatever algorithm it uses to block sites automatically. With legitimate online activities being blocked by automated software, it will require a beefed-up ACMA to handle requests from the public to investigate and unblock legitimate websites. Given the number of false positives shown in the ACMA trials, this is not to be taken lightly, and could cost hundreds of millions of dollars in direct taxpayers' money and billions in opportunity cost for the online economy.

Inappropriate government regulation

The government's approach to regulating the Internet has been one-dimensional, treating online content in the same way as content produced by the mass media in the Industrial Era. The Information Age recognises content not as a one-to-many broadcast, but as individuals communicating. Applying these previous-era provisions is actually a restraint beyond traditional publishing.
Regulation of the Internet is provided under the Broadcasting Services Amendment (Online Services) Act 1999 (Commonwealth). Schedules Five and Seven of the amendment state that the goals are to:

  • Provide a means of addressing complaints about certain Internet content
  • Restrict access to certain Internet content that is likely to cause offence to a reasonable adult
  • Protect children from exposure to Internet content that is unsuitable for them

Mandatory restricting access can disrupt freedom of expression under Article 19 of the International Covenant on Civil and Political Rights and disrupt fair trade of services under the Trade Practices Act.

It is wrong for the government to mandate restricted access; instead, it should allow consumers the option to participate in a system that protects them. Deciding what a "reasonable adult" would think is too subjective a judgement for a faceless authority to make on behalf of individual adults who can determine it for themselves.

The Internet is not just content in the communications sense, but also in the market and utility sense. Restricting access to services, which may be done inappropriately due to proven weaknesses in filtering technology, would result in

  • reduced consumer information about goods and services, as sites are incorrectly blocked
  • violation of one of the WTO's cardinal principles – the "national treatment" principle, which requires that imported goods and services be treated the same as those produced locally
  • preventing or hindering competition under the interpretation of section 4G of the Trade Practices Act. Online businesses would be disadvantaged relative to physical-world shops, even where they create more accountability by allowing consumer discussion on forums – discussion that may itself trigger the filter.

Solution: an opt-in ISP filter that is optional for Australians

Senator Conroy's crusade in the name of child pornography is not the issue. The issue, in addition to the points raised above, is that mandatorily restricting access to information is by nature a political process. If the Australian Family Association writes an article criticising homosexuals, is that grounds to make the content illegal to access and communicate, because it incites discrimination? Perhaps the Catholic Church should have its website banned because of its stance on homosexuality?

If the Liberals win the next election because the Rudd government was voted out for pushing ahead with this filtering policy, and the Coalition repeats recent history by controlling both houses of parliament – what will stop them from banning access to the Labor party's website?

Of course, these examples sound far-fetched, but so did similar scenarios in another vibrant democracy called the Weimar Republic. What I wish to highlight is that pushing ahead with this approach to regulating the Internet sets a dangerous precedent that cannot be downplayed. Australians should have the ability to access the Internet with government warnings and guidance on content that may cause offence to the reasonable person. The government should also prosecute people who create and distribute material like child pornography, which society universally agrees is abhorrent. But to mandate restricted access to information on the Internet, based on expensive, imperfect technology that can be routed around, is a Brave New World that will not be tolerated by the broader electorate once they realise their individual freedoms are being restricted.

This system of ISP filtering should not be mandatory for all Australians, nor should it be an opt-out system by default. Individuals should have the right to opt into such a system if children use the Internet connection or a household wishes to censor its Internet experience. To force all Australians to experience the Internet only under Government sanction is a mistake of the highest order. It cannot be assured technologically, and it poses a genuine threat to our democracy.

If the Ministry under Senator Conroy does not understand my concerns, responding with a template answer six months later and clearly showing inadequate industry consultation despite my request, perhaps Chairman Rudd can step in. I recognise that with the looming financial recession, we need to look for ways to prop up our export markets. However, developing in-house expertise at restricting the population, setting a precedent for the rest of the Western world, is funny only in a nervous-laughter kind of way.

Like many others in the industry, I wish to help the government develop a solution that protects children. But ultimately, I hope our elected representatives understand the importance of this potential policy. I also hope they are aware of the anger over the government's actions to date – and whilst democracy can be slow to act, when it hits, it hits hard.
Kind regards,
Elias Bizannes
—-
Elias Bizannes works for a professional services firm and is a Chartered Accountant. He is a champion of the Australian Internet industry through the Silicon Beach Australia community and also currently serves as Vice-Chair of the DataPortability Project. The opinions of this letter reflect his own as an individual (and not his employer) with perspective developed in consultation with the Australian industry.
This letter may be redistributed freely. HTML version and PDF version.

Thoughts on privacy – possibly just a txt file away

The other week, a good friend of mine from my school and university days dropped me a note. Now that he is transitioning from professional student to legal guru (he's the type I'd expect to become a judge), he asked that I pull down the website hosting our experiment in digital media from university days. According to him, it's become "a bit of an issue because I have two journal articles out, and its been brought to my attention that a search brings up writing of a very mixed tone/quality!".

In what seems like a different lifetime now, I ran a university Journalists' Society and we experimented with media as a concept. One of our successful experiments was a cheeky weekly digital newsletter that held the student politicians in our community accountable. Often our commentary was hard-hitting, and for $4 in web hosting bills a month and about 10 hours' work each, we became a new power on campus, influencing actions. It was fun, petty, and a big learning experience for everyone involved, including the poor bastards we massacred with accountability.

control panel

Privacy in the electronic age: is there an off button?

This touches all of us as we progress through life: what we thought was funny at a previous time may now be awkward now that we are all grown up. In this digitally enabled world, privacy has come to the forefront as an issue – and we are suddenly seeing scary consequences of having all of our information available to anyone, at any time.

I've read countless articles about this, as I am sure you have. One story I remember is of a guy who contributed to a marijuana discussion board in 2000, and who now struggles to find work because his drug-taking past is the number one search engine result for his name. The digital world can really suck sometimes.

Why do we care?

This is unique and awkward, because it's not someone defaming us. It's not someone taking our speech out of context and menacingly putting it in a way that distorts our words. This is 100% us, stating what we think, with full understanding of the consequences of our actions. We have no one but ourselves to blame.

nice arse

Time changes, even if the picture doesn’t: Partner seeing pictures of you – can be ok. Ex seeing pictures of you – likely not ok.

In the context of privacy, is it our right to determine who can see what about us, and when? Is privacy about putting certain information in a "no one else but me" box, or is it more dynamic than that – varying according to the person consuming the information?

When I was younger, I would meet attractive girls quite a bit older than me, and as soon as I told them my age, they suddenly felt embarrassed. They either left wondering how they could have let themselves be attracted to a younger man, treating me as if I were suddenly inferior, or they showed very visible distress! Quite memorably, when I was 20, I told a girl I was on a date with that I was 22 – and she responded, "thank God, because there is nothing I find more unattractive than a guy who is younger than me". It turned out, fortunately, she had just turned 22. My theory about age got a massive dose of validation.

The point of sharing this story is that certain information about ourselves can have adverse effects on us (in this case, my sex life!). I normally could not care less about my age, but with girls I met when I went out, I did care, because it affected their perception of me. Despite nothing else changing, that single bit of information would totally change the interaction. Likewise, when we interact with people in our lives, the sudden knowledge of one piece of information can adversely affect their perception.

Bathroom close the hatch please

Some doors are best kept shut. Kinky for some; stinky for others

A friend of mine recently admitted to his girlfriend of six months that he had used drugs before, which had her break down crying. This bit of information doesn't change him in any way; but it shapes her perception of him, and the clash between her perception and the truth creates an emotional reaction. Contrast this with two party girls I met in Spain during my nine months away, who found out I had never tried drugs at the age of 21. I disappointed them, and in fact one of them (initially) lost respect for me. These girls and my friend's girlfriend have two different value systems, and the same piece of information generates a completely different perception – taking drugs can mark you as a "bad person" or an "open-minded" one, depending on who you talk to.

As humans, we care about what other people think. It influences our standing in society, our self-confidence, and our ability to build rapport with other people. But the issue is: how can you control your image in an environment that is uncontrollable? What I tell one group of people for the sake of building rapport, I should also be able to keep from being repeated to others who may not appreciate the information. If I have a fetish for women in red heels which I share with my friends, I should be able to prevent that information from reaching my boss, who loves wearing red heels and might feel a bit awkward the next time I look at her feet.

Any solutions?

Not really. We’re screwed.

Well, not quite. To bring it back to the e-mail exchange with my friend: I told him that the historian and technologist in me couldn't pull down a website for that reason. After all, there is nothing we should be ashamed of. And when he insisted, I made him a proposal: what if I could promise that no search engine would include those pages in its index, without pulling the website down?

He responded with appreciation, because that was exactly the issue. It was not that he was ashamed of his prior writing, but that he didn't want academics of today, reading his leading-edge thinking about the law, to come across his inflammatory criticism of some petty student politicians. He wanted to control his professional image, not erase history. So my solution of adding a robots.txt file was enough to give him his desired sense of privacy, without fighting a battle with the uncontrollable.

Who knew, that privacy can be achieved with a text file that has two lines:

User-agent: *

Disallow: /
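You can verify what those two lines say to a well-behaved crawler using Python's standard library robots.txt parser (the page path below is made up for illustration):

```python
# Check how a compliant crawler interprets the two-line robots.txt above.
from urllib.robotparser import RobotFileParser

robots_txt = [
    "User-agent: *",  # applies to every crawler
    "Disallow: /",    # the entire site is off-limits
]

parser = RobotFileParser()
parser.parse(robots_txt)

# No agent may fetch any page on the site (hypothetical example path):
print(parser.can_fetch("Googlebot", "/2004/newsletter-week-5.html"))  # False
print(parser.can_fetch("*", "/"))                                     # False
```

Compliance is voluntary – robots.txt is a convention, not an access control – but the major search engines all honour it, which is what mattered here.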

Those two lines are enough to turn the search engines from a beast that ruins our reputation into a mechanism for enforcing our right to privacy. Open standards across the Internet, enabling us to determine how our information is used, are what DataPortability can help us achieve, so we can control our world. The issue of privacy is not dead – we just need some creative applications, once we work out what exactly it is we think we are losing.