Frequent thinker, occasional writer, constant smart-arse


Can the newspaper industry please stop their damn whining

Google is not a blood-sucking vampire. In fact, the newspaper industry is a spoilt little brat.

Search engines such as Google and aggregators (like the constantly criticised Techmeme) provide a huge amount of economic value to the newspaper industry. They enable discovery by people who are not regular subscribers to their content. They provide traffic, which drives up page views, which in turn lets newspapers charge inflated prices for perceived access to an audience.

Newspapers put their content on the web for free by their own choice. They have plenty of ways of excluding their content from being freely accessible, either through a paywall or technical conventions like robots.txt. But they don’t want to do that completely, because they would lose the traffic.
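To make that concrete, here is roughly what opting out of the free-discovery game looks like in practice – a couple of illustrative lines in a site’s robots.txt ask compliant crawlers to stay away (the paths below are placeholders, not any real newspaper’s setup):

```
# Illustrative robots.txt at the site root -- paths are placeholders
User-agent: Googlebot
Disallow: /

# Or hide just one section from all crawlers
User-agent: *
Disallow: /archive/
```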

Subscription models will be the future revenue model for content: people will pay for ongoing access to a particular information provider, because fresh access – not static objects – is where the real value in information, and especially news, comes from. Of course, only people with established brands can do this, as readers will not pay unless they know what to expect. Yet despite their current lead in this game thanks to their century-old mastheads, the newspaper industry refuses to go solely down this route. The reason is that they still rely on advertising for the majority of their revenue mix – and advertising is driven by traffic.

Newspaper executives want the economic value provided by search engines and aggregators in discovery and traffic – but they whine constantly because these innovative new businesses of the information age have found a way to monetise this function in the value chain.

The solution is simple: cut public access, put all content behind a paywall, and participate only in exclusive aggregators. The search engines and free aggregators no longer have your content to add to their mix – and yes, you, Mr Newspaper Executive, no longer get as much traffic. But that’s what you get for being a whining little kid.

I am sick and tired of hearing industrial age executives refuse to compromise with information age business models.

Opera’s Unite is democratising the cloud

Opera, the Norwegian browser with a little under 1% market share of the English-language market, has made an interesting announcement. Following a much-hyped mystery campaign, “Opera Unite” has been announced as a new way to interact with the browser. It transforms the browser into a server, so that your local computer can interact across the Internet in a peer-to-peer fashion. In simpler words, you can plug your photos, music and post-it notes into your Opera Unite installation and access that media anywhere on the Internet, be it from another computer or your mobile phone. I view this conceptually as an important landmark in data portability. The competing browser company Mozilla may lay claim to developing Ubiquity, but Opera’s announcement is a big step towards ubiquity as a concept.

Implications: evolving the cloud to be more democratic
I’ve had a test drive, but I’m not going to rehash the functionality here – there is plenty of commentary going on now. (Or better yet, simply check this video.) I don’t think it’s fair to criticise it, as it’s still an early development effort – for example, although I could access my photos on my mobile phone (photos stored on my Mac), I could not stream my music (which would be amazing once they pull that off). But it’s an interesting idea being pushed by Opera, and it’s worth considering from the bigger picture.

There is a clear trend towards cloud computing – a world where all you need is a browser and, theoretically, you can access anything you need from a computer (as your data, applications and processing power sit remotely). What Opera Unite does is create a cloud that can be controlled by individuals. It embraces the sophistication home users have developed now that they have multiple computers and devices connected in the one household over a home wireless network. Different computers can act as repositories for different data, and that data’s accessibility can be fully controlled by the individuals.

I think that is a brilliant concept, and one that brings this capability to the mass market (something geeks won’t appreciate, as they can already do it). It gives consumers an alternative way of storing their data while still having it accessible “via the cloud”. In terms of the information value chain, people can now store their data wherever they wish (such as their own households) and then plug those home computers into the cloud to get the functionality they desire. So, for example, you can store all your precious pictures of your children and your private health information on your home computer, because you’ve chosen that as your storage facility – and still get access to a suite of online functionality that exists in the cloud.

As Chris Messina notes, there is still an Opera proxy service – meaning all the data connecting your home computer to your phone and other computers still goes through a central Opera server. But that doesn’t matter, because it’s the concept of local storage via the browser that this embodies. There is potential for competing, open source attempts at creating a more evenly distributed peer-to-peer model. Opera Unite matters because it implements a concept people have long talked about – packaged in a way that’s dead easy to use.
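To illustrate the underlying idea (and only the idea – this is not how Opera Unite itself is built), here is a minimal sketch in Python of a home machine serving a local photos folder over HTTP, so that any device able to reach it could browse those files. The folder path and port are assumptions for illustration:

```python
# Conceptual sketch only -- not Opera Unite's implementation.
# The point: your own machine is the server, so any device that can
# reach it (directly or via a relay) can fetch your files.
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

PHOTOS_DIR = os.path.expanduser("~/Pictures")  # assumed local folder

class PhotoHandler(SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Serve files from the photos folder instead of the working directory
        super().__init__(*args, directory=PHOTOS_DIR, **kwargs)

if __name__ == "__main__":
    # Your phone (or any other device that can reach this machine)
    # can now browse the folder at http://<home-ip>:8080/
    HTTPServer(("0.0.0.0", 8080), PhotoHandler).serve_forever()
```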

Implications: Opera the company
For poor little Opera, this finally gives it a focus for innovation. It’s been squashed out of the web browser market, and it’s had limited success on the mobile phone – its main niche opportunity, although one now facing a big threat from the iPhone. Google’s Chrome is fast developing into the standard for running SaaS applications over the web. But Opera’s decision to pursue this project is innovation in a new area, and more in line with what was first described as the data portability file system and the DiSo dashboard.

Like all great ideas, I look forward to Unite being copied, refined, and evolved into something great for the broader world.

Facebook needs to be more like the Byzantines

Chris Saad wrote a good post on the DataPortability Project’s (DPP) blog about how the web works on a peering model. Something we do at the DPP is closely monitor the market’s evolution, and having done this actively for a year now as a formal organisation, I feel we are at the cusp of far more exciting times to come. These are my thoughts on why Facebook needs to alter its strategy to stay ahead of the game – and, by implication, so does everyone else trying to innovate in this sphere.

Let’s start by describing the assertion that owning data is useless, but access is priceless.

It’s a bold statement, and you might need some background reading (link above) to understand my point of view. Once you do, however, all the debates about who “owns” what data suddenly become irrelevant. Basically, access, just like ownership, is possible because of a sophisticated society that recognises people’s rights. Our society has reached the point where ownership matters less for the realisation of value, because we now have things in place that let us do more through access.

Accessonomics: where access drives value
Let’s use an example to illustrate the point with data. I am on Facebook, MySpace, Bebo, hi5, Orkut, and dozens of other social networking sites that have a profile of me. Now what happens if all of those social networking sites have different profiles of me? One when I was single, one when I was in a relationship, another engaged, and another “it’s complicated”.

If they are all different, which is correct? The profile I last updated, of course. With the exception of your birthdate, any data about you can change in the future. There is nothing ‘fixed’ about a person, and “owning” a snapshot of them at a particular point in time is exactly that: a snapshot. Our interests change, as do our closest friends and our careers.

Recognising the time dimension of information means that unless a company has the most recent data about you, it is effectively carrying dead weight and giving itself a false sense of security (and a false valuation). Facebook’s $3 billion market value is not based on the data it held in June 2008, but on the people it has access to – for whom that data is the latest version. Sure, it can sell advertisers specific information to target ads, but “single” in May is not as valuable as “single” in November (and less valuable still than knowing someone was single in May and November but not the months in between).

Facebook Connect and the peering network model
Facebook’s announcements over the last month have been nothing short of brilliant (and when it’s the CEO announcing, it clearly flags a strategic move for the company’s future, not just some web developer fun). What they have created with the Facebook Connect service is shaking up the industry, as Facebook dances with Google in a contest running since the announcement of OpenSocial in November 2007. That’s because what they are doing is creating a permanent relationship with the user, following them around the web in their activities. This network business model means constant access to the user. But the mistake is treating access the same way you would treat ownership: ownership is a permanent state, whereas access depends on a positive relationship – and relationships, of course, are not permanent. When something is not permanent, you need strategies to ensure relevance.

When explaining data portability to people, I often use the example of data being like money. Storing your data in a bank gives you better security for that data (as opposed to keeping it under your mattress) and a better ability to reuse it (ie, with a theoretical debit card you could, for example, use data about your friends to filter content on a third-party site). The Facebook Connect model very much appears to follow this line of thinking: you securely store your data in one place and then roam the web with the ability to tap into that data.

However, there is a problem with this: data isn’t the same as money. Money is valuable because of scarcity in the supply system, whilst data becomes valuable through reuse and the creation of derivatives. We generate new information by connecting different types of data together – which, by definition, is how information gets created. Our information economy allows alchemists to thrive: people who generate value through their creativity in meshing different (data) objects.

Thinking about the information value chain, Facebook would benefit more from being connected to other hubs than from having all activity go through it. Instead of data being stored in the one bank, it’s actually stored across multiple banks (as a person, it probably scares you to store all your personal information with the one company: you’d split it if you could). What you want as a company is access to this secure EFT-like ecosystem. Facebook could then access data generated between other sites, because it is party to the same secured transfer system, even though it had nothing to do with generating that information.

Facebook needs to stop being a central node and instead become a linked-up node. The node with the most relationships with other sites and hubs wins, because the more data at your hands, the more potential you have to connect the dots and create unique information.

Facebook needs to think like the Byzantines
A lot more could be said on this, and I’m sure the testosterone within Facebook thinks it can colonise the web. What I will conclude with is that you can’t fight the inevitable, and this EFT-like system is effectively being built around Facebook with OpenSocial. The networked peer model will win out – the short history and inherent nature of the Internet prove that. Don’t mistake short-term success (ie, five years in the context of the Internet) for the long-term trends.

There was once a time when people thought MySpace was unstoppable, Microsoft unbeatable, IBM unbreakable. No empire in the history of the world has lasted forever. What we can do, however, is learn the lessons of those that lasted longer than most, like the forgotten Byzantine empire.

Also known as the Eastern Roman Empire, it has been given a separate name by historians because it outlived its western counterpart by roughly a thousand years. How did it last that long? Through diplomacy and avoiding war as much as possible. Rather than buying weapons, the Byzantines bought friends, and ensured they had relationships with those around them who had a self-interest in keeping the Byzantines in power.

Facebook needs to ensure it stays relevant to the entire ecosystem and does not become a barrier. It is a cashed-up business in growth mode with the potential to be the next Google in terms of impact – but let’s put the emphasis on “potential”. Facebook has competitors that are cash-flow positive, have billions in the bank, and, most importantly of all, are united in their goals. It can’t afford to fight a colonial war to capture people’s identities, and it shouldn’t think it needs to.

Trying to be the central node of the entire ecosystem by implementing proprietary methods is an expensive approach that will ultimately be beaten one day. Building a peered ecosystem where you can access all the data, however, is very powerful. Facebook just needs access, as it can use its sheer resources to generate innovative information products: that, not lock-in, is what will keep it out in front.

Just because it’s a decentralised system doesn’t mean you can’t rule it. If all the kids on a track are wearing the same special shoes, that doesn’t mean everyone runs the same time in the 100-metre dash. They call the patriarch of Constantinople “first among equals” even to this day – an important figure who worked in parallel with the emperor’s authority during the empire’s reign. And it’s no coincidence that the Byzantines outlived nearly all empires known to date – an empire which, arguably, still exists in spirit even now.

Facebook is not going to change its strategy, because its short-term success and perception of dominance blind it. But that doesn’t mean the rest of us need to make the same mistake. Pick your fights: realise that the business strategy of being a central node will create more heartache than gain.

It may sound counterintuitive, but less control can actually mean more benefit. The value comes not from having everyone walk through your door, but from you having the keys to everyone else’s door. Follow the peered model, and the entity with the most linkages to other data nodes will win.

The Rudd Filter

This poor blog of mine has been neglected. So let me catch up on some of the things I’ve been doing.

Below is a letter I sent to every senator of the Australian parliament several weeks ago. Two key groups responded: the Greens (one of the parties holding the balance of power), who were encouraged by my letter, and the independent Nick Xenophon (one of the two key senators who will have an impact), whose office responded in a very positive way.

It relates to the Government’s attempt to censor the Internet for Australians.

Subject: The Rudd Filter

Attention: Senators of the Australian parliament

With all due respect, I believe my elected representatives, as well as my fellow Australians, misunderstand the issue of Internet censorship. Below I offer my perspective, which I hope can reposition the debate with a more complete understanding of the issues.

Background

The Australian Labor Party’s Internet filter policy was a reaction to the Howard Government’s family-based approach, which Labor said was a failure. The then Leader of the Opposition, Kim Beazley, announced in March 2006 (Internet archive) that under Labor “all Internet Service Providers will be required to offer a filtered ‘clean feed’ Internet service to all households, and to schools and other public internet points accessible by kids.” The same press release states: “Through an opt-out system, adults who still want to view currently legal content would advise their Internet Service Provider (ISP) that they want to opt out of the “clean feed”, and would then face the same regulations which currently apply.”

Labor’s 2007 federal election campaign, led by Kevin Rudd, included the pledge that “a Rudd Labor Government will require ISPs to offer a ‘clean feed’ Internet service to all homes, schools and public Internet points accessible by children, such as public libraries. Labor’s ISP policy will prevent Australian children from accessing any content that has been identified as prohibited by ACMA, including sites such as those containing child pornography and X-rated material.”

Following the election, the Minister for Broadband, Communications and the Digital Economy, Senator Stephen Conroy, clarified in December 2007 that anyone wanting uncensored access to the Internet would have to opt out of the service.

In October 2008, the policy took another subtle yet dramatic shift. When examined by a Senate Estimates committee, Senator Conroy stated that “we are looking at two tiers – mandatory of illegal material and an option for families to get a clean feed service if they wish.” Further, Conroy said: “We would be enforcing the existing laws. If investigated material is found to be prohibited content then ACMA may order it to be taken down if it is hosted in Australia. They are the existing laws at the moment.”

The interpretation of this – which has motivated this paper, as well as sparking outrage among Australians nationwide – is that all Internet connection points in Australia will be subjected to the filter, with only the option to opt out of the family tier but not the tier that covers ‘illegal material’. While the term “mandatory” has been used as part of the policy in the past, it has always been used in the context of making it mandatory for ISPs to offer such a service. It was never used in the context of making it mandatory for Australians on the Internet to use it.

Not only is this a departure from the Rudd government’s election pledge, but there is little evidence to suggest that it truly represents the requests of the Australian community. Senator Conroy has himself presented evidence of the previous government’s NetAlert policy falling far below expectations. According to Conroy, 1.4 million families were expected to download the filter, but far fewer actually did. The estimated end usage, according to Conroy, is just 30,000 – despite a $22 million advertising campaign. The attempt by this government to pursue this policy, therefore, is for its own ideological or political benefit. The Australian people never gave a mandate, nor is there evidence of majority support, to pursue this agenda. Further, the government trials to date have shown the technology to be ineffective.

By 27 October, some 9,000 people had signed a petition opposing a government filter. At the time of writing this letter, on 2 November, this had climbed to 13,655 people. The government’s moves are being closely watched by the community, and activities are being planned in response should this policy continue in its current direction.

I write this to describe the impact such a policy will have if it goes ahead, to educate the government and the public.

Impacts on Australia

Context

The government’s approach to filtering is one-dimensional and does not take into account the converged world of the Internet. The Internet has transformed – and will continue to transform – our world. It has become a utility, forming the backbone of our economy and communications. Fast and widespread access to the Internet has been recognised globally as a priority policy for the world’s political and business leaders.

The Internet typically enables three broad types of activity. The first is facilitating the exchange of goods and services. The Internet has become a means of creating a more efficient marketplace, and is well known to have driven demand in offline selling as well, as it creates better-informed consumers who can reach richer decisions. At the same time, online marketplaces can operate with considerably less overhead – creating a more efficient marketplace than in the physical world and enabling stronger niche markets through greater connections between buyers and sellers.

The second activity is communications. This has enabled a new media, or hypermedia, of many-to-many communications, with people now having a new way to communicate and propagate information. The core value of the World Wide Web can be seen in its founding purpose: created at CERN, it was meant to be a hypertext implementation that would allow better knowledge sharing among its global network of scientists. It has been so transformative that the role of the media has changed forever. For example, newspapers that thrived as businesses in the Industrial Age now face challenges to their business models, as younger generations prefer to access their information over Internet services, which objectively is a more effective way to do so.
A third activity is utility. This is a growing area of the Internet, creating new industries and better ways of doing things now that we have a global community of people connected to share information. The traditional software industry is shifting to a service model where, instead of selling licences, companies offer an annual subscription to use the software via the browser as the platform (as opposed to a PC’s Windows installation as the platform). Cloud computing is a trend pioneered by Google, and now an area of innovation for other major Internet companies like Amazon and Microsoft, that will allow people to have their data portable and accessible anywhere in the world. These are disruptive trends that will further embed the Internet into our world.

The Internet will be unnecessarily restricted

All three of the broad activities described above will be affected by a filter.
The impact on markets with analysis-based filters is that sites will likely be blocked because of descriptions used in selling items. Senators have suggested that hardcore and fetish pornography be blocked – content that may be illegal for minors to view, but certainly not illegal for consenting adults. Legitimate businesses that use the web as their shopfront (such as adultshop.com.au) will be cut off from the general population in their pursuit of recreational activities. The filter’s restriction on information for Australians is thus a restriction on trade, and will impact individuals and their freedoms in their personal lives.
The impact on communications is large. The Internet has created a new form of media called “social media”. Weblogs, wikis, micro-blogging services like Twitter, forums like the Australian start-up Tangler, and other forms of social media are likely to have their content – and thus their services – restricted. The free commentary of individuals on these services will be censored, restricting the ability to use them at all. “User generated content” is considered a central tenet of web 2.0, yet applying industrial-era controls to content businesses now clashes with people’s public speech, as two concepts that were previously distinct in that era have now merged.
Furthermore, legitimate information services will be blocked by analysis-based filtering due to language that triggers the filters. As noted in the ACMA report, “the filters performed significantly better when blocking pornography and other adult content but performed less well when blocking other types of content”. As a case in point, a site containing the word “breast” could be filtered despite having legitimate value in promoting breast cancer awareness.
Utility services could be adversely affected. The increasing trend of computing ‘in the cloud’ means that our computing infrastructure will require an efficient and open Internet. A filter will do nothing but disrupt this, with little ability to achieve the policy goal of preventing illegal material. As consumers and businesses move to the cloud, critical functions will rely on it, and any threat to distribution or any under-realisation of potential speeds will be a burden on the economy.
Common to all three classes above is the degradation of speeds and access. The ACMA report claims that all six filters tested scored an 88% effectiveness rate in blocking the content the government hoped would be blocked. It also claims that over-blocking of acceptable content was 8% across the filters tested, with network degradation not nearly as big a problem during these tests as it was during previous trials, when performance degradation ranged from 75–98%. In this latest test, the ACMA said degradation was down, but still varied widely – from a low of just 2% for one product to a high of 87% for another. The fact that there is any degradation at all – even 0.1% – is, in my eyes, a major concern. The Government has recognised, in the legislation from which it draws its regulatory authority, that “whilst it takes seriously its responsibility to provide an effective regime to address the publication of illegal and offensive material online, it wishes to ensure that regulation does not place onerous or unjustifiable burdens on industry and inhibit the development of the online economy.”

The compliance costs alone will hinder the online economy. ISPs will need to constantly maintain the latest filtering technologies, businesses will need to monitor user-generated content to ensure their web services are not automatically filtered, and administrative delays in unblocking legal sites will hurt profitability – and for some start-up businesses may even kill them.

And that’s just compliance; let’s not forget the actual impact on users. As Crikey has reported (Internet filters a success, if success = failure), even the best filter has a false-positive rate of 3% under ideal lab conditions. Mark Newton (the network engineer whom Senator Conroy’s office recently attacked) reckons that for a medium-sized ISP that’s 3,000 incorrect blocks every second. Another maths-heavy analysis says that every time the filter blocks something, there’s an 80% chance it was wrong.
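The arithmetic behind that last claim is worth spelling out, because it is a classic base-rate effect. Here is a rough worked example: the 3% false-positive figure comes from the trials quoted above, while the 1% share of genuinely prohibited pages and the 97% detection rate are assumptions purely for illustration:

```python
# Why a "small" false-positive rate still means most blocks are wrong.
base_rate = 0.01        # assumed share of requests that are actually prohibited
detection_rate = 0.97   # assumed share of prohibited pages the filter catches
false_positive = 0.03   # legitimate pages wrongly blocked (from the trials)

blocked = detection_rate * base_rate + false_positive * (1 - base_rate)
wrongly_blocked = false_positive * (1 - base_rate)

print(f"Share of blocks that are wrong: {wrongly_blocked / blocked:.0%}")
# => roughly 75% -- in the same ballpark as the ~80% figure quoted above
```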

The Policy goal will not be met & will be costly through this approach

The Labor party’s election policy document states that Labor’s ISP policy will prevent Australian children from accessing any content that has been identified as prohibited by ACMA, including sites such as those containing child pornography and X-rated material. Other than being a useful propaganda device, this misses the point: to my knowledge, children and people generally do not actively seek out child pornography, and a filter does nothing to restrict the activities of the offline, real-world networks of paedophiles.

What the government seems to misunderstand is that a filter regime will prove inadequate in achieving any of this, due to the reality of how information is distributed on the Internet.
[Figure: Composition of Internet traffic]

Source: http://www.ipoque.com/userfiles/file/internet_study_2007.pdf
Peer-to-peer (P2P) networks, a legal technology that has also proved impossible to control or filter, account for the majority of Internet traffic, with figures of between 48% in the Middle East and 80% in Eastern Europe. As noted earlier, the ACMA trials have confirmed that although P2P traffic can be blocked, the content cannot actually be analysed to determine whether it is illegal. This is because P2P technologies like torrents are completely decentralised. Individual torrents cannot be identified, and along with encryption technologies, this makes such content impossible to filter or classify.
However, whether blocked or filtered, this ignores the fact that access can be bypassed by individuals who wish to do so. Tor is a network of virtual tunnels used by people living under authoritarian governments around the world – you can install the free software on a USB stick and have it working immediately. It is a sophisticated technology that allows people to bypass restrictions. More significantly, I wish to highlight that some Tor servers have been used for illegal purposes, including child pornography and P2P sharing of copyrighted files using the BitTorrent protocol. In September 2006, German authorities seized data centre equipment running Tor software during a child pornography crackdown, yet the Tor network managed to reassemble itself with no impact on its operation. This technology is but one of many options available to people who want to get around an ISP-level filter.
For a filtering approach to be effective, it will require not just automated analysis-based technology, but human effort to maintain the censorship of content. An expatriate Australian in China claims that a staff of 30,000 are employed by the Golden Shield Project (the official name for the Great Firewall) to select what to block, on top of whatever algorithm is used to automatically block sites. With legitimate online activities being blocked by automated software, it will require a beefed-up ACMA to handle requests from the public to investigate and unblock legitimate websites. Given the number of false positives shown in the ACMA trials, this is not to be taken lightly, and could cost hundreds of millions of dollars in direct taxpayers’ money and billions in opportunity cost for the online economy.

Inappropriate government regulation

The government’s approach to regulating the Internet has been one-dimensional, treating content online the same as the content produced by the mass media in the Industrial Era. The Information Age recognises content not as one-to-many broadcast, but as individuals communicating. Applying these previous-era provisions is actually a restraint beyond traditional publishing.
Regulation of the Internet is provided under the Broadcasting Services Amendment (Online Services) Act 1999 (Commonwealth). Schedules Five and Seven of the amendment state the goals as being to:

  • Provide a means of addressing complaints about certain Internet content
  • Restrict access to certain Internet content that is likely to cause offence to a reasonable adult
  • Protect children from exposure to Internet content that is unsuitable for them

Mandatory restriction of access may breach freedom of expression under Article 19 of the International Covenant on Civil and Political Rights and may disrupt fair trade in services under the Trade Practices Act.

It is wrong for the government to mandate restricted access; it should instead allow consumers the option to participate in a system that protects them. What a “reasonable adult” would find offensive is too subjective a matter for a faceless authority to regulate; it should be left to individual adults to determine for themselves.

The Internet is not just content in the communications sense, but also in the market and utility senses. Restricting access to services – which may be done inappropriately, given the proven weaknesses of filtering technology – would result in:

  • reduced consumer information about goods and services, as consumers will have less information due to sites being incorrectly blocked
  • violation of the WTO’s cardinal principles – the “national treatment” principle, which requires that imported goods and services be treated the same as those produced locally
  • preventing or hindering competition under the interpretation of section 4G of the Trade Practices Act. This means online businesses will be disadvantaged relative to physical-world shops, even where they create more accountability by allowing consumer discussion on forums – discussion that may trigger the filter because of consumers’ freedom of expression.

Solution: an opt-in ISP filter that is optional for Australians

Senator Conroy’s crusade in the name of child pornography is not the issue. The issue, in addition to the points raised above, is that mandatorily restricting access to information is by nature a political process. If the Australian Family Association writes an article criticising homosexuals, is that grounds to make the content illegal to access and communicate because it incites discrimination? Perhaps the Catholic Church should have its website banned because of its stance on homosexuality?

If the Liberals win the next election because the Rudd government was voted out for pushing ahead with this filtering policy, and the Coalition repeats recent history by controlling both houses of parliament – what will stop them from banning access to the Labor party’s website?

Of course, these examples sound far-fetched, but they also sounded far-fetched in another vibrant democracy called the Weimar Republic. What I wish to highlight is that pushing ahead with this approach to regulating the Internet sets a dangerous precedent that cannot be downplayed. Australians should have the ability to access the Internet, with government warnings and guidance on content that may cause offence to the reasonable person. The government should also prosecute people creating and distributing material like child pornography, which society universally agrees is a bad thing. But to mandate restricted access to information on the Internet, based on expensive, imperfect technology that can be routed around, is a Brave New World that will not be tolerated by the broader electorate once they realise their individual freedoms are being restricted.

This system of ISP filtering should not be mandatory for all Australians to use. Neither should it be an opt-out system by default. Individuals should have the right to opt into a system like this if there are children using the Internet connection or if a household wishes to censor its Internet experience. To force all Australians to experience the Internet only under Government sanction is a mistake of the highest order. It cannot be technologically assured, and it poses a genuine threat to our democracy.

If the Ministry under Senator Conroy does not understand my concerns – responding with a template answer six months later, and clearly showing inadequate industry consultation despite my request – perhaps Chairman Rudd can step in. I recognise that, with the looming financial recession, we need to look for ways to prop up our export markets. However, developing in-house expertise at restricting the population, setting a precedent for the rest of the Western world, is funny only in a nervous-laughter kind of way.

Like many others in the industry, I wish to help the government develop a solution that protects children. But ultimately, I hope our elected representatives can understand the importance of this potential policy. I also hope they are aware of the anger that exists at the government’s actions to date – and whilst democracy can be slow to act, when it hits, it hits hard.
Kind regards,
Elias Bizannes
—-
Elias Bizannes works for a professional services firm and is a Chartered Accountant. He is a champion of the Australian Internet industry through the Silicon Beach Australia community and also currently serves as Vice-Chair of the DataPortability Project. The opinions in this letter are his own as an individual (not his employer’s), with a perspective developed in consultation with the Australian industry.
This letter may be redistributed freely. HTML version and PDF version.

You don’t own, nor need to own, your data

One of the biggest questions the DataPortability project has grappled with (and where the entire industry is not at consensus) is a fairly basic one with profound consequences: who owns your data? Well, I think I now have an answer to the question, one I’ve cross-validated across multiple domains. Given we live in the Information Age, this matters in every respect.

So who owns “your data”? Not you. Or the other guy. Or the government, and the MicroGooHoo corporate monolith. Actually, no one does. And if they do, it doesn’t matter.

People like to conflate the concept of property ownership with that of data ownership. I mean, it’s you, right? You own your house, so surely you own your e-mail address, your name, your date-of-birth records, your identity. However, when you go into the details, at a conceptual level it doesn’t make sense.

Ownership of data
First of all, let’s define property ownership: “the ability to deny use of an asset by another entity”. The reason you can claim to own your house is that you can deny someone else access to your property. Most of us have a fence to separate our property from the public space; others, like the hillbillies, sit in their rocking chair with a shotgun ready to fire. Either way, it’s well understood when someone owns something, and if you trespass, the dogs will chase after you.


The characteristics of ownership can be described as follows:
1) You have legal title, recognised in your legal jurisdiction, establishing that you own it.
2) You have the ability to enforce your right of ownership in your legal jurisdiction
3) You can get benefits from the property.

The third point is key. When people cry out “I own my data”, that’s essentially the reason (once you take the Neanderthal, emotionally driven reasoning out of the equation). Where we get a little lost, though, is when we define those benefits. It could be said that you want to control your data so you can use it somewhere else, and so you can make sure someone else doesn’t use it in a way that causes you harm.

Whilst that might sound like ownership to you, that’s where the house of cards collapses. Unless you can deny use by another entity, you do not have ownership. It’s a trap, because data is not like a physical good, which cannot easily be copied. It’s like a butterfly locked in a safe: the moment you open that safe, you can say goodbye. If data satisfies the ownership definition only when you hide it from the world, that means when it’s public to the world, you no longer own it. And that sucks, because data by nature is meant for public consumption. But what if you could get the same benefits as ownership – or rather, receive the benefits of usage and regulate usage – without actually ‘owning’ it?

Property and data – same same, but different
Both property and data are assets. They create value for those who use them. But that’s where the similarities end.

Property gains value through scarcity: the more unique, the more valuable. Data, on the other hand, gains value through reuse. The more derivative works built on it, the more information generated (as information is simply data connected with other data). The more information, the more knowledge, and the more value created – working its way along the information value chain. If data is isolated and never reused, it has little value. For example, if a company has a piece of data but is never allowed to use it, that data has no value.

Data gains value through use, and additional value through reuse and derivative creations. If no one reads this blog, it’s a waste of space; if thousands of people read it, its value increases – as these ideas are disseminated. To give one perspective on this, when people create their own posts reusing the data I’ve created, I gain value through them linking back to me. No linking, no value realised. Of course, I get a lot more value out of it beyond PageRank juice, but hopefully you realise that if you “steal” my content (with at least some acknowledgement of me, the person), you are actually doing me a favour.

Ignore the above!
Talking about all this ownership stuff doesn’t actually matter; it’s not ownership that we want. Let’s take a step back, and look at this from a broader, philosophical view.

Property ownership is based on the idea that you get value from holding something for an extended period of time. But in an age of rapid change, do you still get value from that? Let’s say we lose the Holy War over people being able to ‘own’ their data. Facebook – you win – you now ‘own’ me. It owns the data about me – my identity, it would appear, is under the control of Facebook – it now owns the fact that “I am in a relationship”. The Holy War might have been lost, but I don’t care, because Facebook owns crap: six months ago I was in a relationship; now I’m single and haven’t updated my status. The value for Facebook is not in owning me at a point in time: it’s in having access to me all the time – because one way they translate that data into value is advertising, and targeting ads is pointless if you have the wrong information to base your targeting on. Probably the only data in my profile that can be static is birth date and gender – but with some tampering and cosmetics, even those can be altered now!


Think about a point raised by Luk Vervenne, in response to my thoughts above on the VRM mailing list, regarding employability. A lot of your personal information is actually generated by interactions with third parties, such as the educational institution you received your degree from. So do I own the fact that I have a Bachelor of Commerce from the University of Sydney? No, I don’t, as that brand and its authenticity belong to the university. What I do have, however, are access and usage rights to it. Last time I checked, I didn’t own the university, but if someone quizzes me on my academic record, there’s a hotline ready to confirm it – and once validated, I get the recognition that translates into a benefit for me.

Our economy is transitioning from a goods-producing economy to a service-performing, experience-generating one. It’s hard for us to imagine this new world, as our conceptual understanding is built on selling, buying and otherwise trading goods in a way that ultimately ends with us owning something. But this market era of the exchange of goods is making way for “networks”, and the concept of owning property will diminish in importance, as our new world will place value on access.

This is a broader shift. As a young man building his life, I cannot afford to buy a house in Sydney with its overinflated prices. But that’s fine – I am comfortable renting – all I want is ‘access’ to the property, not the legal title, which quite frankly would be a bad investment decision even aside from the current economic crisis. I did manage to buy myself a car, but I am cursing the fact that I wasted my money on that debt, which could have gone to more productive uses – instead, I could have just paid for access to public transport and taxis when I needed transport. In other words, we now have an economy where you do not need to own something to get the value: you just need access.

That’s not to say property ownership is a dead concept – rather, it’s become less important. When we consider history, the concept of the masses “owning” property was foreign anyway – there was a class system in which the small but influential aristocracy owned the land and the serfs worked it. “Ownership”, really, is a newly established concept in our world – and it’s now ready to go out of vogue again. We’ve reached a level of sophistication in our society where we no longer need the security of ownership to get the benefits in our lives – and the property owners we get our benefits from may appear to wield power, but they also carry a lot of financial risk, government accountability and public scrutiny (unlike history’s aristocracy).


Take a look at companies and how they outsource a lot of their functions (or otherwise simplify their businesses’ value activities). Every single client of mine – multi-million-dollar businesses at that – pays rent. They don’t own the office space they occupy; to get the benefits, they simply need access, which they get through renting. “Owning” the property is not part of the core value of the business. Whilst security is needed, because not having ownership can put you at the mercy of the landlord, this doesn’t mean you can’t contract protection, as my clients do as part of their lease agreements.

To bring it back to the topic: access to your data is what matters – but even that needs to be carefully understood. For example, direct access to your health records might not be a good thing; rather, what matters is that you can control who has access to that data. Similarly, whilst no one might own your data, what you do have is the right to demand guidelines and principles – like what we are trying to do at the DataPortability Project – on how “your” data can be used. Certainly, the various governmental privacy and data-protection laws around the world do exactly that: they govern how companies can use personally identifiable data.

Incomplete thoughts, but I hope I’ve made you think. I know I’m still thinking.

Three startups in 24 hours – lessons in the costs of innovation

I’m sitting here at Start-up Weekend, a concept instigated by Bart Jellema and his partner Kim Chen as an experiment in bringing the Australian tech community together – part of the broader effort several of us are pursuing to build Silicon Beach in Australia.

I’ve dropped in on the tail end, and it’s amazing to see what has happened. Literally, in the space of 24 hours, three teams of five people have created three separate products. They are all well thought out, with top-notch development, and could easily pass as genuine start-ups that have been working on their ideas for weeks, if not months.

  • TrafficHawk.com.au is a website delivering up-to-the-minute traffic alerts for over six million drivers in New South Wales, Australia’s most populous state.
  • LinkViz.com is a service that lets you visually determine what’s hot on Twitter, a social media service.
  • uT.ag is a service that monetises the links people share with each other.

All three of these products have blown me away, not just in their quality but in their innovation. For example, uT.ag is a URL-shortening application that competes with the many others on the market (popular with social media services) but adds an advertisement to the page people click through to view. Such a stupidly simple idea that I can’t believe it hasn’t been created before… and it’s already profitable after a few hours of operation! LinkViz provides such a stunning visual representation of links that it’s hard to believe it was created – from concept to product – in so short a time. Traffic Hawk is a basic but useful mashup that genuinely adds value for consumers.

The challenges
This is a clear demonstration that when you get a bunch of people together, anything can happen: pure innovation, in a time warp of a few hours. Products, with real revenue potential. It’s scary to witness how the costs of business in the digital economy can be boiled down simply to a lack of sleep! But talking to the teams, I realised that the costs were not as low as they could have been. It’s also an interesting insight into the costs of doing business on the internet, seen through this artificial prism of reality.

For example, Geoff McQueen (a successful developer and businessman who was also one of the founders of Omnidrive) from the Traffic Hawk team was telling me about the difficulty they experienced. Whilst the service looks like a basic mashup on Google Maps, the actual scraping of data from a government website (real-time traffic incidents) was considerably painful. Without ongoing attention from the team, the site will break, because the scraping script depends on the current structure of the HTML pages displaying the data. The fact that this is a ‘cost’ of developing the idea is a wasteful cost of business. The resources lost in developing the custom scraping script, and the future maintenance it will require, are an inefficient allocation of resources in the economy. And for what good reason? Traffic Hawk makes the same data more useful – it’s not hurting the government service, only helping it.
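To show why that kind of dependency is so fragile, here is a minimal sketch of screen-scraping in Python; the URL and the markup pattern are hypothetical stand-ins for whatever the real government page looked like at the time:

```python
# Minimal illustration of why scraping is brittle -- the URL and the
# <td class="incident"> markup below are hypothetical, not the real site.
import re
import urllib.request

URL = "http://example.gov.au/traffic/incidents"  # hypothetical page

def fetch_incidents():
    html = urllib.request.urlopen(URL).read().decode("utf-8")
    # The whole service hinges on this one pattern matching the page's
    # current markup; rename the class or restructure the table and
    # every deployment breaks overnight.
    return re.findall(r'<td class="incident">(.*?)</td>', html)

# With a published API or open data feed, the same information would come
# back as structured records, and markup changes would be irrelevant.
```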

This is a clear example of the benefits of open data: in a DataPortability-enabled world, the entrepreneurial segments of the economy that create this innovation can focus their efforts on areas that add value. In fact, it is a clear demonstration of the value chain for information.

LinkViz, on the other hand, has a different disadvantage. Whilst they have access to the Twitter API, which lets them pull data and create their own mashup with it (unlike the Traffic Hawk scrapers), the API itself is slow. So the price of a better service from them is money for faster access.

Talking with the uT.ag boys, the issue was more a matter of infrastructure. They could have launched the same product with more features 12 hours earlier (ie, 12 hours AFTER conception) if it weren’t for niggly infrastructure issues. That is partly a function of the purposely disorganised nature of the camp, and so the guys lost a lot of time setting up networks and repositories. I believe, however, this should be a lesson about the impact of the broader ecosystem, and of government, on innovation.

For example, we need faster broadband… but also an omnipresent connection. A person should be able to flip open their laptop and access the internet, affordably, in a way that connects them to their tools, anywhere.

Concluding thoughts
The whole purpose of this camp was not the destination but the process: people getting to know each other and learning new skills – or as Linda Gehard says, “It’s like exercise for entrepreneurs”. However, what started as a quick and dirty post to give the guys some exposure has made me realise the costs of innovation in the economy. When you take out the usual whines about investors and skills shortages, and put together some highly capable people, there are still some specific things that need to be fixed.

Update 8/9/08: Apologies for the typos and misspellings – my blog locked me out due to a corruption in the software and didn’t save the revised version I thought I had saved. And a huge thank you to Joan Lee, who has now become my official proofreader. Liako.Biz, it appears, is no longer a one-man band!

Net neutrality

Mick just posted a video he came across on net neutrality. At the very least, you should watch it because it’s an important issue. We take for granted that when we are connected to the net, we are given equal access; what the large telcos appear to be pushing for is to influence the actual content that flows over the internet. It’s a bit like a bunch of shops in a marketplace: the telcos want consumers to pay for entering the marketplace (as they do now), but they also want to charge the people who host shops in the marketplace, and to assign each consumer a personal guide who, depending on how much you tip them, decides whether you can visit a particular shop. The beauty of the internet is that all you need is a connection and you have equal access like anyone else. In a world where space is finite, rent for a shop makes sense because it manages demand; but in a virtual world there is no reason to inherit this age-old concept, as it will kill the core of what makes the internet the most amazing invention in history.

The first video is one I found that is more concise and to the point. The second one is the one on Mick’s blog which is certainly a lot more emotive.

DataPortability is about user value, fool!

In a recent interview, VentureBeat asks Facebook creator and CEO Mark Zuckerberg the following:

VB: Facebook has recently joined DataPortability.org, a working group among web companies, that intends to develop common standards so users can access their data across sites. Is Facebook going to let users — and other companies — take Facebook data completely off Facebook?

MZ: I think that trend is worth watching.

It disappoints me to see that, because it seems like a journalist’s quick hit at a contentious issue. On the other hand, we have seen amazing news today with examples of exactly the type of thing we should expect in a data portability-enabled world: the Google Contacts API, which we have highlighted for months now as a data security issue, and Google Analytics allowing benchmarking, which is a clear example of a company that understands that by linking different types of data you generate more information, and therefore more value, for the user. The DataPortability project is about advocating new ways of thinking; indeed, we don’t have to formally produce a product so much as maintain the agenda in the industry.

However, the reason I write this is that it worries me a bit that we are throwing around the term “data portability” despite the fact that the DataPortability Project has yet to formally define what it means. I can say this because, as a member of the policy action group and the steering action group, which are responsible for making this distinction, I know we have yet to formally decide.

Today, I offer an analysis of what the industry needs to be talking about, because the term is being thrown around like buggery. Whilst it may be weeks or months before we finalise this, it’s starting to bother me that people seem to think the concept means solving the rest of the world’s problems or disrupting the status quo. It’s time for some focus!

Value creation
First of all, we need to determine why the hell we want data portability. DataPortability (note the distinction between this term and ‘data portability’ – the latter represents the philosophy, whilst the former is the implementation of that philosophy by DataPortability.org) is not a new utopian ideal; it’s a new way of thinking about things that will generate value across the entire information sector. So, to genuinely create value for consumers and businesses alike, we need to apply the same thinking we use in the rest of the business world.

A company should be centred on generating value for its customers. Whilst it may have obligations to generate returns for its shareholders, and may attempt different things to meet those obligations, the only long-term way to fund the growth of the business – and therefore to generate shareholder value – is through increased customer utility (leaving aside acquisitions and operational efficiency, which are other ways companies generate value, but which are short-term measures). Therefore, an analysis of what value DataPortability creates should be done with the customer in mind.

The economic value of a user having some sort of control over their data is that they can generate more value through their transactions within the information economy. This means better insights (ie, greater interoperability allowing data to be connected to create more information), less redundancy (being able to reuse the same data), and more security (including better privacy, the mismanagement of which can compromise a consumer’s existence).

Secondly, what does it mean for a consumer to have data portability? Since we have established that the purpose of the exercise is to generate value, questions about data such as “control”, “access” and “ownership” need to be re-evaluated, because on face value the way they are applied may have either beneficial or detrimental effects on new business models. The international accounting standards recognise that you can legally “own” an asset but not necessarily receive the economic benefits associated with it. The concept of ownership as a means of achieving benefit is something we really need to clarify, because quite frankly, ownership does not translate into economic benefit – and economic benefit is what we are actually trying to achieve.

Privacy is a concept with legal implications, and regardless of what we discuss with DataPortability, it still needs to be considered, because business operates within the frameworks of law. Specifically, the human rights of individuals (who are consumers) need to be given greater priority than any other factor. So although we should be focused on how we can generate value, we also need to be mindful that certain types of data, like personally identifiable data, need to be considered in a different light, as there are social implications in addition to the economic aspects.

The use cases
The technical action group within the DataPortability project has been attempting to create a list of scenarios that constitute use cases for DataPortability enablement. This is crucial because to develop the blueprint, we also need to know what exactly the blueprint applies to.

I think it’s time, however, we recognise that this isn’t merely a technical issue, but an industry issue. So now that we have begun the research phase of the DataPortability Project, I ask you and everyone else to join me in discussing exactly what economic benefit DataPortability creates. Rather than asking whether Facebook is going to give up its users’ data to other applications, we need to be thinking about the end value we strive to achieve by having DataPortability.

Portability in context, not location
When the media discuss DataPortability, please understand that a user simply being able to export their data is quite irrelevant to the discussion, as I have outlined in my previous posting. What truly matters is “access”. The ability for a user to command the economic benefits of their data is the ability to determine who else can access that data. Companies need to recognise that value creation comes from generating information – which is simply relationships between different data ‘objects’. If a user is to get the economic benefits of using their data from other repositories, companies simply need to allow the user to delegate permission for others to access that data (see the sketch below). Such a thing does not compromise a company’s competitive advantage, as they won’t necessarily have to delete the data they hold about a user; rather, it requires them to realise that holding a user’s data in custody – which gives them complete access – is itself an advantage they can use to come up with innovative new information products for the user.
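As a toy sketch of what access-by-delegation (rather than export) might look like, consider the following Python; all names here are hypothetical, and it is only meant to show the shape of the idea – the host keeps custody of the data, while the user hands a third party a revocable token for one slice of it:

```python
# Hypothetical sketch of delegated access: no export, no transfer of custody.
import secrets

class ProfileStore:
    def __init__(self):
        self._data = {"status": "single", "city": "Sydney"}
        self._grants = {}  # token -> field the bearer may read

    def grant(self, field):
        """User delegates read access to one field; returns a revocable token."""
        token = secrets.token_hex(8)
        self._grants[token] = field
        return token

    def revoke(self, token):
        self._grants.pop(token, None)

    def read(self, token):
        """A third party reads the delegated field -- no bulk data export."""
        if token not in self._grants:
            raise PermissionError("access revoked or never granted")
        field = self._grants[token]
        return {field: self._data[field]}

store = ProfileStore()
token = store.grant("status")   # user authorises a third-party site
print(store.read(token))        # {'status': 'single'}
store.revoke(token)             # relationship ends, so access ends
```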

So what’s my point? When discussing DataPortability, let’s focus on the value to the user. And the next time the top tech blogs confront the companies supporting the movement with a simplistic “when are you going to let users take their data completely off?”, I am going to burn my bra in protest.

Disclosure: I’m a heterosexual male who doesn’t cross-dress

Update: I didn’t mean to scapegoat Eric from VentureBeat, who is a brilliant writer; I simply used him as an example of the language being used across the entire community, which now needs to change. With the DP research phase now officially underway for the next few months, the questions we ask should be more open-ended, as we at the DataPortability project have realised these issues are complex and we need the entire community to come to a consensus. DataPortability is no longer just about exporting your social graph – it’s an entirely new approach to how we will do business on the net, and as such requires us to fundamentally re-examine a lot more than we originally thought.

Control doesn’t necessarily mean access

I was approached by multiple people – PR professionals and journalists alike – after I gave my presentation at the Kickstart forum yesterday. Whilst I doubt DataPortability is something they will pick up for feature stories, given the product focus these journalists have, the conversations with them were extremely encouraging and I am thankful for their feedback.

One conversation particularly stood out for me, with John Hepworth – a former engineer who has been freelance writing for over 20 years – in the context of the ability to port your health information. I’ve been thinking a lot about the scenario whereby consumers can move their health records between clinics, and with Google Health launching and the discussions in the DataPortability forums, I am certainly not alone. Something that caught my attention was Deepak Singh, who recently posted an interesting perspective: we shouldn’t give users access to their health records, because they will make uninformed judgments if they have control of them. That’s an excellent point, but one which prickles at the whole issue of not just who owns your data, but who should have access to it (including yourself).

Hepworth provided a simple but extremely insightful position to this issue: you don’t need to give users the ability to see their data, for them to control it. Brilliant!

The benefits of controlling your data need to be looked at not just in the context of the laws of a country, but in terms of the net benefit to the individual. Comments provided by your physicians in your medical history, whilst they arguably belong to the individual they are about, also need to be restricted to people who are qualified to make educated judgments. In other words, you should have the right to port your data to another doctor, but you should only have access to it in the presence of a qualified doctor.

DataPortability should not equate to you seeing your data all the time – rather, it should be about determining how it gets used by others.

Study finds 3 out of 10 people don’t use the internet

A fascinating study was recently published which indicates to me how early-stage the internet is as infrastructure. It says that 29% of all U.S. households (31 million homes) do not have Internet access and do not intend to subscribe to an Internet service over the next 12 months. Even more interesting is the reason why these people don’t have internet access.

Forty-four percent of this group are “not interested in anything internet”. Seventeen percent are “not sure how to use the internet”. In other words, around 18% of all consumers in the world’s leading nation for internet access and usage either don’t see the point of the internet or don’t know how to use it. Couple this with the fact that much of the developing world hasn’t yet got the infrastructure to connect (but over the next decade will), and there is still a ridiculous amount of growth to come in the internet space.

Mass media execs: if you are struggling now for audience share against this new medium, good luck to you in five years’ time.
