Frequent thinker, occasional writer, constant smart-arse


Google Wave’s dirty little secret

Google has announced a new technology that is arguably the boldest invention and most innovative idea to come out for the Internet in recent years (full announcement here).

It has the potential to replace email and instant messaging, and to create a new technical category for collaboration and interactivity in the broadest sense. However, hidden in the details is a dirty little secret about the practicality of this project.

Google Wave is transformative, but it is also a technical challenge. If adopted, it will entrench cloud computing and, ultimately, cement Google’s position as the most dominant company in the world.

The challenge in its development
For the last two years, the Google Sydney office has been working on a “secret project”. It got to the stage where the office – which runs the Google Maps product (another Sydney invention) – was competing for resources and had half its staff dedicated to developing it. So secret was the project that only the highest level of Google’s management team in Mountain View knew about it. Googlers in other parts of the world didn’t know about it; people like me in the local tech scene knew it was something big, but not what exactly.

Although I didn’t know exactly what it was, I was aware of the challenge. Basically, it boils down to this: Wave is a difficult engineering feat to pull off. The real-time collaboration at the core of this technology requires a huge amount of computational resources to work.
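To give a feel for why, here is a minimal sketch of operational transformation, the family of techniques that underpins real-time concurrent editing. Everything below is illustrative (Wave’s actual algorithm handles far richer operations than plain text inserts), but it shows the kind of reconciliation a server must perform, and rebroadcast, for every keystroke from every participant:

```python
# Minimal sketch of operational transformation (OT). Illustrative only:
# real systems like Wave handle richer operations than plain inserts.

def transform(op, against):
    """Shift insert `op` so it still applies after concurrent `against`."""
    pos, text = op
    other_pos, other_text = against
    if other_pos <= pos:  # the other insert landed before ours: shift right
        return (pos + len(other_text), text)
    return op

def apply_op(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

# Two users edit "wave" at the same time.
doc = "wave"
op_a = (0, "google ")  # user A prepends
op_b = (4, "!")        # user B appends

# Each side applies its own edit first, then the transformed remote edit;
# both converge on the same document.
converged_a = apply_op(apply_op(doc, op_a), transform(op_b, op_a))
converged_b = apply_op(apply_op(doc, op_b), transform(op_a, op_b))
assert converged_a == converged_b == "google wave!"
```

With many participants, every new edit has to be transformed against every concurrent edit and pushed back out to everyone – which is where the server-side cost piles up.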

It needs everyone to use it
Although we are all still digging into the details, one thing I know for a fact is that Google wants to make this as open as possible. It wants competitors like Microsoft, Yahoo and the entire development community to not just use it, but be a big driver in its adoption. For collaboration to work, you need people – and it makes little sense to restrict it to only a segment of the Internet population (much like email). Google’s openness isn’t driven by charity, but by pure economic sense: it needs broad-based market adoption for this to work.

[Figure: Wave federation diagram]
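As a rough sketch of what federation means here (all names below are invented, and the real wire protocol, built on XMPP, is far more involved): a wave provider applies an edit locally, then forwards it to the home server of every remote participant, much the way mail servers route per-domain.

```python
# Illustrative per-domain federation, in the spirit of email routing.
# Invented names; the real Wave federation protocol rides on XMPP.

def remote_domains(participants, local_domain):
    """e.g. 'bob@acme.com' -> 'acme.com', excluding our own domain."""
    return {address.split("@", 1)[1] for address in participants} - {local_domain}

def federate(operation, participants, local_domain, send):
    # One delivery per remote provider, no matter how many of its users
    # are on the wave; each provider fans out to its own users.
    for domain in sorted(remote_domains(participants, local_domain)):
        send(domain, operation)

federate(
    operation={"insert": (0, "hi ")},
    participants=["alice@example.org", "bob@acme.com", "carol@acme.com"],
    local_domain="example.org",
    send=lambda domain, op: print(f"-> wave server at {domain}: {op}"),
)
# -> wave server at acme.com: {'insert': (0, 'hi ')}
```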

Only a few can do it
However, with lots of people using it comes another fact: only those with massive cloud computing capabilities will be able to run it. Google practically invented and popularised the most important trend in computing right now – a trend where the industrial age’s economies of scale have come back into play, reminding us that there are aspects of the Information Economy that are not entirely different from the past. What Google’s Wave technology does is give us a practical application that relies on cloud computing for its execution. And if the Wave protocol becomes as ubiquitous as email and instant messaging – and goes further to become core to global communications – then we will see the final innings of the contest over who runs this world.

Wave is an amazing technology, and I am excited to see it evolve. But mark my words: this open technology requires a very expensive setup behind the scenes. And those that can meet this setup will be our masters of tomorrow. Google has come to own us through its innovation in information management – now watch Act II as it does the same for communications.

Why open wins

Open standards matter, but so does water; and just as water alone does not create a Mona Lisa or a Hoover Dam, open standards alone do not really matter that much to what we are trying to do with the DataPortability Project in the longer term. But they matter for the industry, which is why we advocate for them. Here’s why.

[Image: Hoover Dam]

Bill Washburn is one of those soft-spoken individuals who has driven a lot of change, such as leading the charge to open up government technology (the Internet as we know it) to the rest of the world. He’s been around long enough to see the trends, so I asked him: why does open always win? What is it about the walled garden that makes it only temporary?

Bill gave me two reasons: technologies need to be easy to implement, and they also need to be cheap. It may sound obvious, but below I offer my interpretation of why, in the context of standards.

1) Easy to implement
If you are a developer constantly implementing standards, you want the easiest one to implement. Having to learn a new standard each time you need to do something is a burden – you want to learn how to do something once, and that’s it. And if there is a choice between two standards that do the same thing, guess which one will win?

That’s why you see the technically inferior RSS dominate over Atom. Both allow syndication and give the end user the same experience, but for a developer trying to parse feeds, Atom is an absolute pain in the buttocks. Compare also JSON and XML – the former a data structure that’s not even really a standard, the latter one of the older data format standards on the Internet. JSON wins out for asynchronous technologies in the web2.0 world because it’s just easier to work with. Grassroots-driven microformats versus W3C-endorsed RDF? Same deal. RDF is academically brilliant – but academic isn’t real world.
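To make the developer-effort gap concrete, here is the same hypothetical feed item consumed as JSON and as XML in Python (the payload and field names are made up for illustration):

```python
import json
import xml.etree.ElementTree as ET

# The same hypothetical item in both formats.
json_payload = '{"title": "Why open wins", "link": "http://example.com/post"}'
xml_payload = (
    '<rss version="2.0"><channel><item>'
    '<title>Why open wins</title>'
    '<link>http://example.com/post</link>'
    '</item></channel></rss>'
)

# JSON: one call, and you already have a native dict.
item = json.loads(json_payload)
print(item["title"], item["link"])

# XML: parse a tree, then navigate paths and unwrap text nodes - and real
# feeds pile namespaces, CDATA and encoding quirks on top of this.
root = ET.fromstring(xml_payload)
item_el = root.find("./channel/item")
print(item_el.find("title").text, item_el.find("link").text)
```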

2) Cheap to implement
This is fairly obvious – imagine you had two ways of doing the same thing, but one was free and the other carried licensing costs. What do you think a developer or company will use? Companies don’t want to pay licensing fees, especially for non-core activities, and developers can’t afford licence fees for a new technology. Entities will bias their choices towards the cheaper of the two – like free.

I think an interesting observation can be made about developer communities. Look at people in the .NET community compared to, say, Python advocates. You tend to find Python people are more open to collaboration, meetups and other idea exchanges, whereas the .NET developers (a proprietary platform) keep to themselves. With the Microsoft-owned .NET suite requiring a lot more cost to implement, this actually holds the technology back from dominating the market. If people aren’t collaborating as much as they are around rival technologies, that means less innovation and higher costs of learning – a longer-term barrier to market adoption.

The most important point is about the actual companies that push these standards. Let’s say you are Facebook, pushing your own standard which, although free, can only be modified and adapted by the Facebook team. That’s going to cost resources – at the very least, a developer overseeing it; maybe a team of evangelists to promote your way of thinking, and a supervisor to manage that team. If you are the sole organisation in charge of something, it’s going to cost you (and not anyone else) a lot of money.

[Image: bridge being built at the Hoover Dam]

Compare that to an open community effort, where lots of companies and people pool their resources. Instead of one entity bearing the cost, hundreds of entities bear it. On a per-entity basis, it’s actually cheaper to create a community-driven standard. And honestly, when you think about it, which standard gets implemented has little to do with a company’s core strategic objectives. Sure, they might get some marketing out of it (the Wikipedia page saying “this company created this standard”), but realistically the reward goes more to the individuals within these companies, who can now put on their resumes “I created this technology that everyone is using now”.

Why Open wins
In the short run, open doesn’t win, because it’s a longer process that partly relies on an industry reacting to a proprietary approach. In the long run, Internet history has proven that the above two factors always come to dominate. Why? Because infrastructure is expensive to build and maintain, and it’s usually better to pool our efforts to build it. You don’t want to spend your money on something that’s for the public benefit, only to have no one in the public using it – do you, Mr Corporate Vice-President?

So open it’s closed

In 2008, the DataPortability Project successfully promoted the concept of “data portability”. However, it’s become too successful – people now make announcements that claim to be “data portability” but, misleadingly, are not. Further, the term “open” has become the new black. But when people say they are open – are they really?

Status update on the DataPortability Project & context
The DataPortability Project has now developed a strong, transparent governance model for making decisions, one that embeds a process for achieving outcomes. We have also formulated a vision that forms the core DNA of the Project and allows us to align our efforts. Organisationally, we are currently working on a legal entity to protect our online community, whilst also ensuring we work with others in the industry – such as through the discussions we’ve had around the IDTBD proposal with the Liberty Alliance, Identity Commons and others.

Our brand communications are nearly finalised (this time, legally vetted), and a refreshed website with a new blog has been rolled out. We’ve put out calls for positions and have already finalised our agreement with a new community manager. (Positions for our analyst roles are now open, if you are interested.)

We have a Health Care task force that’s just started, looking to broaden our work into another sector of the economy. We also have a Service Provider Grid task force finalising its work, which, via an online interface and API, will allow people to query which open standards various entities use. And we have a task force that will provide sample EULA and ToS documents that encourage data portability and further our vision.

The DataPortability vision states that people should be able to reuse their data. Traditionally, people have taken this to mean “physically” porting their data between web services. Whilst this applies in some cases, it is also about access, as I recently argued.

So to synchronise with our work on the EULA/ToS task force, I believe we need a technology equivalent, one which will also give additional value to our Service Provider Grid. Open standards underpin our vision, and we need to ensure we only support efforts that we believe are worthy.

Hi, I’m open
Open standards have been a core value that the DataPortability Project has advocated since its founding – to the point where they’ve even been confused with its core mission (they’re not). For us, they are an enabler, and it has always been in our interest to see all of them work together.

Standards are important because they allow interoperability. For people to be able to access their data from multiple systems, those systems need to be able to communicate with each other easily. Likewise, for people to get value out of any data they export from a system, they need to be able to import it elsewhere – and that can only happen if the data is structured in a way that is compatible with the other system.
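One way to see why a shared standard matters is to count converters. The arithmetic below is a hypothetical back-of-envelope sketch: without a common format, every pair of services needs its own converter; with one, each service just needs an exporter and an importer.

```python
# Back-of-envelope: bespoke pairwise converters vs. one shared standard.
# Numbers are purely illustrative.
n_services = 10
pairwise = n_services * (n_services - 1)  # a converter per ordered pair: 90
with_standard = 2 * n_services            # one exporter + one importer each: 20
print(pairwise, with_standard)            # 90 20
```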

We advocate “open” because we want to minimise the cost of doing business for those wanting to comply with our vision. However, during 2008 the term “open standards” was over-used to the point of abuse.

An open standard is a standard that is publicly available and has various rights of use associated with it. But really, what exactly is open?
– its availability?
– the authority controlling the standard?
– the decision-making process over the standard?

Liberty Alliance defines it as:

– The costs for the use of the standard are low.
– The standard has been published.
– The standard is adopted on the basis of an open decision-making procedure.
– The intellectual property rights to the standard are vested in a not-for-profit organisation, which operates a completely free access policy.
– There are no constraints on the re-use of the standard.

That, I believe, perfectly encapsulates what an open standard should be. However, as someone who spends his days applying international accounting standards to what companies report in their financials, I can assure you that simply listing the criteria is only half the fun. Interpreting them is a whole debate in itself.

To my eye, most of these “open” efforts don’t fit those criteria. To illustrate, I am going to shame myself, as I am a member of a workgroup that claims to be open: the APML workgroup. The group fails the open test because:
– it has a closed workgroup making the decisions, without a clearly defined decision-making procedure
– it does not have a non-profit behind it, with the copyright owned by a company (although it’s made clear there is no intention to issue patents)
– it has no clear rights attached to it

So does that mean every standards group needs to create a legal entity for it to be open? Thankfully no – the Open Web Foundation (OWF) will solve this problem. Or will it? Whilst the decision-making process is “open” (you can read the mailing list where the discussion occurs), what about the way it selects members? It’s dependent on being invited. That’s Open with a big But.

How about OpenID (of which I am also a member) – that poster child for “open standards”? On the face of it, it fits the bill. But did you know OpenID contains other standards within it? As my friend and intellectual mentor Steve Greenberg said:

[Image: Steve Greenberg’s remark on OpenID’s dependence on XRDS]

Now thankfully, XRDS fits the bill as a safe standard. Well, kind of. It links to another standard, XRI, which is alleged to be subject to patent claims. Well, sort of. Kinda. Oh God, let’s not get into that discussion again. But don’t give poor APML, the OWF or OpenID too much grief – I could raise nastier questions about other groups. However, this isn’t about shaming; it’s about raising questions.
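To make the dependency concrete, here is a rough sketch of Yadis-style discovery as OpenID 2.0 uses it: ask a URL for its XRDS document and read off the service types it advertises. This is a simplification for illustration – real discovery also handles the X-XRDS-Location header and HTML-based fallbacks.

```python
# Rough sketch of Yadis/XRDS discovery (simplified: no error handling,
# no X-XRDS-Location redirects, no HTML-based fallback).
import urllib.request
import xml.etree.ElementTree as ET

XRD_NS = "xri://$xrd*($v*2.0)"  # the XRD 2.0 namespace used by XRDS

def discover_service_types(claimed_id):
    """Fetch claimed_id's XRDS document and list the service types in it."""
    request = urllib.request.Request(
        claimed_id, headers={"Accept": "application/xrds+xml"})
    with urllib.request.urlopen(request) as response:
        document = response.read()
    root = ET.fromstring(document)
    return [el.text for el in root.iter("{%s}Type" % XRD_NS)]

# For an OpenID provider, discover_service_types("https://example.com/openid")
# might include "http://specs.openid.net/auth/2.0/signon" in the result.
```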

The standards communities are fraught with politics, they are murky, and they are now creeping into the infrastructure of our online world. As a proponent of these “open standards”, I think it’s time we started looking at them with a more critical eye. Yes, I recognise that all the questions I’m raising are fixable – but that’s exactly why I raise them, because outside of traditional authorities like the W3C they are currently being swept under the carpet.

It’s time some boundaries were set on what is effectively the brand of Open. It’s also time the term was defined, because quite frankly, it’s lost all meaning. I’ve listed some criteria – but what we really need is consensus on what ‘the’ criteria for Open should be.

Silicon Beach Australia – the movie!

Last year in June, I said on this blog:

…David Bolliger coined the term “Silicon Beach” to refer to a bunch of Sydney-based start-ups – continuing an international trend of regionalising hotspots of tech innovation that aspire to be like Silicon Valley (my other favourite is New York as Silicon Alley). Although it’s not the first time the term has been used, everyone from Perth, Melbourne, Newcastle, Brisbane and the rest is claiming to be the real Silicon Beach.

So seeing as our population is only 20 million, and we are one big island continent anyway, I think I am going to settle on calling Australia’s tech industry as a whole “Silicon Beach”.

After separate discussions with Bronwen Clune and Mick Liubinskas, I’ve been thinking over the last few months about how to actively build a strong Internet community here in Australia. With Bronwen, I was investigating the possibility of my firm hosting a conference; with Mick, I’ve been doing weekly Friday drinks as a way for people to get to know each other. And so, the other week, I registered the domain name SiliconBeachAustralia.org with no real plan for what to do with it. To me, it just seemed like the natural name to brand such attempts.

Although the site’s been up for a few weeks, I’ve been busy, and it was only last night that I created a Google group and announced it by inviting people I knew. It hasn’t even been 24 hours, and already some great discussions have been had with individuals I consider pillars of the Australian community… as well as people I didn’t even know existed!

Many thanks to Kim for his coverage, as well as to Renai for raising awareness. It’s satisfying to see such an open embrace by so many good people.

So join the conversation! As I said on the discussion forum: “Now what? Plan – what plan?” The last time I used that slogan, it was as the title of this blog when I went backpacking around Europe for nine months – and that was one of the most amazing experiences ever. Here’s hoping for another roller-coaster ride.

What is the DataPortability Project

When we created the DataPortability workgroup in November 2007, it was after discussion amongst a few of us about how to further explore an idea: a vision for the future of the social web. By working together, we thought we could make real change in the industry. What we didn’t realise was how quickly, and how big, the attention generated by this workgroup would become. A press release has been issued that details the journey to date and highlights some interesting tidbits. What I am going to write below is how my own thoughts have evolved over the last few months, and what I think DataPortability is.

1) Getting companies to adopt open, existing standards
RSS, OpenID, APML, OAuth, RDF and the rest. These technologies exist, some of which have been around for many years. Everyone who understands what they are knows that they rock. So if these standards are all so great, why hasn’t the entire technology industry adopted them yet? What we need now is awareness, education and, in some cases, pressure on the industry heavies to adopt them.

2) Create best practices of implementing these standards
When you are part of a community, you are in the know, and you don’t realise how the outside world looks in. Let the standards communities focus their precious energies on creating and maintaining the technologies; DataPortability can help provide resources for people to implement them. Is providing PHP4 support for OAuth really a priority? It isn’t for them – but by pooling a community of people with diverse skillsets who are committed to the overall picture, it has a better chance of happening.

3) Synthesise these open standards to play nice with each other.
All these different communities have been working in isolation, doing their own thing. An example is how Yadis-XRDS is working on service discovery with a lacklustre catalogue. Do we just leave them to do their own thing? Does someone else in Bangalore create his own catalogue? (Which is highly likely, given the under-exposure of this key aspect to the groups needing it for the other standards, and the current state it’s in.) Thanks to Kaliya for noting that the XRDS guys have been more than proficient at working with other groups – “how do you think their spec is part of the OpenID spec?”. Julian Bond goes on to say: “Yadis-XRDS is only months old and XRDS-Simple is literally days old… Having trouble thinking of a community that is working in isolation. And that isn’t likely to be hugely offended if you suggested it.” So let me leave the examples there, and just say that the DataPortability Project, in defining technical and policy blueprints, can identify issues and, from the bigger-picture perspective, focus attention on where it’s needed. By embracing the broader community and focusing our attention on weaknesses, we can ensure no one is reinventing wheels.

4) Communicate all the good things the existing communities are doing, under the one brand, to the end user.
RSS is by far the most recognised open standard. Have you ever tried explaining RSS to someone outside of the tech industry? I have. Multiple times. It’s like I’ve just told them about a future with flying cars and settlements on Mars. I’ve done it in the corporate world, to friends, family, girls I date, guys I weight-train with and anyone else. Moving on to OpenID – does anyone apart from Scoble and the technorati, who try every web service they can, really care? Most people use Facebook and Hotmail (the cutting edge are using Gmail), and that’s it. On your next trip to Europe, ask a cultured French (wo)man if they know what OpenID is, why they need it, and what they can do with it. Now add RSS to the mix. And APML. And OAuth. Bonus if you can explain RDF to yourself.

Wouldn’t it be easier to just explain what DataPortability is, and the benefits that can be achieved by using all these standards together? Standards are invisible things that consumers shouldn’t need to care about; they just care about the benefits. Do consumers care about the standards behind Wi-Fi – or do they care about clicking “enable wireless” on their laptop and connecting to the Internet? If you go around evangelising the technical standards, the only audience you will get is the corporate IT departments, who couldn’t care less. The corporate IT guys respond to their customer- and client-facing colleagues, who in turn respond to consumers – and consumers couldn’t care less about how it’s done, just about what they can do. Have the consumer channel their demand, and it benefits the whole ecosystem.


[Image: the new DataPortability trustmark]

It has been said that the average consumer doesn’t care about DataPortability. Of course they don’t – we are still in the investigation phase of the Project, which will later evolve into the design and then the evangelising phases. We know people would want RSS, OAuth and the rest of the alphabet soup – so let’s use DataPortability as a brand through which we can communicate this. Sales is about creating demand – let’s coordinate our ‘selling’ to make it overwhelming, and make it easy for consumers to channel that want in a way they can relate to. You don’t say “OAuth” to them; you say “preventing password theft”.
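And “preventing password theft” is exactly the right framing. As a toy contrast (everything below is invented for illustration, not any real service’s API): with the old anti-pattern, a third party holds your password and can do anything, forever; with an OAuth-style token, the grant is scoped to one capability and revocable at any time.

```python
# Toy contrast: password sharing vs. an OAuth-style scoped token.
# All names are invented for illustration.

class ToyService:
    def __init__(self):
        self._password = "hunter2"
        self._contacts = ["alice@example.com", "bob@example.com"]
        self._tokens = {"tok-123": {"read_contacts"}}  # token -> granted scopes

    def full_login(self, password):
        # The anti-pattern: whoever holds the password controls everything
        # (mail, settings, contacts) until the user changes it.
        if password != self._password:
            raise PermissionError("bad password")
        return self

    def read_contacts(self, token):
        # The OAuth-style grant: limited to one scope, checkable, revocable.
        if "read_contacts" not in self._tokens.get(token, set()):
            raise PermissionError("token revoked or lacks scope")
        return self._contacts

    def revoke(self, token):
        self._tokens.pop(token, None)  # kill the grant; password unchanged

service = ToyService()
print(service.read_contacts("tok-123"))  # third party uses the scoped token
service.revoke("tok-123")                # user withdraws access later
```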

5) Make the business case that a user should get open access to their data
Why should Facebook let other applications use the data it has on its servers? Why should Google give up all the data it has about its users to a competitor? Why should a Fortune 500 adopt solutions that decentralise its control? Why should a user adopt RDF on their blog when they get no clear benefit from it? And is a self-trained PHP coder who can whack something together going to be able to articulate all that to the VCs?

The tech industry has this obsession that nothing gets done unless the developers are on board. No surprises there – if we don’t have an engineer to build the bridge, we are going to have to keep jumping off the cliff hoping we make it to the other side. But at the same time, if you don’t have people persuading those who would fund this bridge, or the broader population about how important it is for them to have it, that engineer can build what he wants – the end result is that no one will ever walk on it. Funny how web2.0 companies suck at the revenue-model thing: over-hype on the development innovation, under-hype on the value proposition to the ordinary consumer who funds their business.

Developers need to be on board because they hassle their bosses, and sometimes that evangelising from within works; but imagine if we got the developers’ bosses’ bosses on board, because some old bear on the board of directors wants DataPortability after his daughter explained it to him (the same daughter who told him about Facebook and YouTube). I can assure you – I’ve seen it first-hand with the senior leadership at my own firm – this is exactly what is happening.

Intel is one of the best-selling computer-chip companies in the world. Do you really think that, as a consumer, I care what chip my computer runs on? Logically, no. But the “Intel Inside” marketing campaign gave them a monopoly, because end consumers would ask “does it have Intel inside?”, and this pressure forced Intel’s customers (IBM and the rest) to actually use Intel. Steve Greenberg corrects me: “The Intel Inside campaign came a decade after Intel took over the world. It wasn’t what got them there. It was in response to Microsoft signaling that they liked AMD. Looked like AMD was going to take off… but then they didn’t.” So my facts were slightly wrong, but the point still stands.
At the same time, it isn’t just about political pressure – it’s also about education. I genuinely believe opening up your data is a smart business strategy that will change the potential of web services.

You make people care by giving them an incentive (business opportunities; customer political pressure; peer pressure as individuals and as an industry, which later evolves into industry norms). The semantic web communities, the VRM communities, the entire open standards communities – all have a common interest in doing this. DataPortability is culture change on an industry-wide level that will improve the entire ecosystem. Apparently innovation has died – I say it’s just beginning.