Frequent thinker, occasional writer, constant smart-arse


Facebook’s no longer a startup

Facebook announced today that it became cash-flow positive in the last quarter. This is a big deal, and should be looked at in the broader context of the Internet’s development and the economy’s resurgence.

The difference between a start-up and a growth company
There are four stages in the life-cycle of a business: start-up, growth, maturity, and decline.

In tech, we tend to obsess over the “start-up” – a culture that idolises small, nimble teams innovating every day. But there is a natural consequence of getting better, bigger, and more dominant in a market – you become a big company. And big companies can do a lot more (and less) than they could as startups.

Without going too much into the differences between the stages, it’s worth mentioning that a functional way to differentiate a “startup” business from a “growth” business is its financial performance. A startup may have revenues and expenses, but its revenues don’t tend to cover the operating costs of the business. A growth business, on the other hand, is experiencing the same craziness as a start-up – but is now self-supporting, because its revenues cover its costs.

This makes a big difference to a company, not least in its longer-term sustainability. When a business is cash-flow negative, it risks going bankrupt, and management’s attention can be distracted by attempts to raise money. But now that Facebook is finally cash-flow positive, it has one less thing to worry about, and can grow with a focus less on survival and more on dominance.


Looking at history
Several years after the Dot Com bubble, I remember reading an article by a switched-on journalist. He was talking about the sudden growth of Google, and how Google could potentially bring the tech industry back from the ashes. He was right.

Google has created a lot of innovative products, but its existence has had two very important impacts on the Internet’s development.

First of all, there was AdSense – an innovative new concept in advertising that millions of websites around the world could participate in. Google gave the web a new revenue model that has supported millions of content creators, utility providers, and marketplaces powered by the Internet.

Secondly, Google created a new exit model. Startups now had a hungry acquisition machine, giving them more opportunities to get funded, as Venture Capitalists no longer relied on an IPO to make their money – a route effectively killed by the over-engineered requirements of Sarbanes-Oxley.

Why Facebook going cashflow positive is a big deal
Facebook is doing what Google did, and its money and innovation will drive the industry to a new level. Better still, it’s long been held that technology is what helps economies achieve growth again, and so the growth of Facebook will see a rebuilding not only of the web economy but of the American one. The multiplier effect of Facebook funding the ecosystem will be huge.

And just like Google, Facebook will likely be pioneering a new breed of advertising network that benefits the entire Internet. And even if it fails in doing that, its cash will at least fund the next hype cycle of the web.

So mark this day as the day the nuclear winter ended – it’s spring time, boys and girls. We may not have a word like Web2.0 to describe the current state of the Internet’s evolution, but whatever it’s called, that era has now begun.

Opera’s Unite is democratising the cloud

Opera, the Norwegian browser with a little under 1% share of the English-language market, has made an interesting announcement. Following a much-hyped mystery campaign, “Opera Unite” has been announced as a new way to interact with the browser. It transforms the browser into a server – so that your local computer can interact across the Internet in a peer-to-peer fashion. Or in simpler words, you can plug your photos, music and post-it notes into your Opera Unite installation – and be able to access that media anywhere on the Internet, be it another computer or your mobile phone. I view this conceptually as an important landmark in data portability. The competing browser company Mozilla may lay claim to developing Ubiquity, but Opera’s announcement is a big step towards ubiquity the concept.

Implications: evolving the cloud to be more democratic
I’ve had a test drive, but I’m not going to rehash the functionality here – there is plenty of commentary going on now. (Or better yet, simply check this video.) I don’t think it’s fair to criticise it, as it’s still an early development effort – for example, although I could access my photos on my mobile phone (they were stored on my Mac), I could not stream my music (which would be amazing once they pull that off). But it’s an interesting idea being pushed by Opera, and it’s worth considering from the bigger picture.

There is a clear trend towards cloud computing in the world – one where all you need is a browser, and theoretically you can access anything you need from a computer (as your data, applications and processing power are handled remotely). What Opera Unite does is create a cloud that can be controlled by individuals. It embraces the sophistication home users have developed now that they have multiple computers and devices, connected in the one household over a home wireless network. Different computers can act as repositories for a variety of data, and their accessibility can be fully controlled by the individuals.

I think that concept is a brilliant one, and it brings this capability to the mass market (something geeks won’t appreciate, as they can already do this). It gives consumers an alternative way of storing their data while still having it accessible “via the cloud”. As the information value chain goes, people can now store their data wherever they wish (like their own households) and then plug those home computers into the cloud to get the functionality they desire. So for example, you can store all your precious pictures of your children and your private health information on your home computer, as you’ve chosen that to be your storage facility – but still get access to a suite of online functionality that exists in the cloud.

As Chris Messina notes, there is still an Opera proxy service – meaning all the data connecting your home computer to your phone and other computers still goes through a central Opera server. But that doesn’t matter, because it’s the concept of local storage via the browser that this embodies. There is the potential for competing, open source attempts at creating a more evenly distributed peer-to-peer model. Opera Unite matters because it has implemented a concept people have long talked about – packaged in a dead-easy way to use.
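To make the concept concrete, here is a minimal sketch in Python – purely illustrative, since Unite itself is built into the browser and relays traffic through Opera’s proxy – of the core idea: turning an ordinary home machine into a small web server, so files in a chosen folder become reachable over HTTP from any device.

```python
# A minimal sketch of the Unite idea (illustrative only): serve a local
# folder over HTTP so another device can fetch its contents.
import functools
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

def share_folder(folder):
    """Serve `folder` over HTTP on a random free port; return (server, port)."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=folder)
    server = socketserver.TCPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

# Demo: "share" a folder containing one photo placeholder, then fetch it
# back over HTTP, the way a phone or another computer would.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "photo.txt"), "w") as f:
    f.write("holiday snaps")

server, port = share_folder(folder)
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/photo.txt").read()
print(data.decode())  # prints: holiday snaps
server.shutdown()
```

What Unite adds on top of this toy version is the packaging: no port-forwarding, no setup, and a proxy so the machine is reachable from outside the home network.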

Implications: Opera the company
For poor little Opera, this finally gives it a focus for innovation. It’s been squashed out of the web browser market, and it’s had limited success on the mobile phone (its main niche opportunity, although one now facing a big threat from the iPhone). Google’s Chrome is fast developing into the standard for running SaaS applications over the web. But Opera’s decision to pursue this project is innovation in a new area, and more in line with what was first described as the data portability file system and the DiSo dashboard.

Like all great ideas, I look forward to Unite being copied, refined, and evolved into something great for the broader world.

The Internet, Iran, and Ubiquity

What’s happening right now in Iran is absolutely remarkable. It validates the impact that ubiquitous computing and ubiquitous connectivity to the Internet have – and their potential to disrupt even the most tightly controlled police state in the world.

The rejection of the election by the public is creating chaos, finally giving the people a reason to revolt against a regime they’ve detested for decades. The situation has the potential to escalate into bigger things – or it may well settle down – but regardless, it gives us a real insight into the future. That is, how these new technologies are transforming everything, and disgracing the mass media in the process.

What I saw in Iran
This blog of mine actually started as a travel blog, and one of the countries I wrote about was Iran. In my analysis of that beautiful country, I hypothesised that a revolution was brewing, based on societal discontent. What had prevented this revolution from ever occurring was the lack of a legitimate trigger – one that couldn’t be shut down by the Islamic propaganda.
An interesting thing I noted was that the official messaging of the country was anti-American and very over the top – no surprises there. But when you talked to people on a one-on-one level, you realised the Iranians actually respect the Americans – it was the establishment they detested. The regime had a tight grip on society, using Islam as a way of controlling people in much the same way the Bush Administration used patriotism and the War on Terror to do what it wanted and silence criticism. By controlling the media (amongst other things), it essentially kept society from revolting.

How ubiquity has changed that
In my previously linked article, I talk about the rising trend of a ubiquitous world – one where connectivity, computing, and data are omnipresent. Separately, we are seeing a rising trend toward a “network” operating model for internet businesses, as demonstrated by Facebook’s CEO recently saying that he imagines Facebook’s future as not being a destination site.
The implication is that people are now connected, can share information and communicate without restraint, and, better yet, can do so in a decentralised manner. Using Twitter to share information with the world isn’t reliant on visiting Twitter.com – it’s simply a text message away. It’s hard to censor something that’s not centralised. And it’s even harder to control and influence a population that no longer needs the mass media for information, but can communicate directly with each other on a mass scale.

Take note
Social media is having a remarkable impact. Not only are we getting better-quality reporting of events (with the mass media entirely failing us), but it’s enabling mass collaboration on a grand scale – one where even a government runs the risk of being toppled. I’m still waiting to hear from my Iranian friends to get their insight into the situation, but if there’s one lesson we should take note of, it’s that the Internet is transforming the world. Not only are industries being impacted, but society in the broadest sense. If a few picture-capable phones, a short-messaging communication service, and some patchy wireless Internet can rattle the most authoritarian state in the world, then all I can say is I’m gobsmacked at what else is on the horizon.

The ‘always-beta’ culture is affecting more than just journalism

Michael Arrington, one of the world’s most successful bloggers, writes about the latest battle he’s had against the mainstream media. He quotes the progressive journalism thinker Jeff Jarvis, who identifies the conflict as a difference between “process” and “product” journalism. This is a brilliant step forward in understanding the evolution of the news media (I highly recommend you read both posts), and to validate it, I will share how this very idea holds true in other domains (specifically, web2.0 in the enterprise).

How I sold the idea of a wiki at PwC
In 2006, I pitched to senior management at my firm – the world’s largest knowledge firm – that we were failing at how we did knowledge. More specifically, I argued that the systems in place were creating opportunity cost, because the way we viewed “knowledge” was wrong – and the systems we had only supported one type. As a solution, I proposed we implement Web2.0 tools as a way of changing this.

What I want to share more of however, is the actual problem I identified. It was a problem that senior management knew existed but in different words. What I did was give the intellectual justification that created the “ah-ha” moment.

Soft knowledge versus hard knowledge
Central to my thesis was that knowledge had a continuum, and that we have traditionally said knowledge was a product only. The physical output of knowledge in the industrial society has been some published form like a book or a magazine. This output therefore defined the perspective of this product – multiple reviews of the content, close scrutiny of what was being said, and careful consideration of what made the final cut. It was expensive to create a book, and so quite reasonably, we’ve aimed at making it perfect.

However, most knowledge within a firm doesn’t exist in a published form. When we talk about sharing knowledge within organisations, we are actually referring to having people talk to each other. Human conversation is the most established form of knowledge transfer, and until the alphabet was adopted by various cultures, it was the only way knowledge could be transmitted. This is called “soft knowledge”, and it’s not better or worse than “hard” knowledge – just a different state on the continuum.


Soft knowledge rapidly evolves. It never has a fixed state. Sometimes it never makes it up the line to become “hard”, solidified knowledge – but this doesn’t make it useless. In fact, when it comes to doing our work, this tacit knowledge doesn’t need a fixed state – it’s a fluid piece of knowledge that will never justify being published in a hard-bound book. Like a dynamic conversation between a group of people, the ideas are evolving so fast that trying to lock them down actually ruins the process. Soft knowledge is not so much a product as a process – like rapidly firing electrons remixing towards the goal of a more solid state.

The ‘always-beta’ culture
Technology is enabling us to evolve our ability to communicate. It’s gone beyond the one-to-one and one-to-many models we’ve traditionally been accustomed to, and now allows a many-to-many model. This new form of communication is allowing knowledge to be better captured in its ‘soft’ state. Categories are no longer useful, even though hierarchies and linearity are how we, as a society, are accustomed to viewing the world. We now need to become more adept at analysing knowledge through a network.

When it comes to information (including the news), the value comes not from its accuracy but its availability. If I have an emergency situation on a client, I want all the available options in front of me to assist my decision-making. As a professional, I can then assess which route to take. Although pre-certification of knowledge has value in accuracy, sometimes full accuracy results in a bigger opportunity cost: inaction.


There will always be a place for news as a product. But what we need right now is to understand that blogs do news differently – and that this might, potentially, be a better model for news itself. And whether you like it or not, it’s worked before – after all, we’ve been doing conversation now for close to 100,000 years. If we had never done it, we’d never have ended up where we are now.

Google Wave will take a generation

Chris Saad used to ask me questions about tech in the enterprise due to my history (I’ve got the battle scars from rolling out web2.0 at PwC), but this time he asked me after he wrote this post. So instead of telling him he’s wrong by email (ironic, given the topic), I’m going to shame him to the world!

Why Google Wave will take over ten years to turn into a trending wave
As I previously wrote when the news of Google’s new technology was announced, there is a hidden detail Google hasn’t announced to the world: it requires massive computational power to pull off. It doesn’t take a brain to realise it either – anyone that’s used a bloated Instant Messenger (like Lotus Sametime) probably understands this. All that rich media, group chat, real time – Jesus, how many fans are we going to need now to blow away the heat generated by our computer processors? Mozilla pioneered tabbed browsing – and it’s still struggling with that same idea, judging from your computer crashing when you have more than a few tabs open!

Don’t get me wrong, Google Wave is phenomenal. But it’s only the beginning. The fact Google has opened this up to the world is a good thing. But we need to be realistic, because even if this technology is distributed (like how email is), the question I want to know is how many users can one server support? I’d be surprised at these early stages if it’s more than a dozen (the demo itself showed there’s still a lot of work to be done). Do I have inside knowledge? No – just common sense and experience with every other technology I’ve used to date.

Why Google Wave won’t hit the enterprise in the next 12 months
Now to the point where Saad is *really* wrong. “20% of enterprise users will be using wave in the first 12 months for more than 50% of their comms (replacing email and wiki)“.


Yeah right. It’s going to take at least three years, with a stable and mature technology, for this to work. Email sucks, but it also works. IT departments, especially in this economy, are not going to try a new form of communication that’s half working and not a mass-adopted technology (wikis are still a new thing – there’s a cultural battle being fought within enterprises).

The real-time nature might even scare communications departments. Entire divisions exist in firms like mine to control the message sent to employees. If you reveal a message before the final version has been crafted, you’ve given away control of that message – the process now becomes just as important as the final message. I understand this functionality can be turned off, but I raise it to highlight how enterprises think.

Google Wave rocks
Again, don’t get me wrong. Google Wave blows my mind. But let’s be realistic here – big ideas take time. It took a while for Google the search engine to dominate. Heck, Gmail has taken nearly a decade to get to the point of being called dominant. You can fix bugs, deploy software, and roll out sales teams – but sometimes with big ideas, it’s a generational thing.

Wave will dominate our world communications – one day. But not for a while.

Google Wave’s dirty little secret

Google has announced a new technology that is arguably the boldest invention and most innovative idea to come out for the Internet in recent years (full announcement here).

It has the potential to replace email and instant messaging, and to create a new technical category for collaboration and interactivity in the broadest sense. However, hidden in the details is a dirty little secret about the practicality of this project.

Google Wave is transformative, but it is also a technical challenge. If adopted, it will entrench cloud computing and, ultimately, Google’s fate as the most dominant company in the world.

The challenge in its development
For the last two years, the Google Sydney office has been working on a “secret project”. It got to the stage where the office – which runs the Google Maps product (another Sydney invention) – was competing for resources and had half its staff dedicated to developing it. So secret was the project that only the highest level of Google’s management team in Mountain View knew about it. Googlers in other parts of the world either didn’t know about it, or – like people in the local tech scene such as me – knew it was something big but not what exactly.

Although I didn’t know exactly what it was, I was aware of the challenge. And basically, it boils down to this: it’s a difficult engineering feat to pull off. The real-time collaboration at the core of this technology requires a huge amount of computational resources to work.

It needs everyone to use it
Although we are all still digging into the details, one thing I know for a fact is that Google wants to make this as open as possible. It wants competitors like Microsoft and Yahoo, and the entire development community, to not just use it – but be a big driver of its adoption. For collaboration to work, you need people – and it makes little sense to restrict it to only a segment of the Internet population (much like email). Google’s openness isn’t driven by charity, but by pure economic sense: it needs broad-based market adoption for this to work.


Only a few can do it
However, with lots of people using it comes another fact: only those with massive cloud computing capabilities will be able to do this. Google practically invented and popularised the most important trend in computing right now – a trend where the industrial age’s economies of scale have come into play, reminding us that there are aspects of the Information Economy that are not entirely different from the past. What Google’s Wave technology does is give a practical application that relies on cloud computing for its execution. And if the Wave protocol becomes as ubiquitous as email and Instant Messaging – and goes further to become core to global communications – then we will see the final innings of the contest over who runs this world.

Wave is an amazing technology, and I am excited to see it evolve. But mark my words: this open technology requires a very expensive setup behind the scenes. And those that can meet this setup will be our masters of tomorrow. Google has come to own us through its innovation in information management – now watch Act II as it does the same for communications.

Why Twitter will make advertising an endangered species

Twitter has transformed the way we communicate in the world. That’s a big deal, because as human beings, the ability to communicate is how we broke free from the rest of the animal kingdom. Our entire society is based on this fact, and so it should come as no surprise that so are some of our biggest industries. Advertising, the billion-dollar industry that funds the web and media, is literally about communicating to the public.

More fundamentally, that’s how the market economy operates. There are three elements to a market: conversations, relationships, and transactions. In the industrial age, we forgot about this and came to see markets as purely transactional: we see a price attached to a mass-produced item, and that is meant to convey everything we need to know. But as Doc Searls shares in the story of his African friend, the conversation at the market is how selling used to be done, underpinned by a relationship.

My firm PricewaterhouseCoopers is one of the biggest firms in the world. In Australia, we are almost twice the size of our nearest competitor, and we manage to charge more than our competitors without consequence. I’ve often wondered how this could be, and it was only when I broke down the fundamental components of the market that I realised why. Price matters – but only when you don’t know anything else. When someone gets to know someone at the firm, they have conversations – and build a relationship. Those relationships are what make PwC the behemoth it is. It’s not that price is irrelevant, but with additional information to inform an economic buyer, it’s no longer the sole determinant.

Demand and Supply, sitting in a tree
Twitter co-founder Isaac “Biz” Stone recently defended the company’s stance on advertising as a revenue model. He rightly says the banner-ad model is dead – no kidding. But his brilliance comes through when he says they are exploring ways of “facilitating connections between businesses and individuals in meaningful and relevant ways”. Those words so simply explain more than just Twitter’s opportunity – they explain the entire future of advertising.

My half-cousin Alex Lambousis has created his own fashion label. Primarily a jeans business, he controls the entire design process, as he owns an industrial laundry, and so can compete on the global scale with a high-end jeans product. Like any startup, he’s trying to crack new markets.

Think about Alex’s issue. He’s a wholesaler who relies on retail outlets to sell his product – not exactly the best of customers. He’s reliant on celebrities wearing his clothes, and on negotiating special rack space in high-end fashion outlets, to get exposure for his world-class product. But it’s a hard market to crack – he’s had success, but is not where he wants to be. What’s a man to do?

Have a look at this search query I just did on Twitter’s community. Twitter allows you – in real time – to search for what people are talking about right now. My first attempt, without trying to be creative with the search string, yielded the following results:
new jeans - Twitter Search

A new customer just appeared on the market half a minute ago. A few of the others can be identified as market opportunities. Imagine if Alex simply responded to them, giving them a discount on his range or just pointing them to a blog post where he can showcase his in-depth knowledge. Before the Internet, for a wholesaler like Alex to make money, he relied on advertising in fashion magazines. Now he can interact directly with his customers, and even if he can’t make a sale, he can at least invest in a relationship for future sales.
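The opportunity above can be sketched in a few lines of Python. The tweets and the matching rule here are made up for illustration – this is not Twitter’s search API, just the concept: scan a real-time stream of public messages for purchase intent that a seller could respond to.

```python
# Toy sketch: find "leads" in a stream of public messages by matching a
# phrase that signals purchase intent. Sample data is hypothetical.
def find_leads(tweets, phrase):
    """Return the tweets that mention `phrase` (case-insensitive)."""
    return [t for t in tweets if phrase.lower() in t["text"].lower()]

stream = [
    {"user": "anna", "text": "Just ripped my favourite pair - need new jeans"},
    {"user": "bob",  "text": "Great coffee this morning"},
    {"user": "carl", "text": "Any recommendations for new jeans under $100?"},
]

for lead in find_leads(stream, "new jeans"):
    print(f"@{lead['user']}: {lead['text']}")
```

The point isn’t the string match – it’s that the query runs against what people are saying right now, so a reply is a conversation rather than an advertisement.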

He’s having a conversation and building relationships. Price is no longer the customer’s only source of information. The demand and supply curves have now been personified. That’s better than some poster stuck on a billboard – that’s a return to how our world used to work before factories pumped out standard-issue Model Ts.

I might not have solved Twitter’s revenue challenge in this post, but I sure as hell am excited about the future opportunities afforded by tools like Twitter for the economy.

Google should acquire Friendfeed, the leader in the real time web

May is real time month: everyone is now saying the latest trend for innovation is the real-time web. Today, we hear Larry Page, co-founder of Google, confirming to Loic Le Meur that real-time search was overlooked by Google and is now a focus for their future innovation.

With all this talk of Google acquiring Twitter, I’m now wondering why isn’t Friendfeed seen as the best candidate to ramp up Google’s real time potential.

Friendfeed does real time better than anyone else. Facebook rules when it comes to the activity stream of a person – meaning, tracking an individual’s life and, to some extent, media sharing. Twitter rules for sentiment, as it’s like one massive chat room, and to some extent link sharing. But Friendfeed, quite frankly, craps all over Facebook and Twitter in real-time search.

Why? Three reasons:

1) It’s an aggregator. The fundamental premise of the service is aggregating people’s lives and their streams. People don’t even have to visit Friendfeed after the initial sign-up. Once someone confirms their data sources, Friendfeed has a crawler constantly checking an individual’s life stream – one that’s been validated as their own. It doesn’t rely on a person Tweeting a link or sharing a video – it’s done automatically through RSS.

2) It’s better suited for discovery. The communities of Twitter, Facebook, and Friendfeed are as radically different as the cultures of America, Europe, and Asia. People that use Friendfeed literally sit there discovering new content, ranking it with their “likes” and expanding it with their comments on items. It’s a social media powerhouse.

3) It’s better technology. Don’t get me wrong, Facebook has an amazing team. But they don’t have the same focus. With fewer people and less money – but a stricter focus – Friendfeed actually has a superior product, specifically when it comes to real-time search. Their entire service is built around maximising it.
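The aggregation model in point 1 can be sketched simply: instead of waiting for users to post, the service polls each registered feed and merges the items into one stream. The sample RSS and the polling code below are illustrative assumptions, not Friendfeed’s actual crawler.

```python
# Toy sketch of a feed aggregator: poll registered RSS feeds and merge
# their items into a single stream. Sample feed data is hypothetical.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss><channel>
  <item><title>My holiday photos</title><link>http://example.com/1</link></item>
  <item><title>New blog post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def poll_feed(xml_text):
    """Extract (title, link) pairs from one RSS document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def aggregate(feeds):
    """Merge the items of every registered feed into a single stream."""
    stream = []
    for xml_text in feeds:
        stream.extend(poll_feed(xml_text))
    return stream

print(aggregate([SAMPLE_FEED]))
```

Because the crawler does the work, everything a user publishes anywhere becomes searchable the moment the feed is polled – which is exactly why an aggregator has a head start in real-time search.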

Up until now, I’ve been wondering about Friendfeed’s future. It has a brilliant team rolling out features I didn’t even realise I needed or could have. But I couldn’t see the value proposition – or rather, I don’t have the time to get the value out of Friendfeed, because I have a job that distracts me from monitoring that stream!

But now it’s clear to me that Friendfeed is a leader of the pack – a pack that’s now shaping up as a key trend of innovation. And given that the creator of Gmail and AdSense is one of the co-founders, I couldn’t imagine a better fit for Google.

The future is webiqitous

In the half century since the Internet was created – and the 20 years since the web was invented – a lot has changed. More recently, we’ve seen the Dot Com bubble and the web2.0 craze drive new innovations forward. But as I’ve postulated before, those eras are now over. So what’s next?

Well, ubiquity of course.


Huh?
Let’s work backwards with some questions to help you understand.

Understanding why we now need ubiquity, and what exactly that means, requires us to think about another two questions. The changes brought by the Internet are not one big hit, but a gradual evolution. For example, “open” has existed in the Internet’s culture since its first days: it wasn’t a web2.0 invention. But “openness” was recognised by the masses only in web2.0 as a new way of doing things. This “open” culture had profound consequences: it led to mass socialisation around content, and recognition of the power that is social media.

As the Internet’s permeation of our society continues, it will generate externalities that affect us (and that are not predictable). But the core trend can be identified, which is what I hope to explain in this post. And by understanding this core trend, we can comfortably understand where things are heading.

So let’s look at these two questions:
1) What is the longer term trend, that things like “open” are a part of?
2) What are aspects of this trend yet to be fully developed?

The longer term trend
The explanation can be found in why the Internet and the web were created in the first place. The short answer: interoperability and connectivity. The long answer – keep reading.

Without going deep into the history, the Internet was created to connect computers. Computers were machines that enabled better computation (hence the name). As they had better storage and querying capacities than humans, they became the way the US government (and large corporations) would store information. Clusters of these computers would be created (called networks) – and the ARPANET was built by the US government as a way of building connections between these computers and networks. More specifically, in the event of a nuclear war eliminating one of these computing networks, the decentralised design of the Internet would allow the US defense network to rebound easily (an important design decision to remember).

The web has a related but slightly different reason for its creation. Hypertext was conceptualised in the 1960s by a philosopher and a scientist as a way of harnessing computers to better connect human knowledge. These men were partly inspired by an essay written in the 1940s called “As We May Think“, where the chief scientist of the United States stated his vision whereby all knowledge could be stored on neatly categorised microfilm (the information storage technology at the time), and any piece of knowledge could be retrieved in moments. Several decades of experimentation in hypertext followed, until finally a renegade scientist created the World Wide Web. He broke some of the conventions of what the ideal hypertext system would look like, and created a functional system that solved his problem: connecting all these distributed scientists around the world and their knowledge.

So as is clearly evident, computers have been used as a way of storing and manipulating information. The Internet was invented to connect computing systems around the world; and the Web did the same thing for the people who used this network. Two parallel innovative technologies (the Internet and hypertext) used a common modern marvel (the computer) to connect the communication and information sharing abilities of machines and humans alike. With machines and the information they process, it’s called interoperability. With humans, it’s called being connected.

Cables

But before we move on, it’s worth noting that the inventor of the Web has now spent a decade advocating for his complete vision: a semantic web. What’s that? Well, if we consider the Web as the sum of human knowledge accessible by humans, the Semantic Web is about allowing computers to understand what the humans are reading. Not quite a Terminator scenario – rather, it is so computers can become even more useful to humans (currently, computers are completely dependent on humans for interpretation).

What aspects of the trend haven’t happened yet?
Borders that previously restrained us have been broken down. The Internet and hypertext are enabling connectivity for humans and interoperability for the computer systems that store information. Computers, in turn, are enabling humans to process tasks that could not be done before. If the longer term trend is connecting and bridging systems, then the demons to be demolished are the borders that create division.

So with that in mind, we can now ask another question: “what borders exist that need to be broken down?” What it all comes down to is “access”. Or more specifically, access to data, access to connectivity, and access to computing. Which brings us back to the word ubiquity: we now need to strive to bridge the gap in those three domains and make them omnipresent. Information accessible from anywhere, by anyone.

Let’s now look at this in a bit more detail
Ubiquitous data: We need a world where data can travel without borders. We need to unlock all the data in our world, and have it accessible by all where possible. Connecting data is how we create information: the more data at our hands, the more information we can generate. Data needs to break free – detached from the published form and atomised for reuse.

Ubiquitous connectivity: If the Internet is a global network that connects the world, we need to ensure we can connect to that network regardless of where we are. The value of our interconnected world can only achieve its optimum if we can connect wherever with whatever. At home on your laptop, at work on your desktop, on the streets with your mobile phone. No matter where you are, you should be able to connect to the Internet.

Ubiquitous computing: Computers need to become a direct tool available for our mind to use. They need to become an extension of ourselves, as a “sixth sense”. The border that prevents this, is the non-assimilation of computing into our lives (and bodies!). Information processing needs to become thoroughly integrated into everyday objects and activities.

Examples of when we have ubiquity
My good friend Andrew Aho over the weekend showed me something that he bought at the local office supplies shop. It was a special pen that, well, did everything.
– He wrote something on paper, and then via USB could transfer an exact replica to his computer in his original handwriting.
– He could perform a search on his computer to find a word in his digitised handwritten notes
– He was able to pass the pen over a pre-written bit of text, and it would replay the sounds in the room from when he wrote that word (keyed to the position on the paper, not the time sequence)
– Passing the pen over the word also allowed it to be translated into several other languages
– He could punch out a query with the drawn out calculator, to compute a function
– and a lot more. The company has now created an open API on top of its platform – meaning anyone can create additional features that build on this technology. The opportunity is equivalent to when the Web was created as a platform, and anyone was allowed to build on top of it.

The pen wasn’t all that bulky, and it did this simply by having a camera attached, a microphone, and special dotted paper that allowed the pen to recognise its position. Imagine if this pen could connect to the Internet, with access to any data and to cloud computing resources for more advanced queries.

Now watch this TED video to the end, which shows the power when we allow computers to be our sixth sense. Let your imagination run wild as you watch it – and while it does, just think about ubiquitous data, connectivity, and computation which are the pillars for such a future.

Trends right now enabling ubiquity
So from the 10,000-foot view that I’ve just shown you, let’s now zoom down and look at trends occurring right now – trends feeding this ever-growing push towards ubiquity.

From the data standpoint – which is where I believe this next wave of innovation will centre – we need to see two things: Syntactic Interoperability and Semantic Interoperability. Syntactic interoperability is when two or more systems can communicate with each other – so for example, Facebook being able to communicate with MySpace (say, with people sending messages to each other). Semantic interoperability is the ability to automatically interpret the information exchanged meaningfully – so when I Google Paris Hilton, the search engine understands whether I want a hotel in a European city, not a celebrity.
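To make the distinction concrete, here is a toy sketch (the message formats and field names are invented for illustration, not any real Facebook or MySpace API). Syntactic interoperability is two systems agreeing on a common wire format; semantic interoperability additionally requires agreeing on what the fields mean:

```python
import json

# Syntactic interoperability: both systems can parse the same wire format
# (JSON here), even though they name their fields differently.
facebook_msg = json.dumps({"from": "alice", "body": "hello"})
myspace_msg = json.dumps({"sender": "bob", "text": "hi"})

# Semantic interoperability: a shared vocabulary maps each site's local
# field names onto common concepts, so a machine can interpret both messages.
VOCAB = {
    "facebook": {"from": "author", "body": "content"},
    "myspace": {"sender": "author", "text": "content"},
}

def normalise(site, raw):
    """Translate a site-specific message into the shared vocabulary."""
    mapping = VOCAB[site]
    return {mapping[field]: value for field, value in json.loads(raw).items()}

print(normalise("facebook", facebook_msg))  # {'author': 'alice', 'content': 'hello'}
print(normalise("myspace", myspace_msg))    # {'author': 'bob', 'content': 'hi'}
```

Without the shared vocabulary, both sites can exchange bytes but neither can act on what the other says – which is exactly the gap the Semantic Web aims to close.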

The Semantic Web and Linked Data are one key trend enabling this: interlinking all the information out there in a way that makes it accessible for humans and machines alike to reuse. Data portability (where I focus my efforts) is another such trend, with the industry fast moving to let us move our identities, media and other metadata wherever we want.
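The Linked Data idea is simpler than it sounds. Facts are expressed as subject–predicate–object triples, with URIs as global identifiers, so data from different sources can be merged and followed like links. A minimal sketch (the URIs here are made up for illustration):

```python
# Each fact is a (subject, predicate, object) triple; URIs act as global
# identifiers, so triples published by different sources can be merged.
triples = [
    ("http://example.org/people/tim", "name", "Tim Berners-Lee"),
    ("http://example.org/people/tim", "invented", "http://example.org/things/www"),
    ("http://example.org/things/www", "name", "World Wide Web"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What did Tim invent? Follow the link to find the invention's name.
for _, _, invention in query("http://example.org/people/tim", "invented"):
    print(query(invention, "name")[0][2])  # World Wide Web
```

The point is that a machine can traverse these links without any human-written glue code for each pair of websites – reuse falls out of the data model itself.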

As Chris Messina recently said:

…the whole point of working on open building blocks for the social web is much bigger than just creating more social networks: our challenge is to build technologies that enhance the network and serve people so that they in turn can go and contribute to building better and richer societies…I can think of few other endeavors that might result in more lasting and widespread benefits than making the raw materials of human connection and knowledge sharing a basic and fundamental property of the web.

The DiSo Project that Chris leads is an umbrella effort spearheading a series of technologies that will lay the infrastructure for when social networking becomes “like air”, as Charlene Li has been saying for the last two years.

One of the most popular pieces of open source software (Drupal) has for a while now been innovating on the data side rather than on other features. More recently, we’ve seen Google announce it will cater better for websites that mark up their content in more structured formats, giving an economic incentive for people to participate in the Semantic Web. APIs (ways for external entities to access a website’s data and technology) are now flourishing, providing a new basis for companies to innovate and allow mashups (as some newspapers have done).

As for computing and connectivity, these are more hardware issues, which will see innovation at a different pace and scale to the data domain. Cloud computing has long been understood as a long-term shift, one which aligns with the move to ubiquitous computing. Theoretically, all you will need is an Internet connection; with the cloud, you will have computing resources at your disposal.

CERN

On the connectivity side, we are seeing governments around the world make broadband access a top priority (like the Australian government’s recent proposal to create a national broadband network unlike anything else in the world). The more evident trend in this area, however, will be the mobile phone – which, since the iPhone, has completely transformed our perception of what we can do with this portable computing device. The mobile phone, when connected to the cloud carrying all that data, unleashes the power that is ubiquity.

And then?
Along this journey, we are going to see some unintended impacts, like how we are currently seeing social media replace the need for mass media. Spin-off trends will occur that no reasonable person will be able to predict, and externalities (both positive and negative) will emerge as we drive towards this longer term trend of everything and everyone being connected. (The latest, for example, being the real time web and the social distribution network powering it.)

Computing is life

It’s going to challenge conventions in our society and the way we go about our lives – something we can’t predict but should expect. For now, however, the trend points to how we achieve ubiquity. Once we reach that, we can ask what happens after it – that is, what happens when everything is connected. But until then, we’ve got to work out how to get everything connected in the first place.

New web trend – real time spam

Marshall Kirkpatrick wrote a brilliant post the other day about the coming real time web, following comments by Paul Buchheit (inventor of Gmail). Twitter and the Summize acquisition; Friendfeed; and Facebook are the engine room of innovation right now, and real time activity streams are coming to light as the next big thing. (And I know it’s only going to get bigger as the activity strea.ms workgroup is nearing the finalisation of its spec – meaning all websites will be able to offer activity streams in a standardised way.)
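To give a feel for what a standardised activity entry might look like, here is an illustrative sketch in the actor/verb/object shape that activity stream formats converge on (the field names are my own shorthand, not the workgroup’s finalised spec):

```python
import json
from datetime import datetime, timezone

# A hypothetical activity entry: who did what, to what, and when.
# If every site emitted entries in a shared shape like this, any reader
# or aggregator could consume streams from all of them uniformly.
activity = {
    "actor": {"id": "user:42", "displayName": "Alice"},
    "verb": "post",
    "object": {"type": "note", "content": "Just saw Star Trek!"},
    "published": datetime(2009, 5, 10, tzinfo=timezone.utc).isoformat(),
}

# Serialise for the wire, then decode as a consuming site would.
serialised = json.dumps(activity)
decoded = json.loads(serialised)
print(decoded["actor"]["displayName"], decoded["verb"], decoded["object"]["content"])
```

The standardisation is the whole point: once the shape is agreed, the same entry works on any site that speaks the spec.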

Whilst innovation is always a good thing to see, let’s not forget that some of the most innovative people in our world are actually the bad guys. Ladies and gentlemen – introducing real time spam.

The screening of the popular new release movie Star Trek was one of the biggest topics being discussed in the Twitter community (a community where the real time web is at its biggest right now). And the spammers have bombarded it.

#startrek - Twitter Search

The Real Time Web has massive opportunity for our society – especially when everyone is connected. But it also makes us vulnerable, as real time means a captured audience. And like police in a car chase, constantly trying to run down the bad guys, trying to regulate the Real Time Web could be a challenge.
