Archive for the 'Trends' Category


The Internet, Iran, and Ubiquity

What’s happening right now in Iran is absolutely remarkable. It validates the impact of ubiquitous computing and ubiquitous connectivity to the Internet, and their potential to disrupt even the most tightly controlled police state in the world.

The public’s rejection of the election is creating chaos, finally giving the people a reason to revolt against a regime they’ve detested for decades. The situation may escalate into something bigger – or, more likely, it will settle down – but either way, it gives us real insight into the future: how these new technologies are transforming everything, and disgracing the mass media in the process.

What I saw in Iran
This blog of mine actually started as a travel blog, and one of the countries I wrote about was Iran. In my analysis of that beautiful country, I hypothesised that a revolution was brewing, based on societal discontent. What had prevented this revolution from ever occurring was the lack of a legitimate trigger – one that couldn’t be shut down by Islamic propaganda.
An interesting thing I noted was that the official messaging of the country was anti-American and very over the top – no surprises there. But when you talked to people one-on-one, you realised the Iranians actually respect the Americans – it was the establishment they detested. The regime had a tight grip on society, using Islam as a way of controlling the population in much the same way the Bush Administration used patriotism and the War on Terror to do what it wanted and silence criticism. By controlling the media (amongst other things), it essentially kept society from revolting.

How ubiquity has changed that
In my previously linked article, I talk about the rising trend of a ubiquitous world – one where connectivity, computing, and data are omnipresent. Separately, we are seeing a rising trend toward a “network” operating model for Internet businesses, as demonstrated by Facebook’s CEO recently saying he imagines Facebook’s future as not being a destination site.
The implication is that people are now connected, can share information and communicate without restraint, and – better yet – can do so in a decentralised manner. Using Twitter to share information with the world doesn’t rely on visiting Twitter.com – it’s simply a text message away. It’s hard to censor something that’s not centralised. And it’s even harder to control and influence a population that no longer needs the mass media for information, but can communicate directly with itself on a mass scale.

Take note
Social media is having a remarkable impact. Not only are we getting better quality reporting of events (with the mass media entirely failing us), but it’s enabling mass collaboration on a grand scale – one where even a government runs the risk of being toppled. I’m still waiting to hear from my Iranian friends to get their insight into the situation, but if there’s one lesson we should take note of, it’s that the Internet is transforming the world. Not only are industries being impacted, but society in the broadest sense. If a few picture-capable phones, a short-messaging communication service, and some patchy wireless Internet can rattle the most authoritarian state in the world, then all I can say is I’m gobsmacked at what else is on the horizon.

The ‘always-beta’ culture is affecting more than just journalism

Michael Arrington, one of the world’s most successful bloggers, writes about the latest battle he’s had with the mainstream media. He quotes the progressive journalism thinker Jeff Jarvis, who identifies the conflict as a difference between “process” and “product” journalism. This is a brilliant step forward in understanding the evolution of the news media (I highly recommend you read both posts), and to validate it, I will share how the same dynamic holds in other domains (specifically, web2.0 in the enterprise).

How I sold the idea of a wiki at PwC
In 2006, I pitched to senior management at my firm – the world’s largest knowledge firm – that we were failing at how we did knowledge. More specifically, I argued that the systems in place were creating opportunity cost, because the way we viewed “knowledge” was wrong – and the systems we had supported only one type. As a solution, I proposed we implement web2.0 tools as a way of changing this.

What I want to share more of, however, is the actual problem I identified. It was a problem senior management knew existed, just in different words. What I did was provide the intellectual justification that created the “ah-ha” moment.

Soft knowledge versus hard knowledge
Central to my thesis was that knowledge sits on a continuum, and that we have traditionally treated knowledge as a product only. The physical output of knowledge in the industrial society has been some published form, like a book or a magazine. This output defined our perspective on the product: multiple reviews of the content, close scrutiny of what was being said, and careful consideration of what made the final cut. It was expensive to create a book, and so, quite reasonably, we aimed at making it perfect.

However, most knowledge within a firm doesn’t exist in a published form. When we talk about sharing knowledge within organisations, we are actually referring to having people talk to each other. Human conversation is the most established form of knowledge transfer, and until the alphabet was adopted by various cultures, it was the only way knowledge could be transmitted. This is “soft knowledge”, and it’s not better or worse than “hard” knowledge – just a different state on the continuum.


Soft knowledge rapidly evolves. It never has a fixed state. Sometimes it never makes it up the line to become “hard”, solidified knowledge – but that doesn’t make it useless. In fact, when it comes to doing our work, this tacit knowledge doesn’t need a fixed state – it’s fluid, and will never justify being published in a hard-bound book. Like a dynamic conversation between a group of people, the ideas evolve so fast that trying to lock them down actually ruins the process. Soft knowledge is not so much a product as a process – like rapidly firing electrons remixing towards the goal of a more solid state.

The ‘always-beta’ culture
Technology is enabling us to evolve our ability to communicate. It’s gone beyond the one-to-one and one-to-many models we’ve traditionally been accustomed to, and now allows a many-to-many model. This new form of communication lets knowledge be better captured in its ‘soft’ state. Categories are no longer as useful, even though hierarchies and linearity are how we, as a society, are accustomed to viewing the world. We now need to become more adept at analysing knowledge through a network.

When it comes to information (including the news), the value comes not from its accuracy but its availability. If I have an emergency situation on a client, I want all the available options to assist in my decision-making. As a professional, I can then assess which route to take. Although pre-certification of knowledge has value in accuracy, sometimes full accuracy carries a bigger opportunity cost: inaction.


There will always be a place for news as a product. But what we need right now is to understand that blogs do news differently – and that for news itself, this might be a better model. And whether you like it or not, it’s worked before – after all, we’ve been doing conversation for close to 100,000 years. If we had never done it, we’d never have ended up where we are now.

Google Wave will take a generation

Chris Saad used to ask me questions about tech in the enterprise due to my history (I’ve got the battle scars from rolling out web2.0 at PwC), but this time he asked me after he wrote this post. So instead of telling him he’s wrong by email (ironic, given the topic), I’m going to shame him to the world!

Why Google Wave will take over ten years to turn into a trending wave
As I previously wrote when the news of Google’s new technology was announced, there is a hidden detail Google hasn’t announced to the world: it requires massive computational power to pull off. It doesn’t take a genius to realise it, either – anyone that’s used a bloated Instant Messenger (like Lotus Sametime) probably understands this. All that rich media, group chat, real time – Jesus, how many fans are we going to need now to blow off the steam generated by our computer processors? Mozilla pioneered tabbed browsing – and it’s still wrestling with that same idea, judging by your computer crashing when you have more than a few tabs open!

Don’t get me wrong, Google Wave is phenomenal. But it’s only the beginning. The fact Google has opened this up to the world is a good thing. But we need to be realistic, because even if this technology is distributed (like how email is), the question I want to know is how many users can one server support? I’d be surprised at these early stages if it’s more than a dozen (the demo itself showed there’s still a lot of work to be done). Do I have inside knowledge? No – just common sense and experience with every other technology I’ve used to date.

Why Google Wave won’t hit the enterprise in the next 12 months
Now to the point where Saad is *really* wrong: “20% of enterprise users will be using wave in the first 12 months for more than 50% of their comms (replacing email and wiki)”.


Yeah, right. It’s going to take at least three years, with a stable and mature technology, for this to work. Email sucks, but it also works. IT departments, especially in this economy, are not going to try a new form of communication that’s half-working and not mass-adopted (wikis are still a new thing – there’s a cultural battle still being fought within enterprises).

The real-time nature might even scare communications departments. Entire divisions exist in firms like mine to control the message sent to employees. If you reveal a message before the final version has been crafted, you’ve given away control of that message – the process becomes just as important as the final product. I understand this functionality can be turned off, but I raise it to highlight how enterprises think.

Google Wave rocks
Again, don’t get me wrong: Google Wave blows my mind. But let’s be realistic here – big ideas take time. It took a while for Google the search engine to dominate. Heck, Gmail has taken nearly a decade to get to the point of being called dominant. You can fix bugs, deploy software, and roll out sales teams – but sometimes, with big ideas, it’s a generational thing.

Wave will dominate our world communications – one day. But not for a while.

Google Wave’s dirty little secret

Google has announced a new technology that is arguably the boldest invention and most innovative idea to come out for the Internet in recent years (full announcement here).

It has the potential to replace email and instant messaging, and to create a new technical category for collaboration and interactivity in the broadest sense. However, hidden in the details is a dirty little secret about the practicality of this project.

Google Wave is transformative, but it is also a technical challenge. If adopted, it will entrench cloud computing – and ultimately Google’s fate as the most dominant company in the world.

The challenge in its development
For the last two years, the Google Sydney office has been working on a “secret project”. It got to the stage where the office – which runs the Google Maps product (another Sydney invention) – was competing for resources and had half its staff dedicated to developing it. So secret was the project that only the highest level of Google’s management team in Mountain View knew about it. Googlers in other parts of the world didn’t know about it; people like me in the local tech scene knew it was something big, but didn’t know what exactly.

However, although I didn’t know exactly what it was, I was aware of the challenge. Basically, it boils down to this: it’s a difficult engineering feat to pull off. The real-time collaboration at the core of this technology requires a huge amount of computational resources to work.

It needs everyone to use it
Although we are all still digging into the details, one thing I know for a fact is that Google wants to make this as open as possible. It wants competitors like Microsoft, Yahoo, and the entire development community to not just use it, but to be a big driver of its adoption. For collaboration to work, you need people – and it makes little sense to restrict it to only a segment of the Internet population (much like email). Google’s openness isn’t driven by charity, but by pure economic sense: it needs broad-based market adoption for this to work.


Only a few can do it
However, with lots of people using it comes another fact: only those with massive cloud computing capabilities will be able to run it. Google practically invented and popularised the most important trend in computing right now – a trend where the industrial age’s economies of scale have come back into play, reminding us that aspects of the Information Economy are not entirely different from the past. What Google’s Wave technology does is give a practical application that relies on cloud computing for its execution. And if the Wave protocol becomes as ubiquitous as email and Instant Messaging – and goes further to become core to global communications – then we will see the final innings of the contest over who runs this world.

Wave is an amazing technology, and I am excited to see it evolve. But mark my words: this open technology requires a very expensive setup behind the scenes. Those that can meet this requirement will be our masters of tomorrow. Google has come to own us through its innovation in information management – now watch Act II, as it does the same for communications.

Google should acquire FriendFeed, the leader in the real-time web

May is real-time month: everyone is now saying the latest trend for innovation is the real-time web. Today, we hear Larry Page, co-founder of Google, confirming to Loic Le Meur that real-time search was overlooked by Google and is now a focus for its future innovation.

With all this talk of Google acquiring Twitter, I’m now wondering why FriendFeed isn’t seen as the best candidate to ramp up Google’s real-time potential.

FriendFeed does real time better than anyone else. Facebook rules when it comes to the activity stream of a person – meaning, tracking an individual’s life and, to some extent, media sharing. Twitter rules for sentiment, as it’s like one massive chat room, and to some extent link sharing. But FriendFeed, quite frankly, craps all over Facebook and Twitter in real-time search.

Why? Three reasons:

1) It’s an aggregator. The fundamental premise of the service is aggregating people’s lives and their streams. People don’t even have to visit FriendFeed after the initial sign-up. Once someone confirms their data sources, FriendFeed has a crawler constantly checking an individual’s life stream – one that’s been validated as their own. It doesn’t rely on a person tweeting a link or sharing a video – it’s done automatically through RSS.
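FriendFeed hasn’t published its crawler internals, but the mechanics of RSS aggregation are simple enough to sketch. Here’s a hypothetical Python fragment (the feed content, URLs, and function names are all invented for illustration) showing how a poller can pull new items out of a user’s registered feed without the user lifting a finger:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 snippet standing in for one of a
# user's registered feeds (titles and links are made up).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example lifestream</title>
    <item><title>Shared a video</title><link>http://example.com/1</link></item>
    <item><title>Posted a photo</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def extract_items(rss_xml):
    """Pull (title, link) pairs out of an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in extract_items(SAMPLE_FEED):
    print(title, link)
```

A real crawler would fetch each feed URL on a schedule and diff the results against what it’s already seen – but the point stands: once the feed is registered, everything downstream is automatic.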

2) It’s better suited for discovery. The communities of Twitter, Facebook, and FriendFeed are as radically different as the cultures of America, Europe, and Asia. People that use FriendFeed literally sit there discovering new content, ranking it with their “likes” and expanding on it with comments. It’s a social media powerhouse.

3) It’s better technology. Don’t get me wrong, Facebook has an amazing team. But they don’t have the same focus. With fewer people and less money – but with a stricter focus – FriendFeed actually has a superior product, specifically when it comes to real-time search. Their entire service is built around maximising it.

Up until now, I’ve been wondering about FriendFeed’s future. It has a brilliant team rolling out features I didn’t even realise I needed or could have. But I couldn’t see the value proposition – or rather, I don’t have the time to get the value out of FriendFeed, because I have a job that distracts me from monitoring that stream!

But now it’s clear to me that FriendFeed is the leader of a pack – a pack that’s now shaping a key trend of innovation. And given that the creator of Gmail and AdSense is one of the co-founders, I couldn’t imagine a better fit for Google.

The future is webiqitous

In the half century since the Internet was created – and the 20 years since the web was invented – a lot has changed. More recently, we’ve seen the Dot Com bubble and the web2.0 craze drive new innovations forward. But as I’ve postulated before, those eras are now over. So what’s next?

Well, ubiquity of course.


Huh?
Let’s work backwards with some questions to help you understand.

Why we now need ubiquity, and what exactly that means, requires us to think about two other questions. The changes brought by the Internet are not one big hit, but a gradual evolution. For example, “open” has existed in the culture since the first days of the Internet: it wasn’t a web2.0 invention. But “openness” was recognised by the masses only in web2.0, as a new way of doing things. This “open” culture had profound consequences: it led to mass socialisation around content, and recognition of the power that is social media.

As the Internet’s permeation of our society continues, it will generate externalities that affect us (and that are not predictable). But the core trend can be identified, which is what I hope to explain in this post. And by understanding this core trend, we can comfortably understand where things are heading.

So let’s look at these two questions:
1) What is the longer term trend, that things like “open” are a part of?
2) What are aspects of this trend yet to be fully developed?

The longer term trend
The explanation can be found in why the Internet and the web were created in the first place. The short answer: interoperability and connectivity. The long answer – keep reading.

Without going deep into the history, the Internet was created to connect computers. Computers were machines that enabled better computation (hence the name). As they had better storage and querying capacities than humans, they became the way the US government (and large corporations) would store information. Clusters of these computers were joined into networks – and the ARPANET was built by the US government as a way of connecting these computers and networks. More specifically, in the event of a nuclear war eliminating one of these computing networks, the decentralised design of the Internet would allow the US defense network to rebound easily (an important design decision to remember).

The web has a related but slightly different reason for its creation. Hypertext was conceptualised in the 1960s by a philosopher and a scientist, as a way of harnessing computers to better connect human knowledge. These men were partly inspired by an essay written in the 1940s called “As We May Think“, where the chief scientist of the United States stated his vision whereby all knowledge could be stored on neatly categorised microfilm (the information storage technology at the time), and any piece of knowledge could be retrieved in moments. Several decades of experimentation in hypertext followed, and finally a renegade scientist created the World Wide Web. He broke some of the conventions of what the ideal hypertext system would look like, and created a functional system that solved his problem: connecting distributed scientists around the world and their knowledge.

So, as is clearly evident, computers have been used as a way of storing and manipulating information. The Internet was invented to connect computing systems around the world; the Web did the same thing for the people who used this network. Two parallel innovative technologies (the Internet and hypertext) used a common modern marvel (the computer) to connect the communication and information-sharing abilities of machines and humans alike. With machines and the information they process, it’s called interoperability. With humans, it’s called being connected.


But before we move on, it’s worth noting that the inventor of the Web has now spent a decade advocating for his complete vision: the Semantic Web. What’s that? Well, if we consider the Web as the sum of human knowledge accessible by humans, the Semantic Web is about allowing computers to understand what the humans are reading. Not quite a Terminator scenario – rather, it’s so computers can become even more useful to humans (currently, computers are completely dependent on humans for interpretation).

What aspects of the trend haven’t happened yet?
The Internet and hypertext are enabling connectivity for humans and interoperability for the computer systems that store information, breaking down borders that previously restrained us. Computers, in turn, are enabling humans to process tasks that could not be done before. If the longer-term trend is connecting and bridging systems, then the demons to be demolished are the borders that create division.

So with that in mind, we can now ask another question: “what borders exist that need to be broken down?” What it all comes down to is access. Or more specifically: access to data, access to connectivity, and access to computing. Which brings us back to the word ubiquity: we now need to strive to bridge the gap in those three domains and make them omnipresent – information accessible from anywhere, by anyone.

Let’s now look at this in a bit more detail
Ubiquitous data: We need a world where data can travel without borders. We need to unlock all the data in our world, and have it accessible by all where possible. Connecting data is how we create information: the more data at our hands, the more information we can generate. Data needs to break free – detached from the published form and atomised for reuse.

Ubiquitous connectivity: If the Internet is a global network that connects the world, we need to ensure we can connect to that network regardless of where we are. The value of our interconnected world can only reach its optimum if we can connect wherever, with whatever: at home on your laptop, at work on your desktop, on the streets with your mobile phone. No matter where you are, you should be able to connect to the Internet.

Ubiquitous computing: Computers need to become a direct tool available for our minds to use. They need to become an extension of ourselves, a “sixth sense”. The border that prevents this is the non-assimilation of computing into our lives (and bodies!). Information processing needs to become thoroughly integrated into everyday objects and activities.

Examples of when we have ubiquity
My good friend Andrew Aho over the weekend showed me something he bought at the local office supplies shop: a special pen that, well, did everything.
– He wrote something on paper, and then via USB could transfer an exact replica to his computer, in his original handwriting.
– He could perform a search on his computer to find a word in his digitised handwritten notes.
– He could pass the pen over a pre-written bit of text, and it would replay the sounds in the room from when he wrote that word (keyed to the position on the paper, not the time sequence).
– Passing the pen over a word also allowed it to be translated into several other languages.
– He could punch out a query on a drawn-out calculator to compute a function.
– And a lot more. The company has now created an open API on top of its platform – meaning anyone can create additional features that build on this technology. The opportunity is equivalent to when the Web was created as a platform, and anyone was allowed to build on top of it.

The pen wasn’t all that bulky, and it did all this simply with an attached camera, a microphone, and special dotted paper that allowed the pen to recognise its position. Imagine if this pen could connect to the Internet, with access to any data and cloud computing resources for more advanced queries.

Now watch this TED video to the end, which shows the power of allowing computers to be our sixth sense. Let your imagination run wild as you watch it – and as it does, keep in mind ubiquitous data, connectivity, and computation, which are the pillars of such a future.

Trends right now enabling ubiquity
So from the 10,000-foot view I’ve just shown you, let’s zoom down and look at trends occurring right now – trends heading towards this ever-growing force of ubiquity.

From the data standpoint – where I believe this next wave of innovation will centre – we need to see two things: syntactic interoperability and semantic interoperability. Syntactic interoperability is when two or more systems can communicate with each other – for example, Facebook being able to communicate with MySpace (say, people sending messages to each other). Semantic interoperability is the ability to automatically interpret the information exchanged meaningfully – so when I Google Paris Hilton, the search engine understands whether I want a hotel in a European city, not a celebrity.
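To make the distinction concrete, here’s a minimal sketch in Python (all the site names and field names are invented for illustration). Syntactic interoperability is the two systems agreeing on a common format, like JSON; semantic interoperability is agreeing on what the fields mean, modelled here as an explicit shared vocabulary:

```python
import json

# Two hypothetical social networks export the same fact in a shared
# *syntax* (JSON) but with different local vocabularies.
site_a_export = json.dumps({"screen_name": "alice", "msg": "hello"})
site_b_export = json.dumps({"user": "alice", "body": "hello"})

# Semantic interoperability: map each local vocabulary onto a shared one.
SHARED_VOCAB = {
    "screen_name": "author", "user": "author",
    "msg": "content", "body": "content",
}

def to_shared(raw_json):
    """Translate a site-local record into the shared vocabulary."""
    return {SHARED_VOCAB[k]: v for k, v in json.loads(raw_json).items()}

# Once both records speak the shared vocabulary, a machine can tell
# they describe the same message.
assert to_shared(site_a_export) == to_shared(site_b_export)
```

The hard part, of course, is that the real world doesn’t have one tidy mapping table – which is exactly the problem the Semantic Web and Linked Data efforts are chipping away at.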

The Semantic Web and Linked Data are one key trend enabling this, interlinking all the information out there in a way that makes it accessible for humans and machines alike to reuse. Data portability (where I focus my own efforts) is a similar trend, with the industry fast moving to let us take our identities, media, and other metadata wherever we want.

As Chris Messina recently said:

…the whole point of working on open building blocks for the social web is much bigger than just creating more social networks: our challenge is to build technologies that enhance the network and serve people so that they in turn can go and contribute to building better and richer societies…I can think of few other endeavors that might result in more lasting and widespread benefits than making the raw materials of human connection and knowledge sharing a basic and fundamental property of the web.

The DiSo Project that Chris leads is an umbrella effort spearheading a series of technologies that will lay the infrastructure for when social networking becomes “like air“, as Charlene Li has been saying for the last two years.

One of the most popular pieces of open source software (Drupal) has for a while now been innovating on the data side rather than on other features. More recently, we’ve seen Google announce it will cater better for websites that mark up their content in more structured formats, giving an economic incentive for people to participate in the Semantic Web. APIs (ways for external entities to access a website’s data and technology) are now flourishing, providing a new basis for companies to innovate and enable mashups (like newspapers are doing).

As for computing and connectivity, these are more hardware issues, which will see innovation at a different pace and scale to the data domain. Cloud computing has long been understood as a long-term shift, and it aligns with the move to ubiquitous computing. Theoretically, all you will need is an Internet connection – the cloud will put computing resources at your disposal.


On the connectivity side, we are seeing governments around the world make broadband access a top priority (like the Australian government’s recent proposal to create a national broadband network unlike anything else in the world). The more evident trend in this area, however, will be the mobile phone – which, since the iPhone, has completely transformed our perception of what can be done with a portable computing device. The mobile phone, connected to the cloud and carrying all that data, unleashes the power that is ubiquity.

And then?
Along this journey, we are going to see unintended impacts, like how social media is currently replacing the need for a mass media. Spin-off trends will occur that no reasonable person can predict, and externalities (both positive and negative) will emerge as we drive towards this longer-term trend of everything and everyone being connected. (The latest example being the real-time web and the social distribution network powering it.)


It’s going to challenge conventions in our society and the way we go about our lives – something we can’t predict, only expect. For now, however, the trend points to how we achieve ubiquity. Once we reach that, we can ask what happens after it – that is, what happens when everything is connected. But until then, we’ve got to work out how to get everything connected in the first place.

New web trend – real time spam

Marshall Kirkpatrick wrote a brilliant post the other day about the coming real-time web, following comments by Paul Buchheit (the inventor of Gmail). Twitter (with its Summize acquisition), FriendFeed, and Facebook are the engine room of innovation right now, and real-time activity streams are coming to light as the next big thing. (And I know it’s only going to get bigger, as the activity strea.ms workgroup is nearing the finalisation of its spec – meaning all websites will be able to offer it in a standardised way.)
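The spec was still in draft at the time of writing, but the core idea of an activity stream is an actor-verb-object triple: who did what, to what. As a rough, hypothetical sketch (the function and field values here are my own illustration, not the spec’s exact wire format), an activity entry can be modelled like this:

```python
import json

def make_activity(actor, verb, obj):
    """Build a minimal actor-verb-object activity entry."""
    return {"actor": actor, "verb": verb, "object": obj}

entry = make_activity(
    actor={"name": "alice", "profile": "http://example.com/alice"},
    verb="post",
    obj={"type": "note", "content": "Just saw Star Trek!"},
)

# Serialise for syndication to any site that understands the shape.
print(json.dumps(entry, indent=2))
```

The promise of standardising this shape is that any website could publish and consume anyone else’s stream, rather than each service inventing its own one-off feed format.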

Whilst innovation is always a good thing to see, let’s not forget that some of the more innovative people in our world are actually the bad guys. Ladies and gentlemen – introducing real-time spam.

The screening of the popular new-release movie Star Trek was one of the biggest topics being discussed in the Twitter community (the community where the real-time web is at its biggest right now). And the spammers have bombarded it.


The Real-Time Web holds massive opportunity for our society – especially when everyone is connected. But it also makes us vulnerable, as real time means a captive audience. And like a police car chase constantly trying to run down the bad guys, trying to regulate the Real-Time Web could be a challenge.

How Twitter is using psychology to bootstrap an unbelievable trend

The core activity of Twitter compels its users to act in ways that make them forget what they are really doing.

When I first came to accept that lifelogging was an upcoming trend, I wondered: how the hell would people allow that to happen? Lifelogging (or lifestreaming, as I prefer to call it) is a constant stream of your life, in a way reminiscent of the Truman Show. Put yourself in the mindset of someone in 1995, 2000, or even 2005, and imagine someone said: "One day, you will make public your innermost thoughts about the world". What would your response be? I would probably have laughed in disbelief.


How people get caught into the river of lifestreaming
The Facebook home screen (which came to life in late 2006) certainly gave the lifestreaming concept a big jump forward by forcing it on users. Arguably, you could say blogs started it all – but that’s not quite the same as what Twitter is creating.

With Twitter, people usually start hesitant and confused. They send messages and drop a few personal insights into their life, because they realise that’s what other people are doing. As they acquire followers, they develop a more persistent relationship with the service. They realise the value of connecting with new people, which happens as they develop their social melebrity status. There is a sense of status in the fact that hundreds of people willingly follow you – status being a core human aspiration. They get hungry: they want more followers.

How a race distracts from the behaviour you then permit
What Ashton Kutcher did is typical of what all Twitter users do, albeit most on a smaller scale (i.e., increasing their following). The fascinating thing is that people forget they are now chasing an endless tail, which further entrenches them in the lifestreaming phenomenon.

Twitter requires people to explicitly share. The focus on this core activity makes people attempt to craft a witty message, or one that people value. They get caught up in participating in lifestreaming, completely forgetting – and then accepting, if confronted – what they’ve lost.

That being: we’ve now given up the thing people most freak out about in electronic communications – our anonymity and privacy.

As more and more people get onto Twitter, and as more celebrities get caught up in it (bringing the non-tech world into the fold), watch this phenomenon. The natural cycle of a Twitter user, which culminates in follower acquisition to increase their sense of status, is actually opening up a world I never thought would happen:
one where we share our innermost thoughts and the details of our lives, because psychologically, we feel compelled to.


Think about that last sentence for a bit. That’s kind of crazy.