Tag Archive for 'mobile'

Opera’s Unite is democratising the cloud

Opera, the Norwegian browser with a little under 1% market share in the English-speaking market, has made an interesting announcement. Following a much-hyped mystery campaign, “Opera Unite” has been unveiled as a new way to interact with the browser. It transforms the browser into a server – so that your local computer can interact across the Internet in a peer-to-peer fashion. Or in simpler words, you can plug your photos, music and post-it notes into your Opera Unite installation – and access that media anywhere on the Internet, be it from another computer or your mobile phone. I view this conceptually as an important landmark in data portability. The competing browser company Mozilla may lay claim to developing Ubiquity, but Opera’s announcement is a big step towards ubiquity the concept.
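The core idea – a home computer exposing its own files over HTTP so your other devices can reach them – can be sketched in a few lines of Python. This is purely illustrative of the concept, not Opera’s implementation, and the folder path in the usage note is a made-up example:

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_personal_server(folder: str, port: int = 0) -> HTTPServer:
    """Return an HTTP server exposing `folder` read-only to the network.

    port=0 asks the OS for any free port; pass a fixed port (e.g. 8080)
    if other devices need a predictable address.
    """
    # SimpleHTTPRequestHandler serves directory listings and files,
    # which is all a "photos reachable from my phone" demo needs.
    handler = partial(SimpleHTTPRequestHandler, directory=folder)
    return HTTPServer(("0.0.0.0", port), handler)

# Usage (hypothetical path): expose a photo folder, then browse to
# http://<this-machine>:8080/ from a phone on the same network.
#   server = make_personal_server("/home/me/photos", 8080)
#   server.serve_forever()
```

What Opera Unite adds on top of this bare sketch is the packaging: the server lives inside the browser, and a proxy makes it reachable from outside your home network without router configuration.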

Implications: evolving the cloud to be more democratic
I’ve had a test drive, but I’m not going to rehash the functionality here – there is plenty of commentary going on right now. (Or better yet, simply check this video.) I don’t think it’s fair to criticise it, as it’s still an early development effort – for example, although I could access my photos on my mobile phone (the ones stored on my Mac), I could not stream my music (which would be amazing once they can pull it off). But it’s an interesting idea being pushed by Opera, and it’s worth considering from the bigger picture.

There is a clear trend towards cloud computing in the world – one where all you need is a browser, and theoretically you can access anything you need from a computer (as your data, applications and processing power are handled remotely). What Opera Unite does is create a cloud that can be controlled by individuals. It embraces the sophistication home users have developed now that they have multiple computers and devices, connected in the one household over a home wireless network. Different individual computers can act as repositories for a variety of data, and its accessibility can be fully controlled by the individuals.

I think that concept is a brilliant one that brings this capability to the mass market (something geeks won’t appreciate, as they can already do this). It allows consumers an alternative way of storing their data, while still having it accessible “via the cloud”. As the information value chain goes, people can now store their data wherever they wish (like in their own households) and then plug those home computers into the cloud to get the functionality they desire. So for example, you can store all your precious pictures of your children and your private health information on your home computer, as you’ve chosen that to be your storage facility – but still get access to a suite of online functionality that exists in the cloud.

As Chris Messina notes, there is still an Opera proxy service – meaning all the data connecting your home computer to your phone and other computers still goes through a central Opera server. But that doesn’t matter, because it’s the concept of local storage via the browser that this embodies. There is the potential for competing, open source attempts at creating a more evenly distributed peer-to-peer model. Opera Unite matters because it has implemented a concept people have long talked about – packaged in a way that’s dead easy to use.

Implications: Opera the company
For poor little Opera, this finally gives it a focus for innovation. It’s been squashed out of the web browser market, and it’s had limited success on the mobile phone (its main niche opportunity – although one now facing a big threat from the iPhone). Google’s Chrome is fast developing into the standard for running SaaS applications over the web. But Opera’s decision to pursue this project is innovation in a new area, and more in line with what was first described as the data portability file system and the DiSo dashboard.

Like all great ideas, I look forward to Unite being copied, refined, and evolving into something great for the broader world.

The future is webiqitous

In the half century since the Internet was created – and the 20 years since the web was invented – a lot has changed. More recently, we’ve seen the Dot Com bubble and the web2.0 craze drive new innovations forward. But as I’ve postulated before, those eras are now over. So what’s next?

Well, ubiquity of course.


Huh?
Let’s work backwards with some questions to help you understand.

Why we now need ubiquity, and what exactly it means, requires us to think through another two questions. The changes brought by the Internet are not one big hit, but a gradual evolution. For example, “open” has existed in the culture since the first days of the Internet: it wasn’t a web2.0 invention. But “openness” was recognised by the masses only in web2.0 as a new way of doing things. This “open” culture had profound consequences: it led to the mass socialisation around content, and recognition of the power that is social media.

As the Internet’s permeation of our society continues, it will generate externalities that affect us (and that are not predictable). But the core trend can be identified, which is what I hope to explain in this post. And by understanding this core trend, we can comfortably understand where things are heading.

So let’s look at these two questions:
1) What is the longer term trend, that things like “open” are a part of?
2) What are aspects of this trend yet to be fully developed?

The longer term trend
The explanation can be found in why the Internet and the web were created in the first place. The short answer: interoperability and connectivity. The long answer – keep reading.

Without going deep into the history, the reason the Internet was created was to connect computers. Computers were machines that enabled better computation (hence the name). As they had better storage and querying capacities than humans, they became the way the US government (and large corporations) would store information. Clusters of these computers would be created (called networks) – and the ARPANET was built by the US government as a way of building connections between these computers and networks. More specifically, in the event of a nuclear war, if one of these computing networks were eliminated, the decentralised design of the Internet would allow the US defense network to rebound easily (an important design decision to remember).

The web has a related but slightly different reason for its creation. Hypertext was conceptualised in the 1960s by a philosopher and a scientist as a way of harnessing computers to better connect human knowledge. These men were partly inspired by an essay written in the 1940s called “As We May Think”, where the chief scientist of the United States stated his vision whereby all knowledge could be stored on neatly categorised microfilm (the information storage technology at the time), and in moments, any knowledge could be retrieved. Several decades of experimentation in hypertext occurred, and finally a renegade scientist created the World Wide Web. He broke some of the conventions of what the ideal hypertext system would look like, and created a functional system that solved his problem: connecting all these distributed scientists around the world and their knowledge.

So as is clearly evident, computers have been used as a way of storing and manipulating information. The Internet was invented to connect computing systems around the world; and the Web did the same thing for the people who used this network. Two parallel innovative technologies (the Internet and hypertext) used a common modern marvel (the computer) to connect the communication and information sharing abilities of machines and humans alike. With machines and the information they process, it’s called interoperability. With humans, it’s called being connected.


But before we move on, it’s worth noting that the inventor of the Web has now spent a decade advocating for his complete vision: a semantic web. What’s that? Well, if we consider the Web as the sum of human knowledge accessible by humans, the Semantic Web is about allowing computers to understand what the humans are reading. Not quite a Terminator scenario – rather, it’s so computers can become even more useful to humans (as currently, computers are completely dependent on humans for interpretation).

What aspects of the trend haven’t happened yet?
Borders that previously restrained us have been broken down. The Internet and hypertext are enabling connectivity for humans and interoperability for the computer systems that store information. Computers, in turn, are enabling humans to process tasks that could not be done before. If the longer term trend is connecting and bridging systems, then the demons to be demolished are the borders that create division.

So with that in mind, we can now ask another question: “what borders exist that need to be broken down?” What it all comes down to is “access”. Or more specifically, access to data, access to connectivity, and access to computing. Which brings us back to the word ubiquity: we now need to strive to bridge the gap in those three domains and make them omnipresent. Information accessible from anywhere, by anyone.

Let’s now look at this in a bit more detail:
Ubiquitous data: We need a world where data can travel without borders. We need to unlock all the data in our world, and have it accessible by all where possible. Connecting data is how we create information: the more data at our hands, the more information we can generate. Data needs to break free – detached from the published form and atomised for reuse.

Ubiquitous connectivity: If the Internet is a global network that connects the world, we need to ensure we can connect to that network regardless of where we are. The value of our interconnected world can only reach its optimum if we can connect wherever, with whatever: at home on your laptop, at work on your desktop, on the streets with your mobile phone. No matter where you are, you should be able to connect to the Internet.

Ubiquitous computing: Computers need to become a direct tool available for our minds to use. They need to become an extension of ourselves – a “sixth sense”. The border that prevents this is the non-assimilation of computing into our lives (and bodies!). Information processing needs to become thoroughly integrated into everyday objects and activities.

Examples of when we have ubiquity
My good friend Andrew Aho over the weekend showed me something that he bought at the local office supplies shop. It was a special pen that, well, did everything.
– He wrote something on paper and then, through its USB connection, could transfer an exact replica to his computer in his original handwriting.
– He could perform a search on his computer to find a word in his digitised handwritten notes.
– He could pass the pen over a pre-written bit of text, and it would replay the sounds in the room from when he wrote that word (keyed to the position on the paper, not the time sequence).
– Passing the pen over a word also allowed it to be translated into several other languages.
– He could punch out a query on a calculator drawn on the paper, to compute a function.
– And a lot more. The company has now created an open API on top of its platform – meaning anyone can create additional features that build on this technology. The opportunity is equivalent to when the Web was created as a platform, and anyone was allowed to build on top of it.

The pen wasn’t all that bulky, and it did all this simply by having a camera and a microphone attached, plus special dotted paper that allowed the pen to recognise its position. Imagine if this pen could connect to the Internet, with access to any data, and to cloud computing resources for more advanced queries.

Now watch this TED video to the end, which shows the power when we allow computers to be our sixth sense. Let your imagination run wild as you watch it – and while it does, just think about ubiquitous data, connectivity, and computation which are the pillars for such a future.

Trends right now enabling ubiquity
So from the 10,000-foot view that I’ve just shown you, let’s now zoom down and look at trends occurring right now – trends feeding this ever-growing force towards ubiquity.

From the data standpoint – where I believe this next wave of innovation will centre – we need to see two things: syntactic interoperability and semantic interoperability. Syntactic interoperability is when two or more systems can communicate with each other – for example, Facebook being able to communicate with MySpace (say, with people sending messages to each other). Semantic interoperability is the ability to automatically interpret the information exchanged meaningfully – so when I Google “Paris Hilton”, the search engine understands that I want a hotel in a European city, not a celebrity.
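To make the distinction concrete, here is a minimal Python sketch. The message formats and the shared vocabulary are invented for illustration – real systems use far richer schemas – but the two layers are the same:

```python
import json

# Syntactic interoperability: two systems agree on a format (JSON),
# so each can parse what the other sends.
facebook_msg = json.dumps({"sender": "alice", "body": "hi"})
myspace_msg = json.dumps({"from": "bob", "text": "hello"})
parsed = json.loads(myspace_msg)  # parsing succeeds: the syntax is shared

# ...but without semantics, a program has no idea that "sender" and
# "from" mean the same thing. Semantic interoperability adds a shared
# vocabulary that both sides map their local field names onto.
VOCAB = {"sender": "author", "from": "author",
         "body": "content", "text": "content"}

def normalise(raw: str) -> dict:
    """Map a message's local field names onto the shared vocabulary."""
    return {VOCAB[k]: v for k, v in json.loads(raw).items()}

# Both messages now become comparable records:
assert normalise(facebook_msg) == {"author": "alice", "content": "hi"}
assert normalise(myspace_msg) == {"author": "bob", "content": "hello"}
```

The Semantic Web efforts discussed below are, at heart, an attempt to make that shared vocabulary global rather than hand-coded pair by pair.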

The Semantic Web and Linked Data form one key trend enabling this, interlinking all the information out there in a way that makes it accessible for humans and machines alike to reuse. Data portability is another such trend (the one on which I focus my efforts), where the industry is fast moving to enable us to move our identities, media and other metadata wherever we want.

As Chris Messina recently said:

…the whole point of working on open building blocks for the social web is much bigger than just creating more social networks: our challenge is to build technologies that enhance the network and serve people so that they in turn can go and contribute to building better and richer societies…I can think of few other endeavors that might result in more lasting and widespread benefits than making the raw materials of human connection and knowledge sharing a basic and fundamental property of the web.

The DiSo Project that Chris leads is an umbrella effort spearheading a series of technologies that will lay the infrastructure for when social networking becomes “like air”, as Charlene Li has been saying for the last two years.

One of the most popular pieces of open source software (Drupal) has for a while now been innovating on the data side rather than on other features. More recently, we’ve seen Google announce it will cater better for websites that mark up their content in more structured formats, giving people an economic incentive to participate in the Semantic Web. APIs (ways for external entities to access a website’s data and technology) are now flourishing, and are providing a new basis for companies to innovate and allow mashups (like newspapers are doing).
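As a sketch of what structured markup buys a machine, here is a toy Python extractor for an hCard-style snippet. The `fn` and `org` class names are real microformat property names, but the page content and the parser itself are invented for illustration (and simplified: it assumes property elements are not nested):

```python
from html.parser import HTMLParser

# Hypothetical page fragment using hCard-style classes, so a crawler can
# extract a person's details instead of guessing from free-form prose.
PAGE = ('<div class="vcard">'
        '<span class="fn">Jane Citizen</span> works at '
        '<span class="org">Example Pty Ltd</span>.'
        '</div>')

class HCardExtractor(HTMLParser):
    """Collect the text inside elements whose class is a known hCard property."""
    PROPERTIES = {"fn", "org", "url", "tel"}

    def __init__(self):
        super().__init__()
        self.current = None  # the property we are currently inside, if any
        self.card = {}       # extracted property -> text

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        hits = self.PROPERTIES.intersection(classes)
        if hits:
            self.current = hits.pop()

    def handle_data(self, data):
        if self.current:
            self.card[self.current] = self.card.get(self.current, "") + data

    def handle_endtag(self, tag):
        self.current = None

extractor = HCardExtractor()
extractor.feed(PAGE)
# extractor.card now holds {"fn": "Jane Citizen", "org": "Example Pty Ltd"}
```

Without the markup, a machine sees only a sentence; with it, the same page yields a record a search engine or mashup can reuse.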

As for computing and connectivity, these are more hardware issues, which will see innovation at a different pace and scale to the data domain. Cloud computing has long been understood as a long-term shift, one which aligns with the move to ubiquitous computing. Theoretically, all you will need is an Internet connection, and with the cloud you’ll have computing resources at your disposal.


On the connectivity side, we are seeing governments around the world make broadband access a top priority (like the Australian government’s recent proposal to create a national broadband network unlike anything else in the world). The more evident trend in this area, however, will be the mobile phone – which, since the iPhone, has completely transformed our perception of what can be done with this portable computing device. The mobile phone, when connected to the cloud carrying all that data, unleashes the power that is ubiquity.

And then?
Along this journey, we are going to see some unintended impacts, like how we are currently seeing social media replace the need for a mass media. Spin-off trends will occur that no reasonable person can predict, and externalities (both positive and negative) will emerge as we drive towards this longer term trend of everything and everyone being connected. (The latest, for example, being the real-time web and the social distribution network powering it.)


It’s going to challenge conventions in our society and the way we go about our lives – and that’s something we can’t predict but can only expect. For now, however, the question is how we get to ubiquity. Once we reach it, we can then ask what happens after it – that being: what happens when everything is connected. But until then, we’ve got to work out how to get everything connected in the first place.

The mobile 3D future – as clear as mud

I’ll be happy to admit that in the past, I never understood the hype behind the mobile web. That was, of course, before I bought the Nokia E61 – a brilliant phone I loved until it was stolen from me in April 2008. I’d had the phone since January 2007 with no regrets – only the hype surrounding the iPhone swayed me from (maybe) getting a new phone.

I used that phone for my news reading and e-mail (using the Gmail application, not the native phone installation), and I could understand how mobile was the future of technology innovation. Sure, it didn’t make me want to throw my hands in the air and shout in excitement, but it made me understand, amongst the naysayers of the mobile web, that there was potential. The defining thing behind this realisation was the fact it had a massive screen, which was not common in phones before that.

So when my phone was stolen, I had the dilemma of buying a crappy new phone until the iPhone came out in Australia (potentially) several months later. But why bother, I thought – I ended up buying a first generation iPhone on eBay. And let’s just say, ever since, I’ve been throwing my hands in the air in excitement.

Using the iPhone is truly a transformative experience. Quite frankly, it sucks as a phone – no support for MMS; it doesn’t synchronise with my corporate Lotus Notes calendar; and the call quality is consistently bad. But where it lacks as a phone, it makes up for it as a device. The fact I would use it over my laptop at home just reflects how perfect it was: a portable computer that makes mobile browsing enjoyable. The native e-mail client made the process fun!


Naturally, of course, I smashed the screen of my phone (which made the iPhone “just” a phone, because the screen is its core value proposition), and yesterday I finally got around to buying a new one. I faced an issue: do I upgrade to an iPhone 3G (and boy, did I miss 3G – I was willing to upgrade just for that), or go back to my beloved Nokias, which perform one of the most valuable features for me: syncing with my calendar. It turns out Nokia had just released the N96 in Australia – which is basically the N95 but better. I can recall Lachlan Hardy swearing by the N95 and convincing me it was the perfect phone. Indeed, his view was supported by anecdotal evidence I found: it was as if Nokia said “heck, let’s just chuck every single feature we have into the one phone and see what happens”.

So surely, I thought, the one-week-old N96, the evolution of the sex-on-a-stick N95, must be just as good, if not better? The experimenter in me went for it, because I knew I could get a better understanding of the mobile future.

From iPhone to Nokia
I absolutely hate it. I’ve barely had the phone for 24 hours, and yet I am dying to get rid of it – I much prefer my cracked-screen iPhone on GPRS to the N96. Moving from the iPhone to the Nokia N96 is like going from Windows Vista to Windows 3.1 (the version BEFORE Windows 95). This phone, which has a market value of $1200 and every feature under the sun you could dream of in a phone, is something I am willing to sacrifice. The interface of the iPhone has me hooked like a heroin addict – I literally cannot force myself to get used to the degraded user experience.

And I just find it amazing how it’s provoking such a strong reaction in me. Before the iPhone, this phone would have been heaven for me. It has every feature I dream of in a phone, yet the pixelated, text-driven graphical interface makes me agitated and angry that Nokia hasn’t focused on what really matters.

Playing with the N96, using the browser and even the Gmail app to try to make it a more seamless user experience, reminds me again why I never got the mobile web before. Realistically, it will be a few years before the standard mobile evolves to a richer interface, so consequently it’s too early to think the mobile web is about to take off now. However, the mobile and the 3D Internet are both trends where I am willing to bet my life that it’s not a question of “if” but “when” – and whoever is in the right place at the right time at the emergence of the next upswing, post this financial crisis, will be among the new barons of the technology world.

The interface and user experience are the missing link connecting the vision of the early supporters to the excitement of the mainstream of society. Mark my words: just like social networking sites such as MySpace and Facebook appeared to come out of nowhere to dominate our world, so too will the mobile future and the 3D Internet.

Pricks

If you don’t have a verified e-mail address, Facebook forces you to verify it before it removes those annoying CAPTCHA boxes. It’s a pretty standard thing for websites to do.

Now it’s telling me I have to verify my mobile phone number – even though I have been regularly using the service for eight months.


This is not about verifying my identity – it’s about forcing me to give up my personal information. Bastards.

My media consumption

I’ve only recently started blogging again, and I need to get into the habit. Even though I have a gazillion things I want to write about, I literally don’t have the time! So here’s a quick post, to keep me – ahem – regular. It’s actually something I think about a lot, given my interest in the Internet started from my interest/background in media. The meme was started by Jeremiah, and I saw it in a post by Marty.
