Frequent thinker, occasional writer, constant smart-arse

Tag: google (Page 2 of 3)

How Google Reader can finally start making money

Today you may have heard that Newsgator, Bloglines, Me.dium, Peepel, Talis and Ma.gnolia have joined the APML workgroup and are in discussions with workgroup members on how they can implement APML into their product lines. Bloglines made some news the other week with its intention to adopt it, and today's announcement about Newsgator means APML is fast becoming an industry standard.

Google, however, is still sitting on the sidelines. I really like using Google Reader, but if they don't announce support for APML soon, I will have to switch back to my old favourite Bloglines, which is doing some serious innovating. Seeing as Google Reader recently came out of beta, I thought I'd help them out with a new feature (APML) that could see it generate some real revenue.

What a Google reader APML file would look like
Read my previous post on what exactly APML is. If the Google Reader team were to support APML, what they could add to my APML file is a ranking of blogs, authors, and key-words. First an explanation of how, and then the consequences.

In terms of blogs I read, the percentage of posts I read from a particular blog would determine its relevancy score in my APML file. So if I read 89% of Techcrunch posts – information already provided to users – that would convert into a relevancy score for Techcrunch of 89%, or 0.89.
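To make that concrete, here is a rough sketch (in Python, with made-up feed names and read percentages) of how existing per-feed read statistics could be turned into 0-to-1 relevancy scores:

```python
def relevancy_scores(read_stats):
    """Convert percentage-of-posts-read per feed into 0-1 relevancy scores."""
    return {feed: round(pct / 100.0, 2) for feed, pct in read_stats.items()}

# Hypothetical read statistics of the kind Google Reader already shows users
stats = {"Techcrunch": 89, "Read/WriteWeb": 54, "Scobleizer": 17}
print(relevancy_scores(stats))
# {'Techcrunch': 0.89, 'Read/WriteWeb': 0.54, 'Scobleizer': 0.17}
```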

APML: pulling rank

In terms of authors I read, it can extract who posted the entry from each blog post I read and, like the blog ranking above, perform a similar calculation. I don't imagine it would be too hard to do this; however, given it's a small team running the product, I would put this at a lower priority to support.

In terms of key-words, Google could employ its contextual analysis technology on each of the posts I read and extract key words. By performing this on every post, the frequency of extracted key words determines the relevance score for those concepts.
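A minimal sketch of that idea, assuming key words have already been extracted from each post the user read (the keyword lists below are invented examples):

```python
from collections import Counter

def concept_scores(posts_keywords):
    """Score each key word by how often it appears across read posts,
    normalised against the most frequent one."""
    counts = Counter(kw for post in posts_keywords for kw in post)
    top = counts.most_common(1)[0][1]
    return {kw: round(n / top, 2) for kw, n in counts.items()}

read_posts = [["apml", "attention"], ["apml", "privacy"], ["apml", "attention", "rss"]]
print(concept_scores(read_posts))
# 'apml' scores 1.0, 'attention' 0.67, 'privacy' and 'rss' 0.33
```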

So that would be the how. The APML file generated from Google Reader would simply rank these blogs, authors, and key-words, with the relevance scores updating over time. The data is periodically re-indexed and re-calculated from scratch, so that as concepts stop being viewed, their scores diminish until they drop off.

What Google Reader can do with that APML file
1. Ranking of content
One of the biggest issues facing consumers of RSS is information overload. I am quite confident people would pay a premium for any attempt to help rank the hundreds of items a day a user needs to read. With an APML file, Google Reader can over time match postings to a user's ranked interests. So rather than presenting content in reverse chronology (most recent to oldest), it can instead organise content by relevancy (items of most interest to least).
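As a sketch of what that sort-by-relevance might look like under the hood (the profile scores, item titles, and concept tags here are all hypothetical):

```python
def sort_by_personal_relevance(items, profile):
    """Order unread items by the summed APML relevancy of the concepts
    they mention, most interesting first."""
    def score(item):
        return sum(profile.get(concept, 0.0) for concept in item["concepts"])
    return sorted(items, key=score, reverse=True)

profile = {"apml": 1.0, "privacy": 0.6, "football": 0.1}
items = [
    {"title": "Match report", "concepts": ["football"]},
    {"title": "APML and privacy", "concepts": ["apml", "privacy"]},
]
print([i["title"] for i in sort_by_personal_relevance(items, profile)])
# ['APML and privacy', 'Match report']
```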

This won't reduce the amount of RSS a user consumes, but it will help them allocate their attention. There are a lot of innovative ways you can rank the content, down to the way you extract key words and rank concepts, so there is scope for competing vendors to have their own methods. The point is, a feature to 'Sort by Personal Relevance' would be highly sought after, and I am sure quite a few people would be willing to pay for this godsend.

I know Google seems to think contextual ads are everything, but maybe the Google Reader team can break from the mould and generate a different revenue stream through a value-add feature like that: apply the contextual advertising technology to determine key words for filtering, not advertising.

2. Enhancing its AdSense programme

Targeted advertising is still bloody annoying

One of the great benefits of APML is that it creates an open database about a user. Contextual advertising, in my opinion, is actually a pretty sucky technology; its success to date is only because all the other targeted advertising models are flawed. As I explain above, the technology should instead be used to analyse what content a user consumes, through keyword analysis. Over time, a ranking of these concepts can be built up – and shared with other web services doing the same thing.

An APML file that ranks concepts is exactly what Google needs to enhance its AdWords technology. Don't use it to analyse a post to show ads; use it to analyse a post to rank concepts. Then, in aggregate, the contextual advertising will work, because it can be based off this APML file with great precision. Even better, a user can tweak it – the equivalent of tweaking what advertising they want to get. The transparency of a user being able to see the 'concept ranking' you generate for them is powerful, because a user is likely to monitor it for accuracy.

APML is contextual advertising's biggest friend, because it profiles a user in a sensible way that can be shared across applications and monitored by the user. Allowing a user to tweak their APML file, motivated by getting more targeted content, aligns their self-interest with ensuring the targeted ads thrown at them, based on those ranked concepts, are in fact relevant.

3. Privacy credibility
Privacy is the inflation of the attention economy. You can't keep innovating with targeted advertising technology whilst ignoring privacy. Google has learned this the hard way, being labelled one of the worst privacy offenders in the world. By adopting APML, Google would go a long way towards gaining credibility on privacy. It would be creating open transparency around the information it collects to profile users, and it would allow users to control that profiling of themselves.

APML is a very clever approach to dealing with privacy. It's not the only approach, but it is one of the most promising. Even if Google never uses an APML file as I describe above, the pure brand-enhancing value of giving users some control over their rightful attention data is something that alone would benefit the Google Reader product (and Google's reputation itself) if they were to adopt it.

Privacy. Stop looking.

Conclusion
Hey Google – can you hear me? Let’s hope so, because you might be the market leader now, but so was Bloglines once upon a time.

Bloglines to support APML

Tucked away in a post by one of the leading RSS readers in the world, Bloglines announced that they will be investigating how they can implement APML into their service. The thing about standards is that, as fantastic as they are, if no one uses them, they are not a standard. Over the last year, dozens of companies have implemented APML support, and this latest announcement by a revitalised Bloglines team – set to take back what Google took from them – means we are going to see a lot more innovation in an area that has largely gone unanswered.

The announcement has been covered by Read/WriteWeb and APML founders Faraday Media, and a thoughtful analysis has been done by Ross Dawson. Ben Metcalfe has also written a thought-provoking analysis of the merits of APML.

What does this mean?

APML is about taking control of the data that companies collect about you. For example, if you are reading lots of articles about dogs, RSS readers can make a good guess that you like dogs – and will tick the "likes dogs" box on the profile they build of you, which they use to determine advertising. Your attention data is anything you give attention to – when you click on a link within Facebook, that's attention data that reveals things about you implicitly.
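A toy illustration of that implicit profiling – the topic labels and the 50% threshold are invented for the example:

```python
def build_profile(articles_read, threshold=0.5):
    """Tick a 'likes X' box for any topic that makes up at least
    `threshold` of what the user has read."""
    topics = [a["topic"] for a in articles_read]
    profile = {}
    for topic in set(topics):
        share = topics.count(topic) / len(topics)
        profile[f"likes {topic}"] = share >= threshold
    return profile

read = [{"topic": "dogs"}, {"topic": "dogs"}, {"topic": "cars"}]
p = build_profile(read)
print(p["likes dogs"], p["likes cars"])  # True False
```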

The big thing about APML is that it solves a massive problem when it comes to privacy. If you look at my definition of what constitutes privacy, the ability to control what data is collected, which APML provides, completely fits the bill. I was so impressed when I first heard about it – it's a problem I had been thinking about for years – that I immediately joined the APML workgroup.

Privacy is the inflation of the attention economy, and companies like Google are painfully learning about the natural tension between privacy and targeted advertising. (Targeted advertising being the thing that Google is counting on to fund its revenue.) The web has seen a lot of technological innovation, which has disrupted a lot of our culture and society. It's time that the companies disrupting the world's economies started innovating to answer the concerns of the humans using their services. Understanding how to deal with privacy is a key competitive advantage for any company in the Internet sector. It's good to see some finally realising that.

Don’t get the Semantic Web? You will after this

Prior to 2006, I had sort of heard of the Semantic Web. To be honest, I didn't know much – it was just another buzzword. I'd been hearing about Microformats for years, and cool but useless initiatives like XFN. To me it was simply another web thing being thrown around.

Then in August 2006, I came across Adrian Holovaty's article where he argues journalism needs to move from a story-centric world to a data-centric world. And that's when it dawned on me: the Semantic Web is serious business.

I have since done a lot of reading, listening, and thinking. I don't profess to be a Semantic Web expert, but I know more than the average person, having (painfully) put myself through videos and audio of academic types who confuse the crap out of me. I've also read through a myriad of academic papers from the W3C – like when you read a novel and keep re-reading the same page and still can't remember what you just read.

Hell – I still don't get everything. But I get the vision, and that's what I am going to share with you now. Hopefully my understanding will benefit the clueless and the skeptical alike, because it's a powerful vision that is entirely possible.

1) The current web is great for humans; useless for machines
When you search for ambiguous terms, at best search engines can algorithmically predict some sort of answer that partially answers your query. Sometimes not even that. The complexity of language is not something engineers can simply engineer away. After all, without the ambiguity of natural language, poetry would be impossible.

Fine.

What did you think when you read that? As in: "I've had it – fine!", another way of saying OK or agreeing with something. Perhaps you thought about that parking ticket I just got – illegal parking gets you fined. Maybe you thought I was applauding myself on one fine piece of wordsmithing I just wrote, or, in another context, thinking of a fine wine.

Language is ambiguous, and depending on the context of the surrounding words, we can determine the meaning. Search start-up Powerset, which is hoping to kill Google and rule the world, is employing exactly this technique to improve search: intelligent processing of words depending on context. So when I put in "it's a fine", it understands the context is a parking ticket, because you wouldn't say "it's a" in front of 'fine' when using it to agree with something (the 'ok' meaning above).

But let's use another example: "Hilton Paris" in Google – the world's most 'advanced' search engine. Obviously, as a human reading that query, you understand from the context of those words that I would like to find information about the Hilton in Paris. Well, maybe.

Let's see what Google comes up with. Of the ten search results (as of when I wrote this blog posting), one was a news item on the celebrity, six were on the celebrity describing her in some shape or form, and three were on the actual hotel. Google, at 30/70, is a little unsure.

Why is Paris Hilton, that blonde haired thingy of a celebrity, coming up in the search results?

Technologies like Powerset apparently produce a better result because they understand the order of the words and the context of the search query. But the problem with these searches isn't just the interpretation of what the searcher wants – it's also the ability to understand the actual search results. Powerset can only interpret so much of the gazillions of words out there. There is the whole problem of the source data, not just the query. Don't get what I mean? Keep reading. But for now, learn this lesson:

Computers have no idea about the data they are reading. In fact, Google pumping out those search results is based on people linking. Google is a machine, and reads 1s and 0s – machine language. It doesn't get human language.

2) The Semantic Web is about making what humans read, machine-readable
Tim Berners-Lee, the guy who invented the World Wide Web and the visionary behind the Semantic Web, prefers to call it the 'data web'. The current web is a web of documents; by adding extra data to content, machines will be able to understand it. Metadata is data about data.

A practical outcome of having a semantic web is that when Google pulls up a web page, it would understand what the content is regardless of the context of the words. Think of every word on the web being linked to a master dictionary.

The benefit of the semantic web is not for humans – at least immediately. The Semantic Web is actually pretty boring with what it does – what is exciting, is what it will enable. Keep reading.

3) The Semantic web is for machines to interpret, not people
A lot of the skeptics of the semantic web don't see the value of it. Who cares about adding all this extra metadata? I mean, heck – Google was still able to get the website I needed – the Hilton in Paris. Sure, the other 70% of the results on that page were irrelevant, but I'm happy.

I once came across a Google employee who asked: "What's the point of a semantic web; don't we already have enough metadata?" To some extent, he's right – there are some websites out there with metadata. But the point of the semantic web is that machines, once they read the information, can start thinking like a human would and connecting it to other information. For that, there needs to be metadata across the board.

For example, my friend Michael was recently looking to buy a car. A painful process, because there are so many variables: so many different models, makes, dealers, packages. We have websites with cars for sale neatly categorised into profile pages saying what model it is, what colour it is, and how much it costs (which, may I add, are hosted on multiple car sites with different types of profiles). A human painfully reads through these profiles, and computes as fast as a human can. But a machine can't read them.

Instead of wasting his (and my) weekends driving around Sydney to find his car, a machine could find it for him. Mike would enter his profile – what he requires in a car, what his credit limit is, what his prior history with cars is – everything that would affect his judgement of a car. Then the computer can query every online website with cars to match the criteria. Because the computer can interpret these websites across the board, it can evaluate them and go back to Michael and say: "this is the car for you, at this dealer – click yes to buy".
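Here is a toy version of that query, under the assumption that listings are exposed as structured data rather than prose (the field names, models, and prices are invented):

```python
# Structured listings, as the semantic web would expose them across car sites
listings = [
    {"model": "Corolla", "price": 18500, "colour": "blue", "dealer": "Parramatta"},
    {"model": "Astra", "price": 24000, "colour": "red", "dealer": "Chatswood"},
    {"model": "Corolla", "price": 21000, "colour": "silver", "dealer": "Bondi"},
]

def find_car(listings, max_price, wanted_models):
    """Filter listings against the buyer's criteria and pick the cheapest match."""
    matches = [c for c in listings if c["price"] <= max_price and c["model"] in wanted_models]
    return min(matches, key=lambda c: c["price"]) if matches else None

best = find_car(listings, max_price=20000, wanted_models={"Corolla", "Astra"})
print(best)  # the $18,500 Corolla at Parramatta
```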

The semantic web is about giving computers the information to interpret data, so that they can do what they do really well – compute.

4) A worldwide database
What Berners-Lee essentially envisions is turning the entire world wide web into a database that can be queried. Currently, the web looks like Microsoft Word – one big slab of text. If that text were neatly categorised in an Excel spreadsheet, you could manipulate the data and do what you please – create reports, reorder, filter, and so on to your heart's content.

At university, I was forced to do an Information Systems subject which was essentially about the theory of databases. Damn painful. I learned only two things from that course. The first was that my lecturer, tutor, and classmates spoke less intelligible English than a caterpillar. The second was what information is and how it differs from data. I am now going to share that lesson with you, and save you three months of your life.

You see, data is meaningless. For example, 23 degrees is data. On its own, it's useless. Another piece of data is Sydney. Again – useless. I mean, you can think all sorts of things when you think of Sydney, but it doesn't have any meaning.

Now put 23 degrees and Sydney together, and you have just created information. Information is about creating relationships between data. By creating a relationship – an association – between these two different pieces of data, you can determine it's going to be a warm day in Sydney. And that is what information is: relationship building; connecting the dots; linking the islands of data together to generate something meaningful.

The semantic web is about allowing computers to query the sum of human knowledge like one big database, to generate information.

Concluding thoughts
You are probably now starting to freak out, picturing "Terminator" scenes with computers suddenly erupting from under your desk and smashing you against the wall as the battle between humans and machines begins. But I don't see it like that.

I think about the thousands of hours humans spend trying to compute things. I think of cancer research, where all that experimentation occurring in labs is trying to connect new pieces of data with old data to create new information. I think about computers being able to query the entire taxation legislation to make sure I don't pay any tax, because it knows how it all fits together (having studied tax, I can assure you – it takes a lifetime to understand only a portion of tax law). In short, I understand the vision of the Semantic Web as a way of linking things together to enable computers to compute – so that I can sit on my hammock drinking my beer, delegating the duties of my life to the machines.

All the semantic web is trying to do is make sure everything is structured in a consistent manner, with a consistent dictionary behind the content, so that a machine can draw connections. As Berners-Lee said in one of the videos I saw: "it's all about creating links".

The process to a Semantic Web is boring. But once we have those links, we can then start talking about those hammocks. And that’s when the power of the internet – the global network – will really take off.

Facebook is doing what Google did: enabling

The Facebook platform has created a frenzy of hype – on it being a walled garden, on privacy and users' right to control their data, and of course on the monetisation opportunities of the applications themselves (which, on the whole, appear futile – but that will change).

We've heard of applications being targeted for acquisition, with one (rumoured) at $3 million – and it has been proved that applications are an excellent way to acquire users and generate leads to your off-Facebook website and products. We've also seen applications desperately trying to monetise, by putting Google ads on the homepage of the application – probably about as effective as giving a steak to a vegetarian. The other day, however, was the first time I have seen a monetisation strategy by an application that genuinely looked possible.

It's an application called Compare Friends, where you essentially compare two friends on a question (who's nicer, who has better hair, who would you rather sleep with…). The aggregate of responses from friends who have compared you can indicate how a person sits in a social network. For example, I am the most dateable in my network, and one of the people with the prettiest eyes (oh shucks, guys!).

The other day, I was given the option to access the premium service – which essentially analyses your friends' responses.


It occurred to me that monetisation strategies for the Facebook platform are possible beyond whacking Google AdSense on the application homepage. Valuable data can be collected by an application – such as what your friends think of you – and turned into a useful service. Like above, they offer to tell you who is most likely to give you a good reference, which could be a useful thing. In the application's current iteration, I have no plans to pay ten bucks for that data, but it does make you wonder: with time, more sophisticated services could be offered.

Facebook as the bastion of consumer insight

On a similar theme, I ran an experiment a few months ago whereby I purchased a Facebook poll, asking a certain demographic a serious question. The poll itself revealed some valuable data, as it gave me more insight into the type of users on Facebook (following up from my original posting). But it also revealed the power of tapping into the crowd for a response so quickly.
Seeing the data come in by the minute as up to 200 people took the poll, a marketer could quickly gauge how people think about something in a statistically valid sample, in literally hours. You should read this posting discussing what I learned from the poll if you are interested.

It's difficult to predict the trends I am seeing, and what will become of Facebook, because a lot could happen. However, one thing is certain: right now, it is a highly effective vehicle for individuals to gain insight about themselves – and generating this information is something I think people will pay for if it proves useful. Furthermore, it is an excellent way for organisations to run quick and effective market research to test a hypothesis.

The power of Facebook, for external entities, is that it gives access to controlled populations whereby valuable data can be gained. As the WSJ notes, the platform has now started to see some clever applications that realise this. Expect a lot more to come.

Facebook is doing what Google did for the industry

When Google listed, a commentator said this could launch a new golden age that would bring optimism not seen since the bubble days to a badly shaken industry. I reflected on that point at the time, wondering if his prophecy would come true one day. In case you hadn't noticed, he was spot on!

When Google came along, it did two big things for the industry:

1) AdSense. Companies now had a revenue model – put some Google ads on your website in minutes. It was a cheap, effective advertising network that created an ecosystem. As of 30 June 2007, Google makes about 36% of its revenue from members of the Google network – meaning non-Google websites. That's about $2.7 billion. Although we can't quantify how much their partners received – which could be anything from 20% to 70% (the $2.7 billion, of course, is Google's share) – it would be safe to say Google helped the web ecosystem generate an extra $1 billion. That's a lot of money!

2) Acquisitions. Google's cash meant that buyouts were an option, rather than the IPO most start-ups aimed for in the bubble days. In fact, I would argue the whole web 2.0 strategy for startups is to get acquired by Google. This has encouraged innovation, as all parties, from entrepreneurs to VCs, can make money from simply building features rather than actual businesses with positive cashflow. This innovation has a cumulative effect, as somewhere along the line someone discovers an easy way to make money in ways others hadn't thought possible.

Google's starting to get stale now – but here comes Facebook to further add to the ecosystem. Their acquisition of a 'web operating system' built by a guy considered to be the next Bill Gates shows that Facebook's growth is beyond a one-hit wonder. The potential for the company to shake the industry is huge. In advertising alone, they could roll out a network that goes a step beyond contextual advertising, because they have full profiles of 40 million people – which would make it the most efficient advertising system in the world. They could become the default login and identity system – no longer would you need to create an account for that pesky new site. And as we are seeing currently, they enable a platform that helps other businesses generate business.

I've often heard people say that history will repeat itself – usually pointing to how, 12 months ago, Myspace was all the rage: Facebook is a fad, they will be replaced one day. I don't think so. Facebook is evolving, and more importantly, it is improving the entire web ecosystem. Facebook, like Google, is a company that strengthens the web economy. I will probably hate them one day, just as my once-loved Google is starting to annoy me now. But thank God it exists – because it's enabling another generation of commerce that sees the sophistication of the web.

On the future of search

Robert Scoble has put together a video presentation on how Techmeme, Facebook and Mahalo will kill Google in four years' time. His basic premise is that SEOs who game Google's algorithm are as bad as spam (and there are some pissed SEO experts waking up today!). People like the ideas he introduces about social filtering, but on the whole, people are a bit more skeptical of his world-domination theory.

There are a few good posts, like Muhammad's, on why the combo won't prevail – but on the whole, I think everyone is missing the real issue: the whole concept of relevant results.

Relevance is personal

When I search, I am looking for answers. Scoble uses the example of searching for HDTV, and notes the top manufacturers as something he would expect at the top of the results. For him, that's probably what he wants to see – but I would want to be reading about the technology behind it. What I am trying to illustrate is that relevance is personal.

The argument for social filtering is that it makes results more relevant. For example, with a bunch of my friends associated with me on my Facebook account, an inference engine can determine that if my friend A is also friends with person B, who is friends with person C, then something I like must also be something person C likes. When it comes to search results, that sort of social/collaborative filtering doesn't work, because relevance is complicated. The only value a social network can provide is whether content is spam or not – a yes-or-no answer – and that assumes someone in my network has come across the content. Just because my social network can (potentially) help filter out spam doesn't make the search results higher quality; it just means fewer spam results. There is plenty of content that may be on-topic but might as well be classed as spam.

Google's algorithm essentially works on the popularity of links, which is how it determines relevance. People can game this algorithm, because someone can make a website popular through linking from fake sites and other optimisations to manipulate rankings. Google's PageRank algorithm assumes that relevant results are, at their core, purely about popularity. The innovation the Google guys brought to the world of search is something to be applauded, but the extreme lack of innovation in this area since just shows how hard it is to come up with new ways of determining relevance. Popularity is a smart way of determining relevance (because most people would like the result) – but since it can be gamed, it no longer is.
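For the curious, the 'popularity of links' idea can be sketched as a minimal power-iteration PageRank over a toy link graph. This is a simplification of the real algorithm, and it assumes every page links out to at least one other page; note how page "a", with the most inbound links, wins – and how a fake site linking in would inflate that:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: repeatedly redistribute each page's rank along its
    outbound links, mixed with a uniform 'teleport' term."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        rank = {
            p: (1 - damping) / len(pages)
               + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return rank

links = {"a": ["b"], "b": ["a"], "c": ["a"]}  # "c" links to "a" but nothing links back
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # a
```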

The semantic web

I still don't quite understand why people don't realise the potential of the semantic web, something I go on about over and over again (maybe not on this blog – maybe it's time I did). If anything is going to change search, it will be that: the semantic web will structure data, moving away from the document approach that webpages represent and towards the data approach of a database table. It may not be able to make results more relevant to your personal interests, but it will better understand the sources of data that make up the search results, and can match them to whatever constructs you present it.

Like Google's PageRank, the semantic web will require humans to structure data, from which a machine will then make inferences – similar to how PageRank makes inferences based on the links people create. However, Scoble's claim that humans can overtake a machine is silly: yes, humans have a much higher intellect and are better at filtering, but they can in no way match the speed and power of a machine. Once the semantic web gets into full gear a few years from now, humans will have trained the machine to think – and it can then do the filtering for us.

Human intelligence will be crucial for the future of search – but not in the way Mahalo does it, which is like manually categorising pieces of paper into a filing cabinet: not sustainable. A bit like how, when the painters of the Sydney Harbour Bridge finish painting it, they have to start all over again because the other side is already rusting. Once we can teach a machine that, for example, a dog is an animal that has four legs and makes a sound like "woof", the machine can act on our behalf, like a trained animal, and fetch what we want; how those paper documents are stored becomes irrelevant, and the machine can do the sorting for us.

The Google killer of the future will be the people who can convert the knowledge on the world wide web into information readable by computers, creating this (weak) form of artificial intelligence. Now that's where it gets interesting.

Google: the ultimate ontology

A big issue with the semantic web is ontologies – the use of consistent definitions for concepts. For those who don't understand what I'm talking about: essentially, the next evolution of the web is about making content readable not just by humans but also by machines. However, for a machine to understand something it reads, it needs consistent definitions. Humans, for example, are intelligent – they understand that the word "friend" is related to the word "acquaintance" – but a computer would treat them as two different things. Or does it?

Just casually looking at some of my web analytics, I noticed some people landed on my site by doing a Google search for how many acquaintances do people have, which took them to a popular posting of mine about how many friends people have on Facebook. I've had a lot of visitors because of this posting, and it's been an interesting case study for me on how search engines work. But today was different: I found the word acquaintance weird. I knew I hadn't used that word in my posting – and when I went to the Google cache I realised something interesting: because someone linked to me using that word, the search engine associated the word 'friend' with 'acquaintances'.


Google’s linking mechanism is one powerful ontology generator.

BarCampSydney2

Things I learned at this BarCamp

  • It was a very different crowd from the first one.
  • It's so easy to network – as easy as breathing in and breathing out! I gave a presentation, and as a consequence, people approached me throughout the day to introduce themselves.
  • In the morning, collaboration was a bit of a hot theme. John Rotenstein from Atlassian asked how people define collaboration. "When two or more people work together on a business purpose" was my answer. We agreed. Everyone else kind of didn't.
  • How to raise money – was the afternoon’s theme. Great points were brought up by Marty Wells, Mike Canon-Brookes and Dean McEvoy who led the discussion.
  • Some things mentioned:
  1. Aussie VCs lead you on. "Nice idea – let's keep in touch" is their way of not burning bridges.
  2. VCs work in cycles of five or so years – raise money at the beginning of the cycle.
  3. Rule of thumb: give 30% away on the first round, and another 30% on the second round.
  4. Advisors who give out Comet grants work on a 2% commission of any future venture capital you raise.
  5. No one understands the advertising market – everyone in the room wanted something they could read to learn more (check back here soon – I promise!). For example, Google's AdWords programme is heavily supported by the property market, so the mortgage-lending downturn from the current credit crisis is going to hurt start-ups relying on AdSense as the money drops out of those ads.
  • I met Jan Devos, who randomly approached me and blew me away with what he has done in his life. Basically (and from the age of 17), he created an implementation of the MPEG4 compression technology (for non-tech readers – MP4 as opposed to the older MP3) and he licenses out the technology to major consumer appliance companies like Samsung, who incorporate the technology into their products.
  • I met Dave O’Flynn – self-described as a “tall Irish red-head” developer; Matt June – a former Major in the Australian military, and now pursuing a project based around social innovation; I discovered Rai of Tangler is a commitmentphobe; Mick thinks he can skip most of BarCamp because he thinks organising a wedding is so hard; Mike Canon-Brookes over beer revealed he is a Mark Zuckerberg wannabe; and Christy Dena one of the lead (un)organisers of the conference looks completely different from the person I thought she was!
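The 30%-per-round rule of thumb mentioned above compounds, which is easy to underestimate. Here is a minimal sketch (the figures are illustrative assumptions, not investment advice) of what founders are left with after successive rounds:

```python
# Rough sketch of the "give ~30% away per round" dilution rule of thumb.
# All numbers are illustrative assumptions.
def founder_stake(initial, rounds):
    """Return the founders' remaining stake after each round sells the
    given fraction of the company to new investors."""
    stake = initial
    for fraction_sold in rounds:
        stake *= (1 - fraction_sold)
    return stake

# Two rounds of 30% each leave the founders with 70% of 70% = 49%.
print(round(founder_stake(1.0, [0.30, 0.30]), 2))  # 0.49
```

In other words, after the two rounds in the rule of thumb, the founders no longer hold a majority – presumably why the rule gets quoted so often.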

I got a positive reaction to my half-hour session on five lessons I have learned about successful intrapreneurship, thanks to a large internal project I started at my employer, with people throughout the day getting into a chat with me about it. Richard Pendergast, who is starting an online parenting site, said he was going to write a blog post on one of the points, about his own personal battle of creating credibility. Glad I helped! I told him I was going to blog what I talked about so we could turn it into a discussion, but I have decided that this exam I have to sit in 8 days might need to start getting my attention. Anyway, here are the five points I made – though given the discussion during the session, it's a very rough framework, as people brought up some great points while we talked:

1) It is a lot easier to seek forgiveness than permission when doing something in an organisation. Or in other words: just do it.

2) Be proactive, never reactive. By pushing the agenda, you are framing the agenda for something that works for your project. Once you start reacting to others, your idea will die.

3) The more you let go – the bigger your idea will get. Use other people to achieve your vision. Give other people a sense of ownership in it. Let them take credit.

4) It’s all about perception. It’s amazing how much credibility you can build simply by associating your idea with other things – which, in the process, builds your own personal brand so you can push through with more later on.

5) Hype builds hype. Get people excited, and they will carry your idea forward. People get excited when you communicate the potential and have them realise it.

Thank you to all those involved – both the organisers and the contributors – and I look forward to the next one.

Half the problem has been solved with time spent

On Thursday, I attended the internal launch of the Australian Entertainment & Media Outlook for 2007-2011. It was an hour packed with interesting analysis, trends, and statistics across a dozen industry segments. You can leave a comment on my blog if you are interested in purchasing the report and I’ll see if I can arrange it for you.

One valuable thing briefly mentioned was the irony of online advertising.

Some things will never change: how to create credibility

This weekend in my office with a half dozen colleagues, we toiled away on an (academic) assignment due tonight. When you spend 11 hours in one day around one table, on something that drives you mad, conversation about things unrelated to the task is aplenty. And when there was no conversation, procrastination was aplenty, with Facebook being the prime culprit amongst all of us.

An interesting scenario happened, which made me revisit something I have long wondered about. One of the girls asked how Facebook makes money, and I went on a rant about their $200 million Microsoft deal, how they are heading towards an IPO, and other random facts I just happen to know. They all looked at me stunned, wondering how I could possibly know such things, and I replied that I read a lot – I read a lot of blogs.

“…but how do you know that stuff you are reading is accurate?” – with reference to that $200 million figure, which I don’t even remember where I read. The funny thing about the question is that it’s smart and stupid at the same time. The answer seems too obvious – but it isn’t: how DO I know those facts I stated were true?

Why I bring this up, is because this is an issue I have long tried to come to grips with – what makes information credible? How do you know when you read something on the internet, that it is reliable? The answer is we don’t. Sort of.

This “new media” world isn’t the reason we have this apparent problem: information credibility has long been an issue, first realised by the citizens of western democracies after the Great War, when they recognised that newspapers could no longer be taken as fact (due to the propaganda efforts). So it’s been a problem since long before computers and hypertext were invented – it’s only that, now that we are in an Information Age, the quality of information has come under higher scrutiny because of its abundance.

How do we know what makes something reliable? Is it some gee-whiz Google algorithm? Perhaps it’s the wisdom of the crowds? Maybe – but there is something else even more powerful that I have to thank Scott Karp for making me realise this, back in the days when he was starting out as a blogger: it’s all about branding.

What makes an article in the New York Times more credible than one written by a random student newspaper rag? What makes a high-profile author more credible in what they say than a random nobody who puts their hand up in a town hall meeting? And going back to the question my colleague asked earlier – how do I know the blogs I am reading have any credibility over, say, something I read in an established newspaper such as The Economist?

Simple: branding establishes information credibility. And a brand – for any type of entity, be it an individual journalist or a news organisation – depends on recognition by others. There could be absolutely no credibility in your information (like Wikipedia) and yet you could have a brand that establishes credibility by default – just like how people regularly cite Wikipedia as a source now, despite knowing it’s inherently uncredible.

The power of branding is that no matter how uncredible you are – your brand will be enough to make anything you say, incredible.

Thoughts on attention, advertising, and a metric to measure both: keep it simple

Advertising on the Internet is exploding. Assuming you accept my premise that the Internet will be the backbone of the world’s attention economy, I am sure you can see the urgency of developing an effective metric for measuring audiences that consume content online. Advertisers are expecting more accountability online, and there is increasing demand for an independent third party to verify results. But you can’t have accountability, and there is no value in audits, if one place measures in apples and another in bananas.

The Attention Economy is seriously lacking an effective measurement system

Ajax broke the pageview model of impressions, the billion-dollar practice of click fraud is the dirty big secret of pay-for-performance advertising, and the other major metric, unique visitors (through cookies), is proving inaccurate.

It sounds crazy, doesn’t it? The Internet has the best potential for targeted advertising, and advertisers are moving onto it in stampedes – and yet, we still can’t work out how to measure audiences effectively. Measurement is broken on the Net.

(Although I am focusing on advertising, this can be applied in other contexts. An advertising metric is simply putting a monetary value on what is really an attention metric.)

Yet when we look at the traditional media, are we being a little harsh on this new media? Is the problem with the web’s measurement systems simply that the web is more accountable for its errors? After all, radio, television, and print determine their audiences through inference – sampling methods, not direct measurement. Sampling is about making educated guesses – but a guess is still a guess.

Maybe another way of looking at it is that the old way of doing advertising is no longer effective. Although we can say pageviews are broken due to AJAX, the truth is the pageview was always an ineffective measurement, as it was based on the traditional media’s premise of how many viewers/subscribers theoretically and potentially could see an ad.

As an example of why this is not how it should be: when people visit my blog via Google Images, they hang around for 30 seconds. People who search for the business issues I write about – like the stuff you are reading right now – spend 5+ minutes. If both are equal in terms of pageviews, but the latter actually reads the pages and the former only scans the content for an image, why are we treating them equally? My blog is half about travel and half about the business of the internet, which is why I have two very different audiences. Just because I get high pageviews from my travel content doesn’t mean I can justify higher CPMs to people who want to advertise on internet issues. Not all pageviews are the same – especially when I know the people giving me high pageviews aren’t really consuming my content.

Another issue is that advertisers are so caught up on who can create the most entertaining 30-second ad that the creativity of entertaining people has overtaken the reason advertising happens in the first place: to make sales. The way you do that is by communicating your product to the people who would want to buy it. If I placed advertising on this blog for people wanting to do web-business-related things, they should only pay for the people who read my blog postings for 5+ minutes on the attention economy, not for the Google Images searchers who are looking for porn (my top keywords, and how people find my blog, make me laugh out loud sometimes!).
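To make the distinction concrete, here is a minimal sketch of an attention-weighted count. The field names and the 60-second threshold are assumptions for illustration, not a real analytics API:

```python
# Hypothetical sketch: count only the visits whose dwell time suggests
# the content was actually read, instead of counting every pageview equally.
def attentive_views(visits, min_seconds=60):
    """Return the number of visits at or above the dwell-time threshold."""
    return sum(1 for v in visits if v["seconds_on_page"] >= min_seconds)

visits = [
    {"source": "google-images", "seconds_on_page": 30},   # scanned for an image
    {"source": "web-search", "seconds_on_page": 320},     # read the post
    {"source": "web-search", "seconds_on_page": 290},     # read the post
]
print(attentive_views(visits))  # 2 – three pageviews, but only two readers
```

Under a pageview model all three visits are worth the same; under a dwell-time model, only the two readers would count towards the CPM.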

When we create a metric that measures attention, let’s be sure of one thing: the old way is broken, and the new ways will continue to be broken if we simply copy and paste the old ways. New ways like the click-through ads that appear on search results, which account for 40% of internet advertising, are not how advertising should be measured. The reason is that they put the burden of an effective advertising campaign on the publisher. Why should a publisher not get paid – bearing the opportunity cost of not running another ad that would have paid – because of the ineffectiveness of the advertiser’s targeting strategy?

When measuring audience attention, let’s not overcomplicate it. It should purely measure whether someone saw the ad. As an advertiser, I should be able to determine which people from which demographic can see my ad – and yes, I will pay a premium for that targeting. Whether it turns into a sale, or whether they enjoyed the content, is where your complex web analytics packages come in. But for a simple global measurement system, let’s keep it simple.

Concluding thought

If I stood at the toll booths of the Sydney Harbour Bridge naked, some people would honk at me and others wouldn’t. If I can guarantee that they can see me naked, that’s all I, as the publisher, need to do. It’s the advertiser’s problem whether people honk at me or not. (Not enough honks means, as a model, I should still get my wage. They just need to hire a better-looking model next time!)
