Frequent thinker, occasional writer, constant smart-arse

Tag: information (Page 4 of 6)

Pageviews are a misleading metric

Recently MySpace, the social networking site that once dominated but is now being overtaken by Facebook, sent me an e-mail informing me that a friend of mine had a birthday. What is unusual is that although I have received notifications of this type when logged into the site, I had never before been e-mailed one.

Below is a copy of the e-mail; let’s see if you notice what I did:
[Image: birthday reminder e-mail]

It doesn’t tell me whose birthday it is. In fact, it is ambiguous as to whether it is even just the one person. Big deal? Not really. But it very clearly tells me something: MySpace is trying to increase its pageviews.

Social networking sites are very useful services to an individual; they enable a person to manage and monitor their personal networks. Not only am I in touch with so many people I had lost contact with, but I am in the loop with their lives. I may not message them, but by passive observation I know what everyone is up to: what they’re studying, where they work, what countries they will be holidaying in, and useful things like when their birthday is.

Social networking sites are not just websites, but information services that help you manage your life. However, as useful as I find these services, the revenue model is largely dependent on advertising, with premium features now a rarity. And when you rely on advertising, you are going to look for ways to boost the key figures that determine that revenue stream.

Friendster’s surprising growth in May was due to some clever techniques using e-mail to drive pageviews. And it worked. E-mail notifications, when done tactfully, can drive a huge amount of activity. Of the what seems like hundreds of web services I have joined, e-mail is at times the only way for me to remember I even subscribed to them once upon a time. Combine e-mail with information I want to be updated with, and you’ve got a great recipe for using e-mail as a tool to drive pageviews.

…And that is the problem. MySpace has very cleverly sent this e-mail to get me to log into my account. A marketing campaign like that will, at the very least, see a good day of pageview growth. But the reason I am logging in is just to see whose birthday it is. MySpace is now irrelevant to me: the pageviews attributed to me are not those of an engaged user.

Pageviews as a metric for measuring audience engagement are prone to manipulation. On the face of it, increases in pageviews make a website appear more popular. But dig a little deeper, and the correlation with what really matters (audience engagement) is weak.

So everyone, repeat after me: Pageviews – we need to drop them as a concept if we are ever going to make progress.

How Google Reader can finally start making money

Today, you would have heard that Newsgator, Bloglines, Me.dium, Peepel, Talis and Ma.gnolia have joined the APML workgroup and are in discussions with workgroup members on how they can implement APML into their product lines. Bloglines made some news the other week with its intention to adopt it, and today’s announcement about Newsgator means APML is fast becoming an industry standard.

Google, however, is still sitting on the sidelines. I really like using Google Reader, but if they don’t announce support for APML soon, I will have to switch back to my old favourite Bloglines, which is doing some serious innovating. Seeing as Google Reader came out of beta recently, I thought I’d help them out to finally add a new feature (APML) that will see it generate some real revenue.

What a Google Reader APML file would look like
Read my previous post on what exactly APML is. If the Google Reader team were to support APML, they could add to my APML file a ranking of blogs, authors, and key-words. First an explanation, and then I will explain the consequences.

In terms of blogs I read, the percentage of posts I read from a particular blog will determine the relevancy score in my APML file. So if I was to read 89% of TechCrunch posts – which is information already provided to users – it would convert this into a relevancy score for TechCrunch of 89% or 0.89.

[Image: ranking – APML: pulling rank]

In terms of authors I read, it can extract who posted the entry from the individual blog postings I read and, like the blog ranking above, perform a similar procedure. I don’t imagine it would be too hard to do this; however, given it’s a small team running the product, I would put this at a lower priority to support.

In terms of key-words, Google could employ its contextual analysis technology on each of the postings I read and extract key words. By performing this on each post, the frequency of extracted key words determines the relevance score for those concepts.

So that would be the how. The APML file generated from Google Reader would simply rank these blogs, authors, and key-words, and the relevance scores would update over time. The data is periodically re-indexed and recalculated from scratch, so as concepts stop being viewed they diminish in value until they drop off.
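The ranking step described above could be sketched roughly as follows. This is a minimal illustration, not anything from the APML spec or Google Reader: the function names and the decay factor are my own assumptions.

```python
# Sketch: turn reading history into APML-style relevance scores.
# All names and the decay factor are illustrative assumptions.

def relevance_scores(read_counts, published_counts):
    """Score each blog by the fraction of its posts actually read.

    Both arguments are dicts keyed by blog name, e.g.
    read_counts["TechCrunch"] = 89, published_counts["TechCrunch"] = 100.
    """
    return {blog: round(read_counts[blog] / published_counts[blog], 2)
            for blog in read_counts}

def decay(scores, factor=0.9):
    """Periodically shrink all scores so ignored concepts fade away
    and eventually drop off."""
    return {k: round(v * factor, 4) for k, v in scores.items()}

scores = relevance_scores({"TechCrunch": 89}, {"TechCrunch": 100})
# scores["TechCrunch"] == 0.89, matching the example above
```

The same shape of calculation would apply to authors and extracted key words; only the counting step differs.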

What Google Reader can do with that APML file
1. Ranking of content
One of the biggest issues facing consumers of RSS is information overload. I am quite confident that people would pay a premium for any attempt to rank the hundreds of items a day a user can face. By having an APML file, Google Reader can over time match postings to a user’s ranked interests. So rather than presenting content in reverse chronology (most recent to oldest), it can instead organise content by relevancy (items of most interest to least).

This won’t reduce the amount of RSS a user consumes, but it will help them allocate their attention. There are a lot of innovative ways you can rank the content, down to the way you extract key words and rank concepts, so there is scope for competing vendors to have their own methods. The point, however, is that a ‘Sort by Personal Relevance’ feature would be highly sought after, and I am sure quite a few people would be willing to pay the price for this godsend.
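A ‘Sort by Personal Relevance’ feature could, at its simplest, score each feed item by the APML concepts it mentions and sort on that. This is a hypothetical sketch (the item structure and keyword matching are assumptions; real keyword extraction would be far more involved):

```python
# Sketch: order feed items by the sum of APML-style concept scores,
# instead of by reverse chronology. Illustrative only.

def personal_relevance(item_keywords, apml_scores):
    """Sum the user's concept scores for every keyword an item mentions;
    unknown concepts contribute nothing."""
    return sum(apml_scores.get(k, 0.0) for k in item_keywords)

def sort_by_relevance(items, apml_scores):
    """items: list of (title, keywords) pairs. Most relevant first."""
    return sorted(items,
                  key=lambda item: personal_relevance(item[1], apml_scores),
                  reverse=True)

apml = {"startups": 0.9, "celebrity gossip": -1.0}
feed = [("Who wore what", ["celebrity gossip"]),
        ("New funding round", ["startups"])]
ranked = sort_by_relevance(feed, apml)
# ranked[0][0] == "New funding round"
```

Negative scores are what push the junk to the bottom, which is why the -1.0 end of the APML range matters as much as the +1.0 end.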

I know Google seems to think contextual ads are everything, but maybe the Google Reader team can break from the mould and generate a different revenue stream through a value-add feature like this. Google should apply its contextual advertising technology to determine key words for filtering, not advertising – using pre-existing technology to open up a new revenue stream.

2. Enhancing its AdSense programme

[Image: blatant ads – Targeted advertising is still bloody annoying]

One of the great benefits of APML is that it creates an open database about a user. Contextual advertising, in my opinion, is actually a pretty sucky technology, and its success to date is only because all the other types of targeted advertising models are flawed. As I explain above, the technology should instead be used to better analyse what content a user consumes, through keyword analysis. Over time, a ranking of these concepts can build up – as well as being shared with other web services that are doing the same thing.

An APML file that ranks concepts is exactly what Google needs to enhance its AdWords technology. Don’t use it to analyse a post to show ads; use it to analyse a post to rank concepts. Then, in aggregate, the contextual advertising will work, because it can be based off this APML file with great precision. And even better, a user can tweak it – the equivalent of tweaking what advertising they want to get. The transparency of a user being able to see what ‘concept ranking’ you generate for them is powerful, because a user is likely to monitor it for accuracy.

APML is contextual advertising’s biggest friend, because it profiles a user in a sensible way that can be shared across applications and monitored by the user. Allowing a user to tweak their APML file, motivated by more targeted content, aligns their self-interest with ensuring the targeted ads thrown at them based on those ranked concepts are, in fact, relevant.

3. Privacy credibility
Privacy is the inflation of the attention economy. You can’t proceed to innovate with targeted advertising technology whilst ignoring privacy. Google has clearly realised this the hard way, having been labeled one of the worst privacy offenders in the world. By adopting APML, Google would go a long way towards gaining credibility on privacy. It would be creating open transparency around the information it collects to profile users, and it would allow users to control that profiling of themselves.

APML is a very clever approach to dealing with privacy. It’s not the only approach, but it is one of the most promising. Even if Google never uses an APML file as I describe above, the pure brand-enhancing value of giving users some control over their rightful attention data would alone benefit the Google Reader product (and Google’s reputation itself) if they were to adopt it.

[Image: privacy – Privacy. Stop looking.]

Conclusion
Hey Google – can you hear me? Let’s hope so, because you might be the market leader now, but so was Bloglines once upon a time.

Explaining APML: what it is & why you want it

Lately there has been a lot of chatter about APML. As a member of the workgroup advocating this standard, I thought I might help answer some of the questions on people’s minds. Primarily – “what is an APML file”, and “why do I want one”. I suggest you read the excellent article by Marjolein Hoekstra on attention profiling that she recently wrote, if you haven’t already done so, as an introduction to attention profiling. This article will focus on explaining what the technical side of an APML file is and what can be done with it. Hopefully by understanding what APML actually is, you’ll understand how it can benefit you as a user.

APML – the specification
APML stands for Attention Profile Markup Language. It’s an attention economy concept, based on the XML technical standard. I am going to assume you don’t know what attention means, nor what XML is, so here is a quick explanation to get you on board.

Attention
There is this concept floating around on the web about the attention economy. The idea is that as a consumer, you consume web services – e-mail, RSS readers, social networking sites – and you generate value through your attention. For example, if I am on the MySpace band page for Sneaky Sound System, I am giving attention to that band. Newscorp (the company that owns MySpace) is capturing that implicit data about me (ie, it knows I like Electro/Pop/House music). By giving my attention, I have let Newscorp collect information about me. Implicit data is what you give away about yourself without saying it – like how people can determine what type of person you are purely off the clothes you wear. It sits alongside explicit data – information you give up about yourself directly (like your gender when you signed up to MySpace).

[Image: attention camera – I know what you did last Summer]

XML
XML is one of the core standards on the web. The web pages you access are probably using a form of XML to provide the content to you (XHTML). If you use an RSS reader, it pulls a version of XML to deliver that content to you. I am not going to get into a discussion about XML, because there are plenty of other places that can do that. I just want to make sure you understand that XML is a very flexible way of structuring data. Think of it like a street directory: a map with no street names is useless if you are trying to find a house. Add the street names, and it suddenly becomes a lot more useful, because you can make sense of the houses (the content). XML is a way of describing a piece of content.

APML – how it works
So all APML is, is a way of converting your attention into a structured format. It does this by storing your implicit and explicit data – and scoring it. Lost? Keep reading.

Continuing with my example about Sneaky Sound System: if MySpace supported APML, it would identify that I like pop music. But just because someone gives attention to something doesn’t mean they really like it; the thing about implicit data is that companies are guessing, because you haven’t actually said it. So MySpace might say I like pop music, but with a score of 0.2, or 20% positive – meaning it’s not too confident. Now let’s say directly after that, I go onto the Britney Spears music page. Okay, there’s no doubting now: I definitely do like pop music. So my score against “pop” is now 0.5 (50%). And if I visited the Christina Aguilera page: forget about it – my APML rank just blew out to 1.0! (Note that the scoring system is a percentage, ranging from -1.0 to +1.0, or -100% to +100%.)

APML ranks things, but the concepts are not just things: it will also rank authors. In the case of Marjolein Hoekstra, who wrote the post I mention in my intro: because I read other things from her, I clearly have a high regard for her writing, so my APML file gives her a high score. On the other hand, I have an allergic reaction whenever I read something from Valleywag, because they have cooties. So Marjolein’s rank would be 1.0 but Valleywag’s -1.0.

Aside from the ranking of concepts (which is the core of what APML is), there are other things in an APML file that might confuse you when reviewing the spec. “From” means ‘from the place you gave your attention’. So with the Sneaky Sound System concept, it would be ‘from: MySpace’. It’s simply describing the name of the application that added the implicit node. Another thing you may notice in an APML file is that you can create “profiles”. For example, the concepts about me in my “work” profile are not something I want to mix with my “personal” profile. This allows you to segment the ranked concepts in your APML into different groups, allowing applications access to only a particular profile.

Another thing to take note of is ‘implicit’ and ‘explicit’, which I touched on above – implicit being things you give attention to (ie, the clothes you wear – people guess, from what you wear, that you are a certain personality type); explicit being things you gave away (the words you said – when you say “I’m a moron”, it’s quite obvious you are). APML categorises concepts based on whether you explicitly said it, or it was implicitly determined by an application.
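Putting those pieces together, here is roughly what reading such a file looks like with Python’s standard library. The XML fragment is illustrative only: it loosely follows the APML 0.6 draft (profiles, implicit vs explicit concepts, a “from” attribute), but the namespace is omitted and attribute details should be treated as assumptions, not the spec.

```python
# Sketch: pulling ranked concepts out of an APML-style file.
# The XML below is an illustrative, simplified fragment.
import xml.etree.ElementTree as ET

APML = """
<APML version="0.6">
  <Body defaultprofile="personal">
    <Profile name="personal">
      <ImplicitData>
        <Concepts>
          <Concept key="pop music" value="0.5" from="myspace.com"/>
        </Concepts>
      </ImplicitData>
      <ExplicitData>
        <Concepts>
          <Concept key="attention" value="1.0"/>
        </Concepts>
      </ExplicitData>
    </Profile>
  </Body>
</APML>
"""

def concepts(profile_name, source="implicit"):
    """Return {concept: score} for one profile, implicit or explicit."""
    root = ET.fromstring(APML)
    tag = "ImplicitData" if source == "implicit" else "ExplicitData"
    path = f".//Profile[@name='{profile_name}']/{tag}/Concepts/Concept"
    return {c.get("key"): float(c.get("value")) for c in root.findall(path)}
```

Because applications can be handed a single profile, the lookup is scoped by profile name – which is exactly the segmentation the “work”/“personal” example above relies on.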

Okay, big whoop – what can APML do for me?
In my eyes, there are five main benefits of APML: filtering, accountability, privacy, shared data, and you being boss.

1) Filtering
If a company supports APML, it is using a smart standard that other companies use to profile you. By ranking concepts and authors, for example, they can use your APML file in the future to filter things that might interest you. As I have such a high ranking for Marjolein, when Bloglines implements APML, it will be able to use this information to start prioritising content in my RSS reader. Meaning, of the 1000 items in my Bloglines reader, all the blog postings from her will have more emphasis for me to read, whilst all the ones about Valleywag will sit at the bottom (with last night’s trash).

2) Accountability
If a company is collecting implicit data about me and trying to profile me, I would like to see that information, thank you very much. It’s a bit like me wearing a pink shirt at a party. You meet me and think “Pink – the dude must be gay”. Now I am actually as straight as an arrow, and wearing that pink shirt is me trying to be trendy. What you have done, by observation, is profile me. Now imagine if that was a web application, where this happens all the time. By letting them access your data – your APML file – you can change that. I’ve actually done this with Particls before, which supports APML. It had ranked a concept high based on things I had read, which was wrong. So what I did was change the score to -1.0 for one of them, because that way Particls would never show me content on things it wrongly thought I would like.

3) Privacy
I joined the APML workgroup for this reason: it was to me a smart way to deal with the growing privacy issue on the web. It fits my requirements for being privacy compliant:

  • who can see information about you
  • when people can see information about you
  • what information they can see about you

The way APML does that is by allowing me to create ‘profiles’ within my APML file; allowing me to export my APML file from a company; and allowing me to access my APML file so I can see what profile has been built of me.

[Image: drivers – Here is my APML, now let me in. Biatch.]

4) Shared data
An APML file can, with your permission, share information between your web services. My concept rankings for books on Amazon.com can sit alongside my RSS feed rankings. What’s powerful about that is the unintended consequences of sharing that data. For example, if Amazon ranked my favourite book genres, that could be useful information for filtering my RSS feeds by blog topic. The data generated in Amazon’s ecosystem can benefit me in another ecosystem’s product, in a mutually beneficial way.

5) You’re the boss!
By being able to generate APML for the things you give attention to, you are recognising the value your attention has – something companies already place a lot of value on. Your browsing habits can reveal useful information about your personality, and the ability to control your profile is a very powerful concept. It’s like controlling the image people have of you: you don’t want the wrong things being said about you. 🙂

Want to know more?
Check the APML FAQ. Otherwise, post a comment if you still have no idea what APML is. Myself or one of the other APML workgroup members would be more than happy to answer your queries.

Understand your content

I picked up a book my parents used on their recent trip to Greece – a guidebook of the Peloponnese. Flicking through this paper book reminded me of how rife the content business is with piracy. Especially in an online world, people can copy content – images, text, audio – and mash it up into their own creation. It seems crazy, but why do people enter a business like that?

The information sector is not only a big money maker, but very unique as well. Yes, its products can be copied and ripped off – unlike a Barbie doll, whose form can’t really be manipulated into a new product. But unlike Barbies, information products do things that are very unique in this world and extremely powerful. In my view there are four types of information product, which can be explained under the categories of data and culture.

Data

New data
A friend and aspiring politician once said to me that “information is the currency of politics”. Reuters, the famed news organisation that supplies breaking news to media outfits across the world, derives 90% of its revenue from selling up-to-the-minute financial information to stockbrokers and the like, who profit from getting information before others. New information, like what the weather will be tomorrow, loses value with time (not many care what the weather was eight days ago). But people are willing to pay a price, and a big one, for access to this breaking news, because it can help make decisions.

Old data
On the flip side, old information can be very valuable because of the ability to conduct research and analysis on it. Search engines effectively fit into this segment of the information economy, because they can query past news and knowledge to produce answers. Extending the weather example: being able to find that data from eight days ago, along with the weather exactly one, five and ten years ago, can help you identify trends that, for example, validate the global warming theory.

Culture

Analysis
The third category of information products I simply call analysis, because what they offer is unique insight. We all have access to the same news, for example, but it takes a smart thinker to create a prediction by pulling the pieces together and creating new value from them. Analytical content usually gets plagiarised by students writing essays, but it’s also the stuff that shapes people’s perceptions in world-changing ways.

Entertainment
One of the most powerful uses of content is the way it can impact people – entertainment-type content is the stuff that generates emotion. Emotion is a key human trait to keep in mind in any decision – no matter how logical someone is, the emotional self can take over. A documentary that portrays an issue negatively, and generates an angry response in a person, is the stuff that can topple governments and corporations.

Not all information is equal
If you are a content creator, you need to accept that other people can copy your creation. The key is to understand what type of content you are creating, and develop a content strategy that exploits its unique characteristics.

Information products need different strategies to monetise them effectively. Below is a brief discussion which extends on the above to help you understand.

New data
With this type of content, the value is in the time: the quicker the information can be accessed, the more useful it is. News items (like current affairs) fit into this category. As a news consumer, I don’t care how I get my news, but I do care about how quickly I can get it. It’s for this reason that I no longer read newspapers, yet through various technologies like RSS and my mobile phone, I probably consume more news than ever before.

You should sell this data based on access – the more you pay, the quicker the access. Likewise, the ability to enable multiple outputs is key – you need to be able to deliver your content to as many different places as possible: SMS, email, RSS etc. You should not discriminate on the output; the value is in the time.

If you create news breaks, why waste your time on controlling who can access that information, because of the threat that someone might copy it? If the value is in the time, who cares who copies it – by the time they republish it, it has already lost value. The Flash-driven site of the Australian Financial Review is an example of a management team that doesn’t realise this.

Old data
A recent example of action in this space is the New York Times, which removed its paid subscription wall: content previously available only to subscribers can now be accessed by anyone for free. This is a smart business move, because if you are selling archived content, you will make more money by having more people know what exists. A paid wall limits usage, which decreases the opportunity for consumption: you are relying on brand alone to create demand. If you are a website with a lot of historical content, restricting access is stupid, because you are effectively asking people to pay for access to something whose value to them they cannot know. It’s a bit like travelling – if you’ve never been overseas, you don’t know what you are missing out on. Give people a taste of the travel bug, and they will never be able to sit still.

Unlike new data, where the value is based on time, old data finds value in accessibility. People will place value on things like search, and the ability to find relevant content through the mountains of content available. Here the multitude of outputs doesn’t matter, because researchers have all the time in the world. What matters is a good interface and powerful tools to mine the data: the value is in being able to find information. You shouldn’t charge people for access to the content; where you will make money is on the tools to mine the data.

Analysis
This type of content is difficult to create but easily ripped off by other people – just think of how rife plagiarism is in schools and universities, where the latter treat plagiarism as a crime just short of murder. You can distinguish this type of content because it is produced from a common set of inputs that anyone could access, yet presents a viewpoint that only a certain type of person could create. The value is in the unique insight.

Despite the higher intellect required to produce it, it unfortunately is content that is harder to capitalise on. A lot of technology blogs feel the pressure to move to a more news-style rather than analytical service, because news is what gets eyeballs. If you are a blogger looking to make money, the new-data approach above should be your strategy. But if you are a blogger trying to build your brand, do analysis. The catch with analysis is that it’s harder to do, so you shouldn’t feel pressured to produce more content. I’ve noticed a trend, for example, that if I post more blog postings, I will get more traffic. But by the same token, more postings put more pressure on me, which means less quality content. Understand that the value of analysis isn’t dependent on time. Or better said, the value of analysis is not how quickly it gets pumped out, but how thoroughly it gets incubated as an idea and later communicated.

The value of analysis is clarity and the ability to offer new thoughts. To look at the relationship with advertising models: new data like news (discussed above) typically gets higher views, which works for the pageview model (the more people refreshing, the more CPMs). Analysis, on the other hand, works with the time-spent model. Take advantage of the engagement you have with those types of readers, because you are cultivating a community of smart people – there can be a lot more loyalty with that type of readership.

Entertainment
My sister downloads the Chaser’s War on Everything as a podcast. She first came across them on the radio, but she now downloads the podcasts religiously. Even though I knew about the Chaser’s efforts over the years in their various products, I didn’t realise they were still around. In the last few weeks, I have been noticing my friends bring up the shows they are doing. The value in this content is the ability to make people laugh, due to their unique stunts. Their brand is built on word-of-mouth recommendations.

Like analysis, entertainment can be a very hard thing to generate, because it relies on unique thinking. With a strong brand, people will pay for access to that content. Although it may seem that the viral spreading of funny content for free is a nightmare for a content producer trying to collect royalties, it’s actually a good thing, because it entrenches the brand: more people will find out about it. The nature of entertainment, like analysis, is that it is difficult to do repeatedly. Sure, people can copy your individual tricks – but only after the fact. They can’t anticipate the next thing you will do, because unlike breaking news, which is about how quickly you can pump out content, entertainment requires a unique creative process to produce.

The key with entertainment content is to build a relationship with an audience and to sustain it. Create a predictable flow of content. Encourage people to copy it, because all that does is get more people wanting to see what you come up with next. If it wasn’t for Stephen Colbert‘s clips on YouTube, I would never have realised his brilliance. Not knowing he existed, a DVD set of his shows would have meant nothing to me (but it holds a lot of value now). The value of entertainment is in generating emotions in people repeatedly. Emotions are a powerful influence on human behaviour – master that and you can be dangerous!

Concluding thoughts
This posting only touches on the issues, but what I suggest is that creators of content need to look at what type of content they are producing in order to exploit its unique aspects. Content represents human ideas, and content isn’t defined by a physical form. The theft of your content should be taken as a given, and it can actually help you. Depending on what that content is, there may be natural safeguards that make it irrelevant (ie, the time value of news).

5 observations of how social networking (online) has changed social networking (offline)

Just then, I had an image shattered. A well-respected blogger, whose online persona had me thinking he was a very cool person offline, is in fact a fat geek with an annoying voice. I can pretty much cross off the list that he can relate to how Facebook gets mentioned on the dancefloor of trendy nightclubs.

Another thing I have noticed: all the major commentators and players of the Internet economy are usually married, in their 30s or 40s, and almost all come from an IT background.

Don’t get me wrong – the industry has a lot of people who are a goldmine with what they say. They challenge my thinking, and they are genuinely intelligent. But although they are users of web services like Facebook or MySpace – just like the rest of society – they experience these technologies in the bubble of the technology community. Their view of the world is not aligned with what’s actually happening in the mainstream. No surprises there – they are the early adopters, the innovators and the pioneers. It’s funny, however, that compared to other services (like Twitter), adoption of Facebook amongst the tech community has been slow: it was only when the developer network launched that it started getting attention.

What I want to highlight is that most commentators have no way in the world of understanding the social impact of these technologies in the demographic where the growth occurs. We all know, for example, that Facebook is exploding with users – but do we know why it’s exploding? A married man in his 40s with a degree in computer science isn’t going to be able to answer that, because most of the growth comes from single 20-year-olds with a history major.

So what I am about to recount is my personal experience. I am not dressing it up as a thought-piece; I am purely sharing how I have seen the world take to social networking sites and how they have transformed my life and the lives of the people around me. I’m 23 years old, the people in my life generally fall into the computer-clueless category, and I have about 500 Facebook friends that I know through school, university, work, or just life (about ten are in the tech industry).

1) Social networking sites as a pre-screening tool
Observation: I was randomly approached by a chick one night, and during the course of our conversation she insisted I knew a certain person. Ten minutes, and 20 more “I swear… you know xxx”s later, I finally realised she was right and that I did know that person. For her to be so persistent in her claim, she had to be sure of herself. But how can someone be that sure with that piece of information, when I had only met her 30 seconds earlier?

I then realised this chick had already seen me before – via Facebook. I know this is the case, because I myself have wandered onto a person’s profile and realised we have a lot of mutual friends. In those cases I would note that it was bound to happen that I would meet them.

Implication: People are meeting people and knowing who they are before they even talk. They say most couples meet through friends. Well, now you can explore your friends’ friends – and then start hanging around a friend when you know they know someone you like!

2) Social networking sites getting you more dates
Observation: I met a chick and had a lengthy chat with her, and although she was nice, I left that party thinking I would probably never see her again, as I didn’t give out any contact details. The next day, she added me as a friend on Facebook. In another scenario, there was a girl I met a long time ago and hadn’t seen since. We randomly found each other on Facebook, and I’ve actually got to know her – picking up from where we left off.

Implication: Social networking sites help you further pursue someone, even though you didn’t get their number. In fact, it’s a lot less awkward. Facebook has become a part of the courtship process – flirtation is a big aspect of the site’s activity.

3) Social networking sites helping me decide
Observation: There was a big party, but I wasn’t sure if I would go because I didn’t know who would go with me. I looked at the event RSVP, and to my surprise found out that a whole stack of people I knew were going.

Implication: Facebook added valuable information that helped me decide. Not knowing who was going, I probably wouldn’t have gone. Think about this on another level: imagine you were interested in buying a camera, and you had access to the camera makes of your friends (because the digital photos they upload contain the camera model – as seen with Flickr). Knowing what your friends buy is a great piece of advice on what you might want to buy.

4) Social networking sites increasing my understanding of people I know
Observation: I found out, when a friend added me on MySpace, that she was bisexual – something I never would have realised. Being bi is no big deal – but it’s information that people don’t usually give up about themselves. Likewise, I have since found out that people I went to school with are now gay. Again – no big deal – but discreet information like that increases your depth of understanding about someone (ie, not making gay jokes around them). I know what courses my contacts have studied since I last saw them, and what they are doing with their lives. I also know of someone who will be at one of my travel destinations when I go on holiday.

Implication: You are in the loop about the lives of everyone you’ve met. It’s nothing bad, because these people control what you can see, but it’s great: there are things you know, things you know you don’t know – and now you can find out the things you didn’t know you didn’t know.

5) Social networking sites as a shared calendar
Observation: My little sister is currently going through 21st season – back-to-back parties of her friends. One of the gripes when organising a 21st is overlap with other people’s. Not only that, but also the physical process of contacting people and getting them to actually RSVP – it’s a pain. However, unlike my own 21st season from a few years ago, my sister has none of these issues. This is because Facebook is like one big shared calendar. Another example is how I send my congratulations to friends on their birthdays a lot more than I have in the past, because I actually know it’s their birthday – due to the fact that our calendars are effectively pooled into one shared calendar.

Implication: Facebook has become an indispensable tool in people’s social lives.

6) Bonus observation – explaining the viral adoption of Facebook
I have a few friends that don’t have Facebook. You can almost count them on one hand. And when you bring it up, they explode with an “I’m sick of Facebook!” and usually get defensive because so many people hassle them. In most cases, they admit that one day they will join. The lesson here is that Facebook is growing because of peer pressure. The more people in someone’s network, the more valuable Facebook becomes to them. When they say 40 million users, it’s actually 40 million salespeople.

God bless the network effect.

Don’t get the Semantic Web? You will after this

Prior to 2006, I had sort of heard of the Semantic Web. To be honest, I didn’t know much – it was just another buzzword. I had been hearing about Microformats for years, and about cool but useless initiatives like XFN. To me it was simply another web thing being thrown around.

Then in August 2006, I came across Adrian Holovaty’s article where he argues journalism needs to move from a story-centric world to a data-centric world. And that’s when it dawned on me: the Semantic Web is serious business.

I have since done a lot of reading, listening, and thinking. I don’t profess to be a Semantic Web expert – but I know more than the average person as I have (painfully) put myself through videos and audios of academic types who confuse the crap out of me. I’ve also read through a myriad of academic papers from the W3C, which are like the times when you read a novel and keep re-reading the same page and still can’t remember what you just read.

Hell – I still don’t get some things. But I get the vision, and that’s what I am going to share with you now. Hopefully my understanding will benefit the clueless and the skeptical alike, because it’s a powerful vision which is entirely possible.

1) The current web is great for humans; useless for machines
When you search for ambiguous terms, at best, search engines can algorithmically predict some sort of answer that partially answers your query. Sometimes not even that. The complexity of language is not something engineers can simply engineer around. After all, without the ambiguity of natural language, the existence of poetry would be impossible.

Fine.

What did you think when you read that? As in: “I’ve had it – fine!”, which is another way of saying OK or agreeing with something. Perhaps you thought about that parking ticket I just got – illegal parking gets you fined. Maybe you thought I was applauding myself, saying that was one fine piece of wordcraft I just wrote; or something said in another context, like a fine wine.

Language is ambiguous, and depending on the context of the surrounding words, we can determine the meaning of a word. Search start-up Powerset, which is hoping to kill Google and rule the world, is employing exactly this technique to improve search: intelligent processing of words depending on context. So if I put in “it’s a fine”, it understands from the context that it’s a parking ticket, because you wouldn’t say “it’s a” in front of ‘fine’ when you use it to agree with something (the ‘OK’ meaning above).

But let’s use another example: “Hilton Paris” in Google – the world’s most ‘advanced’ search engine. Obviously, as a human reading that query, you understand from the context of those words that I would like to find information about the Hilton in Paris. Well, maybe.

Let’s see what Google comes up with. Of the ten search results (as of when I wrote this blog posting), one was a news item on the celebrity; six were on the celebrity, describing her in some shape or form; and three were on the actual hotel. Google, at 30/70, is a little unsure.

Why is Paris Hilton, that blonde haired thingy of a celebrity, coming up in the search results?

Technologies like Powerset apparently produce a better result because they understand the order of the words and the context of the search query. But the problem with these searches isn’t just the interpretation of what the searcher wants – it’s also the ability to understand the actual search results. Powerset can only interpret so much of the gazillions of words out there. There is the whole problem of the source data, not just the query. Don’t get what I mean? Keep reading. But for now, learn this lesson:

Computers have no idea about the data they are reading. In fact, Google pumping out those search results is based on people linking. Google is a machine, and reads 1s and 0s – machine language. It doesn’t get human language.

2) The Semantic web is about making what humans read, machine readable
Tim Berners-Lee, the guy who invented the World Wide Web and the visionary behind the Semantic Web, prefers to call it the ‘data web’. The current web is a web of documents – by adding extra data to content, machines will be able to understand it. Metadata is data about data.

A practical outcome of having a semantic web is that when Google pulls up a web page – regardless of the context of the words – it will understand what the content is. Think of every word on the web being linked to a master dictionary.
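To make that idea concrete, here is a minimal sketch in Python of content carried as subject–predicate–object “triples”, the core idea behind the Semantic Web’s data model. Everything here – the URIs, the predicate names, the records – is invented for illustration; it is not real markup from any site or standard.

```python
# Metadata as subject-predicate-object triples: each statement links a
# page to an entry in a shared vocabulary, so a machine can tell the
# hotel apart from the celebrity without guessing from word order.
triples = [
    ("http://example.com/page/hilton-paris", "is-a", "hotel"),
    ("http://example.com/page/hilton-paris", "located-in", "Paris, France"),
    ("http://example.com/page/paris-hilton", "is-a", "person"),
    ("http://example.com/page/paris-hilton", "occupation", "celebrity"),
]

def find_subjects(triples, predicate, obj):
    """Return every subject that carries the given predicate/object pair."""
    return [s for (s, p, o) in triples if p == predicate and o == obj]

# A machine can now disambiguate "Hilton Paris" directly:
print(find_subjects(triples, "is-a", "hotel"))
# ['http://example.com/page/hilton-paris']
```

The point of the sketch is only that once statements share a vocabulary, disambiguation becomes a lookup rather than a statistical guess.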

The benefit of the semantic web is not for humans – at least immediately. The Semantic Web is actually pretty boring with what it does – what is exciting, is what it will enable. Keep reading.

3) The Semantic web is for machines to interpret, not people
A lot of the skeptics of the semantic web don’t see the value in it. Who cares about adding all this extra metadata? I mean, heck – Google was still able to get the website I needed – the Hilton in Paris. Sure, the other 70% of the results on that page were irrelevant, but I’m happy.

I once came across a Google employee who asked, “What’s the point of a semantic web; don’t we already have enough metadata?” To some extent, he’s right – there are some websites out there that have metadata. But the point of the semantic web is that once machines read the information, they can start thinking like a human would and connecting it to other information. There needs to be metadata across the board.

For example, my friend Michael was recently looking to buy a car. A painful process, because there are so many variables: so many different models, makes, dealers, and packages. We have websites with cars for sale, neatly categorised into profile pages saying what model a car is, what colour it is, and how much it costs (which, may I add, are hosted on multiple car sites with different types of profiles). A human painfully reads through these profiles, and computes as fast as a human can. But a machine can’t read these profiles.

Instead of wasting his (and my) weekends driving around Sydney to find his car, a machine could find it for him. Mike would enter his profile – what he requires in a car, what his credit limit is, what his prior history with cars is – everything that would affect his judgement of a car. Then the computer could query every online website with cars to match the criteria. Because the computer can interpret these websites across the board, it can evaluate the options and go back to Michael and say: “This is the car for you, at this dealer – click yes to buy.”
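The car-hunt scenario above can be sketched as a toy filter. This assumes a world where every dealer site exposes its listings as structured records; all the field names, listings, and prices below are invented for illustration.

```python
# Hypothetical structured listings, as if scraped from semantically
# marked-up dealer sites. In reality each site formats cars differently,
# which is exactly the problem the semantic web aims to remove.
listings = [
    {"model": "Corolla", "colour": "blue",  "price": 18000, "dealer": "Dealer A"},
    {"model": "Civic",   "colour": "red",   "price": 21000, "dealer": "Dealer B"},
    {"model": "Corolla", "colour": "white", "price": 16500, "dealer": "Dealer C"},
]

# The buyer's profile: budget and acceptable models.
profile = {"max_price": 19000, "models": {"Corolla", "Civic"}}

def matching_cars(listings, profile):
    """Return listings that fit the buyer's budget and preferred models."""
    return [
        car for car in listings
        if car["price"] <= profile["max_price"]
        and car["model"] in profile["models"]
    ]

for car in matching_cars(listings, profile):
    print(car["model"], "at", car["dealer"], "for", car["price"])
```

Once the data is uniform, the “evaluation” is trivial computation – the hard part is getting every site to describe its cars the same way.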

The semantic web is about giving computers the information they need to interpret data, so that they can do what they do really well – compute.

4) A worldwide database
What Berners-Lee essentially envisions is turning the entire World Wide Web into a database that can be queried. Currently, the web looks like a Microsoft Word document – one big slab of text. But if that text were neatly categorised in an Excel spreadsheet, you could manipulate that data and do what you please – create reports, reorder it, filter it, and do whatever else to your heart’s content.

At university, I was forced to do an Information Systems subject which was essentially about the theory of databases. Damn painful. I learned only two things from that course. The first was that my lecturer, tutor, and classmates spoke less intelligible English than a caterpillar. The second was what information is and how it differs from data. I am now going to share that lesson with you, and save you three months of your life.

You see, data is meaningless. For example, ’23 degrees’ is data. On its own, it’s useless. Another piece of data is ‘Sydney’. Again – useless. I mean, you can think all sorts of things when you think of Sydney, but on its own it doesn’t have any meaning.

Now put together 23 degrees and Sydney, and you have just created information. Information is about creating relationships between data. By creating a relationship – an association – between these two different pieces of data, you can determine it’s going to be a warm day in Sydney. And that is what information is: relationship building; connecting the dots; linking the islands of data together to generate something meaningful.
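That data-versus-information lesson fits in a few lines of Python. The threshold and wording here are my own invention, purely to show the jump from isolated values to a meaningful relationship:

```python
# Two bare pieces of data: meaningless in isolation.
temperature = 23          # just a number
city = "Sydney"           # just a name

# Information: a relationship tying the data together.
observation = {"city": city, "temperature_c": temperature}

def describe(obs):
    """Turn the related data into a meaningful statement."""
    if obs["temperature_c"] >= 20:
        return f"It's going to be a warm day in {obs['city']}."
    return f"Pack a jumper for {obs['city']}."

print(describe(observation))
# It's going to be a warm day in Sydney.
```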

The semantic web is about allowing computers to query the sum of human knowledge like one big database, to generate information.

Concluding thoughts
You are probably now starting to freak out, picturing “Terminator” scenes with computers suddenly erupting from under your desk and smashing you against the wall as a battle between humans and machines begins. But I don’t see it like that.

I think about the thousands of hours humans spend trying to compute things. I think of cancer research, where all the experimentation occurring in labs is trying to connect new pieces of data with old data to create new information. I think about computers being able to query the entire taxation legislation to make sure I don’t pay any tax, because they know how it all fits together (having studied tax, I can assure you – it takes a lifetime to understand only a portion of tax law). In short, I understand the vision of the Semantic Web as a way of linking things together to enable computers to compute – so that I can sit on my hammock drinking my beer, delegating the duties of my life to the machines.

All the semantic web is trying to do is make sure everything is structured in a consistent manner, with a consistent dictionary behind the content, so that a machine can draw connections. As Berners-Lee said in one of the videos I saw: “it’s all about creating links”.

The process to a Semantic Web is boring. But once we have those links, we can then start talking about those hammocks. And that’s when the power of the internet – the global network – will really take off.

Facebook is doing what Google did: enabling

The Facebook platform has created a frenzy of hype – on it being a closed wall, on privacy and users’ right to control their data, and of course on the monetisation opportunities of the applications themselves (which on the whole appear futile, but that will change).

We’ve heard of applications becoming acquisition targets, with one rumoured to go for $3 million – and it has been proved that applications are an excellent way to acquire users and generate leads to your off-Facebook website and products. We’ve also seen applications desperately trying to monetise by putting Google ads on the application homepage, which is probably about as effective as giving a steak to a vegetarian. The other day, however, was the first time I have seen a monetisation strategy by an application that genuinely looked possible.

It’s an application called Compare Friends, where you essentially compare two friends on a question (who’s nicer, who has better hair, who would you rather sleep with…). The aggregate of responses from your friends who have compared you can indicate how a person sits in a social network. For example, I am the most dateable in my network, and one of the people with the prettiest eyes (oh shucks, guys!).

The other day, I was given an option to access the premium service – which essentially analyses your friends’ responses.


It occurred to me that monetisation strategies for the Facebook platform are possible beyond whacking Google AdSense on the application homepage. Valuable data can be collected by an application – such as what your friends think of you – and that can be turned into a useful service. Like above, they offer to tell you who is most likely to give you a good reference – that could be a useful thing. In the application’s current iteration, I have no plans to pay ten bucks for that data – but it does make you wonder whether, with time, more sophisticated services could be offered.

Facebook as the bastion of consumer insight

On a similar theme, I ran an experiment a few months ago in which I purchased a Facebook poll, asking a certain demographic a serious question. The poll itself revealed some valuable data, as it gave me more insight into the type of users on Facebook (following up from my original posting). But it also revealed the power of tapping into the crowd for a response so quickly.
Seeing the data come in by the minute as up to 200 people took the poll, as a marketer you could quickly gauge how people think about something in a statistically valid sample – in literally hours. You should read this posting discussing what I learned from the poll if you are interested.

It’s difficult to predict the trends I am seeing, and what will become of Facebook, because a lot could happen. However, one thing is certain: right now, it is a highly effective vehicle for individuals to gain insight about themselves – and generating this information is something I think people will pay for if it proves useful. Furthermore, it is an excellent way for organisations to run quick and effective market research to test a hypothesis.

The power of Facebook, for external entities, is that it gives access to controlled populations whereby valuable data can be gained. As the WSJ notes, the platform has now started to see some clever applications that realise this. Expect a lot more to come.

Facebook is doing what Google did for the industry

When Google listed, a commentator said it could launch a new golden age that would bring optimism not seen since the bubble days to a badly shaken industry. I reflected on that point, wondering if his prophecy would come true one day. In case you hadn’t noticed, he was spot on!

When Google came along, it did two big things for the industry:

1) AdSense. Companies now had a revenue model – put some Google ads on your website in minutes. It was a cheap, effective advertising network that created an ecosystem. As of 30 June 2007, Google makes about 36% of its revenue from members of the Google network – meaning non-Google websites. That’s about $2.7 billion. Although we can’t quantify how much its partners received – which could be anything from 20% to 70% (the $2.7 billion, of course, is Google’s share) – it would be safe to say Google helped the web ecosystem generate an extra $1 billion. That’s a lot of money!

2) Acquisitions. Google’s cash meant that buyouts were an option, rather than the IPO most start-ups aimed for in the bubble days. In fact, I would argue the whole web 2.0 strategy for start-ups is to get acquired by Google. This has encouraged innovation, as all parties from entrepreneurs to VCs can make money from simply building features rather than actual businesses with positive cashflow. This innovation has a cumulative effect, as somewhere along the line, someone discovers an easy way to make money in ways others hadn’t thought possible.

Google is starting to get stale now – but here comes Facebook to further add to the ecosystem. Its acquisition of a ‘web operating system‘ built by a guy considered to be the next Bill Gates shows that Facebook’s growth is beyond a one-hit wonder. The potential for the company to shake the industry is huge. In advertising alone, it could roll out an advertising network that takes things a step further than contextual advertising, as it actually has a full profile of 40 million people. This would make it the most efficient advertising system in the world. It could become the default login and identity system for people – no longer would you need to create an account for that pesky new site asking you to sign up. And, as we are seeing currently, it enables a platform that helps other businesses generate business.

I’ve often heard people say that history will repeat itself – usually pointing to how, 12 months ago, MySpace was all the rage: Facebook is a fad; it will be replaced one day. I don’t think so. Facebook is evolving, and more importantly, it is improving the entire web ecosystem. Facebook, like Google, is a company that strengthens the web economy. I will probably hate it one day, just as my once-loved Google is starting to annoy me now. But thank God it exists – because it’s enabling another generation of commerce that advances the sophistication of the web.

Study finds people would not pay for privacy options

A 2007 study by researchers at Carnegie Mellon and the University of California at Berkeley found that most subjects were unwilling to spend even a quarter [25 cents] to keep someone from selling sensitive information about them — such as their weight or number of sex partners. “People prefer money over data, always,” says Alessandro Acquisti, assistant professor of information technology and public policy at CMU. Source: Wired

Is privacy really that big a deal? In short, yes – but whether people will pay for privacy options is something I am less sure of. Rather, I would expect the forces to swell up like a hurricane that will require political action to enforce privacy as a legal right. As a business, you shouldn’t think of privacy as a revenue stream but rather as a branding tool to build trust. It’s users’ ignorance of the truth, not acceptance of the consequences, that has people giving away their data. Don’t abuse that trust if given the opportunity.

On the future of search

Robert Scoble has put together a video presentation on how Techmeme, Facebook and Mahalo will kill Google in four years’ time. His basic premise is that SEOs who game Google’s algorithm are as bad as spam (and there are some pissed-off SEO experts waking up today!). People like the ideas he introduces about social filtering, but on the whole, they are a bit more skeptical of his world-domination theory.

There are a few good posts, like Muhammad‘s, on why the combo won’t prevail, but on the whole, I think everyone is missing the real issue: the whole concept of relevant results.

Relevance is personal

When I search, I am looking for answers. Scoble uses the example of searching for HDTV and notes the top manufacturers as something he would expect at the top of the results. For him, that’s probably what he wants to see – but I would want to be reading about the technology behind it. What I am trying to illustrate here is that relevance is personal.

The argument for social filtering is that it makes results more relevant. For example, with a bunch of my friends associated with me on my Facebook account, an inference engine can determine that if my friend A is also friends with person B, who is friends with person C, then something I like must also be something that person C likes. When it comes to search results, that sort of social/collaborative filtering doesn’t work, because relevance is complicated. The only value a social network can provide is telling you whether the content is spam or not – a yes-or-no answer – and even that assumes someone in my network has come across the content. Just because my social network can (potentially) help filter out spam doesn’t make the search results higher quality. It just means fewer spam results. There is plenty of content that may be on-topic but may as well be classed as spam.
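The friend-of-friend inference described above can be sketched naively. The names, the tiny graph, and the single liked item below are all invented; this illustrates the mechanism being criticised, not a real recommendation engine.

```python
# A toy friend graph and a naive "friends of friends probably share
# tastes" inference: collect likes from everyone within N hops.
friends = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B"},
}

likes = {"A": {"some-article"}}

def guessed_likes(person, friends, likes, depth=2):
    """Collect likes from everyone within `depth` hops of `person`."""
    frontier, seen = {person}, {person}
    for _ in range(depth):
        frontier = {f for p in frontier for f in friends.get(p, set())} - seen
        seen |= frontier
    guesses = set()
    for p in seen - {person}:
        guesses |= likes.get(p, set())
    return guesses

print(guessed_likes("C", friends, likes))
# {'some-article'}
```

The sketch makes the weakness plain: C inherits A’s like purely through graph distance, with no notion of whether the item is actually relevant to C – which is the article’s point.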

Google’s algorithm essentially works on the popularity of links, which is how it determines relevance. People can game this algorithm, because someone can make a website popular to manipulate rankings, through linking from fake sites and other optimisations. But Google’s PageRank algorithm assumes that relevant results are, at their core, purely about popularity. The innovation the Google guys brought to the world of search is something to be applauded, but the extreme lack of innovation in this area since just shows how hard it is to come up with new ways of making something relevant. Popularity is a smart way of determining relevance (because most people would like it) – but since it can be gamed, it no longer is.

The semantic web

I still don’t quite understand why people don’t realise the potential of the semantic web, something I go on about over and over again (maybe not on this blog – maybe it’s time I did). If anything is going to change search, it will be that – because the semantic web will structure data, moving away from the document approach that web pages represent and towards the data approach of a database table. It may not be able to make results more relevant to your personal interests, but it will better understand the sources of data that make up the search results, and can match them up to whatever constructs you present it.

Like Google’s PageRank, the semantic web will require humans to structure data, from which a machine will then make inferences – similar to how PageRank makes inferences based on the links people make. But Scoble’s claim that humans can overtake a machine is silly: yes, humans have a much higher intellect and are better at filtering, but they can in no way match the speed and power of a machine. Once the semantic web gets into full gear a few years from now, humans will have trained the machine to think – and it can then do the filtering for us.

Human intelligence will be crucial for the future of search – but not in the way Mahalo does it, which is like manually categorising pieces of paper into a filing cabinet. That is not sustainable – a bit like how, when the painters of the Sydney Harbour Bridge finish painting it, they have to start all over again because the other side is already starting to rust. Once we can train a machine that, for example, a dog is an animal that has four legs and makes a sound like “woof”, the machine can then act on our behalf, like a trained animal, and go fetch what we want; how those paper documents are stored will be irrelevant, and the machine can do the sorting for us.
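The “train it once, let it fetch” idea above can be sketched as a tiny fact base plus a transitive is-a rule. The facts are illustrative stand-ins for the kind of knowledge humans would encode once:

```python
# A minimal is-a hierarchy: each thing points to its category.
is_a = {
    "labrador": "dog",
    "dog": "animal",
    "animal": "living thing",
}

def categories(thing, is_a):
    """Walk the is-a chain to find everything `thing` counts as."""
    result = []
    while thing in is_a:
        thing = is_a[thing]
        result.append(thing)
    return result

print(categories("labrador", is_a))
# ['dog', 'animal', 'living thing']
```

Encode the chain once and the machine can answer “is a labrador an animal?” forever after – unlike the Mahalo approach, where every new document needs fresh manual filing.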

The Google killer of the future will be the people who can convert the knowledge on the World Wide Web into information readable by computers, to create this (weak) form of artificial intelligence. Now that’s where it gets interesting.

A casual chat with a media industry insider

Today I had the chance to pick the mind of Achilles from the International Herald Tribune, who last year was appointed Vice-President, Circulation and Development. Achilles is a family friend, and I took the opportunity to talk to him about the world of media and the challenges it faces.

The IHT is one of the three daily financial newspapers of the world, along with the Wall Street Journal and the Financial Times. It is currently owned by the New York Times, and has a global circulation of 240,000. We had a great chat on a lot of different themes which could keep me blogging for a week straight, but here are some of the facts I picked up from our discussion, summarised below as future talking points:

  • On Murdoch’s acquisition of the Wall Street Journal: “very interested to see if he will remove the paid wall”.
  • The IHT experimented with a paid wall for its opinion content, but will be removing it later this year
  • He says the Bancroft family sold it because they are emotionally detached from the product. It was just an asset to them.
  • A lot of the content is simply re-edited content from the NYT, internationalised. For example, a sentence like “Kazakhstan is the size of New York state” doesn’t work well for an international reader who has no idea how big New York state is.
  • On the threat of citizen journalism with traditional media: “they are a competitive threat because we are competing for the same scarce resource: the attention of readers”
  • The problem with citizen journalism and bloggers is the validity of their information – behind a newspaper’s brand is readers’ trust in the large amounts of research and fact-checking that occur. Bloggers have no such credibility.
  • A blog may develop credibility with an audience greater than the New York Times’. But this poses problems for advertising, as advertisers might only advertise because of a blog’s niche audience. Blogs are spreading the advertising dollars, which is hurting everyone – advertising has become decentralised, and that has problematic implications.
  • The IHT’s circulation is spread thinly across the world. For example, it has 30,000 readers in France and six in Mauritius.
  • Their target market is largely the business traveller, which has its own unique benefits and problems. For example, a business traveller will read it for two days, but when they get back home, they will revert to their normal daily newspaper. It’s not a very loyal readership.
  • Readership is a more important concept than circulation, as it tells advertisers how big the actual audience of a publication is. For example, the average newspaper has 2.7 readers per copy. However, due to the nature of the IHT’s readers, despite having high circulation, it has low readership.
  • The IHT is in a unique position of relying on circulation revenue more than advertising. A normal daily counts on circulation for 20% of its total revenue; the IHT counts on it for 50%.
  • It’s hard to get advertising because a readership of university professors is less desirable than fund managers that might read the WSJ. Advertisers prefer to target key decision makers.
  • It doesn’t rely on classifieds as a revenue source – a key thing hurting the newspaper industry currently.
  • Although they place more reliance on circulation revenue, they still get some good advertising opportunities as a lot of readers are politicians and government decision makers.
  • They get a lot of advertising for fashion.
  • Psychographic data is more important to advertisers than circulation, as it shows what type of readership a publication has.