Category Archives: Analysis

Facebook Graph Search Optimisation Basics

Facebook’s ‘Graph Search’ feature will unlock new discovery opportunities for brand Pages to reach people who demonstrate engagement ‘intent’ through their Graph Search queries. Pages that optimise both their organic and paid content strategy for Graph Search’s semi-structured queries will outperform those that continue to optimise for EdgeRank alone.


While this feature is still in beta, Facebook is priming brands for Graph Search’s full launch with a number of suggestions:

  • As always, continue to invest in your Page by making sure your Page is complete and up-to-date.
  • The name, category, vanity URL, and information you share in the “About” section all help people find your business and should be shared on Facebook.
  • If you have a location or a local place Page, update your address to make sure you can appear as a result when someone is searching for a specific location.
  • Focus on attracting the right fans to your Page and on giving your fans a reason to interact with your content on an ongoing basis.
  • You can learn more about fan acquisition and Page publishing best practices here.


The above tips are the ‘bare minimum’ required to bring a Page into the Graph Search era. For first movers, there is a window of opportunity to gain valuable insight into how to optimise Pages for Graph Search while it’s still a novel feature for users, and fans will tolerate your attempts to hack the algorithm through trial and error.

Make sure to check out my Facebook Graph Search Optimisation Hacks for Brands post for more optimisation tips.






The language needed by #nocleanfeed to succeed in the “real world”.

The comments on my post Why the language of #nocleanfeed dooms the movement to failure were overwhelming, and have prompted me to write a second post in the series on the language surrounding the #nocleanfeed debate. Josh Mehlman’s post over on The Drum, Filter opponents: change tactics or fail, mirrored many of the sentiments in my original post but raised an interesting issue:

These arguments have also failed to convince the public because the anti-filter groups have allowed their opponents to set the terms and language of the debate. Senator Conroy has consistently framed the filter in terms of protecting children from online nasties such as child pornography. The mainstream media has almost without exception taken this line uncritically when reporting on the filter.

This is an important roadblock to taking the language currently used online into the offline space. By making arguments based on censorship, you are immediately on the back foot against the Government: by opposing the filter, you are automatically labeled as being in support of child pornography. Once you are tarred with that brush, whatever arguments you make from that point, no matter how technically correct, will be tainted by the belief that you aren’t interested in saving the children.

As we all know, this isn’t the reality, but it is the perception of the issue by Joe Public and Government alike. The language of the debate has been dictated to the #nocleanfeed campaigners by supporters of the filter. By this stage it is too late to re-frame the debate in our own language; we must use language that is compatible with the debate as it currently stands.

This needs to take the form of:

a) We want to protect the children as much as you do

b) The current plan for a filter will not achieve this

c) Here is a viable alternative

Simply opposing the filter with no viable alternative undermines our argument that we want to protect the children as much as the Government does. What form the viable alternative takes, I’m looking for suggestions, but a viable alternative needs to be established in order to overcome the public perception that we do not support the protection of children by opposing the filter.

Facebook scams users to keep up with Twitter in the real time search war.

In a previous post I discussed the reasons why Facebook couldn’t win the real time search war with Twitter, namely because of the advantage Twitter has with mobile clients. Facebook has come along in leaps and bounds since then, with the release of the Facebook 3.0 app for the iPhone a particularly strong offering. Even my platform of choice, Windows Mobile, has a perfectly functional, albeit poor looking, Facebook app available.

The one area I didn’t cover in my previous post was perhaps the most critical factor in Facebook competing with Twitter in the real time search market. Facebook as a platform has, since its inception, been an inherently closed social network. People need to be part of your network before they can gain access to your information, either by being a friend directly, or by belonging to your extended network if you have chosen to allow ‘friends of friends’ access to your information.

Twitter, on the other hand, is an open platform by default. Sure, users can opt to restrict access to their tweets, but for the most part this is counter to the workings of Twitter. If people can’t find you by seeing you participate in conversations with others, you don’t exist to them, which limits your ‘Twitter experience’ considerably.

As a result, Twitter generates considerable amounts of publicly searchable information every second, whereas Facebook is stuck with its data in silos around each individual, inaccessible to the general public. In order to make this information available to search engines to provide meaningful results (and, more importantly, generate revenues from Bing/Google in the process) it is *vital* for Facebook to change from a closed social networking platform to a more open one, at least in regards to people’s wall posts (which provide the best real time data).

This week Facebook offered its users what it called a ‘Transition tool’ to manage the transition to the new structure of its privacy settings. The tool itself is illustrated below by a screenshot taken by @beaney.

Facebook Privacy Settings Update

What this transition tool offered was the new default option for each setting on the left, and a column of radio buttons on the right allowing you to keep your old setting if it differed from the new default. Instead of allowing users to select from all the privacy levels for each option, Facebook only permitted the new default (for the primary options this was the most permissive ‘everyone’ option) or the existing setting. Nothing in between.

Surely Facebook could have offered drop-down boxes, as with its normal privacy options menu, simply listed the previously used setting next to each option, and allowed users freedom of choice. But no, this is a deceptive move on Facebook’s behalf to scam its users into opening up their data so that Bing and Google can mine it and pay Facebook for the privilege.

Coinciding with this change in privacy settings was the announcement not long ago that Facebook and Microsoft had ‘inked a deal’ to index Facebook users’ data. Obviously it is in Facebook’s financial interests to push users into opening up their status updates, but doing so in such a deceptive way will only result in users over-sharing and unknowingly exposing themselves to risks, such as having their insurance payouts cut off when insurers see pictures of them on Facebook ‘smiling’. I expect Facebook’s response to such incidents will be “well, the user opted in to opening up their profile”, which may be true, but is unscrupulous and not the sort of behaviour I would expect from the company.

You can do better Mark Zuckerberg. You evangelise the user experience with Facebook, but you have let your users down with this one.

Why the Windows 7 Beta was a Marketing, not Engineering Success.

I attended the Windows 7 launch on the 22nd of October in Sydney (apologies for the average quality of the above photo, I was up the back of the auditorium taking photos on my mobile). The one thing that struck me (other than the relief Microsoft employees felt at no longer promoting Vista) was the belief the audience had in Windows 7 from first hand experience. Approximately 2/3 of the audience had already used the OS prior to launch, either participating in the public beta program or running the release candidate of Windows 7 that was made available to iron out the last few bugs in the wild before the RTM version went to the manufacturers.

Windows 7 Australia Launch

Obviously the audience at the launch event is one that skews heavily towards ‘geek’, so the 2/3 figure isn’t something that would carry over to the general population. However, it is an important group of influencers who were already acting as evangelists for the product before it was launched to the general public. For the most part the beta releases of Windows 7 were very nearly polished enough to be released as the final version, so the public beta served Microsoft more as a marketing exercise than an engineering one.

Microsoft managed to convince this key group of influencers that they could safely put their online social capital behind the product by letting the influencers get hands on with Windows 7 throughout the beta program. By creating an army of influencers who evangelise Windows 7, this acts to overcome the distrust of marketing pitches coming from companies trying to sell us a product.

This effect was dramatic. Normally with a major OS release, most consumers take a wait and see approach, holding off until the first service pack before upgrading. With this army of influencers, Microsoft was able to achieve a 234% increase in sales for Windows 7 above that of Windows Vista in the first weeks of the product’s release.

While Microsoft couldn’t have created an army of Windows 7 evangelists without it being a good product, the public beta program was instrumental in creating positive word of mouth amongst the influencers likely to sway consumers’ choices. Many companies would shy away from letting the public use a non-final version of their software, and for this I believe Microsoft should be commended: not just for using the public beta to create the most polished version of their OS yet, but for leveraging it as a marketing tool to build critical momentum prior to the Windows 7 launch.

If 88% of people refuse to pay for online news, can Murdoch make money?

Pureprofile has released the results of a survey, conducted through its online market research service, on how prepared people are to pay for news content online. The results are unsurprising, with 88% of those polled in both Australia and the UK unwilling to pay for online news. Only 5% were willing to pay for content if it was deemed of a high enough quality, with the remaining 7% willing to pay only if advertising was removed from the site.

(Survey data in the below image).



The weakly defined options of the poll aside, if 12% of respondents are willing to pay for their news online, is this a sufficient number for Murdoch’s plan to succeed? I’ve discussed Murdoch’s decision to charge for online content before and concluded that it is potentially a viable business model if enough people are willing to pay the online access prices. Now that the survey above gives us an estimate of the number of people willing to pay for online news, we can look at the kind of income this might generate for Murdoch’s Australian news portal. Using News Limited’s self-published data from August ’09, the site achieved 4.1 million unique browsers and 17 million browsing sessions. Some basic maths shows those two numbers average out to a reader who returns to the site about 4 times a month. With this profile we can start crunching some numbers on how much money News Limited might make if they turn the site into a fully pay-walled one.

A Likely Pricing Model

For the site’s ‘average’ reader (let’s call him Mr Pink) there are two likely payment options: a weekly subscription model, or a price levied per session/online ‘edition’. Given an average of just over 4 sessions per unique visitor per month, the mathematics is fundamentally the same for either model. For the sake of simplicity, at this stage we will discount the potential loss in advertising revenue involved in catering to the 7% of respondents who would only consider paying for their online news if it was advertising free.

We can now generate potential revenue figures with the below formula:

Current readership x Percent willing to pay x Fee levied x Sessions (or weeks) per month = Monthly income

Or using the above data and an estimate on the likely fee structure we get:

4.1 million x 12% x $2 per session/week* x ~4 sessions (or weeks) per month = $3,936,000 per month.

(*Reports surfaced in the last week that Murdoch intends to charge mobile users $2/week for access to the NYT, so these estimates are close to expected figures.)

Switching to a paid access model could secure just over $47 million a year in online subscription fees, not to mention possibly forcing readers back to its struggling printed newspaper business, which could add additional income. Is this enough to justify the potential loss in advertising revenue from decreased readership figures? Not to mention the readers who were only willing to pay for their news without advertising? Given the difficulty of pulling accurate financial data for the site out of the overall earnings of the News Corporation business entity, it’s hard to conclude definitively whether an annual income of $47 million makes this a viable move for Murdoch.
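The back-of-envelope model above can be sketched in a few lines of Python. The traffic and survey figures come straight from the post; the $2 fee and the ~4 sessions (or weeks) per month multiplier are the same assumptions made above:

```python
# Back-of-envelope paywall revenue model using the figures from the post.
unique_browsers = 4_100_000   # monthly unique browsers (News Limited, Aug '09)
sessions = 17_000_000         # monthly browsing sessions

# Average visits per reader per month (~4.1)
sessions_per_reader = sessions / unique_browsers

willing_to_pay = 0.12         # 5% (quality) + 7% (ad-free) from the survey
fee = 2.00                    # assumed $2 per session/week

paying_readers = unique_browsers * willing_to_pay
monthly_revenue = paying_readers * fee * round(sessions_per_reader)
annual_revenue = monthly_revenue * 12

print(f"{sessions_per_reader:.1f} sessions per reader per month")
print(f"${monthly_revenue:,.0f} per month")   # ~$3,936,000
print(f"${annual_revenue:,.0f} per year")     # ~$47 million
```

Note the sensitivity here: the result scales linearly with each input, so halving the fee or the conversion rate halves the annual figure.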

$47 Million, Spare Change?

Given annual revenues of $30 billion across all of its business units, $47 million is a drop in the ocean and seems to undervalue the property. Unless a number of the survey respondents can be convinced to change their minds on this issue, a decision to charge for content online in this case is likely to result in a negative outcome for Rupert’s bottom line.

**If any readers have additional information regarding the above numbers, or know where I can find revenue figures for the site, please add a comment below.

Google Acquires reCAPTCHA: For the tech or the distribution network?

Recently Google acquired reCAPTCHA, one of the leading CAPTCHA providers on the Internet. The key strength of the reCAPTCHA implementation of the CAPTCHA test is that it pairs a word known to the server with an unknown word that an OCR scan has failed to recognise. This allows reCAPTCHA to crowdsource the digitisation of scanned books, such as those in the Google Books project, as Google outlines in their blog post on the acquisition:

“This technology also powers large scale text scanning projects like Google Books and Google News Archive Search. Having the text version of documents is important because plain text can be searched, easily rendered on mobile devices and displayed to visually impaired users. So we’ll be applying the technology within Google not only to increase fraud and spam protection for Google products but also to improve our books and newspaper scanning process.”
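The known/unknown pairing scheme described above can be sketched roughly as follows. This is an illustrative toy under my own assumptions, not reCAPTCHA’s actual code; all identifiers and words are invented:

```python
import random

# Toy sketch of reCAPTCHA-style word pairing:
# - known_words: control words OCR already solved, answer known to the server
# - unknown_votes: words OCR failed on; human answers are collected as votes
known_words = {"ch1": "granite", "ch2": "harbour"}
unknown_votes = {"u7": []}

def issue_challenge():
    """Serve one control word and one unknown word together."""
    return random.choice(list(known_words)), random.choice(list(unknown_votes))

def check_answer(known_id, unknown_id, answer_known, answer_unknown):
    # The human is verified only against the control word...
    if answer_known.strip().lower() != known_words[known_id]:
        return False
    # ...while their transcription of the unknown word is recorded.
    # Once enough independent answers agree, the word can be promoted
    # into the digitised text (and into the known pool).
    unknown_votes[unknown_id].append(answer_unknown.strip().lower())
    return True
```

The elegance of the design is that verifying the human and digitising the book are the same keystrokes: the solver cannot tell which word is the control, so they must attempt both honestly.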

The publicly stated reasons for the acquisition seem obvious, perhaps a little too obvious, hiding the real reason behind the purchase. reCAPTCHA holds no specific patents for the technology behind its text CAPTCHA algorithms (at least none it discusses on its website or that can be found on the US Patent & Trademark Office site), and given that reCAPTCHA operates mostly on open source software, the case for Google buying the company gets thinner.

Given that Google could easily code its own reCAPTCHA equivalent, this business deal goes beyond the obvious. Google already has its own ‘CAPTCHA killer’ that operates using images and video; surely they could roll that out if they were really serious about security, so the tech involved with reCAPTCHA is not compelling from this perspective. reCAPTCHA’s Prof. von Ahn has already licensed his ESP Game image labelling program to Google (now known as Google Image Labeler), so buying reCAPTCHA might have been an attempt to grab this technology as a bundle with the company’s other assets.

Certainly reCAPTCHA is useful for digitising and making searchable Google’s vast collection of books in the Google Books archive project; however, given the above, Google could have designed a similar system themselves. What is really key to this purchase is the existing distribution network of websites already using reCAPTCHA, and the API the company has created to allow new sites and uses of the reCAPTCHA system. With 100,000 sites and 30 million CAPTCHAs served daily by reCAPTCHA, Google’s decision to buy the company is about the text processing volume it gives them immediately, not the technology behind it.

Social Media Metrics Solutions: An overview of the options.

Companies are increasingly being drawn into the world of social media because it is the medium of choice for many of their customers. The only problem is that, being such a new discipline, marketing to your customers in the social media space is still a grey area for many, which is a major impediment preventing companies from embracing it fully. Enter the land of social media metrics. Some tools promise to track your sentiment over time or compare it to competitors in the market. Others let you dive into conversations in the social media world related to your product. These tools give your company/agency a leg up and, at the very least, a proxy metric for determining the success of your marketing efforts as they transfer to the world of social media. None of them are perfect, but by not using tools such as these you risk missing beneficial opportunities to engage your customers in new ways.

This article is a precursor to a number of more in-depth reviews of social media metrics software which will follow in the coming weeks (provided I can get my hands on them all) and offers a starting point for your own research. Below are the platforms I’m familiar with and a run down of their major strengths and weaknesses. If you are using a social media metrics platform not included below, please leave a comment and I’ll endeavour to include it in my in-depth reviews moving forward.

Social Mention

Social Mention is a handy little social media real time search tool and alert system, allowing you to mine the social media landscape for ‘mentions’ of specific search strings and set up daily alerts via email to monitor your brand/name online. While a robust search engine, this solution is limited in its analytics and reporting options and is probably best for smaller businesses/individuals looking to make a first move into the social media metrics world.


Radian6

Radian6 is the most complete solution, offering both measurement/analytics & reporting (common across most of the software suites) as well as integration into CRM systems for tracking sales leads, and other ‘conversation management’ options once opportunities for engagement are discovered.

Sentiment Metrics

Mostly a white label offering to be resold by agencies. Not overly powerful; offers a basic set of semantics to track, lumped loosely into positive and negative categories. Enables you to drill down into brand mentions, but falls short of letting you jump into conversation streams to engage people through social media. It is mostly a tracking tool.

Techrigy SM2

A more advanced solution than Sentiment Metrics, though still primarily a monitoring/tracking tool. Probably best targeted at PR agencies looking to track streams of public sentiment relating to a brand or area of interest. It is more powerful in that it lets you specify search strings in addition to using Bayesian inference to analyse the language of mentions of your product.
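For readers wondering what ‘Bayesian inference’ means in this context: tools like this typically score mentions with something akin to a naive Bayes classifier. Techrigy’s actual model isn’t public, so the following is a minimal illustrative sketch on invented toy data, not their implementation:

```python
import math
from collections import Counter

# Toy training set of labelled brand mentions (illustrative only).
training = [
    ("love this product works great", "pos"),
    ("fantastic support really helpful", "pos"),
    ("terrible service never again", "neg"),
    ("broken on arrival waste of money", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
docs = Counter()
for text, label in training:
    docs[label] += 1
    counts[label].update(text.split())

vocab = set(counts["pos"]) | set(counts["neg"])

def sentiment(text):
    """Pick the label maximising log prior + log likelihood (add-one smoothing)."""
    scores = {}
    for label in counts:
        score = math.log(docs[label] / sum(docs.values()))
        total = sum(counts[label].values())
        for word in text.split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(sentiment("really love the support"))  # -> pos (on this toy data)
```

Real products train on far larger labelled corpora and handle negation, sarcasm and slang with mixed success, which is why the ‘positive/negative’ buckets these tools report should be treated as rough proxies.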


Trendrr

Trendrr is a robust set of tracking tools with reasonable reporting options (for a fee), but is primarily focused on visual comparison of trends in the social web over time. Useful for comparing your own brand vs. another over time, by social network/site.

Nielsen My Buzzmetrics

Nielsen offers their own ‘Buzzmetrics’ service (where they do the analysis for you and generate a report pack) along with a hosted service called My Buzzmetrics. My Buzzmetrics is really a monitoring tool combined with additional segmentation data available through Nielsen. Useful for tracking campaigns amongst targeted consumer segments.


BuzzNumbers

Probably second to Radian6 in features, BuzzNumbers is more hands on than most of the basic monitoring tools, allowing you to track individual conversations (setting up flags for notification of new mentions in specific threads, for example), but is limited in enabling you to engage directly from the interface. Decent reporting/analysis options.

Stay tuned: in the coming weeks I will be diving deeper into each of these solutions to give you a better feel for their capabilities and examine the likely best case usage scenario for each software platform. I’ve also been made aware of an excellent list of measurement tools, included in a presentation from Rachael Maughan of The White Agency, over at Katie Chatfield’s site here; well worth a look too.

Twitter is *now* and Facebook is *Last Weekend* (Why Facebook can’t win in real time search)

With the FriendFace deal I’ve written about before, the R&D team from FriendFeed was touted as giving Facebook a leg up in the real time search world against Twitter. While FriendFeed has some great technology to integrate into Facebook, until we see a fundamental paradigm shift in how people interact with Facebook, the service will always lag behind Twitter in the timeliness of posts, and hence always be behind the 8-ball when it comes to being a true real time search engine.


In an increasingly connected world, application support on mobile devices is critical to enable people to participate in their social networks on the go. Apps need to be lightweight and make it easy to update your status or participate in your ‘stream’ in the spare moments a person has on the bus, between meetings with clients, etc. Facebook has recently focused on developing a new version for the iPhone, ‘Facebook 3.0’, which purports to be much easier to use and offers a better optimised user experience than the mobile Facebook site, which is ‘underdeveloped’ to put it kindly.

Facebook 3.0 moves to address some of the shortcomings that have limited its use on mobile platforms, but the Facebook ‘environment’ does not translate well from the desktop to the mobile screen. The sheer number of features, plug-ins and other services that Facebook offers cannot easily be transferred to a mobile device, and this will continue to impede adoption of Facebook by mobile users.

Twitter, on the other hand, is inherently optimised for the mobile space. Being limited from early in its evolution to messages of 140 characters (Twitter originally had no message size limit, but this changed soon after the service was born), it has by default a natural place on mobiles, essentially a multicast version of SMS. This is both a natural extension of behaviour users are already accustomed to and an ideal fit for the mobile form factor.

Sure, Twitter mobile applications have added many bells and whistles on top of the basic function of sending 140 character messages. Applications such as Tweetie on the iPhone and PockeTwit on Windows Mobile (the screenshots of PockeTwit don’t do it justice, it’s a great app) have added media service integration (uploading of photos, GPS integration with mapping services, etc.), but the core competency of these applications is the short, succinct ‘tweet’ which drives their function.

Facebook cannot compete with this ‘limitation’ without neutering the experience desktop users have with the service, which is why, when it comes to what is happening now, Facebook can’t compete, and is limited to being the repository for the photos from the party you had on the weekend, when you finally upload them a week later.

Twitter is the real winner in the FriendFace buyout.

On first inspection of the FriendFace deal, with Facebook paying $150 million (in stock/cash) for FriendFeed, it looks like Twitter just lost a potential buyer in Facebook and might be limited to brokering a deal with Google or Microsoft, or actually having to find a way to monetise Twitter on their own.

All is not lost though. Twitter was never a good deal for Facebook; the price was too high and the benefits were limited (Facebook was looking for something it could integrate into Facebook, not an external site separate from the Facebook ecosystem). For Twitter, giving up sovereignty to Facebook was never an option, and there was never a strategic fit between the two companies.

So for Facebook, FriendFeed was by far the better buy. Not only does FriendFeed integrate better into the Facebook environment (Facebook has been copying FriendFeed UI elements for a while now) but, because of its stagnant growth despite continued innovation (growing 0.26% in July vs. Twitter’s 16%), FriendFeed was brought to the auction table at a bargain basement price.

What is important to keep in mind here is that Facebook bought FriendFeed not because of its user base, but because of its technology, which is where Twitter wins out of this deal.


After being bought out by Facebook, FriendFeed has no future as a standalone product. Sure, there are ruminations about the site continuing, but the exodus has begun, and FriendFeed users, unsure about the future of the site, no longer trust FriendFeed to exist in a year’s time. With this cloud over the site’s future, investing their time in continuing to use FriendFeed has a limited payoff, and they are looking for alternatives.

Facebook has made a land grab for disenfranchised FriendFeed users with ‘Facebook Lite’, but given that these users just lost their ‘home’ because of Facebook, there is a psychological impediment to moving to a Facebook property.

Where will they go?


Why Rupert Murdoch’s decision to charge for content online could save the news industry.

After Rupert Murdoch made the decision to charge for online access to content across his suite of newspapers’ online properties, there was considerable consternation across the Internet, with many armchair pundits in the blogosphere/twitter-verse crying foul at the idea of charging for content that was previously available for free, citing their own usage expectations and a belief that the content isn’t worth paying for given the perceived drop in journalistic standards and cost cutting amongst newspapers. While I have considerable time for the case against changing to a cash for content model, I believe in Murdoch’s case it has merit and might actually save the haemorrhaging news industry. Here’s why:

Charging for content online suddenly makes printed newspapers relatively less expensive

So you are running a newspaper company; what is your biggest cost? Sure, journalists’ salaries figure in here somewhere, but it is the actual printing presses and distribution networks that are the significant cost centres in a newspaper operation. With declining readership of actual newspapers over the last 10 years, advertisers are paying less for space than they used to and less income is coming in from newspaper purchases, yet the fixed costs of running the printing presses aren’t getting any lower, and as such they make up a larger proportion of your total operating costs.

In order to make purchasing an actual news-‘paper’ a more attractive proposition for consumers, one way to drive that change is to make other sources of news relatively more expensive. If you charge for the online version of your newspaper, the printed version becomes an attractive option for consumers who had moved to the previously free online alternative. This will drive a percentage of readers back to the paper version and patch up the hole in the traditional newspaper component of Murdoch’s enterprise. For how long is debatable, but in localised markets where Murdoch’s papers have near monopolies over content/distribution, it will have a larger effect than in more competitive geographies.

Murdoch doesn’t own the entire news industry, many news sites will remain free and become profitable (or at least, lose less money)

Assuming that not all readers of Murdoch’s online properties continue with the sites in a cash for content capacity, their eyeballs, and the advertising revenue they represent, will venture elsewhere. What this means is that free online news sites that were struggling to meet their costs amid a drop in online advertising revenue (exacerbated by the GFC) will be able to stem the revenue bleed with a greater share of the news audience. This ‘tiering’ of the online news audience is good for everyone, provided that Murdoch can make a compelling value proposition for readers of his sites to pay for content they are unable to find elsewhere (ideally, shifting the focus to quality journalism in lieu of the click generating sensationalism seen now).

Will all this save the news(paper) industry?

Maybe. In the long run, the competitive forces of citizen journalism and sites such as Wikinews will continue to place downward pressure on the operational costs of running a newspaper. With the cost of online distribution almost zero, there is nothing stopping an upstart news operation running out of a garage from unseating a large monolithic operation such as News Corp if the latter can’t continue to offer a decent value proposition while charging for content.

At the very least, this decision will breathe some life back into printed newspapers until some industry shake out has occurred and people adapt to the new online news landscape. This might just buy Murdoch some time to regroup; will it be enough?