How the Liberal Party just undermined the climate change skeptics.

If you live in Australia, it’s hard not to have been exposed to the turmoil sparked last week by the factional fights within the Liberal Party over the ETS/CPRS legislation that was defeated in the Senate today. The leadership spill was driven by the climate change skeptics within the party, who believed Malcolm Turnbull was doing the wrong thing by brokering a deal with the Labor Party over the ETS legislation; they felt the science needed more analysis, or at the very least that any decision should wait until after the United Nations Copenhagen climate change conference. In the process of electing Tony Abbott as their new leader, the Liberal Party has just ruined the best chance the climate change skeptics had of clawing this issue back from the hands of the Labor Party.

Given the Liberals’ poor showing in the polls even before the ETS issue erupted (worse than at the last election), they were likely to lose their majority in the Senate at the next election. The Liberals still hold a Senate majority only because of the half-Senate election cycle and the large number of seats Howard won in the 2004 federal election. With this in mind, the path Turnbull took on the ETS was very sensible: he understood the likelihood of losing the ability to block the legislation after the next election and, regardless of his position on the science behind climate change, knew that now was the time to negotiate a deal that would best serve his constituents. The opportunity existed because the Labor Party was keen to be seen as a world leader with the ETS and to use it for political gain around the Copenhagen conference. The time was right: Labor wanted action on the legislation before the next election, and the Liberals held the majority in the Senate. Instead, by imploding over this issue, the Liberal Party has ensured that after the next election the Labor Party will be in a position to pass whatever legislation it wants without amendment.

While the climate change skeptics in the Liberal Party never held a strong enough position to block the legislation past the next election, they held a strong enough position in the short term to win a number of important concessions from the Labor Party for their members. The Liberal Party has squandered this position, and ensured the worst possible outcome for its members and the climate change skeptics alike.

Critics argue Twitter is 99% noise, and they’re right.

This is a complaint I often come across when discussing Twitter with non-users. Citing media hype and their own brief experience with the service, they claim that the majority of tweets are of no interest, simply “people telling the world about their breakfast” or some variant on the inane-comments theory.

They are right.

But they are also missing the point.

For any individual, 99% of the internet will always be useless noise. When was the last time you went to YouTube, watched all the top videos, and found every one of them interesting? I’m guessing never. Does this mean YouTube is useless because so little of its content interests any particular individual? No.

In reality you filter the videos you watch on YouTube by search, by channel, or by receiving a video as a recommendation from a friend. Twitter is much the same: you cut through the noise by following users who match your search criteria, by following Twitter lists, or by following the people that other users and friends recommend.
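To make that filtering concrete, here is a toy sketch (all accounts and tweets invented) of how the global stream reduces to the timeline an individual user actually sees:

```python
# Invented sample of the global stream: (author, tweet)
firehose = [
    ("randomguy", "eating breakfast lol"),
    ("techblogger", "New post on real-time search"),
    ("friend1", "Great talk on social media metrics today"),
]

following = {"techblogger", "friend1"}  # accounts you chose to follow
searches = {"social media"}             # saved search terms

# Your timeline: only tweets from people you follow or matching a search
timeline = [(who, what) for who, what in firehose
            if who in following or any(s in what.lower() for s in searches)]
print(timeline)  # the other 99% of noise never reaches you
```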

The critics have made the mistake of confusing the macro-Twitter environment (i.e. all the tweets from every user) with the micro-Twitter environment an individual user experiences (only tweets from people of interest or value) when they are selective about whom they follow.

It’s up to the users of Twitter to explain this important error in perception to non-users, or risk the continued alienation of a large segment of the online community.

So how will you do this?

Why the Windows 7 Beta was a Marketing Success, not an Engineering One.

I attended the Windows 7 launch on 22 October in Sydney (apologies for the average quality of the above photo; I was up the back of the auditorium taking photos on my mobile). The one thing that struck me (other than the relief of Microsoft employees that they weren’t promoting Vista anymore) was the belief the audience had in Windows 7 from first-hand experience: approximately two-thirds of the audience had already used the OS prior to launch, either by participating in the public beta program or by running the Windows 7 release candidate that was made available to iron out the last few bugs in the wild before the RTM version went to the manufacturers.

Windows 7 Australia Launch

Obviously the audience at the launch event skews heavily towards ‘geek’, so the two-thirds figure isn’t something that would carry over to the general population. However, it is an important group of influencers who were already acting as evangelists for the product before its launch to the general public. For the most part the beta releases of Windows 7 were very nearly polished enough to be released as the final version, so the public beta served Microsoft more as a marketing exercise than an engineering one.

Microsoft convinced this key group of influencers that they could safely put their online social capital behind the product by letting them get hands-on with Windows 7 throughout the beta program. Creating an army of influencers who evangelise Windows 7 overcomes the distrust of marketing pitches that come from companies trying to sell us a product.

The effect was dramatic. Normally with a major OS release, most consumers take a wait-and-see approach, holding off until the first service pack before upgrading. With this army of influencers, Microsoft achieved a 234% increase in sales for Windows 7 over Windows Vista in the first weeks of the product’s release.

While Microsoft couldn’t have created an army of Windows 7 evangelists without it being a good product, the public beta program was instrumental in creating positive word of mouth among the influencers most likely to sway consumers’ choices. Many companies would shy away from letting the public use a non-final version of their software, and for this I believe Microsoft should be commended: not just for using the public beta to create the most polished version of their OS yet, but for leveraging the public beta as a marketing tool to build critical momentum prior to the Windows 7 launch.

Tiger Airways: Why treating customers like farm animals helps build their brand

On a recent trip to Melbourne, I thought I was taking advantage of Tiger Airways’ extremely low prices and landing myself a great deal on airfares. The process seemed smooth enough (despite the extra wait required at check-in at the Sydney terminal before the flight). The flight itself was straightforward, and the extra fee we chose to pay for seats in the exit row was worth it for my 6’4″ frame.

Once you add in charges for ‘extra’ luggage (anything above your included carry-on allowance) and the exit row, however, the $25 ‘bargain’ tickets suddenly no longer look like such a bargain (approaching, though not quite reaching, Virgin Blue or Jetstar rates for a comparable flight). This was not entirely a surprise; there is always a catch somewhere, and we felt like taking a punt on a new airline “for the experience”.

What was a surprise, though, was how Tiger Airways treats its customers once they arrive at its terminal (T4) at Melbourne Airport (Tullamarine).

Tiger Airways Terminal Melbourne

The baggage claim area for Tiger Airways was essentially a tin shed with chicken wire walls on a concrete floor.

Tiger Airways Terminal Melbourne

The exit to the terminal/baggage claim area.

I couldn’t help but feel I was being herded through the terminal like a cow to the slaughter by Tiger Airways. What was interesting on reflection was that Melbourne Airport is Tiger Airways’ primary hub for its Australian operations. It struck me as odd that they would build their premier hub in Australia in such a cheap and nasty way.

On further analysis, though, it is entirely reasonable for a cut-price operator in any industry to ‘dress’ the part. If the visual cues when flying Tiger are true to the sense that you are saving money, they reinforce the purchase decision and act as a feedback loop, solidifying the customer’s perception that they have managed to buy a ticket on the cheapest airline around.

Does this ‘build’ Tiger Airways’ brand? It certainly acts as an important differentiator from the other airlines’ offerings in Australia, and that in itself is important in carving out a niche in a market that already has two strong ‘value’ offerings in Virgin Blue and Jetstar. Being ‘value’ isn’t enough of a differentiator, but being ‘cheap’ is. I’d call it a success, though I’ll be flying another airline next time.

iSnack 2.0: What Kraft should have done.


Kraft has finally decided on a name for its new Vegemite product. The name, ‘iSnack 2.0’, has been universally panned by the mainstream media and social media alike. Where the naming competition fell down was not that it was crowdsourced, as many are suggesting, but that it was not crowdsourced enough. Sure, we can all understand why agencies and brand custodians alike are hesitant to completely open a brand up to the wisdom of the crowds: it makes the brand feel naked and exposed, and it brings into question the value the agency adds to the brand (since the agency arguably isn’t doing anything creative in its work for the brand).

Kraft would have struggled with almost any name it chose for the new Vegemite spread. With the brand so close to the hearts of many Australians, a large number would have reacted to any name regardless. Unfortunately for Kraft, they chose the name most likely to offend those with a voice in social media, and from that point on it was game over for iSnack 2.0.

What Kraft should have done

Run a more open competition: either a completely open one, with a Digg-style submission and voting system (which would have opened the competition to being ‘gamed’ to the point where the poll looks something like Time Magazine’s ‘hacked’ 100 most influential people results), or a partially open one, where the entry phase runs as it did for iSnack 2.0, a shortlist of ten (or fewer) names is drawn from the crowdsourced entries, and people vote on their favourites.

It is entirely possible that if the competition is gamed you will end up with a result less optimal than if the competition had been totally fair and every vote represented the true wisdom of the crowd. But at the very least, if you let the voting run for a period of time, you have introduced the potential product names to the population gradually via the shortlist, and the final result will not shock the loyal customers who, in the case of iSnack 2.0, were so outraged at having the name thrust upon their brand. With Kraft caving to the pressure and declaring it would rename the product before it had even shipped to retailers, this is a victory for the crowd, and evidence once again that “the mob is faster, smarter and stronger than you are.”
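For what it’s worth, the partially open second stage is mechanically trivial: because votes only count for shortlisted names, write-in gaming of the final poll is impossible. A toy sketch, with names and numbers invented for illustration:

```python
from collections import Counter

def shortlist(entries, size=10):
    """Stage 1: reduce crowd-sourced write-ins to a curated shortlist.
    Here we take the most-submitted names; in practice the brand team
    would vet the list before stage 2."""
    return [name for name, _ in Counter(entries).most_common(size)]

def tally(votes, candidates):
    """Stage 2: only votes for shortlisted names count."""
    return Counter(v for v in votes if v in candidates).most_common()

# Invented example data
entries = ["Cheesybite", "iSnack 2.0", "Cheesybite", "iSnack 2.0", "Snackgate"]
candidates = shortlist(entries, size=2)  # ['Cheesybite', 'iSnack 2.0']
votes = ["Cheesybite", "Cheesybite", "iSnack 2.0", "Snackgate"]  # write-in ignored
print(tally(votes, candidates))  # [('Cheesybite', 2), ('iSnack 2.0', 1)]
```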

After you #brokereplies we were promised friend recommendations, so where are they, Twitter?

It seems like an eternity ago, but in May 2009 Twitter unexpectedly made what they described on their blog as a “small settings update” to their service, and in doing so ‘broke’ an option for how @replies worked for a small percentage of their users. It just so happened that the 3% of users affected were a vocal lot, and they managed to create a significant amount of noise on Twitter with the #fixreplies hashtag (which is still in use today). The level of noise was disproportionate to the number of users affected, as many were upset at Twitter removing, without warning, a choice that had previously been available to them.

The unexpected change to Twitter’s service was arguably the right move from a consistency-of-service perspective. Users are now guaranteed that the @reply tweets they send are visible only to people who are following the person in question, rather than confusing people who aren’t close enough to the conversation stream. That said, an important aspect noted in the initial Twitter blog post announcing the change is “The Importance of Discovery”: many fans of the option to view all @replies used it to discover new and interesting people to follow, by seeing whom the people they followed were interacting with. Essentially, when the people you followed interacted with others, they were giving you an implicit cue that those people were worth following too. This act of discovery was important, and Twitter has not yet replaced it with a suitable option despite assurances that this would occur:

“We’re hearing your feedback and reading through it all. One of the strongest signals is that folks were using this setting to discover and follow new and interesting accounts—this is something we absolutely want to support. Our product, design, user experience, and technical teams have started brainstorming a way to surface a new, scalable way to address this need. “

If you have read this blog before, you will no doubt have encountered my belief that Twitter holds some advantages over Facebook, but when it comes to new-user discovery, Facebook has it in spades over Twitter with its friend suggestion feature. Twitter does have a suite of tools comparable to Facebook’s friend finder, but these are limited to finding users you already have existing relationships with (either in person, or via IM, email and so on). Twitter offers a Suggested Users page, but this is nothing more than a glorified list of celebrities and news outlets. How is that useful?

It wouldn’t be hard to create an algorithm that examines the relationships you have on Twitter and creates a list of suggestions based on who you already follow. The only issue is having access to the dataset and a few CPU cycles to mine through it. This could be offloaded to an external party through the Twitter API, but it involves more data than is feasible to send to a third party for processing (if I follow 1,000 people who each follow 1,000 people, the API needs to return 1,000 x 1,000 contacts for cross-matching and comparison, which is unfeasible for a third-party service).
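To make the idea concrete, here is a minimal sketch of the kind of friend-of-friend scoring I have in mind; the follow graph below is invented, and Twitter would of course run something like this over its own dataset:

```python
from collections import Counter

def recommend(user, follows, limit=10):
    """Suggest accounts followed by many of the people `user` follows.
    `follows` maps each account to the set of accounts it follows."""
    scores = Counter()
    for friend in follows.get(user, set()):
        for candidate in follows.get(friend, set()):
            if candidate != user and candidate not in follows[user]:
                scores[candidate] += 1  # one implicit cue per mutual friend
    return [name for name, _ in scores.most_common(limit)]

# Invented follow graph
follows = {
    "me": {"alice", "bob"},
    "alice": {"bob", "carol", "dave"},
    "bob": {"carol", "erin"},
}
print(recommend("me", follows))  # ['carol', 'dave', 'erin']; carol has two cues
```

The scoring itself is trivial; it is the fan-out over the full follow graph that gets expensive, which is exactly the computational cost discussed below.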

With Twitter sitting on this gold mine of data, what is stopping them from releasing a user discovery/recommendation engine? The code necessary to create such a feature is not impossible, just potentially computationally expensive. Twitter has already revealed that the original @reply options meant the service was potentially going to hit a server capacity wall it could not overcome (a combination of its original design and trouble scaling Ruby on Rails); is this why? Twitter has grown by 1,444% over the last year, and this obviously brings its own set of problems: not only do new users put a strain on the system, but the growing number of individual interconnections (followers/following) in the network increases the activity of the system. In addition, as users increase their number of followers there is a correlated increase in their average number of tweets per day, further straining Twitter’s servers.

It seems like an obvious and relatively straightforward feature to implement, and one Twitter assured us would be forthcoming after they #brokereplies. But is there more to this delay than Twitter is telling us?

If 88% of people refuse to pay for online news, can News.com.au make money?

Pureprofile has released the results of a survey, conducted through its online market research service, on how prepared people are to pay for news content online. The results are unsurprising, with 88% of those polled in both Australia and the UK unwilling to pay for online news. Only 5% were willing to pay for content if it was deemed of high enough quality, with the remaining 7% willing to pay only if advertising was removed from the site.

(Survey data: Pureprofile.)

The weakly defined options of the poll aside, if 12% of respondents are willing to pay for their news online, is this a sufficient number for Murdoch’s plan to succeed? I’ve discussed Murdoch’s decision to charge for online content before and concluded that it is potentially a viable business model if enough people are willing to pay the online access prices. Now that the survey above gives us an estimate of the number of people willing to pay for online news, we can look at the kind of income this might generate for Murdoch’s Australian news portal, News.com.au. Using News Limited’s self-published data from August 2009, News.com.au achieved 4.1 million unique browsers and 17 million browsing sessions. Some basic maths shows those two numbers average out to a reader who returns to the site about four times a month. With this profile we can start crunching numbers on how much money News Limited might make if it puts the site behind a full paywall.

A Likely Pricing Model

For News.com.au’s ‘average’ reader (let’s call him Mr Pink) there are two likely payment options: a weekly subscription, or a price levied per session/online ‘edition’. Given an average of just over four sessions per unique visitor per month, the mathematics is fundamentally the same for either model. For the sake of simplicity, at this stage we will ignore the potential loss of advertising revenue involved in catering to the 7% of survey respondents who would only pay for their online news if it were advertising-free.

We can now generate potential revenue figures with the below formula:

Current readership x Percent willing to pay x Fee levied x Paid sessions/weeks per month = Monthly income

Or using the above data and an estimate on the likely fee structure we get:

4.1 million x 12% x $2 per session/week* x 4 = $3,936,000 per month.

(*Reports have surfaced in the last week that Murdoch intends to charge mobile users $2/w for access to the NYT so these estimates are close to expected figures.)
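The same arithmetic in a few lines of Python, so the assumptions are explicit (the $2 fee and the four paid sessions/weeks a month are the estimates above, not confirmed News Limited figures):

```python
# Paywall revenue estimate for News.com.au, using the August 09 figures
unique_browsers = 4_100_000   # monthly unique browsers
sessions = 17_000_000         # monthly browsing sessions
willing_to_pay = 0.12         # 5% (quality) + 7% (ad-free) of respondents
fee = 2.00                    # assumed $2 per session/week, per the NYT reports

sessions_per_reader = sessions / unique_browsers
print(f"~{sessions_per_reader:.1f} sessions per reader per month")  # ~4.1

monthly = unique_browsers * willing_to_pay * fee * 4  # ~4 paid sessions a month
print(f"${monthly:,.0f} per month")      # $3,936,000
print(f"${monthly * 12:,.0f} per year")  # $47,232,000
```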

Switching to a paid-access model could secure News.com.au just over $47 million a year in online subscription fees, not to mention possibly forcing readers back to its struggling printed newspaper business, which could add further income. Is this enough to justify the potential loss of advertising revenue from decreased readership figures, not to mention losing the readers who were only willing to pay for their news without advertising? Given the difficulty of pulling accurate financial data for News.com.au out of the overall earnings of the News Corporation business entity, it’s hard to conclude definitively whether an annual income of $47 million makes this a viable move for Murdoch.

$47 Million, Spare Change?

Given annual revenues of $30 billion across all of its business units, $47 million is a drop in the ocean and seems to undervalue the News.com.au property. Unless a number of the respondents to the Pureprofile survey above can be convinced to change their minds on this issue, a decision to charge for content online in this case is likely to result in a negative outcome for Rupert’s bottom line.

**If any readers have additional information regarding the above numbers, or know where I can find revenue figures for News.com.au, please add a comment below.

Google Acquires reCAPTCHA: For the tech or the distribution network?

Google recently acquired reCAPTCHA, one of the leading CAPTCHA providers on the Internet. The key strength of the reCAPTCHA implementation of the CAPTCHA test is that it pairs a word known to the server with a word that an OCR scan has failed to recognise. This allows reCAPTCHA to crowdsource the digitisation of scanned books, such as those in the Google Books project, as Google outlines in its blog post on the acquisition:

“This technology also powers large scale text scanning projects like Google Books and Google News Archive Search. Having the text version of documents is important because plain text can be searched, easily rendered on mobile devices and displayed to visually impaired users. So we’ll be applying the technology within Google not only to increase fraud and spam protection for Google products but also to improve our books and newspaper scanning process.”
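A minimal sketch of the pairing trick, assuming invented function and field names rather than reCAPTCHA’s actual API: the server verifies only the control word it already knows, and treats the user’s answer for the unknown word as a vote towards its transcription.

```python
import random

# Invented data: one word the server knows, one an OCR scan failed on
KNOWN = {"w1": "morning"}   # word id -> known transcription
UNKNOWN_IDS = ["u7"]        # ids of words OCR couldn't read
votes = {}                  # word id -> crowdsourced transcription votes

def build_challenge():
    """Pair one known control word with one unknown word, in random order."""
    pair = [random.choice(list(KNOWN)), random.choice(UNKNOWN_IDS)]
    random.shuffle(pair)
    return pair

def check_answer(answers):
    """Pass/fail on the control word only; the unknown word's answer
    becomes a vote towards digitising the scanned text."""
    for word_id, text in answers.items():
        if word_id in KNOWN:
            if text.strip().lower() != KNOWN[word_id]:
                return False  # failed the word we can verify: reject
        else:
            votes.setdefault(word_id, []).append(text.strip().lower())
    return True

print(build_challenge())                              # e.g. ['u7', 'w1']
print(check_answer({"w1": "morning", "u7": "upon"}))  # True, 'upon' recorded
print(votes)                                          # {'u7': ['upon']}
```

Once enough independent users agree on an unknown word, it can be promoted into the known pool, which is what turns a spam filter into a digitisation engine.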

The publicly stated reasons for the acquisition seem obvious, and perhaps they are a little too obvious, hiding the real reason behind the deal. reCAPTCHA holds no specific patents for the technology behind its text CAPTCHA algorithms (at least none it discusses on its website or that can be found on the US Patent & Trademark Office site), and given that reCAPTCHA operates mostly on open-source software, the case for Google buying the company for its technology gets thinner.

Given that Google could easily code its own reCAPTCHA equivalent, this business deal goes beyond the obvious. Google already has its own ‘CAPTCHA killer’ that operates using images and video; surely they could roll that out if they were really serious about security, so the technology involved with reCAPTCHA is not compelling from this perspective. reCAPTCHA’s Prof. von Ahn has already licensed his ESP Game image-labelling program to Google (now known as Google Image Labeler), so buying reCAPTCHA might have been an attempt to grab this technology as a bundle with the company’s other assets.

Certainly reCAPTCHA can be used to digitise and make searchable Google’s vast collection of books in the Google Books archive project, but given the above, Google could have designed a similar system itself. What is really key to this purchase is the existing distribution network of websites already using reCAPTCHA, together with the API it has created for bringing new sites and uses onto the system. With 100,000 sites and 30 million CAPTCHAs served daily, Google’s decision to buy reCAPTCHA is about the text-processing volume it gives them immediately, not the technology behind it.

Social Media Metrics Solutions: An overview of the options.

Companies are increasingly being drawn into the world of social media because it is the media of choice for many of their customers. The only problem is that, being such a new discipline, marketing to your customers in the social media space is still a grey area for many, and this is a major impediment preventing companies from embracing it fully. Enter the land of social media metrics. Some tools promise to track your sentiment over time or compare it to competitors in the market; others let you dive into conversations in the social media world related to your product. These tools give your company or agency a leg up, and at the very least a proxy metric for determining how successfully your marketing efforts are transferring to the world of social media. None of them are perfect, but by not using tools such as these you risk missing beneficial opportunities to engage your customers in new ways.

This article is a precursor to a number of more in-depth reviews of social media metrics software that will follow in the coming weeks (provided I can get my hands on them all), and it offers a starting point for your own research. Below are the platforms I’m familiar with and a rundown of their major strengths and weaknesses. If you are using a social media metrics platform not included below, please leave a comment and I’ll endeavour to include it in my in-depth reviews moving forward.

Social Mention

Social Mention is a handy little real-time social media search tool and alert system. It lets you mine the social media landscape for ‘mentions’ of specific search strings and set up daily alerts via email to monitor your brand or name online. While a robust search engine, this solution is limited in its analytics and reporting options and is probably best for smaller businesses and individuals looking to make a first move into the social media metrics world.

Radian6

Radian6 is the most complete solution, offering measurement, analytics and reporting (common across most of these software suites) as well as integration with the salesforce.com CRM for tracking sales leads, along with other ‘conversation management’ options once opportunities for engagement are discovered.

Sentiment Metrics

Sentiment Metrics is mostly a white-label offering to be resold by agencies. It is not overly powerful, offering a basic set of semantics lumped loosely into positive and negative categories. It enables you to drill down into brand mentions, but falls short of letting you jump into conversation streams to engage people through social media. It is mostly a tracking tool.

Techrigy SM2

Techrigy SM2 is a more advanced solution than Sentiment Metrics, though still primarily a monitoring/tracking tool. It is probably best targeted at PR agencies looking to track streams of public sentiment relating to a brand or area of interest. It is more powerful in that it lets you specify search strings in addition to using Bayesian inference to analyse mentions of your product.
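For readers wondering what ‘Bayesian inference’ means in this context, sentiment classification of a mention reduces to something like this toy naive Bayes sketch (training data invented; real tools train on far larger labelled corpora):

```python
from collections import Counter
import math

# Invented training data; real tools use far larger labelled corpora
train = [("love this product", "pos"), ("great service", "pos"),
         ("terrible product", "neg"), ("awful experience", "neg")]

word_counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def classify(mention):
    """Naive Bayes with add-one smoothing and equal class priors."""
    vocab = set(word_counts["pos"]) | set(word_counts["neg"])
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(0.5)  # equal priors for this toy example
        for word in mention.lower().split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("this product is great"))  # pos
```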

Trendrr

Trendrr is a robust set of tracking tools with reasonable reporting options (for a fee), but it is primarily focused on visual comparison of trends in the social web over time. Useful for comparing your own brand against another over time, by social network or site.

Nielsen My Buzzmetrics

Nielsen offers its own ‘Buzzmetrics’ service (where they do the analysis for you and generate a report pack) along with a hosted service called My Buzzmetrics. My Buzzmetrics is really a monitoring tool combined with the additional segmentation data available through Nielsen. Useful for tracking campaigns amongst targeted consumer segments.

BuzzNumbers

Probably second to Radian6 in features, BuzzNumbers is more hands-on than most of the basic monitoring tools. It allows you to track individual conversations (for example, setting up flags for notification of new mentions in specific threads), but is limited in letting you engage directly from the interface. Decent reporting and analysis options.

Stay tuned: in the coming weeks I will be diving deeper into each of these solutions to give you a better feel for their capabilities and examine the likely best-case usage scenario for each software platform. I’ve also been made aware of an excellent list of measurement tools included in a presentation from Rachael Maughan at The White Agency, over at Katie Chatfield’s site here; it is well worth a look too.

Twitter is *now* and Facebook is *Last Weekend* (Why Facebook can’t win at real-time search)

In the Facebook/FriendFeed deal I’ve written about before, the R&D team from FriendFeed was touted as giving Facebook a leg up in the real-time search world against Twitter. And while FriendFeed has some great technology to integrate into Facebook, until we see a fundamental paradigm shift in how people interact with Facebook, the service will always lag behind Twitter in the timeliness of posts, and hence always be behind the eight-ball when it comes to being a true real-time search engine.

Why?

In an increasingly connected world, application support on mobile devices is critical to letting people participate in their social networks on the go. Mobile apps need to be lightweight, making it easy to update your status or participate in your ‘stream’ in the spare moments a person has on the bus, between meetings with clients and so on. Facebook has recently focused a lot on developing a new version for the iPhone, “Facebook 3.0”, which purports to be much easier to use and to offer a better-optimised user experience than the mobile Facebook site, which is “underdeveloped” to put it kindly.

Facebook 3.0 moves to address some of the shortcomings that have limited Facebook’s use on mobile platforms, but the Facebook ‘environment’ does not translate well from the desktop to the mobile screen. The sheer number of features, plug-ins and other services Facebook offers cannot easily be transferred to a mobile device, and this will continue to impede the adoption of Facebook by mobile users.

Twitter, on the other hand, is inherently optimised for the mobile space. Having been limited early in its evolution to messages of 140 characters (Twitter originally had no message size limit, but this changed soon after the service was born), it has a natural place on mobiles by default: essentially a multicast version of SMS. This is both a natural extension of behaviour users are already accustomed to and an ideal fit for the mobile form factor.

Sure, Twitter mobile applications have added many bells and whistles on top of the basic function of sending 140-character messages. Applications such as Tweetie on the iPhone and PockeTwit on Windows Mobile (the screenshots of PockeTwit don’t do it justice; it’s a great app) have added media service integration (uploading of photos, GPS integration with mapping services and so on), but the core competency of these applications is the short, succinct ‘tweet’ that drives their function.

Facebook cannot compete with this ‘limitation’ without neutering the experience desktop users have with the service. That is why, when it comes to what is happening *now*, Facebook can’t compete, and is limited to being the repository for the photos from the party you had on the weekend, when you finally upload them a week later.
