Archive

Posts Tagged ‘Competitive Intelligence’

Google versus Bing – a competitive intelligence case study

February 2, 2011 7 comments

Search experts regularly emphasise that to get the best search results it is important to use more than one search engine. The main reason for this is that each search engine uses a different relevancy ranking, leading to different search results pages. Using Google will give a results page with the sites that Google thinks are the most relevant for the search query, while using Bing is ‘supposed’ to give a results page where the top hits are based on a different relevancy ranking. This alternative ranking may give better results for some searches, and so a comprehensive search needs to use multiple search engines.

You may have noticed that I highlighted the word ‘supposed’ when mentioning Bing. This is because it appears that Bing is cheating, and is using some of Google’s results in its own search lists. Plagiarising Google’s results may be Bing’s way of saying that Google is better. However, it leaves a bad taste, as it undermines one of the main reasons for using Microsoft’s search engine: that the results are different, all generated independently using a different relevancy ranking.

Bing is Microsoft’s third attempt at a market-leading, Google-bashing search engine – replacing Live.com, which in turn had replaced MSN Search. Bing has been successful and is a genuinely good alternative to Google. It is the default search engine on Facebook (i.e. when doing a search on Facebook, you get Bing results) and also supplies results to other search utilities – most notably Yahoo! From a marketing perspective, however, it appears that the adage “differentiate or die” hasn’t been fully understood by Bing. Companies that fail to differentiate their product offerings from those of competitors are likely to fail.

The story that Bing was copying Google’s results dates back to summer 2010, when Google noticed an odd similarity between the two search engines’ results for a highly specialised search. This, in itself, wouldn’t be a problem – you’d expect similar results for very targeted search terms, with the main difference being the sort order. In this case, however, the same top results were being generated even when spelling mistakes were used as the search term. Google started to look more closely – and found that this wasn’t just a one-off. Proving that Bing was stealing Google’s results, however, needed more than just observation. To test the hypothesis, Google set up 100 dummy, nonsense queries that led to web-sites with no relationship at all to the query. They then gave their testers laptops with a fresh Windows install, running Microsoft’s Internet Explorer 8 with the Bing Toolbar installed. The install process enabled the “Suggested Sites” feature of Internet Explorer and the toolbar’s default options.

Within a few weeks, Bing started returning the fake results for the same queries. For example, a search for hiybbprqag gave the seating plan for a Los Angeles theatre, while delhipublicschool40 chdjob returned an Ohio credit union as the top result. This proved that the source for the results was not Bing’s own search algorithm: the results had been taken from Google.
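
The logic of such a honeypot test is simple. Here is a minimal sketch – my own reconstruction, not Google’s actual code. A nonsense query has no “right” answer, so a competitor independently arriving at the hand-planted page is wildly improbable; a match implies copying.

    # A sketch of the honeypot test (my reconstruction, not Google's code).
    # Plant results for nonsense queries on your own engine, then check
    # whether a competitor's top hits start matching the planted pages.

    PLANTED = {
        "hiybbprqag": "seating plan for a Los Angeles theatre",
        "delhipublicschool40 chdjob": "an Ohio credit union",
    }

    def copied_queries(competitor_top_hits: dict[str, str]) -> list[str]:
        """Return the planted queries whose top hit on the competitor
        matches our hand-planted result."""
        return [query for query, planted in PLANTED.items()
                if competitor_top_hits.get(query) == planted]

    # Simulated observation a few weeks after seeding (in reality the top
    # hits would be read off the competitor's results pages by the testers):
    bing_top_hits = {"hiybbprqag": "seating plan for a Los Angeles theatre"}
    print(copied_queries(bing_top_hits))  # ['hiybbprqag']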

What was happening was that the searches and search results on Google were being passed back to Microsoft – via some feature of Internet Explorer 8, Windows or the Bing Toolbar.

As Google states in their Blog article on the discovery (which is illustrated with screenshots of the findings):

At Google we strongly believe in innovation and are proud of our search quality. We’ve invested thousands of person-years into developing our search algorithms because we want our users to get the right answer every time they search, and that’s not easy. We look forward to competing with genuinely new search algorithms out there—algorithms built on core innovation, and not on recycled search results from a competitor. So to all the users out there looking for the most authentic, relevant search results, we encourage you to come directly to Google. And to those who have asked what we want out of all this, the answer is simple: we’d like for this practice to stop.

Interestingly, Bing doesn’t even try to deny the claim – perhaps because they realise that they were caught red-handed. Instead they have tried to justify collecting data from customers’ computers as a way of improving the search experience – even when the searching was being done via a competitor. In fact, Harry Shum, a Bing VP, believes that this is actually good practice, stating in Bing’s response to a blog post by Danny Sullivan that exposed the practice:

“We have been very clear. We use the customer data to help improve the search experience…. We all learn from our collective customers, and we all should.”

It is well known that companies collect data on customer usage of their own web-sites – that is one purpose of the cookies generated when visiting a site. It is less well known that some companies also collect data on what users do on other sites (which is why Yauba boasts about its privacy credentials). I’m sure that the majority of users of the Bing Toolbar and the other Internet Explorer and Windows features that seem to pass data back to Microsoft would be less happy if they knew how much data was collected, and from where. Microsoft has been collecting such data for several years, but ethically the practice is highly questionable, even though Microsoft users may have originally agreed to the company collecting data to “help improve the online experience”.

What the story also shows is how much care and pride Google take in their results – and how they have an effective competitive intelligence (and counter-intelligence) programme, actively comparing their results with competitors’. Microsoft even recognised this by falsely accusing Google of spying via the very sting operation that exposed Microsoft’s practices – with Shum commenting (my italics):

What we saw in today’s story was a spy-novelesque stunt to generate extreme outliers in tail query ranking. It was a creative tactic by a competitor, and we’ll take it as a back-handed compliment. But it doesn’t accurately portray how we use opt-in customer data as one of many inputs to help improve our user experience.

To me, this sounds like sour grapes. How can copying a competitor’s results improve the user experience? If the story doesn’t accurately portray how customer data IS used, maybe now would be the time for Microsoft to reassure customers regarding their data privacy. And rather than view Google’s exposure of Bing’s practices as a back-handed compliment, I’d see it as a slap in the face with the front of the hand. But then, what else could Microsoft and Bing say, other than mea culpa?

Update – Wednesday 2 February 2011:

The war of words between Google and Bing continues. Bing has now denied copying Google’s results and, moreover, accused Google of click fraud:

Google engaged in a “honeypot” attack to trick Bing. In simple terms, Google’s “experiment” was rigged to manipulate Bing search results through a type of attack also known as “click fraud.” That’s right, the same type of attack employed by spammers on the web to trick consumers and produce bogus search results.  What does all this cloak and dagger click fraud prove? Nothing anyone in the industry doesn’t already know. As we have said before and again in this post, we use click stream optionally provided by consumers in an anonymous fashion as one of 1,000 signals to try and determine whether a site might make sense to be in our index.

Bing seems to have ignored the fact that Google’s experiment resulted from their observation that certain genuine searches seemed to be copied by Bing – including misspellings, and also some mistakes in Google’s algorithm that produced odd results. The accusation of click fraud is bizarre, as the searches Google used in its test were completely artificial. There is no way that a normal searcher would have made such searches, and so the fact that the results bore no resemblance to the actual search terms is completely different to the spam practice where a dummy site appears for certain searches.

Bing can accuse Google of cloak-and-dagger behaviour. However, counter-intelligence sometimes requires such behaviour to catch miscreants red-handed. It’s a practice carried out by law enforcement globally where a crime is suspected but there is insufficient evidence to catch the culprit. As an Internet example, one technique used to catch paedophiles is for a police officer to pretend to be a vulnerable child in an Internet chat-room. Is this fraud – when the paedophile subsequently arranges to meet up and is caught? In some senses it is. However, saying such practices are wrong gives carte blanche to criminals to continue their illegal practices. Bing appears to be putting themselves in the same camp – by saying that using “honeypot” attacks is wrong.

They also have not recognised the points I’ve stressed about the ethical use of data. There is a big difference between using anonymous data to track user behaviour on your own search engine and tracking behaviour on a competitor’s. Using your competitor’s data to improve your own product, when the intelligence was gained by technology that effectively eavesdrops on your competitor’s customers, is espionage. The company guilty of spying is Bing – not Google. Google just used competitive intelligence to identify the problem, and a creative approach to counter-intelligence to prove it.

Gun smuggling, airline security and an intelligence failure.

January 25, 2011 2 comments

The headline article in the London Times for 25 January 2011 (print edition), Gunrunner Security Fiasco, reports how a security consultant named Steven Greenoe smuggled numerous weapons into the UK, which were subsequently sold to UK criminals and gangs. At least one gun is known to have been used in a drive-by shooting.

This story raises several issues – not least the problem of airport security and how to ensure passenger safety, both on the ground and in the air. The news appeared to break on the same day that a suicide bomber killed three dozen people in the arrivals hall of Moscow’s Domodedovo airport.

I’ve often felt that the current paranoia over airport security was “overkill” (pardon the word-use). When I first started flying it was an adventure, but since September 2001 it has become more and more unpleasant. The security checks – although necessary – are becoming increasingly intrusive, yet terrorists and criminals continually find new ways to get round them. Each time they are caught, new barriers are put in front of the innocent travelling public. The result is that the average traveller is now so nervous that it would be almost impossible to differentiate between the genuinely nervous innocent and the person exhibiting nervousness because they plan to blow up a plane.

Just as an example of how easy it would be to blow up a plane if you really wanted to, I did some quick research prior to writing this post. For a few hundred US dollars it is possible to purchase a few grams of a chemical and package it in a way that would not arouse suspicion if taken on a plane. With the addition of further chemicals available to all passengers on the plane, this could be turned into a bomb that would cause substantial damage. I’m not going to identify the chemicals, for obvious reasons, and, not having tested this, I can’t say whether such a bomb would be sufficient to blow a hole in the plane’s fuselage. However, videos of the two chemicals in combination are available on the Internet, and the reaction is always highly explosive, completely destroying the reaction container. (One described the reaction of just 2 grams of a similar, less reactive chemical as like letting off a hand grenade in a bathtub – and the resulting video confirmed this, as the bath was destroyed.)

The point is that if you want to kill and cause mayhem, it is possible. The job of security is to spot those people who are acting suspiciously, or where intelligence suggests that they may be up to no good. This is how El Al caught Nezar Hindawi when he persuaded his pregnant girlfriend to carry a bomb onto a plane for him. The girlfriend was innocent and knew nothing about the Semtex hidden inside her suitcase. It was only due to excellent intelligence, prior to her reaching check-in, that a massacre was stopped.

The problem today is that everybody is likely to act suspiciously due to nervousness – making the job of picking out the genuine criminal more difficult. I believe that this is the first problem with airline security. The second is the laxness of checks at some smaller airports. Both are examples of intelligence failures. The first adds “noise” to the security problem, and uses staff who just go through procedures rather than applying intelligence skills. The second is potentially worse, in that it fails to use intelligence at all and just hopes that because the airport is small or regional, the risk will be much lower. Of course, any potential terrorist can spot this from a long way off.

The US has long felt relatively safe, so long as the terrorist is kept out. As a result, checks on domestic flights are minimal or ineffective. This means that it is relatively easy to pack guns in luggage on a domestic flight – luggage that then gets transferred to an international flight. Part of the problem here is the US obsession with gun ownership as a right (with the right saying that guns don’t kill people – people kill people – while ignoring the fact that guns make it easier for people to kill people). As long as the gun is in checked luggage there is less of an incentive to stop the passenger – even if it is detected. In the case of Steven Greenoe, he was reportedly stopped on at least one occasion – but managed to justify himself and so was allowed to fly rather than being arrested. (I find it strange that in America, driving at 95mph or smoking cannabis – both generally less dangerous than owning and using a loaded gun – are more likely to result in a criminal record.)

The Times article mentioned that the gun smuggler concerned, Steven Greenoe, described himself as a security consultant. I did a brief search and up popped Greenoe’s LinkedIn page. Greenoe describes himself as the CEO of Jolie Rouge (which to me sounds a bit like the name given to the pirate flag – the Jolly Roger: surely not a coincidence). One part of Jolie Rouge’s business appears to be competitive intelligence – although the company doesn’t actually seem to use this term. Nevertheless, Jolie Rouge Consulting states:

JRC uses public and private sources to unearth information critical to accurately valuing business and financial transactions. JRC uses an established network of legal, political, business, and military thought leaders to rapidly compile up-to-date and difficult-to-acquire information. Our clients use JRC’s oral and written reports to validate and sharpen their investment strategies and long-term business planning.

When I first looked at Greenoe’s profile, he’d included the Business Strategy & Competitive Strategy forum within his LinkedIn profile. When I next looked, this had disappeared. I don’t know whether Greenoe dropped the group, or the group dropped him – scared of adverse publicity linking a gun runner to competitive strategy. Nevertheless, it highlights how important it is for the competitive intelligence community to police their own and ensure that anybody linked to the profession behaves ethically and morally. (This wouldn’t be the first time. There is a well-known and erudite CI consultant and author who, many years ago, got caught up in something similar, causing a scandal that is still remembered by long-time competitive intelligence professionals.) Gun-running – especially where the guns are then sold on illegally – is a lucrative business. (The guns cost $500 each but were reported to be selling at ten times that amount – meaning that the consignment he was arrested over would have netted him $360,000 profit for a little over $40,000 of expenditure.)
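
As a quick sanity check on those numbers (the consignment size of around 80 guns is my inference from the reported figures, not something stated in the Times article):

    # Rough check on the reported smuggling economics. The ~80-gun
    # consignment size is inferred from the figures, not stated in the report.
    cost_per_gun = 500                    # reported US purchase price ($)
    resale_per_gun = 10 * cost_per_gun    # reported UK resale: 10x = $5,000

    guns = 40_000 // cost_per_gun         # "a little over $40,000" -> ~80 guns
    outlay = guns * cost_per_gun          # $40,000
    revenue = guns * resale_per_gun       # $400,000
    print(revenue - outlay)               # $360,000 profit, matching the report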

However the really odd thing about this news story is the date. Although the reports reached the press today (January 2011), Greenoe was first stopped on May 3, 2010, and arrested in July 2010. I wonder why it has taken six months for this story to hit the headlines. It’s another example of how care needs to be taken when doing competitive intelligence analyses – as what may look like a new news story could actually be quite old.

Delicious humbug and monitoring News stories

December 20, 2010 Leave a comment

Effective competitive intelligence monitoring means keeping up with the news, and where news is likely to impact you, drawing up strategies to take into account changes.

The problem with instant news via twitter, blog posts and various other news feeds is that news updates sometimes happen too quickly, before the snow has even had a chance to settle. That’s fine – just so long as the source for the news is 100% reliable, and the news story itself is also totally accurate. (I’m using snow as a metaphor here – rather than the more normal dust – as outside there is around 15cm of the stuff with more promised during what looks like being the coldest winter in Europe for over 20 years).

Unfortunately, more often than not, one of these two aspects fails: the source may not be reliable, or the story may be untrue or only half-true. Typically, however, people pick up on the story and it spreads like wildfire (so not giving that snow a chance to settle before it gets melted all over the web).

An example of this has been taking place this last week – with numerous posts reporting the demise of the web-bookmarking service Delicious.

Delicious (originally located at http://del.icio.us) was founded in 2003 and acquired by Yahoo! in 2005. By 2008 (according to Wikipedia) it had over 5 million users and 180 million bookmarked URLs. This makes it an important source for web-searching as, unlike with a search engine such as Google or Bing, each URL will be human-validated and valued.

Apparently, during a strategy meeting held by Yahoo! looking at its products, Delicious was named as a “sunset” product.

[Image: slide from a Yahoo! strategy presentation, showing plans for various products]

An image of this slide was tweeted – and after Yahoo! failed to deny that Delicious was to be closed, posts quickly appeared denouncing the company for the decision. Nobody really cared that sites like AltaVista and AlltheWeb were going – they were, to all intents and purposes, dead anyway. (Their search features have long been submerged into Yahoo!’s own – although I, for one, still miss some of the advanced features these services offered. AlltheWeb allowed searching of Flash content, and AltaVista had a search option that nobody now offers: the ability to specify lower/upper-case searches.)

The problem is that many of the sources posting the story are normally extremely accurate and reliable, so when they post something, it is reasonable to believe what they say. This compounds the problem, as the news then gets spread even further – and when the story is corrected, the news followers often fail to spot the corrections.

The example of Delicious is not isolated. There are many news stories that develop over time – and when making strategy decisions based on news it is important to take into account changes, but also not rush in, if a news item hasn’t been fully confirmed.

Ideally, check the source – and if the source is a press item (or blog post), look to see whether there is a press release, or where the original item came from, in case there is bias, inaccuracy or misinterpretation. Only when the news has been confirmed (or where there are no contra-indications) should strategy implementation take place – although, of course, the planning stage should start immediately if the potential impact of the news is high.
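
To make the idea concrete, here is a sketch of what such a confirmation gate might look like if automated in a monitoring tool. The rule of thumb – a primary source, or at least two independent outlets, with no official denial – is my illustration, not a standard:

    # A minimal confirmation gate for news monitoring (illustrative rule only):
    # act on a story only if a primary source confirms it, or at least
    # `min_independent` distinct outlets do, with no primary-source denial.
    from dataclasses import dataclass

    @dataclass
    class Report:
        source: str        # outlet or blog name
        is_primary: bool   # True for a press release or official blog post
        confirms: bool     # does this report confirm the story?

    def story_confirmed(reports: list[Report], min_independent: int = 2) -> bool:
        if any(r.is_primary and not r.confirms for r in reports):
            return False                          # official denial: don't act
        if any(r.is_primary and r.confirms for r in reports):
            return True                           # official confirmation
        confirming = {r.source for r in reports if r.confirms}
        return len(confirming) >= min_independent

    # The Delicious story as it broke: one leaked slide, nothing official yet.
    print(story_confirmed([Report("leaked slide via Twitter", False, True)]))
    # False -> start planning, but hold off on implementation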

In the case of Delicious, their blog gives the real story. The slide leaked to Twitter was correct – Delicious is viewed as a “sunset” product. However, that doesn’t mean it will be closed down – Yahoo! states that they plan to sell the service rather than shut it down (although it is noticeable that they don’t promise to keep the service going if they fail to find a buyer).

There is, in fact, another lesson to be learned here, relating to companies’ awareness of the impact of industry blogs and Twitter. It is important not only to monitor what is said about your company, but also to anticipate what could be said. In a world where governments can’t stop secrets being leaked via Wikileaks, it would be surprising if high-impact announcements from companies didn’t also leak out to industry watchers. Some companies constantly face leaks – Apple is notorious in this regard – and part of their strategy involves managing potential leaks before they do harm. In this, Yahoo! failed. For a media company that depends on the web for its business, this is a further example suggesting that Yahoo! has lost its way – even if the leaked slide itself shows that the company is at least thinking about its future and its product/service portfolio.

9% of 11-year-old boys can’t read! So what?

December 17, 2010 1 comment

You can tell that news is sparse on the ground – unlike the snow. The newspapers have already done blanket coverage of the snow and how the UK again skidded to a halt, so they can’t do that one again. Instead, the press is trumpeting on about how terrible it is that 9% of boys can’t read properly when they leave primary school.

Apparently BBC Radio 4 asked the Department for Education for the number of children who failed to get beyond level 2 in reading – the standard expected of seven-year-olds – and found that around 18,000 boys aged 11 had a reading age of seven or less. This was in contrast to other statistics showing a steady rise in standards, with the proportion of children achieving the expected minimum, level 4, having gone up from 49% to 81% over the last 15 years.

Seemingly even worse, in some areas – for example Nottingham – 15% of boys failed to get past level 2 in reading.

The problem with all this isn’t the statistic but the lack of context. When reporting information (whether for competitive intelligence, general business or marketing research, or anything else) it is essential to include the context: a figure on its own is meaningless. In fact, those figures for Nottingham could be brilliant – if, five years ago, 30% of boys had failed to get past level 2, it would mean that the number of children failing had halved. Conversely, if the number had gone up from 5%, it would be a massive indictment of a teaching profession that was failing to motivate and educate its pupils.

In fact, the original story from the BBC does give some context.

In 1995, the proportion of 11-year-olds getting Level 2 or below in English – the standard expected of a seven-year-old – was 7%. In 2010, it had fallen only to 5%.

The figures show the problem is worse for boys. Overall in England, 9% of them – about 18,000 – achieved a maximum of level 2 in reading.

This shows that performance has in fact improved overall, with underachievers falling from 7% of all children in 1995 to 5% now. However, without a longer-term trend it is impossible to put much value on the statistics – especially as other research reported by the BBC, looking at seven-year-olds, showed that children with special educational needs, and those from deprived homes (meaning that they were entitled to free school meals), were the worst performers. A third (33.6%) of seven-year-olds on free school meals failed to reach the requisite level 2 in writing, and 29.3% failed to reach it for reading. In contrast, the children who did not receive free school meals did much better: only 12.1% failed to reach the required level for reading, and 15.5% for writing.

I’m actually surprised that some mathematically challenged journalist hasn’t picked up on these figures and claimed that providing free school meals results in children under-performing at school. In reality, all the figures show is that such children have barriers to learning that schools have to try to overcome. This may be because the children are under-stimulated at home (and so start at a lower level than their peers), come from homes where English is not spoken by the parents, or are of lower intelligence overall. (In fact, intelligence tends to fall on a normal curve. If 10% of children outperform – with a reading age three years ahead of the norm – you can expect a further 10% to have a reading age three years behind the norm.)
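
That symmetry claim is easy to sanity-check on a normal curve. The sketch below uses hypothetical mean and standard-deviation values purely for illustration; they are not taken from the BBC data:

    # Symmetry of a normal curve: the share of children k years ahead of the
    # mean reading age equals the share k years behind it. The mean and
    # standard deviation here are illustrative, not from the BBC figures.
    from scipy.stats import norm

    mean_reading_age, sd = 11.0, 1.8   # hypothetical values for 11-year-olds
    k = 3.0                            # "three years ahead/behind the norm"

    ahead = 1 - norm.cdf(mean_reading_age + k, mean_reading_age, sd)
    behind = norm.cdf(mean_reading_age - k, mean_reading_age, sd)
    print(f"{ahead:.1%} ahead, {behind:.1%} behind")  # equal, by symmetry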

The lesson from such statistics and reporting is simple: before publishing statistics in the press or in a business report, provide the context.

This context can be temporal – looking at how figures change over time. In the case of the school statistics, they appear to have improved over the years for both the low and average achievers – a testament to the teaching profession. Context can also be seen when comparisons are made – as in the comparison between children on free school meals versus those not entitled to this benefit.

Strategic decisions based on figures should only be made when context is included. Without it, the figures mean nothing, and should be left to melt away, like snow.
