Archive

Posts Tagged ‘search engines’

Zanran – a new data search engine

April 21, 2011 4 comments

I’ve been playing with a new data search engine called Zanran – that focuses on finding numerical and graphical data. The site is in an early beta. Nevertheless my initial tests brought up material that would only have been found using an advanced search on Google – if you were lucky. As such, Zanran promises to be a great addition for advanced data searching.

Zanran.com - Front Page

Zanran focuses on finding what it calls ‘semi-structured’ data on the web. This is defined as numerical data presented as graphs, tables and charts – which could be held in a graph image or table in an HTML file, as part of a PDF report, or in an Excel spreadsheet. This is the key differentiator – essentially, Zanran is not looking for text but for formatted numerical data.

When I first started looking at the site I was expecting something similar to Wolfram Alpha – or perhaps something from Google (e.g. Google Squared or Google Public Data). Zanran is nothing like these – and so brings something new to search. Rather than take data and structure or tabulate it (as with Wolfram Alpha and Google Squared), Zanran searches for data that is already in tables or charts and uses this in its results listing.

Zanran.com Search: "Average Marriage Age"

The site has a nice touch in that hovering the cursor over results gives you the relevant data page – whether a table, a chart or a mix of text, tables or charts.

Zanran.com - Hovering over a result brings up an image of the data.

The advanced search options allow country searching (based on server location), document date and file type, each selectable from a drop-down box, as well as searches on specified web-sites. At the moment only English-speaking countries can be selected (Australia, Canada, Ireland, India, the UK, New Zealand, the USA and South Africa). The date selections allow for the last 6, 12 or 24 months, and the file type allows for selection based on PDF; Excel; images in HTML files; tables in HTML files; PDF, Excel and dynamic data; and dynamic data alone. PowerPoint and Word files are promised as future options. There are currently no field search options (e.g. title searches).

My main dislike was that the site doesn’t give the full URLs for the data presented. The top-level domain is given, but not the actual URL, which makes the site difficult to use when full attribution is required for any data found (especially if data gets downloaded, rather than opening in a new page or tab).

Zanran.com has been in development since at least 2009, when it was a finalist in the London Technology Fund Competition. The technology behind Zanran is patented and builds on open-source software and cloud storage. Rather than searching for text, Zanran searches for numerical content, and then classifies it by whether it is a table or a chart.

Atypically, Zanran is not a Californian Silicon Valley startup, but is based in the Islington area of London, in a quiet residential side-street made up of a mixture of small, mostly home-based businesses and flats/apartments. Zanran was founded by two chemists, Jonathan Goldhill and Yves Dassas, who had previously run telecom businesses (High Track Communications Ltd and Bikebug Radio Technologies) from the same address. Funding has come from the London Development Agency and First Capital, among other investors.

Zanran views its competitors as Wolfram Alpha, Google Public Data and also Infochimps (a database repository – enabling users to search for and download a wide variety of databases). The competitor list comes from Google’s cache of Zanran’s Wikipedia page as, unfortunately, Wikipedia has deleted the actual page – claiming that the site is “too new to know if it will or will not ever be notable”.

Google Cache of Zanran's Wikipedia entry

I hope that Wikipedia is wrong and that Zanran will become “notable” as I think the company offers a new approach to searching the web for data. It will never replace Google or Bing – but that’s not its aim. Zanran aims to be a niche tool that will probably only ever be used by search experts. However as such, it deserves a chance, and if its revenue model (I’m assuming that there is one) works, it deserves success.

Google versus Bing – a competitive intelligence case study

February 2, 2011 7 comments

Search experts regularly emphasise that to get the best search results it is important to use more than one search engine. The main reason for this is that each search engine uses a different relevancy ranking leading to different search results pages. Using Google will give a results page with the sites that Google thinks are the most relevant for the search query, while using Bing is supposed to give a results page where the top hits are based on a different relevancy ranking. This alternative may give better results for some searches and so a comprehensive search needs to use multiple search engines.

You may have noticed that I highlighted the word supposed when mentioning Bing. This is because it appears that Bing is cheating, and is using some of Google’s results in its search listings. Plagiarising Google’s results may be Bing’s way of saying that Google is better. However, it leaves a bad taste, as it means that one of the main reasons for using Microsoft’s search engine – that the results are different and are all generated independently, using a different relevancy ranking – can be questioned.

Bing is Microsoft’s third attempt at a market-leading, Google-bashing search engine – replacing Live.com, which in turn had replaced MSN Search. Bing has been successful and is truly a good alternative to Google. It is the default search engine on Facebook (i.e. when doing a search on Facebook, you get Bing results) and also supplies results to other search utilities – most notably Yahoo! From a marketing perspective, however, it appears that the adage “differentiate or die” hasn’t been fully understood by Bing. Companies that fail to fully differentiate their product offerings from competitors are likely to fail.

The story that Bing was copying Google’s results dates back to Summer 2010, when Google noticed an odd similarity between the two search engines’ results for a highly specialist search. This, in itself, wouldn’t be a problem. You’d expect similar results for very targeted search terms – the main difference will be the sort order. However, in this case the same top results were being generated even when spelling mistakes were used as the search term. Google started to look more closely – and found that this wasn’t just a one-off. Proving that Bing was taking Google’s results, however, needed more than just observation. To test the hypothesis, Google set up 100 dummy and nonsense queries that led to web-sites with no relationship at all to the query. They then gave their testers laptops with a fresh Windows install – running Microsoft’s Internet Explorer 8 with the Bing Toolbar installed. The install process enabled the “Suggested Sites” feature of Internet Explorer and the toolbar’s default options.

Within a few weeks, Bing started returning the fake results for the same searches. For example, a search for hiybbprqag gave the seating plan for a Los Angeles theatre, while delhipublicschool40 chdjob returned an Ohio credit union as the top result. This proved that the source of the results was not Bing’s own search algorithm: the results had been taken from Google.
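The logic of Google’s test can be sketched in outline. The code below is a hypothetical reconstruction, not Google’s actual method (all names and data are invented for illustration): it generates nonsense query strings, pairs each with a “planted” result page, and then measures how often a second engine returns that planted page – for truly random strings, any overlap at all is a red flag.

```python
import random
import string

def nonsense_query(length=10, seed=None):
    """Generate a meaningless query string, in the spirit of 'hiybbprqag'."""
    rng = random.Random(seed)
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def planted_result_overlap(planted, engine_results):
    """Fraction of nonsense queries for which the engine's top result
    is exactly the page that was planted for that query."""
    matches = sum(
        1 for query, page in planted.items()
        if engine_results.get(query, [None])[0] == page
    )
    return matches / len(planted)

# Hypothetical data: 100 nonsense queries, each manually wired to one page.
queries = [nonsense_query(seed=i) for i in range(100)]
planted = {q: "planted-page.example.com" for q in queries}

# Simulated competitor results: 7 of the 100 planted pages leak through
# (roughly the order of magnitude Google reported in its own test).
competitor = {q: ["planted-page.example.com" if i < 7 else "unrelated.example.org"]
              for i, q in enumerate(queries)}

print(planted_result_overlap(planted, competitor))  # 0.07
```

For genuinely random ten-letter strings, the chance of two independent engines agreeing on the same irrelevant top page is essentially zero, which is what made even a handful of matches conclusive.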

What was happening was that the searches and search results on Google were being passed back to Microsoft – via some feature of Internet Explorer 8, Windows or the Bing Toolbar.

As Google states in their Blog article on the discovery (which is illustrated with screenshots of the findings):

At Google we strongly believe in innovation and are proud of our search quality. We’ve invested thousands of person-years into developing our search algorithms because we want our users to get the right answer every time they search, and that’s not easy. We look forward to competing with genuinely new search algorithms out there—algorithms built on core innovation, and not on recycled search results from a competitor. So to all the users out there looking for the most authentic, relevant search results, we encourage you to come directly to Google. And to those who have asked what we want out of all this, the answer is simple: we’d like for this practice to stop.

Interestingly, Bing doesn’t even try to deny the claim – perhaps because it realises it was caught red-handed. Instead it has tried to justify using the data on customers’ computers as a way of improving the search experience – even when the searching was being done via a competitor. In fact, Harry Shum, a Bing VP, believes that this is actually good practice, stating in Bing’s response to the blog post by Danny Sullivan that exposed the practice:

“We have been very clear. We use the customer data to help improve the search experience…. We all learn from our collective customers, and we all should.”

It is well known that companies collect data on customer usage of their own web-sites – that is one purpose of the cookies generated when visiting a site. It is less well known that some companies also collect data on what users do on other sites (which is why Yauba boasts about its privacy credentials). I’m sure that the majority of users of the Bing Toolbar and the other Internet Explorer and Windows features that seem to pass data back to Microsoft would be less happy if they knew how much data was collected, and where from. Microsoft has been collecting such data for several years, but ethically the practice is highly questionable, even though users may originally have agreed to the company collecting data to “help improve the online experience”.
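Mechanically, this kind of clickstream collection is simple. The sketch below is a hypothetical illustration (not Microsoft’s actual code; the URLs and function name are invented): a browser add-on that sees every URL you visit can turn a competitor’s results page into a ready-made (query, clicked result) relevance signal.

```python
from urllib.parse import urlparse, parse_qs

def relevance_signal(visited_url, clicked_url):
    """If the visited page looks like a search results page, extract the
    query string; the (query, clicked page) pair can then feed a ranker."""
    parsed = urlparse(visited_url)
    if parsed.path == "/search":
        query = parse_qs(parsed.query).get("q", [None])[0]
        if query:
            return {"query": query, "clicked": clicked_url}
    return None  # not a recognisable results page

signal = relevance_signal(
    "https://www.google.com/search?q=hiybbprqag",
    "https://theatre.example.com/seating-plan",
)
print(signal)  # {'query': 'hiybbprqag', 'clicked': 'https://theatre.example.com/seating-plan'}
```

The point is that nothing clever is needed on the collecting side: once the toolbar reports visited URLs, a competitor’s queries and clicked results arrive pre-labelled.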

What the story also shows is how much care and pride Google takes in its results – and how it runs an effective competitive intelligence (and counter-intelligence) programme, actively comparing its results with competitors’. Microsoft even recognised this by falsely accusing Google of spying via the sting operation that exposed Microsoft’s practices – with Shum commenting (my italics):

What we saw in today’s story was a spy-novelesque stunt to generate extreme outliers in tail query ranking. It was a creative tactic by a competitor, and we’ll take it as a back-handed compliment. But it doesn’t accurately portray how we use opt-in customer data as one of many inputs to help improve our user experience.

To me, this sounds like sour grapes. How can copying a competitor’s results improve the user experience? If the story doesn’t accurately portray how customer data is used, maybe now would be the time for Microsoft to reassure customers about their data privacy. And rather than viewing Google’s exposure of Bing’s practices as a back-handed compliment, I’d see it as a slap in the face with the front of the hand. However, what else could Microsoft and Bing say, other than mea culpa?

Update – Wednesday 2 February 2011:

The war of words between Google and Bing continues. Bing has now denied copying Google’s results, and moreover accused Google of click-fraud:

Google engaged in a “honeypot” attack to trick Bing. In simple terms, Google’s “experiment” was rigged to manipulate Bing search results through a type of attack also known as “click fraud.” That’s right, the same type of attack employed by spammers on the web to trick consumers and produce bogus search results.  What does all this cloak and dagger click fraud prove? Nothing anyone in the industry doesn’t already know. As we have said before and again in this post, we use click stream optionally provided by consumers in an anonymous fashion as one of 1,000 signals to try and determine whether a site might make sense to be in our index.

Bing seems to have ignored the fact that Google’s experiment resulted from its observation that certain genuine searches seemed to be copied by Bing – including misspellings, and even mistakes in Google’s own algorithm that produced odd results. The accusation of click fraud is bizarre, as the searches Google used in its test were completely artificial. There is no way that a normal searcher would have made such searches, and so the fact that the results bore no resemblance to the search terms is completely different from the spam practice where a dummy site appears for certain searches.

Bing can accuse Google of cloak-and-dagger behaviour. However, counter-intelligence sometimes requires such behaviour to catch miscreants red-handed. It’s a practice carried out by law enforcement globally where a crime is suspected but there is insufficient evidence to catch the culprit. As an Internet example, one technique used to catch paedophiles is for a police officer to pretend to be a vulnerable child in an Internet chat-room. Is this fraud – when the paedophile subsequently arranges to meet up and is caught? In some senses it is. However, saying such practices are wrong gives carte blanche to criminals to continue their illegal activities. By claiming that “honeypot” attacks are wrong, Bing appears to be putting itself in the same camp.

They have also not recognised the points I’ve stressed about the ethical use of data. There is a big difference between using anonymous data to track user behaviour on your own search engine and tracking it on a competitor’s. Using your competitor’s data to improve your own product, when the intelligence was gained by technology that effectively taps into your competitor’s customers’ usage, is espionage. The company guilty of spying is Bing – not Google. Google just used competitive intelligence to identify the problem, and a creative approach to counter-intelligence to prove it.

RIP Kartoo

March 14, 2010 Leave a comment

When I conduct training sessions on how to search, I always emphasise that it’s more important to know how to find information than to depend on a small selection of key web-sites.

Many searchers depend on their bookmark list – but what happens when a key site disappears? If you don’t know how to search, you are stuck.

Searching isn’t just going to Google and typing your query in the search box. Expert searching demands that you consider where the information you are looking for is likely to be held, and in what format. It requires the searcher to understand the search tools they use – how they work, and their strengths and weaknesses. Such skills are crucial when key sites disappear, as happened in January with the small French meta-search engine, Kartoo.

Kartoo was innovative and presented results graphically. It enabled you to see links between terms and was brilliant for concept searching where you didn’t really know where to start. Unfortunately it’s now gone to cyber-heaven, or wherever dead web-sites disappear to. It will be missed – at least until something similar appears. Already Google’s Wonder Wheel (found from the “options” link just above the search results) offers some of the functionality and graphic feel, and there are other sites that offer similar capabilities (e.g. Touchgraph). Kartoo, however, was special – it was simple, free and showed that Europeans can still come up with good search ideas.

Example of a Kartoo Search

Of course, Kartoo isn’t the first innovative site to disappear. Over the years, many great search tools have gone. Greg Notess lists some in his SearchEngineShowdown blog – and in an article in Online magazine. There are more. How many people remember IBM’s InfoMarket service – an early online news aggregator from 1995 – or Transium?

In fact, it was learning that sites are mortal that led to my approach to searching: don’t depend on a limited selection of sites but rather know how to find sites and databases that lead you to the information wanted. That’s a key skill for all researchers and is as valid today in the Google generation as it was in the days before Google.

Yauba – Big Brother isn’t watching you

June 17, 2009 Leave a comment

Sixty years after George Orwell published 1984, many of its ideas have, unfortunately, become commonplace. There are speed cameras watching how fast you drive, and CCTV monitoring many UK towns. On the Internet, search engines such as Google monitor your searches – keeping the data for months. They know what operating system you use. (AWARE doesn’t record this information, despite showing some of it in our top bar, but many sites, and most search engines, do.)

Yauba bucks the trend by proudly announcing that it respects user privacy. Its privacy policy states:

We do not keep any personally identifiable information.
Period.

Following the Iranian elections (June 2009), many Iranian dissidents and protesters have switched to Yauba, according to the search engine blog, Pandia:

Ahmed Hossain, CIO of Yauba, tells Pandia: “Our traffic from Iran has jumped 300% over the past several days, as many of them are using the Yauba Search Engine and the anonymity proxy filter to access blocked sites and get news from foreign sources.”

Anonymity may be important for some people. However, for most it’s search results that count. Yauba claims to be able to search semantically – differentiating between Java the island, Java the coffee and Java the computer language – but is this a meaningless boast?

In other words, is Yauba worth using for those not looking to hide their identity?

The short answer is yes. Yauba searches various types of content – which are kept separate. As such, it enables you to quickly find Acrobat files, Word documents, PowerPoint presentations, news, blogs, images, video, etc. in a single search. Each is kept distinct – an interesting differentiator between it and other search engines. It also presents ways of refining queries, and where there are alternative meanings it shows these – allowing users to pick the one they want.

Rather than use the search they suggest (i.e. Java), I put in Apple. The three meanings I thought of were:

  1. The fruit
  2. The computer company
  3. The music company founded by the Beatles

In fact, there are several more – as Yauba shows:

apple can mean:
  • Apple Inc. (formerly Apple Computer, Inc.), a consumer electronics and software company
  • Apple Bank, an American bank in the New York City area
  • Apple Corps, a multimedia corporation founded by The Beatles
  • Apple (album), an album by Mother Love Bone
  • Apple (band), a British psychedelic rock band
  • Apple Records, record label founded by The Beatles
  • Apple I, Apple II series, Apple III, etc., various personal computer models produced by Apple, Inc and sold from 1976 until 1992.
  • Ariane Passenger Payload Experiment, an Indian experimental communication satellite launched in 1981
  • Apple (automobile), an American automobile manufactured by Apple Automobile Company from 1917 to 1918
  • Billy Apple, artist
  • Fiona Apple, a Grammy award winning American singer-songwriter
  • R. W. Apple, Jr., an associate editor at The New York Times
Clicking on Apple (automobile) gives a number of results – not all directly relevant, but some were. There is also a brief encyclopedia-type entry at the top of the page:

The Apple was a short-lived American automobile manufactured by Apple Automobile Company in Dayton, Ohio from 1917 to 1918. Agents were assured that its $1150 Apple 8 model was “a car which you can sell!”. Sadly for the company, it would seem that the public did not buy.

On the right of the screen are various suggestions for alternative searches. For example, a search for apple gives:

Compare the clarity of this to the same search on Google. (Admittedly the search is not sophisticated, and a competent searcher would refine the term – but for testing, it’s good enough.)

It means that amateur searchers are more likely to find results for complex searches – fulfilling Yauba’s claim to allow people to search without a knowledge of Boolean logic.

Also interesting is that a component of each search includes a real-time element – from Twitter, and social news from Digg. The real-time search element is useful as it provides another alternative to Scoopler.

Sponsored ads appear to come from the Google network. There are also options to filter searches (although there is currently no information on what is being filtered) and a Lite version, which seems to remove the refinement options and the top-level definitions (i.e. making it more Google-like in its results presentation).

There is also an option to refine searches – alongside the search box.

Selection of one of the options allows further search refinement, either by keyword

or domain.

Overall, I like Yauba. The interface is clean (and the black background makes a change from competitors).

Currently the site says it’s only an early Beta / late Alpha preview release, so more work and changes can be expected. Hopefully these will include Help files explaining what the Lite search is supposed to do, what a Filtered search actually filters, and what syntax is acceptable for refining searches. Does Boolean searching actually work, for example? On my brief tests it seemed to – as did phrase searching, i.e. putting search terms in quotes. What about other options – could any of the advanced search options from Exalead be included? And will the site cover more countries than the current small number (Italy, France, the UK, India, Brazil, Russia and the .com site)? Yauba promises to cover more countries – I’m just surprised that there is no Chinese or German version, as I would have expected these before the Italian one. I guess the Yauba team has Italian speakers but currently no Chinese speakers.

But it’s not google – Bing goes Live!

June 2, 2009 Leave a comment

Another long wait between entries – I really must update more often. However, recent events in the Search world and in the CI world mean I have no choice but to update. My thoughts on recent changes at SCIP will have to wait till my next post. This post will look at Microsoft’s replacement for Live and MSN Search – its new Bing search engine.

Searches at Live or MSN Search now redirect to Bing.com. I like the front-end – it’s clean and colourful. However, I couldn’t find anywhere to change the front image – at least on the UK version, which is still in Beta.

The US version does allow you to scroll back to previous images – with a little arrow option at the bottom right of the screen.

The US version also includes hot-spots describing aspects of the picture, plus a side-bar offering more search options.

At the bottom of both versions is a link for help – interestingly, still pointing to Live.com. Obviously Microsoft still has more work to do here. The help section gives the format for advanced commands and also allows you to remove the screen background.

So how does Bing perform? For the searches I tried, the results are good – and there isn’t that much to choose between Google and Bing. One difference I did notice is that URLs containing the search terms seem to come higher than other sites – so, for example, AWARE’s web-site came to the top for a search on “marketing-intelligence”. Also relevant is that the algorithm is sufficiently intelligent to realise that “CompetitorAnalysis.com” is a likely candidate for searches on “Competitor Analysis”. I’m not sure the same precision exists in Google. Another odd feature is that some titles seem to be edited. For example, some searches on my web-site content bring up a title that doesn’t exist on our web-site, so it has been taken from somewhere else – most likely from a link on a UK government business support web-site.

Where Bing falls down is in advanced searching and preferences. I like that you can set Google to display 100 hits at a time; Bing only allows 50. Bing also lacks some of the field / advanced search options available in Google. There are no wild-card searches (using the * character) or synonym searches (the ~ character), for example. However, there are options not currently available in Google – such as the feed:, hasfeed:, loc: and contains: operators. These allow searching for pages that are RSS feeds or that link to one (feed: and hasfeed:), searching by location (loc:), and searching for sites containing links to types of content such as WMA or MPG files (contains:). These options are not available in the advanced search boxes.
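As an illustration of the operators just listed – reconstructed from memory of Bing’s help pages at the time, so treat the exact syntax as approximate – they combined with ordinary keywords like this:

```
feed:"competitive intelligence"      pages that are RSS feeds matching the phrase
hasfeed:"competitive intelligence"   pages that link to a matching RSS feed
loc:GB "competitor analysis"         restrict results to UK-located sites
contains:wma jazz                    pages about jazz that link to WMA audio files
```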

All in all, I like Bing and prefer its interface to Live. I like colourful pages, and have customised my Google page with iGoogle themes, and Ask with its skins. Yet again, however, this is not a Google Killer – and perhaps it’s not trying to be. The key thing: Bing is not google!

A number of other reviews on Bing worth reading:

Mixed reviews of Bing, Microsoft’s new search engine – the Daily Telegraph

Bing Don’t Bother – Karen Blakeman’s review

Bing Launches – it’s awful – Phil Bradley’s review

Bing Bing: Microsoft’s search engine unexpectedly live, but not Live – the Guardian


Cuil – not going to cull Google!

July 29, 2008 Leave a comment

For a change, I thought I’d give my opinions on a new search engine that’s being touted around.

Cuil is a new search engine that claims to have the biggest search index and to give better results than Google, owing to a methodology that looks at word context rather than page links.

There are already lots of comments on Cuil – for example, Webware’s “New Search Engine Cuil takes aim at Google” or Karen Blakeman’s “Cuil – not so cool”.

I too played with Cuil – for around 5 minutes, before I realised that this is very much a “what you see is what you get” effort – and I didn’t see very much.

One of the first things I do when I use a search engine is change my preferences – to get 100 hits per page. I find it a much more efficient way of looking through pages of results – and the time needed to look at 100 on a single page versus 10 isn’t that much more. So I headed to Cuil’s preferences page – and found that there was almost nothing to change. So you’re stuck with a page of ten descriptions – and if they aren’t right, you’re forced to try the next page or a new search. Not clever! Then what about modifying my search – for specific types of content: title search, filetype search? Nada!

My top test keywords (generally “competitive intelligence” and various permutations of this) came up with the expected sites – but nothing new, and not even all I’d expect – plus irritating logos attached to each entry that seemed to be taken from images deemed relevant.

My main complaint supports a comment on the Webware blog: “Didn’t we stop the pissing contest over number of pages searched about 10 years ago?”. I concur totally. So what if Cuil has 120 billion pages? It’s not size that counts – it’s what you do with what you’ve got. (I’m sure I’ve heard that somewhere before, in a different context 😉.) That’s why Exalead is so useful – it’s so easy to customise and refine searches. That’s why Google is top dog – its interface is so simple and the results tend to be accurate. That’s why Ask works – it gives good results, with options to refine, and it highlights news, images and encyclopaedia entries all together, making search seem simple.

Finally, their purported killer feature – relating search to the words on the page and their context. Isn’t that similar or identical to the method Ask (or its predecessor Teoma) uses, or have I missed something? (Or perhaps it only refers to the actual page rather than related pages, which is what Ask does – if so, it’s also 10 years out of date, as relating content only to the actual page rather than to linked pages was killed off by Google’s linkage innovation.)

So – not impressed. I still think that there’s scope for a Google Killer out there, but Cuil ain’t that Killer!

Know your information sources

December 8, 2005 Leave a comment

I’m not sure what the weather is like outside London. Summer here could have been better – but also a lot worse. I know that there are heatwaves in Southern Europe, and I’m sure that even though hurricanes hardly hever happen in Spain (where there currently is not much rain on the plain), this is not the case everywhere. And as Discovery found out, you can’t predict when bad weather will cause you to change your landing slot!

It is important to know information sources. When I do competitive intelligence training, one workshop exercise I take people through shows how many different information sources there actually are. I do this by compiling an A to Z of information sources – with one rule: they must all be different types. (So you can’t have Search Engine and then Yahoo! as options for S and Y, as Yahoo! is a search engine and so is already included in that category.) Try it – it is not that difficult. I have 4 items for K and 2 each for Q, X and Z. Most other letters are spoilt for choice.

However, it is not good enough just to know your information sources. You should also know how accurate they are – and ideally how each source gathers its information. Only then can you guard against disinformation – or against using secondary information that you think is a primary source.

There is a story:

    A film crew was on location deep in the desert. One day an elderly Native American went up to the director and said, “Tomorrow rain.” The next day it rained. A week later, the Native American went up to the director and said, “Tomorrow storm.”

    The next day there was a hailstorm. “This Indian is incredible,” said the director. He told his secretary to hire the man to predict the weather for the remainder of the shoot. However, after several successful predictions, the old Native American didn’t show up for two weeks.

    Finally the director sent for him. “I have to shoot a big scene tomorrow,” said the director, “and I’m depending on you. What will the weather be like?”

    The Native American shrugged his shoulders. “Don’t know,” he said. “My radio is broken.”

So before using a source, make sure you fully understand it – and any drawbacks or weaknesses associated with it.

The above story illustrates the need to know your source’s source. There is a variation on the story illustrating the same lesson, but also showing how important it is to be objective. Some sources are not totally objective, and so the information provided can actually be false, or disinformation.

    In autumn, the Native Americans asked their Chief if the coming winter was going to be cold or not. Not really knowing the answer, the Chief replied that the winter was going to be cold, and that the members of the village were to collect wood to be prepared.

    Being a good leader, he then went to the next phone booth, called the National Weather Service and asked, “Is this winter going to be cold?”

    The man on the phone responded, “This winter is going to be quite cold indeed.”

    So the Chief went back to urge his people to collect even more wood to be prepared. A week later he called the National Weather Service again: “Is it going to be a very cold winter?”

    “Yes,” the man replied, “it’s going to be a very cold winter.”

    So the Chief went back to his people and ordered them to go and find every scrap of wood they could. Two weeks later he called the National Weather Service again: “Are you absolutely sure that the winter is going to be very cold?”

    “Absolutely,” the man replied. “The Native Americans are collecting wood like crazy!”

As a real example of disinformation, there is the well-known Dihydrogen Monoxide Research Division web-site.

Another interesting example is the satellite map shown for 1 Infinite Loop, Cupertino, CA 95014 on Google Maps compared to that on Microsoft’s Virtual Earth.

Of course, it could be that Microsoft is using a satellite image from around 30 or so years ago (despite the 2004 copyright notice from NAVTEQ at the bottom). More likely, Microsoft just wants to air-brush a major competitor – Apple Computer – out of history.

As somebody who has just taken delivery of a brand new Apple iBook, I can fully understand why Microsoft would like to do this. However, just renaming your future operating system from a breed of cow (Longhorn) to a long-term view (Vista) does not make it a Tiger! :-).

From a competitive intelligence information source point of view, the above two maps show how easy it is to:

a) misinform
b) blind yourself to the real picture
c) trust a respected information source – such as Microsoft’s mapping software – that may not be totally accurate.
