I too played with Cuil – for around 5 minutes before I realised that this is very much a “what you see is what you get” effort – and I didn’t see very much.
One of the first things I do when I use a search engine is change my preferences – to get 100 hits per page. I find it a much more efficient way of looking through results – and the time to scan 100 on a single page versus 10 isn’t that much more. So I headed to Cuil’s preferences page – and found that there was almost nothing to change. So you’re stuck with a page of descriptions – and if they aren’t right, you’re forced to try the next page or a new search. Not clever! Then what about modifying my search for specific types of content – title search, filetype search? Nada!
My top test keywords (generally “competitive intelligence” and various permutations of this) came up with the expected sites – but nothing new, and not even all I’d expect – plus irritating logos attached to each entry, apparently lifted from vaguely relevant images.
My main complaint supports a comment on the Webware blog: “Didn’t we stop the pissing contest over number of pages searched about 10 years ago?” I concur totally. So what if Cuil has 120 billion pages. It’s not size that counts – it’s what you do with what you’ve got. (I’m sure I’ve heard that somewhere before, in a different context.) That’s why Exalead is so useful – it’s so easy to customise and refine searches. That’s why Google is top dog – its interface is so simple and the results tend to be accurate. That’s why Ask works – it gives good results, with options to refine, and it highlights news, images and encyclopaedia entries all together, making search seem simple.
Finally, their purported killer feature – relating search to the words on the page and their context. Isn’t that similar or identical to the method Ask (or its predecessor Teoma) uses, or have I missed something? (Or perhaps it only refers to the actual page rather than related pages, which is what Ask does – if so, it’s also 10 years out of date, as relating content only to the page itself rather than to linked pages was killed off by Google’s linkage innovation.)
So – not impressed. I still think that there’s scope for a Google Killer out there, but Cuil ain’t that Killer!
Competitor information becomes public for a number of reasons, but these can be summarised into three categories:
- Intentional dissemination of information about the company by the company – for example, an annual report or a press release
- Accidental dissemination of information about the company by the company – for example, a leak or rumour
- Information that comes from a third party. This itself can take a number of forms. One is like a footprint in the sand – competitor actions can provide clues to their plans or strategies. A typical example is when a competitor signs a contract with another company. This company may mention the contract – giving out information about the competitor. A second example is where a third party has managed to collect information on a competitor from a variety of sources, including interviews and non-published sources. In this case, the third party may decide to publish their information as a market research report. Sometimes the third party may include a synthesis of information that, combined, gives further insights.
Sometimes, though, information can come too easily. Part of the skillset of a competent competitor analyst should be the ability to evaluate why information became available. Which of the above was the reason, and how reliable is the information? There is a risk that the gathered intelligence is wrong, and a validity check can help assess the chances of this. (One common approach is to grade both the intelligence and the source, giving a likelihood of accuracy.) Whatever method is used, however, there is always the risk that things will sometimes be wrong.
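The grading approach mentioned above is often formalised along the lines of the NATO “Admiralty” system, which rates source reliability A–F and information credibility 1–6. The sketch below is illustrative only – the corroboration threshold and the helper names are my own assumptions, not a standard implementation:

```python
# Illustrative sketch of an Admiralty-style grading scheme:
# source reliability A-F, information credibility 1-6.
# The corroboration rule below is an assumed example, not a standard.

SOURCE_RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

INFO_CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

def grade(source: str, credibility: int) -> str:
    """Combine the two ratings into a single grade such as 'B2'."""
    if source not in SOURCE_RELIABILITY or credibility not in INFO_CREDIBILITY:
        raise ValueError("unknown rating")
    return f"{source}{credibility}"

def needs_corroboration(g: str) -> bool:
    """Assumed rule of thumb: anything weaker than 'B2' should be
    cross-checked against an independent source before use."""
    source, cred = g[0], int(g[1])
    return source > "B" or cred > 2

# Example: a press release from the company itself might be rated B2
# (usually reliable, probably true), while an unattributed rumour
# might be E4 and flagged for corroboration.
print(grade("B", 2), needs_corroboration("B2"))  # B2 False
print(grade("E", 4), needs_corroboration("E4"))  # E4 True
```

The point of such a scheme is not the letters themselves but forcing the analyst to record, for every item, both where it came from and how believable it is – which in turn makes the “why was this available?” question harder to skip.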
A simple approach is to consider how easy the information was to obtain. This works on the assumption that competitors will try to protect information that they would prefer not to be in the public domain. So if information is easily available, it has a lower value and may be more suspect than information that took a lot of thought and work to obtain.
This is illustrated by the following story – in this case there was an ulterior motive in providing information that, on the surface, looked like a real money-saver. The true reason came out as an accidental disclosure, following a pointed interview-type question!
A man was having problems with the quality of the print from his printer, so he called a local repair shop, where a friendly man informed him that the printer probably needed only to be cleaned. Because the store charged $50 for such cleanings, the repairman told him he might be better off reading the printer’s manual and trying the job himself. Pleasantly surprised by his candour, the man asked, “Does your boss know that you discourage business?”
“Actually, it’s my boss’s idea,” the employee replied sheepishly. “We usually make more money on repairs if we let people try to fix things themselves.”
I’m not sure what the weather is like outside London. Summer here could have been better – but also a lot worse. I know that there are heatwaves in Southern Europe, and I’m sure that even though hurricanes hardly hever happen in Spain (where there currently is not much rain on the plain), this is not the case everywhere. And as Discovery found out, you can’t predict when bad weather can cause you to change your landing slot!
It is important to know information sources. When I do competitive intelligence training, one workshop exercise I take people through shows how many different information sources there actually are. I do this by compiling an A to Z of information sources – with one rule: they must all be different types. (So you can’t have both Search Engine and Yahoo! as options for S and Y, as Yahoo! is a search engine and so is already covered by that category.) Try it – it is not that difficult. I have 4 items for K and 2 each for Q, X and Z. For most other letters you are spoilt for choice.
However, it is not enough just to know your information sources. You should also know how accurate they are – and, ideally, how each source gathers its information. Only then can you guard against disinformation – or against using secondary information that you believe to be a primary source.
There is a story:
A film crew was on location deep in the desert. One day an elderly Native American went up to the director and said, “Tomorrow rain.” The next day it rained.
A week later, the Native American went up to the director and said, “Tomorrow storm.” The next day there was a hailstorm. “This Indian is incredible,” said the director. He told his secretary to hire the man to predict the weather for the remainder of the shoot.
However, after several successful predictions, the old Native American didn’t show up for two weeks. Finally the director sent for him. “I have to shoot a big scene tomorrow,” said the director, “and I’m depending on you. What will the weather be like?”
The Native American shrugged his shoulders. “Don’t know,” he said. “My radio is broken.”
So before using a source, make sure you fully understand it – and any drawbacks or weaknesses associated with it.
The above story illustrates the need to know your source’s source. There is a variation on the story illustrating the same lesson, but also showing how important it is to be objective. Some sources are not totally objective, and so the information provided can actually be false or disinformation.
In autumn, the Native Americans asked their Chief whether the winter was going to be cold or not. Not really knowing the answer, the Chief replied that the winter was going to be cold and that the members of the village should collect wood to be prepared. Being a good leader, he then went to the nearest phone booth, called the National Weather Service and asked, “Is this winter going to be cold?”
The man on the phone responded, “This winter is going to be quite cold indeed.”
So the Chief went back and urged his people to collect even more wood to be prepared. A week later he called the National Weather Service again: “Is it going to be a very cold winter?”
“Yes,” the man replied, “it’s going to be a very cold winter.”
So the Chief went back to his people and ordered them to go and find every scrap of wood they could. Two weeks later he called the National Weather Service again: “Are you absolutely sure that the winter is going to be very cold?”
“Absolutely,” the man replied, “the Native Americans are collecting wood like crazy!”
As a real example of disinformation, there is the well-known Dihydrogen Monoxide Research Division website.
Of course it could be that Microsoft is using a satellite image from around 30 or so years ago (despite the 2004 copyright notice from NAVTEQ at the bottom). More likely is that Microsoft just wants to air-brush a major competitor – Apple Computers – out of history.
As somebody who has just received delivery of my brand new Apple iBook, I can fully understand why Microsoft would like to do this. However, just renaming your future operating system from a breed of cow (Longhorn) to a long-term view (Vista) does not make it a Tiger!
From a competitive intelligence information-source point of view, the above two maps show how easy it is to:
b) blind yourself to the real picture
c) use a respected information source, such as Microsoft’s mapping software, that may not be totally accurate.