
Data is at the heart of search, but who has access to it?

105 points | dpw | 11 years ago | andreasgal.com | reply

81 comments

[+] ChuckMcM|11 years ago|reply
Sigh, this is incorrect.

edit: incorrect is perhaps too strong, it is incomplete.

While it is true that click tracking can be used as a relevance signal, the people who were really pissed off when the data stream got dumped were advertisers who wanted to buy AdWords. That was a very simple system, pay someone for clickstream data, extract trending queries, front those with AdWord buys to get your page on the top of Google's results, and profit.

Having built a search engine and run it for 5 years, we got to see what people felt was relevant and what wasn't, in a very loose way, with click stream data. Basically you have a query and 10 blue links; you can split the results into quartiles and figure out whether the thing they clicked on was in the top half, bottom half, top quarter, second quarter, etc., and do A/B testing to see how that played out. But what we found was that the best indication of what a page was about was the text that linked to it. If you have an in-link to a page which reads "<href='page'>great radio site"[1], then "great radio site" would be a query that should return that page, which might be titled something like "bob's electromagnetic spectrum imaginarium" or something equally unlikely to come up in a query string.
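
A minimal sketch of the two signals described above, with made-up names and scoring (not Blekko's actual pipeline): bucket the clicked rank into quartiles for A/B comparison, and treat anchor text as query terms for the target page.

    # Hypothetical sketch: bucket clicked result positions (1-10) into
    # quartiles so two ranking variants can be compared in an A/B test.
    import math
    from collections import Counter

    def quartile(rank, num_results=10):
        """Map a 1-based clicked rank to a quartile (1 = top quarter)."""
        return math.ceil(rank * 4 / num_results)

    def quartile_distribution(click_log):
        """click_log: iterable of (query, clicked_rank) tuples."""
        counts = Counter(quartile(rank) for _, rank in click_log)
        total = sum(counts.values()) or 1
        return {q: counts.get(q, 0) / total for q in range(1, 5)}

    # Anchor-text signal: the text of a link pointing at a page becomes
    # query terms that page should match, even if the page never uses them.
    anchor_index = {}  # term -> set of target URLs
    def index_anchor(link_text, target_url):
        for term in link_text.lower().split():
            anchor_index.setdefault(term, set()).add(target_url)

    index_anchor("great radio site", "http://example.com/spectrum-imaginarium")
    print(quartile_distribution([("radio", 1), ("radio", 7), ("jazz", 3)]))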

So the bottom line is that there are lots of ways to try to determine relevance, click stream data is a part of that but by no means the biggest factor.

[1] neutered html for obvious reasons.

[+] Animats|11 years ago|reply
The value of looking at queries is that it allows learning what questions users ask. The front end of the search process is to infer from the query what the user really should be given. That's a machine learning problem. The head of Google search remarked recently that "as the search engine gets smarter, the queries get dumber".

This is reflected in Google's search results. A Google query which can possibly be interpreted as related to a popular culture item usually will be. Google has become more aggressive about this over the years. Their "Did you mean" result tag once offered an alternative for a second search. Now, they return results for the more popular interpretation first.

The back side of search, page quality and ranking, is weaker than many think. Links are less useful than they used to be. Most links to business sites are now from "social" sites or forums, which are easily spammed. Using social signals was a disaster back in 2012, when, for a few months, Google went all-in on social signals. Google tried to recognize sites that "look like spam", but everybody knows that now and spam sites look better than ever. (The same thing happened with spam emails a decade ago.) Google doesn't recognize provenance, so they can be fooled by scraper sites. Google doesn't recognize the business behind the web page, so they can be fooled by marginal businesses. There are even SEO companies using machine learning to reverse engineer Google's algorithms, to find out how far they can go with keyword stuffing before a penalty kicks in.

Google does far more manual adjustment than they did two years ago. There's an army of people doing manual ranking, and a smaller unit handling appeals from manual penalties. There was a time when Google boasted they did no manual adjustments to ranking. The automation is starting to fail.

[+] Sven7|11 years ago|reply
But where's the competitive ecosystem in search? Innovation in search is restricted to a few hundred people in Mountain View. And that's a tragedy.

Whatever Google did for innovation in smartphones, tablets, and browsers, they have gone and done the opposite for search.

[+] minthd|11 years ago|reply
Chuck, while Blekko is a great search engine (especially due to custom search), it's clear that it is very different quality-wise from Google. Same for Bing: it's not up to Google's level, and not for lack of trying or money (Bing).

So how do you think Google is succeeding so well, if it's not click stream data? And why couldn't it be a combination of things that strongly depends on click stream data, which others couldn't copy?

[+] jfuhrman|11 years ago|reply
>In Germany, for example, where Google has over 95% market share, competing search engines don’t have access to adequate past search data to deliver search results that are as relevant as Google’s. And, because their search results aren’t as relevant as Google’s, it’s difficult for them to attract new users. You could call it a vicious circle.

This is interesting because of the browser choice screen enforced by the EU on Windows. IE, whose default is Bing, lost share to other browsers like Chrome, Firefox and Opera, which all had Google as the default. So an attempt to fix the browser market totally distorted the web search market. I wonder why MS didn't ask the EU to require that the alternate browsers in the browser choice screen have Bing as the default search.

I wonder if the EU will mandate that search relevancy data must be shared by Google with rival search engines like DDG just like they mandated that SMB shares and Office formats must be documented by MS and released to developers.

[+] dheera|11 years ago|reply
Ethics and morality aside, I'm curious what allows the EU to "enforce" laws on a US company. Let's say Google and Microsoft don't register entities in the EU. Can the EU do anything?

Can Microsoft and other US-based technology companies theoretically just keep doing their own thing, tell the EU government "to hell with it, we're abiding by US laws, you have a choice to stop importing Windows and invent your own OS if you don't like us"?

[+] solve|11 years ago|reply
Other than the index data, there's something even bigger.

Google's biggest PR success is convincing everyone that the quality of web rankings depends almost purely on algorithms. It does not. What allows Google to hold their monopoly is the $100s of millions (or more) they continuously pay to amass more manually created training data:

http://www.theregister.co.uk/2012/11/27/google_raters_manual

http://www.forbes.com/sites/timworstall/2012/11/27/is-google...

A new search engine could appear today with algorithms 10x better than Google, but without access to this scale of training data, their rankings wouldn't even be close to Google's quality.

Google maintains their position by paying cash for this monopoly on training data made by tens of thousands of $9/hour workers, not through superior algorithms!

[+] bobajeff|11 years ago|reply
I think a problem that is happening here is that there is no competition in search, just like there is no competition in social networks and operating systems. Not like there is for things like automobiles, electronics and clothing.

Computers introduce a means of locking people in that doesn't exist in other markets. In software products there are often ecosystems that tie directly into the product/service and are not required to be shared with competitors, unlike road systems for cars.

Regulators ought to look into ways to enforce measures that require the companies to completely open their ecosystem to competitors. Or look into ways to standardize these ecosystems and require every service/application/website comply with them (similar to how media companies are forced to include closed captioning).

[+] pain|11 years ago|reply
"Jobs did great harm to the world with his iThings: computers designed to be jails for their users. His genius was to find the way to make these jails desirable so that millions would clamor to be locked up." —Richard Stallman
[+] sanxiyn|11 years ago|reply
In South Korea, Google's market share is below 5%, and Naver gets more than 80% of search queries. I think this is the reason Google's search results for Korean content are not as good as for content in other languages.
[+] jjoe|11 years ago|reply
So the whole push for SSL/https from Google has been opportunistic rather than good practice. I mean why would a search engine go as far as to make SSL a ranking signal?
[+] dheera|11 years ago|reply
Sites that use or at least offer SSL probably also tend to be higher-quality sites. The combination of verified identity and payment means that it's a natural filter for people who are at least semi-serious about their project.
[+] pixl97|11 years ago|reply
> I mean why would a search engine go as far as to make SSL a ranking signal?

Because any number of third parties have been injecting their ads and other crap as MITMs. SSL is a better, but not foolproof, way to make sure the content you get was the content served by the remote server.

[+] nostrademons|11 years ago|reply
It could be opportunistic and a good practice. Users do benefit from sites that offer SSL. It's just that Google benefits too.
[+] ocdtrekkie|11 years ago|reply
It makes you wonder how many changes were made for "privacy" and how many changes were made for "protecting our business".
[+] stevenbedrick|11 years ago|reply
Is it necessarily an "either/or" situation here? This seems to me like an example of a "both/and".
[+] pcl|11 years ago|reply
Interesting. I wonder to what extent this reasoning was behind executive support of the Chrome project, and whether it was a factor from the outset or something that Google stumbled upon after developing a browser.
[+] sanxiyn|11 years ago|reply
I am 100% sure this is the reason Chrome was funded. (I don't doubt the Chrome developers' goal was to develop the best web browser in the world, but the business case for doing so is a different matter.)
[+] systemBuilder|11 years ago|reply
In my opinion, Chrome was funded for two reasons:

(a) At the time Chrome was launched, IE was dominating with ~69% market share: https://d28wbuch0jlv7v.cloudfront.net/images/infografik/norm... And Firefox/Mozilla was topping out at 25% market share! They were basically resting on their laurels! Remember that the SPDY protocol, which is the prototype standard for HTTP 2.0, was invented at Google and was the main innovation within Chrome 1.0. If you do a date-restricted Google search for SPDY over 2008-2010, you will see that the SPDY whitepaper page is dated Nov 12, 2009: https://www.chromium.org/spdy/spdy-whitepaper So Chrome was launched to make web browsing faster.

(b) Google Search does not want to be excluded from any browser. The solution to this problem is to fund your own browser. If IE was going to dominate Firefox forever and Google was depending on Firefox defaults for much of its search traffic, then Google was virtually FORCED to create its own browser, or it could always be limited to 25% (or less) search traffic share.

I think that having a "browser account" which synchronizes bookmarks, settings, and history across all instances of Chrome for a given user is one of the greatest improvements in browsers in the past 5 years, and all other browsers seem to be copying this idea. If Google were the evil empire you imply, it would be suing the pants off these other browsers, but it is not.

[+] ntakasaki|11 years ago|reply
>In 2011, Google famously accused Microsoft’s Bing search engine of doing exactly that: logging Google search traffic in Microsoft’s own Internet Explorer browser in order to improve the quality of Bing results.

MS didn't do that from IE; they did it for users who installed the Bing bar, a huge difference.

[+] Metapilot|11 years ago|reply
I think the author's perspective is skewed in order to stay in line with the title. Here's an example of why I say that:

The author states that "For some 90% of searches, a modern search engine analyzes and learns from past queries, rather than searching the Web itself, to deliver the most relevant results." This may be true in some types of searches but overall, I think the statement is misleading.

Rather, it's better to think of it like this: one important part of the algorithmic process involves constantly crawling the web and updating the index with new information. (Important / frequently-updated web sites may get crawled all day every day, while less important ones may get crawled only weekly or monthly.) Meanwhile, another part of the algorithmic process constantly analyzes new info discovered in the crawl and combines it with, as the author mentioned, click-through data learned from past queries.

The answers to many queries don't change, while the answers to many other queries deserve freshness. For example, I'm quite certain Einstein's date of birth hasn't changed in quite a while, but his theory of relativity is in constant discussion and there is always new information and new queries pertaining to it. As a result, there is not much need for a search engine to go digging for the latest info on an "einstein's birthday" query, but it's to everyone's advantage that Google is able to identify which pages on the web deserve priority crawling and that Google has retrieved and incorporated the fresh info those pages contain into its index when it comes to a topical type of query like "diffraction of light with quantum physics".
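
A minimal sketch of that kind of crawl prioritization, assuming made-up importance and change-rate scores per page (not Google's actual scheduler): important, frequently-changing pages get short recrawl intervals, while stable pages wait longer.

    # Hypothetical crawl scheduler: pages that are important and change often
    # get recrawled sooner; stable pages wait up to roughly a month.
    import heapq, time

    def next_crawl_time(last_crawled, importance, change_rate):
        """importance and change_rate are assumed scores in (0, 1]."""
        month = 30 * 24 * 3600
        interval = month * (1 - importance) * (1 - change_rate) + 3600
        return last_crawled + interval

    queue = []  # min-heap of (next_crawl_time, url)
    now = time.time()
    for url, importance, change_rate in [
        ("https://news.example.com/", 0.9, 0.95),               # recrawled within hours
        ("https://example.org/einstein-biography", 0.6, 0.05),  # recrawled after days
    ]:
        heapq.heappush(queue, (next_crawl_time(now, importance, change_rate), url))

    when, url = heapq.heappop(queue)
    print(f"next: crawl {url} at {time.ctime(when)}")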

In the end, the results to every query depend on info gathered from the web and user data helps refine the results. Info that is more static can be prioritized with more input from click-through data, while new information found on the web must rely more on Google's artificial intelligence to push it up in front of searchers.

Another reason that "90%" statement sticks out to me is that there is a fairly often-used factoid tossed around by industry experts that between "6% to 20% of queries that get asked every day have never been asked before." Google can't rely heavily on past query data for these types of searches.

[+] solve|11 years ago|reply
You're vastly underestimating the uniqueness of search queries these days. Various sources within Google have said that 25% to 50% of queries entered into Google have never been seen before at all.
[+] wmf|11 years ago|reply
So does Mozilla's contract with Yahoo allow Mozilla to track query data and maybe feed it to underdog search engines like DDG or Blekko (oops)?
[+] minthd|11 years ago|reply
AFAIK, the deal with Yahoo was about putting Yahoo search in front. If it was about tracking Google search data, Mozilla should have at least let people know, especially given their claim of protecting privacy. And if they lie, they risk a very strong response, especially from the developers they depend on.

Also, if such changes were to be made, there's a decent likelihood that someone would have noticed the data leakage and told us about it.

So since Mozilla is a pretty decent company, we should currently give them the benefit of the doubt.

[+] rockdoe|11 years ago|reply
I think the point is more that the contract with Yahoo in the USA doesn't prevent Mozilla from making deals with smaller[1] players elsewhere (which they've been doing).

[1] Smaller than Google. The search box isn't given away for free.

[+] ekr|11 years ago|reply
So that's why Google created the Chrome browser.
[+] ntakasaki|11 years ago|reply
Not just created, but bundled and installed by default with Java and Flash updates, some of which also install the Google toolbar into IE. Many folks that I had converted to Firefox from IE back in the day use Chrome now and have no idea how it ended up on their computer. This explains the steady rise of Chrome, not the small percentage of tech geeks who installed it by choice.
[+] minthd|11 years ago|reply
So, since Google tracks the full browsing experience of Chrome users, and hence gets more relevant data than for users of other browsers, it has the theoretical ability to offer better search results to Chrome users.

Has anybody noticed this happening ?

[+] asuffield|11 years ago|reply
(Tedious disclaimer: my opinion, not my employer's. Not representing anybody else. I work at Google, not on Chrome.)

Google does not "track the full browsing experience of chrome users". Please read the privacy policy which is very clear on this subject: https://www.google.com/chrome/browser/privacy/

I particularly draw your attention to this paragraph: "If you use Chrome to access other Google services, such as using the search engine on the Google homepage or checking Gmail, the fact that you are using Chrome does not cause Google to receive any special or additional personally identifying information about you."

[+] tokai|11 years ago|reply
Training data is nice, but I think it's important not to underestimate capacity for crawling. IMO one of Google's strengths is that they crawl large quantities of new content. Smaller operations like DDG can't crawl at that scale. If I want discussion of new bugs, want to search the articles at my favorite news page (where the in-house search is unusable), or just want the newest blog post on some subject, Google is hard to beat.
[+] PaulHoule|11 years ago|reply
At this point Google is not winning because its search results are good (have you used Google recently?); it is winning because it makes almost 10x as much revenue as other search engines do per view -- at that rate any other search engine is running a charity.
[+] minthd|11 years ago|reply
It's really pretty weird. Google certainly has the capability to offer a great search experience, but it's very inconsistent.

For example, after learning that I like results from certain journals, their personalization engine offered me those in related searches, and usually I chose content from them.

But somehow, after some time, Google's personalization engine forgot that I like them and stopped offering me content from them, so I'm back to drowning in shitty results. Why? No idea.

[+] countrybama24|11 years ago|reply
Seems like there is a business opportunity to build a plugin of sorts that allows users to opt in and share their search data with competing platforms. I'd be interested in donating my data to help a rival engine compete with Google.
[+] thallukrish|11 years ago|reply
Only when the user owns his data, meaning apps are just logic and the user can selectively allow access to whomever he chooses, will we suddenly find more genuine things reaching the user, be it commerce or content.
[+] thrownaway2424|11 years ago|reply
It is unsettling to read this kind of chip-on-my-shoulder opinion piece full of innuendo under the Firefox logo and the Mozilla name but on the author's personal domain.
[+] Semiapies|11 years ago|reply
TL;DR - Yahoo! still exists and resents Google. But not for being better in their niche, no. Just for delivering a better service, which is not at all the same thing. Somehow.
[+] asuffield|11 years ago|reply
(Tedious disclaimer: my opinion, not my employer's. Not representing anybody else. I work at Google, not on search quality.)

This article makes a number of bold claims about the contents of data and code which its author hasn't seen, and is written by a company that is receiving a large amount of money from Yahoo. I would encourage people not to forget these details.

[+] rockdoe|11 years ago|reply
So just point out where it's wrong, instead of making a fairly disingenuous appeal to non-authority, or whatever you want to call it?