Giving up on Google (robsheldon.com)
231 points by thaumaturgy on Sept 13, 2010 | 184 comments



I really think this is a case of confirmation bias. We see Google approaching this dominating mass that's impossible to stop, so we begin looking for ways to tear down big brother and look for holes. So, we notice every spam entry.

I work in search, and I can't recall EVER having been frustrated with what Google gave me and having to go to another engine. And I live, eat, & breathe the thing.

I understand I have a notoriously small sample (compared to the entire world of search - and I generally search for terms that have been optimized by similar search professionals), but the point is that 10 good search results is pretty much impossible - what matters is that there are always three or four relevant results on every page. And without fail, Google continues to deliver this - at least for me.

This process is what occurs for every monopoly. We hate the Yankees, Microsoft, whatever. It's natural to want to bring down something so dominating. But for me, Google is doing an absolutely incredible job.

The only potential I see for something like DDG unseating them (or even gaining legitimate market share) is if Google loses the ability to pivot. If that happens, the algorithm could get diluted, which could turn their product to crap.

BUT, given their history of 200+ algorithm changes per year, they have shown that not to be the case. Because of this, I imagine Google will be with us - & strongly with us - for the entirety of our lifetimes.


  >> I really think this is a case of confirmation bias. We see Google approaching this dominating mass that's impossible to stop, so we begin looking for ways to tear down big brother and look for holes. 
I think that this is a case where we were comparing Google to what existed before (Yahoo, etc), and thought "this is great". Now we compare Google only to what we think it 'should' be.


For me, I remember how great Google was when I started using it around 1999, compared to AltaVista or Metacrawler; it was great because it was able to find results that the other two just couldn't, and it did it quickly.

Lately, I've found myself feeling like search is back in 1999 again.


thaumaturgy, if it makes you feel better, search is much better than it was in 1999-2000.

When I joined Google in 2000, one thing I did was run a bunch of searches and save the results. When I went back and looked at the data, Google2000 can't compare to Google2010. Google2000 still had lots of spam--it was much, much better than Altavista or other competitors at the time, but the results weren't nearly as good as they are now. I keep meaning to do a blog post and show some examples.


Those examples would definitely be interesting to read. I'm one of the many people who feels search results have gotten worse, but since I don't have any actual saved search results from 10 years ago it's really hard to say if I'm right or misremembering. Some concrete examples of what Google2000 looked like might be good for jogging the memory.

One example category does come to mind, which is a frequent pet peeve: lyrics searches. They definitely return sort of crappy results currently, but I can't say for sure what they returned in 2000, since I didn't save any results.

Examples:

Imo [joy division walked in line lyrics] should return a page like http://www.joydiv.org/shadowplay/joyd/walkedline.html. Instead it returns a bunch of sites like lyricsfreak.com, sing365.com, lyricsdepot.com, metrolyrics.com, etc.

The case is even stronger if there's actually an official site for the musician with official lyrics. An example of that: [vnv nation fragments lyrics] should return http://www.vnvnation.com/Webfiles/lyrics/fragments.htm, but instead it again returns the usual lyrics-site suspects.

My recollection is that I used to get fan sites more often than lyrics aggregators in 2000. That might be due more to web changes than Google changes, admittedly, since many of these lyrics aggregators didn't exist in 2000, and hobbyist fan sites may have been more vibrant. The fact that the same lyrics-aggregator sites dominate almost all of these searches seems weird, though; basically every lyrics search returns "big lyrics site" rather than a site specializing in the band I searched for.


> Lately, I've found myself feeling like search is back in 1999 again.

You're spoiled then.


> You're spoiled then.

Spoiled? I remember search being horrendous back in those days. It was almost exact word matching, and it took hours to find relevant results. Google was a breath of fresh air back then (especially since they didn't run all those X10 ads and popups a la Yahoo!).

I switched to Bing a few months ago for tin-foil-hat reasons, and also because I was disappointed in the way Google interpreted some of my queries (for some reason the plus sign doesn't mean anything anymore).

I occasionally use Google when I give up on finding something on Bing, but 99% of the time 4-5 results on the first page are pages I already found on Bing, with the rest being irrelevant.

I too wasn't aware of the search tools that Matt Cutts mentioned above; I'll give them a try the next time I'm having a hard time finding something with a date restriction.


"It could be worse" is always a valid rebuttal to any complaint.

But it doesn't actually move the conversation forward at all.


But I didn't mean "it could be worse"; I meant Google search is actually amazing and I find what I'm looking for 99% of the time.


And we never imagine what the internet would be like without Google. You think spam is bad now? It's possible that in a world without Google web search just wouldn't work because Bing or Yahoo (or altavista, cuil, powerset, or whoever) just couldn't deal with the spam.

Certainly Google isn't perfect, but they do a pretty good job combating web spam that is explicitly targeted at them.


I think part of the problem is that for many people Google is the internet. Even for me (pretty technically minded), if a few Google searches don't find what I'm looking for, it isn't on the internet, or might as well not be. I don't think I've even tried turning to another search engine in several years.


It's tricky. Google gets well over 1B searches a day. And each week, 1B users search on Google. Most of those searchers aren't as savvy as HN folks. They do things like misspell ~10% of queries, not to mention searching in weird ways. Only one double-quote in the query? How should we interpret that?

So Google has power-search operators, and we introduce new tools like the left-hand-side search tools, but we also have to stay accessible for the folks who aren't that tech-savvy.


> Only one double-quote in the query? How should we interpret that?

My vote: "Syntax error on line 1" with no further hints. ;)


...but I would just Google tha-- oh.


Which raises the question: are there regions of the web that simply aren't indexed?


Absolutely there are, despite our efforts. See the bmtcinfo.com example mentioned elsewhere on this page. But normally there's a good reason if we don't have a page, e.g. bmtcinfo.com returns an error unless you add a "www" in front of it.


Those that blacklist Googlebot in their robots.txt.
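For example, a site can opt out of Google's crawler entirely with the standard two-line robots.txt exclusion (shown here purely as illustration):

  User-agent: Googlebot
  Disallow: /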


...allegedly.


I am often annoyed by Google. Its word clustering is too clever by half.

At one point I was debugging, I think, an NSTableView object. Google says "eh, fuck it, iPhone dev is much more popular, here's UITableView. Have fun with that."

And so then I have to wrap NSTableView in quotes to force Google to use my input as I've provided it.

I wish I could turn this kind of stuff off.


I, too, was annoyed when Google changed to automatically 'fixing' your terms a few years back. Sometimes it's useful, sometimes it isn't. Sometimes they TELL you at the top, sometimes they don't. It took me a while to figure out that single quotes are ignored while double quotes are taken very seriously. In the meantime, the usefulness of Google went way down for me. I don't mind having to use double quotes, but it should be the other way around - I'd rather click a 'do a fuzzy improved search' box than have to say 'yes this is what I really want to search for I MEAN IT' with double quotes.


This would make sense for the crowd here, but not for 90% of users whose searches are very fuzzy. So, another vote for "expert mode"...


I think anyone might find it confusing when they search for something specific and their words are automatically changed - except for spelling fixes.


I can't think of any good examples now, but in some IDEs in the 80s and 90s they had an "expert mode" you could toggle on and off to extend the number of menu options and the like. A similar setting on Google would be great.

I know a little of Google's syntax like the + and " workarounds, but having a literal search as default would be worthwhile for many users. I recall that I switched to Altavista back in the mid 90s because it offered a feature like that.


From your mouth to Marissa Mayer's ears.

My ultimate Google fantasy: An account setting called "2008 mode."

No instant search.

No fancy, annoying endless scroll Google image search.

No word clustering/auto-substitution.

It would be awesome.


Yes, please.

The image search has been a pet peeve of mine, especially because of the way the results are provided to the end users. It seems they do everything they can to stop people from visiting actual websites through the image search.

Instant search was fun for about 30 seconds; then it was back to work, and it became rather annoying. At least you can disable it (just to the right of the search bar there is a little settings menu).

The word clustering and auto-substitution are a real pain in the ass, especially when it keeps coming up with stuff that just isn't right but is more popular. I really can't stand that. I find myself using quoted queries far more frequently than in the past.

2008 sounds just about right, they can market it as 'google classic' for all I care.


> 2008 sounds just about right, they can market it as 'google classic' for all I care.

It makes me wonder if, perhaps, Google2010 is just Google's New Coke ;-)


I can't think of one time in the hundreds of times I've used image search that I cared at all about the actual websites the images are from. I wish it went even more directly to the images than it does, with a small link to the site if I'm interested.


YES!

At the very least, please give us an "allintext:" that actually works:

absolutely no stemming whatsoever

no "words were found in links pointing to this page" (this is the most infuriating thing ever)

and... no "you're a bot" craziness (or captchas that normal people can solve...)


Don't even get me started with Google's captchas. They're unreadable even for humans.


I wonder if part of Google's speed increases involve dropping some of the more complex keywords while creating the index so that the search space is smaller.


Not that I can detect.

For instance: หาเมนบอร์ดช๊อคเกต775 returns 42k results, and is not exactly a common combination of words on google.com.

Extra points if you know what it stands for :)


> หาเมนบอร์ดช๊อคเกต775

'Haa men bord chok ket 775' = 'Find motherboard socket 775.'

I would have thought that 'socket' would be a direct English transliteration, and use ซ (s) not ช (ch) though.

Are you in Thailand, Jacques? Also, what makes you think this is such an uncommon phrase? It may not rival English, but there are plenty of Thai speakers out there, in many countries.


> Are you in Thailand, Jacques?

I wish.

On google.com it would be an uncommon phrase. On the Thai version of google not so much I guess.


Google Translate claims, "Find Motherboard Shock gate 775."

I don't know what that is supposed to be a reference to.


Close enough. 'Shock gate' is funny - phonetically very close to 'socket', right?


Here to claim my extra points. Btw, tried searching for Gmail messages in Thai before? It doesn't work.


Such a claim would need to be accompanied by proof ;)


Interesting. One thing to add to that hypothesis: right around April 2010, Google dropped a lot of pages from its index. That really messed up SEO-play companies.

As proof, look up any SEO-play company on Alexa around April 2010.


For regular search, their SSL search page https://www.google.com/ is very close to "2008 mode". No luck on the image search part though.


For some reason, this is the page I get when I type google.com into the address bar in Safari (WebKit nightly). And I have yet to see Google Instant.


That would indeed be awesome.

Google Image Search also breaks horribly with FF and Ghostery installed.

Google Instant still does horribly at my favorite test search word: Cardinal

Do they know if I'm looking for

a) the bird
b) the team
c) guy with the funny hat

? A: No.


You could just as well be searching for: d) a term from set theory, e) an integer type.

Too much ambiguity.



That link alone just made me switch my search engine to DuckDuckGo.


Ditto.

I actually tried this search on DDG yesterday while trying some other tests and DDG's results were much more useful IMO.


Visual Studio 2008 and beyond has this, and it's disabled by default for VB.NET users. I'm not joking.

It's a setting called something like "view advanced members"...


Here's one: I often google for Oracle and Python. I sometimes get articles about Apollo, Pythia and Delphi.


Hmm. I just searched for it. 46,100 hits. Not a UITableView in sight.

Are you sure you didn't mistype? If I type in USTableView, it shows me two UITableView hits at the top, before the exact matches.


It was a specific phrase where it did this, because I tried again with just NSTableView and it was cool. I wish I could remember what was giving me trouble so I could search for it again.


Check your search history: http://www.google.com/history/


Search history is one of Google's creepier features – I disabled it a while ago.

Not that I suspect it helps anything, but at least if someone finds my computer or something, that data isn't two clicks away.


FYI, search history is one of the parts of your account that you have to re-enter your password to view. So it should be safe from prying eyes.


Prying eyes on your side of the connection you mean.


https://www.google.com/


Which terminates on the other side, where everything is plaintext again. HTTPS is a transport-level protocol, so it protects you from people (trivially) snooping on your data in transit, but anybody at Google with enough clearance has access to that data.


If that's what you are worried about then it seems like you'd have trouble using the internet in general.


I'm not worried, it's just that the gggp of this comment mentioned that search history is protected by a password and so it 'should be safe from prying eyes.'

All I did was point out that that data is stored on a server somewhere else and that there are lots of eyes that could be prying there.

I'm sure that there are plenty of people that are concerned about their internet use being monitored, for instance, dissidents and other people that have legitimate reasons to be afraid of having their search history lifted in the future.

Interesting reading on the subject of subpoenas of search records:

http://www.npr.org/templates/story/story.php?storyId=5165854

The less information that is being held on you by third parties the less chance that one day you'll be part of some drag-net operation.


Any website that you use could store all kinds of data about you without even telling you. Not to mention your ISP, who knows everything! At least Google is making it clear via Google History that they do have the means to do it.


https should stop your ISP from spying, no?


Assuming no MITM attack, yes. Of course they can still see what hosts you're connecting to for your HTTPS sessions, just not the content going back and forth.


It's creepier than you think - ever since it began making suggestions, long before Google Instant, Google has been quietly logging things you didn't search for.


They should add some kind of "I mean this word and only this word" symbol. Maybe backslash, since it's sort of like you're escaping the word. So in your case, you'd change "NSTableView" to "\NSTableView" and it would work.


They've already got it: it's the plus sign.

Try +NSTableView


In fairness, they have that already. If you wrap a word/phrase with quotes, they'll not screw around with their fancy word clustering.


Not really. I've had this break for me all the time when searching for programming terms.

Proof: Google for "what we have here is failure to communicate" http://www.google.com/search?sourceid=chrome&ie=UTF-8...

The first hit is the wiki article on the quote (good), but the exact search string is not present on the page.


Putting double quotes around a phrase does an exact phrase search. But as studer mentions, we also do exact phrase searches in anchortext pointing to the page, so you can get a match that way.


No kidding! I thought that the quotes made the term inviolable. I wonder if this is new.

A big issue with Google is that, behind the scenes, things change without any outward guidance to long-term users on how to adjust.


It's not new. And + doesn't mean required any more, either. It's treated as increased importance, but not required.

I've been complaining about these changes for years.


Click "cached", and you get the explanation:

"These terms only appear in links pointing to this page: what we have here is failure to communicate"


This is one of the worst 'features' I have been fighting recently (along with most of the other beefs around here). I am trying to find an appliance review for a stove prone to breaking, so I search:

"Brand Model 1234 broken control panel"

And I get an aggregate review page mentioning broken control panels for a different model and brand, on which my model and brand are not even listed...

I'm glad that my search term is in a link pointing to this page, but that is worth less than nothing to me...


Quotes mean something different, and I'm not sure if quoting a single word actually does anything. "+" is the operator you are looking for. If I add + to the +beginning +of +words +in +my +query, it will ensure that those words are on the page and aren't coming from any other source of data.


Had a similar problem when googling ruby 1.9.2 slow snow leopard, since that describes the problem I had. There was no auto-correct, but somehow they decided to highlight the word low in the search results (which may or may not mean that the rankings were screwed as well). I guess slow is just the plural of low. Because of the s, you know. I mean, seriously, if there aren't enough relevant results with the word slow in them, why include a completely unrelated word?


I don't understand the Gmail complaint at all. Sure, it has its weaknesses - such as the policy of not spending the resources to stem emails, which makes search suck - but Gmail pretty much defined the usable webmail interface for power users.

> The interface on my webmail software feels like a mail client should -- easy navigation, threaded conversations, multiple window panes, and it's fast. Google on the other hand took years before they could be bothered to add buttons to Gmail, and even now Gmail's interface is an ugly monument to 90's era design principles.

Uh, in 2004 when Gmail launched it was one of the first widespread AJAX apps. It made Hotmail / Yahoo feel like molasses, gave you 200x storage, and added the idea of labels instead of folders. They obsessed tremendously over performance to make it usable by people who hitherto had only been able to tolerate desktop mail clients.

Now I realize a lot of things change in 6 years, but hell, Dreamhost is still using SquirrelMail. The author just throws it out there like amazing webmail is a foregone conclusion, yet I've not seen any webmail that beats Gmail even by a little bit. Can someone enlighten me?


A lot has changed since 2004. Yahoo, for example, has three-pane viewing, tabbed email and chat, and you can scroll through every email your Inbox has ever received without having to click any links.

It feels pretty much like a desktop application, actually...


Threaded email is a major reason to choose Gmail over Yahoo (in 2010).


GMail doesn't even respect In-Reply-To, and frequently screws up threading on any technical mailing list I subscribe to. It's good at its ad-hoc "conversation" grouping, not threading.


I use Dovecot + Postfix + Roundcube + Managesieve + SpamAssassin and am pretty happy with that stack (although I've currently got a very annoying Roundcube problem, so ...).


It's great that Dovecot + Postfix + Roundcube + Managesieve + SpamAssassin works for you, but not everyone wants to take that path (or knows how to). $50/year is worth it for some people not to have to diagnose Roundcube issues.

I think Gmail does hit a sweet spot for most power emailers. It took months for me to switch from my previous custom mutt + .procmail setup to Gmail. Do I still miss procmail sometimes? Sure. But overall, I save a ton of time using Gmail instead of rolling my own email solution.


All of that is completely understandable, but -- unless they've changed recently -- Gmail's filtering is really pretty far behind what it could be, and Gmail's interface is -- for me, and the clients I support -- a frequent source of complaints.

That's not to say it's bad, but rather that -- given Google's size and the ridiculous amount of smarts on its payroll -- it could be a lot better. It just doesn't seem to be what Google is focusing on these days.

I'll grant though that Gmail has, hands-down, the very best spam filtering in existence. I really don't know how you guys do that.

But ... just to put things in perspective, I think that's the only killer feature that Gmail has left. Otherwise, it's pretty vulnerable; any service provider could launch the same setup I have, on the really cheap VPS services available these days, and readily compete with Google's business mail hosting.


I tend to agree with you on filtering--I want the ability to filter on headers, for example. See http://www.google.com/buzz/109412257237874861202/Tu7U17YACA8... where I made the argument that a startup could use Gmail OAuth to do filtering better than Gmail itself.

But I do think that Gmail Labs is pretty hard to beat. Labs like "Send and Archive," "Undo Send," and "Quick Links" save my bacon on a pretty regular basis.


I think the big deal there is: yeah, Gmail works, but it doesn't $50-a-year work.


Personally, I wish someone I like, like Colin Percival, would start an email service. I like the way he runs Tarsnap.


The problem with starting an e-mail service is the crazy level of administration and sysadmin headaches it induces merely to be able to reliably send e-mail to 90%+ of endpoints - let alone anything else.

The engineering problem is a pretty interesting one, but the headache of ensuring your users can actually get their mail someplace makes it seem dull or insurmountable to solo developers.


Yeah, I second this. Handling people's email is not fun at all, and 90% is way too low IMO. 90% would mean that 1 in 10 of someone's emails wouldn't make it to where it was supposed to.

Users start to get annoyed if you do any worse than 99.9%, and they don't like receiving any spam at all, and to make matters worse, "the other end" can be anything from a creaky old sendmail server to an Exchange monstrosity, either of which can communicate in odd ways or drop a message altogether.

I keep expecting to see email go the way of Usenet, but it keeps refusing to die.


I completely agree. Not only that, but users will typically deal with lots of software annoyances, but one thing they simply will not tolerate is late or lost email.


Having spent far too much time figuring out problems in a fairly large email system (120 servers, world-wide, just about every MTA known, including Notes) I can heartily agree - there are just so many obscure ways that email can fail that it's really not funny.


Maybe a minor quibble, but isn't DDG powered by Yahoo BOSS with features added on top? Yet Rob then goes on to say "Yahoo can't be taken seriously" and points out queries where Yahoo does a poor job of handling synonyms - DDG has the same problem for the exact same query if you try it. Furthermore, the other queries which he suggests give irrelevant results on Google seem to give me irrelevant results on DDG.


DDG is actually a hybrid of my own crawling/indexing and BOSS/Bing/others. Additionally, I don't use the others straight up. So for some queries they may look similar and for others they will look completely different.

Wrt spam sites, DDG often looks a lot different because I maintain a large database of spam sites that I remove from results. I see these crop up all the time in the API feeds I use. It's over 60M domains in just the main TLDs (non-country-level domains).


Among the differences, DDG blocks MFA content mills and junk sites. If you report one they add it quite fast.


Yahoo certainly tries to do that too, although I guess you're getting the union of yahoo/bing's spam-fighting and DDG's spam fighting.

I suspect that of the two, yahoo's has a much bigger impact given the size of the teams involved.


Actually I remove tons of spam from the Yahoo, Bing & other feeds.


What order of magnitude is tons? Not to take away from what you're doing, but historically 90% of new domains are spam.


I'd have to check for exact numbers, but for a large % of searches I'm removing links from those APIs.


> If you report one they add it quite fast.

Probably because few people report sites.


I've tried several times to give up my Google habit. Once for an extended period with DDG. However each time, despite the correctly mentioned issues with spam, gaming etc, I still experienced severe withdrawal symptoms (and I made myself use the alternatives for 2 weeks at least, so I wasn't just hankering for the familiar).

Ironically, of all Google's services, search is the one I could most easily replace and would most like to replace, if an equally good competitor emerges. On the other hand, giving up Gmail, Google Calendar, Google Docs and even Buzz at this point would be hard, as my life is interwoven into them in many complex ways. The fact that my whole life is stored in these accounts, while the same account lets Google track my searches and associate them with me, scares me and actually makes me want to replace Google as my search engine.


I agree that Google search is the easiest to replace. I've been using DDG for a few weeks now and I've been happy with it. I tried to look for a good competitor to Gmail but every other competitor is just plain bad. It's like everyone just gave up on email after Gmail came along.


My biggest peeve about Google is how subpar their search is outside of web search & Gmail. YouTube is the one that burns me the most. I often have to go to web search and append "youtube" to the end of my query to find the relevant result. The same exact search on YouTube gets me nothing relevant. The same is true of the Android Market. A simple misplaced space, typo, or misspelling breaks your search. It's like time-warping back to web search circa 1996. I've never understood why Google doesn't apply all their knowledge about search to these other services.


Well, as a single sample, I tried the first thing that came to mind in Google and DuckDuckGo: "breast shimming", which refers to the process of adjusting the magnetic field of an MRI scanner to obtain images of the breast (relevant to a project I'm working on). Every single DuckDuckGo result is irrelevant. Most of Google's results are relevant. It's too bad, because Google is somewhat hit or miss for me. Someone needs to sic DuckDuckGo on PubMed.


Did you turn safe search off on DDG? When I gave these queries a go, I noticed that DDG stripped "breast" out of the query. The results seemed much more relevant once safe search was disabled.

Not that I'd want to go around with safe search disabled all the time...


Yes, this is a safe search problem. Presumably http://duckduckgo.com/?q=breast+shimming&kp=-1 is what you want.

Edit: FWIW, I fixed this: http://duckduckgo.com/?q=breast+shimming should now work as well.


Excellent! Safe search didn't occur to me.


Just last Friday, I was pleasantly surprised to find Duck Duck Go is the only search engine (of those I tried) that shows me a hex color when I search it, e.g. http://duckduckgo.com/?q=%23004499

And not long before, I searched for my oldest still-surviving site (on Angelfire). I discovered that while Google returned 20 garbage sites, they didn’t include mine; my site was the only one DDG returned.


The balance is between offering a cool feature and the danger that all those features accrete into cruft. If you wanted something similar for Google, you can create your own "Subscribed Link" to do this: http://www.google.com/coop/subscribedlinks/

I've created a subscribed links to mimic the UNIX cal command, for example: http://www.mattcutts.com/blog/add-calendar-shortcut-to-googl...


It's a case by case thing. In this case, the query space is pretty segmented to avoid cruft.


I left a few comments below, but I wanted to say thanks for mentioning some searches that Google didn't do well on. They're interesting searches, so I thought I'd break them down:

- [avaya 103r manual] It's a fair complaint to say some low-quality results are returned, but there's a reason. Do that search and Google says "About 1,510 results." That's a minuscule number of results--it usually means the web has very little content that matches that query in any way. That's why the lower-quality and foreign results show up: we're scraping the bottom of the barrel of the web to find any results at all, and there aren't many pages that have those words. If you go to avaya.com and search for [103r], they don't find any results either. It's hard for Google to return useful Avaya results when avaya.com doesn't have that word anywhere on the site. :)

- You were looking for specs on a Gateway mt6840 motherboard, specifically the socket. Instead of trying to solve that in one query, I'd go for doing it in two steps. I did the query [site:gateway.com mt6840] to see if there was any authoritative result, and the #1 result was http://support.gateway.com/s/Mobile/2007/Oasis/1014554R/1014... which has specs for that motherboard, but not the socket. But that spec sheet mentions that the motherboard uses an Intel® Core™ 2 Duo processor T2450. So then I searched for [T2450 socket] and got a Wikipedia page that says the T2450 uses Socket M. Just to be safe, I searched for [MT6840 "socket m"], which returns a page on computing.net (a forum with some ads) that mentions "945GM-based laptops support Socket M Core 2 Duo CPUs." The Gateway spec sheet says the MT6840 uses a 945GM chipset, so Socket M seems like a safe bet.

- The last search was about an OpenSolaris machine that wouldn't boot and that said "Error 16" instead. The complaint was that the results were stale/old. I did the search [opensolaris boot "error 16"]. Then over on the left-hand side, click "Show search tools" and click "Past year" to get results from the past year. The #1 result shows a long discussion about debugging this (which implies it's not a trivial issue). The #2 result is a discussion that points into opensolaris.org, which then points to this bug: http://bugs.opensolaris.org/view_bug.do?bug_id=6774616 . My point on this query is that you can use the left-hand search bar to restrict the results to a certain time range (e.g. only results from the last week, month, or year).

We do try to provide tools (e.g. estimated number of results, or the ability to restrict results by date) to help find the answers--or to find out if there aren't good answers on the web.

And in some cases like your #2 search, we could do a better job of synthesizing information. If page A says that a motherboard uses this chip/chipset and page B says that this chip/chipset uses this socket, then we could infer what socket the motherboard uses. Inferring information like this is really tricky though because the web can be really noisy.

I'm not saying Google shouldn't do better on these searches. You're clearly a power searcher, and I share your frustration when it's really hard to find what you want using Google. I'll pass this article around within the search quality group at Google and see if we could do better on searches like these--thanks for the feedback.


Wow.

So, this whole thing started out as a brief rant that's been in my head for a couple of months. Also, I wanted to get rid of those tabs that had been on the far left side of Firefox for ages. I'm not even sure why I posted it to HN; I just did it and then intended to head out the door afterward, except that all of a sudden my web server became unresponsive.

Anyway: the ability to get results from the past year is great, and I was totally unaware of it. I'll update my post shortly with a note about that. Having something like that sticking right out on the left-hand side would be even better. :-)

I think I often see people complaining about search quality (here, and in the other HN thread I linked to from about a year back, as examples), but if you ask them about the specifics of the search, they don't remember. I think that's a problem that needs to be solved somehow. Although it's at least partly laziness on the user's part, I think Google could view this as a huge amount of potentially beneficial information that they're missing out on. Trying to improve search results without knowing what problems people are having with them is challenging, at least.

It would be nifty if there were some kind of "these were not the search results I was looking for" quick form linked from certain types of search results.


People in Google's search quality team really do like to read posts like this as long as someone mentions the searches they were doing. Most feedback posts don't give us enough specifics to go back and figure out what went wrong and how to make things better.

We do have a "Give us feedback" link at the bottom of each search results page which does some neat AJAX-y things. But even then, we get a lot of "Hi, I'm trying to find my great-great-great-grandfather. He last name was Smith. Thanks!" sort of submissions. Also submissions like "My computer keeps humming. I opened it up and cleaned out the dust, but it still hums. Do I need to clean out the dust again?" And some submissions that say "I would like to rank #1 for all my keywords. How do I do that?"

I've wondered whether a Chrome extension or something similar would give higher quality data for spam or bug reports. It might be worth thinking about offering something like that.


> People in Google's search quality team really do like to read posts like this as long as someone mentions the searches they were doing.

Okay, here goes. Lately I have been doing a lot of C# searches. Google suggest strips out the hash (#) from the suggest items, so I keep getting faked out by likely C programming matches. Can you please stop stripping off the "#"? Otherwise, I'll need to attach a hot wire to my down arrow to break the habit. Thanks.


I'm pretty sure that several search quality people are reading this thread now, but I'll try to mention this explicitly to people. Thanks for the suggestion.

Normally Google doesn't allow searches for punctuation, because doing so would swell the index size a lot for very little return in terms of helpful searches. But since we're engineers, we do support some punctuation, e.g. underscores so you can search for sprintf_s, and terms like C# and F#. I'll ask whether the # can carry through into Suggest.


You're a Good Egg, Matt.


You should really automate this feedback process by mining the web for examples where people complain about Google quality AND they mention the search queries themselves :-)


Thanks, nhebb. ntoshev, now you're thinking like a Google employee. :)


Why not ask permission to transmit the last few searches and a snapshot of the results? I usually only hit "give feedback" when I'm not finding the stuff I want. You'd probably be able to discern a lot from that.


The "give feedback" form does automatically populate the feedback form with what you were searching for when you clicked to give feedback. We haven't tried transmitting several searches because people would probably complain, and it's not clear how much previous searches would help. But when we revamped the feedback form, that's one of the big wins we got from making it dynamic instead of a generic form.


I wonder if it would help to respond to the feedback you actually took into account?

I've tried more than a couple of times to give feedback about the search results. Seeing that nothing happened, and getting nothing but an automated response, I stopped.

Reporting flashing ads to the ad team worked better, they were removed later the same day.


One of the problems with replying to every piece of feedback is that a link on Google's search page--even a link that many people don't notice--gets a huge volume of feedback. And the resulting feedback can be pretty unhelpful ("I lost my cat. I think you should build a search for lost cats.") So we don't have the resources to reply to all of those. Personally, I would like if we offered a lower-volume feedback method with a higher hit rate of good feedback. Hacker News is acting as the prequalifier in this instance, but I find that people annoyed enough to blog often provide good feedback too.

Overall I agree with you, but even if we had that channel, many times the feedback would be: "Yup, that's a bad set of search results. We'll try to come up with a way to make it better, but it might be a while. Getting an algorithm to do better on this search will be hard."


I gave feedback 2-3 months back about the search for "BMTC" not returning the official website of BMTC (bmtcinfo.com), but the search results haven't changed since then.


I did a quick check, and that website has some issues. Here's a wget on bmtcinfo.com:

  $ wget http://bmtcinfo.com/
  --2010-09-14 09:01:43--  http://bmtcinfo.com/
  Resolving bmtcinfo.com... failed: Name or service not known.
  wget: unable to resolve host address `bmtcinfo.com'

The url http://bmtcinfo.com/ just doesn't work in a browser or with wget. And trying to fetch the "www" version of the website, it does a 302 redirect to a deeper url. 302 redirects are the ones that don't pass PageRank. The last time we tried to crawl the page, we got an error--probably from the non-www version. I'll see what we can do to find this site better, but returning errors instead of content to Googlebot really hurts that effort.
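Purely for illustration, the usual fix once DNS for the bare domain resolves is to 301 it to the www version - and likewise make the www site's own redirect a 301 rather than a 302, since 301s pass PageRank. A hypothetical Apache .htaccess sketch (the site's actual server setup is unknown):

  RewriteEngine On
  # Answer on the bare domain and send visitors (and PageRank)
  # over to the www version with a permanent 301 redirect.
  RewriteCond %{HTTP_HOST} ^bmtcinfo\.com$ [NC]
  RewriteRule ^(.*)$ http://www.bmtcinfo.com/$1 [R=301,L]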

By the way, we provide a free webmaster console that would let the owner of this domain self-diagnose these issues. The "Fetch as Googlebot" feature lets the owner send Googlebot to fetch a url on their site and see what Googlebot saw, including errors and redirects. If the owner of bmtcinfo.com were to use that tool, they'd probably notice the errors pretty quickly.


Thanks a lot for the reply. I have passed on this info to bmtcinfo.com through their feedback form.


If you get through to them, it looks like a couple DNS servers in their chain have been serving us NXDOMAINs. Point them to sb[12].mooo.com, hosted on afraid.org, and ask them to let Googlebot in. :)


Almost 2 weeks over. No response from them so far :(


Matt, it's great that you think enough of us here to give direct feedback. I wish I could give you some example searches too, but most of what has bugged me lately relates to stuff which might brush up against NDA terms, so I don't feel comfy giving specifics.

Overall I still love Google, I must use the search service 50 times a day so complaints about shortcomings are a bit like grumbling about the paintwork on my free new car :-) So I'll try and identify two recurring (and relatively new-feeling) headaches:

1. quotes. When I put it "in quotes" I don't want ti speell-checked, or cleaned, or made case-insensitive, or whatever else. I would rather get not results and experiment with other strings, than think I've got results that turn out not to be exactly what I searched for. It seems to me like punctuation often gets stripped even if it's inside quotes. For some kinds of searches involving bits of source code or so, this can be a drag.

2. Content farming. I know you are constantly struggling against people gaming the system and so forth. I don't blame Google when I get 50 results of generic junk referring to obscure search terms... the "find [niche product] resellers, hints, tips!" types that are totally generic. But what does piss me off is that a few months ago Google offered a button that would let me zap such results and label clutter as clutter. Now, on a deep search, I often spend several minutes trying to think what terms are common enough to content farms that -excluding them will prune the results sufficiently that the remaining search results are worth checking one by one.

You remember that Bing ad where someone says something random and their geeky partner starts hypnotically chanting associated but unhelpful phrases, implying an overbroad result set? Sorry, but they had a point.


Thanks for the feedback. Punctuation is tricky, because it only adds value for power searchers, but indexing it would swell the index size quite a lot. Otherwise, using double-quotes should do an exact search.

On content farms, we've definitely heard that feedback. One point up for debate is whether to respond with algorithms-only, or whether we should update our quality guidelines to call out low-quality content farms as a type of webspam. Both DuckDuckGo and Blekko seem delighted to remove sites that most people don't like from the search results. Here's a link for DDG for example: http://www.technologyreview.com/blog/mimssbits/25532/ . The question is: would you feel comfortable if Google removed results that a lot of people don't like from our index? And how do you balance the goal of reducing clutter and junk with the goal of being fair and comprehensive?


Yes!

And if people wanted fuller results, you could say "more lower-quality links found, click here to view all", similar to the way that searching in Gmail lets you search in Trash & Spam.


One potential problem of removing links that people report is that it's possible to game that system. You'll soon have spammers sending a massive number of requests to have their competitors removed from the index.


I know there's no simple answers for these things. On punctuation, consider the (admittedly obscure) situation of searching for some command line string: obscure_utility "-unknown" "-switches" - either the - sign excludes stuff you want, or gets stripped inside the quotes.

Farming-wise, I think you should probably keep all those results in your index, even the ones that are composed of nothing more than your top searches separated by random phrases! Even if it's not there now, in future it'll be possible to score page content on whether or not it has semantic value and draw inferences about sites or entire domains that are filled with junk. That will be interesting and useful from security, economic, and scientific points of view. In the meantime, people will find useful analyses to run against that 'bad' data in your results which would not be practical if you purged too aggressively.

What I had envisioned (which might be a tad ambitious, but bear in mind that you already have 5% of my local CPU for the asking with the desktop tool installed) is per-user search filtering. I may like sculpture but hate politics, so I would always search for 'statue -liberty', you are the other way around so your searches tend more towards 'liberty -statue'; I would very much like to be able to have complex filters on the client side, either locally or on the client-facing parts of your servers - and not just for spam sites.

DDG takes the approach of allowing regex, which is a neat thing for the people who know enough to want it, and it would be interesting if search patterns and/or selective exclusions (as described above) could be stored locally, as either weighting tables or some sort of white/blacklist - always include wikipedia, never include about.com. I'm already running chrome and using a Nexus one, so perhaps some hashing could take place on my computers rather than increasing the load on your servers.

The other reason besides spam is that lately I find myself wanting to do specialized searches, but I don't know how to specify the bounds of the search space. For example, I'm interested in law. But a lot of legal terms are in popular currency, so even if I search for "theft +legal" I may get tons of results for cheap car alarms or something. It would be fantastic if I could get a large set of results by specifying a large number of domain-specific terms - say, "tortfeasor privity precedent appellate" - and then hash and save that result set as a 'search space'. So then I could do more specific searches and know that my results would mostly come from websites devoted to the subject, with few that mention it only incidentally.

In actual fact, the legal resources searchable via Scholar and Books are fantastic. I just picked law as an example of where you might want to temporarily limit your search set because most people can appreciate the difference between writing specifically about legal topics vs things that just mention the subject in passing. If you want to learn how to write a good disclaimer, "legal disclaimer" is not a good start because every 3rd landing page on the net includes that phrase as boilerplate. If users could save and reuse result sets, we'd get more actionable results, you'd (maybe) get lower server loads but more importantly, every successful hit (where the user doesn't search again or try another result for several minutes) is an implicit vote for the relevance of the result to the set, and thus a valid input to a semantic classifier.


> I don't want ti speell-checked

/facepalm


>My point on this query is that you can use the left-hand search bar to restrict the results to a certain time range (e.g. only results from the last week, month, or year).

I am amazed at how many users still don't know about this feature. It is one of the best features in google search (that was added recently). Google should promote it better to its users.


I find myself resorting to 'Show Search Tools' -> 'Past x' more and more frequently these days. As the web ages, it seems the proportion of old and potentially outdated information in Google's index to new information is increasing.

While Google Instant is technically impressive, the feeling of speed is undone whenever I have to next manually restrict the results to the past year or 6 months or whatever (and sometimes experiment with multiple durations) in order to filter out discussions from 2008 and earlier. Same with using regular search as well.

Hence, might I suggest you guys consider something like inverting the default search, to only include pages created or modified within the past year (or whatever duration you calculate most optimal for returning relevant results)? Or perhaps changing the PageRank weighting of recency, at least for domains that change rapidly. I know this could cause some problems, and I'm not sure the web is quite at the age where this is necessary yet, but I think there's an inflection point coming soon where more searchers will expect recent results in their top 10 rather than years-old pages.

It might also be worth moving the duration filter up one tier of UI so it is both clearly visible on the search results page and takes one click instead of two to activate. The OP isn't the only one who's recently needed searches restricted by age and didn't realize just from the UI that this could be done. A few months ago I made a similar complaint on another forum and was informed of the same thing - use Search Tools -> Past X. But it's not obvious to do that in the current UI.


>> legitimate search results have become cluttered by old, stale web forums and mailing list postings.

QFT. I spent the beginning of the year teaching myself Rails, and I can't count the number of times I cursed Google's search results. Multiple different copies of the same mailing list post, just from different mirrors. And almost everything pointed to some old version of Ruby or Rails that wasn't relevant anymore.

And searching on Google Groups is worse.


I also switched to DDG at the prompting of an HN commenter, and agree entirely with the points made in the OP.

The only downside really is the formatting of the results - I can get 11-12 results from a Google search in a full-height window and about 8 in DDG. Sure, some of that is due to the very nice "disambiguator" box (or whatever they call it), and the results are indeed generally higher quality, but it is also due to large title fonts and designey white space, so I personally would like a bit of that back.

(No I haven't sent this to DDG feedback because for all I know it is a personal peeve and everybody else likes it this way)


"(No I haven't sent this to DDG feedback because for all I know it is a personal peeve and everybody else likes it this way)"

The way I see it, Gabriel's job is to decide whether it's a personal peeve or not. Your job is to ask for new features and "complain" about the old features, so that he can hear what real users think.


Yes, please complain (although be nice about it) :)

The design is continually in flux based on user feedback, but recently I've been stepping back to take a more holistic approach and really push it forward.

I will be working with a professional designer on this and the duck.co community (if anyone wants to participate). I hope this will become evident (and useful) within a few months from now. So by all means, give design feedback!


You can change the font size in the settings, if you have not already done so.


Yep, this is the first thing I did. It fixes part of the problem, but the generated page is still "hard to scan".

It's a shame, as it's not a hard problem to solve: just copy the search results layout Google had in the early days :)


Giving up on a blog that can't display the article in full without JavaScript. Progressive enhancement folks!

(Web apps get a pass, but if your site only needs to display text it should not require JS.)


Thanks for the heads-up. Actually it is designed to gracefully fall back without JavaScript, but there appears to be a display bug I wasn't aware of.

I'll fix it soon but right now I have a server down.

(BTW, do you have any idea how much extra time you noscript guys add to web development efforts? Ugg.)

edit: fixed.


  > (BTW, do you have any idea how much extra time you
  > noscript guys add to web development efforts? Ugg.)
I can't quite form words around the feeling, but this feels like a very backwards statement to me.


It's relatively easy these days to make sites that use JS in a smart way to dynamically load content. In the case of this personal site, if JS is on, then all the links do is request a "stripped-down" version of only the content of each post, and then swap that into the main content area. There are no http requests then for the images, background, css, and so on, which is nice for the server.

That's all well and good, but then I have to go back and make that also all work for noscript users. So, for example, I can't just have a list of items on the left, and then on page load, have JavaScript just "click" the top item in the list. Instead, I have to put in extra effort to get the backend to handle the default home page, and so on.

On this site, that wasn't really that hard. On more complex sites, it can be very hard.

...and I'm one of the "silly" web developers, that actually tries to work with the .5% or so of people that have JS turned off. With jQuery and everything else out there, most web developers don't seem to bother.


I think that's the "backwards" he was talking about. You're building your links to call javascript, then scrambling later to find ways to make them work without.

The accepted way of doing this is to build your links to point to URLs, then use script to override them with whatever dynamic wackiness you think is helpful. You get all sorts of bonuses by doing it that way (such as ctrl-click, save target as, etc.) without any of the downside you go on about.

And it's not any harder to implement.
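A minimal sketch of that pattern, assuming jQuery (which the thread already mentions); the "post-link" class, "#content" selector, and URLs are illustrative, not the OP's actual markup:

  // Links are plain hrefs, so without JS they still work, stay
  // crawlable, and support ctrl-click / save-target-as:
  //   <a class="post-link" href="/posts/some-post">Some post</a>
  $('a.post-link').click(function (e) {
    e.preventDefault(); // hijack the click only when JS is running
    // Fetch the same URL and swap just its content area into the page.
    $('#content').load(this.href + ' #content > *');
  });

If the script never runs, the links degrade gracefully to ordinary page loads.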


> You're building your links to call javascript, then scrambling later to find ways to make them work without.

Not exactly. I'm building JavaScript to call links, and then making sure that everything works sans JS.

I dunno why I'm catching flak for this. The display bug had nothing to do with JavaScript; it was primarily a CSS issue. The JS in that page only coincidentally fixed the CSS issue, which is exactly the kind of backwards testing I was talking about (and obviously didn't do the last time I updated it).

See, in order for the layout to work with a minimum number of images, there's a CSS trick in some overlapping layers. The JS in the page coincidentally extends one of those layers when it inserts the fairly unobtrusive text control links that allow you to scale the page text as large or as small as is comfortable to read on your display (something the noscript folks don't even realize isn't there). Without that element, the content layer wasn't being resized correctly.

I'm on your guys' side here. I know all the "accepted ways"; I have to explain them to my clients when I justify the costs they're charged, and the benefits. Sometimes they want to know how many people this will actually affect, and I have to tell them, "maybe a few", and then try to justify doing it anyway.

And, it is harder to implement. I can build a site that will look exactly right no matter what size display you're reading it on; fonts and images will all scale, and the site will look right at 800x600 or 1600x1200, without lots of scrolling or empty space. The catch is, to do that, I need JS to work, and spending time trying to figure out the least ugly way to display the site sans JS is not "not any harder".

Feel free to browse my page source, it's fairly easy to read, if unconventional in places.


The lesson is still the same though: Deliver a simple HTML page to the client that can be viewed correctly and in full without scripting or CSS turned on. Then, optionally, use scripting to enhance what's already there.

By doing it any other way, you run into the issues you're running into.


OK, I give up. :-) I could continue trying to sort this out with you, or I could just go back to work. Thanks for the advice!


How does that work for people with accessibility issues, e.g. people using screen readers?


Thanks for catering to us noscript types. I appreciate you taking the time to make it work.

edit: JavaScript the Evil Parts[1] is what made me start using NoScript and NotScripts. The web is safer and less annoying w/ NoScript and FlashBlock.

[1] http://blip.tv/file/3684946


Wow, that blog design is beautiful.

Anyways, I'd love to switch to DDG, but too often it tells me "No more results. Try Google.", and almost every time a subsequent search on Google provides me with many more (relevant) links for the same query.

If DDG could be configured to basically be a proxy and fetch all of its results from Google, that'd be cool.


Those are usually cases where Google is changing the query and I am using it as is.


The text gets chopped for Chrome/OSX.


Argh. I will fix this. Sorry.

edit: fixed.


I've been on duckduckgo for a few weeks. It's different in a very good way. The only time I reverted to google was to try instant, which I find a great improvement.

It's brilliant that there is still quality innovation going on in the search space. It's an old industry now, after all.


Just wanted to vouch as well for yegg's amazing response. He fixed some l-key issues I complained about on duckduckgo within just a couple weeks, and responded to me within a day.

Maybe yegg could drop a comment on how he manages his time to do all this?


No magic bullet. Here's the process:

--All feedback gets pushed to my personal email inbox--gmail :)

--I try to do 0-inbox, i.e. keep my inbox at zero messages. This is usually not attainable, but means it functions essentially as a to-do list.

--I respond to all feedback ASAP (unless anonymous).

--If it is something simple that I know how to fix I try to do it that day.

--If it is something a bit more complicated, I respond, and put it on the bug queue. I'm currently using http://speckleapp.com for that, made by an HNer (http://elliottkember.com/).

--Every few days I set aside a large block of time to go through that bug queue. If easily fixable, I fix it and respond back: "Fixed!" If not, I put it in another category, and explain why it is a complex issue and how and when it might get fixed (or not).


There is so much that can still be done as computational linguistics makes its way into how search queries are understood and how content is indexed and retrieved. Faster machines will soon make possible some things that weren't before (like instant search)!


Not going to comment on Google, but about Duck Duck Go, I really like the service! Especially the !Bang feature and the keyboard shortcuts. I'm using it as a secondary search engine (after Google that is ;))

If someone wants to add Duck Duck Go to search engines in Opera, here's how (my own screenshot): http://twitpic.com/24ex2e


>If someone wants to add Duck Duck Go to search engines in Opera, here's how (my own screenshot):

No such hackery required - it's Opera after all! There's a "Create Search" context menu these days, and it works on any text box.

In summary, just right click the search text box, click "Create Search". Even the DDG homepage says so!

I've been using it for quite some time now, but my gripe about DDG still remains; javascript should not be essential for a search engine.


Note, you probably don't want that &v=d part.


You're right: with "&v=d" added to the URL it displays "I'm feeling ducky" and "relevant results", whereas without it, it displays more 'standard' results.


This article inspired me to install the Duck Duck Go Search plugin ( https://addons.mozilla.org/en-US/firefox/addon/161971/ ); I'm going to give it a week.


I would suggest more than that. After 1 week, I hadn't really seen what value DDG added. My experience says give it 2-4 weeks.


I'm inclined to think that if you can't see why a product is useful to use after a week, it's probably not worth it. But then, my experiences with DDG have mostly consisted of comparing its results with google, and finding that the interface is a lot less usable.


This article prompted me to do likewise. For the past 24 hours and running, I have used Duck Duck Go exclusively instead of Google. It is like a breath of fresh air. I tried DDG a few months back and it was a bit on the rough side. It has made tremendous strides in the intervening time.

As for Google, they have gone past their prime. As the other comments note, 2008 was the peak. Yahoo is in the midst of what may be the longest running identity crisis ever to hit a company. And the name of the game for both Google and Bing is to keep you on their site with their advertisements for as long as they possibly can. Like some tentacled monster, they don't want to just serve you and let you go. The cheap tricks masquerading as a flashy UI ruin the user experience and make me not want to go back. Duck Duck Go actually keeps you coming by serving salient information (aka 'value') and then lets you get on with life. Kinda reminds me of the difference between the personality ethic and the character ethic from Covey's 7HHEP.

This was a great post. Thanks for putting it forward.


As far as I can tell Google is basically giving up on the search business. About a month ago I reported a couple of spam sites that were in the top ten results for a fairly popular search phrase. A month later they're still there. These are sites that are literally just a list of keywords, with no actual content of value, and Google does nothing even though they come up at the top of the results for a phrase that gets several hundred searches per day. That has to be in the top .1% of all searches, and while I realize they probably get tens of thousands of complaints per day, this one would have literally taken less than 20 seconds to recognize as a scam site and delete from the index. Since any reasonable algorithm should have moved this report to the front of the queue, based on both the popularity of the term and the good standing of my account, the only real conclusion is that Google has basically given up on trying completely.


Alex3917, do you remember the sites (or the query) you reported?


I'm curious if Alex3917's response to this was set to dead as a result of automated spam filters or if there's something else going on.


I can't recall ever being seriously annoyed at google search until instant search. There have been times when I've been disappointed with the results. I've shared many of the experiences he mentions in the article.

Instant search has not saved me any time and a couple times it has gotten in my way.


A comparison article on search that doesn't mention Bing? Yahoo and Duck Duck Go both use Bing, as far as I know. Of course there are cases where Bing is weaker than Google; take your 'streamline/streamlining' example. (Since Duck Duck Go also uses Bing, of course it has the same problem.) On the other hand, in other cases Bing is better than Google.

I searched for this a while ago:

'impact of basic research on GDP'

There were no relevant results on Google, and the first result was relevant on Bing. (and of course on Duck Duck Go and Yahoo, which use Bing.)

Edit: It seems that now google also gives back relevant result on my search query, so this example is not relevant anymore.


DDG uses Bing, but usually not straight up. In this case, the first result is the same but others in the top ten are either different or in a different order. Not that they're necessarily better though :)


I have seen a blank page on Google too, within the past couple of days. I never saw anything of the sort before. It was when I clicked to try a search on News, and I couldn't get that search to come up with anything different until I turned off 'Instant'.

Oh well, whatever. I'm sure they logged an exception and will fix that soon.

As for people gaming the results, it is a huge problem, but it's a problem that is bound to plague any market leader in search.


One of the issues with Duck Duck Go is that it doesn't filter results to the country of the user. As a UK user, when I put in the word flowers or "local programmers", I expect the results to be inclined towards UK pages, as that's what will help me. Google does this well! There is no point in showing me stuff in NJ or Canada.


My biggest complaint is that queries for something like OpenBSD return pages that don't have the word OpenBSD on them. I have resorted to adding to my queries the names of configuration files I know need to be changed (e.g. pf.conf).

I really do wish I could have a "NO" button for when it asks "Did you mean this?"; it might give them some useful feedback.


> ... Yahoo really isn't in the search business anymore, at least among the tech-savvy.

Was Yahoo! ever in the search business, among the tech-savvy?

This is a serious question. I've been webbing since '93, but I never made any serious use of the Yahoo! search box. I thought I was typical. Am I wrong?


One of the great aspects of Google is that it has remained very useful to the tech-savvy while still managing to appeal to the majority as well.


I don't yet use Duck Duck Go because of the tiny lag (a couple ms) it takes to display the results. The Google results are almost instant, and I'm really annoyed by DDG's small lag.


It's almost like you need an anti-spam filter program for Google to remove all the spam/farm sites from the results.

I wonder why google can't store a list of results I simply never want returned.
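
Something like that could even be done client-side as a little userscript. A rough sketch; the selectors here are guesses about the results page markup, not anything documented:

    // Userscript sketch: hide results whose links point at blocklisted
    // domains. Using 'h3 a' as the result-link selector is an assumption.
    var blocked = ['spamsite.example', 'keyword-farm.example'];
    var links = document.querySelectorAll('h3 a');

    for (var i = 0; i < links.length; i++) {
      var link = links[i];
      for (var j = 0; j < blocked.length; j++) {
        if (link.hostname.indexOf(blocked[j]) !== -1) {
          // Walk up to the enclosing list item and hide the whole result.
          var result = link;
          while (result && result.tagName !== 'LI') {
            result = result.parentNode;
          }
          if (result) result.style.display = 'none';
          break;
        }
      }
    }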


I love the privacy policy on DDG - so elegant and clear. If Google wants my business, I need to be able to understand their privacy policy. I am too suspicious.


Someone needs to tell him instant can be turned off.


Really curious why the website cuts off the right border of the text unless you allow JS from some external site...


The external site is my business server, which hosts the JS package that the personal site uses. I wasn't aware of the border issue until just a bit ago. Sorry 'bout that.


You fixed it real quick, and I very much liked the scale tool.


Google "search engine"


Oh. Wow. EPIC, EPIC FAIL.

Actually, they're doing it for other things too:

[web based email] [adverts for publishers] [traffic analytics]

I'm sure there's more.

Wow. That truly is an epic fail. Google have removed Google from the SERPs.

LOL! I love(d?) Google, but they are definitely screwing something up somewhere. These rubbishy results occur when I'm logged out too (well, I don't have Web History or personalized search enabled so I guess that's n/a)


Reminds me of "Close your Facebook account" day.

Good times.


Amen brother. I found instant to be uncomfortable. Sometimes DDG results were not as good as Google's, but pretty good. Gabriel is also incredibly responsive and tries to solve problems quickly; every problem I've asked about has either been solved or gotten a serious attempt at a fix. It's so "personal" that I just feel like it's a search engine that cares about me, vs. me being just another face in a crowd.

In any case, I still love Gmail; everything else has been crap.

Also, try replacing Google EVERYWHERE on an Android device: it's quite well embedded and hard to change in 100 places, and browsers still use Google as the search engine.


Google may be becoming more optimized for the larger number of non-tech/non-elite users, while becoming less optimized for the smaller number of elite users. If so, we've seen this phenomenon happen with other businesses in the past, nothing new. A side effect of becoming popular with the masses.



