Google Index Doubles

geekfiend writes "Today Google updated their website to indicate over eight billion pages crawled, cached and indexed. They've also added an entry to their blog explaining that they still have tons of work to do."
  • by xiando ( 770382 ) on Thursday November 11, 2004 @07:06AM (#10785970) Homepage Journal
    Personally I find that the lack of relevant pages is the biggest problem with search engines, not the lack of pages with information. It seems I always find what I'm looking for eventually; what I need improved is the time I spend looking through spam-bomb pages before I find a page with the correct information.

    These spam pages seem to be increasing; I mean those pages with just a bunch of keywords or the output of some search system.
    • by Kithraya ( 34530 ) on Thursday November 11, 2004 @07:51AM (#10786142)
      I'm especially irritated by the increasing number of highly-ranked pages that are nothing more than another search engine's results. If Google could find some way to identify and remove these from my result set, Google's usefulness to me would increase 10 times over.
      • Google has a problem with this because some of those searches are actually useful.

        For instance, when I search for something technical, I often run into search results from DBLP, arXiv, CiteSeer and the like -- although these are really search results within themselves, they're immensely useful to me.

        Since the two goals effectively conflict, Google would need to figure out a way to strike a balance.
        • However, results from places like Starware Search are not useful, and they elevate my blood pressure with all their attempts at spamming me.

          Just because I use Firefox and Adblock doesn't mean I now want to visit all possible spam sites in existence.

          I don't care if Starware and friends make their money from advertising or not. The point is that Google is ALREADY a search engine, and a pretty good one at that. What is the point of returning results from another search engine, especially if the other one does not
    • Re:What? (Score:2, Insightful)

      by poohsuntzu ( 753886 )
      It isn't about having a better search engine, so much as it is knowing how to use it. If you are looking for information on a recipe for oriental rice using asian spice, how would you search?

      Bad search example:

      oriental rice recipe asian spice


      Good search example:

      recipe+"oriental rice"+spice


      See the difference? Google tries its best to get rid of the spam pages, but it won't ever combat them all. Half of the work has to be done by you, understanding the best way to describe what you want to the search engine.
      • Re:What? (Score:3, Interesting)

        I see the difference...

        Search terms: oriental rice recipe asian spice
        Search Results: Results 1 - 10 of about 254,000 for oriental rice recipe asian spice . (0.40 seconds)
        Search Effectiveness: REASONABLE. Good list of relevant items matched.

        Search terms: recipe+"oriental rice"+spice
        Search Results: Your search - recipe+"oriental rice"+spice - did not match any documents.
        Search Effectiveness: UTTER SHITE

        The user wants SIMPLICITY. If google cannot give decent results for simple search criteria, then peopl
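
        For what it's worth, the phrase operator does work when the terms are separated normally, e.g. recipe "oriental rice" spice. A small Python sketch of building such a query URL (the q parameter is Google's; the rest is standard URL encoding, where a space becomes '+'):

        from urllib.parse import urlencode

        # Quote the phrase, keep the loose terms separate; urlencode
        # turns spaces into '+' and the double quotes into %22.
        query = 'recipe "oriental rice" spice'
        url = "http://www.google.com/search?" + urlencode({"q": query})
        print(url)
        # http://www.google.com/search?q=recipe+%22oriental+rice%22+spice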

    • This is why I've been begging the Google folks to implement the NEAR [pandia.com] operator!

      Here is an example MSN search: http://search.msn.com/results.aspx?FORM=SMCRT&q=fish%20NEAR%20ahi%20NEAR%20recipe [msn.com]

    • Personally I find that the lack of relevant pages is the biggest problem with search engines, not the lack of pages with information.

      Actually.... information IS relevant data. If it's not relevant to what you want, then it is just data...

    • by jez9999 ( 618189 ) on Thursday November 11, 2004 @08:30AM (#10786287) Homepage Journal
      One thing that would really help me sometimes would be if Google allowed you to do an 'exact match' search. No, I don't mean enclosing something in double quotes, that still ignores capitalization, whitespace, and most non-letter characters. I'd like to be able to search for pages that have the EXACT string '#windows EFNET', for example, or '/usr/bin/' or whatever. '/Usr/biN' wouldn't match, and nor would '#windows^^EFNET' (where ^ is equal to a space :-) ).

      I sent an e-mail to Google about this and the guy who replied didn't seem to think it was possible... anyone know if it is?
      • by PsychoSlashDot ( 207849 ) on Thursday November 11, 2004 @09:03AM (#10786413)
        What I've read on the Google help pages seems to indicate that they don't index punctuation or capitalization. When you search for something, your string is looked for within an existing index, and appropriate reference materials are shown. Including punctuation wouldn't result in any hits within their index, meaning no results.

        Now, obviously, it is theoretically possible to do just about anything. But in this case, with the architecture they have in place, doing what you're asking would require a full-text search through their multi-TB dataset, which I suspect is highly impractical.

        My point is that, as I understand it, Google has coded a number of shortcut tricks which allow reasonable search times, and full-text string-exact searching would prevent them from using those shortcuts, resulting in search times they don't consider reasonable.
        • by Erasmus Darwin ( 183180 ) on Thursday November 11, 2004 @09:39AM (#10786617)
          "But in this case, with the architecture they have in place, anyone ever doing what you're asking would require a full-text search through their multi-TB dataset, which I suspect is highly impractical."

          Actually, they could cut that down considerably. For example, say we were doing an exact search for '#windows EFNET' as in the original example. The first thing they could do is start with a traditional search on "#windows EFNET" [google.com]. At that point, they've cut their multi-TB dataset down to just a few megs or less of likely matches (in this case, only 10 pages matched). Then they could do a full-text check on each result, looking for an exact match and discarding all the rest.
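
          A toy sketch in Python of that two-pass idea (search_index and fetch_cached_page are hypothetical stand-ins for the index lookup and the cached copy of a page):

          def exact_search(query, search_index, fetch_cached_page):
              # Pass 1: the normal case/punctuation-insensitive index
              # lookup cuts billions of documents down to a few candidates.
              candidates = search_index(query)
              # Pass 2: a literal, case-sensitive substring check, which is
              # affordable because it only touches the candidate pages.
              return [doc for doc in candidates
                      if query in fetch_cached_page(doc)]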

      • How about a NEAR operator? Sure, AND, OR, and NOT are nice, but my results would be a lot more relevant if I could eliminate results where the search terms appeared a thousand words apart. (A sketch of how that could work follows below.)
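
        A NEAR operator is cheap to evaluate if the index stores word positions. A minimal sketch, assuming a positional index (term -> word offsets per document), which may not match Google's actual layout:

        def near(positions_a, positions_b, max_gap=10):
            # True if any occurrence of term A falls within
            # max_gap words of any occurrence of term B.
            return any(abs(a - b) <= max_gap
                       for a in positions_a
                       for b in positions_b)

        # Toy positional index for a single document.
        doc = {"fish": [3, 120], "ahi": [5], "recipe": [250]}
        print(near(doc["fish"], doc["ahi"]))    # True: offsets 3 and 5
        print(near(doc["ahi"], doc["recipe"]))  # False: 245 words apart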
    • The same goes for duplicate information. I don't want 200 mirrors of Wikipedia listed when I'm looking for a specific article, nor the same man page 200 times when I'm researching some aspect of a Unix command beyond its man page.
  • by tcdk ( 173945 ) on Thursday November 11, 2004 @07:06AM (#10785971) Homepage Journal
    8 billion pages and not a single link to my blog [google.com].

    Can't figure out if I should just shoot myself or maybe just take out a subscription to /.
  • by Jugalator ( 259273 ) on Thursday November 11, 2004 @07:07AM (#10785973) Journal
    I wonder if it'll take longer to index twice as many pages? Or if they, along with this change, improved their spider and/or added hardware. Otherwise I'm not sure this change is for the better, unless you like to search for really obscure topics.
    • Actually, no. Better search results mean fewer necessary searches, which in turn makes the entire process more time-efficient. And anyway, you can't just stop indexing webpages because it might take longer to index them. You just need to improve the hardware or the technology itself.
      • Better search results mean fewer necessary searches, which in turn makes the entire process more time-efficient.

        Search results? Are you talking about a person searching? I was mostly concerned about how quickly Google can update their complete index now that it doubled in size. I understand for my part it might get better, as long as the index is kept up-to-date.

        And anyway, you can't just stop indexing webpages because it might take longer to index them. You just need to improve the hardware or
  • by hanssprudel ( 323035 ) on Thursday November 11, 2004 @07:07AM (#10785977)

    What the article does not point out is why this is important. For just about forever, Google's index has been converging on 2**32 documents. Some people have speculated that Google simply could not update their 100,000+ servers with a new system that allowed more. Apparently they have now made the necessary architecture changes to identify documents by 64-bit (or larger) identifiers, and are back in the business of making their search comprehensive.

    Good timing to coincide with MSN's attempt to launch a new search engine, too!
    • by Jugalator ( 259273 ) on Thursday November 11, 2004 @07:16AM (#10786010) Journal
      Good timing to coincide with MSN's attempt to launch a new search engine, too!

      Yes, they'd better fight back, as they now have a serious competitor in MSN.
      It's giving very accurate results [msn.com].

      Doesn't anyone find it strange that Google gave the same top result there a while back?

      MSN must be using a very similar algorithm.

      Maybe a bit too similar...?

      *tinfoil hat on*
    • I don't quite believe that Google would've limited themselves that way (using 32 bit identifiers for documents) - that would've been incredibly short-sighted.
      • Probably not short-sighted, but rather a space and CPU efficiency issue. Space: if you have 64-bit doc ids, then even if you index 2^48 documents you're still wasting 16 bits per stemmed word per document. CPU: dealing with 64-bit integers on 32-bit hardware usually involves multiple loads, and decreases what can fit in the hardware data caches. (Rough numbers are sketched below.)
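
        Rough numbers as a Python sketch; the posting count is an illustrative assumption, not Google's real figure:

        postings = 10 ** 12  # assumed number of (term, document) pairs
        for id_bits in (32, 64):
            tib = postings * id_bits / 8 / 2 ** 40
            print(f"{id_bits}-bit doc ids: {tib:.1f} TiB for the ids alone")
        # 32-bit doc ids: 3.6 TiB for the ids alone
        # 64-bit doc ids: 7.3 TiB for the ids alone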
    • by Anonymous Coward on Thursday November 11, 2004 @07:34AM (#10786082)
      For just about forever, Google's index has been converging on 2**32 documents. Some people have speculated that Google simply could not update their 100,000+ servers with a new system that allowed more. Apparently they have now made the necessary architecture changes to identify documents by 64-bit (or larger) identifiers, and are back in the business of making their search comprehensive.
      As someone who routinely follows these things, I couldn't agree more with your statement. My company operates a number of sites, and over the past 6 months we've seen an obvious trend. Sites with, say, 5000+ pages, which used to be entirely indexed in Google, gradually had pages lost from its index. A search for site:somesite.com would return 5000 results 6 months ago. 3 or 4 months ago, the same search gave maybe 1000 results. This month, maybe 500 or 600. We were definitely of the opinion that Google's index was "maxed out" and was dropping large portions of indexed sites in favor of attempting to index new sites.

      Now after seeing this story, I did a search and found literally all 5000+ pages are indexed once again. This is a huge step forward for webmasters everywhere. If your site had been slowly edged out of Google's index it's most likely back in its entirety now.

      Thanks G.
    • Google won't be within reach of the pinnacle until they index .txt files, directory listings, and anonymous ftp sites.
  • by bvdbos ( 724595 ) on Thursday November 11, 2004 @07:07AM (#10785980)
    Unfortunately they haven't updated [slashdot.org] the image search [google.com] yet.
  • by Sanity ( 1431 ) on Thursday November 11, 2004 @07:08AM (#10785986) Homepage Journal
    Does every minor Google or Apple related thing deserve a slashdot story? Can slashdot create a "Fanboy" section for insignificant stories advocating Google (with their software patent) and Apple (with their iTunes DRM)? That way I could filter them out more easily.
  • by seanyboy ( 587819 ) on Thursday November 11, 2004 @07:10AM (#10785991)
    Google needs to stop obsessing about the number of indexed pages, and start concentrating on the quality. Since pagerank was switched off, 2 out of 5 searches now seem to be jammed with pages full of nothing but random words and adverts. It's even more galling when the adverts are Google Ads. Much as I love Google, they're becoming increasingly less effective as a tool.
    • I agree, search engines are so 1990. I rely exclusively on word of mouth to find websites. If Firefox would add a button to the toolbar that said 'Cool Sites', maybe with an icon of a pair of glasses, and have the button link to a webpage with links to the latest cool sites on the net, that would certainly be the end of Google and their 8 billion pages. Pah!
      • That was actually how Yahoo! got started. A few college drop-outs started making a webpage linking to their favorite sites... and their friends started going to it, and their friends' friends, and their friends' friends' friends... and then somebody offered to pay them to advertise on the site. And we ended up with this [yahoo.com].
    • To paraphrase Churchill, Google is the worst system devised by the wit of man, except for all the others. Where else would you go? Yahoo? Hey, how about AltaVista?

      The problems faced by Google in their battle against the scumbags who would game the system are faced by every other search engine. Google, IMHO, handles them better.

    • by dabadab ( 126782 ) on Thursday November 11, 2004 @07:49AM (#10786134)
      "[i]Since pagerank was switched off[/i]"

      Since when is Pagerank switched off?
  • No, wait, they have been our internet search overlords since, like, 1999?

    Mhm to anonymous coward or not to anonymous coward?
    Will moderators smack my karma below zero?
  • over eight billion pages crawled

    You don't just go from 4 billion to 8 billion overnight.

    They are probably just crawling the same 4 billion twice.
  • by manmanic ( 662850 ) on Thursday November 11, 2004 @07:21AM (#10786029)
    Does this mean that I've been missing a huge amount of important information until now? I'd just assumed that Google covered the entire relevant web, but now it seems to cover the same amount over again. My Google alerts [googlealert.com] also seem to have started producing a lot more results, which suggests that a lot of these new pages are rated quite highly. Who knows how much more quality content on the web we're just not seeing?
    • by jlar ( 584848 ) on Thursday November 11, 2004 @07:29AM (#10786061)
      "Does this mean that I've been missing a huge amount of important information until now?"

      Maybe the steep increase is due to all the new file formats they are indexing now. That might be useful for some people (although I sometimes find it kind of annoying that a search returns MS-Word documents).
      • Maybe the steep increase is due to all the new file formats they are indexing now.

        The steep increase is probably due to an architecture change. Google has, for a long time, been indexing around 4 billion pages. That implies that they have been giving each page a 32-bit unique identifier, and had exhausted that id space. It would be a lot of work for them to seamlessly upgrade all their software to support a larger id, and it has taken them a long time to do so. Now that they have the large jump in page

    • When a search engine announces it has increased its index of pages, it advertises a deficiency....

      "Oh, if you just added several billion pages, were you giving me crap before? How many more billions of pages are you not indexing right now?"

      Google's announcement merely gives its users reason to question the size and comprehensiveness of Google's index.

  • by Anonymous Coward
    Until today you could save your Google settings [google-watch.org] without losing your privacy [google-watch.org]. You can still save those settings, but Google refuses to use them when you block their cookie. In my case I get 10 search results although I'd like to receive 100. It seems they are making many dollars on a user's cookie, and now that they are a public company, my privacy is less important than "stockholders' interests".
  • Google domination. (Score:2, Informative)

    by Anonymous Coward

    Local tabloid Aftonbladet is running a poll on search engine use:

    Google (81.4 %)
    Yahoo (2.2 %)
    MSN (3.8 %)
    Other (11.4 %)
    Don't know (1.2 %)

    61730 votes so far.

    I'm a little surprised: either the masses who use the "default" (MSN?) aren't bothering to answer, or Google is simply very, very dominant and those "default-using masses" do not exist [in this country].

    • the masses who use the "default" (MSN?) aren't bothering to answer

      I think it is more that many users of IE just do not twig that their failed page access resulted in an automatic query to MSN.

      In reality, most users make occasional deliberate queries to Google and more frequent accidental queries to MSN.

  • ... my weight would probably double, too.
  • Microsoft (Score:4, Interesting)

    by Cookeisparanoid ( 178680 ) on Thursday November 11, 2004 @07:38AM (#10786099) Homepage
    A lot of people have been asking what the point of the article is and why it matters. Well, possibly because Microsoft announced the launch of their search engine http://news.bbc.co.uk/1/hi/technology/4000015.stm and claimed more pages indexed than Google (5 billion), so Google have responded by effectively doubling their indexed pages.
  • by DrYak ( 748999 )
    Of which 80% is V1AGR@ advertising,
    and 19% is pr0n.
    There's debate over whether the remaining 1% contains pirated music and movies or plans for DIY nukes.
  • by ayjay29 ( 144994 ) on Thursday November 11, 2004 @07:46AM (#10786124)
    From BBC News here [bbc.co.uk].

    In a statement Microsoft said its search engine returned results from five billion web pages - more than any other search engine.

    But this quickly won a response from Google which announced that its index has now grown to more than 8 billion pages.

    Prior to the Microsoft announcement, Google was only indexing 4,285,199,774 web pages.

    Steve Ballmer is soon to announce that his daddy is one hundrad years old, and kan kick your daddy's ass...

  • Grrrrr (Score:4, Funny)

    by squoozer ( 730327 ) on Thursday November 11, 2004 @07:47AM (#10786128)

    Now it's going to be even harder to get my name in the top spot. Why was I cursed with the surname Smith!

  • by hackrobat ( 467625 ) <manish.jethani@gma i l .com> on Thursday November 11, 2004 @07:49AM (#10786135) Homepage

    Looks like they've added a gazillion LiveJournal [livejournal.com] pages to their index. I used to have a Google search box on my LJ that didn't throw up relevant results until last week or so. Now it works perfectly, just like built-in search (like what you see in MT and WordPress).

  • by 't is DjiM ( 801555 ) on Thursday November 11, 2004 @07:50AM (#10786140)
    From 4 to 8 billion pages... I guess they just indexed the google cache...
  • by Richard W.M. Jones ( 591125 ) <{rich} {at} {annexia.org}> on Thursday November 11, 2004 @07:51AM (#10786143) Homepage
    On the same day that this story hits the BBC [bbc.co.uk]. In that story Microsoft claim that they have 5 billion pages indexed, more than the 4.2 billion pages indexed (at that point) by Google. The BBC have just updated the story with the 8bn figure.

    I smell competition!

    Rich.

  • Does this mean...? (Score:4, Insightful)

    by jimicus ( 737525 ) on Thursday November 11, 2004 @07:51AM (#10786147)
    Does this mean twice as many pages with "Search for 'printer problem linux' on Kelkoo"?
  • meta-no-archive (Score:3, Interesting)

    by Anonymous Coward on Thursday November 11, 2004 @07:54AM (#10786154)
    Apparently my sites will never get a good ranking on Google because I don't want the search engine to cache them, so I'm using meta no-archive tags (sketched at the bottom of this comment). That's the only explanation I can come up with for why the sites rank so poorly on Google when they come up in the top 10-20 hits on Yahoo and other search engines. The keywords for the searches are valid and the sites are relevant to those searches, yet the sites don't show in the top 100-300 on Google.

    I've avoided all the usual spam-type tricks (auto-refreshing, hidden text, cloaking, etc.) and the sites are legitimate and on the up and up, and yet the only pages that Google is spidering are the few that appear to lack the no-archive tags and possibly the revisit/expire tags.

    Is Google's policy "let us cache your site, or get penalized"? Has anyone else run into a similar problem, or can anyone shed some light on this? The only other thing I can think of is the robots.txt file, which keeps googlebot (and other spiders, via a *) out of the images directories. The spiders, including googlebot, aren't restricted from entering any other directories; they are given free rein.

    Anyone else with problems with no-cache, no archive, tight revisit/expire times, or similar non-spam tags that result in penalties in google ranking?

    I've been using Google exclusively for a few years now. But the poor ranking of sites on my server got me wondering about other sites that may be relevant to my own searches but excluded or penalized by Google. So I've started using Yahoo search again, as much as I hate Yahoo (what they do with advertising in Yahoo Groups and Yahoo Mail is a shame). It appears that Yahoo is including better results, because other sites that actually are relevant show up with higher rankings. So I've learned that Google isn't as perfect as I thought it was, which was disappointing in itself. It was easy using one search site; now I have to use two to make sure I'm getting good results. Anyone know if there is a plugin for Firefox with both Google and Yahoo search boxes on the toolbar?
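
    For anyone unfamiliar with the tags in question, a small Python sketch that emits them (the images directory name is an assumption):

    # The noarchive meta tag asks engines not to keep a cached copy of
    # the page; the robots.txt rule keeps spiders out of the images dir.
    NOARCHIVE_META = '<meta name="robots" content="noarchive">'
    ROBOTS_TXT = "User-agent: *\nDisallow: /images/\n"
    print(NOARCHIVE_META)
    print(ROBOTS_TXT)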
  • I regularly watch where my nickname, full name, parents' names, etc. come up in Google. I've noticed in the past couple of months that my hits have DRASTICALLY reduced; they just disappeared from the database. But over the past 2 days, I've gotten notifications (thanks, Google Alerts) about new pages being indexed, and voila! They come up in a search again.

  • by Mostly a lurker ( 634878 ) on Thursday November 11, 2004 @08:29AM (#10786283)
    I received this response:
    This site is temporarily unavailable, please check back soon.

    Didn't get the results you expected? Help us improve.

    It is not clear to me how I can help them improve. Suggest they switch their servers to Linux?

  • by jmcmunn ( 307798 ) on Thursday November 11, 2004 @09:23AM (#10786531)
    Because every blogger in the universe has added at least 3 pages since the last index. I fail to see how it is significant to me that there are now 8 billion mostly worthless pages out there. The number of actually useful sites has not gone up considerably.
