New Web Application Attack - Insecure Indexing

An anonymous reader writes "Take a look at 'The Insecure Indexing Vulnerability - Attacks Against Local Search Engines' by Amit Klein. This is a new article about 'insecure indexing.' It's a good read -- it shows you how to find 'invisible files' on a web server and, moreover, how to see the contents of files you'd usually get a 401/403 response for, using a locally installed search engine that indexes files (not URLs)."
    • Sure, and Konqueror never had it :)


      That's all nice and good; personally, I think files that were never meant to be indexed make for the best reading by far!


    • Speaking of Firefox (Score:5, Interesting)

      by ad0gg ( 594412 ) on Monday February 28, 2005 @08:44PM (#11808572)
      Another exploit [www.mikx.de] came out this weekend. The funny thing is that Microsoft AntiSpyware beta 1 detects the execution of the payload file and shows a prompt asking if you want to continue or stop the execution.
      • Another exploit came out this weekend.

        I don't think it is so new - it is fixed by 1.0.1. From the description [www.mikx.de]:

        Status: The exploit is based on multiple vulnerabilities: bugzilla.mozilla.org #280664 (fireflashing), #280056 (firetabbing), and #281807 (firescrolling).
        Upgrade to Firefox 1.0.1 or disable JavaScript.
      • The funny thing is that Microsoft AntiSpyware beta 1 detects the execution of the payload file and shows a prompt asking if you want to continue or stop the execution.
        Now what's funny about that? Should it always be the other way round? Yeah, I know this is against the "majority mindset", as someone just said. I don't care.
  • by Anonymous Coward on Monday February 28, 2005 @07:55PM (#11808178)
    the department-of-the-bleedingly-obvious...
    • Bleedingly obvious, and written in a sufficiently pompous style that you feel obliged to read the whole thing just to verify that there really is nothing there that hasn't been common knowledge for the better part of the last decade.

      Of course in those days people actually built their sites using static HTML...
  • and don't forget... (Score:5, Interesting)

    by DrKyle ( 818035 ) on Monday February 28, 2005 @07:58PM (#11808209)
    to see if you can get the site's robots.txt, as the files/directories listed in it are sometimes full of goodies.
    • Not when the file has something like this:
      User-agent: *
      Disallow: /
      • "sometimes"
      • Of course, that's assuming that you don't want your site indexed by any search engine (in which case, why is it exposed to the outside Internet to begin with?)

        Incidentally, it also breaks properly-designed retrieval mechanisms (like, say, RSS readers -- yes, dailykos.com, I'm talking about you!)
        • Simple: to save bandwidth.

          One of my friends does genealogy research. He decided to put all his data online. Even before all the files were uploaded, his site took a massive load. He thought his site was really popular, but after looking at his logs I could tell him that most of the traffic came from search-engine robots, and his ISP might not be so happy about all that traffic.

        • Incidentally, it also breaks properly-designed retrieval mechanisms

          If they break, how can they be properly designed?

          • They are properly designed if they obey robots.txt files at all times... which prevents them from downloading certain files that the web site's author probably meant to allow them to download. Like an RSS feed, again.
            • RSS readers, being essentially special-use web browsers, are not obligated to honor robots.txt. They certainly aren't robots/web crawlers. If your RSS reader is checking robots.txt restrictions before retrieving RSS feeds, it's misguided at best, if not broken.
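              For reference, honoring robots.txt takes only a few lines; here's a minimal Python sketch of the check a well-behaved crawler would make before fetching (the URLs are illustrative, not from the article):

                  from urllib import robotparser

                  rp = robotparser.RobotFileParser()
                  rp.set_url("http://example.com/robots.txt")
                  rp.read()  # fetch and parse robots.txt

                  # A crawler would skip the fetch when this returns False;
                  # the point above is that an RSS reader arguably shouldn't bother.
                  print(rp.can_fetch("ExampleCrawler", "http://example.com/feed.rss"))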
    • For this reason, I tended not to create a robots.txt file. At minimum, sensitive sites wouldn't go in it.

      If anything, I'd block googlebot/others in .htaccess files, assuming it wasn't a passworded site to begin with.
  • indexing google (Score:5, Interesting)

    by page275 ( 862917 ) on Monday February 28, 2005 @07:59PM (#11808223)
    Even though this is about internal indexing, it reminded me of old-fashioned Google indexing: search Google with some sensitive terms such as: 'index of /' *.pdf *.ps
  • by Capt'n Hector ( 650760 ) on Monday February 28, 2005 @08:00PM (#11808226)
    Never give web-executable scripts more permissions than absolutely required. If the search engine has permission to read sensitive documents, and web users have access to this engine... well duh. It's just common sense.
    • by WiFiBro ( 784621 ) on Monday February 28, 2005 @08:11PM (#11808316)
      The first paragraphs of this document describe how to get to files which are not public. So you also need to take the sensitive files out of the public directory, which is easy but hardly ever done. (You can easily write a script that serves the files in non-public directories to those entitled to them.)
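      A minimal sketch of the kind of gatekeeper script the comment above suggests, in Python. The directory path and the is_authorized() check are placeholders for whatever the site actually uses:

          import os

          PRIVATE_DIR = "/srv/private-docs"  # assumed location, outside the web server's document root

          def is_authorized(user, filename):
              # Placeholder: consult whatever authentication/authorization the site already has.
              return user == "alice"

          def serve_protected(user, filename):
              # Resolve the path and refuse anything that escapes PRIVATE_DIR.
              full = os.path.realpath(os.path.join(PRIVATE_DIR, filename))
              if not full.startswith(PRIVATE_DIR + os.sep):
                  raise PermissionError("path traversal attempt")
              if not is_authorized(user, filename):
                  raise PermissionError("not entitled to this file")
              with open(full, "rb") as f:
                  return f.read()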
    • Expecting common sense is rather presumptuous of you - don't you think?
    • by Anonymous Coward
      Give me a freaking break. This is the same guy who found the "HTTP RESPONSE SPLITTING" vulnerability. Last year's catchphrase among the wankers at Ernest and Young and Accidenture. The same type of people who consider an HTTP TRACE XSS a vulnerability. I guess it's been a slow freaking year for security research.

      Amit Klein at least used to work for Watchfire, formerly known as Scrotum (Sanctum), the same company that tried to patent the application security assessment process. I guess it's been a rea
  • by caryw ( 131578 ) <.carywiedemann. .at. .gmail.com.> on Monday February 28, 2005 @08:00PM (#11808228) Homepage
    Basically the article says that some site-installed search engines that simply index all the files in /var/www or whatever are insecure, because they will index things that httpd would return a 401 or 403 for. Makes sense. A smarter way to do such a thing would be to "crawl" the whole site on localhost:80 instead of just indexing files; that way .htaccess and such would be respected throughout.
    Does anyone know if the Google Search Appliance is affected by this?
    - Cary
    --Fairfax Underground [fairfaxunderground.com]: Where Fairfax County comes out to play
    • by XorNand ( 517466 ) on Monday February 28, 2005 @08:09PM (#11808292)
      A smarter way to do such a thing would be to "crawl" the whole site on localhost:80 instead of just indexing files; that way .htaccess and such would be respected throughout.
      Yes, that would be safer. But one of the powers of local search engines is the ability to index content that isn't linked elsewhere on the site, e.g. old press releases, discontinued product documentation, etc. Sometimes you don't want to clutter up your site with irrelevant content, but you want to allow people who know what they're looking for to find it. This article isn't really groundbreaking. It's just another example of how technology can be a double-edged sword.
    • by tetromino ( 807969 ) on Monday February 28, 2005 @08:10PM (#11808311)
      Does anyone know if the Google Search Appliance is affected by this?

      No. First of all, the Google Search Appliance crawls over HTTP, and therefore obeys any .htaccess rules your server uses. Second, you can set it up so that users need to authenticate themselves. Third, there are many filters you can set up to prevent it from indexing sensitive content in the first place (except that since any sensitive content the Google appliance indexes must already be accessible via an external HTTP connection, one hopes it's not too sensitive).
    • by Grax ( 529699 ) on Monday February 28, 2005 @08:44PM (#11808579) Homepage
      On a site with mixed security levels (i.e. some anonymous and some permission-based access) the "proper" thing to do is to check security on the results the search engine is returning.

      That way an anonymous user would see only results for documents that have read permissions for anonymous while a logged-in user would see results for anything they had permissions to.

      Of course this idea works fine for a special purpose database-backed web site but takes a bit more work on just your average web site.

      Crawling the site via localhost:80 is the most secure method for a normal site. This would index only documents available to the anonymous user already and would ignore any unlinked documents as well.
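      A rough sketch of that crawl-through-the-front-door approach in Python (the start URL and error handling are simplified assumptions): only pages reachable by following links over HTTP end up in the index, so anything the web server refuses to serve, or never links to, stays out.

          from html.parser import HTMLParser
          from urllib.request import urlopen
          from urllib.parse import urljoin
          from urllib.error import HTTPError

          class LinkParser(HTMLParser):
              def __init__(self):
                  super().__init__()
                  self.links = []

              def handle_starttag(self, tag, attrs):
                  if tag == "a":
                      for name, value in attrs:
                          if name == "href" and value:
                              self.links.append(value)

          def crawl(start="http://localhost/"):
              index, queue, seen = {}, [start], set()
              while queue:
                  url = queue.pop()
                  if url in seen or not url.startswith(start):
                      continue  # stay on the local site
                  seen.add(url)
                  try:
                      page = urlopen(url).read().decode("utf-8", "replace")
                  except HTTPError:
                      continue  # 401/403/404: the indexer never sees the content
                  index[url] = page  # a real indexer would tokenize here
                  parser = LinkParser()
                  parser.feed(page)
                  queue.extend(urljoin(url, link) for link in parser.links)
              return index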
    • A smarter way to do such a thing would be to "crawl" the whole site on localhost:80 instead of just indexing files; that way .htaccess and such would be respected throughout.

      That would not help much. Most sites serve different content depending on the IP address accessing them, i.e. internal IPs get content that external IPs cannot access. Crawling on localhost:80 would exclude the non-linked files, but would still give the search engine access to a lot of content that should not be indexed.

      The o

  • News at 11! (Score:3, Insightful)

    by tetromino ( 807969 ) on Monday February 28, 2005 @08:01PM (#11808233)
    Search engines let you find stuff! This is precisely why Google, Yahoo, and all the rest obey robots.txt. Personally, I would be amazed if local search engines didn't have their own equivalent of robots.txt that limited the directories they are allowed to crawl.
    • Re:News at 11! (Score:1, Insightful)

      by WiFiBro ( 784621 )
      With a scripting language capable of listing directory contents and opening files (PHP, ASP, Python, etc.), anyone can write such a search engine. No degree required.
    • Read the article. This does not apply to "external" search engines such as Google and Yahoo - only to internal search engines that access the files directly through the filesystem, not through the web server, since these "internal" search engines are capable of indexing files that would return a 403/401 via HTTP.
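      For contrast, the kind of "internal" indexer the article warns about looks roughly like this - a deliberately naive Python sketch (the document root path is assumed). It walks the filesystem directly, so it happily indexes files the web server itself would never hand out:

          import os

          DOCROOT = "/var/www"  # assumed document root

          def build_index(root=DOCROOT):
              index = {}
              for dirpath, dirnames, filenames in os.walk(root):
                  for name in filenames:
                      path = os.path.join(dirpath, name)
                      try:
                          with open(path, errors="replace") as f:
                              # Unlinked files and content protected by .htaccess get indexed too.
                              index[path] = f.read()
                      except OSError:
                          pass
              return index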
  • by h4ter ( 717700 )
    The attacker first loops through all possible words in English...

    I get the idea this might take a while.
    • Wait a minute. All possible? Couldn't be satisfied with just actual words? This is going to take a lot longer than I first thought.

      (Sorry for the reply to self. It's like my own little dupe.)
      • Wait a minute. All possible? Couldn't be satisfied with just actual words? This is going to take a lot longer than I first thought.

        Well, just record the guessed words, you might stumble on Hamlet. :-P
  • by Eberlin ( 570874 ) on Monday February 28, 2005 @08:08PM (#11808288) Homepage
    The instances mentioned all seem to revolve around the idea of indexing files. Could the same be used for database-driven sites? You know, like the old search for "or 1=1" trick?

    Then again, it's about being organized, isn't it? A check of what should and shouldn't be allowed to go public, some sort of flag so that even if a document shows up in the results, it doesn't make its way into the HTML being sent back. (I figure that's more DB-centric, though.)

    Last madman rant -- Don't put anything up there that shouldn't be for public consumption to begin with!!! If you're the kind to leave private XLS, DOC, MDB, and other sensitive data on a PUBLIC server thinking it's safe just because nobody can "see" it, to put it delicately, you're an idiot.
    • Thank you. That's the real security risk - not the indexing agent - but rather: why is there internal documentation marked 'private' or 'confidential' within the webroot of an externally accessible web server?
    • The old break-out-of-quotes trick is IMHO a different kind of vulnerability, in that it's really a programming bug. There is no reason, other than a programmer being too stupid/ignorant to escape quotes (or, for most burger-flippers-turned-programmers, to even know that it's possible to escape quotes or to use prepared statements), for that to happen. For that matter, also too ignorant to know that the "LIKE" operator isn't really a full-text search engine.

      The search index problem is similar, but not quite.
  • by design? Surely something with permission to index internal files (even those configured to return 403s, etc.) is inherently designed to make them available to view.

    Either that, or it's a user error (configuration).
  • Is it possible, given the time and perseverance, to exploit a vulnerability in a search engine's parsing of a web page that, say, you maliciously published somewhere? Obviously one would expect Google and the like to have good security (well, apart from the Gmail exploit and... well, let's not go there), so I was curious: has it ever been done? (ponders)
  • Summary: If you are going to use magic to index your web site, be smart about it. Don't just blindly use a tool that "does the job".

    Nothing new here.
  • obvious? (Score:5, Insightful)

    by jnf ( 846084 ) on Monday February 28, 2005 @08:15PM (#11808362)
    I read the article, and it seems to be like a good chunk of today's security papers: "here's a long, drawn-out explanation of the obvious". I suppose it wasn't as long as it could have been, but really... using a search engine to find a list of files on a website? I suppose someone has to document it.

    I mean, I understand it's a little more complex as described in the article - but I would hardly call this a "new web application attack"; at best it's one of those humorous advisories where the author overstates things and creates much ado about nothing - or at least that's my take.

    -1 not profound
  • Bastards, always hiding their stash. This'll show 'em.

  • P2P (Score:5, Interesting)

    by Turn-X Alphonse ( 789240 ) on Monday February 28, 2005 @08:20PM (#11808418) Journal
    Go to any P2P network and type @hotmail.com, @gmail.com or @yahoo.com and see what documents turn up. I'm willing to put money on them all being e-mails saved on idiots' PCs, which will contain everything from stuff to sell to spammers (if you're so inclined), to sexual stuff and passwords/credit card info.

    Nothing really new here..
    • by mibus ( 26291 )
      That should give you plenty of cookies with authentication info...

      Search for the right extension and you're likely to find MSN Messenger logs from people who have shared out all of "My Documents" without thinking!
    • Outlook *.pst files are another interesting one to search for. And most cameras prefix all photographs with something, e.g. DSCXXXXXX.jpg, so you can search for them.

      One interesting thing to note is that the site spammers are onto these things already. The photo one now pulls in lots of sample advert images for adult sites, as did a couple of the older searches that are linked on the site the article refers to.

  • "Reconstructing" files by searching every word in the english language in different orders? I want the last 5 minutes of my life back...
    • Did you RTFA?

      Search for "foo". You get: "... first version of Foo, the world leading ..."
      Then search for just the above. You get: "... to release the first version of Foo, the world leading anti-gravity engine ..."
      Repeat: "... We are happy to release the first version of Foo, the world leading anti-gravity engine that works on ..."
      Doesn't sound too hard?

      Of course the query length is limited, but that can be solved with a "moving frame": say, after submitting all of the above, the engine complains your query is too long, so instead you search just the tail.
      Search: "anti-gravity engine tha
  • RTFM (Score:5, Informative)

    by Tuross ( 18533 ) <darthmdh&gmail,com> on Monday February 28, 2005 @08:27PM (#11808468) Homepage
    My company specialises in search engine technology (for almost a decade now). I've worked quite in-depth with all the big boys (Verity, Autonomy, FAST, ...) and many of the smaller players too (Ultraseek, ISYS, Blue Angel, ...)

    I can't recall the last time this kind of attack wasn't mentioned in the documentation for the product, along with instructions on how to disable it. If you choose to ignore the product documentation, you get what you deserve.

    It's quite simple, folks. Don't leave the search engine wide open. ACL the query connections. Sanitize queries like you (should?) do for other CGI applications. Authenticate queries and results. If you can't be bothered, hire someone who can.
    • The problem is that these are perfectly legal search engine queries. No matter how you "sanitize" the queries, it won't help, because they contain valid requests. The vulnerability lies on the side of the indexing program, not the query/search/display one. The indexer indexes things it shouldn't: files normally inaccessible through httpd become accessible in the search database.

      A method I see for that would be to run the indexing through httpd, making even local indexing go the same way remote indexing does.
      • Maybe, just maybe, someone wants to see a file that's inaccessible by anyone else (or perhaps limited to a select few). Like, personal info, classified information (be it military classification or simply commercial-in-confidence), employment records, blah blah blah blah. Most search engines handle this, as I mentioned before, through various means that are more or less secure.

        You are inferring that search engines should only index public information, essentially crippling their usefulness. Glad you don
  • All these "attacks" assume the indexing program will index and return results for files you dont have access to.

    Im pretty sure the indexing server on Windows won't return 'search results' for files you dont have permissions to list. As with any other sensible indexing schemes, except perhaps the newer silly 'desktop search' tools. Seems pretty obvious to me.
    • Re:Assumptions (Score:3, Informative)

      by SharpFang ( 651121 )
      I'm pretty sure the indexing server on Windows won't return search results for files you don't have permission to list.
      The problem and vulnerability lie in the definition of "you".
      The indexing program runs with the privileges of a local user with direct access to the hard drive: listing directory contents, reading user-readable files. "You" are that user - like someone behind the console, maybe without access to sensitive system files, but with access to mostly everything in the htroot tree that the administrator hasn't blocked
      • Yes, the indexing service may have access to everything. That's why I said it won't return search results for files *you* don't have permission to list.

        I.e., the indexing service checks the permissions of the requesting user, and only lists files they would be able to list in the OS. It's only common sense.
  • by Anonymous Coward
    My mind being the way it is, I can't help but think of an application for this in porn ;). A lot of porn sites have extensive free previews, but it's hard to find all the free preview pics for a certain site (useful especially for a single model's site) unless you can find a direct link to every single unique free preview gallery from somewhere, and you'll undoubtedly miss some good stuff. I want to see a Firefox extension that gets me all the free pics from a given site, dammit!
  • by michelcultivo ( 524114 ) on Monday February 28, 2005 @08:44PM (#11808576) Journal
    Please put this new undocumented tag in your robots.txt file: "hackthis=false" "xss=false" "scriptkiddies=log,drop" And all your problems will be solved.
    • New option for robots.txt (Score:3, Interesting)
      Please put this new undocumented tag on your robots.txt file: "hackthis=false" "xss=false" "scriptkiddies=log,drop" And all your problems will be solved.


      Note to mods: *slap*

  • 1) write your own web applications
    2) Use Lucene
    3) only index what you want to index
    4) ????
    5) profit
  • This is old. (Score:4, Insightful)

    by brennz ( 715237 ) on Monday February 28, 2005 @09:29PM (#11808881)
    Why is this being labeled as something new? I remember this being a problem back in 1997 when I was still working as a webmaster.

    Whoever posted this as a "new" item is behind the times.

    OWASP covers it! [owasp.org]

    Let's not rehash old things!

    • Not to be all "I'm so smart" but isn't this also rather obvious? If you're indexing private documents, don't return private results for public visitors. Simple as that.

      All it takes to implement this is an "access level" field stored with each index entry, and assigning an "access level" session value to each visitor (defaulting to 0 for anonymous visitors).

      Plus, this way you'll avoid pissing off visitors who click on essentially broken links in their search results.

      No wonder the search capabilities of
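      A sketch of that access-level idea in Python (the field names and levels are made up): each index entry records the level needed to see it, and results are filtered against the visitor's session level before anything is displayed.

          # Hypothetical index entries: (required_level, url, excerpt)
          INDEX = [
              (0, "/press/2005-01.html", "Public press release about the new product ..."),
              (2, "/internal/salaries.xls", "Confidential spreadsheet of employee salaries ..."),
          ]

          def search_results(query, visitor_level=0):
              hits = [(level, url, excerpt) for level, url, excerpt in INDEX
                      if query.lower() in excerpt.lower()]
              # Only return entries the visitor is cleared to see; no broken, teasing links.
              return [(url, excerpt) for level, url, excerpt in hits if level <= visitor_level]

          print(search_results("confidential"))                   # anonymous visitor: no hits
          print(search_results("confidential", visitor_level=2))  # logged-in visitor: the hit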
  • by B747SP ( 179471 ) <slashdot@selfabusedelephant.com> on Monday February 28, 2005 @09:45PM (#11808993)
    This is hardly news to me. When I need a handy-dandy credit card number with which to sign up for one of those, er, 'adult hygiene' web sites, I just Google for a string like "SQL Dump" or "CREATE TABLE" or "INSERT INTO" with filetype:sql and reap the harvest. No need to piss about with hours of spamming, setting up phishing hosts, etc., etc. :-)

  • solution (Score:3, Insightful)

    by Anonymous Coward on Monday February 28, 2005 @09:58PM (#11809064)
    Here's a solution that's been tried and seems to work: create metadata for each page as an XML/RDF file (or DB field). XPath can be used to scrape content from HTML et al. to automate the process, as can capture from a CMS or other document management solutions. Create a manifest per site or sub-site that is an XML-RDF tree structure containing references to the metadata files and mirroring your site structure. Finally, assuming you have an API for your search solution (and don't b*gger around using ones that don't), code the indexing application to only parse the XML-RDF files, beginning with the structural manifest and then down into the metadata files. Your index will then contain relevant data, site structure, and, thanks to XPath, hyperlinks for the web site. No need to directly traverse the HTML. Still standards-based. Security permissions only need to allow the indexer access to the XML-RDF files, which means process permissions only are needed; user permissions are irrelevant.

    There are variations and contingencies, but the bottom line is: even if someone cracked into the location of an XML metadata file, it's not the data itself, and while it may reveal a few things about the page or file it relates to, it is certainly much less of a risk than full access to other file types on the server.

    Here's another tip for free: because you now have metadata in RDF, with a few more lines of code you can output it as RSS.
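    A rough Python sketch of the metadata-only indexing step (the element names, file layout, and directory are assumptions; a real setup would follow whatever RDF vocabulary the site uses). The indexer only ever opens the metadata files, never the source documents:

        import glob
        import xml.etree.ElementTree as ET

        def index_metadata(metadata_dir="/srv/metadata"):
            index = {}
            for path in glob.glob(metadata_dir + "/*.rdf"):
                root = ET.parse(path).getroot()
                # Assumed element names in the per-page metadata file.
                title = root.findtext("title", default="")
                description = root.findtext("description", default="")
                link = root.findtext("link", default="")
                index[link or path] = (title + " " + description).strip()
            return index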
  • by Anonymous Coward
    Anything I put on a publicly accessible web server, I want publicly accessible, and I want it to be as easily accessed as possible.

    Anything else goes on a pocket network or not at all.

    The only exception would be an order form, and that will be very narrowly designed to do exactly one thing securely.
  • What if the file system supported an index attribute that proper search programs (Windows Search, Google Desktop, UNIX locate, etc.) could respect?

    chmod -i file

    With the search vendors racing to own desktop search and Microsoft working on WinFS, is "indexability" now an important security attribute for a file?

    • Why not just chmod 660 the directory that contains the file? If the directory is unreadable by those without permission, it can't be viewed or indexed. Just be wary of whom you're giving permission to where, like you already (should) be doing. There's no need to add another file attribute.

      --
      Help me help you get a free mini Mac [freeminimacs.com].

  • ... leave IT decisions to engineers, not managers!
    Once upon a time, intelligent people were responsible for computers and IT.
    Now it's either a manager, or a bunch of kids ("web developers") who don't know what they are playing with.
    Of course there are plenty of exploits waiting to be discovered that WILL get those documents off your web server... UNLESS you are smart enough to keep them elsewhere.
    I realize this is flamebait as good as it gets - but please understand that I will just duck. It
