Linux Software

Is the Unix Community Worried About Worms?

jaliathus asks: "While the Microsoft side of the computer world works overtime these days to fight worms, virii and other popular afflictions of NT, we in the Linux camp shouldn't be resting *too* much. After all, the concept of a worm similar to Code Red or Nimda could just as easily strike Linux ... it's as easy as finding a known hole and writing a program that exploits it, scans for more hosts and repeats. The only thing stopping it these days is Linux's smaller marketshare. (Worm propagation is one of those n squared problems). Especially if our goals of taking over the computing world are realized, Linux can and will be a prime target for the worm writers. What are we doing about it? Of course, admins should always keep up on the latest patches, but can we do anything about worms in the abstract sense?" Despite the difficulties in starting a worm on a Unix clone, such a feat is still within the realm of possibility. Are there things that the Unix camp can learn from Code Red and Nimda?
  • The only thing stopping it these days is Linux's smaller marketshare.

    What smaller marketshare? Check out the Netcraft [netcraft.com] survey if you don't believe me. I think better programming is the reason we aren't seeing any worms targeted at linux web servers.
  • by Kaz Kylheku ( 1484 ) on Friday September 21, 2001 @03:36PM (#2331400) Homepage
    The UNIX world already had a worm that recursively exploited security holes and spread, back in 1988.

    THAT was the worm to learn from, not Code Red!
    • by Robert Morris (Score:2, Informative)

      Yeah. It was the classic example that we studied in my Computer Ethics class. Sounds sort of like the Nimda worm in that it had four different methods of spreading. The only thing that stopped it from being even worse was a programming error that caused it to fill up memory and eventually crash the infected machine.
    • While *NIX systems are not impervious to various forms of attack, they are less vulnerable for several reasons.

      1. People using *NIX systems are usually administering servers, or just love computers. The end result is that they're better (not necessarily great) at keeping their machines patched.

      2. People using NT/2000 often don't even realize they have exposed ports. The worst of the Code Red/Nimda infections are coming from machines on Cable/DSL...home users who probably don't even know their machine is a server.

      3. Maturity. Any given piece of software will mature in features and stability/security. Most often, growth in security is sacrificed for features in commercial software. When software is free there tend to be fewer people trying to add marketing-based features to a product. Most features come as modules which you must choose to install. With the focus on security, the number of vulnerabilities shrinks until there are virtually none.

      4. Development environment. This may not be immediately obvious as a cause, but it is very relevant. IIS is written in C++, and many people think that C++ is better than C. The real truth is that while C++ provides many benefits, it also can make auditing code more difficult. The language contains so many features that it becomes very difficult to trace a path of execution just by looking at some code.

      I am sad to admit that every day I write code in C++, using MFC. My conclusion is that development is more difficult on Windows in C++ than on any other platform/language I have used. M$ has an idea of how an application should be laid out that very rarely fits my idea of how an application should be laid out.

      Compare Apache with IIS. Apache has been around for quite some time now; it aims to be a decent general-purpose web server with a useful set of features. Things such as dynamic content and indexing are provided by various modules which communicate through a well-defined API (a minimal module sketch appears at the end of this comment). It's written in nice, linear, easy-to-read C.

      IIS has been around for a while, but the push is on features and integration with Windows. IIS integrates into many aspects of Windows, and it uses COM for its extensions. Because all COM objects are handled at an OS level, there is much potential for a bad module to blow up the system.

      Of course, even the holes in M$ software have patches available long before they become a headline for the day.
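
      As promised above, here is a minimal sketch of an Apache module written against the 2.x API (hedged: the "hello" handler and module names are invented for illustration, and the 1.3 API most servers ran at the time registers handlers through a static table rather than hooks, though the idea is the same: plain C callbacks behind a well-defined struct):

      #include <string.h>
      #include "httpd.h"
      #include "http_config.h"
      #include "http_protocol.h"

      /* Respond to requests mapped to the (hypothetical) "hello" handler. */
      static int hello_handler(request_rec *r)
      {
          if (strcmp(r->handler, "hello") != 0)
              return DECLINED;                /* not ours; let another module try */
          ap_set_content_type(r, "text/plain");
          ap_rputs("Hello from a module\n", r);
          return OK;
      }

      static void hello_register_hooks(apr_pool_t *pool)
      {
          ap_hook_handler(hello_handler, NULL, NULL, APR_HOOK_MIDDLE);
      }

      module AP_MODULE_DECLARE_DATA hello_module = {
          STANDARD20_MODULE_STUFF,
          NULL, NULL, NULL, NULL,             /* no per-dir or per-server config */
          NULL,                               /* no configuration directives */
          hello_register_hooks
      };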
  • by CoreyG ( 208821 )
    Worms aren't just a Microsoft thing. You should know (remember?) that the first worm ever written infected many *NIX systems (and the net in general) quite badly.
    • by Zwack ( 27039 )

      You should know (remember?) that the first worm ever written infected many *NIX systems

      The First worm ever written?

      Well, let me see, the term worm was invented by John Brunner, in his classic book, "The Shockwave Rider".

      And the guys at Xerox PARC wrote some network-based programs... which they called worms, after the John Brunner usage.

      And WAY later, Robert Tappan Morris Jr. wrote the Internet worm.

      So, no. The first worms didn't run on Unix.

      Incidentally, at least one of the Xerox worms got out of hand and crashed a lot of machines at PARC.

      Z.

  • by Heem ( 448667 ) on Friday September 21, 2001 @03:37PM (#2331404) Homepage Journal
    The only thing stopping it these days is Linux's smaller marketshare.

    That, and the fact that MOST *nix users/admins tend to be a bunch of computer dorks, like us, and will be sure to stay up to date on security concerns, or at the very least, clean their system of the worm in a timely fashion.

    • Perhaps there aren't as many incompetent admins, but there are still a lot of neglected Linux installs out there. A lot of them are just forgotten boxes, or test boxes that are sitting around waiting for a project to complete, or forgotten installs - I can't even count how many default Apache pages I've seen on the internet. Somebody installed Apache and never did anything with it, so if there were a hole in Apache, who knows if they'd even remember they had Apache running, much less patch it. Sure, worms might spread slower, but I think they'd still spread.
  • Monoculture (Score:3, Insightful)

    by fractalus ( 322043 ) on Friday September 21, 2001 @03:37PM (#2331406) Homepage
    Even if Linux gained market dominance, it wouldn't quite be the monoculture that Windows is. There are many distributions of Linux, which put important files in different places. This isn't insurmountable but it does make writing a worm capable of running rampant a wee bit harder.

    Also, it's my experience that (for now) people who set up Linux to run on the net are a little bit more clueful than NT administrators. NT seems to encourage the idea that any moron can run it because it's point and click. This isn't true; it takes more work to effectively admin an NT box than a Linux box.

    There have been and will continue to be worms. Worms are most successful at any point of monoculture (sendmail, BIND, IIS). The solution, then, is not dominance... but diversity.
  • Apt and cron (Score:4, Informative)

    by Anonymous Coward on Friday September 21, 2001 @03:37PM (#2331409)
    Or any other form of auto-updater. Remember, Code Red and Nimda used holes that were patched months ago.

    Patch the holes that are inevitable. Patch them early.
    • Or any other form of auto-updater. Remember, Code Red and Nimda used holes that were patched months ago.


      No way - this is a very bad solution for security. While at first this would seem to be an absolutely good idea, in reality there are a number of really nasty security problems here.


      First, it convinces you to be lax about security. I mean, if the auto-updater is handling the job, you probably won't check it out too closely since it's not necessary. But with patches sometimes come new holes, and new procedures for properly securing a box. These are jobs that require human intervention.


      Second, a new class of exploit comes along - using whatever procedure you can make work, upload a new patch to the ftp server with some less than obvious holes in it. Sure, someone is going to spot it - maybe in hours, maybe a couple of days, but it WILL get spotted. As admin, will you know if your box was one that grabbed the bad stuff? (Note, I said upload it to the ftp server; that's not the only exploit - various redirection techniques could be used too.) If tons of people moved to the auto-update idea, there'd be the potential for a lot of exploited boxen quickly.


      And third, there's the issue of reviewing patches / updates. Sure, lots of people have viewed them. If it's security related, you should be viewing them too, or at minimum the 'readme' or equivalent.


      Fourth, what update interval are you planning? Once a month? Once a week? Daily? If it's less than daily, then you've got a problem - if you do grab a buggy version, that gives someone time to attack. And if it's a week before you check again, that means they've got plenty of time to use your machine as a base to launch more attacks from. Plus, once they have the machine, you may only THINK you are still doing updates ;-) (It's always better from the attacker's standpoint to make things seem just fine and dandy :-P )


      I'm sure there's a lot more that could be added to this list - these are just the problems off the top of my head. But those problems alone are enough to really screw things up.

  • Just read this [slashdot.org] and protect yourself.

    This is a pretty pathetic ask/.
  • What I think would be interesting is a Linux worm that used a security hole to get into a box, closed the security hole, propagated to other boxes, and finally uninstalled itself. Maybe it could also leave a message or email on the box stating that it's fixed the box's security hole...;-)

    Unfortunately, doing constructive work (i.e., fixing the security hole) is always more difficult than doing destructive work (e.g., rm -rf /). But worm/virus writers seem to have plenty of time on their hands...

    • a Linux worm that used a security hole to get into a box, then closed the security hole, then propagate to other boxes, and finally uninstall itself.

      Except for the uninstalling part :-), it's been done. Try a google search for "cheese worm".

    • by wishus ( 174405 ) on Friday September 21, 2001 @03:59PM (#2331594) Journal
      What I think would be interesting, is a Linux worm that used a security hole to get into a box, then closed the security hole, then propagate to other boxes, and finally uninstall itself.

      Then you get black worms that exploit vulnerabilities in white worms, white worms that search for black worms and destroy them, black worms that hunt black-hunting white worms, grey worms that fix your security hole but extract a "payment" in the process, grey worms masquerading as white worms, black worms masquerading as white worms, white worms that inadvertently do damage while trying to do good, black worms that exploit new holes left by those white worms, and pretty soon you've lost track of what worms you thought you had, what worms the white worms told you you had, what the grey worms have taken, and what the black worms have done.

      It's much better to fix your own security problems, and not depend on some worm that says it's white.

      • Just like a real ecosystem, then, which many people have compared the internet to.

        I think something like this may be inevitable. You may even get parasites on the worms. So long as they don't turn out like the viruses in Hyperion...

      • It's much better to fix your own security problems, and not depend on some worm that says it's white.

        Of course. However, we all pay the price (direct, in network slowdowns, and indirect, in the threat of government regulation) for sites which do not fix their own security problems. How should we respond?


        An instructive analogy: Suppose you notice that your neighbor's house is on fire. This is obviously a big problem for your neighbor, but it's also a big potential problem for you -- left uncontrolled, the fire could easily spread to your house. You try to alert your neighbor, but get no response. Does it make sense for you to call 911? Perhaps even use your own garden hose to try to control the fire? Of course; anyone would do this, and nobody would say you were doing anything wrong.


        On an internet thriving with worms of all greyscale values, properly administered sites won't need to worry about them, and improperly administered sites will hopefully get dogpiled so quickly that they'll either be forcibly patched or crash in minutes. When the vast majority of sites are being properly administered, all flavors of worm will starve for lack of prey.

    • Ah, yeah. That'd be the Cheese Worm [zdnet.com].

      And apparently, this factual informative comment "violated the postercomment compression filter.", whatever the fuck that is.
  • The only thing stopping it these days is Linux's smaller marketshare.

    I thought Apache had the majority share of the web server market, a market that has certainly been hit by worms, and yet worm writers usually choose IIS despite its smaller market share.

    It could be because IIS has more exploits...
    • IIS does have a smaller market share in terms of commercial websites out there. However, there are lots of clowns at home on DSL or cable who are running win2k.

      Many people run IIS without knowing it, so I think there are many more vulnerable machines out there than just the webservers.

      Granted, IIS probably does have more exploits, but the real problem is that windows users usually aren't on top of patching them up. There are plenty of exploits out there that target linux, but there aren't as many issues because admins patch regularly, and because of the smaller market share.

      Captain_Frisk
      • However, there are lots of clowns at home on DSL or cable who are running win2k. Many people run IIS without knowing it...


        We've been over this before. [slashdot.org] Windows 2000 Professional never installs IIS by default. It must be explicitly installed by the user. And it's not in an obvious place, either. So if the average user doesn't know where to look, it won't happen by accident.
    • I believe it's not that IIS has more exploits, but that IIS users aren't as vigilant about patching the holes.

      Think about it. Microsoft's entire appeal is based on ease-of-use; zero administration, wizards, automatically opening your attachments, and so on and so forth. This philosophy sells their servers and server software as well. So people who are used to MS products think they can set up a server, turn it on, and pretty much forget about it.

      The MCSE graduates know better, of course, but they're so expensive to hire. Meanwhile, Linux and *nix in general almost always require a degree of problem-solving ability in order to set them up and get them working. This is part of the reason why they have a smaller market share. However, it also means that most people who take the time to install *nix and get it working on a network (not all, but most) are also going to be vigilant about keeping things patched and secure.

      Maybe that's the nature of it, maybe not. But I'm convinced that there's something in the culture of *nix that drives its adopters to keep things patched, updated and secure, while the culture of MS users is to buy it, install it and let it do your work for you.
      • You are right on and ought to be modded up.

        Following your line further, the real danger is that as *nix attempts to become more popular by becoming "easier to use", it will succumb to some of the same pitfalls that plague MS.

        I have to hope that we can prove the old adage wrong - you know the one - every programmer does - I forget who said it first

        "If you make your program so simple that even a fool can use it, be assured that only fools will use it."
    • Apache != Linux

      Apache's market share includes the Apache installations running on Solaris, AIX and even Windows, not just Linux.
      • Apache != Linux

        Right. And "worms, virii and other popular afflictions of NT" is wrong too. Most of the worms and virii have been infecting Outlook and IIS, not Windows. So too on the unix side. And the vast majority of Apache is running on unix flavors.

        My comment is a fair one, even if you do have a lower uid than mine.
  • by rkent ( 73434 ) <rkent@post.ha r v a r d . edu> on Friday September 21, 2001 @03:42PM (#2331466)
    Okay, here I go, proving my lack of server programming skilz: is it really so hard to prevent buffer overflows? Why does the length of a URL (for example) ever cause a server to crash?

    It seems like every time you get input from the outside, you would only accept it in segments of a known length, and whatever was longer would just wait for the next "get" or whatever. At least this is the case in my (obviously limited) socket programming experience. So when some program is hit with a buffer overflow error, does the team of programmers smack their collective head and say "d'oh"?
    • by Zathrus ( 232140 ) on Friday September 21, 2001 @03:57PM (#2331579) Homepage
      Yes, it's trivially simple to protect against buffer overflows. But it takes some regimented coding to do it properly instead of taking the easy way out.

      Instead of using gets(), you use fgets(). Use strncpy() instead of strcpy(). And so forth. The only real difference between these calls is that the "safer" one lets you specify a maximum number of bytes to copy. So you know you can't copy a string that's larger than your destination buffer (and you use sizeof() or #define's to ensure you have the proper buffer size) and thus start overwriting executable code.
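
      For illustration, a tiny sketch of the bounded calls in use (the buffer sizes here are arbitrary):

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          char line[256];
          char copy[64];

          /* gets(line) has no idea how big line is; fgets() takes the
             buffer size and stops there. */
          if (fgets(line, sizeof line, stdin) == NULL)
              return 1;

          /* strcpy(copy, line) overflows if line is longer than copy.
             strncpy() is bounded, but note it does NOT nul-terminate
             when the source fills the buffer, so do that yourself. */
          strncpy(copy, line, sizeof copy - 1);
          copy[sizeof copy - 1] = '\0';

          printf("%s", copy);
          return 0;
      }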

      This is all high school level programming. Anyone who still gets it wrong deserves to be strung up for professional negligence. As many others point out, one of the first large distributed cases of a buffer overrun exploit was 13 years ago. So it's not like this is a new thing.

      And yes, there are probably some Unix programs running around with buffer overrun exploits in them. They've been largely weeded out over time though and, to some extent, Unix's permission scheme avoids most serious issues, at least when services are installed properly.

      The real key difference between Unix and Windows though is very, very deep assumptions. Unix assumes that the user cannot be trusted (thou shalt not run as root), nor can any external input. Windows assumes that everyone will play nice. Since the reality of the world is that there is a significant fraction of people who will NOT "play nice" it invalidates coding under that assumption. Thus the repeated security exploits using Microsoft tools and services - which weren't designed from the ground up to distrust the input given to them.

      The plus side of "play nice" is that it's faster to code and you can put in features which would never, ever fly otherwise, like automagic remote installation of software. Or executing email attachments automatically. All that stuff that users think is "wow cool nifty" until someone does something they don't like.
      • This is somewhat offtopic, but I dispute your allegation that Unix assumes that the user cannot be trusted. This is simply not the case. If it had been, unix would not be the security sieve that it is (admit it, it is, the only reason it doesn't look so bad at the moment is that in comparison windows is like those dogs that eat their own poo, relatively speaking unix looks like a friggin' fortress). Unix assumes that not all users can be trusted *to the same degree*, but fundamentally, the basic unix structure assumes that all users can be trusted at least partially.
      • Yes, it's trivially simple to protect against buffer overflows.

        It may seem like a trivial problem, but it is actually very hard to solve in practice. The C string API is simply poorly designed -- it is way too easy to mess up. It's not a matter of negligence; people are human and make mistakes, and thinking good programmers are exempt is pure hubris.

        The real solution is to expect, and learn to live with buggy code.

        Remotely accessible programs should run in chroot jails with the bare minimum of capabilities.
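
        For example, here is a rough C sketch of entering a chroot jail and shedding root at startup (the jail directory and user name below are made-up examples, not anything standard):

        #include <sys/types.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <pwd.h>
        #include <grp.h>

        /* Call once at startup, while still root, before reading any
           network input. */
        static void enter_jail(const char *dir, const char *user)
        {
            struct passwd *pw = getpwnam(user);
            if (pw == NULL) { perror("getpwnam"); exit(1); }

            if (chroot(dir) != 0 || chdir("/") != 0) {
                perror("chroot");
                exit(1);
            }
            /* Drop supplementary groups, then gid, then uid, in that order,
               so the last step can't be undone with leftover privileges. */
            if (setgroups(0, NULL) != 0 ||
                setgid(pw->pw_gid) != 0 ||
                setuid(pw->pw_uid) != 0) {
                perror("drop privileges");
                exit(1);
            }
        }

        /* e.g. enter_jail("/var/empty", "wwwrun");  (both names hypothetical) */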

        Languages should make it harder to screw up. Less error-prone string handling in languages such as Perl, Java and even C++ is helping. Java has even more potential with its untrusted code security model.

        And yes, there are probably some Unix programs running around with buffer overrun exploits in them.

        Undoubtedly. Many more than you'd think. And the vast majority won't ever be found or fixed, because the program is not suid or remotely available.

        Unix assumes that the user cannot be trusted

        This assumption is broken by suid programs. They say, in effect, "this user is trusted to run me, but only to do something safe." That makes the implicit assumption that an suid program will only do what it was written to do. Secure systems must ensure that when these programs are inevitably compromised, the damage is contained.

      • Instead of using gets(), you use fgets(). Use strncpy() instead of strcpy(). And so forth.

        Yes.

        My question: isn't it sort of a bug that gets() and strcpy() are still there in the standard C library? I would like, at a minimum, to see these cause a compile-time warning. It will be a long time before we can expunge all calls to these functions, but it might go quicker if we can get the compilers to complain about them.

        Has anyone looked at doing this?

        steveha
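
        For what it's worth, GCC can be told to reject those calls outright with #pragma GCC poison (and glibc already emits a link-time warning for gets() on its own). A small sketch:

        #include <stdio.h>
        #include <string.h>

        /* Must come after the headers that declare them; any use of these
           identifiers below this point is a hard compile-time error. */
        #pragma GCC poison gets strcpy strcat sprintf

        int main(void)
        {
            char buf[64];

            /* gets(buf);          <- would now fail to compile */
            /* strcpy(buf, "x");   <- likewise                  */

            if (fgets(buf, sizeof buf, stdin) != NULL)
                printf("read: %s", buf);
            return 0;
        }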
    • In the olden days where men were men, women were women, and people generally didn't engage in tiresome obnoxious behaviour, there was really no need to deal with these issues. It's worth noting that the Morris worm used the 'debug' command of sendmail. This command allowed anyone a root shell by just typing 'debug' at the sendmail prompt easily accessible from outside the system. Life went on just fine, because few people knew of the hole, and those who knew about it didn't bother to use it.

      Sadly, nowadays things are different and we must deal with tiresome security problems all the time. But it was easy to get into the habit of programming in a non-security conscious way, because for many years it really wasn't a problem at all.

      The C programming language was very much a part of that ethos. It was simply not designed to consider the buffer overflow problem. The size of buffers was almost never checked in early C programs.

      And there are many cases other than the input of text where buffer overflows can occur. For instance, sprintf is a common function used to build up a string from smaller pieces. You use it by saying:

      sprintf(destination_string, format, args);

      The format determines the way the arguments are put together to create the string. If you have a destination string of 1,000 characters, and the string being built up contains 1,200 characters, you have an overflow.

      The solution is to use snprintf, which is the same but includes a limit on the number of characters that are added to the string. But that means that every time you want to build up a string, you have to remember to use snprintf and add the count. If you've been programming "the old way" for a long period of time, it's easy to forget to do this.

      The way I work around this problem is by building my own sprintf(), which automatically uses snprintf to build up a string with the maximum buffer size I normally use. So I can program "carelessly" but be protected at the same time.
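
      Something along these lines, for instance (a sketch of the idea; the 1000-byte ceiling is only an example of the "maximum buffer size I normally use"):

      #include <stdarg.h>
      #include <stdio.h>

      #define SAFE_BUF_MAX 1000   /* example maximum destination buffer size */

      /* Drop-in replacement for sprintf() that truncates instead of
         overflowing.  Only safe if every destination buffer really is
         at least SAFE_BUF_MAX bytes. */
      int safe_sprintf(char *dest, const char *format, ...)
      {
          va_list args;
          int n;

          va_start(args, format);
          n = vsnprintf(dest, SAFE_BUF_MAX, format, args);
          va_end(args);
          return n;
      }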

      As you can see, it's not just the size of the input string, it's how it is combined with other strings using functions like sprintf() that's the problem. And because it's a big pain to calculate all this out, it's no wonder programmers tended not to do it - until they got persuaded by tiresome security issues, that is.

      Hope that helps.

      D
    • by Osty ( 16825 )

      It seems like every time you get input from the outside, you would only accept it in segments of a known length, and whatever was longer would just wait for the next "get" or whatever. At least this is the case in my (obviously limited) socket programming experience. So when some program is hit with a buffer overflow error, does the team of programmers smack their collective head and say "d'oh"?

      The problem lies not in the realm of receiving the information, but actually processing it. What do you think happens after you've received all the necessary data chunks for the requested URL? They're put together and treated like a string, then parsed out for various pieces of data (the path to the file being requested, the type of file based on MIME types, any data parameters (passed from a form, for instance), and any other interesting information your server may be looking for). Now, with insecure coding practices, it's very easy to get a buffer overflow simply by doing something as innocuous as a call to sprintf() (because sprintf doesn't do any bounds checking). The really dangerous part, however, is when the target string is on the stack. Now, when that buffer overflows, a carefully constructed overflow string can easily put executable code into the stack and change the return address on the stack to point to the beginning of that executable code. This is sometimes referred to as "smashing the stack". If instead you're dealing with heap-allocated buffers, it's harder to get code executed, but you can still just as easily cause an access violation and kill the server anyway.


      I'm not trying to pick on sprintf directly, because there are a ton of other potentially unsafe (any unbounded string operation, for instance) or always unsafe (gets(), or any function that expects a string to be formatted in a certain way) functions that are used commonly. In fact, too many people use these functions without even knowing that they're opening themselves up to major problems.


      One way to mitigate the possibility of having a buffer overflow in your application is by always using bounded string ops (snprintf, strncpy, etc) (note that strncat is a special case, in that the 'n' refers to the amount of chars to be appended, not the size of the target buffer). Another way is to simply not use the completely unsafe functions, like gets(). These won't guarantee that you'll be safe, but it's a start. There are plenty of resources out there [google.com], so if you're interested, I suggest you do some reading.
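
      For illustration, the pattern described above, first unbounded and then bounded with a truncation check ("query" stands in for attacker-controlled request data, and the path prefix is made up):

      #include <stdio.h>

      /* Unsafe: the formatted result lands in a fixed stack buffer with no
         bounds check, so a long query smashes the stack. */
      void handle_request_unsafe(const char *query)
      {
          char path[128];
          sprintf(path, "/var/www%s", query);
          puts(path);
      }

      /* Bounded: snprintf never writes past the buffer, and its return value
         tells us whether the result was truncated, so we can reject it. */
      void handle_request_bounded(const char *query)
      {
          char path[128];

          if (snprintf(path, sizeof path, "/var/www%s", query) >= (int)sizeof path) {
              fputs("request too long, rejected\n", stderr);
              return;
          }
          puts(path);
      }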

      • Another thing to do is never ever do copy-type operations into auto variables (on the stack). Auto variables should only be assigned to with the = operator. Makes it awfully hard to stack-smash.
  • To really make a worm mess people up, it needs to get root access. That fact alone is enough to make Apache more secure than IIS, due to the fact that unless you're an idiot you run your Apache servers as a non-root user. And even if you're an idiot there's still a good chance you are running your server as 'nobody' anyway, since that's the default installation setting. You would have to be a very special sort of idiot, the kind that goes out of his way to do idiotic things on purpose, in order to be running Apache as root.

    Now, this doesn't alleviate all the problems of course, because even with "normal user" access a person can still do some damage. The web pages are probably owned by that normal user, so with normal access a person could alter your content. The normal user could set up cron jobs for himself such that he attacks other machines later, and thus you can still get propagation without root. So this still leaves open the possibility of having DNS attacks (since being a part of the attack doesn't require root privileges, just any user will do.) But it doesn't really leave any way to mess up the target machine permanently. You couldn't alter the httpd program, for example, since it isn't owned by the same user as the user ID it runs under.

    At worst, you lose the web pages themselves, but most likely you have those copied over from some other location as part of your "I'm going to edit in a scratch area and then install these changes for real after I try them out" technique.

    • To really make a worm mess people up, it needs to get root access. That fact alone is enough to make Apache more secure than IIS, due to the fact that unless you're an idiot you run your Apache servers as a non-root user.

      For a moment, this didn't ring true. Why? Because the capacity of a local user to utilize a local root exploit (and thus render your argument invalid) is high.

      But then, I realized something. Open Source software encourages diversity. Apache may be running on Windows, Debian GNU/Linux, Redhat, OpenBSD, FreeBSD, etc... etc... And the root exploits are all different. Who are you going to pick on? All of them?

      The worms we're seeing floating around the MS community are exploiting lots of known bugs in one fell swoop. Virtually all Windows installations except those secured by some smart users and some smart admins are vulnerable to one of these attacks. Thus, once again, the Open Source world could have a worm that used a collection of exploits to root many kinds of boxes, right?

      Wrong. The memory footprint and coding skill this would take would make the worm look a lot more like "Microsoft Office for Every Platform" than the Morris Worm. That's because the vulnerabilities taken advantage of are most often in a variety of particular programs rather than some standard API or a few known awful (*cough*Outlook*cough*) offenders. If a kernel version or the last few X11 versions had some huge flaws, or maybe Gnome or KDE, then we would have cause to worry. But you know what? The only one of those that Apache is involved in at all is the kernel. Server machines often do not have X11, let alone Gnome or KDE, etc.

      So my extremely longwinded point is: we aren't immune, but the kind of attack we're seeing on Windows right now is hard to pull off against Open Source software. Infinite Diversity in Infinite Combinations.

    • This is fundamentally wrong.


      Theoretically, if your system is shipshape, then only root, or someone with root access, can REALLY fuxor it up. However, there are many levels of fuxored below "REALLY fuxored", and no system is 100.0000% perfect. Unix is a security nightmare. Its security model is decrepit and is only being patched / kludged into anything resembling reasonable security. I fear that it is too established to be replaced with something completely different at this point (i.e. something that was still unix, but fundamentally different in security model).


      In general, I don't think it's a good idea to measure security success compared to the gimp of the security world (MS).

      • Unix is a security nightmare.

        Why do you say that? Certainly traditional security is simple, so you can't do the fine-grained things that other systems allow (not really true anymore with capabilities, but those aren't widely used or entirely standard). But simple has its advantages - there are fewer ways to mess things up.

  • I think the big difference here is that most people in the Linux community at some point start to look at security as part of the system, unlike Microsoft, where security is only now being thought of.

    Let's face it: Linux comes, and has come, with ipchains and now iptables for firewalling, and many other UNIX flavors have similar features. Linux and the UNIX community think about things like proxy/firewall combinations, where Windows is only now starting to think about this. It is not until the release of XP (or its anticipated release, as it is not out yet) that Windows includes a firewall by default.

    People in the unix community also tend to be more aware of what is going on on their system. They have logs and there are tools to view them.

    While I do not dismiss the possibility that if Linux / UNIX got to be as popular as Windows there would be more 'attempts', I think that because of the nature of Linux you would have a much harder time spreading a worm like Code Red.

    A good UNIX administrator is going to spend time configuring and securing his web server. If they do not think about this, then they are no good.

    If you are wondering how secure your computer is, try these two sites. They'll help, but don't try this at work or you may piss off your admins. https://grc.com/x/ne.dll?bh0bkyd2 or http://scan.sygatetech.com/

  • Certainly the robust UNIX security model is one reason we haven't seen as many worms. The strategy of creating a separate "www" or "http" user to run Apache, a "db" user for the database, etc., is common and very wise. If somebody co-opts your web server, at least it can't wipe your db. It still has weaknesses -- it's sometimes necessary to grant more permission to certain users/processes than you might like, and it requires a lot of vigilance from sysadmins, but it works quite well.

    I wonder if there isn't a way of generalizing this to allow more sweeping, more generalized expressions of security rules. A UNIX install has soooo many little apps, and so many points of contact for everything, it's sometimes hard to say "I want all apps that could access X to have permissions Y, or go through access point Z." TCP wrappers are a good example of the kind of thing I'm talking about -- they provide a single point of access and control for all things TCP, and they make it much easier to set up very broad rules that you know cover all possible cases.

    Am I making any sense here? How might an OS take on this issue in the general case? It seems like one next logical step for UNIX security.
    • The first step is POSIX 1003.1e 'capabilities', and is already partially supported in the current Linux kernel. Basically, it breaks the 'suser()' check for "are we running as root?" into lots of little checks: "are we allowed to open any file?" "are we allowed to use raw sockets?" "are we allowed to kill() other processes?" and so on. So instead of (for example) 'ping' being suid just so it can use a raw socket, it would have CAP_NET_RAW, and if subverted, the only thing the attacker gets is the ability to send raw packets (which may be leverageable, but makes it a LOT harder than just execve'ing a root shell on the spot).
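
      For example, with libcap a ping-like tool could drop everything except CAP_NET_RAW right after startup (a sketch; link with -lcap):

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/capability.h>   /* libcap */

      /* Clear every capability except CAP_NET_RAW, so a compromise of this
         process can send raw packets but can't do general root things. */
      static void keep_only_net_raw(void)
      {
          cap_value_t keep[] = { CAP_NET_RAW };
          cap_t caps = cap_init();              /* starts with all flags cleared */

          if (caps == NULL ||
              cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET) != 0 ||
              cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET) != 0 ||
              cap_set_proc(caps) != 0) {
              perror("capabilities");
              exit(1);
          }
          cap_free(caps);
      }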

      The other big move is to support ACLs - access control lists - so you can say "fred, george and harry can write this file, members of group foo are only able to read it, and members of group bar aren't able to do anything with it".

      SELinux, the LSM project, and the like, are the sort of thing we're aiming at....
  • Microsoft systems are more susceptible to worms (IMHO) because the level of computer knowledge is way higher for Unix users than it is for Microsoft users. I mean this sincerely, and not just as flamebait.

    Consider how many Unix users would actually just open their emails and run attachments blindly. I would venture that there are a ton more Microsoft users that actually do just that!
  • Despite the difficulties in starting a worm on a Unix clone, such a feat is still within the realm of possibility. Are there things that the Unix camp can learn from Code Red and Nimda?



    what difficulties?



    whenever an inexperienced user brings up a redhat 7.0 or lower box on our network, it is exploited within 12 hours. within 24 hours i have received email from admins on other networks informing me that the redhat box has been probing their network. 1 minute later i have informed yet another user that it takes more to do my job than booting off of cd and following instructions on the screen.

    someone out there has already taken advantage of the various vulnerabilities found in older distros.



    lessons learned? i am reminded of something my brother told me:

    Having your own box appeals to the pioneer spirit: your own plot of land to develop as you please, fighting off the savages, protecting from the elements.



    In other words, every time you run software which other people will somehow have access to (users running desktop software, server software connected to the internet, etc) you will need to constantly monitor and upgrade that software.


  • Let's not forget that what was probably the first worm, the Morris Worm, was released on Unix machines. I don't remember the year, but it was in the early days of the Internet when about all there was out there was Unix and VMS. The lesson that the Unix community took away from this and other incidents was that they needed to secure their machines and tighten up code. The point here is that no system is immune. When I first started out in the Internet field, almost all attacks were launched against Unix and VMS machines because that's about all that was hooked up to the Net on a constant basis. So, don't get smug just because Microsoft is victimized today. After MS dies a fiery death, something else will become the dominant system on the net and that will be the most attacked system.

  • God knows there are enough newbie sysadmins who feel that even though 30 years of sysadmin wisdom says never run as root, they feel they can because they understand the risks involved. They typically also give all their friends accounts on their system (Ooh! I have a multiuser OS! I'll give all my friends accounts!) Usually they stop doing that after the second or third time they get compromised and have their hard drives filled up with goat porn.

    Fortunately the default installs of most of the mainstream distributions are getting more secure as time goes by. And while RedHat traditionally isn't quite as easy to set auto-updating up for as Debian is, it's still pretty easy to keep up with the security patches for it. I'd really like to see the package maintainers package at least some of the more traditionally insecure packages (*Cough*Bind*Cough*) in ultra-paranoid configurations, say, statically compiled and chrooted. It hasn't been enough of an irritation for me to go do it myself though.

    We all pretty well know, though, that security is more what the user does with the OS rather than how inherently "secure" the OS is out of the box. FreeBSD is by reputation one of the most secure OSes available but I could take that thing and install a bunch of servers with holes in them and be no better off than if I was running Windows 2000 doing the same thing.

    • Finally someone says it:
      And while RedHat traditionally isn't quite as easy to set auto-updating up for as Debian is, it's still pretty easy to keep up with the security patches for it.
      It seems pretty clear to me that *this* is the real solution. The problem is lazy sysadmins, and you get more lazy sysadmins as you get more popular. So a real "easy to use" linux distribution has got to include a mechanism for automated security updates (and it had better be a *secure* mechanism). It does indeed sound like Debian is better off than RedHat in this respect, but eventually even RedHat will get its act together...

      (How hard can it be to figure out a way to generate some extra revenue from this? "And for only $5/month, we'll set you up with the the Head Patch Automated Reinsecuritator Mechanism.")

  • If someone goes through the trouble of downloading/buying Linux and setting it up as a public server they're probably a lot more computer literate than most windows users. They certainly would understand the need for patches and probably read some kind of Linux news site to keep up.

    Now if Linux had windows' market share, it would have to come pre-installed with a new PC and not require the user to do much more than just use the GUI. Which is fine as far as I'm concerned, but we can also assume a Linux dominated universe would be full of unpatched servers too.

    Maybe untreated Windows exploits are heading toward extinction. It's easy access to the internet that has created such a huge market for anti-virus software. Maybe we'll start seeing Windows shipping with an MS or a third party patch manager in the near future. Or something like NAV with a patch checker. "No viruses found, you are open to these attacks, please go to this URL to download the patches."
  • Why do you think it's harder to create a *nix worm? I mean the basic principles of worm propagation work under any platform if there are any security holes. Certainly *nix does occasionally suffer from security vulnerabilities, if perhaps less often than Windows. Look at the ramen worm that was going around recently. I STILL get scans on my box for that vulnerability. Certainly the scale is less dramatic because of the fewer *nix systems out there, but it's not like writing a worm for unix is somehow more difficult than for windows.
  • Worms are definitely a problem on all platforms. But the *nix world has a bigger advantage over the Windows world. In our world, code is written with lots of thought towards quality and strong design. Windows, well, is questionable. Certainly *nix has exploits, but those that exist require a GREAT deal more skill to exploit than those that exist for Windows. Therein lies our safety net.

    Most people who have the skill to code worms for more secure and robust *nix platforms are probably mature/responsible enough through their experience not to do something so utterly foolish. However, if they do decide to do so, they end up trying to do a positive thing for the community! (Anyone remember those Linux worms that FIXED the exploits they took advantage of before moving onto the next box and cleaning themselves up?) Besides, look at the very few malicious worms we have seen for *nix platforms. They didn't last long. The OSS community has a VERY quick response time to big problems and the admins are generally more skilled and knowledgeable about applying patches.

    I say, let's enjoy this while we can. It's kind of amusing to see MS admins scurry around, trying to stick fingers in all the leaks. It's risky to say "it serves them right", but that's what they get for only weighing mundane factors in deciding what platform to use. And for those companies that reject OSS products, well, they get what they deserve for thinking "stuff that doesn't come from a company mustn't have any quality". Pah. Worms on the scale of the NT ones aren't a concern for us. Let's parade this around as a reason to support and use open software.
  • Actually, it would probably be easier to attack UNIX with a worm. There are more UNIX machines out there than Windows machines, and most of them are probably just as poorly maintained in regards to security.

    So why don't people write more UNIX worms? I think the first big problem with a UNIX worm is the portability problem: getting a worm that runs well on all of the different CPUs, UNIXes, Linux distros, etc. out there would require a pretty badassed coder. Anyone good enough to do so probably wouldn't waste his time on a worm since he could get paid obscene amounts of money for coding something more productive.

    On a more positive note, I think worms generally target Windows because computer users in general don't really like Windows. Jokes about Windows being unstable/buggy/insecure/slow have gone from being a subset of geek culture to a repetitive theme in popular culture. People run Windows mostly out of necessity, because it is the only desktop OS that provides access to a large variety of commercial software, and runs on cheap, non-proprietary hardware. People who use UNIX do so because they want to, and they like doing it; therefore they are less likely to produce something as random as a worm. (I am leaving crackers/s'kiddies out of this as they have far different motivations.)
  • by VB ( 82433 ) on Friday September 21, 2001 @04:04PM (#2331626) Homepage

    While client market share for Windows is undisputed, Apache has close to 60% of the web server market. I haven't received a single readme.exe attachment.

    Current Nimda stats are:
    26900 attempts on 2 servers.

    Apache (on *n*x, anyway) is not vulnerable to worms in the same way IIS is since it runs as notroot.somegroup. The only thing an Apache web server worm (on *n*x) could do is muck up the web server.

    *n*x mail clients don't (at least yet) do a
      file this_attachment
      if file is ELF or a.out:
          chmod +x this_attachment
          execve this_attachment

    This isn't to say *n*x is immune. Just why Win* is not. Not because of market share.
  • You could call it marketshare.. but the worm problem really isn't about an OS.. it's about individual applications and technologies.. the environments the worm can flourish in. A cross platform worm is entirely possible.

    As for our 'goals'.... whose goals are those? Who wants linux everywhere? Use the right tool for the right job. If MS actually made something that was better for a job, I'd use it. (IF.. big IF)
    1. Learn from OpenBSD to go over code with a 5 micron comb.
    2. Get rid of as many exploits as you can before your market share gets to 90%. (Still have some time here:)

    The biggest obstacle, AFAICT, is making solid security Ease-Zee.

    Certainly many commercial outfits haven't successfully solved this problem yet and there are still plenty of opportunities for spoofed trojans with fake internal certifications.

    I mean, when I download a package, it usually contains its own references to valid signatures, etc. Or, the md5 checksum is kept in another file, but on the same ftp server.

    Better are package maintainers that digitally sign their products. I'd like to see more of that, maybe in conjunction with multiple certifying authorities that can verify the signatory's credentials. I don't need a system that compromises the anonymity of me or the package writer - just something that verifies that a package originated with a consistent unique individual.

    Do modern CD distros of GNU/Linux and other OS come with anything like a set of multiple certifying authorities where package writers can register signatures in multiple places to minimize the chances that a fake can be passed off on innocent downloaders?

  • Rest easy. Yes, Unix can have worms, and in fact it has happened.
    This worm was fixed about as quickly as possible. The only real problem was getting the fix out, as the worm had seriously disrupted the primary means of getting the patch out.
    The time delay for Microsoft patches is a great deal longer and is due to development delays, not distribution delays.
    There is also a delay due to NT admins' fear that the patch may disrupt the system. I doubt this is a realistic fear, but I have heard it once or twice. I think this is more or less the end result of the ignorance Microsoft promotes among NT admins. That ignorance is probably responsible for more problems than the software itself.

    In short, once a worm is created and known, it should be a short time before there's a bug fix.

    But not blindly....
    The reality is that worms are a low likelihood. You should stand ready for a whole range of issues; worms are in that bag.
    Viruses are even less likely, nearly impossible. However, IF we go getting paranoid about worms to the exclusion of all else... viruses, viruses, viruses... because we are looking the other way... won't you feel dumb...

    Keep an eye out, do the maintenance, read the logs, read Slashdot, Bugtraq and so on... keep an eye on the issues related to your system.

    Worms aren't the only problem. They are an issue. They aren't the only issue.
    Just don't get caught with your shorts down.

    And... don't wait for someone to fix it... yeah, it'll happen in 10 or 20 minutes (vs. the 10 to 20 days for Microsoft) but as we learned with the last Unix worm...

    Min 1. You learn about defect
    Min 2. You look for someone fixing it
    Min 3. You find someone
    Min 4. You wait
    Min 5. You wait
    Min 6. It's done.. you download
    Min 7. You're still downloading
    Min 8. Hmm, the network seems a bit slow.. you're still downloading
    Min 9. Why is the network slow?
    Min 10. You're crashed... you got the worm before you got the patch... you lose, try again..

    If someone fixes it first.. hooray... if not.. don't wait...

    However, remember this stuff requires a major defect in the system to work. It'll only affect one platform and only one version of that platform.
    (With Linux it'll hit many distributions, unless it's a distro screwup and not a real software defect..)
    • It's not an irrational fear. Service packs for NT (particularly SP6) have been known to do horrible things to third party applications. I had an application that ran just fine on SP3, but when we went to SP5 (might have been SP4) for Y2K, I could not get it to work. Eventually I rewrote it from scratch using a different set of APIs.
  • for a reason. There is next to zero similarity between any two installs of even the same kernel, and even less between two different kernels.


    With no guarantee of any given system calls, any given system libraries, any given applications, any given directory structure, any given TCP/IP stack, any given version of any given implementation of any given service, any given architecture or any given dialect of any given scripting language, worms have a limited scope to work with.


    The "Original" Internet worm was so dangerous, because at that time there was less diversity. Certain standard daemons were virtually guaranteed to be running, for example, built from basically the same source.


    Therein lay the danger for Unix - without diversity, a single virus or worm can cause untold damage. If it can affect one machine, it can affect many.


    (Biologists have woken up to the same lesson. For years, it was preached that simple systems were more stable than complex ones, but it was learned the hard way that that was not the case. Biodiversity offers protection, because it inhibits the spread of hazards. By making it non-trivial for an infection to pass on, you could guarantee that real-world viruses were self-limiting in scope.)


    Linux is relatively safe from virii and worms, for that same reason. There is sufficient diversity to ensure that propagation is non-trivial. The very "irritation" that turns away so many is Linux's greatest shield. With Windows, it's trivial to infect a registry, because there is only one and there's a standard way to access it. Linux has many "registries", and much code that people use won't be registered anywhere at all.


    Then, there's libraries. Windows 9x uses certain very standard libraries. If it's a 9x OS, you know what you can expect. For Linux, you've got elf & a.out formats, libc5, glibc 2.0/2.1/2.2, XFree 3/4, Bind 4/8/9 (or any number of alternative resolvers, including the one built-in to glibc), etc. You really don't know what to expect.


    Scripting languages? There's no telling WHAT anyone'll have. The only thing you can be sure of is that there will be a /bin/sh, but that might be ASH, BASH 1.x, BASH 2.x, or any other shell that someone decided would be fun to use as standard.


    To stay resident, the virus or worm also has to find a place to stay. Not easy to do, with Linux. With Windows, you've a choice of FAT16 or FAT32. Oh, and maybe NTFS, if you're using NT. With Linux, you could be using almost anything. Sure, people will probably use what's installed as standard, as FS migration is non-trivial, but that still leaves ext2, ext3, reiserfs or XFS, all of which one distribution or another uses.


    Finally, there's security within Linux. But which security are you using this week? There's GRSecurity, LSM/SELinux, RSBAC, POSIX ACLs, various other ACL implementations, socket ACLs, and any combination of the above.


    Oh, and that's not including intrusion detection software, honeypots, firewalls, and all sorts of other similar code.


    In short, you can envisage a worm or virus which affects Red Hat 6.2 / Intel distributions that use the standard libraries and kernel. But you can't have a worm or virus which affects ALL running Red Hat Linux boxes - the variation is just too great. It gets much worse when you talk of all Linux boxes, and many many orders of magnitudes of absurdity greater when you talk of all POSIX-compliant UNIX kernels.


    To answer the original question of "is the Unix community worried about worms", the answer is "that depends on how homogeneous any person's network is". The "worry" level will probably be about the same as the homogeneity level.


    As for the community at large, the answer is probably "no". The community at large has such a high level of diversity that there is no single threat which could affect every system (or even a significant fraction of them).

    • I don't know... There have been some very nasty bugs in, for example, TCP/IP protocol stacks. They have been traced back to the same implementation across Windows, Linux, BSD etc.

      The one I was thinking of was capable of nuking most operating systems by injecting odd length packets (close to 64K in size).

      There's more commonality than you might think in places.
  • I think that it would be *possible* to write a worm targetting Linux machines right now, but it probably could never spread as quickly as the recent MS-specific worms we've seen. Even though many (most?) Linux distributions come with some relatively serious security flaws out-of-the-box, Linux is still a "geek OS". The average Linux user hopefully knows enough to apply most of the critical security updates, and won't be running too many unneeded services. Add to that the fact that while growing, there still aren't *that* many systems out there running Linux, and I'd say that the density of vulnerable Linux boxes out there is so low that a worm would have a difficult time spreading.

    As far as the future goes, though, unless the various distributors become more and more security conscious (I believe that they are doing this), we may be at risk. Doing such things as running potentially vulnerable services as their own userid, turning off unneeded ones, and only opening ports with an actual service that needs it open to the outside may seem like common sense to hopefully all of us, but these are things that distributions should automatically do for the newbie users.
  • Linux already has had a worm (or at least Redhat did). It exploited a problem with rpc.lockd. I still get portscans on 111.

    How quickly we forget that Linux too is vulnerable.

    • How quickly we forget that Linux too is vulnerable

      ipchains -A input -i eth0 -p tcp -d any/0 111 -j DENY
      Yes, linux is vulnerable. Simple recipe for keeping it safe: if you don't need it, turn it off. If you do need it, study the security history and upgrade the daemon if necessary. If it's sendmail, install postfix or configure it as non-root. If it's WU-ftpd try Pure-FTPD.

      On a side note, a default install of most linux distros turns a lot of stuff on that shouldn't be running if it's world accessible. So, like NT admins, linux admins need to study their install and find out what's there that they don't want. Upgrades are sometimes needed, services need to be stopped. The good news is that all linux worms to date are nothing more than automated script kiddies, so if you've kiddie-proofed your setup, chances are you're OK.
  • Hello?

    Ramen? 1i0n? Adore? Sound familiar? It's well past the "realm of possibility" - they've already been done. And these worms haven't been eliminated, either. I work in network security, and I see SunRPC scans and DNS scans, and a whole slew of different kinds of scans on my network *several times an hour*. Yes folks, *hour*.

    The fact is, people are running unpatched systems. And yes, a good majority of these systems are running Linux. The fact that the scans aren't letting up says that administrators:

    A) Are too ignorant to know there's a problem
    B) Too ignorant to fix the problem
    C) Don't give a shit.

    The thing is, the Open Source community is quick to act on these security problems and crank out a fix. In the case of Microsoft, the worms are usually a lot more destructive, so they receive more attention.

    It's quite sad when people can't patch a two-month-old hole, however.
  • Now that would be an achievement. If you found a hole in Linux/BSD and one in Windows (no biggie), the worm could attack either platform from either platform. Nimda, from what I understand, took a step in this direction in that it spread over both e-mail and HTTP.

    About the only worry I have about worms is the impact on the network as a whole and the PITA my job becomes whenever one gets out.
  • by fobbman ( 131816 ) on Friday September 21, 2001 @04:34PM (#2331821) Homepage
    Is the Unix Community Worried About Worms?

    If some of you hardcore *nix users would take showers more often than major holidays this wouldn't be an issue.

    Those of us who have to sit in stuffy cubicles within a 10' radius of you thank you for your consideration of this matter.

  • Despite having seen it stated several times here, the RTM Internet worm of 1988 was NOT the first worm. It wasn't even the first worm to crash machines, or the first network-distributed attack...

    In 1980 Xerox PARC published a paper called 'Notes on the "Worm" Programs -- Some Early Experience with a Distributed Computation' by John F. Shoch and Jon A. Hupp. It describes worm programs that were written at Xerox PARC and used for genuinely useful work. Unfortunately, an error in one of them left a lot of dead machines behind.

    I think the BITNET Christmas card "virus" of December 1987 also predates the Morris worm of 1988. It was more of a trojan than a worm, but when you ran the "card" it mailed itself to everyone it could reach.

    Neither of these was Unix-based.

    Z.
    1. Linux has a greater variety of software. Look at mail servers: we have Exim, Postfix, qmail and sendmail, and a vulnerability in one cannot (easily) be exploited in another. The single largest target is Apache, which is by far the most popular web server (with good reason; it is of very high quality). However, Apache has had almost no serious security flaws that I can think of; most compromises of Apache-based systems have come through sniffed passwords or poorly secured applications hosted on the server -- not the kind of holes worms can use.
    2. On a similar note, even if Linux/Unix takes over the world, there is likely to be a greater diversity of OS versions. At the current time I can't see Linux wiping out Solaris and AIX for a few years yet; I can see them coexisting and, hopefully, taking back ground from Windows.
    3. Even if Linux takes over, wiping out proprietary Unix, there are still likely to be different hardware architectures in use (e.g. x86, Itanium, Sledgehammer, SPARC, PPC, S/390), limiting the impact of a worm. By contrast, Windows is x86-only at the current time (although Itanium may come in soonish), which makes it easier for a worm to spread.
    4. While many MS server programs run as SYSTEM or equivalent "super-user" IDs, many Linux programs spend most of their time running as a non-privileged user (e.g. Apache runs as nobody or www, qmail runs under various UIDs). Thus the effect of an exploit is greatly lessened. Tools like chroot can also help contain any impact, although chroot is not a foolproof solution (see the rough sketch after this list).
    5. *nix worms have already hit; Solaris had the sadmind worm, Linux had lion. They hit for the same reason Code Red hit: unpatched systems. They had less impact, but it has to be asked whether that was due to lower market share or to better security practices among administrators.
    There is the potential for these worms to hit, but I think the general architecture of Linux and the diversity of applications should help lessen their impact.
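
    On point 4, the usual pattern is worth sketching. This is just a generic illustration of the chroot-plus-privilege-drop idiom, not any particular daemon's actual code; the jail path and the "nobody" account are stand-ins for whatever a real daemon would use:

      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <pwd.h>
      #include <grp.h>

      /* Do privileged setup as root, then lock the process into a chroot
       * jail and drop to an unprivileged account before touching any
       * untrusted input. */
      static void drop_privileges(const char *jail, const char *user)
      {
          struct passwd *pw = getpwnam(user);
          if (pw == NULL) {
              fprintf(stderr, "unknown user %s\n", user);
              exit(1);
          }
          if (chroot(jail) != 0 || chdir("/") != 0) {
              perror("chroot");
              exit(1);
          }
          /* Order matters: shed supplementary groups and the gid while
           * still root, then give up root itself. */
          if (setgroups(0, NULL) != 0 ||
              setgid(pw->pw_gid) != 0 ||
              setuid(pw->pw_uid) != 0) {
              perror("dropping privileges");
              exit(1);
          }
      }

      int main(void)
      {
          /* ... bind privileged sockets here, while still root ... */
          drop_privileges("/var/empty", "nobody");
          /* From here on, an exploited hole only yields "nobody",
           * confined to the jail. */
          return 0;
      }
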
    • Are you related to the rest of the retards posting on here today? Your first point is just ludicrous. There is a lot of Linux software, but there is a ton of Windows software as well. Just like Windows, for every one good program there is a slew of shitty ones, so the number of apps a system has has nothing to do with its quality. Your second point lacks merit because you're comparing an OS originally written for 386 computers to Unices that are designed to run on massively parallel systems with upwards of 64 processors and countless gigabytes of memory. As for point three, the architecture the system runs on has little to do with system-specific virii: Linux running on any ISA is going to have the same compiler, which compiles and links shit the same way. And that says nothing of logic exploits; if the same logic is shared across a bunch of ports, the same exploit will exist. As for four, you're just a retard. That's all I can tell you. Windows NT has always had protected memory and support for multiple users. You can run whatever you want as whoever you want. The fact that you run around as Administrator is not anyone else's fault.
      • Do remember that the earliest Unix was written for the PDP-7 and then the PDP-11. An original IBM PC (8088, 4.77 MHz) is a powerhouse next to those. I suspect the first Unix was security-free: it was written for a computer shared by Bell Labs programmers, and any security you could have implemented on that pitiful machine wouldn't have lasted an hour against one of those guys, so it was better to just trust them not to foul their own nests.

        That was in 1971, I think. Unix has come a very long way since then, including many security fixes. One advantage it has is that it's ten years older than DOS/Windows, so more holes have been found and patched. Another is that it ran on multi-user computers from the beginning, while Microsoft's first server OS (Windows NT, I think) didn't come out until the 1990s -- so the Unices may have a 20-year head start in thinking about security. And finally, some Unices are open source, and even the proprietary ones are far more open about how things work than Windows -- so there have been more friendly eyes looking for holes.

        There used to be mainframe OSes that were designed for security from the ground up. I wonder how those would stack up today against the Unices and Windoze, where security was patched in after the original design was set? Not so well anymore, I suspect -- they haven't been exposed to decades of probing...

        C came out of a similar environment at about the same time. Hence all the standard string functions that simply trust the caller not to do something that overflows a buffer: actually checking for overflow ate up too many cycles, so they trusted the programmer instead. But why are we still using these unsafe functions?
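
        For what it's worth, the fix is usually a one-line change. A toy illustration (the function names are made up) of the trusting style next to a bounded one:

          #include <stdio.h>
          #include <string.h>

          /* Classic trusting style: if 'name' is longer than 63 bytes,
           * strcpy happily writes past the end of buf. */
          void greet_unsafe(const char *name)
          {
              char buf[64];
              strcpy(buf, name);               /* no length check at all */
              printf("hello, %s\n", buf);
          }

          /* Bounded style: snprintf (standard since C99) writes at most
           * sizeof(buf) bytes and always NUL-terminates. */
          void greet_safe(const char *name)
          {
              char buf[64];
              snprintf(buf, sizeof(buf), "%s", name);
              printf("hello, %s\n", buf);
          }

          int main(void)
          {
              greet_unsafe("world");           /* fine only because input is short */
              greet_safe("world");
              return 0;
          }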

  • I think one of the reasons that Linux/BSD/etc. are more resilient than MS OSes is that there is much more diversity in the open-source gene pool. There are so many Linux distros, BSD variants, installation options, etc. that a worm might have a hard time propagating for very long, due to the high variability among servers.

    MS OS's, on the other hand, install to almost exactly the same configuration every time, and users don't usually bother to change many options. And there are only a handful of MS OS's, compared to open-source land.

    In the wild, hybrids seem to be more resistant to disease, more adaptable, and generally hardier. Linux/BSD are mutts.
  • Worm propagation is one of those n squared problems
    Actually, it's one of those exponential problems. Suppose we start with one infected system and each newly infected machine goes on to infect n more within an hour. Then n^t new systems are infected during hour t, and the running total after 24 hours is the geometric sum 1 + n + n^2 + ... + n^24 = (n^25 - 1)/(n - 1). For n = 2 that's 2^25 - 1 = 33,554,431 infected systems. Of course, in practice we run out of uninfected vulnerable systems after a while.
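
    A quick sanity check of that model in toy code (ignoring the finite pool of vulnerable hosts, so the numbers only make sense for the first day or so):

      #include <stdio.h>

      /* Toy propagation model: each newly infected host infects N fresh
       * hosts during its first hour, so hour t adds N^t new victims and
       * the running total is a geometric series. */
      int main(void)
      {
          const unsigned long N = 2;   /* new victims per host per hour */
          unsigned long newly = 1;     /* hour 0: the initial host */
          unsigned long total = 0;
          int hour;

          for (hour = 0; hour <= 24; hour++) {
              total += newly;
              printf("hour %2d: %10lu new, %12lu total\n", hour, newly, total);
              newly *= N;
          }
          /* With N = 2 the final total is 2^25 - 1 = 33,554,431. */
          return 0;
      }
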
  • "Dispite the difficulties in starting a worm on a Unix"

    Error: Unjustified statement. Requires backup evidence.

  • by Ulwarth ( 458420 ) on Friday September 21, 2001 @05:00PM (#2331987) Homepage
    You can't force users to stay up to date with security patches or even know anything at all about security. But there are things that OS and distribution maintainers can do to make their software more secure out of the box. I realize that many Linux distributions already do some of this stuff, but I don't think any do all of it. And, it applies to any OS, including those written by Microsoft.

    • By default, don't run any services! Windows 98 is more "secure" than Windows NT in this respect simply because it doesn't run any. A machine that is not explicitly set up by the admin to be a server has no business offering web, FTP, or ssh access.
    • By default, firewall all incoming and outgoing traffic on the public interface. Leave ports open on private interfaces (192.168.* and 10.*) so machines can still share files and printers on the LAN without frustration. There's no reason to make firewalling optional: if someone wants to run an external server, they should have to explicitly punch a hole in the firewall to the outside world. If they want to turn off the firewall completely, they can -- but it should be hard enough that they have to know what they're doing.
    • Get rid of telnet and rsh. Install them, maybe, but never have them running by default. Instead, offer ssh as the remote login option, and make sure sshd is properly configured (no root logins, no blank-password logins).
    • Encourage users to use blank passwords for desktop use, and then make it possible to log in only from the console when the password is blank. This applies to root, too. Since it's convenient, people will do it -- and if it's impossible to log in remotely with a blank password, it's secure, too.
    • Authors of server software have to make security a priority from the beginning. All user input should be run through a single, highly paranoid function that clips length and filters out any characters that are not explicitly needed. Keep careful track of "trusted" versus "untrusted" values in the code, possibly going as far as giving them special names like untrusted_buf or trusted_url. (A rough sketch of such a filter follows at the end of this comment.)
    • Distributions should GET RID of old, clunky, insecure programs such as sendmail (replace with postfix), wu-ftpd (replace with proftpd), inetd (replace with xinetd), etc.

    Following these steps, I think distributions would be fairly safe from any newly discovered server vulnerabilities, and probably most client-side ones as well.
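
    To make the input-verification point concrete, here is a rough sketch of what such a paranoid filter might look like; the whitelist, the limits and the function name are arbitrary examples, not taken from any real project:

      #include <stdio.h>
      #include <stddef.h>
      #include <string.h>

      /* Copy untrusted input into 'out', clipping it to out_size - 1 bytes
       * and silently dropping any character that isn't on a small explicit
       * whitelist.  Returns the number of bytes kept. */
      size_t sanitize_input(const char *untrusted, char *out, size_t out_size)
      {
          static const char allowed[] =
              "abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "0123456789._-@/ ";
          size_t kept = 0;

          if (out_size == 0)
              return 0;

          for (; *untrusted != '\0' && kept + 1 < out_size; untrusted++) {
              if (strchr(allowed, *untrusted) != NULL)
                  out[kept++] = *untrusted;
          }
          out[kept] = '\0';
          return kept;
      }

      int main(void)
      {
          char buf[32];
          sanitize_input("foo; rm -rf /", buf, sizeof(buf));
          printf("kept: \"%s\"\n", buf);   /* prints: kept: "foo rm -rf /" */
          return 0;
      }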

  • I really love the "my answer to a Linux exploit is apt-get update" posts. Nothing like trusting a completely automated process to solve all of your problems. All it would take is a nice little bit of malicious code in some header to fuck a bunch of people over. If you're not going to review the code before you install it, why the fuck are you so anal about using open source software?
    • Because somebody can. I can blindly trust some anonymous person somewhere whom I have no way of checking, or I can trust a fellow developer who would get expelled from Debian if he tried to "fuck a bunch of people over" (i.e., accountability). At least three or four people see any change that goes into any major package, and any number of people can look at the code at any time. If you put a back door in, you will be found out sooner or later, and people will know who did it.
  • by ffatTony ( 63354 )

    A worm that overpowers Apache and executes code on my machine as user 'nobody' (the user my Apache runs as) really doesn't concern me. I suppose it could delete most of my /tmp partition.

  • That's right: marketshare doesn't matter. And here, I'm taking "marketshare" to mean either (a) the number of servers sold or (b) the number of servers running.

    The reason marketshare doesn't matter: every server connected to a TCP/IP network is "touching" every other server connected to that network. Marketshare has no bearing on which servers can possibly infect which others in a population; only connectivity does. Essentially, the population of Unix servers on the Internet all "touch" one another, just as the population of IIS servers all "touch" one another.

    That said, it hasn't really been a banner year for Linux/Unix/BSD worms. We've seen adore [sans.org], l1on [sans.org], cheese [cert.org], ramen [sans.org], sadmind/IIS [cert.org], lpdw0rm [lwn.net], and x.c [dartmouth.edu], and absolutely none of them ripped through the Linux/Unix/Solaris/BSD population. This is indisputable. The question is why one population has resistance while the other doesn't. I think the answer is diversity on four levels:

    • CPU architecture. Sure, Linux/Unix/etc. boxes are overwhelmingly x86-based, but having a sprinkling of SPARC, Alpha, MIPS and PPC probably makes a difference -- no single shellcode or exploit covers all architectures.
    • OS architecture. Differences in calling conventions and syscall interfaces probably prevent a "universal" shellcode from working on every OS a given CPU architecture runs.
    • Web server variety. Sure, Apache dominates, but WN, iPlanet and thttpd have a presence.
    • Userland software variety. A huge variety of mail clients that don't share a common scripting language or address-book format keeps Nimda- and SirCam-like outbreaks from happening.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."
