Security

Changes in the Network Security Model?

Kaliban asks: "As a Sysadmin, understanding network security is clearly an important part of my skill set, so I wanted to get some thoughts on a few things I've seen recently after discussions with co-workers. Are network services becoming so complicated that application-level firewalls (such as ISA Server) are absolutely necessary? Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)? This leads me to my next question: has the paradigm of 'if you offer external services to the Internet then place those machines onto a perimeter network' been eroded? Are application-level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet? When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."
  • by Eponymous Cowboy ( 706996 ) on Tuesday September 30, 2003 @12:45AM (#7091270)
    There are three disparate levels of security you need to consider, and it is advisable to take a three-tiered approach to the problem.

    First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall. Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network. From your point of view and theirs, it is as if their machines were physically located on the other side of your firewall--just like having the machines right in your building.

    Second, for business partners and contractors who need limited access to a subset of services, but whom you do not trust fully, the answer is quite likely also a VPN, but not directly into your network. For services provided to these people, you want everything from your end first going through application-level firewalls, and then through the VPN, over the Internet, to them.

    Using a VPN in these cases prevents random hackers from entering your network on these levels.

    Finally, for the general public who simply need access to your web site, the ideal situation is to simply host the web site on a network entirely separate from yours--possibly not even in the same city. Use an application-level firewall to help prevent things like buffer overflows. Then, if your web server needs to retrieve information from other systems on your network, have it communicate over a VPN, just like the second-level users mentioned above--that is, through additional levels of firewalls to machines not directly on your primary network. (Basically, you shouldn't consider your web servers as trusted machines, since they are out there, "in the wild.")

    By following this approach, you expose nothing more than is necessary to the world, and greatly mitigate the risk of intrusion.
    • by redhog ( 15207 ) on Tuesday September 30, 2003 @01:06AM (#7091306) Homepage
      One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors and need to be shielded off, so a simple VPN solution is no solution.

      We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-look-alike front-end to scp), which they have had no problems using (they did not have any opportunity to work from home before, so they don't complain :).

      We also plan to later on introduce AFS and allow remote AFS mounts, and VNC remote-desktops.

      Locally, we have a simple port-based firewall, basically walling off all inbound traffic except ssh and http (and allowing nearly all outbound traffic), and we keep our OpenSSH and Apache servers updated (have you patched the two ssh bugs reported on /. on your machines yet?).
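
      A minimal iptables sketch of that kind of policy (the default-accept on outbound and the exact rules are illustrative assumptions, not our actual ruleset):

        # default-deny inbound, allow replies plus ssh and http
        iptables -P INPUT DROP
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        # outbound left mostly open
        iptables -P OUTPUT ACCEPT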

      So, my advice is - keep it simple. Do not trust an overly complicated system. And keep your software patched for the latest bugs - keep an eye on the security update service for your distro/OS and on bugtraq.
      • Well, security through obscurity works. He should get a bunch of VAX boxes and force employees to use Macs at home.

        In all seriousness, he should be looking at a system that minimizes any potential damage, not some fancy firewall solution that costs a bundle. Close down the ports and keep users from loading spyware or opening email with executable attachments.
        • Close down the ports and keep users from loading spyware or opening email with executable attachments.

          The point is, though, there is no way to do this with their home computers. Sure you can do it with a well locked down internal network with all patches applied. Maybe. But you'd better have a damned hot perimeter system to scan for anything unauthorised being downloaded. You'll have to disallow encrypted content, too, and anything that could be a compression format you don't recognise. Or you c
      • Um... a vpn is a hole in the firewall.

        There really is nothing magic about vpns, they are quite dangerous - they can provide clear access to your internal network.

      • One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors and need to be shielded off, ... We've solved the most immediate problem by allowing only ssh

        Allowing ssh from a box that is potentially 0wn3d and running a key logger is a big hole in your security. Requiring a hardware firewall/VPN box on home systems could at least temporarily keep th

        • This pretty much covers my question - the home users (cable modem) that have VPN access - will a Linksys (or SMC, or D-Link, or whatever) cable modem router (also a hardware firewall) be enough security to ensure that they don't get 'pwn3d' from the outside (assuming they remember to change the default password on the router, and they don't go poking holes in it so they can host a damn Quake 3 Arena server)?

          Oh yea, and assuming they enable WEP on the wireless ones. Which two out of three home owners (in m
      • Normally, all VPN solutions should include the ability to enforce a firewall policy on the client side.

        Some of them can also enforce the use of an up-to-date antivirus ...

        (IIRC)
      • One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors and need to be shielded off, so a simple VPN solution is no solution.

        We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-look-alike front-end to scp), which they have had no problems using (th

        • What you should do is use only the sftp-server subsystem and give users a fake shell (unfortunately sshd needs a shell to run the subsystem via -c). Three lines and you're done. If they try to log into it or run anything other than sftp they get "You have no shell access" or something to that effect.
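
          A rough sketch of such a fake shell (the sftp-server path is an assumption; adjust it for your install):

            #!/bin/sh
            # sshd runs the user's shell as: $SHELL -c /path/to/sftp-server
            # so $1 is "-c" and $2 is the requested command
            case "$2" in
              */sftp-server) exec "$2" ;;
              *) echo "You have no shell access"; exit 1 ;;
            esac

          Point the user's login shell in /etc/passwd at that script and only the sftp subsystem will ever run.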

          If you allow them to use scp, yes they need to have a full shell then.

        • by julesh ( 229690 ) on Tuesday September 30, 2003 @06:47AM (#7092310)
          SSH allows port forwarding, even backwards (i.e. you can run SSH sessions into the company by contacting an outside server and connecting back over that very ssh connection).
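
          For example (hostnames invented for illustration), a user on the inside can run:

            ssh -R 2222:localhost:22 user@outside-server.example.com

          and anyone on outside-server.example.com can then "ssh -p 2222 localhost" straight back into the internal machine, through the firewall.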

          There's a very simple solution to that.

          Put "AllowTcpForwarding no" in /etc/ssh_config

          Simple.

          (Aside: there is a note in the openssh manual that reads "Note that disabling tcp forwarding does not improve security in any way, as users can always install their own forwarders." I think this only applies if you give them unrestricted shell access. See another post in this thread for information about a restricted shell that allows scp to work but prevents other stuff from executing).

        • You can tunnel (and back-tunnel) any protocol through any other. Which kind of leads to the original question: Yes, you do need to be looking inside the packets. But there will always be ways around/through. People have tunneled through ICMP pings, DNS lookups, and GotoMyPC even reverse tunnels a VNC/PCAnywhere type application through HTTP. Those are all payloads (layer 7). Inspecting at layer 3 or 4 (IP/TCP) doesn't help, and even application-layer proxies (actually closer to layer 6) aren't likely to det
      • allowing nearly all outbound traffic
        Why? If it's not needed then block it. DENY ALL should be your default rule for both incoming AND outgoing connections.
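
        A default-deny-both-ways sketch in iptables (the specific egress allowances are just examples of what you might explicitly permit):

          iptables -P INPUT DROP
          iptables -P OUTPUT DROP
          iptables -P FORWARD DROP
          # allow replies to connections you initiated
          iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
          # then open only what is actually needed, e.g. outbound DNS and HTTP
          iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
          iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT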
      • One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors and need to be shielded off, so a simple VPN solution is no solution.

        You can't trust the corporate network either. The one and only time a virus/worm successfully got into my home was Blaster this summer when I was VPN'd into the office. Systems in the office infected the laptop I had brought h

    • by kennyj449 ( 151268 ) on Tuesday September 30, 2003 @01:16AM (#7091319)
      In my opinion, between the danger of worms transmitted above the application level and the existence of uneducated users (in many cases, uneducatable), as well as the whole physical security issue, even an internal network is not to be trusted (though few are actually worse than the Internet, except for pervasive wireless networks that don't use a strong, non-WEP encryption solution.) VPNs can definitely be very useful, but using them only at the outer edges of your network (e.g. internet-based links) leaves you wide open to any form of attack that originates from inside, which is always a danger no matter how good your external defenses are.

      Personally I don't think that physical separation is necessary if you're going to be using a strong VPN, because you can make it so that the only traffic that passes back and forth goes through a VPN and is then no less secure (if anything more secure, except for the purposes of physical security) than if the traffic were being passed over the internet. You also get the advantage of increased throughput, a single physical site (or fewer sites) to manage, and lower bandwidth costs. Every little bit helps...

      In any case, it is my opinion that any computer which can communicate with others on the internet, no matter how well-restricted such communications are, should itself be considered non-trustworthy. It might be safer for being behind a firewall, but it can still grab a trojan or worm either through accidental or intentional means and become a staging point for internal attacks. It is for this reason that I personally believe that it is imperative to ensure that every computer on a network is secure and has personal firewalling of some form installed (if you're dealing with *nix workstations this is a no-brainer for a competent admin; Windows boxen will benefit greatly from simple solutions such as Tiny Personal Firewall.)

      This all goes double for boxen which are physically located outside of the network and which VPN inside (this is the reason for that last paragraph's worth of rambling.) A certain amount of distrust should be exercised for computers which can find themselves poorly protected from the dangers of the internet at times, and as such it is not only necessary to keep such boxes under close scrutiny and send their traffic through a decent firewall, but also to either educate users (as well as possible) on good security or require as a matter of policy that they utilize certain security measures (a personal firewall combined with a regularly-updated antivirus application is a potent combination that goes a long way towards keeping a computer clean.) Assuming that a VPN is a safe connection is a recipe for disaster; it prevents others from listening in but otherwise it is no better than any other old TCP/IP connection.

      VPNs, of course, can be quite useful on an internal network. Packet sniffers tend to have difficulty picking up on SSH as it is, but put that through a 1024-bit encrypted tunnel and it becomes exponentially more difficult to crack apart (and such layering protects you from vulnerability, as there are now *two* effective locks which must be picked in order to gain entry.) It isn't going to make a difference between two servers connected with a crossover cable which enjoy strict physical security, but when traffic is being passed over a network with old Windows 95 boxen running Outlook, it pays to be prudent. Such encrypted separation, when used intelligently, can often eliminate the need to physically separate network segments when connectivity can be useful.

      Oh, one last point: if you're using a WLAN, it's only logical that unless it's strictly for visitors doing web surfing and chatting on AIM, a VPN is useful there as well. WEP is both less useful and far less effective.

      As for a good VPN technology to use for any application, IPSEC is always handy (and enjoys excellent and robust out-of-the-box support in the more recent revisions of... almost everything.)

      Sorry if this seems a bit unclear, but I've had a long day. :)
    • by smitty45 ( 657682 ) on Tuesday September 30, 2003 @01:19AM (#7091334)
      VPNs are great until you realize that they provide only *temporary* access to your office network. What happens to those road warriors' machines when they're not VPN'd in but are still on the internet?

      are they firewalled properly ?
      are their virus definitions updated ?

      if no or "don't know" to either of those, then having a VPN will compromise any amount of safety it could bring. in other words, it's possible that the lastest and greatest worm that wasn't able to penetrate your office network until you patch is now vulnerable due to the work-at-home employee who VPNs in, and is now infecting everyone.

      a bottom line is to have a well thought out security policy and PROCESS....and that only comes with training, more training, and training. Some education would help, too. Even people like Mudge and Dan Greer don't stop learning.

      and for those who would call your questions stupid...they are the folks who are afraid to ask the stupid questions.
      • That's one of the unique features I do actually like about CheckPoint's FireWall-1 suite; their SecureClient VPN client software allows the firewall administrator to push firewall policies to be enforced locally on hosts intended to be VPN clients.

        It's not perfect of course, as a host could be compromised before SecureClient is installed, but in a controlled environment, that should never really be the case.


        • their SecureClient VPN client software allows the firewall administrator to [...]

          Never trust a client-side security solution. Sure, it helps, but reinforce it with added protection on the server side in case somebody subverts it (e.g. by using a hacked client or a reverse-engineered reimplementation that lacks this feature).
          • Sane FW-1 administrators won't be allowing any and all protocols through the firewall/VPN gateway just because they appear to come from an authorised SecureClient VPN client.

            Having the remote security policy functionality does allow the firewall administrator to have a reasonable degree of trust in the VPN clients though, to the extent that they probably aren't 0wn3d and being actively used as gateways into the corporate network or whatever. Especially so if the clients (laptops, usually) are properly loc

    • by segment ( 695309 ) <sil&politrix,org> on Tuesday September 30, 2003 @03:02AM (#7091695) Homepage Journal
      First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall.

      While this is simple to state, how many companies will follow this rule? Companies are not going to jail their users, so the first time someone wants to listen to mp3s or streaming music, up goes Real or Windows Media. What? You want to see the stock ticker from Bloomberg? Sure, now you have multicasting crap. Get real, and that's not including someone who knows about things like datapipe.c

      Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network.

      You're either blind or too trusting in people. Remember the biggest security hole often comes directly from the inside. For instance, I know someone who has a VPN through IBM for her work. Lo and behold she wanted to take that same machine and hook up DSL to it. Say goodbye to security over VPN.

      I won't get too deep into this since I'm tired, but a VPN isn't always the answer. The answer is actually education. Instead of spending on a Cisco PIX or Nokia VPN machine, try holding monthly meetings with employees and make them aware of issues. It doesn't have to be a full-blown Harvard presentation, but a quick PowerPoint presentation will actually teach them things they can carry on to their home or future place of employment. VPNs are like security through obscurity in a way. If someone wants in, a VPN will do nothing to stop them.

    • You can trust your employees [bbc.co.uk]?

      Don't ever believe that your employees won't attack you. Some will attack you by accident (bringing infected machines into the office or something), some will even attack you out of spite.

      You should only give trust to entities you have to trust in order to get the job done. You have to trust (some of) your servers or IT staff.. but you shouldn't have to trust most of the internal network.

      Where possible, you should treat your network machines in the same way as you'd tr

  • Keep it simple. (Score:4, Insightful)

    by SatanicPuppy ( 611928 ) <SatanicpuppyNO@SPAMgmail.com> on Tuesday September 30, 2003 @01:00AM (#7091289) Journal
    The beauty of the traditional firewall is its simplicity. IPTables hasn't been exploited in forever, except through user error. It's reliable and secure, and easy to understand/debug.

    Application firewalls and filters are complex. To me that means more can go wrong, more holes can be found. And they have to be super effective if they're a line of defense. Sounds nasty, like those stupid .NET commercials: "1 degree of separation between you and your customer!" 1 degree? In what fairyland? Do they WANT to be hacked?

    For my money, keep the perimeter boxes. Defense in depth is a great strategy. They will get some, but they won't get them all.
    • Re:Keep it simple. (Score:4, Informative)

      by dzym ( 544085 ) on Tuesday September 30, 2003 @01:24AM (#7091347) Homepage Journal
      Just as a point of comparison:

      As part of a security test, we placed an NT4 SP4 box with an unpatched install of the Option Pack--to install IIS (note that this is perhaps the most easily exploitable Windows configuration on the face of the planet)--behind an ISA SP1 firewall running on Windows 2000 SP3. We were unable to compromise or otherwise DoS either of the two NT servers with readily available exploit code, for IIS or otherwise, on either operating system.

      Now, it may be possible to still exploit the aforementioned NT boxes, but clearly it would have taken a great deal more effort than just running a NIMDA-alike on the NT4 box.

    • by 0x0d0a ( 568518 ) on Tuesday September 30, 2003 @02:38AM (#7091641) Journal
      You're right. Application firewalls are a terrible, unsolvable hack. Of course, firewall vendors love 'em, because you'll be paying them for updates until kingdom come, like antivirus vendors.

      Take a look at this part of the original post:

      Are network services becoming so complicated that application level firewalls (such as ISA Server) are absolutely necessary?

      Yes. They are. You know why? Because jackasses thought it would be a great idea to slap firewalls on everything. It's an easy, one-off fix that's centralized. Does jack for actual security, but it's easy to sell to management, so IT people constantly claim that everyone needs firewalls all over the damn place.

      So now we have a ton of firewalls inhibiting functionality all over the place. Do application vendors simply say "Gee, I guess we'll give up on doing interesting things with the network", due to the best efforts of short-sighted sysadmins? No. They do ugly, slower, less reliable and harder-to-monitor things like rebuild everything and ram it through SOAP. And then sell the same stupid product right back to the "firewall-enabled" company. Now, everyone loses. The security is just as bad as before. The user gets a slower, less reliable experience. The sysadmin has a harder time monitoring usage and troubleshooting (since everything is obscured by the layer being used to bypass his firewall).

      Firewalls are the single most oversold computer product ever, having displaced antiviral tools in the last year or so. Nothing ticks me off more than some sysadmin shoving another firewall in front of users.
    • Traditional firewalls (port filters) are like using a picket fence to stop a flood.
  • Multiple Firewalls (Score:3, Interesting)

    by Renraku ( 518261 ) on Tuesday September 30, 2003 @01:01AM (#7091291) Homepage
    I can see where the desire for more than one firewall is going to go up. Here's an example. At the border, you might have a hardware firewall set up, before data can even get to the machines. Then you might have a per-cluster firewall, so each department or cluster of computers can set its own policies for what gets in and what doesn't. Then there would be the firewall on each machine, which could be set according to the uses of the machine. So there would be three layers of shielding before you even get to the security features of the OS itself. Or you could just go VPN like someone suggested. Another good idea would be to have some kind of username/password setup so that some people could bypass the first firewall, and the issue of 'trust' wouldn't be as big as allowing someone to zip through all the firewalls.
  • Immature Technology (Score:5, Informative)

    by John Paul Jones ( 151355 ) on Tuesday September 30, 2003 @01:04AM (#7091300)
    Are application level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet?

    Nope. That should never happen.

    The problem here is that application-level firewalling is fraught with problems. The lack of intuitive management for this type of firewalling is a problem that quite a few companies are trying to solve -- with limited success, so far. The problem is that as you move up the OSI layers, the variables increase exponentially. If you think that 65,536 is a big number, try writing an application-level script that permits "acceptable" MAPI requests while denying "unacceptable" MAPI requests. How do you determine that this NFS packet is good, and this one is bad? From the same host to the same server? How about X11? SSH? Oh, and don't break anything while you're at it. Lions and tigers and bears, Oh my!

    These are the problems of an immature technology. As time passes, these issues might be somewhat mitigated, but there are plenty of "network administrators" that haven't fully grasped the concept of IP, and struggle with L3/L4 firewalling, to say nothing of moving up the stack.

    Here's a tip, though; look for Bayesian filters in firewalls in a few years. That will be a trip.

    • Bayesian filters (Score:3, Interesting)

      by SuperKendall ( 25149 ) *
      A general question - Bayesian filters are great for email because a user trains them. But do you think it will ever be practical to "train" a firewall as to what is good and bad traffic? I guess to some extent you could use regression tools to generate the sorts of traffic you like... but it seems like such a thing would have to have a pretty high threshold in order not to drop any real traffic. I'm not sure such a device is practical.
      • I don't really know enough about the subject, but might it be possible to train a firewall that certain types of DDOS attacks might be "bad traffic", such as repeated requests on certain ports, opening large numbers of http connections without continuing the transaction, etc?

        I'm pretty sure some firewalls do this sort of thing already, too...
      • Well, if 1% of attacks getting through is acceptable to you, with 0% dropped legitimate packets, then yes, Bayesian filters are OK for you.
        But 1% of network attacks getting through is just as much trouble as letting 100% through. So why bother?

        But, having a bayesian-filter enabled firewall sounds like a really, really cool thing, with that latest security buzzword, so expect to see them soon after all!
      • Now, I'm not a security guy, but I see some use here. Not that a bayesian filter will be good for you alone, but they might be good in general. With each sys-admin giving completely different training, general system security might get a bit more diverse, and an attack would less often work on all systems. Of course, you'd want to have regular fire-walling as well, but why not have an application level bayesian filter to filter out a little bit more?
  • by Anonymous Coward
    People run a firewall to block services that are running but that they don't use. Riiight. Instead they should just *not* run the services that they don't want. Then they wouldn't need a firewall.

    Gee, even RedHat jumped on the firewall bandwagon. At install time, instead of selecting which services I want to run, it runs God knows what and asks me which *ports* I want to open. Now if I want to run some new network service I have to waste time learning how to fiddle with this "firewall" so that the new ser
  • ideal vs practical (Score:5, Insightful)

    by vt0asta ( 16536 ) on Tuesday September 30, 2003 @01:30AM (#7091378)
    You're going to get a lot of answers about how, in a perfect world, there will be DMZ this, several layers of routers that, firewalls in between them all, VPNs, NIDs, and a whole bunch of other things that may not be applicable.

    The answer really depends on what you are protecting and whether or not the security required to protect it is worth the cost.

    The only way application-aware firewalls CHANGE the paradigm of network security models is for a certain class of protection. Usually that line of protection, or train of thought, is "we would like something slightly better than nothing".

    If you need more protection than that, it sounds like you already know what best practice is. That hasn't changed, and you are not wrong to say so to your co-workers.

    Think of it along the lines of what the military would do. Just because there is some new whiz-bang motion-tracking CCTV x10 ninja thing that shoots lasers, you had better believe they are still going to have soldiers with rifles in watch towers, soldiers walking the perimeter, and 20ft of dead-man zone and razor wire fences surrounding it all, along with the whiz-bang consolidating gadget.
  • Some add'l tidbits (Score:5, Informative)

    by Anonymous Coward on Tuesday September 30, 2003 @01:31AM (#7091391)
    First off, remember - you won't be able to think of everything. No security model is complete without behind-the-wall systems, be they basic monitoring systems up through more sophisticated custom snort or proprietary IDS. It all depends on your paranoia level.

    There are a few ways to handle the bane of netadmins - 'I wanna get to my files!' VPN, as suggested, is one solution - but not without problems. Recent issues with X.509, OpenSSH hacks for IP-over-SSH, etc. You can mitigate the danger by using a set of consistent criteria for each of your requirements, like a checklist. For example:

    1) Is the service mission-critical? (BOFH them if no!)
    2) Can the service be offered through a less-vulnerable channel? NFS mounts moved to SFS, perhaps, or encrypted AFS as mentioned above.
    3) Is there a way to move the service into a perimeter network (or outside entirely)? Even if this means synchronizing a set of data to an outside machine via cron (see the sketch after this list), if the data on the machine is less important than the internal network security, this can help.
    4) Once the user is connected, authenticated, and has access, *THEN* what can go wrong? What could they do maliciously? What could they do accidentally?
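
    As a rough example of that push-only synchronization (hostnames, paths and schedule are invented for illustration), a cron entry on an internal box could be:

      # push the public data set to the perimeter host every night at 2am, over ssh
      0 2 * * *  rsync -az -e ssh /export/public-data/ syncuser@dmz-host.example.com:/var/www/data/

    The perimeter host never needs credentials for, or a connection into, the internal network.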

    Personally (and this is just me talkin', no creds here) I tend to reflexively say "NO!" until convinced otherwise. I know that there are services which *must* be available through the wall, but I want the requestors to have to work to convince me. Closed systems are more secure.

    Also, don't be afraid to investigate low-tech but simple and effective means of circumventing problems. First thing I ask users who want to get an occasional file home - "Can you mail it to yourself?" Second thing: "Would you be able to use a 'public folder' that I have synch to an accessible box, say, every half hour?"

    I second the opinion of iptables. It's a sharp tool, so be careful - but correctly applied, it kicks the pants off most application or appliance firewalls. Invest the time to learn the sharp tool, and you'll realize that most of what you pay for on big expensive firewalls is manageability (i.e. Java GUIs, wizards, databases, multiple systems preconfigured - IDS, firewall, proxy, etc). Do the work.

    Good luck. Don't listen to people who berate you for 'not knowing things.' Attempting to learn them in advance - due diligence - is a sign of a good admin. Be thorough. And above all, find a friend who does the same kind of work, and check each other. Probe each others' networks. Try exploits posted on the net.

    Final, and most important - software updates. The boring part, but the most critical.

    Cheers.
    • by gbjbaanb ( 229885 ) on Tuesday September 30, 2003 @05:13AM (#7092034)
      First off, remember - you won't be able to think of everything.

      Thank you, you reminded me of the number 1 rule of security planning. In all of /. everyone is going on about VPN, SSH, etc - all technological solutions - and forgetting the real situation.

      Security is all about risk planning. There is no way you can plug all the holes, restrict all the access properly, and manage all the resources. So the question becomes not 'how do I stop it?', but 'what will I do when it goes tits up?'. Also, as someone undoubtedly has said, the only perfect security is in a concrete box, sunk to the bottom of the ocean. Well, yes... but you always have to trade off security for usability. What's the point of being networked if no one can access their files? People can access their files: dangerous security hole!

      You see - it's OK having all the security products in place, setting them up perfectly, but then an employee logs on to the database and walks away with a backup of all the credit cards...

      And employee #2 gets a new toy, a wireless LAN thing, and a passing hacker (there's always one) doesn't even have to break a sweat listing off those same credit card numbers.

      Think *all* your employees are trustworthy (haha)? Well, what happens if someone walks into your offices (for a meeting, for instance), surreptitiously plugs a wireless laptop into a network port, tucks it under the chair and walks off? It doesn't even have to be a spare port; they can plug in a little hub.

      You could even ignore the technological security measures entirely - sure, you'd get hacked more often, and you'd have to do a lot more work recovering the system - but the point is that having the security products in place does *not* mean you'll never have to worry about performing that recovery process, so you still need one.

      So, accept that it may go wrong at any time, and figure out what you'd do when it happens. You should also have a disaster recovery plan - for when the server room floods and is hit by lightning, or two hard discs go pop at the same time.

      Security - all about how much risk you'll accept, little to do with securing systems.
  • by Lurgen ( 563428 ) on Tuesday September 30, 2003 @01:33AM (#7091400) Journal
    The minute we started encapsulating protocols within other protocols, we made it absolutely necessary to have application-layer firewalls.

    RPC over HTTP is a good example of this, as are the many other protocols people see fit to encapsulate in HTTP (RDP / Terminal Services, instant messaging, etc).

    Originally, the rules were dead simple. One port == one protocol. Some protocols used multiple ports, but even then it was kept nice and simple. But no, not everybody liked this situation. In the interests of making IM available to more people, clients started using HTTP so that even office staff (behind firewalls and proxies) could use it. Sure, this was blatantly circumventing the firewalls that were put up for this very reason, but that didn't stop anybody.

    Application layer firewalls are a must-have. Of course, that will just force people to start using SSL... :(
    • by oniony ( 228405 ) on Tuesday September 30, 2003 @08:20AM (#7092781) Homepage
      Layering came about because of systems administrators' inflexibility in reacting to the need for new services to be accessible. HTTP is one of the few protocols that are allowed through firewalls, because of over-zealous blocking of everything else. Because of the need for applications to work, people have realised that the only way forward is to get their protocol to run over HTTP, hence SOAP and the rise of XML. I've experienced this need to layer first-hand on many occasions.

      Developers tend to do the least work necessary to achieve the result they desire. The fact that so many protocols run over HTTP now indicates that the developers of the applications that use these protocols have been unable to persuade the systems administrators to open ports so that they could get their applications to work. Instead they resorted to the harder task of layering to avoid the blocks.

      The sysadmin who said "I like people to do some work to convince me" says it all. The attitude is that of a power-monger. A pragmatic sysadmin would work with the application developers to find a solution. Maybe they frown upon opening ports for applications, but they should at least put the effort in to explore the options, otherwise we'll always end up with this layering effect for every networked application. I wonder how long it will take before we end up with protocols running over HTML/HTTP to avoid the application firewalls that start blocking non-HTML HTTP traffic.
  • Lesson number 1: (Score:4, Insightful)

    by suso ( 153703 ) on Tuesday September 30, 2003 @01:34AM (#7091406) Journal
    Don't implicitly trust what you read on slashdot.org.

  • Face it folks. Provisioning security services at network perimeters is just wishful thinking, and this is not a new insight. Traditional packet filtering firewalls are absolutely necessary (do you walk around your neighborhood naked?) but they must become much more widely distributed *inside* large networks in order to be effective. The same applies to application filtering technologies (some of which are very promising) and all the other stuff people think of as perimeter defenses. Any attempt to set up la
  • "When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."

    Those who don't need to pass traffic inside are afforded that luxury due to the fact that they don't have a job. Anyone can decently secure a network that doesn't interact with anything; the real trick is allowing business to flow as usual and *still* have an acceptable level of Security.
    • I seem to have misread the original piece. He was talking about passing into the *internal* network, not passing at all.

      (I hate jackoffs that don't read the original post correctly) :)
  • by egarland ( 120202 ) on Tuesday September 30, 2003 @01:47AM (#7091459)
    There is no one answer. If security is your only concern you should have as many layers of security as possible with firewalls between each layer locked down as tight as possible. That said, security is never your only concern. Cost, ease of maintenance, performance, and flexibility are all important in a network design. After all, the purpose of your company is probably to get something accomplished, not to avoid getting hacked. There are times when every different network configuration is appropriate from super secure to a cable modem router to a windows box right on the internet. There is no one answer.

    Application layer firewalls are another layer above port filtering. They can increase security and could, in theory, make it worthwhile to share a service hosted on a machine that is inside your network. I would only do that if you trusted the security of your internal network. Most network designs assume that once you get in to the "internal network" there is no more security and all your deepest company secrets are available to anyone browsing around. If this is true, you've probably made some bad decisions somewhere along the way and you should address those before you open any holes. If you are willing to maintain strict security on your internal network then the added simplicity of allowing Internet access to machines on it can be worth the risk. This can be a lot easier than setting up a DMZ.

    Usually layers do make sense though, even if one of the layers is just a Linux box doing firewalling, routing and serving some services. One thing I like to do is to mix operating systems at different layers. That way if you get a worm of some kind that gets into one layer it won't penetrate to the layer behind. For example, internet facing servers are Linux based, desktops are Windows based.

    Another thing I have done when I absolutely needed a Windows-based web server is to set up Apache as a reverse proxy, only forwarding requests for a particular subdirectory to the Windows server. This filtered out all the standard buffer overflow attacks since none of them referred to that subdirectory name. It also made sure the requests were relatively well behaved and buffered outgoing data for the Windows box, reducing connection counts when it was under high load. This is an easy way to do an application layer firewall, and if you are firewalling with a Linux box you can do it right on the firewall.
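
    A minimal sketch of that reverse-proxy setup in Apache (mod_proxy enabled; the hostname and the /app/ path are invented for illustration):

      # httpd.conf excerpt: only /app/ is forwarded to the internal Windows box
      ProxyRequests Off
      ProxyPass        /app/ http://winserver.internal/app/
      ProxyPassReverse /app/ http://winserver.internal/app/

    Requests for anything outside /app/ never reach the Windows server, which is what keeps the usual worm probes against default IIS paths from ever arriving.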
  • by canning ( 228134 ) on Tuesday September 30, 2003 @01:47AM (#7091461) Homepage
    When firewalls don't do the job - Mike Fratto, Sep 29 2003

    Battle lines have been drawn, and volleys are being lobbed between the analyst and vendor camps. In dispute: Whether intrusion prevention is out of commission or the next network security salvation.
    On one side, Gartner has cast intrusion detection into its "Trough of Disillusionment", saying the tech has stalled and calling for these functions to be moved into firewalls. Meanwhile, intrusion-prevention product vendor ForeScout Technologies vows to identify and block attackers "with 100% accuracy".
    Call us Switzerland, but we say neither group has a lock on the truth.
    Network intrusion prevention (NIP) systems probably will not protect your network from the next zero-day exploit or troublesome worm, but they are not a waste of time or money, either.
    Our position puts us in the minority: Though we think NIP systems can enhance an existing security infrastructure, we do not consider integrating intrusion prevention and firewalls into a single unit a desirable goal.

    Firewalls vs NID
    Firewalls have a largely static configuration: firewall administrators define what is acceptable traffic and use the features of the firewall to instantiate this policy.
    Some firewalls provide better protection features than others. For example, an HTTP application-level proxy is far superior to an HTTP stateful packet-filtering firewall at blocking malicious attacks, but the basic idea is the same: Your firewall administrator can be confident that only allowable traffic will pass through.
    If you have doubts about your firewall, get a new one from a different vendor, send your firewall administrator to Firewall Admin 101, or get a new administrator.
    Not surprisingly, when we asked you why you are not blocking traffic using network-based intrusion detection (NID) systems, 63% of you said you use a firewall to determine legitimate traffic.
    But people make mistakes, so misconfigured firewalls are a common source of network insecurity.
    This simple fact has been used as a selling point for both intrusion detection and prevention systems, with vendors claiming their products will alert you to, or block, attacks that do get through.
    The answer: Instead of layering on more hardware, solve the fundamental problem of misconfiguration.

    Think configuring is easy? Unfortunately, though, it is not that simple. If you are enforcing traffic policy on your network using a stateful packet-filter firewall--such as Cisco Systems' PIX, Check Point Software Technologies' FireWall-1 or NetScreen's eponymous product--without security servers or kernel-mode features enabled, you should know that application-layer exploits, such as server-buffer overflows or directory-traversal attacks, will zoom right through. Stateful packet filters stop at Layer 4.
    Application-proxy firewalls can block some attacks that violate specific protocols, but face the facts: protection is limited to a handful of common protocols.
    The rest are not supported through a proxy, or are supported through a generic proxy, which is no better than a stateful packet filter.
    Still, NIP is not a replacement for firewalls and will not be in the foreseeable future. Why? The fundamental problem is false positives--the potential to block legitimate traffic.
    Before you can prevent attacks, you have to detect them, but NIP systems rely on intrusion detection, which is hardly an exact science.
    A properly configured firewall will allow in only the traffic you want. We need to feel this same confidence in IDSs before we can believe in NIP systems, but IDS vendors have employed lots of talented brain cells trying to raise detection accuracy, and they are nowhere close to 100%.

    Incoming!
    Despite these caveats, we believe a properly tuned NIP device can be instrumental in warding off most malicious traffic that gets past your firewall.
    There are several ways to block malicious traffic: If the NIP device i
  • Are you NUTS?! (Score:5, Insightful)

    by TheDarkener ( 198348 ) on Tuesday September 30, 2003 @02:04AM (#7091538) Homepage
    Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)?

    I am the head sysadmin for a company that has many Linux, Windows, and Solaris servers, and other specialty systems such as Cobalt Raqs, proprietary satellite equipment like IP-enabled RF modems, MUXes, IPEs and a shitload of high-bandwidth routers in multiple POPs around the world. If you think that a firewall to protect your network is insufficient, especially for a network with mixed OSes and such, you are terribly wrong. Imagine working at an ISP. You have your private workstations, then your servers (DNS, MXes, etc.), then your colocation equipment. Put it all on the same network? Suuuuure!! WHOOPS! Someone hacked into a colo box and then used his r3wt account on that box to scan your internal network for other vulnerable boxes (all at the same time, using your T1/T3/OC-192 for hosting the world's biggest movie IRC bot). You didn't have a firewall and/or IDS to detect the initial portscan on the colo box, and now you don't know that he/she is sucking up your bandwidth and scanning your entire internal (well, to you it's internal, external, whatever) network for more boxes to royally *$#! up. Trust me. Once a box is rooted, you take it offline as SOON AS POSSIBLE and reinstall. It's a shitty feeling knowing that someone owned YOUR network and now you have a shitload of crappy work to do over the weekend. Not to mention downtime, customer/employee complaints, fielding the hundreds of "I CAN'T CHECK MY E-MAIL!!! BOO HOO!" calls, and the general feeling that maybe...just maybe there's a box that got 0wnz0r3d that you might not know about.

    The moral of this story, boys and girls, is that FIREWALLS ARE GOOD. Intrusion detection systems are GOOD. NAT is GOOD. TCP syncookies are GOOD. Everything on the Internet is vulnerable by default unless YOU TAKE THE TIME TO SECURE IT YOURSELF. Keep the colo systems on their own subnet. Shit, keep each SYSTEM on its own 2-port VLAN with the uplink. Keep your servers on a DMZ. Keep your internal workstations on a TRUSTED, PRIVATE, NATted network. Close every damn port besides the ones that are used by servers. Do not open ANY ports to your trusted, internal network. If someone roots a box, at least they can't load an SSH trojan on port 2000 with no password and automatic root access to get back in later.
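
    For what it's worth, a couple of those items are one-liners on a Linux box (just a sketch; the state rule is one assumed way of letting return traffic back in):

      # turn on TCP syncookies
      echo 1 > /proc/sys/net/ipv4/tcp_syncookies
      # default-deny inbound, allow established replies; open server ports explicitly
      iptables -P INPUT DROP
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT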
    • Small nit (Score:3, Insightful)

      by shani ( 1674 )
      Once a box is rooted, you take it offline as SOON AS POSSIBLE and reinstall.

      One problem with this is that simply reinstalling a r00ted machine is no guarantee that it won't immediately be r00ted again.

      While being hacked sucks, it is the worst time to panic. Remember, when you suddenly notice something strange on the machine and realise you've been owned, it could have been compromised for weeks or even months.

      While you should immediately prevent it from doing further harm, you should also attempt to do a bi
  • Users (Score:3, Insightful)

    by Aadain2001 ( 684036 ) on Tuesday September 30, 2003 @02:05AM (#7091540) Journal
    If you allow users to select what goes in and out of their computers onto the internet, I guarantee you that within 24 hours of rolling out the system, 95% will have flipped the "allow everything" switch because they got annoyed with being asked every time they fired up a new application.
  • by rc.loco ( 172893 ) on Tuesday September 30, 2003 @02:15AM (#7091571)

    Firewalls are great at slowing down intrusions. However, without proper application security architecture and host-level security hardening, you cannot really protect a network-accessible resource. Often times, the only resource (network, application, host) that we can control 100% of the time so that it can be trusted is the host.

    Besides, the bulk of compromise situations occur INTERNALLY. Is that PIX on your WAN router really going to stop disgruntled Gary down in QA from trying out, across 5 subnets, the latest script kiddie tool that his roommate hooked him up with? If you spend quality time hardening your hosts, chances are you may really not lose more than a few hosts at a time during a significant compromise at the application layer (e.g., a remote root sendmail hole, a bug in BIND). I think we need to revive the popularity of security "tuning" on the host side - a lot of people forgo it for strong network security, but I think the latter is a much more difficult perimeter to maintain.

    I've seen others post about the dangers of VPNs. I totally agree, they are conduits for information loss, but are likely to be mostly self-generated (internal). Example: Disgruntled Gary in QA sucks down the product roadmap details off the Intranet before giving his 2 weeks notice and starting to work for a competitor.

    Apologies to Garys everywhere. ;-)

    • Valid points -- perhaps the next generation of firewalls will be purely internal, possibly built into our network switches? Maybe we're due for a new type of internal networking, where we can not only protect ourselves from the world in general, but from ourselves?

      Yes, I know we can do a limited amount of filtering internally already but there's nothing even close to what I have in mind. I'm thinking of application-layer filtering, perhaps even down to blocking specific attacks. Similar to Snort in some wa
  • by marbike ( 35297 ) on Tuesday September 30, 2003 @02:20AM (#7091588)
    I have been a firewall engineer for nearly four years. In that time I have come to the conclusion that you have a major trade off in the ultimate security of a system as compared to the usability of that system. An example is the explosion of VOIP and video conferencing in the last two years.

    H.323, SIP, SKINNY etc. all require many ports to be used, which is a nightmare for a firewall admin. As a result, firewalls are evolving to include support for these systems, but my fear is that the overly (in my opinion) permissive nature of firewalls which allow these connections is ripe for exploitation by future crackers/hackers.

    While I was supporting firewalls, my mantra was to close every damned thing I could and let the users suffer. But I also realize that in a modern network, usability is a major concern. Companies are deploying VoIP networks in record numbers while saving thousands of dollars each month. Companies need to reduce overhead to remain profitable, so they are looking at new technologies to help them. If the firewall industry cannot keep ahead of these technologies, it will ultimately fail.

    I think that the time of using access lists to control traffic is nearing an end. This will result in slower overall performance of firewall solutions as application-level firewalling becomes mandatory, rather than the transport-layer firewalling of the past.

    I am afraid that I have no easy solutions, but I hope that the industry will be able to remain both secure *and* usable.

    Hell, perhaps in the future security will be built into operating systems and network resources, rather than the reactive nature that we enjoy today.
    • Maybe VOIP is being rolled out incorrectly.

      If I were in charge of a VOIP rollout, I'd use IP-based phones (NOT software) and make the VOIP network physically separate, just like the old phone network was separate. Therefore, you'd have a separate firewall whose job is VOIP only and you don't have to open up your workstation network to a security nightmare.

      If people really insist on a computer-based VOIP system, a separate low-cost workstation can be used connected to the physically separate VOIP socket on
  • by altamira ( 639298 ) on Tuesday September 30, 2003 @02:34AM (#7091635) Journal

    There are a few very sophisticated application-level firewalls available on the market, but they all pertain to a very specific set of protocols. NFS and MAPI are not among them, as these are far too complex and it's too hard to distinguish bad from good traffic; HTTPS, on the other hand, is pretty well suited to full application-layer inspection, and this can make it very practical to actually allow access to an application on your INTERNAL network from the outside. However, on the side of the application-level firewall, this requires very sophisticated rulesets that need modification whenever the application changes, and a very skilled administrator. Whale Communications makes one such product (e-Gap Application Firewall), which could easily be the most sophisticated application-level firewall for HTTPS. There are other vendors, though, that offer reverse proxies with authentication that do session management and only forward traffic belonging to live, authenticated sessions, which could likewise make it practical to have the application run on your internal network.

    Just think about it - in an ideal world, you could connect your database only to the web - no replication to the insecure area (DMZ), no (not in the Windows meaning of the word!) trust relationship with the DMZ, no poking holes in your firewall for DB/RPC/other proprietary communication protocols, no bringing out and maintaining the same set of hardware and software twice...

    BUT this comes at a price - secure application layer proxies require skill and money.

    Disclaimer: I work for a company that has been implementing the Whale solution in Germany for 2 years. However, I chose the Whale solution solely for its technical merit.

  • by ObligatoryUserName ( 126027 ) on Tuesday September 30, 2003 @02:53AM (#7091667) Journal
    Sad to say, but in the future, the only reliable port will be 80. All clients will have all ports except 80 blocked by default (right now this seems like wishful thinking!) and no one will open any other port (it will give them a scary security warning!), and even if they wanted to, they might be blocked from doing so by their ISP.

    We're already seeing shades of this, but it hasn't reached the majority of Internet users yet. Back in the late 90's my company rolled out a product for schools that had to be retooled when it was realized that many schools were firewalling everything except port 80. (They added a mini proxy server to the product that sent everything over 80.)

    I have a friend who's a sysadmin for a medium-sized insurance company - and they had all their internal applications break a couple weeks ago when an MS worm started bouncing around the Internet. However, the problem wasn't that they were using Windows machines (I think all their servers were AIX...) - the problem was that their ISP (the regional phone company) had blocked off the port that all their applications used, because it was the same port that the worm used to get into systems. Last I heard, the phone company was refusing to ever re-open the port. (The phone company made the change without even informing anyone at the insurance company; everything just stopped working, and from what I heard it took them a day to figure out why their data wasn't getting through. I believe they were resigned to changing all their programs to work on a different port.)

    So, we've already come to the point where connections on other ports seem strongly subject to the winds of fate, and I see no reason the situation won't get worse. In most environments 80 is the only port that people would notice if it was blocked, and there are too many sysadmins out there who don't know any better. Right now, if I was developing an application that needed to communicate on the Internet, I would only trust that it could use port 80, and I wouldn't even bother looking at anything else. You can even see application environments starting to spring up now (Flash Central) where it's assumed that most applications will just share a port 80 connection.

    It sure is a sub-optimal situation, but I don't know what can be done to stop the trend. Ironically, such a situation makes simple port-blocking firewalls useless because all applications will be running on port 80 anyway.

    • I see where you're coming from, but your conclusions are not entirely accurate.

      There are too many financial institutions (to name just one sector) whose apps require different kinds of connection security from what you get with standard HTTP, and who won't be willing to take the "tunnel everything over the web port" approach.

      For end-user private use, to a degree, maybe.
    • This is one of those times when a good general-purpose IPTables firewall is a good idea. Actually one solution to your problem would be NAT (which is a VERY general word).

      In this case you might be able to solve your problem with pairs of nat boxes.

      Let's presume that the virus talks back on port 1022, that your office servers are at 1.2.3.4, and that's the port that you're using... In front of your remote boxes you'd put a firewall that (among other things) would translate outbound connections to 1.2.3.4:
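
      A sketch of one way to do that port translation with iptables NAT (the addresses and the substitute port 8080 are assumptions for illustration):

        # remote-site firewall: rewrite traffic headed for 1.2.3.4:1022 so it leaves on port 8080
        iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 --dport 1022 -j DNAT --to-destination 1.2.3.4:8080
        # firewall in front of the office servers: map port 8080 back to the real service port
        iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 --dport 8080 -j DNAT --to-destination 1.2.3.4:1022

      The blocked port never appears on the ISP's wire, and neither end application has to change.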

  • The varied answers did indicate that there is general ambivalence about the idea of allowing a machine on the internal network to advertise services, even if it is protected by an application-level FW (such as ISA Server protecting an Exchange server). That's good, because I thought I had missed something in the past 2 years since my last sysadmin job (I tried my own non-IT business for a while, for those who are curious).

    For those who did comment, thank you kindly. I appreciate the ideas and just so folks better un
    • I'm working on something similar... Exchange/OWA on the net.

      There are a couple of people who just need to POP their email while away. Perdition POP3 proxy over SSL is a decent solution. Set up a POP3 proxy box on a separate network (i.e. a DMZ) from the Exchange server and you're set.

      There are a few that must have OWA access. For them, set up a reverse proxy with Apache/Squid and get a certificate for this server to communicate with your Exchange/OWA/IIS box.

      And forgoodnesssake relay all your email thru someth
    • For me, I like to keep our web services and our internal network completely separate. If you want to send files in or out, you have to put them on CD-R and carry them over to the outer network. So if we can afford a second network at some point, that's what I'll do. But our business model fits that approach.

      On the other hand, for a large company like Newport News Shipbuilding, with > 10k employees, and more than 3000 engineers, that really isn't going to be practical, is it?

      Interesting thought... but suppose you were to have the two
    • You could run something like SquirrelMail, which is a webmail package that uses IMAP to talk to your mail server. I think the idea of using Apache as a proxy server to connect to an internal server with OWA is also a good one (as opposed to port forwarding or "poking a hole", which would look the same to the user but be significantly less secure). Either of these ideas should work fine with whatever OS you want.
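
      To make the reverse-proxy idea concrete, here is a minimal, hypothetical sketch of the pattern that Apache or Squid (plus SSL) would implement for you - accept the request on the outside, fetch the same path from the internal OWA box, and relay the answer, so the internal server is never exposed directly. The internal host name is a stand-in:

          # reverse_proxy_sketch.py - hedged sketch of the reverse-proxy pattern;
          # in production you would use Apache or Squid with SSL, not this.
          from http.server import BaseHTTPRequestHandler, HTTPServer
          from urllib.request import urlopen

          INTERNAL = "http://owa.internal.example"   # stand-in for the OWA/IIS box

          class Proxy(BaseHTTPRequestHandler):
              def do_GET(self):
                  # Fetch the same path from the internal server on the client's behalf.
                  with urlopen(INTERNAL + self.path) as upstream:
                      status = upstream.status
                      body = upstream.read()
                  self.send_response(status)
                  self.send_header("Content-Length", str(len(body)))
                  self.end_headers()
                  self.wfile.write(body)

          if __name__ == "__main__":
              HTTPServer(("0.0.0.0", 8080), Proxy).serve_forever()
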
  • There are badly designed services out there. Loads of them.

    These are services that are using an end-to-end protocol approach without provisions for a concentrator and filtering server within your company, requiring connections from desktop to desktop across corporate firewalls. There are services that hide their payload in normal http or https requests, requiring you to parse HTTP and XML in order to select which requests you pass on and which you don't. There are services that require backward connects on
  • by cheros ( 223479 ) on Tuesday September 30, 2003 @03:03AM (#7091697)
    If you want to do it right you'll always end up with a tiered model. Your basic stance should be not to trust anything or anybody, and open up from there (a bit like getting a mortgage ;-). The second stance is to always try to have two layers of defence in place instead of one (i.e. defence in depth) - NAT plus a proxy, just as an example. The third stance is to NEVER allow direct interaction with internal hosts. This means that inbound services (SMTP, hosting web pages) should be handled from a separate interface 'between' the Net and your internal network, called the Demilitarised Zone or DMZ (apologies if this is old news, just trying to keep it clear).

    That's IMO also where VPN users come in: they can be given proxied equivalents of internal services, which keeps the network clear of oinks who have managed to fiddle their VPN so they end up as routers between the Net and the internal network (yes, I know your policies should prevent them doing this, but see the second stance ;-). Any supplier feeds come in on the same type of facility; you could even use a separate interface for them.

    And last but certainly not least, describe what you're actually trying to protect, as that will give you some idea of the value lost if you end up with a breach - that makes it much easier to develop a defendable idea about budget requirements. For extra bonus points you can let senior management put a value on those assets (i.e. give them enough rope ;-).

    But this is not where it ends, because you still haven't dealt with (a) inside abuse and (b) the possibility of failure. Good security design takes failure modes into account. Plan for the day your defenses are somehow breached. Tripwire your firewalls and core systems and check them, lob the odd honeypot into the internal network to give you early alerts that someone is scanning the place or a virus has got in (last year I caught one very early because of a rather suspicious Apache log), and make sure you have a patch strategy with a short cycle time (how short depends on your risk tolerance, but your firewalls especially will need attention). Where possible, segregate the more critical facilities so you can protect them more precisely (just consider your users hostile - don't answer the support phone for half a day if you want a more realistic version of that feeling ;-).
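
    A honeypot in this sense doesn't need to be elaborate. Something as small as the hypothetical sketch below (Python standard library only; the port number is arbitrary), which simply logs every connection attempt to a service nothing legitimate should ever touch, is often enough to catch a scan or a worm early:

        # toy_honeypot.py - hedged sketch: listen on a port nothing legitimate
        # uses and log every connection attempt as an early-warning signal.
        import logging
        import socket

        logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                            format="%(asctime)s %(message)s")

        def listen(port: int = 2323) -> None:       # 2323 is an arbitrary example port
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))
            srv.listen(5)
            while True:
                conn, addr = srv.accept()
                logging.info("connection attempt from %s:%d", addr[0], addr[1])
                conn.close()                         # nothing to serve; the log entry is the point

        if __name__ == "__main__":
            listen()
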
    Oh, and think about what platform you run your security services on. I don't prefer Unix over Windows because one is inherently safer than the other (that's actually more complex than it appears at first glance - donning asbestos jacket ;-); I prefer Unix-based facilities because I end up with less patching downtime, as they rarely need a complete restart. But that's just me. And READ those logs...

    Hope this helps. =C=

  • You are out of touch with current network security practices, but that's a good thing. Most security guys these days are just not thinking straight, IMHO. The first order of business is to clearly delineate your real internal network from your semi-publicly-accessible DMZ where public services are hosted. No traffic crosses the DMZ without going through a proxy service or an application-level gateway of some sort. Second, only set up simple (and password-protected, I might add) proxies for outbound connectivity.
  • by harikiri ( 211017 ) on Tuesday September 30, 2003 @03:33AM (#7091797)
    We are a big Checkpoint shop (stateful inspection firewall). With regard to which is better, the issue seems to be more:
    1. What is the industry standard?
    2. What can we get support for locally?

    Application firewalls have really done poorly here in Australia. I speak from experience - I used to be a security 'engineer' (read: install Gauntlet), and have since moved on to network security administration.

    The main vendors I've seen in the marketplace are (or were) Gauntlet, Sidewinder, and Cyberguard.

    NAI dropped the ball with Gauntlet both here and abroad. The technology behind it is excellent, but the support really, really sucked. In addition, administration was performed via a highly unintuitive Java-based application that everyone I knew *hated* to use. You often ended up simply going back to the command line to configure the beasts.

    Sidewinder I have no formal experience with, but I have heard good reviews. Secure Computing's presence in Australia was limited to international firms that required its use. There was no "storefront" for quite some time.

    Cyberguard I have seen at a handful of places, mainly banks (and apparently also at various .gov.au sites).

    All of these are technically good products. But due to their lack of popularity and market presence, they don't get used.

    So it's a glorified packet filter I go to add a rule to now.. ;-)

  • by Dagmar d'Surreal ( 5939 ) on Tuesday September 30, 2003 @03:34AM (#7091799) Journal
    "[...] has the paradigm of 'if you offer external services to the Internet then place those machines onto a perimeter network' been eroded?"

    The simple answer to this question is "Definitely not." The use of a DMZ segment to keep production machines on their own physical network segment is likely to never become obsolete because the benefits of this simple step are so great.

    "Are application level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet?"

    Whether they are or not is irrelevant. Only the barest minimum of your network should be exposed to another network (especially the Internet), and those hosts that _are_ exposed should be unable to initiate connections to the rest of your network, to reduce the impact of a loss of confidentiality in the case of an intrusion. While this may seem rather anal-retentive, a proper application-level firewall can't just casually filter by generic service type. It _has_ to be able to distinguish a kosher query from a malicious one, and this requires a LOT of detailed work in the firewall rules to ensure that only the queries you want passed through actually get passed. If you have a lot of custom CGIs with input parsing, this can turn into a nightmare of man-hours to maintain.
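
    To give a feel for the amount of detail involved, here is a tiny, hypothetical sketch (the CGI paths and parameter rules are invented) of the per-query allow-list that an application-level rule set ultimately boils down to - one entry like this per CGI, kept in sync with the developers, forever:

        # query_allowlist.py - hedged sketch: a per-CGI whitelist of paths and
        # parameter patterns, the kind of rule an application-level firewall
        # needs for every script it protects.
        import re
        from urllib.parse import urlparse, parse_qs

        ALLOWED = {
            # hypothetical CGIs: path -> {parameter: pattern it must match}
            "/cgi-bin/search.pl": {"q": re.compile(r"[\w ]{1,64}")},
            "/cgi-bin/track.pl":  {"order_id": re.compile(r"\d{1,10}")},
        }

        def is_kosher(url: str) -> bool:
            parsed = urlparse(url)
            rules = ALLOWED.get(parsed.path)
            if rules is None:
                return False                      # unknown CGI: drop it
            params = parse_qs(parsed.query, keep_blank_values=True)
            if set(params) - set(rules):
                return False                      # unexpected parameter: drop it
            return all(rules[name].fullmatch(value)
                       for name, values in params.items() for value in values)

        # is_kosher("/cgi-bin/search.pl?q=firewalls")  -> True
        # is_kosher("/cgi-bin/search.pl?q=<script>")   -> False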

    "When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."

    I mainly agree with you and feel that the answer is really "Almost never", with "never" requiring some support from the developers maintaining your site. If they're on-board with you on the concept of a DMZ, they'll help you by designing the production system so that connections could be made _to_ it from the intranet to extract information from the production hosts, instead of making the production hosts initiate connections to the intranet and increase the chance an intruder could do the same. If you can't control the access because it's some wacky proprietary protocol, institute a second DMZ (network cable is cheap and so are extra NICs). No other network should ever be allowed to reach inside your intranet.
  • Actually, having ISA Server publish your Exchange server (using RPC) or Outlook Web Access (OWA) is a great alternative to hosting yet another server you're going to have to patch and lock down. Configuring a firewall that is meant to be secure is much easier than trying to tie down a web server. Web servers on the edge don't even have the monitoring and reporting capability that you will need to know that things are running smoothly (or not). If all you want to let out is webmail, just publish OWA. ISA Ser
  • It depends... (Score:2, Insightful)

    by lelnet ( 702245 )
    ...on your specific security needs, and the needs of your user base. As always.

    At the moment, for my "day job" (which is really at night, but never mind that), I do sysadmin and networking stuff for an international investment bank. The information on our computers is worth on the order of tens of billions of dollars on the market, not to mention the very serious privacy implications if there were a compromise (which have specific legal consequences in some of the jurisdictions where we operate, and seriou
  • If your employees need remote access from home, and you are providing the laptop, consider a Linux-based laptop. No spyware, adware, viruses, email worms, etc. to worry about (assuming you have it properly secured of course...). They may complain they can't run the latest games and so on, but you're calling the shots, so tough - they were hired to do work for you. They'll probably be more productive for this reason alone, which is a second advantage.
  • by btg ( 99991 ) on Tuesday September 30, 2003 @05:47AM (#7092130)
    I have been involved with lots of different bits of security for a few years now, and quite a few people seem to think I know what I'm doing.

    Playing the "security component Lego" game is great fun, and a little intelligent thought will soon see you set up with a nice, best-practice architecture. This is how it will then fail.

    1. You will have unpatched machines which will be trivially rooted with a script-kiddie exploit. You will know that you should have patched, but you won't have the time, manpower, or authority to ensure the patches are in place.

    2. You will misconfigure something, and then miss the problem in reviews because you didn't get peer or professional verification of all your configs.

    3. You will get owned by an internal employee who has exactly the level of trust that you planned for, but abused it.

    4. Someone will walk in with a clipboard, bamboozle the secretary and walk out with your fileserver.

    5. You will create a whole bunch of really cool procedures, but the CIO / CTO won't back them when the first departments complain about lost productivity - this will undermine the whole thing and you will be back at square one.

    6. You will give someone VPN access, and they will connect their virus and worm ridden home machine. It will infect your network, and their kids will surf pr0n and share mp3s on your dime.

    7. Your backups will have some unforeseen problem, your restore procedures won't work right because they aren't tested, and you will lose much company data (and your job).

    8. Your users will deliberately download trojan-ridden, virus infected, IE Object Overflow infested garbage, despite clear, explicit orders to the contrary being sent to them twice a day. They will do this because dancing rabbits are somehow more compelling than 'all those emails from the grumpy tech guys'.

    When we talk about the 'current paradigms', I don't even think about fancy technology, I think about these obvious threats that always apparently only happen to other people, because some wiseguy always knows better. "IF you do blah blah, like we do..."

    Your "paradigm" wish might be: "I want a network where every single part is doing as best it can to defend itself against the threat at the keyboard as well as the threat from external attack - not a perimeter, not 'tiers', but every part."
  • by graf0z ( 464763 ) on Tuesday September 30, 2003 @06:23AM (#7092252)
    The problem with application-level firewalls and NIDS is that all current systems ready for production use are based on pattern matching, just like virus scanners. The system detects a "bad packet" (like one containing a standard rootshell) if it has a corresponding signature in its database. Additionally it can enforce protocols (e.g. by dropping evil overlapping IP fragments). Both come at a high cost, since IP fragments and TCP segments have to be reassembled for inspection.

    These systems may filter standard attacks (i.e. the exploits you find on Bugtraq or Packetstorm) quite well, but you can imagine that it's easy to get past such a firewall by varying an attack. They know many standard variations (like "/cgi-bin/../cgi-bin/" instead of "/cgi-bin/", or inserting NOPs into a rootshell), but there are a thousand and one ways of doing the same thing, and most won't get detected.
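
    The "/cgi-bin/../cgi-bin/" trick is easy to demonstrate. In the hedged sketch below (the signature and request are purely illustrative), a signature anchored at the start of the path misses the trivially varied form until the request is canonicalised first - which is precisely the extra work, and the arms race, being described:

        # evasion_demo.py - hedged sketch: a naive anchored signature misses a
        # trivially varied attack path unless the path is canonicalised first.
        import posixpath

        SIGNATURE = "/cgi-bin/phf"                 # illustrative signature

        def naive_match(path: str) -> bool:
            return path.startswith(SIGNATURE)      # matches only the plain form

        attack = "/cgi-bin/../cgi-bin/phf"
        print(naive_match(attack))                             # False - slips past
        print(naive_match(posixpath.normpath(attack)))         # True  - caught after normalisation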

    So: do NOT think your $XXXXXX oversecure paranoia application-level firewall exempts you from secure network design or from patching your systems! Instead, do the usual:

    • use separate subnets of different security levels (like a DMZ)
    • keep your systems at a recent patch level
    • tighten your configs, and review them with more than one pair of eyes
    • use proxies (maybe with authorization), and build virus scanning into your HTTP and SMTP proxies
    • do NOT consider your internal network "safe", so don't use telnet to administer your internal *NIX servers
    The last point is due to the fact that it's too easy to inject hostile code into a browser. Most scripting attacks are NOT detected by state-of-the-art virus scanners if they are even slightly modified. So consider the desktop workstations on your network a bunch of trojans.

    To summarize: you have an excellent chance of averting 99% of all attacks (as those are the known attacks of script kiddies, zombies, ...) with standard techniques like the ones mentioned above. You have a good chance of making a random hacker move on to an easier target. You have almost no chance of stopping a skilled hacker with time on his hands who wants to get into YOUR machines.

    /graf0z.

  • Hi all,

    It's been very edifying listening to you guys talk about your DMZ servers and your application-level firewalls and your apparently infinite budgets for admin time and hardware, but what about the real world of small (<10 employees) businesses with a single server running Windows 2000 Server and Exchange 2000 on a single network segment? No ISA, no Checkpoint, no time, no money, no dedicated admin, no understanding of why it might be a good idea. Mostly what we've got for these folks is a Linksys
    • I think as a small business you're going to need to look at the cost of getting the work done versus the cost of loss in the event something 'bad' happens.

      It is prudent to assume that something 'bad' will happen; it's just a matter of time. With that assumption, start figuring a monetary value next to the loss of each kind of data you have. How much would it cost you to rebuild your customer database, weather legal action from customers, etc. in the event that the customer database is broken into and de

  • A firewall/NIDS cannot inspect encrypted data if the encrypted tunnel does not end at it.

    Want to attack an httpd behind a mature NIDS? Establish an SSL session to port 443 and send your "GET /cgi-bin/dummy.pl?AAAAAAAA..."! NIDS blinded.

    To avoid this, you have to terminate the SSL tunnel in front of your IDS, e.g. by setting up a transparent HTTP proxy holding the X.509 certificate and key pair on your "application layer firewall". Most products do not offer this possibility.
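
    A terminating proxy of that sort is conceptually simple, even if few products ship it. A hedged sketch of the core loop (the certificate file names and the backend address are hypothetical, Python standard library only): accept TLS on 443, decrypt, and hand the plaintext request to the backend, where the IDS can finally see it:

        # tls_terminator.py - hedged sketch: terminate SSL/TLS in front of the
        # IDS so the decrypted request is visible before it reaches the httpd.
        import socket
        import ssl

        BACKEND = ("10.0.0.80", 80)                        # hypothetical internal httpd

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("server.crt", "server.key")    # the X.509 certificate and key pair

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 443))                    # needs root for a low port
        listener.listen(5)

        while True:
            raw, addr = listener.accept()
            try:
                tls = ctx.wrap_socket(raw, server_side=True)   # decryption happens here
                request = tls.recv(65536)                      # plaintext - log it, inspect it
                with socket.create_connection(BACKEND) as upstream:
                    upstream.sendall(request)
                    tls.sendall(upstream.recv(65536))          # relay (only) the first chunk back
                tls.close()
            except (ssl.SSLError, OSError):
                raw.close()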

    /graf0z.

  • It sounds like you have a sensible perspective. References you might consult include:

    • Practical Unix and Internet Security, 3rd edition
    • Building Internet Firewalls, 2nd edition
    • Securing Windows NT/2000 Servers for the Internet

  • This is an arms race, and as soon as you give ground at all, you've lost.

    The reason people are implementing things like 'Web Services', overloading port 80 to provide potentially insecure services on a port previously thought reasonably safe, is that they don't understand the need for security and firewalls, they're frustrated by the restrictions you - rightly - put on them, and they want to shortcut around the firewall. If you allow them to, they will. Furthermore, they will employ half trained code-monk

  • I have worked both on web services infrastructure tools and on developing web services, so this is a subject of interest :-)

    I have a few thoughts on security:

    Every computer inside a firewall should be as secure as possible. One compromised computer should not necessarily compromise your network.

    Web service responses should be document-centric - SOAP is best used not as RPC but as a rich document (XML payload) request. Make requestors sign their requests.
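
    In practice "sign their requests" means WS-Security / XML signatures, but the principle fits in a few lines. A hedged sketch (the shared key and document are placeholders, Python standard library only) using an HMAC over the XML payload:

        # signed_request.py - hedged sketch: HMAC-sign an XML payload so the
        # service can check who sent the request and that it wasn't altered.
        import hashlib
        import hmac

        SHARED_KEY = b"distributed-out-of-band"    # hypothetical per-partner key

        def sign(payload: bytes) -> str:
            return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

        def verify(payload: bytes, signature: str) -> bool:
            return hmac.compare_digest(sign(payload), signature)

        doc = b"<order><item>widget</item><qty>12</qty></order>"
        sig = sign(doc)              # sent alongside the document, e.g. in a header
        assert verify(doc, sig)      # the service recomputes and compares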

    Use SOAP over HTTPS.

    Avoid using Windows :-) (Hey, th

  • For those few of you who don't know, Murphy's law is: "Anything that can go wrong will".

    Many years ago I ran across a list of corollaries to Murphy's law. Many of them apply to security administration (some directly), like:

    • If it can go wrong, it will. If it can't go wrong, it might.
    • Investment in security will continue until the cost of security exceeds the cost of the breach -- or until someone insists on getting some 'useful work' done.
    • Create a system that even a fool can use and only a fool will use
  • I think the single most common point of "failure" is usually overlooked in IT security, as it is in many security practices (asset, facilities, etc.). Security professionals tend to focus on the target and neglect the threat. With that in mind, it is important to realize that not only is the black-hat outside your network a threat, but so is the hapless user who makes a poor judgement call and unintentionally opens up a window of opportunity for threat and vulnerability to come together into that lovely c
