Changes in the Network Security Model?
Kaliban asks: "As a sysadmin, understanding network security is clearly an important part of my skill set, so I wanted to get some thoughts on a few things that I've seen recently after some discussions with co-workers. Are network services becoming so complicated that application-level firewalls (such as ISA Server) are absolutely necessary? Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)? This leads me to my next question: has the paradigm of 'if you offer external services to the Internet, then place those machines onto a perimeter network' been eroded? Are application-level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet? When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."
Try a three-tiered approach (Score:5, Informative)
First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall. Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network. From your point of view and theirs, it is as if their machines were physically located on the other side of your firewall--just like having the machines right in your building.
Second, for business partners and contractors who need limited access to a subset of services, but whom you do not trust fully, the answer is quite likely also a VPN, but not directly into your network. For services provided to these people, you want everything from your end first going through application-level firewalls, and then through the VPN, over the Internet, to them.
Using a VPN in these cases prevents random hackers from entering your network on these levels.
Finally, for the general public who simply need access to your web site, the ideal situation is to simply host the web site on a network entirely separate from yours--possibly not even in the same city. Use an application-level firewall to help prevent things like buffer overflows. Then, if your web server needs to retrieve information from other systems on your network, have it communicate over a VPN, just like the second-level users mentioned above--that is, through additional levels of firewalls to machines not directly on your primary network. (Basically, you shouldn't consider your web servers as trusted machines, since they are out there, "in the wild.")
By following this approach, you expose nothing more than is necessary to the world, and greatly mitigate the risk of intrusion.
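A minimal sketch of what that first tier can look like in practice, using OpenVPN as one example of such a channel (the choice of OpenVPN, the port, and the tunnel network are illustrative assumptions, and certificate generation is not shown; the same idea applies to IPsec or any other VPN):

    # server.conf - a bare-bones OpenVPN server.
    port 1194
    proto udp
    dev tun
    # Certificates/keys for the authenticated, encrypted channel.
    ca ca.crt
    cert server.crt
    key server.key
    dh dh1024.pem
    # Hand each trusted client an address on a private tunnel net.
    server 10.8.0.0 255.255.255.0
    keepalive 10 120

The border firewall then needs to expose nothing except that one UDP port, and remote machines behave as if they were inside the building, just as described above.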
Re:Try a three-tiered approach (Score:5, Interesting)
We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-lookalike front-end to scp), which they have had no problems using (they did not have any opportunity to work from home before, so they don't complain).
We also plan to introduce AFS later on and allow remote AFS mounts, as well as VNC remote desktops.
Locally, we have a simple port-based firewall, basically walling off all inbound traffic except ssh and http (and allowing nearly all outbound traffic), and keep our OpenSSH and Apache servers updated (have you patched the two ssh bugs reported on
So, my advice is: keep it simple. Do not trust an overly complicated system. And keep your software patched against the latest bugs - keep an eye on the security-update service for your distro/OS and on Bugtraq.
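A minimal sketch of that kind of simple port-based policy, assuming a Linux box with iptables (the ports and the stock state-match idiom are the only assumptions here):

    # Default-deny inbound, allow everything outbound.
    iptables -P INPUT DROP
    iptables -P OUTPUT ACCEPT
    iptables -A INPUT -i lo -j ACCEPT
    # Let replies to our own outbound connections back in.
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # The only services offered to the world: ssh and http.
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT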
Re:Try a three-tiered approach (Score:2, Insightful)
In all seriousness, he should be looking at a system that minimizes any potential damage, not some fancy firewall solution that costs a bundle. Close down the ports and keep users from loading spyware or opening email with executable attachments.
Re:Try a three-tiered approach (Score:2)
The point is, though, there is no way to do this with their home computers. Sure, you can do it with a well locked-down internal network with all patches applied. Maybe. But you'd better have a damned hot perimeter system to scan for anything unauthorised being downloaded. You'll have to disallow encrypted content, too, and anything that could be a compression format you don't recognise. Or you c
Re:Try a three-tiered approach (Score:3, Insightful)
There really is nothing magic about VPNs, and they can be quite dangerous - they provide clear access to your internal network.
Re:Try a three-tiered approach (Score:2)
Allowing ssh from a box that is potentially 0wn3d and running a key logger is a big hole in your security. Requiring a hardware firewall/VPN box on home systems could at least temporarily keep th
Re:Try a three-tiered approach (Score:2)
Oh yea, and assuming they enable WEP on the wireless ones. Which two out of three home owners (in m
Re:Try a three-tiered approach (Score:2)
Some of them also allow you to enforce the use of an up-to-date antivirus (IIRC).
Openssh/WinSCP (Re:Try a three-tiered approach) (Score:2)
If you allow them to use scp, then yes, they need to have a full shell.
correction (Score:2)
(original text got eaten by
Re:Openssh/WinSCP (Re:Try a three-tiered approach) (Score:2)
Try searching the archives for debian-isp.
Re:Openssh/WinSCP (Re:Try a three-tiered approach) (Score:2)
http://www.sublimation.org/scponly/
Re:Try a three-tiered approach (Score:4, Informative)
There's a very simple solution to that.
Put "AllowTcpForwarding no" in your sshd_config.
Simple.
(Aside: there is a note in the openssh manual that reads "Note that disabling tcp forwarding does not improve security in any way, as users can always install their own forwarders." I think this only applies if you give them unrestricted shell access. See another post in this thread for information about a restricted shell that allows scp to work but prevents other stuff from executing).
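For reference, a minimal sketch of the pieces being discussed here (paths and the username are illustrative; scponly is the restricted shell linked elsewhere in this thread):

    # /etc/ssh/sshd_config (excerpt)
    AllowTcpForwarding no
    X11Forwarding no

    # Give a file-transfer-only user scponly as a login shell, so
    # scp/WinSCP keeps working but arbitrary commands do not:
    usermod -s /usr/bin/scponly alice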
Re:Try a three-tiered approach (Score:2)
You can't trust the corporate network either. The one and only time a virus/worm successfully got into my home was Blaster this summer when I was VPN'd into the office. Systems in the office infected the laptop I had brought h
Re:Try a three-tiered approach (Score:2)
There are a few worms, I know (eg this one [sans.org]).
(OK, so I'm getting fed up with people who should know better not distinguishing between viruses and worms.)
Re:OT: Definition rant (Score:2)
I think 'malware' is a good term that generally encompasses all such code. The rest have specific meanings, and therefore using them in cases where that specific meaning doesn't apply is probably a bad idea and prone to cause confusion.
Personally, I blame "good times" for it all.
Re:Try a three-tiered approach (Score:5, Interesting)
Personally, I don't think physical separation is necessary if you're using a strong VPN, because you can ensure that the only traffic passing back and forth goes through the VPN, which is then no less secure (if anything, more secure, except for purposes of physical security) than traffic passed over the Internet. You also get the advantage of increased throughput, a single physical site (or fewer sites) to manage, and lower bandwidth costs. Every little bit helps...
In any case, it is my opinion that any computer which can communicate with others on the internet, no matter how well-restricted such communications are, should itself be considered non-trustworthy. It might be safer for being behind a firewall, but it can still grab a trojan or worm either through accidental or intentional means and become a staging point for internal attacks. It is for this reason that I personally believe that it is imperative to ensure that every computer on a network is secure and has personal firewalling of some form installed (if you're dealing with *nix workstations this is a no-brainer for a competent admin; Windows boxen will benefit greatly from simple solutions such as Tiny Personal Firewall.)
This all goes double for boxen which are physically located outside of the network and which VPN inside (this is the reason for that last paragraph's worth of rambling.) A certain amount of distrust should be exercised for computers which can find themselves poorly protected from the dangers of the internet at times, and as such it is not only necessary to keep such boxes under close scrutiny and send their traffic through a decent firewall, but also to either educate users (as well as possible) on good security or require as a matter of policy that they utilize certain security measures (a personal firewall combined with a regularly-updated antivirus application is a potent combination that goes a long way towards keeping a computer clean.) Assuming that a VPN is a safe connection is a recipe for disaster; it prevents others from listening in but otherwise it is no better than any other old TCP/IP connection.
VPNs, of course, can be quite useful on an internal network. Packet sniffers tend to have difficulty picking up on SSH as it is, but put that through a 1024-bit encrypted tunnel and it becomes exponentially more difficult to crack apart (and such layering protects you from vulnerability, as there are now *two* effective locks which must be picked in order to gain entry.) It isn't going to make a difference between two servers connected with a crossover cable which enjoy strict physical security, but when traffic is being passed over a network with old Windows 95 boxen running Outlook, it pays to be prudent. Such encrypted separation, when used intelligently, can often eliminate the need to physically separate network segments when connectivity can be useful.
Oh, one last point: if you're using a WLAN, it's only logical that unless it's strictly for visitors doing web surfing and chatting on AIM, a VPN is useful there as well. WEP is both less useful and far less effective.
As for a good VPN technology to use for any application, IPSEC is always handy (and enjoys excellent and robust out-of-the-box support in the more recent revisions of... almost everything.)
Sorry if this seems a bit unclear, but I've had a long day.
vpn is NOT a magic word (Score:5, Insightful)
Are they firewalled properly?
Are their virus definitions updated?
If the answer to either of those is no or "don't know", then having a VPN will compromise whatever safety it could bring. In other words, it's possible that the latest and greatest worm that wasn't able to penetrate your office network now gets in through the work-at-home employee who VPNs in, and is infecting everyone.
The bottom line is to have a well-thought-out security policy and PROCESS... and that only comes with training, more training, and training. Some education would help, too. Even people like Mudge and Dan Geer don't stop learning.
And for those who would call your questions stupid... they are the folks who are afraid to ask the stupid questions.
Re:vpn is NOT a magic word (Score:3, Informative)
It's not perfect of course, as a host could be compromised before SecureClient is installed, but in a controlled environment, that should never really be the case.
Re:vpn is NOT a magic word (Score:2)
Never trust a client-side security solution. Sure, it helps, but reinforce it with added protection on the server side in case somebody subverts it (e.g. by using a hacked client or a reverse-engineered reimplementation that lacks this feature).
Re:vpn is NOT a magic word (Score:2)
Having the remote security policy functionality does allow the firewall administrator to have a reasonable degree of trust in the VPN clients though, to the extent that they probably aren't 0wn3d and being actively used as gateways into the corporate network or whatever. Especially so if the clients (laptops, usually) are properly loc
Re:vpn is NOT a magic word (Score:2)
Unfortunately, in the case of Blaster, some organizations (not mine, thankfully) counted RPC as an 'essential' service, in which case Blaster infected an otherwise quarantined network while the admins were patching.
Fallacies of an unsecure admin (Score:4, Interesting)
While this is simple to state, how many companies will follow this rule? Companies are not going to jail their users, so the first one who wants to listen to mp3s or streaming music, up goes Real or Windows Media. What? You want to see the stock ticker from Bloomberg? Sure, now you have multicasting crap. Get real - and that's not including someone who knows about things like datapipe.c
Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network.
You're either blind or too trusting in people. Remember the biggest security hole often comes directly from the inside. For instance, I know someone who has a VPN through IBM for her work. Lo and behold she wanted to take that same machine and hook up DSL to it. Say goodbye to security over VPN.
I won't get too deep into this since I'm tired, but a VPN isn't always the answer. The answer is actually education. Instead of spending on a Cisco PIX or Nokia VPN machine, try holding monthly meetings with employees and make them aware of issues. It doesn't have to be a full-blown Harvard presentation; a quick PowerPoint presentation will actually teach them things they can carry on to their homes or future places of employment. VPNs are like security through obscurity, in a way. If someone wants in, a VPN will do nothing to stop them.
Re:Try a three-tiered approach (Score:2, Interesting)
You can trust your employees [bbc.co.uk]?
Don't ever believe that your employees won't attack you. Some will attack you by accident (bringing infected machines into the office or something); some will even attack you out of spite.
You should only give trust to entities you have to trust in order to get the job done. You have to trust (some of) your servers and IT staff... but you shouldn't have to trust most of the internal network.
Where possible, you should treat your network machines in the same way as you'd tr
Keep it simple. (Score:4, Insightful)
Application firewalls and filters are complex. To me that means more can go wrong, and more holes can be found. And they have to be super effective if they're a line of defense. Sounds nasty, like those stupid
For my money, leave the perimeter boxes. Defense in depth is a great strategy. They will get some, but they won't get them all.
Re:Keep it simple. (Score:4, Informative)
As part of a security test, we placed an NT4sp4 box with an unpatched install of the Option Pack--to install IIS (note that this is perhaps the most easily exploitable Windows configuration on the face of the planet)--behind an ISA sp1 firewall running on Windows 2000 sp3. We were unable to compromise or otherwise DoS either of the two servers using readily available exploit code for IIS or for either operating system.
Now, it may be possible to still exploit the aforementioned NT boxes, but clearly it would have taken a great deal more effort than just running a NIMDA-alike on the NT4 box.
The problem is firewall admins (Score:4, Insightful)
Take a look at this part of the original post:
Are network services becoming so complicated that application-level firewalls (such as ISA Server) are absolutely necessary?
Yes. They are. You know why? Because jackasses thought it would be a great idea to slap firewalls on everything. It's an easy, one-off fix that's centralized. Does jack for actual security, but it's easy to sell to management, so IT people constantly claim that everyone needs firewalls all over the damn place.
So now we have a ton of firewalls inhibiting functionality all over the place. Do application vendors simply say, "Gee, I guess we'll give up on doing interesting things with the network," due to the best efforts of short-sighted sysadmins? No. They do ugly, slower, less reliable, and harder-to-monitor things like rebuilding everything and ramming it through SOAP. And then they sell the same stupid product right back to the "firewall-enabled" company. Now everyone loses. The security is just as bad as before. The user gets a slower, less reliable experience. The sysadmin has a harder time monitoring usage and troubleshooting (since everything is obscured by the layer being used to bypass his firewall).
Firewalls are the single most-oversold computer product ever, having displaced antiviral tools in the last year or so. Nothing ticks me off more than some sysadmin shoving another firewall in front of users.
Immature Technology (Score:5, Informative)
Nope. That should never happen.
The problem here is that application-level firewalling is fraught with problems. The lack of intuitive management for this type of firewalling is a problem that quite a few companies are trying to solve -- with limited success, so far. The problem is that as you move up the OSI layers, the variables increase exponentially. If you think that 65,536 is a big number, try writing an application-level script that permits "acceptable" MAPI requests while denying "unacceptable" MAPI requests. How do you determine that this NFS packet is good, and this one is bad? From the same host to the same server? How about X11? SSH? Oh, and don't break anything while you're at it. Lions and tigers and bears, Oh my!
These are the problems of an immature technology. As time passes, these issues might be somewhat mitigated, but there are plenty of "network administrators" that haven't fully grasped the concept of IP, and struggle with L3/L4 firewalling, to say nothing of moving up the stack.
Here's a tip, though; look for Bayesian filters in firewalls in a few years. That will be a trip.
Re:Bayesian filters (Score:2)
I'm pretty sure some firewalls do this sort of thing already, too...
Re:Bayesian filters (Score:2)
But letting 1% of network attacks through is just as much trouble as letting 100% through. So why bother?
Still, a Bayesian-filter-enabled firewall sounds like a really, really cool thing, with that latest security buzzword, so expect to see them soon after all!
Firewall is mainly a buzzword (Score:2, Insightful)
Gee, even RedHat jumped on the firewall bandwagon. At install time, instead of selecting which services I want to run, it runs God knows what and asks me which *ports* I want to open. Now if I want to run some new network service I have to waste time learning how to fiddle with this "firewall" so that the new ser
ideal vs practical (Score:5, Insightful)
The answer really depends on what you are protecting and whether or not the security required to protect it is worth the cost.
The only way application-aware firewalls CHANGE the paradigm of network security models is for a certain class of protection. Usually that line of protection, or train of thought, is "we would like something slightly better than nothing".
If you need more protection than that, it sounds like you already know what best practice is. That hasn't changed, and you would not be wrong to tell your co-workers so.
Think of it along the lines of what the military would do. Just because there is some new whiz-bang motion-tracking CCTV x10 ninja thing that shoots lasers, you had better believe they are still going to have soldiers with rifles in watch towers, soldiers walking the perimeter, and 20 ft of dead-man zone and razor-wire fences surrounding it all, along with the whiz-bang consolidating gadget.
Re:ideal vs practical (Score:2)
You can extrapolate that further. Assume you have financial data and that is compromised. That may be more costly (think class action lawsuit or such) than the cost to recover your systems. I was giving the lad the benefit of the doubt that he understands the various aspects of cost.
If you think back to the military analogy...it's not likely they are worried about the c
Re:ideal vs practical (Score:2)
> That may be more costly (think class action lawsuit or such)
> than the cost to recover your systems.
Security is not really cost-efficient, compared to the cost of a clueless manager-oid denying all stories of intrusion. And actual damage to data can always be fobbed off as computer errors - you know, Windows and such.
And in the very unlikely case of a class action suit, the trial will go on for so long, that all concerned managers will be d
Re:ideal vs practical (Score:2)
even a small network has a HUGE cost to recover from intrusion.
I wouldn't say that's so. I consider my company's network to be a small network; it has 2 servers and 3 workstations. If it was compromised, our best estimate of downtime is 1 working day. Even if this were to be charged out at the highest rate our company ever works for over all 3 of our employees,
Some add'l tidbits (Score:5, Informative)
There are a few ways to handle the bane of netadmins - 'I wanna get to my files!' VPN, as suggested, is one solution - but not without problems. Recent issues with X.509, OpenSSH hacks for IP-over-SSH, etc. You can mitigate the danger by using a set of consistent criteria for each of your requirements, like a checklist. For example:
1) Is the service mission-critical? (BOFH them if no!)
2) Can the service be offered through a less-vulnerable channel? NFS mounts moved to SFS, perhaps, or encrypted AFS as mentioned above.
3) Is there a way to move the service into a perimeter network (or outside entirely)? Even if this means synchronizing a set of data to an outside machine via cron, if the data on the machine is less important than the internal network security, this can help.
4) Once the user is connected, authenticated, and has access, *THEN* what can go wrong? What could they do maliciously? What could they do accidentally?
Personally (and this is just me talkin', no creds here) I tend to reflexively say "NO!" until convinced otherwise. I know that there are services which *must* be available through the wall, but I want the requestors to have to work to convince me. Closed systems are more secure.
Also, don't be afraid to investigate low-tech but simple and effective means of circumventing problems. First thing I ask users who want to get an occasional file home: "Can you mail it to yourself?" Second thing: "Would you be able to use a 'public folder' that I sync to an accessible box, say, every half hour?"
I second the recommendation of iptables. It's a sharp tool, so be careful - but correctly applied, it kicks the pants off most application or appliance firewalls. Invest the time to learn the sharp tool, and you'll realize that most of what you pay for on big expensive firewalls is manageability (i.e. Java GUIs, wizards, databases, multiple systems preconfigured - IDS, firewall, proxy, etc.). Do the work.
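As a small example of the kind of sharpness the cheap appliances often can't match, a hedged sketch of rate-limiting brute-force ssh attempts with iptables' 'recent' match (the thresholds are illustrative):

    # Drop any source that opens 4+ new ssh connections within 60s.
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m recent --name SSH --rcheck --seconds 60 --hitcount 4 -j DROP
    # Otherwise record the attempt and let it through.
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
        -m recent --name SSH --set -j ACCEPT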
Good luck. Don't listen to people who berate you for 'not knowing things.' Attempting to learn them in advance - due diligence - is a sign of a good admin. Be thorough. And above all, find a friend who does the same kind of work, and check each other. Probe each others' networks. Try exploits posted on the net.
Finally, and most important: software updates. The boring part, but the most critical.
Cheers.
Re:Some add'l tidbits (Score:4, Interesting)
Thank you, you reminded me of the number 1 rule of security planning. In all of
Security is all about risk planning. There is no way you can plug all the holes, restrict all the access properly, and manage all the resources. So the question becomes not "how do I stop it?" but "what will I do when it goes tits-up?". Also, as someone undoubtedly has said, the only perfect security is in a concrete box, sunk to the bottom of the ocean. Well, yes... but you always have to trade off security for usability. What's the point of being networked if no one can access their files? People can access their files: dangerous security hole!
You see, it's OK having all the security products in place and setting them up perfectly, but then an employee logs on to the database and walks away with a backup of all the credit cards...
And employee #2 gets a new toy, a wireless LAN thing, and a passing hacker (there's always one) doesn't even have to break a sweat listing off those same credit card numbers.
Think *all* your employees are trustworthy (haha)? Well, what happens if someone walks into your offices (for a meeting, for instance), surreptitiously plugs a wireless laptop into a network port, tucks it under a chair, and walks off? It doesn't even have to be a spare port; they can plug in a little hub.
You might as well ignore the technological security measures; sure, you'd get hacked more often, but that just means you'd have to do a lot more work recovering the system. Having the security products in place does *not* mean you'll never have to perform that recovery process, so you still need a plan for it.
So, accept that it may go wrong at any time, and figure out what you'd do when it happens. You should also have a disaster recovery plan - for when the server room floods and is hit by lightning, or two hard discs go pop at the same time.
Security - all about how much risk you'll accept, little to do with securing systems.
Encapsulating protocols is a "bad thing" (Score:4, Insightful)
RPC over HTTP is a good example of this, as are the many other protocols people see fit to encapsulate in HTTP (RDP / Terminal Services, instant messaging, etc).
Originally, the rules were dead simple. One port == one protocol. Some protocols used multiple ports, but even then it was kept nice and simple. But no, not everybody liked this situation. In the interests of making IM available to more people, clients started using HTTP so that even office staff (behind firewalls and proxies) could use it. Sure, this was blatantly circumventing the firewalls that were put up for this very reason, but that didn't stop anybody.
Application layer firewalls are a must-have. Of course, that will just force people to start using SSL...
Re:Encapsulating protocols is a "bad thing" (Score:4, Insightful)
Developers tend to do the least work necessary to achieve the result they desire. The fact that so many protocols run over HTTP now indicates that the developers of the applications that use these protocols have been unable to persuade systems administrators to open ports so that their applications could work. Instead they resorted to the harder task of layering to avoid the blocks.
The sysadmin who said "I like people to do some work to convince me" says it all. The attitude is that of a power-monger. A pragmatic sysadmin would work with the application developers to find a solution. Maybe they frown upon opening ports for applications, but they should at least put the effort in to explore the options; otherwise we'll always end up with this layering effect for every networked application. I wonder how long it will take before we end up with protocols running over HTML/HTTP to avoid the application firewalls that start blocking non-HTML HTTP traffic.
Reality (Score:2)
Those who don't need to pass traffic inside are afforded that luxury because they don't have a job. Anyone can decently secure a network that doesn't interact with anything; the real trick is allowing business to flow as usual and *still* have an acceptable level of security.
Re:Reality (Score:2)
(I hate jackoffs that don't read the original post correctly)
It depends on the size of your network (Score:5, Interesting)
Application-layer firewalls are another layer above port filtering. They can increase security and could, in theory, make it worthwhile to share a service hosted on a machine that is inside your network. I would only do that if you trust the security of your internal network. Most network designs assume that once you get into the "internal network" there is no more security, and all your deepest company secrets are available to anyone browsing around. If this is true, you've probably made some bad decisions somewhere along the way, and you should address those before you open any holes. If you are willing to maintain strict security on your internal network, then the added simplicity of allowing Internet access to machines on it can be worth the risk. This can be a lot easier than setting up a DMZ.
Usually layers do make sense, though, even if one of the layers is just a Linux box doing firewalling, routing, and serving some services. One thing I like to do is to mix operating systems at different layers. That way, if a worm of some kind gets into one layer, it won't penetrate to the layer behind it. For example, Internet-facing servers are Linux-based, desktops are Windows-based.
Another thing I have done, when I absolutely needed a Windows-based web server, is to set up Apache as a reverse proxy, only forwarding requests for a particular subdirectory to the Windows server. This filtered out all the standard buffer overflow attacks, since none of them referred to that subdirectory name. It also made sure the requests were relatively well behaved, and it buffered outgoing data for the Windows box, reducing connection counts when it was under high load. This is an easy way to do an application-layer firewall, and if you are firewalling with a Linux box you can do it right on the firewall.
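A hedged sketch of that setup with Apache's mod_proxy (the subdirectory name and the Windows box's address are illustrative):

    # httpd.conf (excerpt) - only /app/ is ever forwarded to the
    # Windows server; requests for any other path never reach it.
    ProxyRequests Off
    ProxyPass /app/ http://192.168.10.5/app/
    ProxyPassReverse /app/ http://192.168.10.5/app/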
interesting article... (Score:3, Informative)
Battle lines have been drawn, and volleys are being lobbed between the analyst and vendor camps. In dispute: Whether intrusion prevention is out of commission or the next network security salvation.
On one side, Gartner has cast intrusion detection into its "Trough of Disillusionment", saying the tech has stalled and calling for these functions to be moved into firewalls. Meanwhile, intrusion-prevention product vendor ForeScout Technologies vows to identify and block attackers "with 100% accuracy".
Call us Switzerland, but we say neither group has a lock on the truth.
Network intrusion prevention (NIP) systems probably will not protect your network from the next zero-day exploit or troublesome worm, but they are not a waste of time or money, either.
Our position puts us in the minority: Though we think NIP systems can enhance an existing security infrastructure, we do not consider integrating intrusion prevention and firewalls into a single unit a desirable goal.
Firewalls vs. NID
Firewalls have a largely static configuration: firewall administrators define what is acceptable traffic and use the features of the firewall to instantiate this policy.
Some firewalls provide better protection features than others. For example, an HTTP application-level proxy is far superior to an HTTP stateful packet-filtering firewall at blocking malicious attacks, but the basic idea is the same: Your firewall administrator can be confident that only allowable traffic will pass through.
If you have doubts about your firewall, get a new one from a different vendor, send your firewall administrator to Firewall Admin 101, or get a new administrator.
Not surprisingly, when we asked you why you are not blocking traffic using network-based intrusion detection (NID) systems, 63% of you said you use a firewall to determine legitimate traffic.
But people make mistakes, so misconfigured firewalls are a common source of network insecurity.
This simple fact has been used as a selling point for both intrusion detection and prevention systems, with vendors claiming their products will alert you to, or block, attacks that do get through.
The answer: Instead of layering on more hardware, solve the fundamental problem of misconfiguration.
Think configuring is easy? Unfortunately, it is not that simple. If you are enforcing traffic policy on your network using a stateful packet-filter firewall--such as Cisco Systems' PIX, Check Point Software Technologies' FireWall-1 or NetScreen's eponymous product--without security servers or kernel-mode features enabled, you should know that application-layer exploits, such as server buffer overflows or directory-traversal attacks, will zoom right through. Stateful packet filters stop at Layer 4.
Application-proxy firewalls can block some attacks that violate specific protocols, but face the facts: protection is limited to a handful of common protocols.
The rest are not supported through a proxy, or are supported through a generic proxy, which is no better than a stateful packet filter.
Still, NIP is not a replacement for firewalls and will not be in the foreseeable future. Why? The fundamental problem is false positives--the potential to block legitimate traffic.
Before you can prevent attacks, you have to detect them, but NIP systems rely on intrusion detection, which is hardly an exact science.
A properly configured firewall will allow in only the traffic you want. We need to feel this same confidence in IDSs before we can believe in NIP systems, but IDS vendors have employed lots of talented brain cells trying to raise detection accuracy, and they are nowhere close to 100%.
Incoming!
Despite these caveats, we believe a properly tuned NIP device can be instrumental in warding off most malicious traffic that gets past your firewall.
There are several ways to block malicious traffic: If the NIP device i
Are you NUTS?! (Score:5, Insightful)
I am the head sysadmin for a company that has many Linux, Windows, and Solaris servers, and other specialty systems such as Cobalt RaQs, proprietary satellite equipment like IP-enabled RF modems, MUXes, IPEs, and a shitload of high-bandwidth routers in multiple POPs around the world. If you think that a firewall to protect your network is insufficient, especially for a network with mixed OSes and such, you are terribly wrong. Imagine working at an ISP. You have your private workstations, then your servers (DNS, MXes, etc.), then your colocation equipment. Put it all on the same network? Suuuuure!! WHOOPS! Someone hacked into a colo box and then used his r3wt account on that box to scan your internal network for other vulnerable boxes (all at the same time, using your T1/T3/OC-192 for hosting the world's biggest movie IRC bot). You didn't have a firewall and/or IDS to detect the initial portscan on the colo box, and now you don't know that he/she is sucking up your bandwidth and scanning your entire internal (well, to you it's internal; external, whatever) network for more boxes to royally *$#! up. Trust me: once a box is rooted, you take it offline as SOON AS POSSIBLE and reinstall. It's a shitty feeling knowing that someone owned YOUR network and that you now have a shitload of crappy work to do over the weekend. Not to mention downtime, customer/employee complaints, fielding the hundreds of "I CAN'T CHECK MY E-MAIL!!! BOO HOO!" calls, and the general feeling that maybe... just maybe there's a box that got 0wnz0r3d that you might not know about.
The moral of this story, boys and girls, is that FIREWALLS ARE GOOD. Intrusion detection systems are GOOD. NAT is GOOD. TCP syncookies are GOOD. Everything on the Internet is vulnerable by default unless YOU TAKE THE TIME TO SECURE IT YOURSELF. Keep the colo systems on their own subnet. Shit, keep each SYSTEM on its own 2-port VLAN with the uplink. Keep your servers on a DMZ. Keep your internal workstations on a TRUSTED, PRIVATE, NATted network. Close every damn port besides the ones that are used by servers. Do not open ANY ports to your trusted, internal network. If someone roots a box, at least they can't load an SSH trojan on port 2000 with no password and automatic root access to get back in later.
Small nit (Score:3, Insightful)
One problem with this is that simply reinstalling a r00ted machine is no guarantee that it won't immediately be r00ted again.
While being hacked sucks, it is the worst time to panic. Remember, when you suddenly notice something strange on the machine and realise you've been owned, it could have been compromised for weeks or even months.
While you should immediately prevent it from doing further harm, you should also attempt to do a bi
Re:Are you NUTS?! (Score:3, Interesting)
Knowledge that LIDS is present on a system being accessed - indeed, merely being able to determine that LIDS is present - will send even the best hackers fleeing the moment they discover it. Anything built around a MAC (Mandatory Access Control) file system is bad mojo. You'd have to be working for a first-world intelligence agency to even dream of sticking around.
C//
Who cares about the network? (Score:4, Interesting)
Firewalls are great at slowing down intrusions. However, without proper application security architecture and host-level security hardening, you cannot really protect a network-accessible resource. Often, the only resource (network, application, host) that we can control 100% of the time - so that it can be trusted - is the host.
Besides, the bulk of compromise situations occur INTERNALLY. Is that PIX on your WAN router really going to stop disgruntled Gary down in QA from trying out the latest script-kiddie tool his roommate hooked him up with across 5 subnets? If you spend quality time hardening your hosts, chances are you won't really lose more than a few hosts at a time during a significant compromise at the application layer (e.g., a remote root sendmail hole, a bug in BIND). I think we need to revive the popularity of security "tuning" on the host side - a lot of people forgo it in favor of strong network security, but I think the latter is a much more difficult perimeter to maintain.
I've seen others post about the dangers of VPNs. I totally agree: they are conduits for information loss, but such losses are likely to be mostly self-generated (internal). Example: disgruntled Gary in QA sucks down the product roadmap details off the intranet before giving his two weeks' notice and going to work for a competitor.
Apologies to Garys everywhere. ;-)
Re:Who cares about the network? (Score:2)
Yes, I know we can do a limited amount of filtering internally already but there's nothing even close to what I have in mind. I'm thinking of application-layer filtering, perhaps even down to blocking specific attacks. Similar to Snort in some wa
Security versus usability (Score:3, Interesting)
H.323, SIP, SKINNY, etc. all require many ports to be used, which is a nightmare for a firewall admin. As a result, firewalls are evolving to include support for these systems, but my fear is that the (in my opinion) overly permissive nature of firewalls that allow these connections is ripe for exploitation by future crackers/hackers.
While I was supporting firewalls, my mantra was to close every damned thing I could and let the users suffer. But I also realize that in a modern network, usability is a major concern. Companies are deploying VoIP networks in record numbers while saving thousands of dollars each month. Companies need to reduce overhead to remain profitable, so they are looking at new technologies to help them. If the firewall industry cannot keep ahead of these technologies, it will ultimately fail.
I think the time of using access lists to control traffic is nearing an end. This will result in slower overall performance of firewall solutions as application-level firewalling becomes mandatory, rather than the transport-layer firewalling of the past.
I am afraid that I have no easy solutions, but I hope that the industry will be able to remain both secure *and* usable.
Hell, perhaps in the future security will be built into operating systems and network resources, rather than having the reactive nature that we enjoy today.
Re:Security versus usability (Score:2)
If I were in charge of a VOIP rollout, I'd use IP-based phones (NOT software) and make the VOIP network physically separate, just like the old phone network was separate. Therefore, you'd have a separate firewall whose job is VOIP only and you don't have to open up your workstation network to a security nightmare.
If people really insist on a computer-based VOIP system, a separate low-cost workstation can be used connected to the physically separate VOIP socket on
Application-level firewalls (Score:3, Informative)
There are a few very sophisticated application-level firewalls available on the market, but they all pertain to a very specific set of protocols. NFS and MAPI are not among them, as these are far too complex, and it's too hard to distinguish bad traffic from good; HTTPS, on the other hand, is pretty well suited to full application-layer inspection, and this can make it very practical to actually allow access to an application on your INTERNAL network from the outside. However, on the side of the application-level firewall, this requires very sophisticated rulesets that need modification whenever the application changes, and a very skilled administrator. Whale Communications makes one such product (e-Gap Application Firewall), which could easily be the most sophisticated application-level firewall for HTTPS. There are other vendors, though, that offer reverse proxies with authentication; these do session management and only forward traffic belonging to live, authenticated sessions, which could also make it practical to have the application run on your internal network.
Just think about it - in an ideal world, you could connect your database directly to the web application: no replication to the insecure area (DMZ), no trust relationship (not in the Windows meaning of the word!) with the DMZ, no poking holes in your firewall for DB/RPC/other proprietary communication protocols, no rolling out and maintaining the same set of hardware and software twice...
BUT this comes at a price - secure application layer proxies require skill and money.
Disclaimer: I work for a company that has been implementing the Whale solution in Germany for 2 years. However, I chose the Whale solution solely for its technical merit.
The Internet Will Become Port 80 (Score:5, Interesting)
We're already seeing shades of this, but it hasn't reached the majority of Internet users yet. Back in the late 90s, my company rolled out a product for schools that had to be retooled when it was realized that many schools were firewalling everything except port 80. (They added a mini proxy server to the product that sent everything over 80.)
I have a friend who's a sysadmin for a medium-sized insurance company - and they had all their internal applications break a couple of weeks ago when an MS worm started bouncing around the Internet. However, the problem wasn't that they were using Windows machines (I think all their servers were AIX...) - the problem was that their ISP (the regional phone company) had blocked the port that all their applications used, because it was the same port the worm used to get into systems. Last I heard, the phone company was refusing to ever re-open the port. (The phone company made the change without even informing anyone at the insurance company; everything just stopped working, and from what I heard it took them a day to figure out why their data wasn't getting through. I believe they were resigned to changing all their programs to work on a different port.)
So, we've already come to the point where connections on other ports are strongly subject to the winds of fate, and I see no reason the situation won't get worse. In most environments, 80 is the only port that people would notice being blocked, and there are too many sysadmins out there who don't know any better. Right now, if I were developing an application that needed to communicate over the Internet, I would only trust that it could use port 80, and I wouldn't even bother looking at anything else. You can even see application environments starting to spring up now (Flash Central) where it's assumed that most applications will just share a port 80 connection.
It sure is a sub-optimal situation, but I don't know what can be done to stop the trend. Ironically, such a situation makes simple port-blocking firewalls useless because all applications will be running on port 80 anyway.
Re:The Internet Will Become Port 80 (Score:2)
I see where you're coming from, but your conclusions are not entirely accurate.
There are too many financial institutions (to name just one sector) whose apps require different kinds of connection security from what you get with standard HTTP, and who won't be willing to take the "tunnel everything over the web port" approach.
For end-user private use, to a degree, maybe.
Nice use for a firewall (Score:2)
In this case you might be able to solve your problem with pairs of nat boxes.
Let's presume that the virus talks back on port 1022, and your office servers are at 1.2.3.4, and that's the port that you're using... In front of your remote boxes you'd put a firewall that (among other things) would translate outbound connections to 1.2.3.4:
thanks for the info folks (Score:2, Interesting)
For those who did comment, thank you kindly. I appreciate the ideas and just so folks better un
Re:thanks for the info folks (Score:2, Informative)
There are a couple of people who just need to POP their email while away. Perdition POP3 proxy over SSL is a decent solution. Set up the POP3 proxy box on a separate network (i.e. a DMZ) from the Exchange server and you're set.
There are a few who must have OWA access. For them, set up a reverse proxy with Apache/Squid and get a certificate for this server to communicate with your Exchange/OWA/IIS box.
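A hedged sketch of such a reverse proxy using Apache with mod_ssl and mod_proxy (the certificate paths and the internal OWA box's address are illustrative):

    # httpd.conf (excerpt) - terminate SSL here and pass only the
    # OWA path through to the internal Exchange/IIS box.
    <VirtualHost _default_:443>
        SSLEngine on
        SSLCertificateFile /etc/httpd/ssl/owa.crt
        SSLCertificateKeyFile /etc/httpd/ssl/owa.key
        ProxyRequests Off
        ProxyPass /exchange/ http://10.0.1.20/exchange/
        ProxyPassReverse /exchange/ http://10.0.1.20/exchange/
    </VirtualHost>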
And for goodness' sake, relay all your email thru someth
Re:thanks for the info folks (Score:2)
On the other hand, for a large company like Newport News Shipbuilding, with > 10k employees, and more than 3000 engineers, that really isn't going to be practical, is it?
Interesting thought... but suppose you were to have the two
Loads of badly designed services (Score:2, Insightful)
These are services that use an end-to-end protocol approach without provisions for a concentrator and filtering server within your company, requiring connections from desktop to desktop across corporate firewalls. There are services that hide their payload in normal HTTP or HTTPS requests, requiring you to parse HTTP and XML in order to select which requests you pass on and which you don't. There are services that require backward connects on
Keep it simple and sane - and DMZ (Score:4, Interesting)
But this is not where it ends, because you still haven't dealt with (a) inside abuse and (b) the possibility of failure. Good security design takes failure modes into account. Plan for when your defenses are somehow breached. Tripwire your firewalls and core systems and check them, lob the odd honeypot into the internal network to give you early alerts that someone is scanning the place or a virus has entered (last year I caught one very early because of a rather suspicious Apache log), and make sure you have a patch strategy with a short cycle time (how short depends on your risk tolerance, but your firewalls especially will need attention). Where possible, segregate the more critical facilities so you can more accurately protect them (just consider your users hostile - don't answer the support phone for half a day if you want a more realistic version of that feeling).
Oh, and think about what platform you run your security services on. I don't prefer a Unix over Windows because it's more or less safe (that's actually more complex than it appears at first glance - donning asbestos jacket
Hope this helps. =C=
Well, (Score:2)
You are out of touch with current network security practices, but that's a good thing. Most security guys these days are just not thinking straight, IMHO. The first order of business is to clearly delineate your real internal network and your semi-publicly-accessible DMZ where public services are hosted. No traffic crosses the DMZ without going through a proxy service or an application-level gateway of sorts. Secondly, only set up simple (and password-protected, I might add) proxies for outbound connectivity.
Thoughts from an Australian firewall admin (Score:3, Interesting)
1. What is the industry standard?
2. What can we get support for locally?
Application firewalls have really done poorly here in Australia. I speak from experience - I used to be a security 'engineer' (read: install Gauntlet), and have since moved on to network security administration.
The main vendors I've seen in the marketplace are (or were) Gauntlet, Sidewinder, and Cyberguard.
NAI dropped the ball with Gauntlet both here and abroad. The technology behind it is excellent, but the support really, really sucked. In addition, administration was performed via a highly unintuitive Java-based application that everyone I knew *hated* to use. You often ended up simply going back to the command line to configure the beasts.
Sidewinder I have no formal experience with, but I have heard good reviews. Secure Computing's presence in Australia was limited to international firms that required its use. There was no "storefront" for quite some time.
Cyberguard I have seen at a handful of places, mainly banks (and apparently also at various
All of these are technically good products. But due to their lack of popularity and market presence, they don't get used.
So it's a glorified packet filter I go to add a rule to now..
Best practices to the rescue (Score:4, Informative)
The simple answer to this question is "Definitely not." The use of a DMZ segment to keep production machines on their own physical network segment is likely to never become obsolete because the benefits of this simple step are so great.
Whether they are or not is irrelevant. Only the barest minimum of your network should be exposed to another network (especially the Internet), and those hosts that _are_ should be unable to initiate connections to the rest of your network, to reduce the impact of a loss of confidentiality in the case of an intrusion. While this may seem rather anal-retentive, to implement a proper application-level firewall the firewall can't just casually filter by generic service type. It _has_ to be able to distinguish a kosher query from a malicious one, and this requires a LOT of detailed work in the firewall rules to ensure that only the queries you want passed through can be passed. If you have a lot of custom CGIs with input parsing, this can turn into a nightmare of man-hours to maintain.
I mainly agree with you and feel that the answer is really "Almost never", with "never" requiring some support from the developers maintaining your site. If they're on-board with you on the concept of a DMZ, they'll help you by designing the production system so that connections could be made _to_ it from the intranet to extract information from the production hosts, instead of making the production hosts initiate connections to the intranet and increase the chance an intruder could do the same. If you can't control the access because it's some wacky proprietary protocol, institute a second DMZ (network cable is cheap and so are extra NICs). No other network should ever be allowed to reach inside your intranet.
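A hedged sketch of that "DMZ hosts must never initiate connections inward" rule, expressed as iptables forwarding policy on the firewall between the segments (the interface names are illustrative; eth1 = intranet, eth2 = DMZ):

    # Intranet may reach the DMZ...
    iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
    # ...but the DMZ may only answer, never initiate, toward the intranet.
    iptables -A FORWARD -i eth2 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i eth2 -o eth1 -j DROP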
It depends... (Score:2, Insightful)
At the moment, for my "day job" (which is really at night, but never mind that), I do sysadmin and networking stuff for an international investment bank. The information on our computers is worth on the order of tens of billions of dollars on the market, not to mention the very serious privacy implications if there were a compromise (which have specific legal consequences in some of the jurisdictions where we operate, and seriou
How you will get r00ted (Score:4, Insightful)
Playing the "security component Lego" game is great fun, and a little intelligent thought will soon see you set up with a nice, best-practice architecture. This is how it will then fail.
1. You will have unpatched machines which will be trivially rooted with a script-kiddie exploit. You will know that you should have patched, but you won't have the time, manpower, or authority to ensure the patches are in place.
2. You will misconfigure something, and then miss the problem in reviews because you didn't get peer or professional verification of all your configs.
3. You will get owned by an internal employee who has exactly the level of trust that you planned for, but abused it.
4. Someone will walk in with a clipboard, bamboozle the secretary and walk out with your fileserver.
5. You will create a whole bunch of really cool procedures, but the CIO / CTO won't back them when the first departments complain about lost productivity - this will undermine the whole thing and you will be back at square one.
6. You will give someone VPN access, and they will connect their virus- and worm-ridden home machine. It will infect your network, and their kids will surf pr0n and share mp3s on your dime.
7. Your backups will have some unforeseen problem, your restore procedures won't work right because they aren't tested, and you will lose much company data (and your job).
8. Your users will deliberately download trojan-ridden, virus infected, IE Object Overflow infested garbage, despite clear, explicit orders to the contrary being sent to them twice a day. They will do this because dancing rabbits are somehow more compelling than 'all those emails from the grumpy tech guys'.
When we talk about the 'current paradigms', I don't even think about fancy technology, I think about these obvious threats that always apparently only happen to other people, because some wiseguy always knows better. "IF you do blah blah, like we do..."
Your "paradigm" wish might be: "I want a network where every single part is doing as best it can to defend itself against the threat at the keyboard as well as the threat from external attack - not a perimeter, not 'tiers', but every part."
The problem with "application level firewalls" (Score:3, Informative)
These systems may filter standard attacks (i.e. exploits you find on Bugtraq or Packetstorm) quite well, but you can imagine that it's easy to get past such a firewall by varying an attack. They know many standard variations (like "/cgi-bin/../cgi-bin/" instead of "/cgi-bin/", or inserting NOPs into a root shell), but there are a thousand and one ways of doing the same thing, and most won't get detected.
So: do NOT think your $XXXXXX application-level oversecure paranoia firewall absolves you of secure network design or of patching your systems! Instead, do the usual:
To summarize: you have an excellent chance of averting 99% of all attacks (as those are the known attacks of script kiddies/zombies/...) with standard techniques like the ones mentioned above. You have a good chance of making a random hacker move on to an easier target. You have almost no chance of averting a skilled hacker with time who wants to get into YOUR machines.
What about a small company? (Score:2)
It's been very edifying listening to you guys talk about your DMZ servers and your application-level firewalls and your apparently infinite budgets for admin time and hardware, but what about the real world of small (<10 employees) businesses with a single server running Windows 2000 Server and Exchange 2000 on a single network segment? No ISA, no Checkpoint, no time, no money, no dedicated admin, no understanding of why it might be a good idea. Mostly what we've got for these folks is a Linksys
Re:What about a small company? (Score:2, Insightful)
It is prudent to assume that something 'bad' will happen; it's just a matter of time. With that assumption, start figuring a monetary value next to the loss of each kind of data you have. How much would it cost you to rebuild your customer database, weather legal action from customers, etc. in the event that the customer database is broken into and de
inspection and encryption are incompatible (Score:2)
Want to attack an httpd behind a mature NIDS? Establish an SSL session to port 443 and send your "GET /cgi-bin/dummy.pl?AAAAAAAA..."! NIDS blinded.
To avoid this, you have to terminate the SSL tunnel in front of your IDS, e.g. by setting up a transparent HTTP proxy holding the X.509 certificate and the key pair on your "application-layer firewall". Most products do not offer this possibility.
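You can watch the blinding happen with nothing more than the stock openssl client - a quick sketch (the hostname and path are illustrative); everything after the handshake is ciphertext to a passive NIDS:

    # Open the encrypted session, then type the request by hand
    # (finish with a blank line); a sniffer between you and the
    # server sees nothing useful.
    openssl s_client -quiet -connect www.example.com:443
    GET /cgi-bin/dummy.pl?AAAAAAAA HTTP/1.0
    Host: www.example.com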
Current Network Security (Score:2)
-Practical Unix and Internet Security, 3rd edition
Building Internet Firewalls, 2nd edition.
Securing Windows NT/2000 Servers for the Internet
Arms race and overloading (Score:2)
This is an arms race, and as soon as you give ground at all, you've lost.
The reason people are implementing things like 'Web Services', overloading port 80 to provide potentially insecure services on a port previously thought reasonably safe, is that they don't understand the need for security and firewalls, they're frustrated by the restrictions you - rightly - put on them, and they want to shortcut around the firewall. If you allow them to, they will. Furthermore, they will employ half-trained code-monk
Security and Web Services (Score:2)
I have a few thoughts on security:
Every computer inside a firewall should be as secure as possible. One compromised computer should not necessarily compromise your network.
Web service responses should be document-centric - SOAP is best used not as RPC but as a rich document (XML payload) request. Make requestors sign their requests.
Use SOAP over HTTPS (see the sketch after this list).
Avoid using Windows :-) (Hey, th
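A hedged sketch of such a request from the command line with curl (the URL and file names are illustrative, and request.xml is assumed to already carry the requestor's signature):

    # POST a signed XML document over HTTPS, verifying the server
    # against a known CA certificate.
    curl --cacert /etc/ssl/certs/ca.pem \
         -H "Content-Type: text/xml" \
         --data-binary @request.xml \
         https://ws.example.com/soap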
Murphy's law: Paranoia rules. (Score:2)
Many years ago I ran across a listing of corollaries to Murphy's law; many of them apply to security administration (some directly), like:
Troll! (Score:2, Insightful)
Maybe what you're saying is "why ask on slashdot instead of asking people who know?" That would be a different question
Re:How Do I Compare then? (Score:3, Informative)
If your antivirus software is kept up to date, then your Linksys will serve you well. Keep a good backup of your data for the times when your antivirus update comes after the virus/trojan/worm infection.
I might suggest your worst enemy is a coworker, or a family member of said coworker.
-sid
Re:How Do I Compare then? (Score:2, Insightful)
how do you know whether to trust the XP updates? did MS break anything in the newest update? it's been known to happen
your linksys is probably pretty secure, I don't know of any exploits. doesn't mean there aren't any!
but you can do some proactive things:
- keep watch on CERT & other security sites.
- get some of the professional and hacker intrusion tools and ru
Re:How Do I Compare then? (Score:2)
Make sure you have some kind of antivirus program on those workstations. Then you have to worry about the stuff on the computers. Make sure there's a plan so you won't go out of business when that data gets accidentally deleted or stolen or burned or turned to ones and zeros by some virus.
Re:How Do I Compare then? (Score:2)
1) keep your firmware updated on the linksys
2) make sure the default passwords on the linksys have been changed
3) Make sure nobody plugs a wifi router or card into the system.
Also, make sure you have a virus scanner on each of those boxes, as there's nothing in your system to protect against malware.
Re:ok (Score:2)
Just because it's not mentioned, doesn't mean it's not there. Like someone else said, it's more a question of defaults.
Any recent Linux distribution will have iptables installed (earlier versions had ipchains).
Starting around RH 7.3, RedHat started running lokkit [linux.org.uk] by default on system setup. For any setting other than 'none', lokkit locks out all/most incoming connections, but lets you specify particular ports to allow inbound (lik
Re:ISA Server (Score:2)
If you can find anyone who has successfully penetrated a properly configured ISA Server, I would consider it a great service if you would enlighten me about it. But until then, I have reluctantly accepted the fact that ISA is a rather good firewall that offe