
Changes in the Network Security Model?

Cliff posted more than 10 years ago | from the navigating-the-mutating-landscape dept.

Security 261

Kaliban asks: "As a sysadmin, understanding network security is clearly an important part of my skill set, so I wanted to get your thoughts on a few things I've seen recently after some discussions with co-workers. Are network services becoming so complicated that application-level firewalls (such as ISA Server) are absolutely necessary? Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)? This leads me to my next question: has the paradigm of 'if you offer external services to the Internet, then place those machines onto a perimeter network' been eroded? Are application-level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet? When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."



wtf? (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#7091266)

has slashdot slashdotted itself?

Yes, Slashdot has slashdotted itself (0)

Anonymous Coward | more than 10 years ago | (#7091282)

It's not just you. I have to wait a long time to see Slashdot's pages. Señor Taco needs to buy more powerful computers for his website.
One other thing: Bustamante is a synonym for disaster.


Wrong question (-1, Troll)

Dancin_Santa (265275) | more than 10 years ago | (#7091269)

You ought to be asking how you find yourself asking such rudimentary questions and yet consider yourself prepared to take on the role of system administrator.

Though often slammed as paper tigers, Microsoft Certified engineers and Red Hat Certified engineers at least have the proper background knowledge to confront the day-to-day operations of a corporate network. The simple questions you pose are already taken care of, for the most part, and any sysadmin worth his salt has already set up scripts to handle any contingencies that may arise.

So the long and short of it is, go back to school and study up on the subject. If you already knew the answer, you wouldn't be asking the question.

bad advice, son... (3, Insightful)

vt0asta (16536) | more than 10 years ago | (#7091307)

He already has the standard, generally accepted rule-of-thumb answer: "Never!" What he wanted to know was whether these newer fancy-schmancy firewalls are changing the rules, and where poking a hole might be acceptable. The answer, of course, is "it depends" - not "go study up and give back the same answer he already knows" to some professor or cert authority. Long and short of it: you have the wrong answer.

Old Joke (0)

Anonymous Coward | more than 10 years ago | (#7091335)

hehehe, reminds me of an old joke:

Why isn't cliff a sysadmin?

Because the only thing he knows about services and holes is getting his hole serviced by cmdr taco and michael.

Re:Wrong question (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#7091313)

Since you're so smart, why don't you answer his question?

Re:Wrong question (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#7091322)

Wow, you sort of sound like a jerk. Maybe you should contribute some of your considerable intellect (at least from your point of view) and help someone out.

Re:Wrong question (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#7091330)

Wow, you must be fat.

Troll! (1, Insightful)

mveloso (325617) | more than 10 years ago | (#7091331)

Times change, and technology always requires you to keep up to date.

Maybe what you're saying is "why ask on slashdot instead of asking people who know?" That would be a different question :)

Try a three-tiered approach (5, Informative)

Eponymous Cowboy (706996) | more than 10 years ago | (#7091270)

There are three disparate levels of security you need to consider, and it is advisable to take a three-tiered approach to the problem.

First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall. Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network. From your point of view and theirs, it is as if their machines were physically located on the other side of your firewall--just like having the machines right in your building.

Second, for business partners and contractors who need limited access to a subset of services, but whom you do not trust fully, the answer is quite likely also a VPN, but not directly into your network. For services provided to these people, you want everything from your end first going through application-level firewalls, and then through the VPN, over the Internet, to them.

Using a VPN in these cases prevents random hackers from entering your network on these levels.

Finally, for the general public who simply need access to your web site, the ideal situation is to simply host the web site on a network entirely separate from yours--possibly not even in the same city. Use an application-level firewall to help prevent things like buffer overflows. Then, if your web server needs to retrieve information from other systems on your network, have it communicate over a VPN, just like the second-level users mentioned above--that is, through additional levels of firewalls to machines not directly on your primary network. (Basically, you shouldn't consider your web servers as trusted machines, since they are out there, "in the wild.")

By following this approach, you expose nothing more than is necessary to the world, and greatly mitigate the risk of intrusion.


Re:Try a three-tiered approach (5, Interesting)

redhog (15207) | more than 10 years ago | (#7091306)

One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors and need to be shielded off, so a simple VPN solution is no solution.

We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-look-alike front end to scp), which they have had no problems using (they did not have any opportunity to work from home before, so they don't complain :).

We also plan to later on introduce AFS and allow remote AFS mounts, and VNC remote-desktops.

Locally, we have a simple port-based firewall, basically walling off all inbound traffic except ssh and http (and allowing nearly all outbound traffic), and keep our OpenSSH and Apache servers updated (have you patched the two ssh bugs reported on /. on your machines yet?).

So, my advice is: keep it simple. Do not trust an overly complicated system. And keep your software patched against the latest bugs - keep an eye on the security update service for your distro/OS, and on BugTraq.
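For reference, the "wall off everything inbound except ssh and http" policy described above might look something like this (a minimal sketch, assuming a Linux 2.4 box with iptables and an external interface named eth0 - both assumptions; must be run as root):

```shell
#!/bin/sh
# Default-deny inbound and forwarded traffic; allow everything outbound.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Always allow loopback, and replies to connections we initiated.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# The only services offered to the outside: ssh (22) and http (80).
iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
```

Keeping the whole policy to a dozen lines like this is exactly what makes it auditable.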

Re:Try a three-tiered approach (2, Insightful)

littlerubberfeet (453565) | more than 10 years ago | (#7091340)

Well, security through obscurity works. He should get a bunch of VAX boxes and force employees to use Macs at home.

In all seriousness, he should be looking at a system that minimizes any potential damage, not some fancy firewall solution that costs a bundle. Close down the ports and keep users from loading spyware or opening email with executable attachments.

Re:Try a three-tiered approach (0)

Anonymous Coward | more than 10 years ago | (#7091403)

"We've solved the most immediate problem by allowing only ssh, and giving employees with Windows a copy of WinSCP (an excellent, two-pane, Windows-FTP-client-look-alike front end to scp), which they have had no problems using (they did not have any opportunity to work from home before, so they don't complain :)."
Couldn't your employees just securely copy a virus-infected file from their home machine, then?

"We also plan to later on introduce AFS and allow remote AFS mounts, and VNC remote-desktops."
VNC is not exactly the most secure solution for remote desktops. Not that I am saying it isn't cool, 'cause it sure as hell is.

Re:Try a three-tiered approach (2, Informative)

JebusIsLord (566856) | more than 10 years ago | (#7091513)

You can tunnel VNC through SSH though, making it quite secure (and, as an added bonus, faster through compression). The old VNC site even used to recommend it.
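The tunnel amounts to one command (a sketch; the host name is a placeholder, and it assumes the remote VNC server is on display :0, i.e. TCP port 5900):

```shell
# Forward local port 5901 through ssh to the VNC server on the remote box.
# -C compresses the tunnel, which is where the speed gain comes from.
ssh -C -L 5901:localhost:5900 user@work.example.com

# In another window, point the viewer at the local end of the tunnel:
# vncviewer localhost:1    (display 1 = TCP port 5901)
```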

Re:Try a three-tiered approach (3, Insightful)

dhammabum (190105) | more than 10 years ago | (#7091445)

Um... a VPN is a hole in the firewall.

There really is nothing magic about VPNs; they are quite dangerous - they can provide clear access to your internal network.

Re:Try a three-tiered approach (0)

Anonymous Coward | more than 10 years ago | (#7091474)

- if they are running Windows, they are potential virus and worm vectors and need to be shielded off, so a simple VPN solution is no solution.

You left off Linux and Unix. I've seen employees' Linux boxes OWNED before.

Don't kid yourself, practically any OS that is useful and can be connected to the internet is vulnerable in some way.

A maintained firewall and current virus protection should be mandatory for anyone connecting to an internal network from the outside. Some vendors are starting to build policy enforcement of that requirement into their wares.

A winterm connecting to citrix should be pretty secure too.

Re:Try a three-tiered approach (0)

Anonymous Coward | more than 10 years ago | (#7091717)

> A winterm connecting to citrix should be pretty secure too.

Not to mention much more bandwidth friendly than VNC-over-SSH.

Also, he's allowing users file-server access through SCP or AFS, so he still has to worry about viruses vectoring in. Which makes one wonder whether all these bourgeois protocols he's using are really worth the bother. Why not just firewall a VPN?

Re:Try a three-tiered approach (1)

firewood (41230) | more than 10 years ago | (#7091640)

One thing that I need to consider at my current job is that you can NOT trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors and need to be shielded off, ... We've solved the most immediate problem by allowing only ssh

Allowing ssh from a box that is potentially 0wn3d and running a key logger is a big hole in your security. Requiring a hardware firewall/VPN box on home systems could at least temporarily keep the key loggers from phoning home.

Re:Try a three-tiered approach (1)

dago (25724) | more than 10 years ago | (#7091656)

Normally, all VPN solutions should include the possibility of enforcing a firewall policy on the client side.

Some of them can also enforce the use of an up-to-date antivirus...


Re:Try a three-tiered approach (5, Interesting)

kennyj449 (151268) | more than 10 years ago | (#7091319)

In my opinion, between the danger of worms transmitted above the application level, the existence of uneducated users (in many cases, uneducable), and the whole physical-security issue, even an internal network is not to be trusted (though few are actually worse than the Internet, except for pervasive wireless networks that don't use a strong, non-WEP encryption solution). VPNs can definitely be very useful, but using them only at the outer edges of your network (e.g. internet-based links) leaves you wide open to any form of attack that originates from inside, which is always a danger no matter how good your external defenses are.

Personally I don't think physical separation is necessary if you're going to be using a strong VPN, because you can make it so that the only traffic that passes back and forth goes through the VPN, and it is then no less secure (if anything, more secure, except for purposes of physical security) than if the traffic were being passed over the internet. You also get the advantages of increased throughput, a single physical site (or fewer of them) to manage, and lower bandwidth costs. Every little bit helps...

In any case, it is my opinion that any computer which can communicate with others on the internet, no matter how well-restricted such communications are, should itself be considered non-trustworthy. It might be safer for being behind a firewall, but it can still grab a trojan or worm either through accidental or intentional means and become a staging point for internal attacks. It is for this reason that I personally believe that it is imperative to ensure that every computer on a network is secure and has personal firewalling of some form installed (if you're dealing with *nix workstations this is a no-brainer for a competent admin; Windows boxen will benefit greatly from simple solutions such as Tiny Personal Firewall.)

This all goes double for boxen which are physically located outside of the network and which VPN inside (this is the reason for that last paragraph's worth of rambling.) A certain amount of distrust should be exercised for computers which can find themselves poorly protected from the dangers of the internet at times, and as such it is not only necessary to keep such boxes under close scrutiny and send their traffic through a decent firewall, but also to either educate users (as well as possible) on good security or require as a matter of policy that they utilize certain security measures (a personal firewall combined with a regularly-updated antivirus application is a potent combination that goes a long way towards keeping a computer clean.) Assuming that a VPN is a safe connection is a recipe for disaster; it prevents others from listening in but otherwise it is no better than any other old TCP/IP connection.

VPNs, of course, can be quite useful on an internal network. Packet sniffers tend to have difficulty picking up on SSH as it is, but put that through a 1024-bit encrypted tunnel and it becomes far more difficult to crack apart (and such layering protects you, as there are now *two* effective locks which must be picked in order to gain entry). It isn't going to make a difference between two servers connected with a crossover cable that enjoy strict physical security, but when traffic is being passed over a network of old Windows 95 boxen running Outlook, it pays to be prudent. Such encrypted separation, used intelligently, can often eliminate the need to physically separate network segments when connectivity would be useful.

Oh, one last point: if you're using a WLAN, it's only logical that unless it's strictly for visitors doing web surfing and chatting on AIM, a VPN is useful there as well. WEP is both less useful and far less effective.

As for a good VPN technology to use for any application, IPSEC is always handy (and enjoys excellent and robust out-of-the-box support in the more recent revisions of... almost everything.)

Sorry if this seems a bit unclear, but I've had a long day. :)

vpn is NOT a magic word (5, Insightful)

smitty45 (657682) | more than 10 years ago | (#7091334)

VPNs are great until you realize that they provide only *temporary* access to your office network. What happens to those road warriors' machines when they're not VPN'd in but are still on the internet?

Are they firewalled properly?
Are their virus definitions updated?

If the answer to either is "no" or "don't know", then the VPN will compromise whatever safety it could bring. In other words, the latest and greatest worm that couldn't penetrate your office network from outside before you patched can now get in via the work-at-home employee who VPNs in, and is now infecting everyone.

The bottom line is to have a well-thought-out security policy and PROCESS... and that only comes with training, more training, and training. Some education would help, too. Even people like Mudge and Dan Geer don't stop learning.

And as for those who would call your questions stupid: they are the folks who are afraid to ask the stupid questions.

Re:vpn is NOT a magic word (0)

Anonymous Coward | more than 10 years ago | (#7091644)

"and for those who would call your questions stupid...they are the folks who are afraid to ask the stupid questions." The only stupid question is the question that hasn't been asked.

Fallacies of an unsecure admin (3, Interesting)

segment (695309) | more than 10 years ago | (#7091695)

First, for employees and others who have trusted access to your network, the answer is not to poke holes in your firewall.

While this is simple to state, how many companies will follow the rule? Companies are not going to jail their users, so the first time someone wants to listen to mp3s or streaming music, up goes Real or Windows Media. What? You want to see the stock ticker from Bloomberg? Sure, now you have multicasting crap. Get real - and that's not even counting someone who knows about things like datapipe.c.

Rather, the answer is simple, just three letters. VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network.

You're either blind or too trusting of people. Remember, the biggest security hole often comes directly from the inside. For instance, I know someone who has a VPN through IBM for her work. Lo and behold, she wanted to take that same machine and hook up DSL to it. Say goodbye to security over the VPN.

I won't get too deep into this since I'm tired, but a VPN isn't always the answer. The answer is actually education. Instead of spending on a Cisco PIX or a Nokia VPN machine, try holding monthly meetings with employees and make them aware of the issues. It doesn't have to be a full-blown Harvard presentation; a quick PowerPoint presentation will actually teach them things they can carry on to their homes or future places of employment. VPNs are like security through obscurity, in a way. If someone wants in, a VPN will do nothing to stop them.

Re:Try a three-tiered approach (2, Interesting)

Ckwop (707653) | more than 10 years ago | (#7091858)

You can trust your employees [bbc.co.uk] ?

Don't ever believe that your employees won't attack you. Some will attack you by accident (bringing infected machines into the office or something); some will even attack you out of spite.

You should only give trust to entities you have to trust in order to get the job done. You have to trust (some of) your servers and IT staff... but you shouldn't have to trust most of the internal network.

Where possible, you should treat the machines on your network the same way you'd treat an internet machine. Obviously, you're going to have to give your network machines more access than an internet machine... but treat them with the same suspicion.


Keep it simple. (3, Insightful)

SatanicPuppy (611928) | more than 10 years ago | (#7091289)

The beauty of the traditional firewall is its simplicity. IPTables hasn't been exploited in forever, except through user error. It's reliable, secure, and easy to understand and debug.

Application firewalls and filters are complex. To me that means more can go wrong and more holes can be found. And they have to be super effective if they're a line of defense. Sounds nasty, like those stupid .NET commercials: "1 degree of separation between you and your customer!" 1 degree? In what fairyland? Do they WANT to be hacked?

For my money, keep the perimeter boxes. Defense in depth is a great strategy. They will get some, but they won't get them all.

Re:Keep it simple. (1)

AndroidCat (229562) | more than 10 years ago | (#7091304)

1 degree? In what fairyland? Do they WANT to be hacked?

Umm, this is Microsoft we're talking about right? While I would think no, experience suggests otherwise.

Re:Keep it simple. (3, Informative)

dzym (544085) | more than 10 years ago | (#7091347)

Just as a point of comparison:

As part of a security test, we placed an NT4 SP4 box with an unpatched install of the Option Pack--to install IIS (note that this is perhaps the most easily exploitable Windows configuration on the face of the planet)--behind an ISA SP1 firewall running on Windows 2000 SP3. We were unable to compromise or otherwise DoS either of the two NT servers with readily available exploit code, for IIS or otherwise, on either operating system.

Now, it may be possible to still exploit the aforementioned NT boxes, but clearly it would have taken a great deal more effort than just running a NIMDA-alike on the NT4 box.

Re:Keep it simple. (0)

Anonymous Coward | more than 10 years ago | (#7091778)

Sounds more like the "URLScan" thing that MS ships than anything special done by ISA Server.

Re:Keep it simple. (1)

dzym (544085) | more than 10 years ago | (#7091782)

I'm talking about bare installs in default configurations with the only addition being forwarding HTTP from the ISA server.

The problem is firewall admins (3, Insightful)

0x0d0a (568518) | more than 10 years ago | (#7091641)

You're right. Application firewalls are a terrible, unsolvable hack. Of course, firewall vendors love 'em, because you'll be paying them for updates until kingdom come, like antivirus vendors.

Take a look at this part of the original post:

Are network services becoming so complicated that application level firewalls (such as ISA Server) are absolutely necessary?

Yes. They are. You know why? Because jackasses thought it would be a great idea to slap firewalls on everything. It's an easy, one-off fix that's centralized. Does jack for actual security, but it's easy to sell to management, so IT people constantly claim that everyone needs firewalls all over the damn place.

So now we have a ton of firewalls inhibiting functionality all over the place. Do application vendors simply say, "Gee, I guess we'll give up on doing interesting things with the network," due to the best efforts of short-sighted sysadmins? No. They do ugly, slower, less reliable, and harder-to-monitor things, like rebuilding everything and ramming it through SOAP. And then they sell the same stupid product right back to the "firewall-enabled" company. Now everyone loses. The security is just as bad as before. The user gets a slower, less reliable experience. The sysadmin has a harder time monitoring usage and troubleshooting (since everything is obscured by the layer being used to bypass his firewall).

Firewalls are the single most-oversold computer product ever, having displaced antivirus tools in the last year or so. Nothing ticks me off more than some sysadmin shoving another firewall in front of users.

Re:Keep it simple. (1)

Lurgen (563428) | more than 10 years ago | (#7091653)

Traditional firewalls (port filters) are like using a picket fence to stop a flood.

Multiple Firewalls (2, Interesting)

Renraku (518261) | more than 10 years ago | (#7091291)

I can see where the desire for more than one firewall is going to go up. Here's an example. At the border, you might have a hardware firewall set up before data can even get to the machines. Then you might have a per-cluster firewall, so each department or cluster of computers can set its own policies for what gets in and what doesn't. Then there would be the firewall on each machine, which could be set according to the uses of the machine. So there would be three layers of shielding before you even get to the security features of the OS itself.

Or you could just go VPN, like someone suggested. Another good idea would be to have some kind of username/password setup so that some people could bypass the first firewall; the issue of "trust" wouldn't be as big as allowing someone to zip through all the firewalls.

Re:Multiple Firewalls (0)

Anonymous Coward | more than 10 years ago | (#7091412)

Actually, a better solution would be to use a hardware firewall at each point. Every cable on the network should be hooked to a NetScreen. Give me a call and I will hook you up with a deal.


Anonymous Coward | more than 10 years ago | (#7091295)

slashdot used Windows Server 2003.

Take that you linux freaks!!!


Anonymous Coward | more than 10 years ago | (#7091343)

--SLOW AS MOLASSES -- THIS WOULDNT HAPPEN IF slashdot used Windows Server 2003.

Yeah, rather than just being slow, the website would be down completely.


Anonymous Coward | more than 10 years ago | (#7091621)

and you sir, are a hellbound heretic! turn or burn.

Immature Technology (4, Informative)

John Paul Jones (151355) | more than 10 years ago | (#7091300)

Are application level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet?

Nope. That should never happen.

The problem here is that application-level firewalling is fraught with problems. The lack of intuitive management for this type of firewalling is a problem that quite a few companies are trying to solve -- with limited success, so far. The problem is that as you move up the OSI layers, the variables increase exponentially. If you think that 65,536 is a big number, try writing an application-level script that permits "acceptable" MAPI requests while denying "unacceptable" MAPI requests. How do you determine that this NFS packet is good, and this one is bad? From the same host to the same server? How about X11? SSH? Oh, and don't break anything while you're at it. Lions and tigers and bears, Oh my!

These are the problems of an immature technology. As time passes, these issues might be somewhat mitigated, but there are plenty of "network administrators" that haven't fully grasped the concept of IP, and struggle with L3/L4 firewalling, to say nothing of moving up the stack.

Here's a tip, though: look for Bayesian filters in firewalls in a few years. That will be a trip.

Re:Immature Technology (1, Funny)

Anonymous Coward | more than 10 years ago | (#7091355)

How do you determine that this NFS packet is good, and this one is bad?

By checking the evil bit, of course!

Bayesian filters (2, Interesting)

SuperKendall (25149) | more than 10 years ago | (#7091394)

A general question: Bayesian filters are great for email because a user trains them. But do you think it will ever be practical to "train" a firewall as to what is good and bad traffic? I guess to some extent you could use regression tools to generate the sorts of traffic you like... but it seems like such a thing would need a pretty high threshold in order not to drop any real traffic. I'm not sure such a device is practical.

Re:Bayesian filters (1)

shut_up_man (450725) | more than 10 years ago | (#7091651)

I don't really know enough about the subject, but might it be possible to train a firewall to recognize certain types of DDoS attacks as "bad traffic" - such as repeated requests on certain ports, opening large numbers of http connections without continuing the transaction, etc.?

I'm pretty sure some firewalls do this sort of thing already, too...

Offtopic but since we're talking firewalls (-1, Offtopic)

zymano (581466) | more than 10 years ago | (#7091341)

Why isn't there an open-source firewall? If there is, why don't all the Linux distributions have one installed with the OS? I have also heard Monopolysoft might be incorporating a firewall into their OS. This seems like a feature that would get rid of a lot of exploits.

there is (1)

SweetAndSourJesus (555410) | more than 10 years ago | (#7091352)

IPFW [freebsd.org]

Dunno what Linux guys do, though. I'm sure they have something equally groovy.

Re:there is (0)

Anonymous Coward | more than 10 years ago | (#7091615)

> IPFW [freebsd.org]
> Dunno what Linux guys do, though. I'm sure they have something equally groovy.

Linux kernel 2.4 users use the iptables/netfilter functionality. Very nice - stateful rules, logging, etc.
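A typical stateful-plus-logging fragment under 2.4's netfilter might look like this (a sketch; the log prefix is arbitrary, and the rules must be run as root):

```shell
# Accept traffic belonging to connections that were already allowed...
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# ...log attempts to open new inbound connections, then drop everything else.
iptables -A INPUT -m state --state NEW -j LOG --log-prefix "new-inbound: "
iptables -A INPUT -j DROP
```

The LOG target writes to the kernel log, so blocked connection attempts show up in dmesg or syslog with that prefix.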

Re:Offtopic but since we're talking firewalls (1)

BigDumbAnimal (532071) | more than 10 years ago | (#7091379)

Where have you been? (Or are you a troll?) ipfwd, ipchains, iptables. And yes, XP includes a firewall.

not troll (1)

zymano (581466) | more than 10 years ago | (#7091500)

I don't use Linux a lot and didn't know XP had a built-in firewall. The docs for my computer didn't say anything about it. Why did I install ZoneAlarm?

Re:not troll (0)

Anonymous Coward | more than 10 years ago | (#7091563)

Oh, I don't know - you must have missed the billion ads, postcards, and PowerPoint presentations from Microsoft advertising the built-in basic firewall in XP Pro.

Re:Offtopic but since we're talking firewalls (1)

HermesHuang (606596) | more than 10 years ago | (#7091390)

There are plenty of firewalls for linux. In fact, when I installed Mandrake 9.1, my biggest problem was opening up the default firewall enough to let my server function as a server.

Re:Offtopic but since we're talking firewalls (1, Interesting)

Anonymous Coward | more than 10 years ago | (#7091454)

The firewalls are there in Linux distros; the question is what the default setup is on your distro of choice. Micro$oft has firewalls in its more recent releases, but again the question of defaults comes up. The ultimate issue comes down to what the default settings should be and how much should be cut off. Of course the most secure setting is complete isolation, so it becomes a matter of tradeoffs: do we want features X, Y, and Z plus their inherent weaknesses, or do we default to no X, Y, and Z, gaining more security but the hassle of enabling X, Y, or Z as needed?

It's been mentioned many times here on Slashdot that security is all about tradeoffs, and that's where the ultimate question comes in, especially when looking at it from a commercial point of view: what is acceptable to customers (in convenience, features, and ease of use)? As far as exploits go, they will always be there; the question is how desirable they are to find. To err is human, and no matter what we do there will always be a way to find that error. If you're that worried, the best firewall is to be disconnected (again, the tradeoffs).

ok (1)

zymano (581466) | more than 10 years ago | (#7091512)

I'm not a Linux expert. The SUSE Linux CD never mentioned firewalls. Thanks for clearing that one up.

Firewall is mainly a buzzword (2, Insightful)

Anonymous Coward | more than 10 years ago | (#7091364)

People run a firewall to block services that are running but that they don't use. Riiight. Instead they should just *not* run the services that they don't want. Then they wouldn't need a firewall.

Gee, even Red Hat jumped on the firewall bandwagon. At install time, instead of letting me select which services I want to run, it runs God knows what and asks me which *ports* I want to open. Now if I want to run some new network service, I have to waste time learning how to fiddle with this "firewall" so that the new service will work, while making sure that the services I don't use are still protected from the outside. Fuuuuun.

Just say no to firewalls. Ask your vendor which services are running, and how to disable the services that you don't use. That's it.
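The "know exactly what's listening" advice is easy to automate with a simple TCP probe. A minimal sketch (the throwaway loopback listener is just for demonstration — point `port_is_open` at your own hosts and ports in practice):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo: stand up a throwaway listener, probe it, tear it down, probe again.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", port))  # True: a service is listening
listener.close()
print(port_is_open("127.0.0.1", port))  # False: no service, nothing to attack
```

A loop over this against your own boxes is a poor man's port audit: if a port reports open and you can't name the service behind it, that's the thing to disable — which is exactly the parent's point.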

Re:Firewall is mainly a buzzword (1)

altamira (639298) | more than 10 years ago | (#7091606)

Yes, as if running a Windows system without RPC these days were practical. It is not; therefore you need firewalls to secure Windows systems in a DMZ network.

How Do I Compare then? (1)

Strenoth (587478) | more than 10 years ago | (#7091367)

Given what I read above, how secure does my setup sound?

First, the company: small landscape contractor with only 4 computers, all running Windows XP. The website is hosted on the computers of our small-business DSL provider.

The computers are networked through a LinkSys Router with built-in firewall, in addition to which each computer has Norton Firewall (and System Works) installed.

I know that the chances of deliberate hacking are essentially nil for such a small company, but these measures, combined with at-least-weekly Windows updates, should pretty much secure us against any type of incidental or random attack, right?

And in case you are wondering, I am essentially the entire IT department, which leaves me working only part time, and even then half my time is spent doing general admin (not sysadmin) work. It helps me get through college, and gives them the tech help they need. My specialty was Electronics Technician in the Coast Guard, so I know I am not fully up to par on sysadmin skills, but I am learning Fast!

Re:How Do I Compare then? (3, Informative)

sid crimson (46823) | more than 10 years ago | (#7091421)

Sadly, you are better off than the majority (?) of people. Ironically, it's possible you're more likely to fall prey to a bad MS Patch than anything else.

If your virus software is kept up to date then your Linksys will serve you well. Keep a good backup of your data for the times that your antivirus update comes after the virus/trojan/worm infection.

I might suggest your worst enemy is a coworker or a family member of said coworker.


Re:How Do I Compare then? (2, Insightful)

Anonymous Coward | more than 10 years ago | (#7091426)

The obvious hole in your systems is through Outlook & Explorer. Once a machine gets hacked, you must consider all systems hacked.

How do you know whether to trust the XP updates? Did MS break anything in the newest update? It's been known to happen...

Your Linksys is probably pretty secure; I don't know of any exploits. That doesn't mean there aren't any!

but you can do some proactive things:

- keep watch on CERT & other security sites.

- get some of the professional and hacker intrusion tools and run them on your network. do some research, find out what exploits are "hot" these days.

get paid for doing this a few hours a week - nothing like learning Fast and getting paid for it!!

go Coasties!!

Re:How Do I Compare then? (0)

Anonymous Coward | more than 10 years ago | (#7091489)

If you are relying on application level firewalls and patches for security, then you are protecting yourself against things that are common and known. Patches come out *after* the exploit, more often than not. Never rely on them.
General rule is only open what you absolutely have to and only use what you must.
Don't be afraid to try complicated sounding solutions, there's a lot of helpful information that can be found through a simple google search.

ideal vs practical (4, Insightful)

vt0asta (16536) | more than 10 years ago | (#7091378)

You're going to get a lot of answers on how, in a perfect world, there would be DMZ this, several layers of routers that, firewalls in between them all, VPNs, NIDs, and a whole bunch of other things that may not be applicable.

The answer really depends on what you are protecting and whether or not the security required to protect it is worth the cost.

The only way application-aware firewalls CHANGE the paradigm of network security models is for a certain class of protection. Usually that line of protection, or train of thought, is "we would like something slightly better than nothing".

If you need more protection than that, it sounds like you already know what best practice is. That hasn't changed, and you are not wrong to say so to your co-workers.

Think of it along the lines of what the military would do. Just because there is some new whiz-bang motion-tracking CCTV X10 ninja thing that shoots lasers, you'd better believe they are still going to have soldiers with rifles in watch towers, soldiers walking the perimeter, and 20ft of dead-man zone and razor-wire fences surrounding the place, along with the whiz-bang consolidating gadget.

Re:ideal vs practical (0)

Anonymous Coward | more than 10 years ago | (#7091437)

The answer really depends on what you are protecting and whether or not the security required to protect it is worth the cost.

cost includes the cost to recover your ENTIRE NETWORK. once one machine is 0wn3D they all are - potentially - and you can't trust anything.

even a small network has a HUGE cost to recover from intrusion.

Re:ideal vs practical (1)

vt0asta (16536) | more than 10 years ago | (#7091480)

cost includes the cost to recover your ENTIRE NETWORK. once one machine is 0wn3D they all are - potentially - and you can't trust anything.
You can extrapolate that further. Assume you have financial data and that is compromised. That may be more costly (think class action lawsuit or such) than the cost to recover your systems. I was giving the lad the benefit of the doubt that he understands the various aspects of cost.

If you think back to the military analogy...it's not likely they are worried about the cost of rebuilding the walls...they are more likely worried about the lost nuclear weapon. Unless of course you are in Soviet Russia...

Wow, we live in interesting times... (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#7091389)

We have actors running as Republicans, generals running as Democrats, and now this!


Some add'l tidbits (5, Informative)

Anonymous Coward | more than 10 years ago | (#7091391)

First off, remember - you won't be able to think of everything. No security model is complete without behind-the-wall systems, be they basic monitoring systems up through more sophisticated custom snort or proprietary IDS. It all depends on your paranoia level.

There are a few ways to handle the bane of netadmins - "I wanna get to my files!" VPN, as suggested, is one solution - but not without problems: recent issues with X.509, OpenSSH hacks for IP-over-SSH, etc. You can mitigate the danger by applying a consistent set of criteria to each of your requirements, like a checklist. For example:

1) Is the service mission-critical? (BOFH them if no!)
2) Can the service be offered through a less-vulnerable channel? NFS mounts moved to SFS, perhaps, or encrypted AFS as mentioned above.
3) Is there a way to move the service into a perimeter network (or outside entirely)? Even if this means synchronizing a set of data to an outside machine via cron, if the data on the machine is less important than the internal network security, this can help.
4) Once the user is connected, authenticated, and has access, *THEN* what can go wrong? What could they do maliciously? What could they do accidentally?

Personally (and this is just me talkin', no creds here) I tend to reflexively say "NO!" until convinced otherwise. I know that there are services which *must* be available through the wall, but I want the requestors to have to work to convince me. Closed systems are more secure.

Also, don't be afraid to investigate low-tech but simple and effective means of circumventing problems. First thing I ask users who want to get an occasional file home - "Can you mail it to yourself?" Second thing: "Would you be able to use a 'public folder' that I have synch to an accessible box, say, every half hour?"

I second the recommendation of iptables. It's a sharp tool, so be careful - but correctly applied, it kicks the pants off most application or appliance firewalls. Invest the time to learn the sharp tool, and you'll realize that most of what you pay for on big expensive firewalls is manageability (i.e. Java GUIs, wizards, databases, multiple systems preconfigured - IDS, firewall, proxy, etc). Do the work.
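For anyone who hasn't worked with iptables, its essence is a chain of first-match-wins rules over a default-deny policy. A toy model of that evaluation order (the rule set here is invented for illustration; real iptables adds stateful connection tracking that this sketch omits):

```python
# Toy first-match packet filter: rules are (predicate, verdict) pairs,
# evaluated in order; anything unmatched falls through to the default
# DROP policy -- the same shape as an iptables INPUT chain.
ACCEPT, DROP = "ACCEPT", "DROP"

def make_filter(rules, default=DROP):
    def verdict(packet):
        for predicate, action in rules:
            if predicate(packet):
                return action
        return default
    return verdict

rules = [
    (lambda p: p["proto"] == "tcp" and p["dport"] == 22, ACCEPT),  # ssh
    (lambda p: p["proto"] == "tcp" and p["dport"] == 80, ACCEPT),  # http
]
fw = make_filter(rules)

print(fw({"proto": "tcp", "dport": 80}))    # ACCEPT
print(fw({"proto": "tcp", "dport": 3306}))  # DROP: never explicitly opened
```

The key property is the default: a service you forgot about is dropped, not exposed. That is the whole argument for default-deny over default-allow.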

Good luck. Don't listen to people who berate you for 'not knowing things.' Attempting to learn them in advance - due diligence - is a sign of a good admin. Be thorough. And above all, find a friend who does the same kind of work, and check each other. Probe each others' networks. Try exploits posted on the net.

Final, and most important - software updates. The boring part, but the most critical.


Encapsulating protocols is a "bad thing" (3, Insightful)

Lurgen (563428) | more than 10 years ago | (#7091400)

The minute we started encapsulating protocols within other protocols, we made it absolutely necessary to have application-layer firewalls.

RPC over HTTP is a good example of this, as are the many other protocols people see fit to encapsulate in HTTP (RDP / Terminal Services, instant messaging, etc).

Originally, the rules were dead simple. One port == one protocol. Some protocols used multiple ports, but even then it was kept nice and simple. But no, not everybody liked this situation. In the interests of making IM available to more people, clients started using HTTP so that even office staff (behind firewalls and proxies) could use it. Sure, this was blatantly circumventing the firewalls that were put up for this very reason, but that didn't stop anybody.

Application layer firewalls are a must-have. Of course, that will just force people to start using SSL... :(

Lesson number 1: (3, Insightful)

suso (153703) | more than 10 years ago | (#7091406)

Don't implicitly trust what you read on slashdot.org.

Re:Lesson number 1: (-1, Troll)

Anonymous Coward | more than 10 years ago | (#7091544)

don't trust anything related with linux and open source. bad stuff.

Looking for security in all the wrong places (2, Interesting)

m4dh4tter (712011) | more than 10 years ago | (#7091431)

Face it, folks: provisioning security services at network perimeters is just wishful thinking, and this is not a new insight. Traditional packet-filtering firewalls are absolutely necessary (do you walk around your neighborhood naked?) but they must become much more widely distributed *inside* large networks in order to be effective. The same applies to application-filtering technologies (some of which are very promising) and all the other stuff people think of as perimeter defenses.

Any attempt to set up large networks as controlled domains with known security characteristics is a losing battle. The world needs to go to endpoint-driven security, and a lot of companies are working on making this manageable and cost-effective. While we're at it, the endpoint is also the place to incorporate highly granular access-control services. As long as you have machines on your network that can hit external web sites, or that have floppy drives, or unauthorized wireless access points, your internal network *is* the internet.

Re:Looking for security in all the wrong places (1)

thogard (43403) | more than 10 years ago | (#7091771)

I have been working on the model that PCs can't trust each other. I'm finding that the model is unworkable without very smart switches, and the Cisco 29xx switches that I have don't count as smart enough. While I have been able to lock down some things, every time I run tests the access lists leak like a sieve. Maybe it's time to buy a PC full of quad Ethernet cards and set it up as a router.

Re:Looking for security in all the wrong places (0)

Anonymous Coward | more than 10 years ago | (#7091860)

do you walk around your neighborhood naked?

Yeah, why not?

Reality (1)

danielrm26 (567852) | more than 10 years ago | (#7091435)

"When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."

Those who don't need to pass traffic inside are afforded that luxury because they don't have a job. Anyone can decently secure a network that doesn't interact with anything; the real trick is allowing business to flow as usual and *still* maintain an acceptable level of security.

Re:Reality (1)

danielrm26 (567852) | more than 10 years ago | (#7091450)

I seem to have misread the original piece. He was talking about passing into the *internal* network, not passing at all.

(I hate jackoffs that don't read the original post correctly) :)

It depends on the size of your network (4, Interesting)

egarland (120202) | more than 10 years ago | (#7091459)

There is no one answer. If security were your only concern, you would have as many layers of security as possible, with firewalls between each layer locked down as tight as possible. That said, security is never your only concern. Cost, ease of maintenance, performance, and flexibility are all important in a network design. After all, the purpose of your company is probably to get something accomplished, not to avoid getting hacked. There are times when every different network configuration is appropriate, from super-secure, to a cable-modem router, to a Windows box right on the Internet. There is no one answer.

Application layer firewalls are another layer above port filtering. They can increase security and could, in theory, make it worthwhile to share a service hosted on a machine that is inside your network. I would only do that if you trusted the security of your internal network. Most network designs assume that once you get into the "internal network" there is no more security, and all your deepest company secrets are available to anyone browsing around. If this is true, you've probably made some bad decisions somewhere along the way, and you should address those before you open any holes. If you are willing to maintain strict security on your internal network, then the added simplicity of allowing Internet access to machines on it can be worth the risk. This can be a lot easier than setting up a DMZ.

Usually layers do make sense though, even if one of the layers is just a Linux box doing firewalling, routing, and hosting some services. One thing I like to do is to mix operating systems at different layers. That way, a worm of some kind that gets into one layer won't penetrate to the layer behind it. For example: Internet-facing servers are Linux-based, desktops are Windows-based.

Another thing I have done, when I absolutely needed a Windows-based web server, is to set up Apache as a reverse proxy, forwarding only requests for a particular subdirectory to the Windows server. This filtered out all the standard buffer-overflow attacks, since none of them referred to that subdirectory name. It also made sure the requests were relatively well behaved, and it buffered outgoing data for the Windows box, reducing connection counts when it was under high load. This is an easy way to do an application-layer firewall, and if you are firewalling with a Linux box you can do it right on the firewall.
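The reverse-proxy trick above boils down to a path allow-list in front of the backend. A rough sketch of the forwarding decision (the `/app/` prefix is a made-up example, and a real deployment would let Apache's mod_proxy do this):

```python
import posixpath

# Model of the reverse-proxy decision described above: only requests
# under one known subdirectory are forwarded to the fragile backend;
# everything else (including classic probe URLs aimed at other paths)
# dies at the proxy. "/app/" is a hypothetical prefix.
ALLOWED_PREFIX = "/app/"

def should_forward(request_path: str) -> bool:
    clean = posixpath.normpath(request_path)  # collapse "/../" tricks first
    return clean == ALLOWED_PREFIX.rstrip("/") or clean.startswith(ALLOWED_PREFIX)

print(should_forward("/app/login"))               # True: forwarded
print(should_forward("/winnt/system32/cmd.exe"))  # False: rejected at the proxy
print(should_forward("/app/../winnt/system32/"))  # False: traversal caught
```

Normalising the path before the prefix check matters: without it, `/app/../` would pass the `startswith` test and walk right out of the allowed subtree.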

interesting article... (2, Informative)

canning (228134) | more than 10 years ago | (#7091461)

When firewalls don't do the job Mike Fratto, Sep 29 2003

Battle lines have been drawn, and volleys are being lobbed between the analyst and vendor camps. In dispute: Whether intrusion prevention is out of commission or the next network security salvation.
On one side, Gartner has cast intrusion detection into its "Trough of Disillusionment", saying the tech has stalled and calling for these functions to be moved into firewalls. Meanwhile, intrusion-prevention product vendor ForeScout Technologies vows to identify and block attackers "with 100% accuracy".
Call us Switzerland, but we say neither group has a lock on the truth.
Network intrusion prevention (NIP) systems probably will not protect your network from the next zero-day exploit or troublesome worm, but they are not a waste of time or money, either.
Our position puts us in the minority: Though we think NIP systems can enhance an existing security infrastructure, we do not consider integrating intrusion prevention and firewalls into a single unit a desirable goal.

Firewalls vs NID

Firewalls have a largely static configuration: firewall administrators define what is acceptable traffic and use the features of the firewall to instantiate this policy.
Some firewalls provide better protection features than others. For example, an HTTP application-level proxy is far superior to an HTTP stateful packet-filtering firewall at blocking malicious attacks, but the basic idea is the same: Your firewall administrator can be confident that only allowable traffic will pass through.
If you have doubts about your firewall, get a new one from a different vendor, send your firewall administrator to Firewall Admin 101, or get a new administrator.
Not surprisingly, when we asked you why you are not blocking traffic using network-based intrusion detection (NID) systems, 63% of you said you use a firewall to determine legitimate traffic.
But people make mistakes, so misconfigured firewalls are a common source of network insecurity.
This simple fact has been used as a selling point for both intrusion detection and prevention systems, with vendors claiming their products will alert you to, or block, attacks that do get through.
The answer: Instead of layering on more hardware, solve the fundamental problem of misconfiguration.

Think configuring is easy?

Unfortunately, it is not that simple. If you are enforcing traffic policy on your network using a stateful packet-filter firewall--such as Cisco Systems' PIX, Check Point Software Technologies' FireWall-1 or NetScreen's eponymous product--without security servers or kernel-mode features enabled, you should know that application-layer exploits, such as server-buffer overflows or directory-traversal attacks, will zoom right through. Stateful packet filters stop at Layer 4.
Application-proxy firewalls can block some attacks that violate specific protocols, but face the facts: protection is limited to a handful of common protocols.
The rest are not supported through a proxy, or are supported through a generic proxy, which is no better than a stateful packet filter.
Still, NIP is not a replacement for firewalls and will not be in the foreseeable future. Why? The fundamental problem is false positives--the potential to block legitimate traffic.
Before you can prevent attacks, you have to detect them, but NIP systems rely on intrusion detection, which is hardly an exact science.
A properly configured firewall will allow in only the traffic you want. We need to feel this same confidence in IDSs before we can believe in NIP systems, but IDS vendors have employed lots of talented brain cells trying to raise detection accuracy, and they are nowhere close to 100%.

Incoming!

Despite these caveats, we believe a properly tuned NIP device can be instrumental in warding off most malicious traffic that gets past your firewall.
There are several ways to block malicious traffic: If the NIP device is inline, offending packets can be dropped silently, causing the connection to fail. Whether or not the connection is inline, the session can also be summarily dropped by sending TCP Resets or ICMP Unreachable messages to the client, server or both. Or, the offending IP address can be shunned--blocked--for a specific time period.
When we asked what it would take to make you use blocking, 57% of you cited needing assurance that there would be no false positives or that traffic would be blocked effectively.
These are legitimate concerns. During our tests of NIP devices, including NetScreen's IDP 500 and Network Associates' McAfee IntruShield 4000, in Network Computing's Syracuse University Real-World Labs, only a subset of signatures were defined tightly enough not to alert on false positives consistently. These signatures were primarily TCP-based and matched protocol violations or known malicious strings.
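Of the blocking responses listed above, shunning an offending IP "for a specific time period" is the simplest to picture. A minimal sketch, with an injectable clock so the expiry is deterministic (the addresses and 60-second window are arbitrary examples):

```python
import time

class ShunList:
    """Block an IP for a fixed window after an alert -- the 'shun'
    response described above, minus all the hard detection work."""
    def __init__(self, window_seconds: float, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock
        self.blocked_until = {}   # ip -> expiry timestamp

    def shun(self, ip: str) -> None:
        self.blocked_until[ip] = self.clock() + self.window

    def is_blocked(self, ip: str) -> bool:
        expiry = self.blocked_until.get(ip)
        if expiry is None:
            return False
        if self.clock() >= expiry:
            del self.blocked_until[ip]   # window elapsed: unblock
            return False
        return True

# Demo with a fake clock so the expiry is visible without waiting.
now = [0.0]
shun = ShunList(window_seconds=60, clock=lambda: now[0])
shun.shun("10.0.0.99")
print(shun.is_blocked("10.0.0.99"))   # True: inside the window
now[0] = 61.0
print(shun.is_blocked("10.0.0.99"))   # False: window elapsed
```

The time limit is also why shunning is the least dangerous blocking mode when false positives happen: a wrongly shunned client is locked out for minutes, not forever.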

Patch things up

We have to bring up the "P" word. It is easy for us to beat you with the patch stick, but the truth is, many of our production systems are woefully out of date because we share the same legitimate reasons for being behind on patching.
For us, as for you, the problem is mostly not having enough time: time to schedule maintenance windows, test patched systems, and keep current on new vulnerabilities.
NIP devices can help here as well. Most attacks are against well-known applications and exploit well-known vulnerabilities, and this is where intrusion-based systems shine--detecting and blocking known attacks.
They can buy you the precious time you need to patch existing servers and provide an additional detection/protection layer.

An NIP taxonomy

Now that we've established that NIP systems can be worth the money, let us look at the technology. There are two broad types of NIP systems: signature-based IPSs, which match packets or flows to known signatures, and traffic-anomaly IPSs, which learn normal flow behaviour for a network and alert to statistically significant deviant events.
Signature-based NIP products run the gamut from purpose-built systems, to integration between IDSs and firewalls, like the pairing of Internet Security Systems' RealSecure with Check Point's FireWall-1 or Cisco's IDS with PIX.
At a high level, these systems work the same: the NIP device monitors traffic flowing past the wire; attempts to match the traffic--packets or flows--to known signatures; and when there is a match, takes some action.
Often, the action is just an alert, but traffic can be blocked, too. NIP vendors typically issue signatures quickly after a vulnerability is publicised, so it is wise to keep current.
Several methods are used to detect malicious activity using signatures designed to send alerts on specific attacks and mutations of attacks. Signatures are difficult to create at the best of times, however, and without a thorough understanding of the vulnerability, signature creation is less effective. Signatures also can be based on common attack indicators.
For example, they may search for binary traffic where only ASCII traffic should be; look for anomalous packets, such as telnet traffic on a high-number port; or target malformed packets.
Of course, packets that match these fuzzier signatures do not always indicate an attack. For instance, the AOL Instant Messenger client for Mac OS X does not send a host: header on its HTTP/1.1 requests, which may trigger a protocol-anomaly alert.
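The missing Host-header case makes a nice concrete example of a protocol-anomaly signature. A toy version (real IDS engines match far more than this, and this is exactly the kind of fuzzy rule that false-positives on the AIM client mentioned above):

```python
def missing_host_header(request_text: str) -> bool:
    """Toy protocol-anomaly signature: flag HTTP/1.1 requests that
    lack a Host: header (which the HTTP/1.1 spec requires)."""
    lines = request_text.split("\r\n")
    request_line = lines[0]
    headers = [l for l in lines[1:] if l]
    if not request_line.endswith("HTTP/1.1"):
        return False    # signature only applies to HTTP/1.1 requests
    return not any(h.lower().startswith("host:") for h in headers)

good = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
odd  = "GET / HTTP/1.1\r\nUser-Agent: AIM-ish\r\n\r\n"

print(missing_host_header(good))  # False: compliant request
print(missing_host_header(odd))   # True: anomaly -- or a false positive
```

Note the judgment the article describes: the `odd` request is a genuine protocol violation, yet it may be a perfectly benign client. That gap between "anomalous" and "malicious" is why blocking on such signatures is risky.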
Traffic-analysis intrusion prevention products monitor traffic patterns and capture snapshots of what constitutes normal traffic--traffic rates, which computers make connections to other computers, and so on--creating a picture of network behaviour.
Normal traffic also can be defined as part of policy enforcement. If your organisation's policy is to disallow telnet anywhere on the network, instances of telnet being used constitute, at minimum, a breach of policy.

--This article first appeared in Network Computing

Are you NUTS?! (5, Insightful)

TheDarkener (198348) | more than 10 years ago | (#7091538)

Is the simple concept of opening and closing ports insufficient for networking services that require the client and server to open multiple simultaneous connections (both incoming and outgoing)?

I am the head sysadmin for a company that has many Linux, Windows, and Solaris servers, and other specialty systems such as Cobalt RaQs, proprietary satellite equipment like IP-enabled RF modems, MUXes, IPEs, and a shitload of high-bandwidth routers in multiple POPs around the world. If you think that a firewall to protect your network is insufficient, especially for a network with mixed OSes and such, you are terribly wrong. Imagine working in an ISP. You have your private workstations, then your servers (DNS, MXes, etc.), then your colocation equipment. Put it all on the same network? Suuuuure!! WHOOPS! Someone hacked into a colo box and then used his r3wt account on that box to scan your internal network for other vulnerable boxes (all at the same time, using your T1/T3/OC-192 for hosting the world's biggest movie IRC bot). You didn't have a firewall and/or IDS to detect the initial portscan on the colo box, and now you don't know that he/she is sucking up your bandwidth and scanning your entire internal (well, to you it's internal; external, whatever) network for more boxes to royally *$#! up.

Trust me. Once a box is rooted, you take it off the network as SOON AS POSSIBLE and reinstall. It's a shitty feeling knowing that someone owned YOUR network and now you have a shitload of crappy work to do over the weekend. Not to mention downtime, customer/employee complaints, fielding the hundreds of "I CAN'T CHECK MY E-MAIL!!! BOO HOO!" calls, and the general feeling that maybe...just maybe there's a box that got 0wnz0r3d that you might not know about.

The moral of this story, boys and girls, is that FIREWALLS ARE GOOD. Intrusion detection systems are GOOD. NAT is GOOD. TCP syncookies are GOOD. Everything on the Internet is vulnerable by default unless YOU TAKE THE TIME TO SECURE IT YOURSELF. Keep the colo systems on their own subnet. Shit, keep each SYSTEM on its own 2-port VLAN with the uplink. Keep your servers on a DMZ. Keep your internal workstations on a TRUSTED, PRIVATE, NATted network. Close every damn port besides the ones that are used by servers. Do not open ANY ports to your trusted, internal network. If someone roots a box, at least they can't load an SSH trojan on port 2000 with no password and automatic root access to get back in later.

Re:Are you NUTS?! (1)

Lawngn0meXX (711030) | more than 10 years ago | (#7091657)

Sounds like we work for the same company. I agree with you; an ISP, or worse a hosting environment, is the toughest to secure, and then there are DDoS attacks. A small business doesn't have to worry about the complexity that a hosting provider has to worry about, but then again, the employees at a hosting company can make things really difficult for you. It's fun undoing the mess a legal secretary can make by answering every email in her inbox.

Users (2, Insightful)

Aadain2001 (684036) | more than 10 years ago | (#7091540)

If you allow users to select what goes in and out of their computers onto the Internet, I guarantee you that within 24 hours of rolling out the system, 95% will have flipped the "allow everything" switch because they got annoyed with being asked every time they fired up a new application.

Who cares about the network? (4, Interesting)

rc.loco (172893) | more than 10 years ago | (#7091571)

Firewalls are great at slowing down intrusions. However, without proper application security architecture and host-level security hardening, you cannot really protect a network-accessible resource. Oftentimes, the only resource (network, application, host) that we can control 100% of the time, so that it can be trusted, is the host.

Besides, the bulk of compromise situations occur INTERNALLY. Is that PIX on your WAN router really going to stop disgruntled Gary down in QA from trying out, across 5 subnets, the latest script-kiddie tool that his roommate hooked him up with? If you spend quality time hardening your hosts, chances are you won't lose more than a few hosts at a time during a significant compromise at the application layer (e.g., a remote root sendmail hole, a bug in BIND). I think we need to revive the popularity of security "tuning" on the host side - a lot of people forgo it in favor of strong network security, but I think the latter is a much more difficult perimeter to maintain.

I've seen others post about the dangers of VPNs. I totally agree, they are conduits for information loss, but are likely to be mostly self-generated (internal). Example: Disgruntled Gary in QA sucks down the product roadmap details off the Intranet before giving his 2 weeks notice and starting to work for a competitor.

Apologies to Garys everywhere. ;-)

Re:Who cares about the network? (0)

Anonymous Coward | more than 10 years ago | (#7091597)

Why that dirty Gary... Fuck you Gary you dirty lowly bastard. Go and join a competitor, would you?

Re:Who cares about the network? (1)

Lurgen (563428) | more than 10 years ago | (#7091645)

Valid points -- perhaps the next generation of firewalls will be purely internal, possibly built into our network switches? Maybe we're due for a new type of internal networking, where we can not only protect ourselves from the world in general, but from ourselves?

Yes, I know we can do a limited amount of filtering internally already but there's nothing even close to what I have in mind. I'm thinking of application-layer filtering, perhaps even down to blocking specific attacks. Similar to Snort in some ways, only active rather than passive...

rc.loco is dead right when (s)he suggests that the bulk of compromise situations are internal. They are - perhaps the point of entry was external, and the attacker (be it an automated attack, such as a worm, or a pimply faced geek in his parents basement) outside your network, but the follow-up attacks are internal. Only the first hit stood any chance of being touched by your perimeter defences, everything else is easy from there.

Re:Who cares about the network? (1)

rc.loco (172893) | more than 10 years ago | (#7091704)

I like your idea about active IDS, and I think it will appear sooner rather than later. The only problem with it is false positives, but...perhaps attack signatures can be isolated within network segments or chains of segments such that false positives are reduced in other parts of the network? E.g., between switches or routers, where the traffic profile is relatively generic.

Security versus usability (2, Interesting)

marbike (35297) | more than 10 years ago | (#7091588)

I have been a firewall engineer for nearly four years. In that time I have come to the conclusion that there is a major trade-off between the ultimate security of a system and the usability of that system. An example is the explosion of VoIP and video conferencing in the last two years.

H.323, SIP, SKINNY, etc. all require many ports to be used, which is a nightmare for a firewall admin. As a result, firewalls are evolving to include support for these systems, but my fear is that the overly (in my opinion) permissive nature of the firewalls which allow these connections is ripe for exploitation by future crackers/hackers.

While I was supporting firewalls, my mantra was to close every damned thing I could and let the users suffer. But I also realize that in a modern network, usability is a major concern. Companies are deploying VoIP networks in record numbers while saving thousands of dollars each month. Companies need to reduce overhead to remain profitable, so they are looking at new technologies to help them. If the firewall industry cannot keep ahead of these technologies, it will ultimately fail.

I think that the time of using access lists to control traffic is nearing an end. This will result in slower overall performance of firewall solutions, as application-level firewalling becomes mandatory rather than the transport-layer firewalling of the past.

I am afraid that I have no easy solutions, but I hope that the industry will be able to remain both secure *and* usable.

Hell, perhaps in the future security will be built into operating systems and network resources, rather than being the reactive add-on that we enjoy today.

Application-level firewalls (3, Informative)

altamira (639298) | more than 10 years ago | (#7091635)

There are a few very sophisticated application-level firewalls available on the market, but they all pertain to a very specific set of protocols. NFS and MAPI are not among them, as these are far too complex and it's too hard to distinguish bad from good traffic. HTTPS, on the other hand, is pretty well suited to full application-layer inspection, and this can make it practical to actually allow access to an application on your INTERNAL network from the outside. However, on the side of the application-level firewall, this requires very sophisticated rulesets that must be modified whenever the application changes, and a very skilled administrator. Whale Communications makes one such product (e-Gap Application Firewall), which could easily be the most sophisticated application-level firewall for HTTPS. There are other vendors, though, that offer reverse proxies with authentication, which do session management and only forward traffic belonging to live, authenticated sessions; these could likewise make it practical to have the application run on your internal network.
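The "only forward traffic belonging to live, authenticated sessions" behaviour can be sketched as a simple token gate (a deliberate simplification; real products tie this to full session lifecycles, timeouts, and the actual authentication step, none of which appear here):

```python
import secrets

class SessionGate:
    """Sketch of a reverse proxy's session check: requests carry a
    token, and only tokens issued at login time are forwarded."""
    def __init__(self):
        self.live = set()

    def login(self):
        # In a real deployment this runs only after authentication succeeds.
        token = secrets.token_hex(16)
        self.live.add(token)
        return token

    def logout(self, token):
        self.live.discard(token)

    def forward(self, token):
        return token is not None and token in self.live

gate = SessionGate()
t = gate.login()
print(gate.forward(t))         # True: live, authenticated session
print(gate.forward("forged"))  # False: token was never issued
gate.logout(t)
print(gate.forward(t))         # False: session is over
```

The security win is that anonymous traffic never reaches the internal application at all: everything without a live token is dropped at the proxy.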

Just think about it - in an ideal world, you could connect your internal application directly to the web: no replication to the insecure area (DMZ), no trust relationship (not in the Windows meaning of the word!) with the DMZ, no poking holes in your firewall for DB/RPC/other proprietary communication protocols, no rolling out and maintaining the same set of hardware and software twice...

BUT this comes at a price - secure application layer proxies require skill and money.

Disclaimer: I work for a company that has been implementing the Whale solution in Germany for two years. However, I chose the Whale solution solely for its technical merit.

The Internet Will Become Port 80 (4, Interesting)

ObligatoryUserName (126027) | more than 10 years ago | (#7091667)

Sad to say, but in the future, the only reliable port will be 80. All clients will have all ports except 80 blocked by default (right now this seems like wishful thinking!) and no one will open any other port (it will give them a scary security warning!), and even if they wanted to, they might be blocked from doing so by their ISP.

We're already seeing shades of this, but it hasn't reached the majority of Internet users yet. Back in the late '90s my company rolled out a product for schools that had to be retooled when we realized that many schools were firewalling everything except port 80. (They added a mini proxy server to the product that sent everything over 80.)

I have a friend who's a sysadmin for a medium-sized insurance company - they had all their internal applications break a couple of weeks ago when an MS worm started bouncing around the Internet. The problem wasn't that they were using Windows machines (I think all their servers were AIX) - the problem was that their ISP (the regional phone company) had blocked the port that all their applications used, because it was the same port the worm used to get into systems. Last I heard, the phone company was refusing to ever re-open the port. (The phone company made the change without even informing anyone at the insurance company; everything just stopped working, and from what I heard it took them a day to figure out why their data wasn't getting through. I believe they were resigned to changing all their programs to work on a different port.)

So, we've already come to the point where connections on other ports are strongly subject to the winds of fate, and I see no reason the situation won't get worse. In most environments 80 is the only port people would notice being blocked, and there are too many sysadmins out there who don't know any better. Right now, if I were developing an application that needed to communicate over the Internet, I would only trust that it could use port 80, and I wouldn't even bother looking at anything else. You can even see application environments starting to spring up now (Flash Central) where it's assumed that most applications will just share a port 80 connection.
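The tunneling trick the parent describes (the "mini proxy server" that sends everything over 80) amounts to wrapping arbitrary application traffic in HTTP framing. A minimal Python sketch, with invented host and path; a real implementation would need proper Content-Length validation and error handling:

```python
# Sketch of the "everything over port 80" trend: wrap an arbitrary
# application message inside an HTTP POST so it survives port blocking.
def wrap_in_http(payload, host="example.com", path="/tunnel"):
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/octet-stream\r\n"
        f"Content-Length: {len(payload)}\r\n\r\n"
    )
    return headers.encode("ascii") + payload

def unwrap_http(message):
    # Split headers from body on the blank line. A real implementation
    # must honor Content-Length instead of trusting the framing blindly.
    head, body = message.split(b"\r\n\r\n", 1)
    return body
```

This is exactly why port-based filtering loses its meaning once everything rides on 80: the firewall sees ordinary-looking HTTP either way.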

It sure is a sub-optimal situation, but I don't know what can be done to stop the trend. Ironically, such a situation makes simple port-blocking firewalls useless because all applications will be running on port 80 anyway.

Port 135? (0)

Anonymous Coward | more than 10 years ago | (#7091772)

Your AIX admins were running a DCE RPC app over the public Internet? Damn -- not even most MCSEs are that stupid.

Re:The Internet Will Become Port 80 (1)

fuzzybunny (112938) | more than 10 years ago | (#7091859)

I see where you're coming from, but your conclusions are not entirely accurate.

There are too many financial institutions (to name just one sector) whose apps require different kinds of connection security than standard HTTP provides, and who won't be willing to take the "tunnel everything over the web port" approach.

For end-user private use, to a degree, maybe.

microsoft and laziness cause problems (0)

Anonymous Coward | more than 10 years ago | (#7091672)

A lot of network security problems come from Microsoft, and from plain laziness. Writing code is more than just putting a bunch of algorithms together: you must think first. Imagine that you are a hacker and want to crack through those algorithms.

I do not think Microsoft has done this. Maybe they treat security as an afterthought to the software, but they do not think security first. This is the problem. A firewall thinks security first. A firewall is intended to stop everything but what is explicitly allowed in. Why would you want to poke holes through the firewall? Only if completely necessary! Be very careful, test everything, think like you are the enemy, and your system will be better off.

If a system must advertise to the outside, test, test, test it! Invite grey-hat hackers to break the system first. Make a honeypot system and test it where it cannot hurt anything. Let everybody hit the system. Keep all Microsoft products behind the DMZ, so that they are far behind all firewalls and do not announce anything.

thanks for the info folks (2, Interesting)

Kaliban923 (712025) | more than 10 years ago | (#7091678)

The varied answers did indicate that there is general ambivalence about allowing a machine on the internal network to advertise services, even if protected by an application-level FW (such as ISA Server protecting an Exchange server). That's good, because I thought I had missed something in the two years since my last sysadmin job (I tried my own non-IT business for a while, for those who are curious).

For those who did comment, thank you kindly. I appreciate the ideas. So folks better understand: this question was spurred by the fact that my current workplace has determined a need for webmail, because apparently our VPN solution is too complicated and we don't trust our users to have secure machines (I don't make those rules, I just live with them). There is one voice in my organization who wants us to open up an Exchange server that's on our internal network, because it will be protected by an ISA server, and that just seems nuts. I'd rather place a frontend web server on our DMZ/perimeter network with IMAP access to our Exchange server (we only need email, not calendaring and other features) and use secure protocols to transmit authentication information. From this discussion I've concluded that there is no decisive answer, and that I'd rather stick with our current network security model (screened subnet) than "poke a hole" in the firewall for the Exchange server.

Re:thanks for the info folks (2, Informative)

sid crimson (46823) | more than 10 years ago | (#7091758)

I'm working on something similar... Exchange/OWA on the net.

There are a couple of people who just need to POP their email while away. Perdition POP3-proxy over SSL is a decent solution. Set up a POP3 proxy box on a separate network (i.e. a DMZ) from the Exchange server and you're set.

There are a few who must have OWA access. For them, set up a reverse proxy with Apache/Squid and get a certificate for that server to communicate with your Exchange/OWA/IIS box.

And for goodness' sake, relay all your email through something before it hits your virus-protected Exchange box. I suggest a Postfix [postfix.org] / SpamAssassin [spamassassin.org] / ClamAV [elektrapro.com] setup.


Re:thanks for the info folks (1)

MickLinux (579158) | more than 10 years ago | (#7091807)

For me, I like to keep our web services and our internal network completely separate. If you want to send files in or out, you have to put them on CD-R and carry them to the outer network. So if we can afford a network at some point, that's what I'll do. But our business model matches that.

On the other hand, for a large company like Newport News Shipbuilding, with > 10k employees, and more than 3000 engineers, that really isn't going to be practical, is it?

Interesting thought... but suppose you were to have a two-tiered network like that, not pierced at all, and every building had one virtual server with RAID and CD-R that allows files to be uploaded and downloaded. Every employee has an internal account and an external account. Mail that comes in goes to the external account, but the text gets stripped out and forwarded to the internal account through a hardware+ROM-only "text email server". If you want to reply, you can do so in text. If you want to reply with more than text, you have to go to the "external access computers" in front of a security desk, where everything is logged to tape, including video of the room, and kept for a year.

In other words, I wonder if there might not be a market for single-purpose-only ROM firewall piercing servers (and, of course, the servers have their own firewalls, if necessary).

Oh, well. I rather suspect that there's a different ideal setup for every business, but "different" isn't all that different.

Loads of badly designed services (1, Insightful)

kris (824) | more than 10 years ago | (#7091693)

There are badly designed services out there. Loads of them.

These are services that are using an end-to-end protocol approach without provisions for a concentrator and filtering server within your company, requiring connections from desktop to desktop across corporate firewalls. There are services that hide their payload in normal http or https requests, requiring you to parse HTTP and XML in order to select which requests you pass on and which you don't. There are services that require backward connects on variable port numbers.

Don't let your security model be eroded by these. Tempting as it may be to have them, these services simply have no place within the enterprise. Their design is not fit for such an environment, whatever advantages the service may offer - the risk your corporation takes by deploying it is simply too high. Talk to the vendor: tell them you'd really like their service and you'd like to deploy it, but they aren't offering a security model that is up to it. State your requirements and see if they have ideas to match them. If they don't, they do not understand the enterprise. Avoid them.

On another note, application-level firewalls are funny things. They parse and understand the application protocol. That makes them pretty sophisticated as firewalls go. It also makes them vulnerable to many of the same types of attacks that can hit the applications they are protecting.

Think about it: an application-level firewall parsing POP, IMAP or HTTP can not only block or allow the protocol as a whole, but deny or allow individual commands, users, directories or whatever. That's nifty. In order for the FW to do this, it must parse folder names, user names and commands. It must manage buffers for that. It must decode character sets. It must deal with strings containing illegal characters. It must do all the same stuff that your applications often fail to do properly.
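To make that parsing burden concrete, here is a sketch of an application-level proxy validating POP3 command lines before relaying them. The verb list and length limit are assumptions for illustration, not the full RFC 1939 grammar - and even this toy version already has to worry about oversized lines and illegal characters, exactly the bugs it is supposed to shield the server from.

```python
# Toy POP3 command validator, the kind of check an application-level
# firewall must perform per line. Limits and verbs are assumptions.
ALLOWED_VERBS = {"USER", "PASS", "STAT", "LIST", "RETR", "DELE", "NOOP", "QUIT"}
MAX_LINE = 255  # guard against oversized lines before any deeper parsing

def pop3_command_ok(line):
    if len(line) > MAX_LINE or not line.isascii():
        return False  # reject buffer abuse and illegal characters up front
    verb = line.split(" ", 1)[0].upper()
    return verb in ALLOWED_VERBS
```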

Use what application-level firewalls offer, if you need it. If you don't, don't use them. They are too complex internally to be really secure.


Keep it simple and sane - and DMZ (3, Interesting)

cheros (223479) | more than 10 years ago | (#7091697)

If you want to do it right you'll always end up with a tiered model. Your basic stance should be not to trust anything or anybody, and open up from there (a bit like getting a mortgage ;-). The second stance is to always try to have two layers of defence in place instead of one (i.e. defence in depth), like NAT + proxy, just as an example. The third stance is to NEVER allow direct interaction with internal hosts. This means that inbound services (SMTP, hosted web pages) should be served from a separate interface 'between' the Net and your internal network, called a Demilitarised Zone or DMZ (apologies if this is old news, just trying to keep it clear). That's IMO also where VPN users come in: they can be given proxied equivalents of internal services, which keeps the network clear of oinks who have fiddled their VPN so they end up as routers between the Net and the internal network (yes, I know your policies should prevent them doing this, but see the second stance ;-). Any supplier feeds come in on the same type of facility; you could even use a separate interface for them. And last but certainly not least, describe what you're actually trying to protect, as that will give you some idea of the value lost if you end up with a breach - much easier to develop some defendable idea about budget requirements. For extra bonus points you can let senior management put a value on those assets (i.e. give them enough rope ;-).

But this is not where it ends, because you still haven't dealt with (a) inside abuse and (b) the possibility of failure. Good security design takes failure modes into account. Plan for the day your defences are breached. Tripwire your firewalls and core systems and check them, lob the odd honeypot into the internal network to give you early warning that someone is scanning the place or a virus has entered (last year I caught one very early because of a rather suspicious Apache log), and make sure you have a patch strategy with a short cycle time (this depends on your risk tolerance, but your firewalls especially will need attention). Where possible, segregate the more critical facilities so you can protect them more precisely (just consider your users hostile - don't answer the support phone for half a day if you want a more realistic version of that feeling ;-).
Oh, and think about what platform you run your security services on. I don't prefer Unix over Windows because it's more or less safe (that's actually more complex than it appears at first glance - donning asbestos jacket ;-); I prefer Unix-based facilities because I end up with less patching downtime, as Unix rarely needs a complete restart. But that's just me. And READ those logs..
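In the spirit of "READ those logs", a minimal log-scan sketch: flag Apache-style access-log lines containing well-known worm probe signatures. The substrings are examples from Code Red / Nimda era probes; a real setup would use an IDS or at least a richer signature set.

```python
# Minimal access-log scanner: flag lines matching known probe signatures.
# The signature list is a tiny illustrative sample, not a real ruleset.
SIGNATURES = ("default.ida", "cmd.exe", "root.exe")

def suspicious_lines(log_lines):
    # Return every log line containing any known-bad substring.
    return [line for line in log_lines
            if any(sig in line for sig in SIGNATURES)]
```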

Hope this helps. =C=

Well, (1)

photon317 (208409) | more than 10 years ago | (#7091713)

You are out of touch with current network security practices, but that's a good thing. Most security guys these days are just not thinking straight, IMHO. The first order of business is to clearly delineate your real internal network and your semi-publicly-accessible DMZ where public services are hosted. No traffic crosses the DMZ without going through a proxy service or application-level gateway of some sort. Second, only set up simple (and password-protected, I might add) proxies for outbound connectivity. If a group wants a publicly-accessible service facing the net, don't poke it through onto the private network - make them put a separate server in the DMZ, and make sure they don't establish any unnecessary trust between that machine and the inside network. And lastly, the simple model of firewalling ports is always a good thing. It's just that beyond that, for the ports which must be open to certain hosts, an application-aware transparent proxy or firewall can go a long way toward making sure the traffic doesn't show signs of attack. Hooking up snort is a good thing too. Once you get the rules tuned so that false positives are somewhat rare, set it up to trigger scripts that blackhole IPs believed to be attacking you, and even make it smart enough to blackhole networks when enough IPs from that net send an attack. Beware denial of service when implementing this - you'll need to keep an eye on what gets blackholed, and should probably time out the blackholes after a week or two anyway.
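The timeout idea at the end can be sketched as a small blackhole table with automatic expiry. This is just state tracking for illustration (a real deployment would drive actual firewall rules from it, and the two-week TTL is the comment's suggestion, not a standard):

```python
import time

# Toy blackhole table with automatic expiry. A real deployment would
# add/remove firewall rules; this only tracks which IPs are blocked.
class Blackhole:
    def __init__(self, ttl_seconds=14 * 24 * 3600):  # roughly two weeks
        self.ttl = ttl_seconds
        self.entries = {}  # ip -> time the block was added

    def add(self, ip, now=None):
        self.entries[ip] = time.time() if now is None else now

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        added = self.entries.get(ip)
        if added is None:
            return False
        if now - added > self.ttl:
            del self.entries[ip]  # TTL elapsed: unblock automatically
            return False
        return True
```

The `now` parameter exists so expiry can be tested deterministically; in production you would let it default to the current time.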

I know (0)

Anonymous Coward | more than 10 years ago | (#7091794)

why don't we put a hard disk into the application firewall and run our mail server off it?

Maybe I'm of the older school of thought, but it should be the network daemon verifying the data that your firewall is allowing to pass.

Application firewall my ass. Here, buy this donkey: it doesn't eat much, won't carry anything and is possibly dead - but a bargain all the same!


Thoughts from an Australian firewall admin (2, Interesting)

harikiri (211017) | more than 10 years ago | (#7091797)

We are a big Checkpoint shop (stateful inspection firewall). With regard to which is better, the issue seems more to be:
1. What is the industry standard
2. What can we get support for locally.

Application firewalls have really done poorly here in Australia. I speak from experience - I used to be a security 'engineer' (read: install Gauntlet), and have since moved on to network security administration.

The main vendors I've seen in the marketplace are (or were) Gauntlet, Sidewinder, and Cyberguard.

NAI dropped the ball with Gauntlet both here and abroad. The technology behind it is excellent, but the support really, really sucked. In addition, administration was performed via a highly unintuitive Java-based application that everyone I knew *hated* to use. You often ended up simply going back to the command line to configure the beasts.

Sidewinder I have no formal experience with, but I have heard good reviews. Secure Computing's presence in Australia was limited to international firms that required its use. There was no "storefront" for quite some time.

Cyberguard I have seen at a handful of places, mainly banks (and apparently also at various .gov.au sites).

All of these are technically good products. But due to their lack of popularity and market presence, they don't get used.

So it's a glorified packet filter I go to add a rule to now.. ;-)

Best practices to the rescue (3, Informative)

Dagmar d'Surreal (5939) | more than 10 years ago | (#7091799)

"[...] has the paradigm of 'if you offer external services to the Internet then place those machines onto a perimeter network' been eroded?"

The simple answer to this question is "Definitely not." The use of a DMZ segment to keep production machines on their own physical network segment is likely to never become obsolete because the benefits of this simple step are so great.

"Are application level firewalls sophisticated enough to allow machines on your internal network to advertise services to the Internet?"

Whether they are or not is irrelevant. Only the barest minimum of your network should be exposed to another network (especially the Internet), and those hosts that _are_ should be unable to initiate connections to the rest of your network to reduce the impact of the loss of confidentiality in the case of an intrusion. While this may seem rather anal-retentive, to implement a proper application level firewall, the firewall can't just casually filter by generic service type. It _has_ to be able to distinguish a kosher query from a malicious one, and this requires a LOT of detailed work in the firewall rules to ensure that only the queries you want passed through can be passed. If you have a lot of custom CGIs with input parsing, this can turn into a nightmare of man-hours to maintain.

"When is it alright to 'poke a hole in the firewall' to allow this? Personally, I think the answer is 'Never!' but perhaps I'm out of touch with current network security models."

I mainly agree with you and feel that the answer is really "Almost never", with "never" requiring some support from the developers maintaining your site. If they're on board with you on the concept of a DMZ, they'll help you by designing the production system so that connections are made _to_ it from the intranet to extract information from the production hosts, instead of having the production hosts initiate connections to the intranet, which increases the chance an intruder could do the same. If you can't control the access because it's some wacky proprietary protocol, institute a second DMZ (network cable is cheap, and so are extra NICs). No other network should ever be allowed to reach inside your intranet.

ISA Server in front of Exchange (2, Informative)

snotty (19670) | more than 10 years ago | (#7091801)

Actually, having ISA Server publish your Exchange server (using RPC) or Outlook Web Access (OWA) is a great alternative to hosting yet another server you'll have to patch and lock down. Configuring a firewall that is meant to be secure is much easier than trying to tie down a web server. Web servers on the edge don't even have the monitoring and reporting capability you will need to know whether things are running smoothly. If all you want to expose is webmail, just publish OWA. ISA Server can add a layer of protection that a web server can't, including URLScan filtering, SecurID two-factor authentication, and pre-authentication. On top of that, if you want, you can install a Symantec virus filtering agent on the ISA Server and simultaneously filter viruses out of your webmail. There are hundreds of users who use ISA to protect their Exchange and webmail. Don't take my word for it, though. Check out:

Serverwatch [serverwatch.com]
Microsoft's own site [microsoft.com]
ISAServer.org [isaserver.org]

The best answer is always defense in depth. Having a firewall in front of your web servers and email servers is good. Having an application-aware firewall in front of your web/email servers is better. Having both, with a secure policy on them, AV software, and patched machines, is best.
