
Providing a Whitelisted Wireless Hotspot?

timothy posted about 6 years ago | from the devil-in-the-details dept.

Networking 58

Ploxis writes "I volunteer some of my day managing a small network (and a ragtag band of computers) for a local nonprofit. I have been asked to set up a second, open, independent wireless network on site that will provide cost-free broadband Internet access to patrons. The catch is that they want to provide access only to a select group of about 25 websites while disallowing everything else. No objectionable sites, no mundane but non-relevant sites such as online banking or YouTube, and no other activities such as P2P or IM. They only want HTTP and HTTPS activity from a set of whitelisted websites." For the rest of Ploxis's question and his initial thoughts on making this happen, read on below. "They'd also like any non-whitelisted URL to be redirected to a 'splash page,' which would just be some HTML providing a list of allowed sites by category. I'd host this page internally on the network. Their primary concerns are liability for access of illegal/objectionable materials and conserving their bandwidth, while still providing access to specific relevant tools online. My initial thought was simply an open wireless router, a set of remarkably restrictive firewall rules, and an in-house server as a custom DNS ... but that's pretty shaky (i.e. anyone specifying their own DNS can still get at whatever they want). I assume they'll need a router with some pretty significant traffic management capabilities as well, but that's not something I've investigated before. Anyone's experiences, recommendations, case studies, or maps of similar networks would be greatly appreciated."


Get an old machine put Linux on it... (2, Informative)

BitterOldGUy (1330491) | about 6 years ago | (#24696303)

and turn it into a router and make a domain for those folks?

Re:Get an old machine put Linux on it... (1)

poopdeville (841677) | about 6 years ago | (#24701253)

There's no need for domains. The firewall just has to forward any non-approved IPs to their "non-approved hosts" page. Easy as pie, and (while non-trivial) not worth our time.

Squid (4, Informative)

eln (21727) | about 6 years ago | (#24696401)

Configure a Linux box as a router, put squid on it, set up your whitelist, and you're all set.

Re:Squid (4, Informative)

eln (21727) | about 6 years ago | (#24696525)

I should also add there's some iptables stuff involved too, but if you know the terms "squid" and "transparent proxy", Google will give you plenty of pages telling you how to set it up.
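As a hedged sketch of how those pieces fit together (the interface name, addresses, and splash URL are placeholders, and the squid directives are squid 2.x-era syntax, so check your version's docs):

```shell
# Write the redirect rule to a setup script. Assumes eth1 faces the
# open wireless segment and squid listens locally on 3128.
cat > redirect-http.sh <<'EOF'
#!/bin/sh
# Send all outbound HTTP from wireless clients through squid
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
    -j REDIRECT --to-port 3128
EOF

# Matching squid.conf fragment: allow whitelisted domains, bounce
# everything else to the internal splash page.
#   http_port 3128 transparent
#   acl whitelist dstdomain "/etc/squid/whitelist.txt"
#   http_access allow whitelist
#   deny_info http://10.0.0.1/splash.html all
#   http_access deny all
```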

Re:Squid (1)

networkBoy (774728) | about 6 years ago | (#24696645)

I concur.
This is the way I operate our lab connection (allow connections only to equipment vendor sites for software).

Re:Squid (2, Informative)

Architect_sasyr (938685) | about 6 years ago | (#24699375)

I would suggest that adding a pac.localdomain to DNS might (might, mind you) be better. Write a proxy auto-configuration file and don't permit any direct access to the outside world (i.e. turn off ip_forward and use iptables rules to enforce it, just in case). That way you don't have to worry about transparent connections (which sometimes cause issues with certain pages, like "submit" on /. every now and then for me here) and you can sort-of monitor HTTPS connections as well.

Just a thought, annihilate at will.
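A minimal PAC file for the approach above might look like this (pac.localdomain is the parent's suggested hostname; the proxy address 10.0.0.1:3128 is purely illustrative):

```shell
# Write a minimal PAC file, to be served from an internal web server
# as e.g. http://pac.localdomain/proxy.pac.
cat > proxy.pac <<'EOF'
function FindProxyForURL(url, host) {
    // Send everything through the filtering proxy. With packet
    // forwarding disabled on the gateway, traffic that tries to
    // skip the proxy simply goes nowhere.
    return "PROXY 10.0.0.1:3128";
}
EOF
```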

Re:Squid (2, Interesting)

networkBoy (774728) | about 6 years ago | (#24701153)

Sure that looks like a better solution, but squid over a linux router is easier and "good enough".
My caveat is that we have a strict usage policy and if you are caught circumventing my "good enough" solution you are not going to like the written warning. If you want general internet access you are expected to use your notebook and WiFi connection, and not connect to my lab network.

Re:Squid (2, Informative)

mikael_j (106439) | about 6 years ago | (#24696703)

Squid was my first thought as well, configure it as a transparent proxy and redirect all non-allowed traffic to the splash page. Combine that with firewall rules that block all non-DNS and non-HTTP traffic.


Re:Squid (0)

Anonymous Coward | about 6 years ago | (#24696845)

As of right now I don't think you can filter HTTPS with a whitelist on a transparent proxy.

HTTPS does get filtered if the computer's browser is pointed at the proxy.

Re:Squid (2, Interesting)

Anonymous Coward | about 6 years ago | (#24697021)

Since there can be only one HTTPS site per IP address, that's not a problem. If one of the sites is an HTTPS site, just allow it in the firewall. It's on a different port, so the transparent proxy isn't going to see the connection. Make sure that the address in the firewall rule is kept up to date.

(Yes, I know there is a TLS extension which allows multiple sites to share an IP address, but since that is not universally implemented, no HTTPS site owner uses it, as it would break too many clients.)

Re:Squid (2, Informative)

simcop2387 (703011) | about 6 years ago | (#24702951)

That's not exactly true; you can have many sites per IP address for both HTTP and HTTPS. But along with a squid proxy, you could also filter DNS queries to other domains, since HTTPS doesn't cover that.

Re:Squid (2, Informative)

cbiltcliffe (186293) | about 6 years ago | (#24697651)

pfSense has got this built in.
Install it on an old Pentium 266-400 or so, with 256MB RAM, if you can, and check the captive portal section.
Set your client WAP up on a NIC by itself, and you can configure captive portal on that interface. Ensure your login page has no login options...just a "You can't go here" type of thing.
Then, set up your allowed sites in the captive portal whitelist.
Problem solved, and you've stopped another machine from ending up in a landfill.

pfSense (2, Informative)

Fez (468752) | about 6 years ago | (#24696423)

Sounds like something that pfSense [] might be able to do, between squid and maybe the captive portal.

Re:pfSense (2, Informative)

bill_mcgonigle (4333) | about 6 years ago | (#24699253)

Not even that complex. I wrote a little tutorial, here [] - just invert the meaning of the block rule and add a default deny.

mod_proxy (4, Informative)

ak_hepcat (468765) | about 6 years ago | (#24696437)

mod_proxy, mod_rewrite

your friends at apache have most of the work done for you. All you have to do is slap it together and write some custom rules.

Linux as a firewall, to make sure that all http/https traffic gets redirected through the proxy

if the hostname in the url doesn't match what's in your rewrite rules (aka, to pass through) then rewrite it to your custom splash page.

no need for wacky dns tricks here.

Re:mod_proxy (1)

ShadowWraith (1322747) | about 6 years ago | (#24708741)

This is only good for http, and he mentioned that his employer wants to block all objectionable traffic, which can include ftp, irc, and other protocols. You'd also have to block all non-http traffic.

Forget it (0)

Just Some Guy (3352) | about 6 years ago | (#24696471)

Is it hyper-critical that a visitor can only see one of 25 websites, or can you tolerate the idea that maybe one or two users can type in an IP? If you can't live with someone visiting an unapproved site if they're determined and resourceful enough to get around your restrictions, then scrap the whole damn project because you'll be playing whack-a-mole the whole time.

Put another way: you're on a nonprofit's budget. Is this the best way to spend its resources, or would you be better off tolerating the occasional unintended use?

Re:Forget it (3, Insightful)

halsver (885120) | about 6 years ago | (#24696761)

One of the requirements is that this is wireless. So he wants to cut out the random interlopers leeching his bandwidth.

Re:Forget it (4, Insightful)

Qzukk (229616) | about 6 years ago | (#24696783)

If it's only 25 sites (and not going to turn into "hundreds") then why play whack-a-mole? Set the default to Deny, look up those 25 IP addresses, and allow only 80 and 443 to those sites. That gets you 90% of the way there (the remaining 10% being virtual hosts on the same IPs). The rest of the IPs can be rewritten to a local webserver, which is either dedicated to this purpose or uses name-based virtual hosts to have its own website, with the "default" vhost being the message you're putting up.

Make a simple script to add and remove IPs from the list and reload the rules, write down instructions on "What to do if stops working" or "What to do if you want to add", and you're done.

There are probably dozens of ways to actually implement this. Most of them will involve either custom wireless router firmware, or the wireless router plugged into a "real" router.
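A hypothetical sketch of that add/remove script (the whitelist file path, the FORWARD chain, and the default-deny placement are my assumptions, not from the comment):

```shell
# Emit one ACCEPT rule per resolved IP and per allowed port.
gen_rules() {
    # $@ = resolved IP addresses
    for ip in "$@"; do
        for port in 80 443; do
            echo "iptables -A FORWARD -d $ip -p tcp --dport $port -j ACCEPT"
        done
    done
}

# Look up the A records of every whitelisted domain.
resolve() {
    while read -r domain; do
        dig +short A "$domain"
    done < /etc/whitelist-domains.txt
}

# Typical use (re-run periodically so stale IPs get refreshed):
#   gen_rules $(resolve) > /etc/allow-rules.sh
#   sh /etc/allow-rules.sh
#   iptables -A FORWARD -j DROP   # default deny after the allows
```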

Re:Forget it (3, Informative)

coryking (104614) | about 6 years ago | (#24698961)

Won't work. The days of IPs meaning anything are long over. You are best off assuming they will change in a week.

These 25 sites could be using round-robin DNS and change their IP every DNS lookup. They could be using some load balancer that plays games with DNS and hops you around the globe depending on their mood. You have no idea how they manage their IP space and you are insane to try :-)

Squid is a much better solution. You can get squid to whitelist by domain.

But seriously, the greater internet nerd contingent needs to get it in their head that the days of IP addresses being useful as any kind of fixed or even temporary identifier are over.

Re:Forget it (3, Insightful)

Skreems (598317) | about 6 years ago | (#24698969)

Why in god's name would you statically encode IP addresses when the DNS system is sitting right there to make sure you don't have to do that manual work? Besides, if they're including any reasonably sized site in that list, their DNS entries will resolve to a different IP address depending on the day of the week and the mood of their edge network provider, so it could be any of hundreds of IPs for a single address.

Re:Forget it (2, Insightful)

amorsen (7485) | about 6 years ago | (#24703105)

Why in god's name would you statically encode IP addresses when the DNS system is sitting right there to make sure you don't have to do that manual work?

Because that's how firewalls work, in general. Some firewalls will helpfully resolve domain names into IP addresses, but there's no guarantee that the IP addresses that the firewall gets from DNS are the same the client gets, so that is a dead end too.

To do better you need to look into the actual HTTP session. If the poster had a firewall which could do that, he would most likely know, and therefore wouldn't ask the question in the first place.

Re:Forget it (2, Insightful)

YrWrstNtmr (564987) | about 6 years ago | (#24696955)

Oh please. We don't know the context of this guy's application, or what his non-profit does and who it applies to. Maybe he has a very valid reason.

Keep honest people honest, and only allow a small subset of sites.

Tell Them No (3, Funny)

techsoldaten (309296) | about 6 years ago | (#24696567)

Tell them no and strike a blow for Net Neutrality!


Re:Tell Them No (1)

orclevegam (940336) | about 6 years ago | (#24697041)

Tell them no and strike a blow for Net Neutrality!


That's not net neutrality, do you even know what that means? They're not running an ISP, they're just trying to provide access to a handful of websites for free.

Re:Tell Them No (1)

DangerFace (1315417) | about 6 years ago | (#24697283)

You tell him! Strike a blow for humour*!

*Humor for you Americans and your crazy talkin'.

Re:Tell Them No (-1, Flamebait)

Anonymous Coward | about 6 years ago | (#24699811)

Tell them no and strike a blow for Net Neutrality!

Otherwise, maybe you could ask the Chinese government for access to their list of US hardware and software whores who've helped them to implement the Great Firewall and to report to the government those who've attempted to evade it.

Or, as yet another alternative, you could purchase a set of used balls and quit.

Appropriate captcha for this one: coopers

Re:Tell Them No (0)

Anonymous Coward | about 6 years ago | (#24708379)

Net Neutrality is for people that actually pay for their internet.

If you really need to make it bullet proof... (1, Interesting)

Anonymous Coward | about 6 years ago | (#24696907)

You need a web proxy and a DNS proxy: The web proxy to restrict the URLs to those which are whitelisted and the DNS proxy to stop "clever" people from tunneling through DNS.

Re:If you really need to make it bullet proof... (1)

Zan Lynx (87672) | about 6 years ago | (#24697185)

The only way a "clever" person could fool a transparent proxy would be to corrupt the DNS of the proxy.

Or perhaps you're talking about one of those things that uses DNS as a protocol transport?

query TXT
query TXT
query TXT

Like that?

Most of these wireless AP thingies have a DNS proxy included already, it gets used to redirect people to the AP IP for the usage agreement page.

Re:If you really need to make it bullet proof... (-1, Offtopic)

Anonymous Coward | about 6 years ago | (#24697859)

I would have replied sooner, but Slashdot didn't let me. Today I've written a +5 insightful comment, and a +1 comment on this topic, but I guess being anonymous is so bad in and of itself that I have to wait more than half an hour before I can post again. That's why you only get this rant and no reply. Sorry.

Re:If you really need to make it bullet proof... (1)

Hatta (162192) | about 6 years ago | (#24699025)

You'd need a firewall too, to drop non HTTP traffic. Otherwise a clever user could just use SSH to tunnel their HTTP traffic and even related DNS requests.

Re:If you really need to make it bullet proof... (2, Funny)

Anonymous Coward | about 6 years ago | (#24699587)

Absolutely. Another essential ingredient is electricity. And an internet uplink. Who are you? Captain Obvious?

Re:If you really need to make it bullet proof... (0)

Anonymous Coward | about 6 years ago | (#24707101)

*slow blush* I'm so embarrassed for him

Re:If you really need to make it bullet proof... (0)

Anonymous Coward | more than 5 years ago | (#24834905)

I think it's kinda cute

Re:If you really need to make it bullet proof... (1)

ancientt (569920) | about 6 years ago | (#24699963)

Use squid with a domain-based whitelist. Set /etc/resolv.conf with "nameserver" and use dnsmasq to provide DNS lookup for squid, which has the added benefit of using your /etc/hosts exclusively for lookups. You then set up a script that automatically looks up the IPs of the desired sites hourly from a trusted external DNS server, probably with nslookup, though there might be a more efficient method. Dnsmasq handles your DHCP as well, further simplifying the setup.

I think. You'll want to read a manual and guide along the way to get the configs right.
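For illustration, a hedged sketch of that layout (the loopback nameserver, file names, and the upstream resolver address are placeholders I've chosen, not from the comment):

```shell
# Config-fragment sketch:
#
#   /etc/resolv.conf:    nameserver 127.0.0.1
#   /etc/dnsmasq.conf:
#     no-resolv                          # never forward to upstream DNS
#     addn-hosts=/etc/hosts.whitelist    # answer only from this file
#     dhcp-range=10.0.0.50,10.0.0.150,12h
#
# Hourly (cron) refresh of the whitelist hosts file, querying a
# trusted external resolver directly:
refresh_hosts() {
    : > /etc/hosts.whitelist.tmp
    while read -r domain; do
        for ip in $(dig +short A "$domain" @198.51.100.53); do
            echo "$ip $domain" >> /etc/hosts.whitelist.tmp
        done
    done < /etc/whitelist-domains.txt
    mv /etc/hosts.whitelist.tmp /etc/hosts.whitelist
}
```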

tinyproxy (4, Informative)

argent (18001) | about 6 years ago | (#24696971)

Instead of squid, use tinyproxy. You're not primarily interested in caching, you're interested in access control. Tinyproxy gives you much finer control of that, and it's also ... well ... tiny.

Just set up a "no proxy" rule for the sites you want them to get to, and redirect everything else to a 404 server.
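Tinyproxy's filtering options have varied across versions, but as one hedged possibility its Filter/FilterDefaultDeny pair can express exactly this whitelist (the filter file is written locally here for illustration; the real one normally lives under /etc/tinyproxy/):

```shell
# With FilterDefaultDeny, the filter file becomes a whitelist of
# allowed host regexes; everything else is refused.
cat > tinyproxy-filter <<'EOF'
example\.org$
example\.net$
EOF
# tinyproxy.conf fragment (verify names against your version's man page):
#   Filter "/etc/tinyproxy/filter"
#   FilterDefaultDeny Yes
#   ErrorFile 404 "/etc/tinyproxy/404.html"
```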

Re:tinyproxy (4, Informative)

oyenstikker (536040) | about 6 years ago | (#24698791)

As the stated goals are to provide access to a very small number of pages and limit bandwidth, caching is a great idea.

Re:tinyproxy (1)

poopdeville (841677) | about 6 years ago | (#24701317)

But definitely non-essential, especially if caching can be implemented behind the blocking solution.

Perform a DNS lookup on each of the 25 domain names (2, Interesting)

mysidia (191772) | about 6 years ago | (#24697071)

Of the allowed sites.

Use any commercial router and access point, or even a WRT-54G. Drop the list of allowed IPs into an access list.

Deny traffic for all other IPs.

Use separate rules to deny traffic to ports other than 80 and 443.

Mod parent up (1)

jabithew (1340853) | about 6 years ago | (#24697373)

Just using a firewall; nice idea. You'd have to keep on top of DNS lookups though.

The router I got from my ISP actually allows you to do this by default. It also lets you redirect to another page, which would allow an error message to be displayed. Can you think of a way to do this with kit available in normal routers?

Use scriptable devices (1)

mysidia (191772) | about 6 years ago | (#24724423)

I would expect a simple shell script on a workstation or laptop to be able to assist with maintaining the IP list; or make a Linux virtual machine (or put one in the cloud) and only run the VM while updating the list. It would be sensible to have the script format its output so it can be pasted into the device to perform updates whenever DNS changes.

It is fairly commonplace to have script-generated firewall configurations like this, especially when a single site has several firewalls (i.e. for backup internet links); it is beneficial, for example, for attacker blacklists of bad IPs to be kept in sync, and often this is done using a database at a management point.

It bears mentioning that Cisco 871W ISRs and other similar models, if the proper features are licensed, can do straight URL blocking, though configuration may be challenging. URL filtering on those devices is meant to be provided by a third-party vendor, but there is reportedly a way to allow some URLs [] and default-deny everything else.

(You manually enter exclusive allowed domains, configure no filtering vendors, and turn off allow mode)

The 871W router+AP combos are approximately $400. That is not inexpensive, but it is still a bit cheaper than buying and dedicating a full-blown server just to do filtering (which also requires a separate unit for the AP), once you account for noise, ongoing power consumption, UPS capacity (all servers need a UPS or will potentially have a short life), and the higher probability of PC failure anyway: mechanical disk drives fail predictably after a few years, especially when used for years in disk-intensive, I/O-hungry applications like a squid cache.

OpenDNS? (2, Informative)

jabithew (1340853) | about 6 years ago | (#24697429)

OpenDNS were talking about adding this as a pay-for service [] , which would be cheaper and easier than setting up a dedicated Linux box, which is the normal proposed solution to any problem posed to Slashdot.

Incidentally, the thread I linked has some other solutions posted in it.

Re:OpenDNS? (1)

ukatoton (999756) | about 6 years ago | (#24703143)

That's at DNS level.

Anyone with a static DNS server entry (or enough knowledge to look) would get around it instantly.

Others have done this on a much larger scale... (0)

Anonymous Coward | about 6 years ago | (#24697641)

China, for example...

Sorry... couldn't help but troll.

openwrt (0)

Anonymous Coward | about 6 years ago | (#24697669)

Grab a wrt54gl, install openwrt, and configure.

You can host your "splash" page, as well as your whitelist.

Done and done.

Options, options. (1)

T3Tech (1306739) | about 6 years ago | (#24698477)

I would suggest a Linksys WRT54GL/Buffalo/Asus/etc. wifi router running OpenWRT. If you're only allowing a relative handful of sites (~25), iptables rules wouldn't be too cumbersome. Add on a captive portal package (wifidog, nocat, etc.) and you're good. The basic captive portal redirection could be handled with iptables alone, but one of the packages could make things easier to administer and monitor. I know that wifidog uses libhttpd as a web server that runs on the router, so you could serve the captive page from the router itself rather than using an internal web server, which would open a hole from the wifi into a part of the network you probably don't want accessible.

Going the route suggested by others of putting squid (which can offer the added benefit of caching thus taking a bit of the load off the ISP bandwidth usage) and linux on pretty much any old PC is certainly another option. There are several firewall distros that would make this fairly simple. However in this case, you would need to add a wireless card to the box or use an AP. If you don't already have these laying around, then from the cost perspective a router that can be flashed with OpenWRT, DD-WRT, etc. would make more sense.

Maybe I'm missing something, but how would one use DNS to get around iptables rules which only allow packets to specified IP addresses? If there's a concern then I suppose using something like dnsmasq and only allowing the firewall to contact external (or internal even) DNS servers is possible. IOW, don't allow DNS queries from clients to pass the firewall.

BTW/Disclaimer/what-have-you, I do development work (router firmware) for an ad-based free hotspot company.

Don't forget to include dependencies! (2, Informative)

coryking (104614) | about 6 years ago | (#24698909)

Whatever you do, make sure you whitelist any dependencies these 25 websites use. I'm thinking of things like google-analytics, any kind of third-party-hosted JavaScript library (Google Code or YUI), and ad code. If you don't whitelist those as well, your patrons' browsers might act a little funky depending on your solution.

Re:Don't forget to include dependencies! (1)

Z-MaxX (712880) | about 6 years ago | (#24700049)

Why would you whitelist google-analytics? Isn't it some sort of usage-tracking service? I'd rather minimize the information collected on me.

I use the NoScript Firefox 3 extension, and I set google-analytics to UNTRUSTED. I have no problems browsing with that.

Mikrotik would be perfect (1)

x69 (67469) | about 6 years ago | (#24700411)

Mikrotik will do everything you need and more.

You would need to build your own using an RB/411A, CA/411, R52H, AC/SWI and a 12-24 volt power supply, and you would be all set. [] []

The guys over at [] sell preassembled APs and will even walk you through configuring it.

Dans Guardian (2, Interesting)

PhilipJLewis (104782) | about 6 years ago | (#24702323)

Set up a transparent proxy and use dansguardian [] . I've set this up and had it running for several months. It *easily* supports whitelisted/blacklisted sites and domains (even using regular expressions), and MIME types. It can also block objectionable content based on keyword groups, ratings, etc. Very good indeed.
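A hedged sketch of DansGuardian's whitelist-only pattern (these are DansGuardian's standard list-file names, normally under /etc/dansguardian/, written locally here for illustration; the domains are placeholders):

```shell
# '**' in bannedsitelist blocks every site; exceptionsitelist then
# re-allows only the whitelisted domains.
cat > bannedsitelist <<'EOF'
**
EOF
cat > exceptionsitelist <<'EOF'
example.org
example.net
EOF
```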

How about....ddwrt on a linksys router? (0)

Anonymous Coward | about 6 years ago | (#24704067)

I have used DD-WRT and it has tons of features... deny all and allow only certain websites... would not cost too much to set up...

here is how I would do it (0)

Anonymous Coward | about 6 years ago | (#24704131)

set up a proxy on your network; most proxies can strictly restrict the list of allowed sites fairly easily, and you should be able to set up such a splash page.

put a second network card in the proxy to connect the wireless access point to. Use iptables DNAT to force all http traffic to the proxy.

for https I would just use iptables rules to filter and then SNAT the traffic.

They won't get a splash page for https requests, but few people manually type https addresses in my experience, so this shouldn't be too much of a problem.
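A hedged sketch of that DNAT/SNAT split (interface names, the proxy address, the uplink address, and the single whitelisted HTTPS IP are all placeholders):

```shell
# Write the ruleset to a script. eth1 faces the wireless clients,
# eth0 is the uplink.
cat > wifi-filter.sh <<'EOF'
#!/bin/sh
PROXY=192.168.1.2

# Force all client HTTP through the proxy box
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
    -j DNAT --to-destination $PROXY:8080

# HTTPS: allow only whitelisted destination IPs, drop the rest
iptables -A FORWARD -i eth1 -p tcp --dport 443 -d 203.0.113.10 -j ACCEPT
iptables -A FORWARD -i eth1 -p tcp --dport 443 -j DROP

# SNAT the permitted HTTPS traffic out the uplink
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 443 \
    -j SNAT --to-source 198.51.100.5
EOF
```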

Try a D-Link Router (1)

codemaster2b (901536) | about 6 years ago | (#24705265)

I have a router that does that. Provides for up to 40 white-listed URLs, and only those. Dual firewalls, all the latest, even QoS (not that it matters). $100 @ Newegg. D-Link DIR-655.

Does not provide a bounce page, that I'm aware of.

DD-WRT (1)

darrenkw (1085901) | about 6 years ago | (#24705375)

DD-WRT will allow you to do this using their "hotspot" option. You set a list of sites that are allowed without logging in; when users try to go to other sites, it brings up the "login" page. You can customize that page to whatever you want.

Mikrotik RouterOS (2, Interesting)

the right sock (160156) | about 6 years ago | (#24705553)

Simplest, quickest way to do it, and does everything you're looking to do.

They put a relatively decent shell interface on top of Linux that hides a lot of the complexity, and they also have a good GUI management utility (I don't use it myself, but it can do everything the shell can).

It'll run on most hardware, including x86. You'd have to buy a license, $45, but it's worth the time saved figuring out how to get all the different parts tied in together.

And there is an active community forum with helpful people in case you run in to trouble.

transparent DNS redirect (0)

Anonymous Coward | about 6 years ago | (#24708907)

Among all the other ideas, you should also consider forcing all DNS queries to a service like where you can easily maintain a whitelist and blacklist the rest of the internet.
