
Web Hosts — One-Stop-Shops For Mass Hacking?

Soulskill posted about 3 years ago | from the digital-walmarts-for-script-kiddies dept.

Security 70

jjp9999 writes "More than 70,000 websites were compromised in a recent breach of InMotion. Thousands of websites were defaced, and others had alterations made to make it harder for users to access their accounts and repair the damage. A similar attack hit JustHost back in June, and in a breach of Australian Web host DistributeIT just prior to that, hackers completely deleted more than 4,800 websites that the company was unable to recover. The incidents raise concern that hacker groups are bypassing single targets and hitting Web hosts directly, giving them access to tens of thousands of websites at once. While the attacks caused damage, they weren't as malicious as they could have been: rather than defacing and deleting, the hackers could have quietly planted malware on the sites or stolen customer data. Web hosting companies could be one of the largest holes in non-government cybersecurity, since malicious hackers can gain access through openings left by the Web host, regardless of the security of a given site."

Obvs (0)

Anonymous Coward | about 3 years ago | (#37578728)

Duh?

Re:Obvs (0)

Anonymous Coward | about 3 years ago | (#37580052)

This /. story is one of those "self assessment" opportunities.

If this story is "news" to you... gtfo the internet.

Seriously. You are too clueless to be out here.

Next week's "self assessment" story: "Joomla plugins discovered to be the biggest mass server-rape vector in history!" ZOMG NOWAI!!!!!

Seriously. For everyone's sake. Get off.

What kind of moronic hosting ? (1)

unity100 (970058) | about 3 years ago | (#37578742)

"incidents raise concern" -> as if this is something new ? it has been so since internet had become available for masses to host websites personally. anyone who had remotely got affiliated with hosting industry knows that.

why the fuck is this submitted and accepted as if it is something new ?

Re:What kind of moronic hosting ? (1)

AliasMarlowe (1042386) | about 3 years ago | (#37579446)

"incidents raise concern" -> as if this is something new ? it has been so since internet had become available for masses to host websites personally. anyone who had remotely got affiliated with hosting industry knows that.

Yup. And this is one of the reasons I host my own site myself, at home. Where there have been no intrusions (not yet, anyway). Where the backup system works, with an off-site copy updated weekly. It's not a very important site to anyone else (typically only 30 GB/month in traffic), but it's important enough to me that I look after it.

My reaction to discovering that there are bozos with web sites who don't have backups and trust others with their site security: Sorry, fellas, but I sure hope you enjoyed getting reamed with that baseball bat, 'cos you were asking for it...

Re:What kind of moronic hosting ? (0)

Anonymous Coward | about 3 years ago | (#37580068)

"My reaction to discovering that there are bozos with web sites who don't have backups and trust others with their site security: Sorry, fellas, but I sure hope you enjoyed getting reamed with that baseball bat, 'cos you were asking for it... "

You mean like "the cloud"? :-P

Re:What kind of moronic hosting ? (1)

fractoid (1076465) | about 3 years ago | (#37583202)

Aye. That was my first thought: "How the fuck are you 'unable to recover' data these days?" Assuming that you're (a) a company that depends on data. (There is no (b)).

I mean seriously, if you're a web hosting company would you not back things up? Maybe they'd lose a day or a week worth of updates, but losing everything? Geez.

Kill the Fuckers (0)

Anonymous Coward | about 3 years ago | (#37578744)

The sooner these people are hunted down and fed a face full of lead, the better.

TCP/IP: single point of failure? (0)

Anonymous Coward | about 3 years ago | (#37578752)

I mean, if you can somehow hack TCP/IP itself, you've just rooted the entire Internet.

Re:TCP/IP: single point of failure? (1)

Oxford_Comma_Lover (1679530) | about 3 years ago | (#37579116)

WTF does that even mean?

TCP/IP is a protocol--actually two protocols, one over the other.

You can fake packets in it. So what? That doesn't automagically give you root on anything.

Now if there's actually enough of an error in the TCP/IP code to give you kernel control from there, sure, you've rooted only *half* of the internet (or whatever percentage run the same kernel code). But (1) that code is looked over once or twice by security people, (2) that code is such a headache that even with the source code, almost all crackers would prefer to find a much easier target to deal with, and (3) the last time I looked at the code, around 2004, the comments on it, on the Apple side, had not been updated since the mid-eighties, IIRC. Maybe the early nineties. (Although the code had changed.) Which doesn't exactly make it easier. And (4) such an error is unlikely to be found there to begin with. But not impossible.

TCP/IP Kernel code is kind of like Buckaroo Banzai doing neurosurgery. [To Jeff Goldblum]: "Don't tug on that, you never know what it might be attached to."

Re:TCP/IP: single point of failure? (1)

Jawnn (445279) | about 3 years ago | (#37579202)

Here it comes..., wait for it....
Whooosh!

Re:TCP/IP: single point of failure? (1)

Oxford_Comma_Lover (1679530) | about 3 years ago | (#37579366)

Well, perhaps I didn't see it that way because it wasn't analogous, and so took it as if serious. =)

A hosting provider is, on average, much more vulnerable than TCP/IP code.

unable to recover? (4, Insightful)

joshuac (53492) | about 3 years ago | (#37578766)

completely deleted more than 4,800 websites that the company was unable to recover

They host (at least) 4,800 websites yet they don't have a working backup system in place? Amazing.

Re:unable to recover? (1)

Anonymous Coward | about 3 years ago | (#37578810)

Most hosters have a "no guaranteed backup" policy. It isn't their data and they're not getting paid to operate an archive. Their job is to host web sites. You have to keep your own backups of your own data. Backups at hosting facilities are for convenience only, so that you can restore from a source that is close to the servers.

Re:unable to recover? (1)

MightyMartian (840721) | about 3 years ago | (#37579034)

Still, the other side of that is the Web host's obligation to keep your data secure. Yes, you should back up your own data, but neither does the hosting company simply have a right to leave its systems vulnerable to penetration. After all, deleting websites is one thing, but what about data theft?

Re:unable to recover? (2)

Oxford_Comma_Lover (1679530) | about 3 years ago | (#37579134)

Why would it need a "right to leave its systems vulnerable to penetration?" You could as easily say that "customers of hosting providers don't have a right to rely on the security of providers." You have to pick where to allocate responsibility. As the hosting provider usually writes the contract, guess where responsibility usually lies?

Re:unable to recover? (2)

guruevi (827432) | about 3 years ago | (#37578976)

You would be amazed at how much data gets lost daily because enterprises are unable to keep a working backup system.

I've worked for a web host and the more money they threw at their backup solution (this one is shiny, this one is integrated with your management platform, this one gives control to your customers, this one gives blowjobs, ...) the more unpredictable it got. Jobs would fail to complete and/or fail to notify; one tape would fail, and apparently the metadata for the whole backup was on that one tape. Or it would correctly back up everything but the restore would take a week (which was suicide for a lot of dotcoms).

I eventually wrote an rsync script to back up to a disk system and told the developers how to parse the logs and deploy it. Sure, it wasn't shiny, there was no such thing as bare metal restore, and it didn't have a lot of other features, but it f*in worked. I also learned that individual tapes are relatively expensive and about as reliable a medium as your average hard drive.
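
For reference, a minimal sketch of that kind of rsync-to-disk job (host names and paths here are made up):

  #!/bin/sh
  # nightly pull of customer docroots to the backup array; the log is what the devs grep
  BACKUP_ROOT=/backup/webhosts
  LOG=/var/log/webhost-backup.log
  for HOST in web01 web02 web03; do
      rsync -aH --numeric-ids -e ssh \
          "backup@${HOST}.example.com:/var/www/" \
          "${BACKUP_ROOT}/${HOST}/" >>"$LOG" 2>&1 \
          || echo "$(date) rsync from ${HOST} FAILED" >>"$LOG"
  done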

Re:unable to recover? (1)

icebike (68054) | about 3 years ago | (#37579342)

Rsync is not a backup.

Most often the cause of data loss is a brain fart or a fat finger on the part of the operator, and a backup system that syncs a mass deletion to another drive is not something you want to put a lot of trust in. Perfect copies of a corrupted database are equally useless.

Periodic full-image backups are essential, which is why the industry went to tape in the first place. Substituting disks for tape tempts people to take a cheap and dangerous way out.

Re:unable to recover? (1)

nabsltd (1313397) | about 3 years ago | (#37579626)

Rsync is not a backup.

rsync creates a copy, and is thus a backup [wikipedia.org] .

Add in domain knowledge (e.g., the database must be quiesced before the snapshot of the filesystem is taken, and you rsync from the snapshot), and with the "--backup" flag you can even keep multiple versions without duplication, which acts the same as a full/incremental tape rotation. This gives you an easy-to-maintain, reliable backup system. And, if you still need your crutch, you could back up the rsync copy to tape.

Periodic full-image backups are essential

That depends on the hosting company. If they let you load any OS, then maybe, but if they provide you with a clone of their standard image, then only the changes made by the customer need to be backed up...any "full-image" restore would start by restoring from the source image.
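
In case it helps, a rough sketch of the "--backup" idea mentioned above (paths are hypothetical): changed and deleted files get shuffled into a dated directory instead of being lost, so each run only stores the delta.

  # mirror the site, but keep today's overwritten/deleted files under a dated directory
  rsync -a --delete \
      --backup --backup-dir="/backup/site/changed-$(date +%F)" \
      /var/www/site/ /backup/site/current/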

Re:unable to recover? (2)

FreakyGreenLeaky (1536953) | about 3 years ago | (#37581598)

Nonsense. Incremental rsyncs with periodic full rsyncs are a perfect, more efficient, and faster backup system. Importantly, recovery is lightning fast compared to tape. Do some research: you can rsync only files which have changed, or everything. With a simple script on the backup machine, which uses redundant storage (or machines, some offsite), you can archive these snapshots by day, week, month, whatever.

We haven't used tape for over a decade and recovering a specific file with a specific timestamp or doing a mass recovery is just as simple - and quick.

Your post smacks of so many who have drunk The Backup Company (tm) koolaid and who refuse to believe that a better system exists. You're an IT manager, right? :D
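
For the curious, the day/week/month snapshot trick boils down to hard links; a bare-bones sketch (directory layout is just an example):

  # daily snapshot: unchanged files are hard-linked to yesterday's copy, so each day costs only the delta
  TODAY=/backup/snapshots/$(date +%F)
  YESTERDAY=/backup/snapshots/$(date -d yesterday +%F)
  rsync -a --delete --link-dest="$YESTERDAY" /var/www/ "$TODAY/"
  # weekly/monthly retention is then just deleting old dated directories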

Re:unable to recover? (1)

kmoser (1469707) | about 3 years ago | (#37582972)

Sometimes a "good enough" solution is better than a "perfect" theory.

Re:unable to recover? (1)

asdf7890 (1518587) | about 3 years ago | (#37588320)

Rsync is not a backup.

No, but it is a great tool for creating and maintaining both online and offline backups. With a little scripting you can create a very efficient (in terms of storage space and bandwidth) system of snapshots in your online backups. Large files such as massive database backups need slightly different handling than smaller objects, but rsync can help you with them too. And if you are backing up live database files you are doing it wrong. Any good RDBMS features the ability to produce a consistent backup without taking the database down or even pausing it -- the resulting file(s) are what you back up with your tool of choice.

The two problems you list, human error and lack of synchronisation, exist with any backup system that is not properly thought out, but full disc images are not necessary (no harm in taking them, though, if you can take the downtime of having everything read-only or simply off while the image is made). rsync can be used to maintain offline backups (which is essentially what your full disc images are) too, and you can synchronise everything on disk to get a clean backup without pausing the whole filesystem long enough for a full image to be taken (LVM snapshots are a useful method of achieving this). A common problem with online backups is that the live machines have write access to the backup services, so an attacker can kill or damage both at once, but with a little careful thought you can decouple your backups via an intermediary so that neither the live machine nor the backup machines can authenticate against each other directly (so as long as your snapshots go back far enough that a clean version still exists once the problem is noticed, the issue can't affect all your online backups). Snapshots going back far enough also protect you from the accidental-delete circumstance you mention (again, if you notice in time).

rsync on its own is not a backup system (it is a very efficient file copying tool) as no other single tool is, including those that make full disk images, but it can be used as the core component of a well designed backup arrangement.

While we are on one of my pet issues (backups): how often do you test those full disk images? A backup should not be considered a true backup unless it is tested regularly. You don't want to wait until you need the backup system to find that something has been going wrong for months undetected. Some sort of automated test is the goal if you can arrange it (tests that need manual intervention are prone to error, or to just not being done when people get busy), though make sure your tests report positives as well as negatives, instead of a system where you assume no news is good news (no news could simply mean the backup checking routine has failed to run).
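
To make the LVM snapshot point above concrete, a rough sketch (volume and host names are invented; assumes the data sits on an LVM volume with spare extents for the snapshot):

  # freeze a point-in-time view of the filesystem, back it up, then drop the snapshot
  lvcreate --size 5G --snapshot --name www-snap /dev/vg0/www
  mkdir -p /mnt/www-snap
  mount -o ro /dev/vg0/www-snap /mnt/www-snap
  rsync -a /mnt/www-snap/ backupbox:/backups/www/
  umount /mnt/www-snap
  lvremove -f /dev/vg0/www-snap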

Re:unable to recover? (1)

reasterling (1942300) | about 3 years ago | (#37579440)

You know, I have had great success in the past with bacula [bacula.org]. I think it still uses rsync for the actual transfer of files across the network. It maximizes disk space on the backup server by only storing one copy of any file. I am not entirely sure how it does it, but you can provide network backup for 50+ clients (this was my use case) with disk space of only about twice that of a single client, assuming all your clients are running the same OS.

Re:unable to recover? (1)

rev0lt (1950662) | about 3 years ago | (#37580754)

We use bacula as a virtual tape backup solution for several servers, and I love it. We run a weekly full backup, followed by daily incremental snapshots. That said, bacula has its shortcomings -- it is no disaster recovery solution, and it works on files and not at the filesystem level, so if you need to back up live data applications such as databases, either you create a dump first, which isn't practical to do daily on big databases, or you use a filesystem that supports snapshotting, back up from a snapshot, and hope that the on-disk data is application-consistent. Also, because of the file-based nature of the backup, a full backup job of a mail server can take several hours over a gigabit connection. For disaster recovery or full/incremental disk image backup of big datastores, there are a lot of (commercial) tools that provide continuous data protection -- think versioned filesystem replication, with minimal impact to running systems.

Re:unable to recover? (1)

Miamicanes (730264) | about 3 years ago | (#37579502)

Backing up remotely hosted web applications, even on a dedicated server you control, is amazingly hard to do. Let's start with MySQL. We back up our database daily using Zmanda. A few months ago, we had to restore it. It took THREE DAYS. Why? Because MySQL's backup and restore workflow has basically gone nowhere in 5 years, and hasn't scaled well to accommodate gigabyte-sized partitioned databases. From what I've read, the main problem is that it has no efficient bulk-insertion protocol. It inserts one record, updates the indices, picks its butt, grooms itself a bit, inserts the next record the same way, and continues for the next 47 million records (getting slower and slower each time). It works fine when you have a database with a few thousand rows, but completely goes down in flames when you try scaling it to really huge sizes. We literally ended up having to split our largest table into two -- one for semi-recent records, and one for really ancient ones, so in a disaster we could bring the semi-recent ones back up quickly, then let MySQL piss around for the next week restoring the other 80% at its usual glacial pace without having the entire app be down the whole time.

There's also the matter of sheer backup size. Suppose your server has about 100 gigabytes of data on the hard drive, 95% of which is related to your application. Tar it, download the tarball, and you've just burned through half of what Comcast will let you download for the entire month. Assuming, of course, that your cable modem doesn't crap out at some point before it's done and cause the whole thing to fail.

This is a problem nobody really seems to like talking about. Lots of Linux hacks exist and have been abundantly written about, but they all seem to have been written for an era when file size was measured in megabytes, or at least your largest file didn't exceed the amount of ram by more than 2-4 times. When you start talking about 16 gigabyte files, everything goes straight down the toilet.

Re:unable to recover? (1)

marcosdumay (620877) | about 3 years ago | (#37579628)

"Let's start with MySQL. We back up our database daily using Zmanda. A few months ago, we had to restore it. It took THREE DAYS."

Well, I can't say exactly what happens with MySQL, since I've never restored a MySQL database, but it could be a problem with Zmanda. (I haven't tried Zmanda either; as a rule I avoid products whose websites yell "The leader in XYZ".)

"Suppose your server has about 100 gigabytes of data on the hard drive, 95% of which is related to your application. Tar it, download the tarball, and you've just burned through half of what Comcast will let you download for the entire month."

That is why you do differential backups (on disk you just need a complete backup once, ever), and use the rsync protocol to get the files over the network.

If your service generates hundreds of new gigabytes every month, you should think about hosting it at your place.

Re:unable to recover? (1)

nabsltd (1313397) | about 3 years ago | (#37579670)

A few months ago, we had to restore it. It took THREE DAYS. Why? Because MySQL's backup and restore workflow has basically gone nowhere in 5 years, and hasn't scaled well to accommodate gigabyte-sized partitioned databases.

Use InnoDB and set "innodb_file_per_table" to "ON", and you can back up and restore database files without using SQL INSERT commands.

It's still going to take a while with 100GB of data, but it's just hard drive (and maybe network) speed and no CPU.
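
A rough sketch of checking and enabling that (database and table names are made up; on older servers existing tables stay in the shared ibdata file until they are rebuilt):

  # is per-table tablespace already on?
  mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
  # if not, set it in /etc/my.cnf under [mysqld]:  innodb_file_per_table = ON  (then restart mysqld)
  # rebuild an existing table so it gets its own .ibd file
  mysql mydb -e "ALTER TABLE big_table ENGINE=InnoDB;"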

Re:unable to recover? (1)

lgarner (694957) | about 3 years ago | (#37580300)

You can bitch about MySQL all you want (many do, I don't), but if it takes you three days you're using the wrong database or engine. If you're using a cheesy open-source CMS that requires MySQL, you're using the wrong CMS. And so forth. Of course, I'm sure that you have other nodes running your vital site so you're not offline that whole time.

Plus, your issue doesn't sound like it has anything to do with remote hosting, so it's not relevant here. What you describe will also occur on a local system.

Also, as I recall, you can disable keys during insertions. That might help.

But really, if you're hosting something like this on a shared host, and backing it up using your home cable Internet connection, it must not be too important anyway.
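
For what it's worth, the "disable keys" trick mentioned above looks roughly like this (MyISAM tables only; the database and table names are hypothetical):

  mysql mydb -e "ALTER TABLE big_table DISABLE KEYS;"
  mysql mydb < dump.sql          # bulk load without rebuilding non-unique indexes row by row
  mysql mydb -e "ALTER TABLE big_table ENABLE KEYS;"

mysqldump's --disable-keys option (part of the default --opt set) emits exactly these statements around each table's INSERTs.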

Re:unable to recover? (0)

Anonymous Coward | about 3 years ago | (#37582364)

There's a little more to this than you are telling. If it takes 3 days it might be because of a huge amount of data and not enough IO or cycles. It is a common problem where in normal use there are plenty of resources but during certain special case operations there just aren't. Database engine or restore mode may be of no use at all.

Though in this case it does look like user error regarding the software and method chosen. It is correct he could get better speed by tweaking options during insert and perhaps backing up the file system data rather than rows in SQL format. Replication is also extremely useful for disaster proofing a setup.

I don't think any storage system that stores hundreds of gigabytes is going to make it easy to backup or restore in a smooth and timely fashion without putting in some considerable effort to choose and configure the best approaches.

Re:unable to recover? (1)

Miamicanes (730264) | about 3 years ago | (#37585308)

I used MySQL because it's free, and because it's the database I grew up with. The problem is, the moment you move more than a baby step away from mysqldump, you can pretty much forget about good documentation and free software... and non-free in this context almost always means "several hundred or thousand dollars". As a matter of reflex, I pretty much quit reading the moment I see anything that requires commercial licensing, because I know I can't afford it. MySQL used to do a decent job of maintaining its community documentation, but ever since Oracle bought them, I don't think they've so much as edited a typo.

The truth is, the situation with free (as in beer, if not liberty) MySQL really has been kind of like a boiling frog for the past few years. There are a LOT of things that "just work" with MyISAM and small tables that will fall over and die (or at least present unanticipated complications) if you try to do the same thing with InnoDB partitioned tables, and the community documentation doesn't do a very good job of pointing them out or making them obvious. 95% of the time, the only place you'll even find those nasty complications and side effects documented is in a wiki footnote at the bottom of one of the manual pages in the "Partitioning" section.

Re:unable to recover? (1)

rev0lt (1950662) | about 3 years ago | (#37580798)

You could try using master/slave replication to do a binary backup -- synchronize the slave, then shut it down and copy the datafiles. The backup is somewhat version-dependent, but it is a lot quicker than running an SQL script. Also, you may want to have a look at CDP [wikipedia.org] tools.
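
A crude sketch of that slave-copy approach (stock Debian-ish paths; assumes a replication slave you can afford to stop for the duration):

  # on the slave: stop replication cleanly, copy the data files while mysqld is down, resume
  mysql -e "STOP SLAVE;"
  /etc/init.d/mysql stop
  rsync -a /var/lib/mysql/ /backup/mysql-$(date +%F)/
  /etc/init.d/mysql start
  mysql -e "START SLAVE;"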

Re:unable to recover? (1)

cduffy (652) | about 3 years ago | (#37583356)

Let's start with MySQL.

You could, but I'd rather start with PostgreSQL. As long as you have log archival, your backup process looks like this: Run pg_start_backup(), rsync the actual live database while it's still being written to, run pg_stop_backup(), done.

(Restore? Copy those files back, start up the database, and it replays operations from the archive logs to get back from the restored dump to the current point in time... or a point in time you specify, if you'd rather replay to, say, just before someone ran a UPDATE with a missing WHERE clause).
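
Roughly, for anyone who hasn't seen it (paths are illustrative; WAL archiving has to be configured already, per the PostgreSQL docs):

  # base backup of a running cluster
  psql -U postgres -c "SELECT pg_start_backup('nightly');"
  rsync -a --exclude pg_xlog /var/lib/pgsql/data/ backupbox:/backups/pgdata/
  psql -U postgres -c "SELECT pg_stop_backup();"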

Re:unable to recover? (1)

marcosdumay (620877) | about 3 years ago | (#37579588)

If you are backing it up to disk, take a look at rdiff-backup. It is quite similar to rsync, but creates a versioned backup so it won't propagate a corrupted database or deleted file to your backup (unless you tell it to do so).
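
For example (destination path made up):

  # each run stores a current mirror plus reverse diffs, so older versions stay restorable
  rdiff-backup /var/www/ /backup/www/
  # pull the tree back as it looked a week ago
  rdiff-backup --restore-as-of 7D /backup/www/ /tmp/www-last-week/
  # expire increments older than three months
  rdiff-backup --remove-older-than 3M /backup/www/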

Re:unable to recover? (1)

maxwell demon (590494) | about 3 years ago | (#37579676)

I've worked for a web host and the more money they threw at their backup solution (this one is shiny, this one is integrated with your management platform, this one gives control to your customers, this one gives blowjobs, ...)

Which one gives blowjobs? Is it suitable to be installed at home? :-)

Re:unable to recover? (1)

cavebison (1107959) | about 3 years ago | (#37581840)

http://www.smh.com.au/technology/security/4800-aussie-sites-evaporate-after-hack-20110621-1gd1h.html [smh.com.au]

"In assessing the situation, our greatest fears have been confirmed that not only was the production data erased during the attack, but also key backups, snapshots and other information that would allow us to reconstruct these servers from the remaining data."

Re:unable to recover? (1)

NSN A392-99-964-5927 (1559367) | about 3 years ago | (#37582102)

Have you ever tried doing a cron job with cloud computing? I am trying to stay clear of the evil cloud!

well.. (0)

Anonymous Coward | about 3 years ago | (#37578780)

If you're worried about your host provider leaving doors unlocked use www.firehost.com. They lock everything down to where the only thing that has the possibility of being hacked is your poor web code. At that point it IS your fault.

nuff said.

WARNING: The summary is WRONG! (1)

Anonymous Coward | about 3 years ago | (#37578792)

Okay. Let me start off by saying that I am a highly qualified individual with an online degree in chemical mathematics again. After reading the summary, I have not only come to the conclusion that it is incorrect, but that it is also not stargazer. It is actually pew pew along the lines of magazine.

Sorry I came to the garbage of this place and realized it.

Re:WARNING: The summary is WRONG! (2)

mrbester (200927) | about 3 years ago | (#37579108)

Don't you try to out-weird me. I get stranger things than you free with my breakfast cereal.

When services are concentrated... (1)

DRBivens (148931) | about 3 years ago | (#37578824)

This is an ongoing problem when services are concentrated under one roof: it gives potential attackers a much richer target, with many more juicy pieces of low-hanging fruit in a convenient, easy-to-hit area.

Cloud and remote-hosting services are not bad; in many cases they are a wonderfully effective deployment tool. Customers must be careful, though, to ensure their provider implements good security practices and that their backup solution truly allows for service recovery after a disaster.

Unfortunately, this information is rarely presented on the service's website or in their ad brochure.

Re:When services are concentrated... (1)

morcego (260031) | about 3 years ago | (#37579056)

Say what you will, but I refuse to use shared hosting.

All my stuff is hosted on dedicated (self-managed) servers.

When I see stuff that is made with the sole intent of "making things easier for the user", like Plex, CPanel, etc., I raise an eyebrow. I can't criticize Plex directly, having very little knowledge of its internals. But from the little I've seen of CPanel, they use some very customized, less than fully patched versions that make me unwilling to trust my sites to their product.

Also, shared hosting usually caters to the lowest common denominator and has lots of software installed that YOU don't need (because other people do or might). It is a security nightmare.

No, thank you. I will pay more but will keep my servers secure. VPSs are better than shared, but still more dangerous than dedicated.

 

Re:When services are concentrated... (0)

Anonymous Coward | about 3 years ago | (#37582498)

I also keep away from shared.

Shared hosting locks you in a cage: you end up configuring your application to cater to the host, where it would be better to adjust the system to cater to your application. Given how unconfigurable shared hosting can be, and the security cage it locks you into, this can mean spending months solving a problem rather than hours. You can also rest assured that as time goes by your host will become ever more outdated, to the point where you will find yourself in the outrageous situation of using MySQL 3 when everyone else is using MySQL 5. You will eventually have to move.

I hate what I like to call "Wizard Apps" (CPanel, etc) with a vengeance. While occasionally these may provide convenience, in the long run they enable users to do things without actually knowing what it is they are doing. Like shared hosting, they also lock you into a cage as it is very uncommon for such apps to give you the flexibility you will find in the shell. In my view, if someone only has the skills to do what they need to do almost exclusively via wizards when it comes to something such as setting up a server then they are handicapped, they shouldn't be working in that field.

This is a common problem I see daily with LAMP developers who can only use phpMyAdmin. I tell them to use the CLI. If they can't, then they are gone, for one simple reason: you have to deal with raw queries in PHP development. If you can't handle SQL in the CLI, how can you be expected to handle it properly in PHP? On top of that, why would any system administrator want to install and maintain yet another website (phpMyAdmin) for a worker who is completely useless without it but shouldn't need it at all? To add to that, these wizards are rarely bug-free and often require someone with low-level skills to spend a lot of time dealing with their inadequacies. The last time someone used phpMyAdmin I had to regularly take time out to help them with problems such as:

It has issues resolving the type of computed columns for display falling back to hex rather than showing a string.
The paging is extremely inefficient and causes an extremely slow query if you jump to a high page number with no warning to the user.
The dumps are often invalid and have problems with things such as putting statements in the correct order to deal with constraints.

I'm all for shaving a few hours or days off here or there using automation and GUI-based wizards, but many of these applications don't serve that purpose, or only give the illusion of serving that purpose. There is a place for wizards, for example the installation of a new system. If properly provisioned they can be great for freeing up competent system admins to work on less trivial tasks. Sadly, many of them are abused instead as a poor and inappropriate substitute for possessing the necessary skills to do the work that is required of developers and system administrators. Shared hosting is particularly notorious for fostering the improper use of automation and wizards. As shared often denies any access to lower-level operations, a ceiling is inflicted upon users, who may become trapped as they grow exclusively dependent on wizards. Any decent worker will not want to work on shared because it's like working with your hands tied behind your back.

I think it is worth pointing out that if you are, or have on hand, a competent Linux user, shared is actually not much more economical than dedicated.

The cheapest dedicated servers go for about 30 USD a month, which is pocket money. There is a good chance you will be able to get more out of that than you will from a shared host for the same price, and you'll probably have better options for scaling. You might find shared hosting for cheaper, but at the rate the cheapest dedicated servers go for, and with the flexibility you get, the saving probably isn't worth it.

In my analysis the biggest problem is that demand for competent and affordable system admins/developers far outstrips the supply. This situation has maintained a huge population of incompetent workers whose mistakes actually sometimes increase demand for competent workers. We see whole applications and methodologies put in place to cater to these individuals though often those creating these crutches are themselves crippled. We see competent workers having to clean up the mess made by incompetent workers.

In my experience during my computing career, I can comfortably confess that a significant portion of the other workers I have worked with or witnessed should have no place in the IT industry at all, and I find it easier to get rid of them than to spend weeks building applications that allow them to do what they could do in the shell in 30 seconds if they could be bothered to learn -- which itself should take much less time than it takes to create a wizard.

If you are only willing to go lowest price you should also only be willing to get lowest value.

Hackers _could have_? (1)

ToasterMonkey (467067) | about 3 years ago | (#37578868)

Is it necessary to point out that they could have done worse? The bank robber who could have murdered all the hostages and set fire to the bank is still a bank robber, and still a criminal.

What is the intent of writing things this way, to make us think they were doing us a favor?

Re:Hackers _could have_? (0)

Anonymous Coward | about 3 years ago | (#37578984)

I think you misunderstand. They are not pointing out how it could be worse, but rather that if the hackers' motives were different, we might not even know about it yet, if ever.

This is a major consideration at this level of security and the only sure approach is to assume your host is compromised.

Wild West (1)

elysiuan (762931) | about 3 years ago | (#37578942)

The hosting industry really has segmented itself along pricing lines. The overhead to start a small hosting business is so low that there are hundreds if not thousands of hosting 'companies' that offer a very mediocre product but can get by on providing for the cheap and the clueless.

When you see these types of operations with 'unlimited' resource plans starting at 2 or 3 bucks a month, is it any surprise that system security is not a core competency?

While not a universal truth, I've found you most often get what you pay for, especially as you leave the budget shared hosting segment and move towards VPS or dedicated offerings.

Re:Wild West (2)

fermion (181285) | about 3 years ago | (#37579006)

I do not think that the low price reflects poor security; I think it reflects uptime and general service. I have used services with low prices and the only issue was uptime. I suppose for very low price services there might be an issue with backups. I also suppose with very low prices there is going to be an issue of bandwidth and processing power.

As has been shown, even the high-end services are extremely vulnerable to attacks. No one seems to have that core competency, or at least be willing to pay for it.

Re:Wild West (0)

Anonymous Coward | about 3 years ago | (#37586162)

It seems that every web hosting company has their weaknesses, it doesn't matter if they are very cheap or premium. I'm with AVS Networks for web hosting who don't seem to have problems.

Single Targets (0)

Anonymous Coward | about 3 years ago | (#37578952)

"The incidents raise concern that hacker groups are bypassing single targets and hitting Web hosts directly, giving them access to tens of thousands of websites, rather than single targets."

I am concerned that this is happening, that these hackers are doing this, it really concerns me.

Re:Single Targets (0)

Anonymous Coward | about 3 years ago | (#37579608)

Moderate this guy +1 Concerned!

Server Execution is the Issue (5, Informative)

zx2c4 (716139) | about 3 years ago | (#37579000)

Most quality web hosting provides customers with shell access to the web server, or, in cases where it doesn't, something like PHP is usually installed that allows for arbitrary code execution.

On a web server that hosts a few thousand sites, using the Bing IP Search [bing.com] , you can find a list of all the domains. Usually there will be a lowest hanging fruit that's easy enough to pluck. Or, if you can't get shell access through a front-facing attack, you can always just sign up for an account with the hosting company yourself.

So once you have shell, then it's a matter of being a few steps ahead of the web host's kernel patching cycle. Most shared web hosting services don't utilize expensive services like ksplice and don't want to reboot their systems too often due to downtime concerns. So usually it's possible to pwn the kernel and get root with some script-kiddie-friendly exploit off exploit-db. And if not, no doubt some hacker collectives have repositories of unpatched 0-day properly weaponized exploits for most kernels. And even if they do keep their kernel up to date and strip out unused modules and the like, maybe they've failed to keep some [custom] userland suid executables up to date. Or perhaps their suid executables are fine, but their dynamic linker suffers from a flaw like the one Tavis found in 2010 [pastie.org] . And the list goes on and on -- "local privilege escalation" is a fun and well-known art that hackers have been at for years.

So the rest of the story should be pretty obvious... you get root and defeat selinux or whatever protections they probably don't even have running, and then you have access to their nfs shares of mounted websites, and you run some idiotic defacing script while brute-forcing their /etc/shadow yada yada yada.

The moral of the story is -- if you let strangers execute code on your box, be it via a proper shell or just via php's system() or passthru() or whatever, sooner or later if you're not at the very tip top of your game, you're going to get pwn'd.
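
One small mitigation on the PHP side, for whatever it's worth (the ini path below is the Debian-style layout and just an example; it does nothing for customers who are sold real shell accounts):

  # turn off the obvious execution primitives for shared PHP customers
  echo 'disable_functions = system,exec,shell_exec,passthru,popen,proc_open' \
      >> /etc/php5/apache2/conf.d/hardening.ini
  /etc/init.d/apache2 reload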

Re:Server Execution is the Issue (0)

Anonymous Coward | about 3 years ago | (#37579402)

One thing I wonder is because they usually have many computers with load-balancing and the like, wouldn't you be better off not defacing the website and just hiding your access and gathering as much information as you can from the users of the site while using the servers to brute force their own passwords?

VMs are cheap. Lawsuits are expensive (3, Insightful)

preaction (1526109) | about 3 years ago | (#37579030)

Every day someone comes into #httpd on freenode asking "How do I protect one user's site from another user's site when both are using PHP or CGI or whatever else?" and the answer is invariably "It will cost too much to bother."

If you are a business and you are taking in customer information, you should be held responsible when another user on that server actually figures out how much money that information is worth.

There is no excuse. A VM is about $20 a month. A DynDNS account is less. Shared hosting is for personal home pages, not businesses.

Re:VMs are cheap. Lawsuits are expensive (0)

Anonymous Coward | about 3 years ago | (#37579270)

If you're taking credit card information, you are held responsible. Using a shared web host, or even a VM, is against PCI DSS rules, and if anything happens (even if it's the webhost's fault), you are fined 200,000 USD (50k each from Visa, MC, Discover, and Amex) + you'll have to pay to have cards replaced, any losses from fraud, and to clean up the mess (auditors, etc). The fines go up if it happens a second or third time (400k)...

And I'll repeat, a VM won't do it. You need a minimum of 2 servers... 1 for the database, 1 running a hypervisor (xen/vmware) and a domU for each service you want to run (ie: 1 domU for postfix; 1 for httpd; etc).

And there are requirements for that too... firewalls; restricted and monitored access; IDS; etc.

You fail one requirement and get hacked, and you get the fines.

Re:VMs are cheap. Lawsuits are expensive (1)

Dj-Zer0 (576280) | about 3 years ago | (#37579740)

PCI rules apply to processors more than to the seller. Please think outside the box, that's not how it's done :p

Re:VMs are cheap. Lawsuits are expensive (0)

Anonymous Coward | about 3 years ago | (#37579786)

Wow... that is incredibly wrong... Here's the PCI site... start reading: https://www.pcisecuritystandards.org/index.php

And if that's what your company thinks of PCI, I would start polishing your resume:

Cost of Average Data Breach Climbs to $7.2 Million
http://www.pcworld.com/businesscenter/article/221582/cost_of_average_data_breach_climbs_to_72_million.html

Re:VMs are cheap. Lawsuits are expensive (1)

lgarner (694957) | about 3 years ago | (#37580338)

Not even close. PCI provides standards of how personal information is stored and handled; it doesn't give a rat's ass who's doing it.

Re:VMs are cheap. Lawsuits are expensive (0)

Anonymous Coward | about 3 years ago | (#37581672)

Easy: LXC and/or SELinux.

Re:VMs are cheap. Lawsuits are expensive (3, Informative)

ista (71787) | about 3 years ago | (#37582718)

Sorry, but VMs are just a different flavor of shared hosting, and your recommendation doesn't do any good. VMs, VPSes or dedicated servers hosted on a network operated by clueless network admins simply give you a new kind of insecurity. For example, when some other dedicated server is sending out spoofed ARP replies to take over your default gateway, your box is open to simple man-in-the-middle attacks.
And dedicated servers won't help if you're operating them with a clueless admin -- and exactly those are the ones who are asking such stuff in #httpd.

I've been working at a quite large web host for more than ten years now. When taking into account the ratio of shared vs. dedicated customers, I see a higher ratio of dedicated customers being hacked every day: the number of possible insecurities is simply higher.

With "classic" shared hosting, your host is running a single kernel and relies on unix permissions to separate sites from each other: a flaw in the kernel or when setting permissions will expose the host. Having proper permissions set is an easy task (just say no to "chmod 777"), so your cracker usually has to target the kernel, usually from a local user account (e.g some "hacked" website running year-old, insecure installs of Wordpress or something else).

With VMs, your host is running a single hypervisor and relies on that hypervisor to properly separate VMs from each other: so a flaw in that hypervisor or its configuration will give the cracker full access to every VM. A (security-wise) proper configuration isn't that obvious to many guys, so this is really an issue.
What's usually required: local user access to a single VM, usually by exploiting their outdated, insecure phpBB/whatever-install.
After that, just take a look at what kind of virtual hardware you're seeing, and e.g. start googling for "vmware exploit".
However, many VMs, VPS and dedicated servers are simply poorly administrated and both shared and dedicated websites poorly operated.

I've seen hundreds of shared hosting sites exploited within a single day via insecure, customer-installed scripts -- but none of those exploiters was ever able to take over our shared hosting environment. The reason is simple: our admins actually do care about their servers, care about their own reputation and take pride in what they do. We also develop our custom kernel patches in-house and manually check whether we actually need a newer kernel (which fixes old and may introduce new vulnerabilities) or whether we would rather just backport those patches to our own set of kernels. We're not only running "usually" hardened systems, but customers are granted access only to a specially hardened chroot environment with hand-selected suid binaries, paranoid logfile monitoring and custom kernel patches preventing and alerting on any non-whitelisted privilege escalation or any non-whitelisted uid-0 process (so far, those alerts have only been accidentally set off by interns doing their job in unexpected, not-whitelisted ways). Our systems also automatically trigger counteractions, like temporarily firewalling brute-force password cracking attempts against non-existent users and freezing strange-behaving processes on our servers. And once some notice of a possible vulnerability does come up, at least three admins in parallel investigate the issue and think about how to solve it.

Within years, the most publicity on our shared hosting security was due to some guy who used an insecure, customer php-script to replace a customer's index.html with some content like

#:~$ id
uid=0(root) gid=0(root) groups=0(root)

Of course, the permissions of index.html still belonged to the customer ... and the Apache logfile clearly showed a POST to the insecure php-script with the same timestamp as the one on index.html.

We're also offering dedicated servers -- which also run in a hardened environment, but still run "usual" Linux distributions and Windows installs.
Hardening takes place at a different layer: private VLANs (layer 2 access only to the DHCP/rescue server and default gateway, not to any other box on the same network), static ARP tables on routers and the DHCP server, switch port security and the like prevent dedicated servers from taking over other servers or our infrastructure. So in essence, the security of a dedicated server is solely up to the administrator of that server.

I also do see at least a dozen dedicated servers (both physical hardware and boxes using some virtualization layer like Xen/HyperV/vmware/openvz) being cracked every day; as I don't have access to those boxes, the actual numbers may be higher -- I can only guess from the number of boxes temporarily taken offline, e.g. due to outgoing DDoS traffic. We usually take down those machines, ask our users to investigate and, in case of signs of a break-in, to reinstall their box -- many don't care about that and try to "remove" rootkits manually without proper training in doing so. Many only restore their latest backups, and so expose the same vulnerabilities as before. Sometimes they accidentally restore the rootkits. And in some cases, customers do reimage their box and install the latest security fixes, but reinstall their four-year-old phpBB install, which has most likely been used by the cracker to gain user permissions.

A few times, support requests have been escalated to our admins -- the customer wanted us to take a look at their server, which had been hacked five times a month, and the customer didn't have a clue why this kept happening. When investigating, we usually see a lot of things that are humiliating to any serious admin: at least a dozen daemons running for no obvious reason (e.g. three backup softwares in parallel), user accounts with both insecure passwords and sudo permissions, logfiles turned off and no monitoring in place. Having "the latest security patches installed" isn't worth a dime when you're still running a five-year-old Linux distro that hasn't released security patches for at least two years now.

Hell, even in our own forums users recommend VMs or VPS to those who'd like to "try out Linux" and were not able to install Linux on their home hardware. It certainly frightens me a lot when someone recommends going for a dedicated server, a VM, a VPS or the like to manage on their own ("doesn't take any more than 5 minutes a month") when all they need is just some competently managed shared hosting webspace AND some thought about who will manage any applications installed within that webspace. In #httpd on freenode, you're seeing exactly those "admins" and not the admins of any serious web host.

So after all, there is no excuse for operating website content incompetently. If you tell incompetent website operators to also become clueless admins, this won't enhance security at all but will just create more VMs/VPSes/servers for those crackers out there to take over, host phishing sites and bot herds on, or use in DDoS networks.
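
(As an aside, the static ARP piece of the hardening described above is cheap to illustrate; the addresses here are documentation examples. The idea is to pin the gateway's MAC on a box so a neighbour's spoofed ARP replies can't redirect its traffic, and to do the mirror-image pinning on the router.)

  ip neigh replace 192.0.2.1 lladdr 00:16:3e:aa:bb:cc dev eth0 nud permanent
  # older toolchain equivalent: arp -s 192.0.2.1 00:16:3e:aa:bb:cc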

Re:VMs are cheap. Lawsuits are expensive (0)

Anonymous Coward | about 3 years ago | (#37583778)

It is not that difficult to protect. Here is one way to do it:
http://www.cloudlinux.com/docs/cagefs/
It was made specifically for shared hosts.

DERP? (0)

Anonymous Coward | about 3 years ago | (#37579204)

What fucking century is this story from?

insider information here (3, Interesting)

bsDaemon (87307) | about 3 years ago | (#37579314)

I was going to mod, but I decided to post instead. I used to work at one of the companies mentioned, and what I hear through my channels is kind of retarded. One of the so-called "admins", who really ought to have known better, set up a tunnel from a personal VPS to an internal machine which had no internet-accessible address -- just the tunnel. The VPS got popped and that gave them access to an internal machine which had SSH keys as root to every single VM node and shared hosting box, as well as every dedicated machine on which the customer didn't have root access.

All the VPS accounts were vulnerable, because the host nodes were compromised, so even if a VPS customer had root, they were vulnerable too. However, that was the kind of irresponsible, non-professional crap that I saw going on there, and it's why I left about 2 years ago: I assumed that the longer I stayed, the more likely it was to tarnish my reputation and ruin my career. Well, that and the fact they paid for shit and worked me like a slave tied to a shift bench on a factory floor. But then, I don't really know what anyone can expect; web hosting is pretty much the fast food of IT, and that's the level of talent that one can reasonably expect to retain for very long, or attract in the first place in most cases.

Somehow the VPS that I left hosted there didn't get whacked, though. I guess they just forgot about me.

Re:insider information here (1)

m85476585 (884822) | about 3 years ago | (#37581528)

Did you work for InMotion? I have a website there, and I'm wondering if I should move it. This is not the first incident I've had there. Anyone have a suggestion for a secure and affordable web host?

Re:insider information here (0)

Anonymous Coward | about 3 years ago | (#37582138)

I currently work at InMotion. The previous poster must have worked for one of the other hosts mentioned, because that's not what happened with us, nor has that ever happened here. It's also a much better place to work than he described, and our staff are all knowledgeable and talented. I say stick with us. You've got a good group of people working for you, and you can reach us any time. After what happened last Sunday, our systems are getting locked down pretty tight, even internally. Like this article says, this type of attack is not limited to just us. The difference is that our team of sysadmins is smart, and is working hard to prevent this from happening again.

Re:insider information here (1)

flydpnkrtn (114575) | about 3 years ago | (#37582974)

From a completely different source, I heard that the original description of the compromise, namely "One of the so-called 'admins', who really ought to have known better, set up a tunnel from a personal VPS to an internal machine which had no internet-accessible address -- just the tunnel," did in fact happen as described. Duplicated, shared SSH keys led to this massive compromise (here's a hint: don't do that. Build individual keys for individual servers, or at least build separate "groups" of keys for groups of servers, so that one compromise doesn't lead to hundreds of VPSs getting compromised).

I would say that either you're being misinformed, or you're spreading misinformation.
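
For the record, per-server keys are cheap to set up (host and file names here are made up):

  # one keypair per target box, so a single stolen key can't open the whole fleet
  ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_web01 -C "admin key for web01"
  ssh-copy-id -i ~/.ssh/id_rsa_web01.pub root@web01.example.com
  # and in ~/.ssh/config:
  #   Host web01.example.com
  #       IdentityFile ~/.ssh/id_rsa_web01
  #       IdentitiesOnly yes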

Aruba and Dreamhost (1)

Dynamoo (527749) | about 3 years ago | (#37579416)

I've seen mass compromises on Aruba in Italy and Dreamhost in the US too over recent weeks.

whats new? (1)

Dj-Zer0 (576280) | about 3 years ago | (#37579714)

Are you guys lacking original material? Did the script kiddie's g/f give the moderator a blowjob?

ONE exploit (0)

Anonymous Coward | about 3 years ago | (#37580548)

thats all i need to screw millions of websites
and it does work
has worked
and will continue to work

we are watching

Re:ONE exploit (0)

Anonymous Coward | about 3 years ago | (#37581834)

shut the fuck up

As long as they dont hit.... (1)

hesaigo999ca (786966) | about 3 years ago | (#37589194)

As long as they leave godaddy alone, we are all safe, we can all go back to work now, phew!

inmotion web hosting (0)

Anonymous Coward | about 3 years ago | (#37650508)

I don't know if inmotion hosting is good. I use ipage web hosting for few of my sites. and I got very good price. It is the cheapest price. get it here: cheapest ipage web hosting [cheaphostingchoice.com]
