Best Practices For Infrastructure Upgrade?

timothy posted more than 4 years ago | from the thinking-ahead dept.

Networking 264

An anonymous reader writes "I was put in charge of an aging IT infrastructure that needs a serious overhaul. Current services include the usual suspects, i.e. www, ftp, email, dns, firewall, DHCP — and some more. In most cases, each service runs on its own hardware, some of them for the last seven years straight. The machines still can (mostly) handle the load that ~150 people in multiple offices put on them, but there's hardly any fallback if any of the services die or an office is disconnected. Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same). Services running on virtualized servers hosted by a single reasonably-sized machine per office (plus one for testing and a spare) seem to recommend themselves. What's your experience with virtualization of services and implementing fallback/load-balancing schemes? What's best practice for an update like this? I'm interested in your success stories and anecdotes, but also pointers and (book) references. Thanks!"

Cloud Computing(TM) (-1)

gavron (1300111) | more than 4 years ago | (#30188798)

VMWare servers. Distributed SANs. Services spread over the cluster with full failover. Multiple connecting switches for iSCSI and the SAN controllers.

E

Re:Cloud Computing(TM) (0)

Anonymous Coward | more than 4 years ago | (#30188826)

Maybe the first question should really be: what's your budget?

Re:Cloud Computing(TM) (5, Insightful)

lukas84 (912874) | more than 4 years ago | (#30188874)

No, the budget question comes later.

The first questions are: What are your business's requirements for its IT infrastructure? How long can you do business without it? How fast does something need to be restored?

Starting from those requirements, you can work out designs that satisfy them. For example, if the requirement is that a machine must be operational within a week of a crash, you can build computers from random spare parts and hope that they'll work. If the requirement is that it should be up and running within two days, you will need to buy servers from a Tier 1 vendor like HP or IBM with appropriate service contracts. If the requirement is that everything must be up and running again within 4 hours, you'll need backups, clusters, site resilience, a replicated SAN, etc.
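To make that mapping concrete, here's a toy sketch in Python (purely illustrative; the thresholds are just the examples above, not any kind of standard):

    # Map a required recovery time objective (RTO, in hours) to a rough
    # class of design; thresholds mirror the examples in the paragraph above.
    def design_tier(rto_hours):
        if rto_hours >= 168:   # a week or more: spare parts plus a restore may do
            return "rebuild from spare parts, restore from backup"
        if rto_hours >= 48:    # two days: vendor hardware with a support contract
            return "Tier 1 vendor servers with service contracts"
        return "clusters, site resilience, replicated SAN"

    for rto in (168, 48, 4):
        print(f"{rto}h RTO -> {design_tier(rto)}")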

The question of Budget comes into play much later.

Re:Cloud Computing(TM) (2, Funny)

Anonymous Coward | more than 4 years ago | (#30188990)

I disagree. When you have a budget of $800 and some shoestrings, it eliminates a lot of questions ;)

Re:Cloud Computing(TM) (2, Insightful)

lukas84 (912874) | more than 4 years ago | (#30189098)

Yes, but (for example) management wanting a 24/7, two-hours-to-recovery SLA while having hired a single guy with a budget of $800 will not work - this is important to get sorted out early. Management needs to know what they want and what they'll get.

Re:Cloud Computing(TM) (3, Insightful)

mabhatter654 (561290) | more than 4 years ago | (#30189268)

Except of course that management ALREADY HAS that, because they've been very lucky for 7 years. Why spend money on what works? (Never mind that we can't upgrade or replace any of it because it's so old.)

I think what the article is really asking is: what's a good model to start all this? You're looking at one or two servers per location (or maybe even network appliances at remote sites). We read all this stuff on Slashdot and in the deluge of magazines and marketing material... where do we start to make it GO?

Re:Cloud Computing(TM) (3, Interesting)

lorenlal (164133) | more than 4 years ago | (#30189774)

I think what the article is really asking is what's a good model to start all this stuff. You're looking at one or two servers per location (or maybe even network appliances at remote sites).

I totally agree with your premise. In my experience, taking over something that appears to work (when you realize you've really just been lucky) requires some time to bring about the change that the business really needs.

Now, as for having two servers per location, that heavily depends on how those sites are connected. Are they using a dedicated line or a VPN? That's important since that'll affect what hardware needs to be located where. It's possible (even if unlikely) that some sites would only need a VPN appliance... But since the poster seems to want general advice:

VMWare ESXi is a pretty good starting place for getting going on virtualization. I've had a great experience with it for testing. When you feel like you've got a good handle on it, get the ESX licenses.

If a SAN isn't in your budget, I still recommend some sort of external storage for the critical stuff... preferably replicated to another site... but you can run the OS on local storage, especially in the early stages. You'll need to get everything onto external storage eventually, though, to implement VMotion and instant failover. Get a good feel for P2V conversion. It'll save you tons of time when it works... it doesn't always, but that's why you always test, test, and test.

As for the basic services you stated above (www, ftp, email, dns, firewall, dhcp):
Firewall (IMHO) is best done on an appliance, which should sit anywhere you have an internet connection coming in. I'm sure you knew that already, but I'm trying to be thorough.
Email is usually going to be on its own instance (guest, cluster, whatever)... but I find that including it in the virtualization strategy has worked out quite well. In fact, my experience with virtualization has been quite good except when there is a specific hardware requirement for an application (a custom card, or something like that). USB has been much less of a headache since VMWare added support for it, but there are also network-based USB adapters (example: USBAnywhere) that provide a port for guest OSes in case you don't use VMWare.

Re:Cloud Computing(TM) (1)

symbolset (646467) | more than 4 years ago | (#30190128)

where do we start to make it GO?

It can be helpful to engage an independent VAR. Not all, but some, offer presales assistance that includes needs assessment and design for free or at low cost. They do this with the hope that by demonstrating their technical prowess you will be more comfortable with buying from them, and in the hope that you'll engage their engineering teams for best-practice deployment consulting.

It sounds like the organization in the fine article doesn't have a lot of experience with this. Modern systems can be complex, and a single configuration error can lead to downtime, wide-open security, and more. Ask Slashdot is nice, but it's not a dialog with a certified professional with years of experience who's on your site and has spent some time understanding your network and needs.

Re:Cloud Computing(TM) (0)

Anonymous Coward | more than 4 years ago | (#30189506)

The tension between budget and business requirements can be useful, but it is largely a paper tiger. A budget without a business requirement is a recipe for failure. The budget can help you refine the requirement, but ultimately, if you cannot pay for what you require, you're not likely to be in business very long. Putting budget first is wasteful and likely to lead to a network that doesn't fit the needs of the business.

Understand the requirements first and plan to meet them. If there is extra budget, then consider adding more or better hardware and services. If there is not enough budget, and the requirements are firm, the network plan efficient, and the infrastructure has to be replaced all at once, then start looking for another job. Otherwise, plan for replacements over several years.

Re:Cloud Computing(TM) (1)

trevelyon (892253) | more than 4 years ago | (#30189166)

Wow, someone who really seems to know what they are talking about. Are you sure you meant to post here? I couldn't agree with you more: requirements come first (although I've often seen them get revised down during the budgeting phase).

Re:Cloud Computing(TM) (1)

mysidia (191772) | more than 4 years ago | (#30190494)

And what if you need it back up within 5 minutes, with no data loss (other than the data that never got created during the downtime)?

Re:Cloud Computing(TM) (1)

Foofoobar (318279) | more than 4 years ago | (#30189000)

Note that he did say VMWare on a cluster. I have an idiot at my office trying to do VMWare all on one server, failing to realize that this still leaves a single point of failure. If you are going to do virtualization, the real benefit only comes when you invest in a cluster; otherwise, don't do it at all.

Re:Cloud Computing(TM) (1)

lukas84 (912874) | more than 4 years ago | (#30189056)

A lot of Windows software can make virtualization a necessity, since running certain components on the same machine may create an unsupported configuration or a security hole. For example, a Terminal Server and a DC on the same machine is a security nightmare.

Another argument against (1)

omb (759389) | more than 4 years ago | (#30189512)

Windoze

Re:Another arguement against (1)

oatworm (969674) | more than 4 years ago | (#30190268)

To be fair, you probably shouldn't allow direct WAN-accessible SSH access to your Linux-driven OpenLDAP server, either. Allowing significant public access to applications hosted on the same box that all your user names and passwords are stored on (or replicated on, if you have more than one of said boxes lying around) might be a bit more secure on Linux than it is on Windows, but it doesn't mean it's a good idea.

Think of it this way - do you think it's a good idea to mix Gnome and directory services on the same box? Feeling a little uneasy right about now, right? Okay, how about allowing users remote access to said server with sufficient permissions that they could log in, launch an X Window session of some sort, and run OpenOffice on that server? Yeah, I wouldn't do it either if I could avoid it, which is the entire point of not mixing terminal servers and domain controllers.

Re:Cloud Computing(TM) (0)

Anonymous Coward | more than 4 years ago | (#30189152)

That's not true. Running as a VM guest makes it easy to move an image to another machine as time and budget allow. Just because you don't have a cluster right now doesn't mean it's stupid to go down that path.

Re:Cloud Computing(TM) (1)

pe1rxq (141710) | more than 4 years ago | (#30189222)

Also not completely true...

When your new cluster comes in and it is not the same architecture (e.g. UltraSPARC instead of your current x86 box), you're not going anywhere with your shiny VM.

You should make sure the application itself can be scaled, not the machine it is running on.
Sometimes that means using virtualization because the application is a bitch...
But a lot of applications can be scaled without virtualization.
The administrator who uses virtualization for his file server should be fired, because he is incompetent. The data itself can easily be moved from his old single-CPU box to the new SAN array.

Re:Cloud Computing(TM) (2, Insightful)

mabhatter654 (561290) | more than 4 years ago | (#30189540)

Why would you buy a cluster with a different architecture? You don't know what you're talking about. VMs generally aren't used to change architecture like that. In a virtualized cluster, the "OS" is just another data file! Just point an available CPU at your file server image on the SAN and start it back up... that's smart, not lazy!

Most people need virtualization because managing crappy old apps on old server OSes is a pain. The old busted apps are doing mission-critical work, customized to the point that the manufacturer won't support them, and management doesn't want to pay for the new version... or the new version doesn't support the old equipment. The leading purpose of VMs is to get new shiny hardware with a modern OS and backup methods, and to segregate your old hard-to-maintain configurations into their own instances. Then the old and busted doesn't crash the core services anymore. Instances that used to sit on dedicated, busted hardware requiring a call-out can be rebooted from your couch in your jammies! (I vote VNC on iPhone as the killer admin app!) VMs include backup at the VM level, so those old machines that refused to support backup can be backed up in spite of the software trying to prevent it.

Re:Cloud Computing(TM) (1)

mabhatter654 (561290) | more than 4 years ago | (#30189352)

Not really; you can split your VMs between 2-3 servers and do the migrations manually in the beginning. Once you make the virtual images, the hard work is done; even if you just run 2 images per server, you've saved money or increased reliability. Now that you have VMs, you can reinstall from backup tapes onto another configured server, so you have a start on disaster recovery. Once that part is done, it's a function of how much money you are allowed to throw at the solution (blades, clusters, SANs, etc.).

Latest Trends (1)

Lally Singh (3427) | more than 4 years ago | (#30188836)

I've been looking at HP c3000 chassis office-size blade servers, which may serve as your production+backup+testing setup and scale up moderately for what you need. Compact, easily manageable remotely, and if you're good about looking around, not terribly overpriced. Identical blades make a nice starting point for hosting identical VM images.

Why? (2, Informative)

John Hasler (414242) | more than 4 years ago | (#30188872)

Why virtual servers? If you are going to run multiple services on one machine (and that's fine if it can handle the load), just do it.

Re:Why? (4, Funny)

MeatBag PussRocket (1475317) | more than 4 years ago | (#30188982)

redundancy.

Re:Why? (2, Insightful)

John Hasler (414242) | more than 4 years ago | (#30189058)

> redundancy.

+5 Funny.

Re:Why? (2, Informative)

lukas84 (912874) | more than 4 years ago | (#30189060)

Virtualization does not automatically imply redundancy, and VM-level high availability will not protect you against application failures.

Re:Why? (1)

nurb432 (527695) | more than 4 years ago | (#30189160)

Virtual was my first thought too.

Just p2v his entire data center first, then work on 'upgrades' from there.

Re:Why? (2, Informative)

nabsltd (1313397) | more than 4 years ago | (#30189856)

Just p2v his entire data center first,

This brings to mind one other big advantage of VMs that help with uptime issues: fast reboots.

Some of those old systems might have to be administered following "Microsoft best practices" (reboot once a week just to be safe), and older hardware might have issues with that; plus, it's just slower. Add in the fact that VMs don't have to do many of the things physical hardware has to do (memory check, initialize the RAID, etc.), and you can reboot back to "everything running" in less than 30 seconds.

Although you never want to reboot if you can avoid it, this one factor gives you some serious advantages. If you have to apply a patch that requires a reboot, you can do so just by making sure the server isn't being used right then, and it's likely that people won't even notice. Of course, you don't do this until after you have done the same thing on the test server and know that the patch won't cause issues.

then work on 'upgrades' from there.

And the test environment is a big thing that VMs can provide to help those upgrades. Just p2v the system, then clone it to create the test version. Use snapshots and torture the test system as much as you want.
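To make the "patch while nobody notices" step above concrete, here's a minimal Python sketch (the thresholds are made up, and the actual reboot command is deliberately left commented out):

    import os
    import subprocess

    # Sketch: only proceed with a patch-and-reboot window when nobody is
    # logged in and the load average is low. Thresholds are arbitrary.
    def safe_to_reboot(max_load=0.5):
        who = subprocess.run(["who"], capture_output=True, text=True)
        logged_in = [l for l in who.stdout.splitlines() if l.strip()]
        load1, _, _ = os.getloadavg()
        return not logged_in and load1 < max_load

    if safe_to_reboot():
        print("Idle: apply the patch and reboot.")
        # subprocess.run(["shutdown", "-r", "now"])  # uncomment deliberately
    else:
        print("Server in use; postpone the reboot window.")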

I'd say (5, Informative)

pele (151312) | more than 4 years ago | (#30188876)

Don't touch anything if it's been up and running for the past 7 years. If you really must replicate, then get some more cheap boxes and replicate; it's cheaper and faster than virtual anything. If you must. But 150 users doesn't warrant anything, in my opinion. I'd rather invest in backup links (from different companies) between offices; you can bond them for extra throughput.

Re:I'd say (2, Insightful)

The -e**(i*pi) (1150927) | more than 4 years ago | (#30188986)

I doubt that, with only 150 people, they would want to spend the money to have a server at every office in case that office's link went down. I agree wholeheartedly that the level of redundancy talked about is overkill. Also, will WWW, mail, DNS, etc. even work if the line is cut, regardless of whether the server is in the building?

Re:I'd say (1)

hairyfeet (841228) | more than 4 years ago | (#30189902)

But since we are talking about SEVEN-year-old machines, he can actually just pick up some nice off-lease machines, save a ton o' cash, and end up with much better hardware than they are running now. Here is a ten pack [surpluscomputers.com] of dual-Xeon servers for $1200 shipped. With something like that he could set up 2 in each office (so he has failover), and at 2.4GHz they have enough power to run VMs, no problem.

With SMBs, IMHO, it is all about getting the best bang for the buck. They will typically keep machines longer than larger businesses do, so getting a decent amount of hardware now at a good price will help in the long run. With a good deal like this, he can even have a couple of spares set up and ready off-site in case of disaster recovery. Just load the latest image and off you go.

Dealing with plenty of SMBs over the years, I have found this to be the main issue: they simply don't have the budget for the latest and greatest, and frankly they don't need the latest either. I have bought plenty of off-lease gear from SurplusComputers and never had a bit of trouble. But for the setup he is talking about, he might even be able to get by with just one in each office if he has a seriously tight budget. Maybe something like this [surpluscomputers.com]. But he didn't say how many offices he has, nor how tight his budget is, so if he has more than a couple of branches to deal with, he'd probably be better off with the 10 pack.

Sure, we would all like new gear with nice support contracts to back it up, but in my experience most small companies just don't have the money, hence the 7-year-old gear still in use. Better to get decent off-lease stuff and have failover than to buy only a single new machine because the budget is too tight. And if they are still running 7-year-old gear, I'm betting his budget ain't great to start with.

Re:I'd say (1)

The -e**(i*pi) (1150927) | more than 4 years ago | (#30189968)

That is an awesome deal.
One thing to worry about with old P4-era stuff is how much power it will use. Each of those probably burns $15 or more each month in electricity, depending on where you live. It probably does not come out of your IT budget, but it still costs the company money.
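For anyone who wants to sanity-check that figure, the arithmetic is simple; a quick Python sketch (the wattage and electricity rate are assumptions, so plug in your own):

    watts = 175          # assumed average draw of an old dual-Xeon box
    hours = 24 * 30      # running around the clock for a month
    rate = 0.12          # assumed electricity price in USD per kWh
    kwh = watts * hours / 1000
    print(f"~${kwh * rate:.2f} per server per month")  # ~$15.12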

Re:I'd say (1)

oatworm (969674) | more than 4 years ago | (#30190318)

$150/month in a company with 150 employees is barely a rounding error, assuming it's even remotely profitable. That's $1/employee; assuming that this guy's in the US, they're standard-issue white collar drones, and he's not working in the Bay Area or anywhere else where salaries are distorted, they're probably each pulling in roughly $3000-4000 a month in salary alone, not including benefits or business payroll taxes.

Performance-per-watt becomes far more important when you're running a datacenter. When you're just getting an office network set up, who cares?

Re:I'd say (1)

onepoint (301486) | more than 4 years ago | (#30189946)

If it works, keep it running. You are correct in everything you point out. If anything, start first with a fully replicated system setup, then a proper backup. Next, test the new systems; backups never seem to work on the first try, so get the bugs worked out.

After this, I have no real idea what you need to do.

And the Key Factor is.... (0)

VonSkippy (892467) | more than 4 years ago | (#30188886)

Let's cut to the chase: how much MONEY do you have? It's all well and good to ask pie-in-the-sky questions, but then reality sets in and we find you can't afford it.

Why don't you start with what you CAN afford, and then go from there ('cause you know that's what your PHB and bean counters are going to tell you)?

Re:And the Key Factor is.... (2, Informative)

lukas84 (912874) | more than 4 years ago | (#30188946)

Again, wrong approach. Ask the higher-ups what kind of availability they want. The cost is derived from their wishes.

Think about the complexity of duplication (4, Insightful)

El Cubano (631386) | more than 4 years ago | (#30188904)

there's hardly any fallback if any of the services die or an office is disconnected. Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same).

Is that really necessary? I know that we all would like to have bullet-proof services. However, is the network service to the various offices so unreliable that it justifies the added complexity of instantiating services at every location? Or even introducing redundancy at each location? If you were talking about thousands or tens of thousands of users at each location, it might make sense just because you would have to distribute the load in some way.

What you need to do is evaluate your connectivity and its reliability. For example:

  • How reliable is the current connectivity?
  • If it is not reliable enough, how much would it cost over the long run to upgrade to a sufficiently reliable service?
  • If the connection goes down, how does it affect that office? (I.e., if the Internet is completely inaccessible, will having all those duplicated services at the remote office enable them to continue working as though nothing were wrong? If the service being out causes such a disruption that having duplicate services at the remote office doesn't help, then why bother?)
  • How much will it cost over the long run to add all that extra hardware, along with the burden of maintaining it and all the services running on it?

Once you answer at least those questions, then you have the information you need in order to make a sensible decision.
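To answer the first of those questions with data instead of gut feeling, even a crude probe script will do; here is a minimal Python sketch (the office names and gateway IPs are placeholders):

    import subprocess, time

    OFFICES = {"office-a": "10.1.0.1", "office-b": "10.2.0.1"}  # placeholders
    stats = {name: [0, 0] for name in OFFICES}  # [replies, probes]

    for _ in range(60):  # a one-hour sample here; run it for weeks in practice
        for name, ip in OFFICES.items():
            ok = subprocess.call(["ping", "-c", "1", "-W", "2", ip],
                                 stdout=subprocess.DEVNULL,
                                 stderr=subprocess.DEVNULL) == 0
            stats[name][0] += ok
            stats[name][1] += 1
        time.sleep(60)

    for name, (ok, total) in stats.items():
        print(f"{name}: {100.0 * ok / total:.1f}% of probes answered")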

Re:Think about the complexity of duplication (1)

CharlyFoxtrot (1607527) | more than 4 years ago | (#30189074)

Parent is right. KISS: keep it simple, stupid. There's a reason some of those servers have been running for 7 years straight. Don't make the error of overthinking it and planning for more than your organization needs (fun though that may be). You can overthink your way from a simple install to a Rube Goldberg machine.

balancing act (1)

TheSHAD0W (258774) | more than 4 years ago | (#30188924)

Beware of load balancing, because it will tempt you into getting too little capacity for mission-critical work. You need enough capacity to handle the entire load with multiple nodes down, or you will be courting a cascade failure. Load balancing is better than fallback, because you will be constantly testing all of the hardware and software setups and will discover problems before an emergency strikes; but do make sure you've got the overcapacity needed to take up the slack when bad things happen.
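The sizing rule described here is easy to make explicit; a small Python sketch (the numbers are invented for illustration):

    import math

    # How many nodes a balanced pool needs so that it still carries the
    # full load with `tolerate_failures` nodes down.
    def nodes_needed(total_load, per_node_capacity, tolerate_failures):
        survivors = math.ceil(total_load / per_node_capacity)
        return survivors + tolerate_failures

    # E.g. 900 req/s at peak, 250 req/s per node, surviving 2 failed nodes:
    print(nodes_needed(900, 250, 2))  # -> 6 (4 carry the load, 2 are headroom)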

Get someone experienced on the boat! (5, Insightful)

lukas84 (912874) | more than 4 years ago | (#30188934)

You know, you could've started with a bit more detail: what operating system are the servers running? What OS are the clients running? What level of service are you trying to achieve? How many people work in your shop? What's their level of expertise?

If you're asking this on Slashdot, it means you don't have enough experience with this yet, so my first piece of advice would be to get someone involved who does: a partner with people who have lots of experience and knowledge of the platform you work on. That way you'll have backup in case something goes south, and your network design will benefit from their experience.

As for other advice: make sure you get the requirements from the higher-ups in writing. Sometimes they have ridiculous ideas regarding the availability they want versus how much they're willing to pay for it.

Take your time (4, Insightful)

BooRadley (3956) | more than 4 years ago | (#30188970)

If you're like most IT managers, you probably have a budget. Which is probably wholly inadequate for immediately and elegantly solving your problems.

Look at your company's business, and how the different offices interact with each other, and with your customers. By just upgrading existing infrastructure, you may be putting some of the money and time where it's not needed, instead of just shutting down a service or migrating it to something more modern or easier to manage. Free is not always better, unless your time has no value.

Pick a few projects to help you get a handle on the things that need more planning, and try and put out any fires as quickly as possible, without committing to a long-term technology plan for remediation.

Your objective is to make the transition as boring as possible for the end users, except for the parts where things just start to work better.

Affordable SME Solution (2, Interesting)

foupfeiffer (1376067) | more than 4 years ago | (#30188994)

I am still in the process of upgrading a "legacy" infrastructure in a smaller office (fewer than 50 people), but I feel your pain.

First, it's not "tech sexy", but you've got to get the current infrastructure all written down (or typed up, but then you have to burn it to CD just in case your "upgrade" breaks everything).

You should also "interview" users (preferably by email, but sometimes if you need an answer you have to just call them, or... face to face, even...) to find out what services they use. You might be surprised to find something you didn't even know your dept was responsible for (oh, that Panasonic PBX that runs the whole phone system is in the locked closet they forgot to tell you about...).

Your next step is prioritizing what you actually need/want to do... Remember that you're in a business environment, so having redundant power supplies for the dedicated CD-burning computer may not actually improve your workplace (but yes, it might be cool to have an automated coffee maker that can run on solar power...).

So now that you know pretty much what you have and what you want to change...

Technology-wise, virtualization is definitely your answer... and there's a learning curve:
    VMWare is pretty nice and pretty expensive.
    VirtualBox (which I use) is free but doesn't have as many enterprise features (e.g., automatic failover).
    Xen with Remus or HA is the thinking man's setup.

All of the above will depend on reliable hardware; that means at least RAID 1. And yes, you can go with a SAN, but be aware that it's a level of complexity you might not need (for FTP, DNS, etc.).

Reading what you've listed as "services", it almost sounds like you want a single Linux VM running all of those things with Xen and Remus...

Good luck, and TEST IT before you deploy it as a production setup.

openVZ (3, Funny)

RiotingPacifist (1228016) | more than 4 years ago | (#30189002)

For services running on Linux, OpenVZ can be used as a jail with migration capabilities instead of a full-on VM.

DISCLAIMER: I don't have a job so I've read about this but not used it in a pro environment yet

Re:openVZ (1)

ckdake (577698) | more than 4 years ago | (#30189078)

I do have a job and I have used OpenVZ in a production environment :) Scrapped 2 machines running VMware ESX, put OpenVZ on them, and we can handle over 3x the number of Virtual Machines ("containers" in OpenVZ land) on the same hardware without paying the cost of VMware licenses. Highly recommended.
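
For anyone curious, day-to-day OpenVZ is just a handful of commands - a rough sketch, where the container ID, template and addresses are made up:

    # create a container from an OS template, give it an IP, start it
    vzctl create 101 --ostemplate debian-5.0-x86_64
    vzctl set 101 --ipadd 10.0.0.101 --hostname dns1 --save
    vzctl start 101
    # later, live-migrate it to a second host while its hardware gets serviced
    vzmigrate --online standby-host 101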

Don't do it (5, Insightful)

Anonymous Coward | more than 4 years ago | (#30189012)

Complexity is bad. I work in a department of similar size. Long long ago, things were simple. But then due to plans like yours, we ended up with quadruple replicated dns servers with automatic failover and load balancing, a mail system requiring 12 separate machines (double redundant machines at each of 4 stages: front end, queuing, mail delivery, and mail storage), a web system built from 6 interacting machines (caches, front end, back end, script server, etc.) plus redundancy for load balancing, plus automatic failover. You can guess what this is like: it sucks. The thing was a nightmare to maintain, very expensive, slow (mail traveling over 8 queues to get delivered), and impossible to debug when things go wrong.

It has taken more than a year, but we are slowly converging to a simple solution. 150 people do not need multiply redundant load balanced dns servers. One will do just fine, with a backup in case it fails. 150 people do not need 12+ machines to deliver mail. A small organization doesn't need a cluster to serve web pages.

My advice: go for simplicity. Measure your requirements ahead of time, so you know if you really need load balanced dns servers, etc. In all likelihood, you will find that you don't need nearly the capacity you think you do, and can make do with a much simpler, cheaper, easier to maintain, more robust, and faster setup. If you can call that making do, that is.

Google(tm) Cloud (1, Funny)

ickleberry (864871) | more than 4 years ago | (#30189016)

Outsource everything to "de cloud", because that way when everything fails spectacularly it isn't your fault.

Re:Google(tm) Cloud (2, Insightful)

jabithew (1340853) | more than 4 years ago | (#30189180)

It is if you recommended outsourcing everything to the cloud.

don't forget the network as well like the switche (0)

Joe The Dragon (967727) | more than 4 years ago | (#30189032)

Don't forget the network as well - the switches and maybe the cabling. Also, if you find any hubs, get rid of them ASAP.

Also, the servers should be linked to each other with gig-E.

Re:don't forget the network as well like the switc (1)

simon13 (121780) | more than 4 years ago | (#30189132)

Yeah, I thought this was obvious, but until a few weeks ago our head office (which I only visit occasionally) had been using a non-switched hub to connect about 10 PCs together, plus the internet router. Big face-palm!! As soon as I realised that I went out and bought a $25 switch to replace it. Suddenly their database didn't experience slowdowns anymore. Surprise!

Re:don't forget the network as well like the switc (1)

oatworm (969674) | more than 4 years ago | (#30190488)

Heh. I worked in a small office once where their backbone was a 24-port hub. Better yet, they were using thin clients for everything, so they were slamming that hub every single second of every single day. Once the hub was replaced, it was amazing how many of their "performance issues" disappeared...

Trying to make your mark, eh? (3, Insightful)

GuyFawkes (729054) | more than 4 years ago | (#30189076)

The system you have works solidly, and has worked solidly for seven years.

I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service.

Frankly, with the cost of modern hardware, you could triple the capacity of what you have now just by gradually swapping out for newer hardware over the next few months, and keeping the shite old boxen for fallback.

Virtualisation is, IMHO, *totally* inappropriate for 99% of cases where it is used, ditto *cloud* computing.

It sounds to me like you are more interested in making your own mark, than actually taking an objective view. I may of course be wrong, but usually that is the case in stories like this.

In my experience, everyone who tries to make their own mark actually degrades a system, and simply discounts the ways that they have degraded it as being "obsolete" or "no longer applicable"

Frankly, based on your post alone, I'd sack you on the spot, because you sound like the biggest threat to the system to come along in seven years.

These are NOT your computers, if you want a system just so, build it yourself with your own money in your own home.

This advice / opinion is of course worth exactly what it cost.

Apologies in advance if I have misconstrued your approach. (but I doubt that I have)

YMMV.

Re:Trying to make your mark, eh? (0)

Anonymous Coward | more than 4 years ago | (#30189238)

My thoughts exactly. The reason IT is a lousy job is that, unless you're going to deliver a fantastic improvement in features that will overjoy the customer, or resolve a drastic problem with reliability, anything other than complete invisibility is quite undesirable. After 7 years everyone is probably used to how things work, and if you change anything and it doesn't work, you're going to be to blame.

While you may streamline and improve things that people don't see, they won't care even a little bit if it throws a rod and stops working.

Slowly and quietly replace the hardware that's too old to be reliable, and where reasonable, change or upgrade software to improve reliability and fault tolerance.

Revolutionary change may result in a promotion and raise. Odds are it'll lead to lots of late nights, lots of stress, and a lot of angry users.

Re:Trying to make your mark, eh? (4, Interesting)

bertok (226922) | more than 4 years ago | (#30189250)

I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service.

...

Virtualisation is, IMHO, *totally* inappropriate for 99% of cases where it is used, ditto *cloud* computing.

I totally disagree.

Look at some of the services he listed: DNS and DHCP.

You literally can't buy a server these days with less than 2 cores, and getting less than 4 is a challenge. That kind of computing power is overkill for such basic services, so it makes perfect sense to partition a single high-powered box to better utilize it. There is no need to give up redundancy either: you can buy two boxes and have every key service duplicated between them. Buying two boxes per service, on the other hand, is insane, especially for services like DHCP, which in an environment like that might have to respond to a packet once an hour.

Even the other listed services probably cause negligible load. Most web servers sit there at 0.1% load most of the time, ditto with ftp, which tends to see only sporadic use.

I think you'll find that the exact opposite of your quote is true: for 99% of corporate environments where virtualization is used, it is appropriate. In fact, it's under-used. Most places could save a lot of money by virtualizing more.

I'm guessing you work for an organization where money grows on trees, and you can 'design' whatever the hell you want, and you get the budget for it, no matter how wasteful, right?

Re:Trying to make your mark, eh? (3, Interesting)

GuyFawkes (729054) | more than 4 years ago | (#30189276)

Get real - for 150 users a WRT54 will do DNS etc....

Want a bit more poke? VIA EPIA + small flash disk.

"buy a server".. jeez, you work for IBM sales dept?

Re:Trying to make your mark, eh? (2, Funny)

dbIII (701233) | more than 4 years ago | (#30189940)

There are two ways of looking at these things.
To me, a room full of dedicated machines each running a single simple thing - the 1990s approach of replacing one server with a dozen cheap, underpowered Windows boxes - screams "a dozen vulnerable points of critical failure".
Even MS Windows has progressed to the point where you don't need a single machine per service anymore in a light duty situation. Machines are going to fail; you may be lucky and it could be after they have served their time and been sold off, but fans, power supplies or a pile of other components will fail someday and stop the machine delivering its service. A couple of half-decent machines with redundant power supplies, giving you the option to have all of your services back within a decent timeframe if one goes down, is a far better option than a pile of critical points of failure depending on the reliability of $5 fans.
Such things are cheaper now than a roomful of crap boxes.
Now if I was the story submitter I'd put together a plan to have a box or two that can take over any of those required services at short notice. Someday something will break, and it's better to have a box ready or a plan you can read at 2am instead of bumbling through. Of course, GuyFawkes would fire me for that, while if he was doing it his way I'd simply try to talk him out of his NT3.51 philosophy. Where is he going to buy a WRT54 at 2am on a Sunday morning in 2015 anyway?

Re:Trying to make your mark, eh? (2, Insightful)

bertok (226922) | more than 4 years ago | (#30190232)

Get real, for 150 users at WRT54 will do DNS etc....

Want a bit more poke, VIA EPIA + small flash disk.

"buy a server".. jeez, you work for IBM sales dept?

I'm responding to your comment:

I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service.

I recommended at least two boxes, for redundancy. He may need more, depending on load.

For a 150 user organization, that's nothing - most such organisations are running a dozen servers or more, which is in fact what the original poster said. With virtualization, he'd be reducing his costs.

One per service is insane, which is what you said. If you wanted dedicated boxes for each service AND some redundancy, that's TWO per service!

Backpedaling and pretending that a WRT54 can somehow host all of the services required by a 150 user organization is doubly insane.

Re:Trying to make your mark, eh? (2, Insightful)

pe1rxq (141710) | more than 4 years ago | (#30189312)

Is it so hard to not mix up dhcpd.conf and named.conf? Do you need virtualization for that?

Let me give you a hint: YOU DON'T

Re:Trying to make your mark, eh? (1)

dbIII (701233) | more than 4 years ago | (#30190020)

Years ago the Microsoft DNS implementation had a very nasty memory leak and used a lot of CPU - for small sites you really did need a dedicated DNS machine, and to reboot it once a week.
I think that's why people are still thinking about putting it in a virtual box so it can't eat all the resources, even for a pile of trivial services that a SPARCstation 5 could handle at low load.

Re:Trying to make your mark, eh? (2, Interesting)

bertok (226922) | more than 4 years ago | (#30190154)

Years ago the Microsoft DNS implementation had a very nasty memory leak and used a lot of cpu - you really did need a dedicated DNS machine for small sites and to reboot it once a week.
I think that's why people are still thinking about putting it in a virtual box so it can't eat all the resources, even for a pile of trivial services that a sparcstation 5 could handle at low load.

In practice, everyone just builds two domain controllers, where each one runs Active Directory, DNS, DHCP, WINS, and maybe a few other related minor services like a certificate authority, PXE boot, and the DFS root.

I haven't seen any significant interoperability problems with that setup anywhere for many years.

Still, virtualization has its place, because services like AD have special disaster recovery requirements. It's a huge mistake to put AD on the same OS instance as a file server or a database, because they need to be recovered completely differently. The last thing you want to be doing during a restore is juggling conflicting restore methods and requirements!

Re:Trying to make your mark, eh? (0)

Anonymous Coward | more than 4 years ago | (#30189448)

Poppycock. You can buy small form factor single-core PCs for under $200, or even a refurbished 3-4 year old server box for close to the same price. Depending on the environmental and space considerations, you can pick the platforms to suit and keep the costs minimal. Shoot, even a $200 netbook would have more CPU power and storage than most 7-year-old computers, generate little or no heat, and demand a fraction of the power. If this guy is smart, he can cut electrical costs and cooling costs substantially without changing a perfectly functional architecture.

What doesn't make sense is grossly overcomplicating things by trying to shove too much into some large-scale platform and then further complicate it with a virtualization layer. We gave up mainframes for a reason, and thin clients/fat servers didn't work out either.

Sure, it's cool and technically challenging. What's the business reason/driver for going the cool/challenging route again?

If the OP decides to quit 2 months after implementing his super cool setup because the job after that is completely boring, who can come in and grasp what he's set up and maintain/upgrade it? Another finicky tech guru that wants to play with the stuff on the job and gets bored and walks off a couple of months later?

Re:Trying to make your mark, eh? (1)

bertok (226922) | more than 4 years ago | (#30190126)

Poppycock. You can buy small form factor single-core PCs for under $200, or even a refurbished 3-4 year old server box for close to the same price. Depending on the environmental and space considerations, you can pick the platforms to suit and keep the costs minimal. Shoot, even a $200 netbook would have more CPU power and storage than most 7-year-old computers, generate little or no heat, and demand a fraction of the power. If this guy is smart, he can cut electrical costs and cooling costs substantially without changing a perfectly functional architecture.

What doesn't make sense is grossly overcomplicating things by trying to shove too much into some large-scale platform and then further complicate it with a virtualization layer. We gave up mainframes for a reason, and thin clients/fat servers didn't work out either.

Sure, it's cool and technically challenging. What's the business reason/driver for going the cool/challenging route again?

If the OP decides to quit 2 months after implementing his super cool setup because the job after that is completely boring, who can come in and grasp what he's set up and maintain/upgrade it? Another finicky tech guru that wants to play with the stuff on the job and gets bored and walks off a couple of months later?

$200 machine = no RAID, no ECC memory, no hardware monitoring, no support for server OSes - not to mention that most netbooks can't run 64-bit, which means the latest Windows Server is Just Not An Option.

Good advice! Let's run ALL of our business-critical functions off laptops just to avoid learning about new technology! Let's all run on mixed hardware and have to deal with drivers from fifty vendors!

You really don't understand what virtualization provides, so maybe you should read up on it a little bit before you go spouting off.

It's not hard, it's not "massively complex" unless you go out of your way to do it wrong, and it has nothing to do with mainframes or thin clients.

Virtualization isn't some "super cool" buzzword technology, it's a money saver. It reduces costs massively. It makes hardware maintenance an order of magnitude cheaper and safer. There's a reason everyone is switching to it.

If you can't keep up with technology, you shouldn't be in IT.

Re:Trying to make your mark, eh? (1)

rantingkitten (938138) | more than 4 years ago | (#30190206)

Why does he need virtualisation for most of that? Just run multiple services on a single machine. It's not like DHCP and DNS are all that resource-intensive -- put both services on a machine, configure them, and start them. What's the advantage of virtualising that? Sounds like a lot of unnecessary overhead to me.

Depending on how heavy the load is, that same machine could probably handle postfix, apache, and some kinda ftp server too. That's more or less what you said anyway, but I don't get why you think it requires virtualisation. If a service starts misbehaving you just restart that service instead of rebooting the virtual machine.

Although, for 150 people, a WRT router running non-crap firmware (e.g., DD-WRT or Tomato) would probably suffice for DNS and DHCP. There's a practically off-the-shelf solution for fifty bucks instead of mucking around with higher-end hardware and virtual machines.

Re:Trying to make your mark, eh? (1)

Robert Larson (1451741) | more than 4 years ago | (#30190364)

I'd tend to agree here. Buy a couple of blades. Implement vSphere with DRS and HA, and possibly FT. Centralize all these core services; HA/FT will provide the fault tolerance at the core. Then spend on beefing up redundant network links for remote sites and/or network capacity as needed. Simplify, simplify. Minimize the number of VMs providing core services. Put as much as you can into a cloud.

Re:Trying to make your mark, eh? (1)

DaMattster (977781) | more than 4 years ago | (#30189948)

Having a separate box for each service is not necessarily a good idea. It is energy inefficient and you have a lot of wasted computing resources. That said, virtualization that has been done with little thought or planning is a disaster waiting to happen. I, for one, would use Citrix XenServer. Smaller services such as DNS, DHCP, and FTP can be collapsed onto a virtualization server, with one core dedicated to each service. If you are adventurous, you could use that same box for routing using OpenBSD. This makes much better use of a multicore server. More critical services such as WWW and e-mail are best left on their own servers. A balance of techniques works better than an either-or approach.

Re:Trying to make your mark, eh? (1)

AF_Cheddar_Head (1186601) | more than 4 years ago | (#30190294)

This guy has it right.

I do this kind of thing for a living, upgrading small military sites that support 50-100 users. Most of these sites haven't seen new hardware for several years and have a stand-alone AD. We provide new hardware and bring them into an integrated AD.

Start adding up the costs of VMware - I know ESXi is free, but you very quickly need/want the management tools of vSphere, and they ain't cheap - and it is significantly cheaper to use non-virtual boxes combining compatible services.

2-4 servers and a small EqualLogic SAN can go a long way towards providing what you need. Less than 50K in hardware and software licenses.

Depending on connectivity and redundancy requirements: a DC at each site also providing internal DNS, DHCP and WINS (UGH!!), a mail server with a mail relay at the central office, and a file-and-print server should do it. Add a VPN appliance (Cisco ASA 5510) to put it all behind a firewall at corporate.

I provide a bit more redundancy and security for the military sites but that's the basics.

Re:Trying to make your mark, eh? (1)

h4rr4r (612664) | more than 4 years ago | (#30190478)

Or use something other than Vmware.

KVM + libvirt + virt-manager will most likely be fine for what you describe.
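
If you go that route, creating a guest is pretty much a one-liner with virt-install - a sketch, with the name, paths and sizes made up:

    virt-install --connect qemu:///system \
      --name dns1 --ram 512 --vcpus 1 \
      --disk path=/var/lib/libvirt/images/dns1.img,size=8 \
      --network bridge=br0 \
      --cdrom /isos/debian-netinst.iso

After that, virt-manager gives you the console and the start/stop buttons.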

What 150 users? (5, Insightful)

painehope (580569) | more than 4 years ago | (#30189090)

I'd say that everyone has mentioned the big-picture points already, except for one: what kind of users?

150 file clerks or accountants and you'll spend more time worrying about the printer that the CIO's secretary just had to have - which conveniently doesn't have reliable drivers or documentation, even if it has that neat feature she wanted and now can't use.

150 programmers can put a mild to heavy load on your infrastructure, depending on what kind of software they're developing and testing (more a function of what kind of environment are they coding for and how much gear they need to test it).

150 programmers and processors of data (financial, medical, geophysical, whatever) can put an extreme load on your infrastructure. Like to the point where it's easier to ship tape media internationally than fuck around with a stable interoffice file transfer solution (I've seen it as a common practice - "hey, you're going to the XYZ office, we're sending a crate of tapes along with you so you can load it onto their fileservers").

Define your environment, then you know your requirements, find the solutions that meet those requirements, then try to get a PO for it. Have fun.

P2V and consolidate (4, Interesting)

snsh (968808) | more than 4 years ago | (#30189092)

The low-budget solution: buy one server (a PowerEdge 2970, say) with 16GB RAM, a combination of 15k and 7.2k RAID 1 arrays, and 4hr support. Install a free hypervisor like VMware Server or Xen, and P2V your oldest hardware onto it. Later on you can spend $$$$$ on clustering, HA, SANs, and clouds. But P2V of your old hardware onto new hardware is a cost-effective way to start.

Re:P2V and consolidate (0)

Anonymous Coward | more than 4 years ago | (#30189212)

Wow! A completely sensible reply, on Slashdot.

This is exactly the route to go, though I'd go with the T- or R710's.

You've got to watch out for the I/O load, though. That will kill you long before memory or CPU will. 2 CPUs x 4 cores gives a tremendous amount of power, especially against 7-year-old hardware.

Do RAID 10. You might want to look at OpenFiler to use as an iSCSI target for backups.

Re:P2V and consolidate (0)

Anonymous Coward | more than 4 years ago | (#30189916)

Leave it to some asshole on slashdot to recommend server models, ram, and hard drive speed (!) without understanding a damn thing about anything.

Re:P2V and consolidate (1)

AF_Cheddar_Head (1186601) | more than 4 years ago | (#30190332)

Yeah, go ahead and price P2V capability in VMware - last I checked, it wasn't in the free ESXi version.

Oh, by the way: make sure your hardware has virtualization support built in, or a 64-bit OS in the VM is out of the question.

Implementing virtualization in a production environment is not as easy or cheap as a lot of people seem to think.

I have implemented it, and I don't think it's the right choice for a small one-man operation. For a large data center, absolutely - but not for the small branch office. It's expensive, especially if you need hardware-level redundancy.

Upgrade vs Overhaul? (1)

turtleshadow (180842) | more than 4 years ago | (#30189144)

Really, what you're being unspecific about is the difference between an upgrade and an overhaul.

From the floor up (power, cooling, cabling, footprint) is an overhaul.
If you want a phased approach or some other piecemeal approach, you still have to consider each piece a small overhaul within a larger system.

7-year-old equipment is likely not going to be cascaded, so really you're considering it as a candidate for a heart transplant - which means building some sort of life support while the new system (heart) is brought online in parallel. This is very expensive in time, budget, and resources.

You're really going to need to know your business's processes over the course of more than a "business year" in order to do everything without problems.

Business moments like tax time, EOY reports, monthly invoicing periods, HR/payroll are to be expected and must still function.
Unpredictables like supporting business audits (like having to pull up old records, on systems that no longer read them?) and changes in executive leadership would also impact an upgrade/overhaul.

At no time did you ever mention a disaster recovery plan, a regular offsite backup strategy, or a business continuity plan. These are often overlooked or dealt with inappropriately during normal business times and should be verified prior to beginning. A major overhaul or upgrade could - or ought to - trigger any one of these at any moment.

I have been there, and I have been there when everyone in the room craps in their pants when the tapes have been found to be lost or unreadable or blank.

Real question (0, Troll)

Sepiraph (1162995) | more than 4 years ago | (#30189150)

How did you get put in charge of such a project when it is obvious that you have no clue on carrying out the tasks?

Simple and straightforward = complex (4, Insightful)

sphealey (2855) | more than 4 years ago | (#30189168)

So let's see if I understand: you want to take a simple, straightforward, easy-to-understand architecture with no single points of failure, that would be very easy to recover in the event of a problem and extremely easy to recreate at a different site in a few hours in the event of a disaster, and replace it with a vastly more complex system that uses tons of shiny new buzzwords. All to serve 150 end users, for whom you have quantified no complaints related to the architecture other than that it might need to be sped up a bit (or perhaps given a GUI interface for the ftp server, etc).

This should turn out well.

sPh

As far as "distributed redundant system", strongly suggested you read Moans Nogood's essay "You Don't Need High Availability [blogspot.com] " and think very deeply about it before proceeding.

Don't forget hosting (1)

Jon.Burgin (1136665) | more than 4 years ago | (#30189172)

Why have the headaches? Why not have it hosted - companies like Rackspace make it so easy and simple. You can also use their cloud services: real cheap, with a server set up in less than 5 minutes, and you only pay for the memory and bandwidth you need. Need more? Just a few mouse clicks away.

Confucius Say.. (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30189174)

..if it aint broke..

Re:Confucius Say.. (1)

mabhatter654 (561290) | more than 4 years ago | (#30189714)

fix it till it's broke!

Check virtual load balancers (0)

Anonymous Coward | more than 4 years ago | (#30189370)

If you're considering virtualisation and high availability, check with a vendor like Zeus (www.zeus.com) to get a software version of a load balancer (both local and global) that can run in a virtual environment.

Maybe this is really a uni project (3, Interesting)

natd (723818) | more than 4 years ago | (#30189376)

What I see going on here, as others have touched on, is someone who doesn't realise that he's dealing with a small environment, even by my (Australian) standards where I'm frequently in awe of the kinds of scale that the US and Europe consider commonplace.

If the current system has been acceptable for 7 years, I'm guessing the users' needs aren't something so mindbogglingly critical that risk must be removed at any cost. Equally, if that were the case, the business would be either bringing in an experienced team or writing a blank cheque to an external party - not giving it to the guy who changes passwords and has spent the last week putting together a jigsaw of every enterprise option out there, and getting an "n+1" tattoo inside his eyelids.

Finally, 7 years isn't exactly old. We've got a subsidiary company of just that size (150 users, 10 branches) running on ProLiant 1600/2500/5500 gear (i.e. '90s), which we consider capable for the job; it runs Oracle 8, Citrix MF, plus a dozen or so more apps, with the users on current hardware. We have the occasional hardware fault, which a maintenance provider can address same-day and bill us at ad-hoc rates, yet we still see only a couple of thousand dollars a year in maintenance - leaving us content that this old junk is still appropriate no matter which way we look at it.

The fundamental mistake you're making... (1)

machinegestalt (452055) | more than 4 years ago | (#30189458)

From your post, you're not looking at this from the right perspective, not asking the right questions, nor asking them of the right people. You state that you have been put in charge of "maintaining" and never once mention anything about your company's predicted growth, development plans, future computation needs, near and long term service offerings, uptime requirements, security requirements and so forth. You have to do a requirements analysis that extends five to ten years out and design a system that can grow seamlessly with your employer, meeting their current and expected needs in all pertinent areas.

If you can develop a system that does what is required on paper, the next step is to implement it in parallel with the existing system, and transition services and users over in phases. After all services have been transitioned, you can decommission the old infrastructure piece by piece.

Re:The fundamental mistake you're making... (1)

dbIII (701233) | more than 4 years ago | (#30190156)

Personally, I'd see planning for redundancy or replacement as a good exercise to see how things really do run instead of how they are supposed to run. Even if no hardware or software is actually deployed, it means no nasty surprises when a particular box does go down.

A possibly helpful response (0)

Anonymous Coward | more than 4 years ago | (#30189496)

I'm a systems admin at a small college with about 1000 desktop machines in the buildings. We were a strictly Sun/Solaris shop for a long time, but in the last couple of years we've invested in some 1U dual-processor Xeon boxes. These run Ubuntu Server and Xen. We're in the process of moving services from physical Solaris servers to virtual Xen servers. Two x86 servers can basically replace our old 16-server Sun rack. We'll likely keep our storage array around for a while, but so far LDAP, email, and web services have been migrated. DHCP and DNS could easily be migrated, and if you buy 2U servers with enough large hard drives, a separate storage array probably wouldn't be necessary.

One Box Per Service (1)

KalvinB (205500) | more than 4 years ago | (#30189550)

Unless you have power problems or financial restrictions, you're better off with dedicated boxes. I currently run 3 old computers: Ubuntu, Windows XP, and Windows 2003, with Apache on XP running PHP sites and reverse-proxying for the IIS server on the 2003 box. Ubuntu handles memcache. Because I'm not made of money, I'm going to virtualize all three systems onto one quad-core system, which will cost around $600 rather than $1800 for three new systems. It'll also cut down on power usage.

Slowness can be caused by any number of issues. An old hard drive can make a system sluggish; just imaging the existing systems onto brand new drives could make things better. Upgrading the network to 1Gbit, or just making sure the switches you have are performing, could help. Putting more memory into existing systems could also speed things up.

Make sure the power supplies are running well, fans aren't clogged with dust, and that proper cooling is in place.

If all else is not sufficient, progressively purchase new systems to replace old ones and give the old ones to charity after 6 months to make sure everything is good.

Keep it simple (0)

Anonymous Coward | more than 4 years ago | (#30189568)

Lots of other people have already pointed this out, but I'll chime in: don't mess with what works.

Unless you have a huge influx of people coming in or a change in the way the network will be used, stick to the current set up. Do not go virtual or load balance and complicate things. That may even void your support contracts if you have any. Assuming you have to upgrade, try this:

1. Buy new servers for each service, just like it was before.
2. Buy at least one extra server. Maybe more.
3. Set up one new server at a time, keeping the old one on hand in case something on the new server doesn't work perfectly. You should always be able to revert back during the transition.
4. Make images of the new servers. Use clonezilla or something similar. Then, if one server dies, you have an image that can be transferred to a spare machine (see #2).

The big things here are that you should keep things simple, have a backup in case of hardware/software failure, and do one service at a time. That ensures that if something goes wrong, you know which server caused the problem.
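
On #4: Clonezilla is normally driven from its live-CD menus, but if you want something scriptable, plain dd gets you the same idea - a rough sketch, where the device and destination are made up (boot a live CD so the disk isn't mounted):

    # raw image of the whole disk, compressed on the fly
    dd if=/dev/sda bs=4M | gzip > /mnt/backup/mailserver-$(date +%F).img.gz
    # restore onto the spare box later:
    gunzip -c /mnt/backup/mailserver-2009-11-20.img.gz | dd of=/dev/sda bs=4M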

services list... (1)

itzdandy (183397) | more than 4 years ago | (#30189620)

www, ftp, email, dns, firewall, dhcp

Decide what truly needs to be distributed: DNS, DHCP, firewall. What is likely not necessary to distribute: WWW, FTP, email.

DNS can be replicated with BIND, or you can run a DNS server that uses MySQL and replicate the MySQL database. DHCP must run at each site, but you need to decide if you want DNS updated by DHCP. If so, you need to decide if you want those hostnames available across the network. DHCP can update DNS when a client requests an address, DNS can then replicate between each site's DNS server, and in the end you could access that machine from anywhere on the network your firewall rules permit.
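
The dhcpd.conf side of that looks roughly like this - the key and zone below are made up, and you'd generate your own secret (e.g. with dnssec-keygen):

    # let dhcpd push client names into the zone as leases are handed out
    ddns-update-style interim;
    key DHCP_UPDATER {
        algorithm hmac-md5;
        secret "c2VjcmV0LWtleS1oZXJl";   # placeholder - use a real generated key
    };
    zone office.example.com. {
        primary 10.0.0.53;               # the master DNS server for the zone
        key DHCP_UPDATER;
    }

BIND then needs a matching key and an allow-update clause on the zone.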

For the firewall, consider just using iptables and a bash script to download the current config and then replace some placeholders in the file with the local IP information. I have done this where I keep a copy of the firewall config on an internal webpage and just download the file, sed out my LOCALIPADDRESS and WANIPADDRESS with the local IPs, and write that data to iptables on a schedule with cron. That way you can make a broad change to the firewalls at each site in a single file.
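
The whole thing fits in a few lines - a sketch, with the URL and addresses made up:

    #!/bin/bash
    # pull the master ruleset, localize the placeholders, load it
    WANIP=203.0.113.10
    LANNET=192.168.1.0/24
    wget -q -O - http://intranet/firewall.rules.template \
      | sed -e "s/WANIPADDRESS/$WANIP/g" -e "s|LOCALIPADDRESS|$LANNET|g" \
      > /etc/firewall.rules
    iptables-restore < /etc/firewall.rules

Drop that in cron and every site picks up rule changes from the one template.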

Email doesn't like to be distributed. Consider simply keeping a hot spare, even at a remote site, using something like DRBD to keep the mail store in sync. Because you already have DNS everywhere, you can quickly adjust the DNS entries for the mail server; use low TTL numbers so downtime is minimized. Then you can SSH into the remote machine, mount the store, start the mail services, and you're back in business.
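
The DNS piece is just a low TTL on the mail host's record so a cutover propagates in minutes - the zone fragment below is hypothetical:

    ; 300-second TTL: clients re-resolve quickly after a failover
    $TTL 300
    mail    IN  A   192.0.2.10    ; swap to the hot spare's address when it dies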

There's no such thing as generic best practice (1)

petes_PoV (912422) | more than 4 years ago | (#30189698)

Only what's best for your specific situation.

Once you have met your legal and other regulatory minimum requirements, the rest of the upgrade programme is down to your decision makers. For example: some prefer not to implement hot-standby (relying instead on, perhaps, a third party or business insurance); some make it a 100% absolute requirement for each and every server they possess. You can't just make a statement in isolation; you'll need guidance from the people who control the money, as that's what it all boils down to.

Once you have the answers to two questions:

- what do you value

- how much are you willing to spend for what degree of risk

You can start to make plans. All the best practices I have come across appear to have been written by or for government departments, where budgets are effectively infinite and the worst possible scenario is to open yourself to criticism from your peers and rivals. In the real world neither of these conditions exists. Further, while it's not always good to re-invent the wheel, blindly following one scheme without understanding its values, shortcomings or benefits means you will certainly not get the best value for your organisation, and will not provide a solution that is best for its circumstances.

There is however one best practice you should follow: get everything (esp. from your own people) in writing - who said what, when and to whom.

Microsoft Essential Business Server (0, Offtopic)

VTBlue (600055) | more than 4 years ago | (#30189738)

If you have heard of Small Business Server, Microsoft just released a 3 server solution for businesses of your size called EBS. It will do everything you just outlined including setting the foundation for branch office scenarios with redundancy. With EBS, you get SharePoint, Exchange, Fax serving, AD, DNS, DHCP, firewall, FTP, IIS for web serving all included. Because it is built on Windows Server 2008, you get access to all the services that it provides. It will be a huge leap in user experience for your end-users and you'll finally stop fire fighting and actually allow time to deal with the real IT/Business challenges.

Rather than pushing the features, the real work you need to do is to identify business requirements and map them to features, implementation costs, and upkeep costs.

Once you have a sane, self-managing system in place, you can start to roll out self-service IT systems for your users so they don't bother you for password resets. Some would say that you're putting yourself out of a job by doing this, but if you play your cards right and plan out the technical and the social aspects of the project, you will really be a hero and you'll probably be seen in a more respectable light.

visit http://www.microsoft.com/ebs [microsoft.com]

Astroturfing.. (2, Insightful)

Junta (36770) | more than 4 years ago | (#30190326)

If MS is going to astroturf, you need to at least learn to be a bit more subtle about it. That post couldn't have been more obviously marketing drivel if it tried. Regardless of technical merit of the solution (which I can't discuss authoritatively).

The post history of the poster is even more amusingly obvious. No normal person is a shill for one specific cause in every single point of every post they ever make.

To all companies: please keep your advertising in the designated ad locations and pay for them, don't post marketing material posing as just another user.

Probably forgo virtualization (1)

Junta (36770) | more than 4 years ago | (#30189766)

If the administration 'team' has equal access to all the services today on disparate servers, I don't think virtualization is necessarily a good idea; the services can be consolidated in a single OS instance.

In terms of HA, put two relatively low-end boxes in each branch (you said 7-year-old servers were fine, so high-end is overkill). Read up on Linux-HA, which is free, and use DRBD to get total redundancy across your storage on top of a cheap software mirror or RAID 5. Some may rightfully question the need for HA, but this approach is pretty dirt cheap at low scale.
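
A minimal DRBD resource definition looks something like this - hostnames, devices and addresses are made up, and note the "on" names have to match each node's uname -n:

    resource r0 {
      protocol C;                  # synchronous replication
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;       # local backing partition
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

Heartbeat (Linux-HA) then decides which node mounts /dev/drbd0 and owns the service IP.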

Re:Probably forgo virtualization (1)

Junta (36770) | more than 4 years ago | (#30190264)

And it *should* go without saying, but just in case: none of this replaces a good backup plan. HA strategies will dutifully replicate incoming data into all the redundant copies as fast as they can, to recover from hardware/OS/service death as fast as possible. That includes propagating an accidental deletion or corruption as fast as it can.

Something like ZFS snapshots or rsync with hardlinks for incrementals is a good first line of defense, but you should also have a backup plan with removable media that can be taken offsite - which also means that no matter how bugged/fubar your backup solution is on Tuesday, it can't possibly corrupt your Monday backup.
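
The rsync-with-hardlinks trick is essentially one command with --link-dest - paths made up; unchanged files are hardlinked against yesterday's snapshot so each day costs only the delta:

    TODAY=$(date +%F)
    rsync -a --delete --link-dest=/backups/latest /srv/data/ /backups/$TODAY/
    ln -sfn /backups/$TODAY /backups/latest   # repoint 'latest' at the new snapshot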

Backup fabric/infrastructure (1)

mlts (1038732) | more than 4 years ago | (#30189848)

Don't forget, with all the shiny new servers, to have some sort of backup fabric in place for each and every one of them.

I'd focus on four backup levels:

Level 1, quick local "oh shit" image-based restores: a drive attached to the machine where it can keep images of the OS and (if the data is small) data volumes. Then set up a backup program (the built-in one in Windows Server 2008 is excellent). This way, if the machine tanks, you can do a fast bare-metal restore by booting the OS CD, pointing it at the backup volume, picking the new OS volume, clicking "restore", and walking off.

Level 2, a network backup server: the server would be a machine with a large amount of disk and a tape autochanger. At the low end it would run Retrospect or Backup Exec; at the upper end, NetWorker, ARCserve, or TSM. And it would do d2d2t backups, so grabbing the data from machines is fast and you can make the most of a backup window. Then, with the tape array, set up a rotation system factoring in offsites to Iron Mountain as well as onsite backups. Of course, this server would handle archiving, perhaps with a dedicated DLT-ICE (or similar WORM tech) drive for backups that can't be tampered with.

Level 3, offsite strategy: If you need to have stuff up 24/7, consider a hot or warm site that can take over should something happen to the main site. Even if you don't need an offsite server room, you do need offsite backup storage and rotation planning. Usually this is Iron Mountain's domain, but it can't hurt to also have a tape safe on some leased company property only known by the top IT brass just in case.

Level 4, the cloud: cloud storage is costly, and there are security issues with it. However, the advantage is that if your data center gets completely obliterated, the data is still accessible. I'd recommend having some form of encryption (PGP comes to mind; on the cheap, perhaps TrueCrypt containers), and storing your core business tax data (QuickBooks/Peachtree) here. You want to store what you need to recover the business, but you don't want to store too much, because you are paying lots of cash for it. Last time I checked, for what a cloud provider charges per month for a terabyte of storage, buying an external 1TB drive each month was cheaper. But you are paying for cloud storage's SLA and reliability.
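
Mechanically it can be as simple as encrypt-then-upload - a sketch, with the paths and bucket name made up:

    # bundle the books, encrypt locally, push only the ciphertext offsite
    tar czf books-$(date +%F).tar.gz /srv/quickbooks
    gpg -c --cipher-algo AES256 books-$(date +%F).tar.gz   # symmetric; prompts for a passphrase
    s3cmd put books-$(date +%F).tar.gz.gpg s3://example-dr-bucket/

The provider only ever sees ciphertext, which takes some of the sting out of the security issues.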

I know the backup fabric is usually the last thing on an IT department's mind, but it is VERY important, and may mean the difference between the company existing or not existing when (not if) something happens.

Tailor this to your requirements and budget, of course.

Some possible goals (1)

giladpn (1657217) | more than 4 years ago | (#30190014)

You've got a lot of posts pointing out the error of your ways; basically, what people are saying is that it sounds gung-ho, and there is no clear reasoning in the post justifying the shift.

Maybe they are a bit strong but note there is a lot of experience behind them.

Having said that, I would like to take a kinder gentler tone. Once you go through your fundamental reasons for wanting change, I'd suggest you choose ONE big thing that you want to do. Changing everything at once is usually not so hot.

So what could be a goal that would make your users happier and you a hero? Well, I don't know, but I can tell you what is typical in many such cases:
- lowering capital costs (less spending on physical servers and their maintenance) while keeping everything running; cloud computing may help with that
- faster performance, but only in those places where users are actually complaining. Making a list of those places and fixing them one at a time would be an approach.
- new business needs, but for that - leave everything that works alone and focus on solving the new business need very well. Your partners are your CEO, CFO, marketing etc...

For example, it seems from your post that the overall architecture of the system is actually quite decent. So you may want to just repeat that same architecture in an updated way with a cloud computing approach, save some money, and prepare for the next computing trend. If you decide that is for you, move one server at a time, arrange fail-over in the cloud, and prove one at a time that it works as fast as the old stuff.

Bit of advice: don't just do virtualization without knowing why. If the business reason is economics, then jump over virtualization to the next trend, cloud computing. If it isn't economics, don't bother with virtualization at all.

Consider your goals and choose ONE. 'Nuff said.

Simple solution: vmware + amazon as a backup (1)

mveloso (325617) | more than 4 years ago | (#30190030)

If you have external access at your offices, leave everything as-is. Image everything, and use Amazon as a backup machine. Simple, low-cost, and basically on-demand.

More info about the setup would be good, but if everything's been running, don't touch it - back it up.

Separate data centres (1)

David Gerard (12369) | more than 4 years ago | (#30190152)

At least for external services like www. Big red buttons do get pushed. I worked at one company where the big red button in the data centre got pushed, all power went off immediately (the big red button is for fire safety and must cut ALL power) and the Oracle DB got trashed, taking them off air for four days; their customers were not happy. They got religion about redundancy.

Redundancy is one of those things like backups, support contracts, software freedom, etc. that management don't realise how much you need until you get bitten in the arse by the lack of it. You clearly get it, which is good.

(I have a similar problem at present: an important dev machine has (a) no service redundancy (b) no disk redundancy. (b) is unlikely, (a) requires duplicating all services including a proprietary version control system onto another box. I'm going to have to switch on an old Ultra 60 that's been decommissioned. Argh.)

Some advice (1)

plopez (54068) | more than 4 years ago | (#30190242)

1) don't screw up. This is a great opportunity to make huge improvements and gain the trust and respect of your managers and clients. Don't blow it.

2) Make sure you have good back ups. Oh you have them? When was the last time you tested them?

3) Go gradually. Don't change too many things at once. This makes recovering easier and isolating the cause easier.

4) Put together a careful plan. Identify what you need to change first. Set priorities.

5) Always have a fallback position. Take the old systems offline and cut over to the new system; if the new system fails, roll back. And leave the old systems available for a while, until you feel assured the new ones are stable.

6) Don't drink the koolaid. Any product purporting to help migrations should be avoided unless people you trust have used it and/or you are very familiar with it.

7) Always remember point number 1. Being conservative and careful are your best tools.

Most of the poster don't 'get it' (2, Interesting)

plopez (54068) | more than 4 years ago | (#30190280)

The question is not about hardware or configuration. It is about best practices. This is a higher-level process question, not an implementation question.

Linux Vserver (2, Informative)

patrick_leb (675948) | more than 4 years ago | (#30190286)

Here's how we do it:

- Run your services in a few vservers on the same physical server:
    * DNS + DHCP
    * mail
    * ftp
    * www
- Have a backup server where your stuff is rsynced daily. This allows for quick restores in case of disaster.

Vservers are great because they isolate you from the hardware. Server becomes too small? Buy another one, move your vservers to it and you're done. Need to upgrade a service? Copy the vserver, upgrade, test, swap it with the old one when you are set. It's a great advantage to be able to move stuff easily from one box to another.
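
For the record, building and starting a guest is short work with util-vserver - a rough sketch, with the guest name, distro and IP made up:

    # build a Debian guest with debootstrap, then bring it up
    vserver mail build -m debootstrap --hostname mail.example.com \
      --interface eth0:10.0.0.25/24 -- -d lenny
    vserver mail start
    vserver mail enter    # drops you into a shell inside the guest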

Why all the VM hate? (1)

deadwill69 (1683700) | more than 4 years ago | (#30190508)

I don't see what all the fuss is about VMs. They allow you to continue to run one service per "box" and cut down on the number of physical servers. Using VMs has allowed us to consolidate numerous lightly used, dedicated boxes. In turn, we have improved our failovers with VMware's management console and snapshots saved to a SAN - near-instantaneous recovery without all the headaches. We still do tape and spinning-disk backups, depending on how critical the machine's mission is. There are still a few services that best practice requires to have their own box, infrastructure services being the critical ones; all the rest do just fine virtualized. As for the remote offices, they shouldn't need more than slaved DHCP, DNS, LDAP/Active Directory, a gateway, and a firewall - unless you're using the remote location for load balancing on web, connection redundancy, etc. We use an MPLS link to one of our remote offices for this ourselves. HTH, will