
Experiences with Replacing Desktops w/ VMs?

Cliff posted more than 8 years ago | from the a-novel-idea dept.

E1ven asks: "After years of dealing with broken machines, HAL incompatibilities, and other Windows frustrations, I'd like to investigate moving to an entirely VM-based solution. Essentially, when an employee comes in in the morning, they log in and automatically download their VM from the server. This gives the benefits of network computing: they can sit anywhere, and if their machine breaks we can instantly replace it. The hope is that the VM will run at near-native speed. We have gigabit to all of the desktops, so I'm not too worried about network bandwidth if we keep the images small. Has anyone ever tried this on a large scale? How did it work out for you? What complications did you run into that I probably haven't thought of?"

442 comments

Inevitably (0, Redundant)

paxmaniac (988091) | more than 8 years ago | (#15924732)

Do it in Linux - works perfectly and seamlessly!

Re:Inevitably (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15924773)

Haha, no it doesn't, you fucking teabagger.

Besides, Chuck Norris isn't teabagging, he's potato-sacking.

Re:Inevitably (-1, Offtopic)

creimer (824291) | more than 8 years ago | (#15924866)

I can't see Chuck Norris either teabagging or potato-sacking. Bagging and sacking his opponents, absolutely.

I work like that now, but 2000 miles away (3, Informative)

kabz (770151) | more than 8 years ago | (#15924785)

I work at a client site where I implement large software systems. I have my own laptop which, due to sadly lacking Oracle performance over the WAN, I primarily use as a dumb terminal to various Citrix apps, and to Windows Remote Desktop at my home office where I can run Visual Studio, DB-based apps, etc.

This works great, with one major caveat: if the network starts stuttering, the performance of both Remote Desktop and Citrix suffers badly. Otherwise, the benefits are great: a much reduced amount of sensitive data on the laptop, access to a higher-performance office machine, and less app latency when talking to 'local' databases 2000 miles away.

Re:I work like that now, but 2000 miles away (0)

Anonymous Coward | more than 8 years ago | (#15924913)

And this has what again to do with the question?

Re:Inevitably (0)

Anonymous Coward | more than 8 years ago | (#15924871)

yeah but what would you do with it?

Re:Inevitably (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15925037)

You fucking shitbag zealot faggot. I fucked your bitch ass mom last night. She works perfectly and seamlessly ;)

user icons (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15924735)

Hey Slashdot, why don't you stop fucking around with your CSS Javascript-y message board and implement user icons like the rest of the web?

Re:user icons (0, Insightful)

Anonymous Coward | more than 8 years ago | (#15924746)

So we could get inline goatse, rather than obfuscated links and ascii art?

Re:user icons (0, Redundant)

Anonymous Coward | more than 8 years ago | (#15925027)

Author of the parent comment here: I'm not a troll. Clearly I was saying it would be a dumb idea to have custom images, considering the troll contingent here. If anything I was speaking against it and against trolls, you dumbass moderator.

Re:user icons (0, Offtopic)

WilliamSChips (793741) | more than 8 years ago | (#15924762)

Because avatars on a forum of this size (or even anything approaching it) are a disaster waiting to happen? It only works for Digg because theirs are so small you're better off sticking with the default, and it works for GateWorld because you have to reach a certain post count for a custom avatar and the mods fully delete posts.

Re:user icons (0, Offtopic)

Eideewt (603267) | more than 8 years ago | (#15925038)

But I do not want to see your ugly mug.

FP (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#15924738)

First Post!

No 3D (4, Interesting)

sarathmenon (751376) | more than 8 years ago | (#15924740)

There are a lot of complications in using a VM: there's no 3D, no good audio, etc. Plus, if your base computer doesn't fit the HAL, you can't expect much out of the VM. I'm actually surprised at this; a VM will give you the benefit of portability, but if that were your goal you'd be better off giving a laptop to all your employees.

Re:No 3D (1, Informative)

Anonymous Coward | more than 8 years ago | (#15924859)

There are a lot of complications in using a VM: there's no 3D, no good audio, etc.

Exactly. A good test of how the proposed system would perform is to use Terminal Services (obviously on a remote computer). Even with gigabit Ethernet, many applications are going to drag. IMHO, if the author is running a Windows network, he or she should just use Remote Desktop and standardize each computer with a default installation. Personal data is saved on the server while all the heavy lifting is done locally (with the option of remotely executing high-speed apps on other powerful remote machines). To speed up the updating of computers, if they are all identical he can simply use ghosting software to copy hard drives. If a computer has a software problem, swap the hard drive with a previously ghosted copy (of the standard installation) and then re-ghost that drive later, or ghost over the network. I've done this and it works. It also makes it really easy to train junior sysadmins. Just make sure you don't allow users to store anything other than temporary files on the local systems' hard drives.

Re:No 3D (5, Informative)

innosent (618233) | more than 8 years ago | (#15924954)

For Windows, use roaming profiles and default installations. For Linux, rsync works quite well for the base OS (say, with a staggered start time at night based on IP), with OpenLDAP doing auth and home dirs stored on central server(s) and mounted via NFS. New system setup becomes: boot a Knoppix CD, partition the new drive, format the partitions, mount them, rsync the distribution to the new machine, chroot, and set up the boot loader. You could of course script all of this (a rough sketch is below); it's very similar to what I do for kiosk systems (a Linux/Firefox setup), except the kiosks don't change, so it's just a big tarball via sftp instead of rsync. You could also do tarballs, and keep the last few versions as backups in case you screw something up. If the hardware is identical, use the distribution of your choice, but if there are several different systems, you may want one with good hardware detection (like Knoppix).
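Something like the following, run from the Knoppix CD; the rsync module name, server name, and partition layout are all invented for illustration, and real boot-loader setup varies by hardware:

    #!/bin/sh
    # Sketch: clone the base install onto a fresh disk from a Knoppix CD.
    sfdisk /dev/hda < /root/partition.layout    # scripted partitioning
    mkfs.ext3 /dev/hda1                         # format the root partition
    mount /dev/hda1 /mnt
    rsync -aHx imgserver::base/ /mnt/           # pull the distribution image
    mount --bind /dev /mnt/dev                  # give grub-install a /dev to work with
    chroot /mnt grub-install /dev/hda           # set up the boot loader
    umount /mnt/dev /mnt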

Um, wouldn't a ... (4, Interesting)

Bin_jammin (684517) | more than 8 years ago | (#15924748)

thin client be a cheaper and easier solution per seat?

Re:Um, wouldn't a ... (0)

Anonymous Coward | more than 8 years ago | (#15924765)

run terminal services and thin clients.

Much better solution IMHO..

PXE Boot (5, Informative)

numbski (515011) | more than 8 years ago | (#15925016)

I think I have to disagree. Most of the better gigabit NICs out there support PXE boot. Get a small boot loader image going: if the machines are all on the same LAN segment, each will grab the latest loader image (~2MB) at boot time, and the loader can then boot the full OS image.

You can then just capture or encapsulate the computing session to an image file. It's not a fully virtualized environment, so you still get the benefit of the CPU horsepower at the workstation, but if corruption occurs you just roll back the session file. I think.

In a nutshell, this is how Windows hibernation functions: it just dumps RAM to a file, I think. I haven't tried this in practice, but it should work.
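For the loader-serving side, one common setup is PXELINUX over TFTP with dnsmasq doing both DHCP and TFTP duty; a minimal sketch, with addresses and paths invented:

    # /etc/dnsmasq.conf -- illustrative PXE boot service
    dhcp-range=192.168.1.100,192.168.1.200,12h
    dhcp-boot=pxelinux.0            # boot program handed to PXE clients
    enable-tftp
    tftp-root=/tftpboot             # holds pxelinux.0 and pxelinux.cfg/

pxelinux.cfg/default then points the small loader at whatever kernel or full OS image it should chain into.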

Re:Um, wouldn't a ... (4, Informative)

OriginalSpaceMan (695146) | more than 8 years ago | (#15924766)

Plus, on a LAN, thin clients will be just as fast, visually, as a local PC. Hell, I play videos over my RDP thin clients and it works quite well.

Re:Um, wouldn't a ... (3, Interesting)

t1n0m3n (966914) | more than 8 years ago | (#15924875)

"Plus, on a LAN using thinclients will be just as fast, visually, as a local PC. Hell, I play video's over my RDP thinclients and it works quite well." Funny you should make that statement; video via RDP on my locally connected 100Mbps link is absolutely horrible. I have several computers, and I use RDP to access them all. Every time I try to watch a video, I find myself copy/pasting the link to my local computer to actually watch the video.

Re:Um, wouldn't a ... (1, Informative)

Anonymous Coward | more than 8 years ago | (#15925021)

Funny you should make that statement; video via RDP on my locally connected 100Mbps link is absolutely horrible. I have several computers, and I use RDP to access them all. Every time I try to watch a video, I find myself copy/pasting the link to my local computer to actually watch it.

Are you using cheap (or onboard) NICs, cheap switches and long wires? Do the math:
100Mbps link over long wires + cheap NICs (like Realtek) + cheap switches (like Planet) = 10Mbps link
100Mbps link over short wires + 3com NICs + 3com switches = 100Mbps link

Re:Um, wouldn't a ... (1)

flngroovy (8003) | more than 8 years ago | (#15924841)

Maybe not. Rather than having several huge servers with a ton of RAM back in the server room, you could offload this work to the clients' hardware.

VMware offers a solution for this already, but I'm not sure I would jump into it just yet. In case you didn't know, Vista will handle the HAL issue with no problems (according to MS).

http://www.microsoft.com/technet/windowsvista/deploy/depenhnc.mspx [microsoft.com]

Gay (-1, Flamebait)

Anonymous Coward | more than 8 years ago | (#15924753)

That's the most stupid idea in the world!

Re:Gay (0)

Anonymous Coward | more than 8 years ago | (#15924976)

You've obviously never maintained a thin-client network...

Citrix (3, Interesting)

Bios_Hakr (68586) | more than 8 years ago | (#15924761)

Sounds like you want something like Citrix.

Although, what you could do is automagically have a standard WinXP workstation log in on startup. Next, put VMware in the Startup folder so that it launches as soon as the computer logs in. Finally, have VMware point to a disk image stored on your server. The employees will then see a full-screen VM ready to authenticate on the network and begin their day.

If you really wanted to be fancy, have that image automagically map to a network drive on your SAN/NAS as the D:\ drive. Tell employees to use the D:\ drive to store all work-related documents.

It could work. But you'd be looking at maybe 5 minutes for the morning boot-up. Not to mention all the employees hammering the network for a 2-4GB image at 7am will really thrash the servers.

If you insist on doing this, go a bit further. Activate that WoL crap and autoboot the workstations at staggered times between 6am and 7am.
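A sketch of that Startup-folder glue as a batch file; every machine, share, and path name here is invented, and the -X full-screen switch should be checked against your VMware version:

    rem startup.bat -- illustrative only
    rem Map the per-user documents drive off the SAN/NAS:
    net use D: \\nas01\users\%USERNAME%
    rem Power on the user's VM full screen (-X) from the image share:
    "C:\Program Files\VMware\VMware Workstation\vmware.exe" -X "\\imgserver\vms\%USERNAME%\winxp.vmx"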

Please, god, no. (1)

grrowl (953625) | more than 8 years ago | (#15924836)

Citrix is probably the worst software I've ever had the displeasure of using. Buggy, slow when it shouldn't be, and just generally horrible to use. Just... don't.

Re:Please, god, no. (1)

j235 (734628) | more than 8 years ago | (#15924850)

I do all my work through one or two citrix sessions (one here in the US, one in Germany).

It's not THAT bad, but yes, Citrix is rather annoying.

Re:Please, god, no. (3, Informative)

wwest4 (183559) | more than 8 years ago | (#15924856)

Just don't what... misconfigure or misapply the technology? If "Citrix" is anything, it's too expensive in some situations and inappropriate for others. Maybe you were just using some Citrix software to do something it's not ideal at doing, or otherwise using it incorrectly... in any case, it's kinda silly to malign an entire software suite with a vague anecdote.

Re:Please, god, no. (0)

Anonymous Coward | more than 8 years ago | (#15924888)

>" ...in any case, it's kinda silly to malign an entire software suite with a vague anecdote."

You must be REALLY NEW here...

Re:Please, god, no. (1)

nolife (233813) | more than 8 years ago | (#15924904)

So what do you use that you are comparing to Citrix?

Re:Please, god, no. (2, Funny)

Monkelectric (546685) | more than 8 years ago | (#15924951)

If their hiring practices are any indicator ... every 6 months or so Citrix calls me and asks if I'll come in for an interview -- I ask what the salary is, and after I stop laughing at them I say no thanks.

Re:Citrix (4, Informative)

discord5 (798235) | more than 8 years ago | (#15924938)

Sounds like you want something like Citrix.

Citrix (or a similar product) is exactly what he should be looking into. Downloading entire disk images over the network is just a pain in the ass every time someone boots. Citrix isn't the solution to all things, but it beats VMs for most practical applications.

But you'd be looking at maybe 5 minutes for the morning boot-up. Not to mention all the employees hammering the network for a 2~4gb image at 7am will really thrash the servers.

See, that's the big negative point in the entire setup. The boot-up time is a pain in the neck, but people can live with that easily. They'll fetch their cups of coffee, have the morning conversation with coworkers, and return about 10 minutes after their machines have booted up. The real issue is the server getting hammered every morning, slowing these boot times further as more machines are added to the network.

I can hear it now: set up a second server, set up a third, etc. Yes, set up a bunch of servers that do nothing all day but hand out images, and don't forget the backup servers (you don't want one of those servers to crash in the morning and take out the entire accounting department). I'm seeing an entire rack of machines at this point doing nothing but handing out images, wired up to really expensive network gear, doing nothing really useful. Don't get me wrong in that last statement: the usefulness of this construction is that you can easily exchange PCs and images without having to worry about hardware, software installed on each user's PC, etc. But there are a lot of more cost-effective ways to achieve something similar.

Take the budget for those image servers, backup servers, VM software licenses, and network gear, and buy a single server and a good backup mechanism (or a backup server in failover). Spend some time setting up profiles and think about what software is present on all machines. Take an image of every machine you install differently, and copy it to the server. Buy software like Citrix (or anything else resembling it) to have special applications available at one server (think backups here), and you have a pretty decent solution that doesn't hammer your network and servers every morning or give you a headache by 10am because some people aren't getting their images.

I've seen the concept of VM images on a server, and I've seen people get bitten by it because they didn't foresee the amount of storage and network traffic involved. Most of these people didn't need such an elaborate solution. Hell, I've seen half a server farm run VMware because "it was a good way to virtualize systems and make things easily interchangeable" when those people would have been much more satisfied with a "simpler" failover solution (note the quotes: failover also requires thought, but usually ends up being a cheaper solution hardware-wise).

On top of it all, using VMs for desktop operating systems eats a lot of resources. You're running an operating system that runs software that runs another operating system. Some would say it's hardly noticeable, but why waste the resources? You'll make today's hardware run like last year's, which for most applications is not an issue, but more likely you're going to run last year's hardware like hardware from two years ago, because otherwise you'd have to invest in new desktops for the entire company.

Let's talk mobility for a moment. Imagine your salesman with his laptop and flashy UMTS (or whatever standard they've cooked up) connection on the road. He's going to want to check his mail on the road, so he'd have to pull an image over a connection that can hardly manage streaming video... Nope, you're going to give him his operating system locally, install his software, and pray to God he doesn't send too many large documents over that very expensive UMTS connection. That starts breaking the principle of having images for every PC.

In my humble opinion, this kind of virtualisation is a great idea, but it's going to end up costing you an arm and a leg, and it will eventually be abandoned because it causes more headaches than it's worth. Go for a cheaper and more viable solution like Citrix, and when a desktop PC dies, just reinstall the image.

Activate that WoL crap and autoboot the workstations at staggered times between 6am and 7am.

People actually use wakeup on lan on desktops?

Re:Citrix (1)

cryptoluddite (658517) | more than 8 years ago | (#15924945)

So use rsync to download the images (a sketch follows below). Why on earth would you use WinXP just to start up VMware? Just do a quick Linux install of any distro and use VMware for Linux. In my experience the Linux version is much better anyway (for whatever reason).

One caution: with some distros, VMware has to actually interpret a lot of programs (due to guest/host virtual-memory spaces overlapping). On those combinations VMware will be *really* slow... basically anything Fedora. On other systems you're talking maybe 5% average overhead, depending on workload of course.
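The nightly/morning pull is then one line; this sketch assumes the images live in an rsync module named "vms" on a host called imgserver (both invented), and --inplace patches the multi-gigabyte file instead of rewriting it:

    # Fetch only the blocks that changed since the last sync:
    rsync -av --inplace imgserver::vms/$USER/winxp.vmdk /var/vm/winxp.vmdk

Because rsync's delta algorithm ships only changed blocks, the 7am rush moves a fraction of the full image size over the wire.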

Re:Citrix (0)

Anonymous Coward | more than 8 years ago | (#15925025)

Can you recommend a distro of linux that works well with vmware?

Why not just use sunrays? (5, Insightful)

scubamage (727538) | more than 8 years ago | (#15924763)

Get some Sun Microsystems Sun Rays. Seriously, that's exactly how they work. Your session is saved on the server and can be resumed anywhere else you plug in your smart card. One server, and all of the terminals you need.

Re:Why not just use sunrays? (0)

wrex (16592) | more than 8 years ago | (#15924838)

Because Sun Rays are really sucky. Take it from a former Sun Microsystems instructor: they really are. You're better off with a Linux solution, for multiple reasons (I'm not going to go into all of them now; just research it, starting with the cost factor).

Re:Why not just use sunrays? (5, Informative)

CapeBretonBarbarian (512565) | more than 8 years ago | (#15924909)

Because Sun Rays are really sucky. Take it from a former Sun Microsystems instructor: they really are. You're better off with a Linux solution, for multiple reasons (I'm not going to go into all of them now; just research it, starting with the cost factor).

Come on, you're going to have to give more information than that. We use Sun Rays quite a bit in our classrooms and labs, and if you have the bandwidth and a good server on the other end, you're in the money. Sessions can be keyed to an access card and will follow you around the campus. If a Sun Ray breaks down, just swap in a new one and the session continues exactly as you left it. Pull your card, come back in a week, and pick up exactly where you left off. Everything resides on the server. No maintenance required at all on the client side.

What version of the Sun Ray server software were you using that made it so "sucky"? From my experience, they worked great for us. The only downside we had is that streaming video over Citrix to the Sun Rays didn't work so hot. However, streaming video natively from the Sun Ray server to the thin clients worked fine so the problem there was probably with Citrix Metaframe.

Sun has also recently upgraded the Sun Ray thin clients so they have gigabit Ethernet, and they now have a more complete end-to-end solution that will let you run Windows apps on your Sun Ray (in addition to all the Solaris/Unix apps), thanks to their Tarantella purchase. You'll still need some Terminal Server licenses, but you'll save on the Citrix.

You could try calling the local Sun reps and see if they'll give you a demo. They did that for us - drove 6 hours to our workplace and set up a server and clients to demonstrate it for us.

Re:Why not just use sunrays? (2, Informative)

Sampizcat (669770) | more than 8 years ago | (#15924862)

We used Sun Rays at my old workplace. They worked fine and were very reliable: just throw in your card, put in your password, and away you go. I highly recommend them.

And no, I don't work for, and am not in any way affiliated with, Sun Microsystems. I just really like their product.

Sampizcat

Thin Client? (0)

Anonymous Coward | more than 8 years ago | (#15924768)

Wouldn't setting up a terminal server and thin clients be cheaper and more efficient to manage? Granted, this puts processing in a central location rather than on the client side, but in an office environment, this should not be a problem, and should have the same performance as using the VM.

Snowcrash (0, Offtopic)

adisakp (705706) | more than 8 years ago | (#15924771)

I guess you're a Neal Stephenson fan and want to work for the gov't?

Look at LTSP.ORG (5, Informative)

EDinNY (262952) | more than 8 years ago | (#15924776)

LTSP.ORG does something similar. You run X clients on a common "server" and view them with an X server on almost anything with 64 megs or more of memory.
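As a sketch of how little the client side needs: assuming the application host's display manager has XDMCP enabled, the terminal just starts a bare X server pointed at it (hostname invented):

    # The whole "thin client" -- an X server querying the app host for a session:
    X :0 -query apphost.example.com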

might not be cost-effective (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15924788)

In terms of CPU cycles, that'll be a huge load on the servers while the desktops go underutilized. (Well, actually, those VM players seem to be pretty piggy; you need 2GB of RAM or you'll max out the CPU.) And the interactivity won't be as good as native Windows desktops.

What about the documents people create and edit, as well as the apps they might want to download or install themselves? If they store them "locally", they'll be gonzo when you swap in a new image. There'll be some unhappy campers.

Not so sure about the architecture... (4, Insightful)

steppin_razor_LA (236684) | more than 8 years ago | (#15924791)

I'm a VMware/virtualization fan, but I don't think this is the best application. It seems to me that it would be smarter to use Terminal Services, Citrix, or another thin-client approach.

If you were going to use VMware, make a standard image and push it out to the local hard drives. Don't update that image unless it's time to push out a new set of Windows updates, etc. If you do need to update the image, though, that is going to be *hell* on your network and file servers.

I think it makes more sense to run a virtualized server than a desktop.

Also, you might end up paying for 2x the XP licenses since you'd have to pay for the host + guest operating systems.

Re:Not so sure about the architecture... (0)

Anonymous Coward | more than 8 years ago | (#15925005)

The really nice thing about Citrix clients is that you do not even have to use Windows [citrix.com] on the client side. You just have to point your browser at the right Citrix helper binary. I set it up for my wife (who works for a large BC hospital chain), and now we are a Windows-free environment... at least at home. She tells me that the Linux client setup on Slackware runs faster than her newer XP PC at work with 512MB of RAM! The Citrix client runs smoothly under X with a minimal GUI like Window Maker or Xfce, and really smokes with 128MB of RAM and a PIII 450! It runs a little slower under KDE, but that is the price you pay. If I had gobs of RAM like her work PC, I am sure it would cook along fine even with a big GUI like KDE.

Still Windows (2, Interesting)

klaiber (117439) | more than 8 years ago | (#15924797)

Well, you'd still be running Windows (if that's your poison), and so your users would still be subject to, say, all the Outlook or Explorer weaknesses and exploits. The main upsides I'd see are
(a) presumably all VMs present the same device model, so you'd be running the same image everywhere, and
(b) assuming you carefully partition the users' data onto a different volume, you can give them a "fresh" virtual machine (a fresh Windows registry!) every time.

Nice and useful, but still not bomb-proof.

The way we do it... (3, Informative)

DarkNemesis618 (908703) | more than 8 years ago | (#15924800)

Where I work, we have a domain, so a user can log onto any computer and have their email and favorites all set up. Their profile automatically maps their departmental network drives and their personal network drive (where they're supposed to save their documents). The normal programs are installed on every machine, and it's not hard to temporarily install any special programs they need on the machine they're using in the event theirs is unusable. The only issue we have is that, no matter how much we tell them to save on the network, they apparently refuse to listen and save stuff on their local hard drive. And then they subsequently blame us if their hard drive dies and they lose data. But that's another story.

Re:The way we do it... (2, Informative)

ejdmoo (193585) | more than 8 years ago | (#15924828)

Configure folder redirection. Then the "My Documents" folder will live on the network, and users won't have to know anything special to save there.

The desktop is still a problem though.
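For what it's worth, folder redirection is normally set through the Folder Redirection node in Group Policy; the sketch below only illustrates the per-user registry value that redirection ultimately points at the file server (server and share names are invented):

    rem Illustrative only -- prefer the Folder Redirection node in Group Policy.
    rem "Personal" is the shell's name for the My Documents folder.
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" ^
        /v Personal /t REG_EXPAND_SZ /d "\\fileserver\home\%USERNAME%\My Documents" /f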

Re:The way we do it... (1)

killjoe (766577) | more than 8 years ago | (#15924885)

The whole Windows profile thing is just a pile. God help you if you ever want to change your domain, for example. How long ago did Unix invent the idea of mounting your home directory from a network server, anyway?

Re:The way we do it... (1)

cptgrudge (177113) | more than 8 years ago | (#15924968)

The "desktop" doesn't even matter if you are doing real profile redirection. The C:\Documents and Setings\%USERNAME% folder gets redirected to a location that the domain admin chooses. Put it on a file share on a server and be done with it. You can't really do anything about the local drive, especially if you have sloppy legacy programs that stupidly require local admin access.

Regular users don't need local admin rights on their computer anymore with most apps, but it may require descending into packaging hell to get your legacy apps to run with the correct permissions. It usually involves fishing out installer-created registry entries and copied files to make sure the user has permission to read and edit them. But sometimes that old app just Needs Admin Rights.

In that case, you really can't stop the user from creating some folder structure of their own at the root of the drive just for themselves. I dunno. Maybe hide all the local drives through Active Directory?

Re:The way we do it... (0)

Anonymous Coward | more than 8 years ago | (#15924863)

Where I work, we have a domain so a user can log onto any computer and have their email & favorites all set up. In their profile, it automatically maps their departmental network drives and their personal network drive (where they're supposed to save their documents to).

They do that where I work, too... only for some reason our roaming profiles get corrupted every so often, and the first solution when you call up with a problem is "Let me blow away your roaming profile; that should fix it."

And every 3 days or so, Visual Studio thinks it needs to "configure for first time use" on the desktop that I've been using every day.

I love this solution.

rootkits (1)

whateverrrrrrrr (995949) | more than 8 years ago | (#15924802)

Wouldn't this make your system a lot more vulnerable to rootkits?

Why not use Thin Clients & Blade Servers? (1)

tyler@mango.net.nz (129548) | more than 8 years ago | (#15924803)


Why not use a more centralised approach, with a rack of blade servers running the client VMs, load-balanced using VMware, and thin clients on the desktops?

This means replacing a user's desktop hardware is very easy, they can use "their" PC image from any thin client on the network or over VPN from home, and wiping and reloading their PC is automated from within VMware's consoles.

Thin client. (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15924806)

Have you looked into thin clients? You're describing them. Doing it with Linux is simple, faster, and easier on servers. Novell put in a solution for us: 10K users log in to a few dozen servers every day across the US. SLED 10 workstations (thin clients) have some software on them and some on the server. User files are on the server. When we want to upgrade boxes, we upgrade the servers and are done. If a user somehow breaks the box (not that malware and viruses are big issues at this point, but things sometimes happen with users who maliciously boot from CDs), we push out a new thin-client image to that workstation. No on-site visits, as we use remote X sessions and VNC if needed.

I have a dream job and could really work from home for most of it except meetings w/my boss when he gives me my bonus. :-)

As an aside (1)

kafka47 (801886) | more than 8 years ago | (#15924822)

I'm not experienced with a VM setup like the one you describe, but let me offer this: if you have them download their images every morning, you may run straight into a brick wall. Performance testers call this "the 9am syndrome", and you'll need some fairly serious server bandwidth to handle everyone copying such a large file. This will turn your network, and the disk you're serving the images from, into a seething pile of molasses. OK, maybe I'm being a shade gloomy, but I'd recommend not going the download route if at all possible. Even if you have gigabit to the desk.

/K

Re:As an aside (1)

grcumb (781340) | more than 8 years ago | (#15925033)

"Performance testers call this "the 9am syndrome", and you'll need some fairly serious server bandwidth to handle everyone copying such a large file. This will turn your network, and the disc you're serving the images from, into a seething pile of molasses."

One word: Multicast [wikipedia.org] .

I've seen a room full of PCs simultaneously boot and load the same ~1GB Linux partition on a 100Mb network in no time. If they hadn't told me how it was working, I'd never have known they weren't loading a local partition.
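The udpcast tools are one off-the-shelf way to do exactly that; a sketch, with the image path and target device invented:

    # On the image server -- one stream, no matter how many clients listen:
    udp-sender --file /images/desktop.img
    # On each booting client -- writes the shared multicast stream to disk:
    udp-receiver --file /dev/hda1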

My experience... (2, Informative)

Starbreeze (209787) | more than 8 years ago | (#15924827)

I needed a quick and cheap solution for some Windows machines for our QA group to test on. We bought some VMware Workstation licenses and ran six XP VMs on each beefy machine (about the limit for a machine with 4GB of RAM). Granted, there are better VM solutions than Workstation, but we wanted cheap and quick. Don't count on it for anything mission-critical: about every two weeks, VMware would basically eat itself and the Linux box. However, it was easy from a maintenance point of view, because I could VNC in and see all six VMs at once. Also, since VMware has a cloning feature, any time QA infected the machines with something nasty, or just pissed off XP, I could re-clone it. And remember that any VM hogging resources can slow down the other VMs on the same host.

However, for the context that you are speaking about, I would take the advice of individuals below and look at Citrix or roaming profiles.

Keep it simple (1)

nateman1352 (971364) | more than 8 years ago | (#15924830)

I can understand completely the desire to centralize computing resources so that you can cut desktop maintenance costs, but even if you have gigabit to all the desktop systems, that's still nowhere near the speed of an internal hard disk. Also, what happens for laptop users? Perhaps you could solve this with a replication system of some kind that checks whether the images on the server differ from the locally stored ones, and if so uses binary patching to update the local version, but that in and of itself would likely be a maintenance nightmare.

Also, my personal experience with VMware, Virtual PC, and QEMU has consistently been that there is a noticeable difference in speed between native hardware and the VM. In the interest of customer satisfaction (users get pissed off with slow systems), I would keep using native hardware. Of course, you can minimize driver/HAL problems by keeping your hardware as standardized as possible (buy the same model from the same company for everyone, as much as you can).

In short my humble opinion is keep it simple.

Re:Keep it simple (1)

c.morrissey (990575) | more than 8 years ago | (#15924858)

Also, my personal experience with VMware, Virtual PC, and QEMU has consistently been that there is a noticeable difference in speed between native hardware and the VM.


If the user slows their system with adware and spyware, and the network with P2P and IM software that you need to remove because their desktop is chugging (depending on how strict you are with software installs and network traffic), I think you would see a huge speed difference by VMing them a system that's "so fresh and so clean clean".

Although I do agree that some users should keep their OS native: high-level system administrators, designers, programmers, and anyone needing large system resources or graphics processing power. But then again, we aren't talking about those "slashdotters" anyhow.

Aw come on (1)

mnmn (145599) | more than 8 years ago | (#15924834)

You can have remote profiles, and even link the desktop, My Documents, etc. to remote folders.
Why go through the overhead of a VM? Citrix is one idea, but the most efficient thing is to just make their profiles remote.

USE A THIN CLIENT TERMINAL (0)

Anonymous Coward | more than 8 years ago | (#15924837)

USE A THIN CLIENT TERMINAL

Set up a machine to serve out a bunch of virtual terminals.
Have your machines run a thin client: Citrix thin clients, Citrix server.

Or my personal favorite, which is rdesktop on Debian stable.

Why bother with all this download-this/download-that bullshit? Just use a thin client; obviously the thick Windows clients require too much work to maintain and are too much of a pain in the ass for what you need.

Enterprise Desktop (3, Interesting)

phoebe (196531) | more than 8 years ago | (#15924840)

VMware recently announced Enterprise Desktop; it sounds closer to what you are looking for:

Enterprise Desktop Products

Support the needs of a global workforce by providing virtualized computing environments to enterprise employees, partners, and customers to secure access and manage resources. Provide a full PC experience without compromising security or hampering users. Improve desktop manageability, security, and mobility by applying virtualization technologies to client PCs and the data center for server-hosted desktops.

http://www.vmware.com/products/enterprise_desktop.html [vmware.com]

Re:Enterprise Desktop (1)

mrbooze (49713) | more than 8 years ago | (#15924911)

Yes, VMWare definitely pushes solutions like this pretty heavily. I would recommend contacting VMWare and/or researching their offerings to see how they are architected. (You don't have to *use* VMWare, of course, you can just get an understanding of concepts/tools/practices and look at other vendors or open source solutions as well.)

I saw a presentation by a representative of one of Cook County's departments that was deploying some solutions like this. VMWare's Enterprise Desktop running on servers in the data center. And the employees at their desks didn't even have real computers, they just had dedicated Wyse terminals or other remote-access hardware that presented their virtual desktop to them automatically.

There are a lot of pluses and minuses to scenarios like this. What works great for some companies might not for others, and even in those companies you'll almost certainly find some people who, for whatever reason, really do need a local native desktop. But for your security guards and secretaries and accountants and whatnot, virtual desktops could be a good solution.

Linux + QEMU + kqemu + qcow images (1)

ndogg (158021) | more than 8 years ago | (#15924848)

I can't give you the exact details on how this would be done because I haven't actually tried it, but it should be workable.

The idea is that all your desktop machines would be running a minimal Linux install that can easily be replaced on short notice using various imaging techniques.

Basically, these machines would have just enough to run a graphical login; after a user logs in, a script fetches that user's QEMU disk image from a network drive and puts it on the local hard disk. It would then boot QEMU with that image.

Those disk images would be in QEMU's ideal format, qcow. Qcow has a number of nice features including AES encryption and compression. Also, the disk images can be separated by base images and changes (which can be committed back to the base image).

Finally, I would try contacting Fabrice Bellard about installing the kqemu accelerator on all the machines, or see how well it works with the free QVM86 replacement (NB: QVM86's development seems to have been frozen for almost a year now).
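A sketch of the base-plus-overlay layout with the stock qemu-img commands (file names invented):

    # One read-only base image, one copy-on-write overlay per user:
    qemu-img create -f qcow -b /images/winxp-base.img alice.img
    # Boot the user's overlay (kqemu accelerates this if the module is loaded):
    qemu -hda alice.img -m 512
    # Fold a user's approved changes back into the base image:
    qemu-img commit alice.img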

Uh, ever heard of CAPACITY PLANNING? (0)

Anonymous Coward | more than 8 years ago | (#15924855)

If you have to ask such a question, it is clear that you don't know anything about capacity planning.
Your VMware solution eats RAM and CPU cycles on the target PC. You may or may not have that capacity available on your existing PCs. You may not have considered the power user in the corner who is already using 1.6GB of RAM (in disk cache or whatever else keeps her productive as she flips between open windows from multiple applications) as you "plan" to take 800MB away from her with your VMware solution. If you had any responsibility for budgeting the purchase of the PCs, which includes predicting how long they will last, you would know you have to do a capacity plan. But the fact that you ran gigabit networking to each of your desktops tells me you have your head firmly up your ass anyway. I suggest this post should not be on Slashdot, because it is incompetent. Now, if you had asked how to do capacity planning, that would be an entirely valid subject of general interest to everyone.

And this would be an improvement how?... (4, Insightful)

maggard (5579) | more than 8 years ago | (#15924868)

So: a lot of expensive desktops emulating, um, pretty much themselves, using funky, somewhat pricey software, running substantial images pulled off expensive servers over an expensive network (because, gigabit or not, a building full of folks starting up in the morning is gonna hammer you). Then comes the challenge of managing all of those funky images, reconciling the oddities of an emulated environment, etc.

Could you make it work? Sure. But I gotta wonder if it'd be worth it.

Is it gonna be any better than a well-managed native environment? Or going with Citrix clients? Or Linux/MacOS/terminal boxes (choose your poison) instead of MS Windows?

I hear your pain; I just think you're substituting for a known set of problems a more expensive, more complex, more fragile, baroquely elaborate, well, more-of-the-same.

It doesn't sound like much of an improvement really, just new and more complex failure modes, at extra cost.

Though, I guess, if you're looking for a new, challenging, and complex environment this would be it; just take your current one and abstract it another level. I wouldn't want to be the one footing the bill, or trying to rely on any of it, but at least it'd be something different.

Re:And this would be an improvement how?... (1, Informative)

GreatDrok (684119) | more than 8 years ago | (#15925044)

We have bought a number of quad opteron machines recently because we do a lot of background number crunching and they need to run Linux. However, everyone has also been using laptops for Windows software. At my suggestion we have been configuring VMware images of XP Pro with Office for each user and installing vmware-player on each of these Linux workstations.

We have a Linux server running Samba for roaming profiles for the current Windows laptops, and this works OK in that if a laptop dies the user still has all their configuration stored on the server. But unless the replacement machine is configured exactly like the old one, the roaming profile doesn't quite work and there is a bit of fiddling (the users have various needs for software beyond just the basics, so machines often do differ).

With the VM setup the users are able to use their image on any machine (shortly even on the Macs) and it is theirs regardless so the roaming profile works well too. This also means that Windows only uses up a small part of their workstation so we can gang the quads together into a cluster and do some serious work. The best part is that each night we do an rsync of the home directories (to another server and external drives to be stored in a firesafe) which also contains their VM and so if they screw up their Windows system we can just copy back the one from the day before and all is well. Far better than Windows Restore which isn't entirely able to put a machine back into a previous state.

Finally, the price of all this destroys any other solution I can think of for running Windows apps in a largely Linux environment. The player is free, the Windows and Office licences were already bought, Linux is free, and we have a 40-processor Opteron cluster available that effectively cost us nothing too, because we needed to put in desktops to replace the laptops that some idiot thought would be a good idea when the company first started. Every user has a local vmplayer on their Linux machine. They are getting dual 20" monitors, which beats a 15" laptop with a 17" monitor attached (the laptops can't drive anything better), so they can run Windows on one monitor and have Linux on the other.

In the current situation they were all running lots of Linux apps over VNC to our few available Linux machines, plus lots of terminals (Cygwin or PuTTY), but had Windows because there is still a perception that we need Office, mostly PowerPoint, although I have made sure they all have OpenOffice. They were crying out for more compute power as the company grew, so for not much money I was able to buy 10 quad-Opteron workstations to give them that power. Dual monitors don't cost much now either, and vmplayer gives them Windows on one screen and Linux on the other, or Linux on both if they don't need to be running Windows apps. They still have the laptops for presentations and mobile use, but they don't need to use them every day, which should prolong their lives, so there is a saving there too. What's not to like?

Check out emulab (0)

Anonymous Coward | more than 8 years ago | (#15924870)

Emulab has the ability to dynamically load images over the network...uses a multicast protocol as well in order to make the pushes more efficient. Full loads of 50-100 nodes in under 10 minutes.

Been there, Done that. Can't say I liked it. (1)

Loopy (41728) | more than 8 years ago | (#15924872)

I've done this for a major PC OEM and for a couple of smaller tech shops. The single biggest complaint everyone has is that the performance is abysmal. When people are used to having onboard AGP/PCI-E graphics, plenty of RAM, and snappy hard drives, putting them on remote storage or (/shriek) thin clients is just about guaranteed to piss off anyone not doing data entry in a simple spreadsheet.

On the other hand, it serves as a roundabout method for keeping people from doing things like downloading games and movies, as the thin clients and such will usually only support basic 2D rendering at anything resembling acceptable speeds.

Something similar already exists (1)

Markopolis (554320) | more than 8 years ago | (#15924873)

The company I work for, Applianz [applianz.com], has been doing something very similar for several years. Applianz creates network appliances for large commercial software companies using a technique where every user runs on a separate VM, including the server. Instead of downloading the whole VM to each user, the system connects them via thin client, but the idea of one disposable VM per user is the same. At least for our application it works extremely well, and it allows users' virtual PCs to be disposed of and recreated at will, so users have a perfect experience every time they use the system.

Re:Something similar already exists (1)

captainspic (995952) | more than 8 years ago | (#15924944)

Sounds pretty interesting; will your solution work with non-Windows OSes?

Re:Something similar already exists (1)

Markopolis (554320) | more than 8 years ago | (#15924961)

Applianz just recently released a beta client for Macs with Tiger. It automatically connects Mac users to a Windows VM running the commercial software application in the network appliance and maps across their GUI, printing and file exports.

Solution to the wrong problem. (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15924880)

I personally think your existing setup was not well thought out and planned, and you are now looking for a bandaid.

I guess your HAL problems are the major issue. You CAN overcome 95% of those issues with the MS deployment configuration tools and ghosting (here [microsoft.com] and here [microsoft.com] are a start). It takes some engineering commitment to get that up and going, but once the framework is in place, the mini-setup should not be a problem across different hardware. I really do think it is worth the initial time and effort for something like this.

Considering my above statements...
I have worked at many places, and the ones with good back-end engineering are much better off in the long run. I am not trying to knock anyone down here, but honestly, if your facility is run by tier-1 technicians, you get what you have now. Imagine going through an upgrade or service pack release. Some companies can perform those on 500 PCs in a single night without ever actually visiting a PC; some spend weeks doing one at a time. Unfortunately, the latter of the two is the nature of the business when "support" is contracted out and someone doing engineering is nowhere to be found. The tools are freely available from MS and third parties to make all of your various PCs pretty much act as one.

Back in school... (3, Insightful)

SvnLyrBrto (62138) | more than 8 years ago | (#15924884)

They just used NIS and NFS, and the net effect was pretty much exactly what you describe: sit down at any machine, log in, and your environment loads exactly the way you left it on the last machine; everything's safely backed up at the server end; the client machines are pretty much disposable and interchangeable; and so on. The only difference is that you're not farting around with virtual machines, i.e. you're not quite as "cutting edge". But on the desktops themselves, don't you want a proven system? So why wouldn't you just do the same thing, and use said proven, if something of a pain to administer, system? A sketch of the classic recipe follows.
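Roughly, and assuming a NIS domain called "campus" with /export/home exported over NFS from a host named nisserver (all names invented):

    # /etc/yp.conf on each client -- bind to the NIS server for passwd/group maps
    domain campus server nisserver.example.com

    # /etc/fstab on each client -- everyone's home directory follows them around
    nisserver.example.com:/export/home  /home  nfs  rw,hard,intr  0 0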

As an alternative to NIS, NetInfo does much the same thing, only it wasn't designed by people quite as sadistic as NIS's. You'd still be using NFS, though...

cya,
john

Three different takes on this (4, Informative)

prisoner-of-enigma (535770) | more than 8 years ago | (#15924886)

First off, I don't think VM'ing your desktops is the answer. Current VMs really dumb down the hardware: you lose 3D and sound, and most of them run a bit slower than native (some quite a bit slower). Couple that with the size of most VM images (my Vista image is about 12GB) and you're really looking at a poor solution.

Here's what you should be thinking about:

- Get some kind of desktop management suite like Altiris. You can push software deployments easily, and it's very easy to lock machines down to the point where users can't fsck them up. I've consulted for companies that do this with hundreds of desktops and it's a very robust, reliable system.

- Go with a thin client setup like Citrix or Terminal Server. Users run nothing on their local hardware. Instead, everything runs on the big server. Downsides are similar to VM's (thin clients are notorious for very lightweight support for anything but the most basic sound and graphics) but you are at least spared the massive network thrashing of hundreds of users logging on and pulling down VM images at 8AM every morning.

- If it's users messing up machines that you're worried about, you might want to consider a solution by Clearcube. They take away everything except the keyboard, mouse, and monitor. The guts of the PC reside in a server rack in what is essentially a PC on a blade. The blades are load balanced and redundant, so swapping them out is a breeze. And users *can't* load software on them because there's no USB ports, no floppy drive...nothing! Unless you allow them to download it from the Internet, *nothing* is going to get on those machines if you don't want it to.

VMs make sense for server consolidation. I don't think they've yet gotten to the point where desktops run on them for protection or reliability. There are too many other solutions that work better and have fewer downsides. The problem here isn't Windows per se; it's the fact that your workstations aren't locked down properly to prevent your users from doing stupid stuff in the first place. Fix that, and suddenly you'll find a Windows workstation environment isn't the hassle it once was.

Only advantage over citrix (1)

terminal.dk (102718) | more than 8 years ago | (#15924891)

The only advantage over Citrix is that each user can be allowed to screw up his daily copy of the VMware machine.
Otherwise Citrix and thin clients are probably better. Well, thin clients would always be better, including for this.
Then you just revert to a known-good snapshot for the user every day. No copying.

Patching would be difficult, as you would have to patch x VMs rather than x/30 Citrix servers.

Why do they need their own images? (1)

failure-man (870605) | more than 8 years ago | (#15924896)

Is there some special reason the users need to have their own XP image? If not, wouldn't it be easier to just force them to save their work on a network share and ghost the machines back to the stock image every night?

Independet Software Vendors wouldn't talk to you (3, Insightful)

mi (197448) | more than 8 years ago | (#15924898)

An "unsupported configuration"...

I had done some analysis on this recently (1)

renjipanicker (697704) | more than 8 years ago | (#15924903)

On a desktop machine (single proc, 1GB, etc., cost ~3000 USD), our product would build in 9.5 minutes, while on a server-class machine (dual proc, 2GB, etc., cost ~800 USD) with 5 builds going simultaneously, each build would complete in about 4 minutes. So you may want to consider one server machine for every 5 developers (or users), with each developer having a thin terminal running an RDP client. This would have been the most viable solution for us, except we had already invested in desktop machines.

The Collective (0)

Anonymous Coward | more than 8 years ago | (#15924922)

Some folks at Stanford do this. They call their system the Collective [stanford.edu] . They use VMWare and support Windows VMs and Linux VMs, depending on the app that's needed, at least according to the paper [stanford.edu] .

You're not qualified (1, Insightful)

syousef (465911) | more than 8 years ago | (#15924924)

That you're asking for advice on /. suggests you're not qualified.

Several ways to fix this and get qualified:
1) Trial it on a small number of less-important users. Get feedback, and make sure you listen to it. Allow a decent period of time for the trial so initial teething problems can be sorted out. Allocate sufficient resources to deal with early issues. This is the hard way to learn: through experience.

2) Hire expertise: someone who's done this before, to implement and advise. Make sure it's not a vendor, since you won't know you're being screwed till it's too late.

3) Get some training.

DO NOT try to implement this for a large number of users in one hit. You're a fool if you do.

Amazing (1, Informative)

Anonymous Coward | more than 8 years ago | (#15924956)

You have no idea who you are talking to, yet you judge the individual. They asked a simple question of a bunch of geeks to see if others have done it. Nothing more. And to be totally honest, I cannot think of a better site to obtain useful info (mixed with absolutely worthless info, FUD, and condemnation).

Yet you throw out basically worthless info. I am sure they will be trialing it. But if others have done it and offer useful info, they can also check out paths to take (or avoid).

To E1ven: Please try it out on a couple of different set-ups and let us know. It would be useful to see how it works with Xen (combined with qemu for the windows stuff).

Amazing my left foot (-1, Flamebait)

syousef (465911) | more than 8 years ago | (#15924971)

Thank you for posting as AC. Shows you can back up what you say.

You do NOT experiment on this stuff with a large corporate user base. This is not a Uni lab.

The equivalent question in an automotive board would be to say: "I've done a lot of work with my car's electrical system but I've never replaced an engine. Has anyone else done it?".

Re:Amazing my left foot (1, Informative)

Anonymous Coward | more than 8 years ago | (#15925020)

He never said that he would experiment on a large corporate base. He is exploring options, nothing more. BTW, it is companies that take chances that grow fast. For example, it was Bob Crandall at AMR who pushed the Sabre system into a large number of innovative ideas; once Carter deregulated the industry, AMR was able to surpass the other airlines in size. Other companies that push innovation, such as Google and Amazon, are likewise able to grow quickly. A better example is Walmart. Sam Walton was very conservative WRT how the company was run, but the one place he spent money was technology (even though he did not understand it). In fact, when other companies were pushing big mainframes, he pushed Walmart onto Windows. Now that others are pushing into Windows, Walmart is quietly pushing onto Linux. By the time the industry realizes this and starts the move, Walmart's systems will be paid for, and their costs will be a fraction of the others'.

In contrast, it is when a company locks everything down and is afraid to move forward with new ideas that it dies (or nearly dies). When people start saying that the company should not change things, it is in a death spiral.

My primary desktop IS a VM... (1)

thzinc (679235) | more than 8 years ago | (#15924930)

I've had a few issues in getting used to using a VM as my primary desktop, but I've found it's a very elegant solution to portability and hardware upgrades. I don't need to worry about "upgrading" computers, synchronizing data between my desktop and laptop, or backing up my entire system state.

I use VMware Server on my Fedora Core 5 desktop and my Windows XP laptop, with a USB 2.0 hard drive containing my VM image. I've found it works well for most things I do, including development, watching videos, working in Photoshop, etc. Backups are quite nice too: a quick "tar cz foldername | split -b 1073741824 - foldername.date.tar.gz." away. VMware's products are quite mature; I only had a few issues during the VMware Server beta, and the development team resolved them right away.
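For completeness, restoring from those 1GB chunks is the mirror image; "foldername" matches the split prefix above:

    # Reassemble the pieces in order and unpack:
    cat foldername.date.tar.gz.* | tar xzf -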

IT WORKS GREAT (1)

kemo_by_the_kilo (971543) | more than 8 years ago | (#15924934)

Winblows Terminal Services with an rdesktop live-boot, or Wyse terminals... although a CD-ROM and a mini-ITX box is cheaper: no hard drive, just a boot CD running rdesktop. IMO.

Why not? But... (1)

Franso6 (976942) | more than 8 years ago | (#15924939)

Basically, downloading the VM every time would be tedious (even with good servers and good bandwidth) and would in any case be unfeasible for mobile users.
Citrix has the advantage of a thin client but numerous disadvantages from a user-experience point of view (no individual environment, you have to be online...).
Some of the 'physical' problems you'll meet when running VMs will be the lack of support for accelerated graphics, I'd guess, memory needs that usually exceed the initial estimate, exotic drivers and features (laptop sleep, wireless cards...), and (perhaps) time synchronization.
You'll still have to maintain your host OS for every piece of hardware, and that might be non-trivial even with Linux (again, think of laptops).
A great advantage of VMs on the desktop is that you can offer several VMs to your users (different ones for Internet access and office work; a 'personal' workstation and a 'corporate' workstation; for developers, a 'development' workstation and a 'production' one; you could even have an 'internet-access' workstation that you wipe every day and a 'production' one with no Internet access at all, possibly on different VLANs using dot1q on the host -- see the sketch below) without having to reinstall, reboot, or add machines. Just make sure you negotiate license schemes for that kind of set-up.
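As a rough sketch of that dot1q idea, assuming a Linux host with the 8021q module and bridged VMs (interface names, VLAN IDs, and bridge names are all invented):

    # tag two VLANs on the physical NIC
    modprobe 8021q
    vconfig add eth0 10        # eth0.10 = 'corporate' VLAN
    vconfig add eth0 20        # eth0.20 = 'internet-only' VLAN
    ifconfig eth0.10 up
    ifconfig eth0.20 up

    # one bridge per VLAN; attach each VM's virtual NIC to the right bridge
    brctl addbr br-corp && brctl addif br-corp eth0.10
    brctl addbr br-inet && brctl addif br-inet eth0.20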
Your 'host' OS should provide a GUI for choosing either the currently installed image(s) or downloading a 'fresh' one from the server. Integrating that kind of flexibility into AD is not easy to achieve, but with sysprep and some clever scripting it can be possible.
User data management can also be a problem with 'disposable' VMs. I'd guess that offline folders (or whatever it's called today) can be a partial solution, but you really want to make sure it works as advertised before deploying it on a large scale.
Also think about maintaining the software (security updates...) on your VMs. They may be difficult to maintain, since you can't control whether they're powered on -- or even whether they still exist...
I think it's feasible (I've actually been running something like this in my test environment for a while, but it was a very small network with only a dozen users or so, not doing actual business), but expect it to be challenging to plan, prepare and roll out.

just my $.02

Smells like X (2, Insightful)

Baloo Ursidae (29355) | more than 8 years ago | (#15924943)

Sounds like you're trying to solve the same problem X11 [wikipedia.org] is designed to solve. Have you looked into getting a bunch of X terminals and one super-powerful machine?
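For reference, the classic way to do that is XDMCP: enable it on the big machine's display manager and have each terminal run something like the following (hostname invented):

    # start a local X server whose login session lives on 'bigserver'
    X :0 -query bigserver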

Storage for these? (1)

lavalyn (649886) | more than 8 years ago | (#15924962)

What size are you expecting each image to be? Windows XP isn't exactly light on storage space, and its applications can take another gigabyte without difficulty. Preloading a few gigabytes does take a bit of time; I suppose after that you'd use Windows file sharing.

I think the previous comments about Citrix and the like are a better solution. Terminal Services, while not exactly cheap, may also work well for you. For a Unix environment, XDMCP is feasible in many circumstances. But as far as smart clients go, I'd be less than enthusiastic about remote VMware images. For one, you'd still need to run (say) a Linux host operating system underneath, which brings many of the same difficulties you'd see with Windows.

we did this in the past... (1, Insightful)

TwoEdge77 (92704) | more than 8 years ago | (#15924986)

It was called using a mainframe and 3270 terminals. Very reliable, easily updated.

I can't speak for Linux... (1)

HomerNet (146137) | more than 8 years ago | (#15924996)

...or Mac-type VMs, but as for Windows... don't. It's a massive nightmare. Any changes -- and I mean any changes -- to the base configuration of the computer the user is sitting at result in unforeseeable and often nightmarish problems with the virtual machine. It's especially bad with proprietary software, which may or may not have been designed to be flexible enough to handle virtualization. Then there are the network problems, which are too numerous to really go into.

Just don't do Windows on a VM. It sucks.

What are the typical applications being used? (0)

Anonymous Coward | more than 8 years ago | (#15925001)

I didn't see anyone ask what applications you are trying to run. If it's just typical office applications without any custom software, then you could use Puppy Linux. I run it on a 400 MHz WinTerm with 256 MB RAM off a 2 GB flash card in a CF/IDE adapter; it boots in less than a minute, supports pretty much all current hardware, installs on an ancient 1 GB hard drive with plenty of free space, and can log in to Windows file servers... you can even put it on a 256 MB USB thumb drive for a modern computer, boot from that, and still get on the network. Oh, and did I mention it looks almost exactly like Windows?

Anybody else agree? Why kill your network? I don't care if it's 10 Gb fiber; there are better solutions than running everything over the network. If you really need to run Windows apps, you still have the flexibility to run them through a VM or a program called Wine; about the only things that don't run very well are games (e.g., Elder Scrolls: Oblivion), but that's not really working anyway. So what do you think?
Just remember there are thousands of people on here who will give you excellent recommendations; all you have to do is ask good questions.

how about vm like, except not (0)

Anonymous Coward | more than 8 years ago | (#15925015)

Why not use something akin to a custom Knoppix/Ubuntu/SUSE live CD, with Samba shares and LDAP logins? You just have to push out new CD images periodically for maintenance/upgrades. Also, as someone stated above, there are thin clients. CDs would be the ghetto solution; thin clients are the more expensive but aesthetically pleasing and easily remote-manageable one.

In addition, I think IBM is supposed to have a completely web-based collaboration/email/office-suite type thingy, so you can centralize that stuff too, separate from the clients (I think it's Java-based, so it works on Macs, Windows, Linux, etc.).

Isn't it funny that people are encountering the same problems that faced the computer industry 30 years ago... leading to the SAME solutions: virtual machines (yeah, 30+ years old) and server/terminal setups... =P

Silly (0)

Anonymous Coward | more than 8 years ago | (#15925018)

They'd be able to screw up their image just as easily as a native PC. Just use roaming profiles/RDP like everybody else. You don't have to squeeze every bit of new technology into your setup.

Does it have to be Windows? (3, Interesting)

SanityInAnarchy (655584) | more than 8 years ago | (#15925036)

Hmm. Your main issue is going to be switching machines. I see three ways of doing this:

Some virtual machines let you suspend to a file. This is nice if you must run Windows or some other uncooperative OS, but it still means suspending to a file, which will take some time. As for the disk, that part is fairly trivial -- your host OS would be Linux mounting NFS, so your disk image is just a file on NFS.
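With VMware Server/GSX, for instance, this can be scripted with the bundled vmware-cmd tool; a minimal sketch, assuming the .vmx and disks live on a shared NFS export (paths invented):

    # on the machine the user is leaving:
    vmware-cmd /nfs/vmimages/alice/winxp.vmx suspend

    # on whichever machine they sit down at next:
    vmware-cmd /nfs/vmimages/alice/winxp.vmx start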

Issue to watch for here: local cache. I don't care how fast your gigabit is, that server is going to feel some stress. I tried setting up gigabit just for file sharing and it was never as fast as it should have been -- yes, I was using jumbo frames; yes, it was a crossover cable; yes, it was Cat 6. And even if that part is flawless, there's the server at the other end. You probably want good local caching, probably local disk caching. InterMezzo would have been good, but that project has pretty much died. You might try simply throwing tons of RAM at the problem, or cachefs (I never got it working, but maybe...), or maybe one of the FUSE things.
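A minimal sketch of the NFS side, assuming a Linux host (server name, export, and transfer sizes are illustrative -- tune rsize/wsize for your network):

    # mount the image store with large transfer sizes over gigabit
    mount -t nfs -o rsize=32768,wsize=32768,hard,intr \
        fileserver:/vmimages /var/lib/vmimages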

Second way: don't use VMs. VMs will never be as fast as a native OS, but a native OS can still work roughly the way the VM image does above, if your hardware is identical. With Linux and Suspend2, you can suspend to and resume from pretty much anything you can see as a block or swap device. All of the above caching issues still apply; just run it as a network OS, with one range of IPs for machines still booting and logging in and another for fully functional machines. When a user logs in, the bootstrap OS tells itself to resume the OS image from the network.

You could also do this with Windows by copying a local disk image around -- after you hibernate, boot a small Linux that rsyncs the whole disk across the network, including hiberfil.sys. Everything besides the OS itself would be stored on the network already anyway (Samba).
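Strictly speaking, stock rsync won't read a raw block device, so a minimal sketch of this might just stream the partition with dd instead (hostnames, devices, and paths are invented):

    # on the just-hibernated machine, booted into a small Linux:
    dd if=/dev/sda1 bs=4M | gzip -c | ssh imagestore 'cat > /images/box1.sda1.gz'

    # restoring onto another (identical) box:
    ssh imagestore 'cat /images/box1.sda1.gz' | gunzip -c | dd of=/dev/sda1 bs=4M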

I don't know if this will work -- after all, no hardware is truly identical. But it may be worth a shot.

Advantage: Both Linux and Windows XP know to trim the image a bit on suspend, so it won't be a whole memory image, just relevant stuff. Truly native speed.

Disadvantage: If I'm wrong, then you won't be able to properly resume on a different box.

Finally, you could stick to software which supports saving sessions and resuming them. I know Gnome at least, and maybe KDE, had this idea of saving your session when you log out -- and telling all applications to do so -- so that when you log back in after a fresh boot, it's like resuming from a hibernate.

Advantages: Fastest and most space-efficient out of all of them. Least administrative overhead -- in the event of a crash, there isn't nearly as much chance for bad stuff to happen. Easily works cross-platform, native speed on any supported platform. Simplest to implement, in theory.

Disadvantage: Not really implemented. 99% of all software may remember useless things like window size and position, but very few programs actually store a session. If you mostly roll your own software, this may be acceptable.

And of course, you could always do web apps, but those won't be anywhere near native speed -- yet.

All approaches share one flaw, though -- bad things happen when a box goes down. With a VM image (or a suspend image), if you crash, you'll obviously want to restore from a working image -- but what about the files? If they're on a fileserver, does your working image properly reconnect to the fileserver, or does it assume it's still connected (thus having weird things cached)? The third option (saving sessions) is the safest here, because in the event of a crash, programs behave the same way they would on a single-user desktop. But you still lose your session.

What others are suggesting -- various terminal-server options -- is much slower, but it also means that as long as the application server is up, so is your session. If you crash, you can switch to another machine and literally be exactly where you were. For this to work with any of the above approaches, you would need to mirror the memory of the running VM to the server as it runs. DRBD [drbd.org] does this for disk, but I don't know of anything that will mirror RAM across a network, and I imagine such a beast would be much slower than a terminal solution. If you're using custom apps, of course, you can do something like this -- that's essentially what MySQL replication is for: saving all data and session information to another box so that if the primary one dies, the secondary can do an IP takeover and you won't notice a thing. Combine that with DRBD for static content and config files, and you can replicate a whole webserver.
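For what it's worth, the DRBD half of that is just a resource definition mirrored between two hosts; a minimal drbd.conf sketch (hostnames, devices, and addresses are invented):

    resource r0 {
      protocol C;                  # synchronous replication
      on primary-box {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on standby-box {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }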

But I really can't think of a practical way to do that kind of seamless switchover for programs not written for it, other than faking it with terminals. You can even use NAS (the Network Audio System) for audio, so the real bottleneck is serious graphics -- and I don't mean Photoshop or the Gimp, I mean F.E.A.R. or Quake 4.

You'll still want either software suspend or a virtual machine on the server side -- since you now have all your eggs in one basket, that basket had better be armor-plated. That means ECC RAM, hot-swappable if you can, hot-swappable SATA or SCSI RAID, checksumming all around, and something like Nagios [nagios.org] to page you when anything fails. Even then, backups, backups, backups. And backups. Ideally, in the event of a catastrophic failure, you can bring up a new server in less than 10 minutes and give everyone their session from yesterday and a backup from an hour ago -- could be a backup server running DRBD which takes periodic snapshots.

Snapshotting could be an entire image (saves space), or could be something done with dm_snapshot, which is voodoo, but works anyway. Here's one way to do it: set up your main disk image as a partition, then have at least two more, smaller (but still huge) partitions for COW data. Map your main partition to a snapshot-origin device. For each COW device, set it up as a snapshot, then alternate nuking one and setting it up again. This way you always have at least one reasonably coherent snapshot from an hour ago (if your cron job runs hourly), and usually two, in addition to a recent (as in, NOW-recent) copy of the current disk image, useful if you can cut off DRBD before a catastrophic failure is replicated. But even if that's useless, DRBD is a hell of a lot faster and more accurate than an rsync cron job.
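The voodoo, roughly, for anyone who wants to try (device names are invented; sizes are in 512-byte sectors; "P 64" means a persistent snapshot with 64-sector chunks):

    ORIGIN=/dev/sdb1                       # main disk image partition
    SECTORS=$(blockdev --getsz $ORIGIN)

    # expose the origin through device-mapper
    # (all writes to the image must go through /dev/mapper/vm-origin)
    echo "0 $SECTORS snapshot-origin $ORIGIN" | dmsetup create vm-origin

    # hourly cron job: /dev/sdb2 and /dev/sdb3 alternate as COW space;
    # tear down the older snapshot and recreate it against fresh COW
    dmsetup remove vm-snap-a 2>/dev/null
    echo "0 $SECTORS snapshot $ORIGIN /dev/sdb2 P 64" | dmsetup create vm-snap-a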

Of course, create as many snapshots as you like, if you have the space.

Anyway, I've probably given you more than enough ideas to figure something out on your own.

Have you asked vmware? (1)

Servo (9177) | more than 8 years ago | (#15925039)

VMware is still a relatively unproven technology firm. Since they are pushing the virtualized desktop environment you're interested in, they should be able to provide references. VM technology has been around for a long time, but desktop-side VMs are something I'd be cautious of until the vendor can demonstrate that they actually work in a real-world environment.

That being said, I think the business case could be made. People have been trying to reach the same result by different methods for a while, but none have been overly successful. Citrix has come the closest, but in my experience Citrix is only good for certain tasks, not the entire desktop environment. There are other thin-client solutions and less costly alternatives out there, though, so VMware desktops may not be as practical a solution as the coolness factor would make them seem.

Video session on Citrix, VMWare and XP (1)

osij2is (995957) | more than 8 years ago | (#15925041)

Brian Madden (brianmadden.com) is an excellent source of info on Citrix and virtualization. Yesterday he published a video with Brian Oglesby, who's done a lot with ESX and the virtualization techniques you're looking at.

Watch the video here [brianmadden.com] (http://www.brianmadden.com/content/content.asp?id=620). He shows a lot of benchmarks and gives a great sense of how to use the resources you've got, or what to do if you're building from scratch. Basically, Windows XP Pro VMs on ESX Server do NOT scale well in comparison to Terminal Server or Citrix sessions. I'd go into further detail, but the video explains it all.

Problems at scale require Solutions at scale! (0, Troll)

ElitistWhiner (79961) | more than 8 years ago | (#15925051)

You have a large installed base. Shit's hittin' the fan.

Steve Jobs has this campaign where he wants PC users to switch to Apple hardware. Talk to Steve about a corporate-sponsored PC switch to his Mac OS X on Intel running Windows. Your business case might dovetail with Apple's marketing strategy, giving your shop a soft landing on a solution to the problem. A win-win.

Minimize risk; provide a long-term solution.

Virtualized desktops gives you more than that. (1, Interesting)

joewhaley (264488) | more than 8 years ago | (#15925055)

Running your desktops on virtual machines gives you a lot more than just centralized control. As everyone knows, all problems in computer science can be solved with an extra level of indirection. Once your machine is virtualized, desktop management becomes a whole lot easier.
  • Mobility. Your "machine" is just a bucket of bits. Once your "machine" is virtualized, you are no longer tied down to a single piece of hardware. You can sit anywhere and have your complete environment. Having a hardware issue? No problem, just walk up to another machine and start using it where you left off.

  • Isolation. Once everything is wrapped up in a virtualized sandbox, many security problems become a lot easier. You can easily isolate and monitor what the guest is doing, and it's darn near impossible [slashdot.org] for even malicious software to cause serious damage. User screwed up the configuration or got infected by spyware? Just roll back to an earlier VM snapshot. Better yet, have them boot into a pristine image every time (see the .vmx sketch after this list). Thus, the solution to just about everything is a power-cycle.

  • Easy management. Running on a virtual machine gives you a standard platform, so you can keep a single golden image instead of the N different images for each piece of hardware. Just keep that image up to date, and periodically push new versions out to users. User having trouble? You can get an exact replica of their whole environment for debugging, without the user having to do anything.
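One common way to get that boot-into-a-pristine-image behavior with VMware-style guests is a non-persistent disk mode in the guest's .vmx config; a minimal sketch (disk file name is illustrative):

    # fragment of a guest's .vmx -- all disk writes are discarded at power-off,
    # so every boot starts from the golden image
    scsi0:0.present  = "TRUE"
    scsi0:0.fileName = "golden-image.vmdk"
    scsi0:0.mode     = "independent-nonpersistent"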

You can get some of these benefits with thin clients and/or Citrix, but those have their own share of problems. Thin clients have lots of issues, the most obvious being that if the network goes down, you are hosed; working on a laptop and/or with an intermittent connection is not possible. Besides, nowadays it's pointless: decent hardware is so cheap that it no longer makes sense to strip down the client side. In fact, desktop PCs often turn out to be *cheaper* than thin clients. (God, I love economies of scale...)


Disclaimer: I work at moka5 [moka5.com] , a startup company out of Stanford that does desktop PC virtualization. We have a beta product called "LivePC Engine" that adds a demand-paging layer to VMware, so you can run your PC environment from anywhere (without having to download the whole thing), share it with other machines, and subscribe to other people's shared LivePCs and automatically get updates as they are posted.

VMware ACE (3, Informative)

BlueLines (24753) | more than 8 years ago | (#15925059)

http://www.vmware.com/products/ace/ [vmware.com]

"With VMware ACE, security administrators package an IT-managed PC within a secured virtual machine and deploy it to an unmanaged physical PC. Once installed, VMware ACE offers complete control of the hardware configuration and networking capabilities of an unmanaged PC, transforming it into an IT-compliant PC endpoint."