
Virtualizing Workstations For Common Hardware?

kdawson posted more than 4 years ago | from the rock-to-stand-on dept.

Ubuntu

An anonymous reader writes "We have approximately 20 workstations, all with different hardware specs. Every workstation has two monitors and generally runs either Ubuntu or Windows. I had started using Clonezilla to copy the installs so we could deploy new workstations quickly and easily when we have hardware failures or the like, but I'm struggling with Windows requiring new drivers to be installed for each new hardware configuration. Obviously we could boot into Ubuntu and then load a Windows virtual machine on top, but I'd prefer not to have the added load of a full GUI underneath Windows — we want the maximum performance possible. And I don't think the multi-monitor support would work. Is it possible to have a very basic virtual machine layer beneath the OS to provide hardware consistency whilst still allowing multi-monitor support? Does anyone have experience with a technique like this?"


Isn't that called an... (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31892022)

Hypervisor?

Re:Isn't that called an... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31892050)

No, it's called *a* hypervisor. "An" happens when the next word sounds like it starts with a vowel.

Re:Isn't that called an... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31892108)

Except when the next word starts with an 'h'; then the 'h' becomes silent and you use "an".

Re:Isn't that called an... (1, Informative)

Anonymous Coward | more than 4 years ago | (#31892124)

Wow. It is only 'an' when the 'h' is actually silent, such as in 'an hour'. When the 'h' makes an actual sound, as in 'hypervisor', the word 'a' is used instead.

Re:Isn't that called an... (2, Funny)

Antidamage (1506489) | more than 4 years ago | (#31892186)

It's much more professional to call it "an iypervisor". Or something like that. Y is nature's spare vowel.

Re:Isn't that called an... (2, Funny)

FatdogHaiku (978357) | more than 4 years ago | (#31892240)

So, all this time I've been pronouncing it "yipervisor" and people were snickering at THAT... I guess it's a relief really... I thought it was because of my mohawk/mullet.

I like to think of it as "The Mohlet".

Re:Isn't that called an... (0, Offtopic)

tumnasgt (1350615) | more than 4 years ago | (#31892244)

The use of 'an' with 'h' is very common, though incorrect. From what I understand, it is because some accents do not pronounce the 'h' in 'historic', so 'a historic event' is (correctly, for those speakers) 'an (h)istoric event'. Then people who do pronounce the 'h' started to use 'an' with 'historic' (incorrectly), and it spread to other words like a disease.

Using 'an' with 'historic' while pronouncing the 'h' is almost acceptable; anyone using it with any other word where the 'h' is pronounced should be banned from using the English language.

Re:Isn't that called an... (1, Informative)

PenguSven (988769) | more than 4 years ago | (#31892130)

> Except when the next word starts with an 'h'; then the 'h' becomes silent and you use "an".

Only if you're an American. The rest of the English-speaking world manages to pronounce its H's fine.
Let's try it together: H-E-R-B-S.

Re:Isn't that called an... (1)

hedwards (940851) | more than 4 years ago | (#31892172)

I thought it was H-E-R-B because there's a fucking H in it.

Re:Isn't that called an... (2, Insightful)

Trepidity (597) | more than 4 years ago | (#31892398)

Initial 'h' is actually dropped considerably more frequently in UK English than US English; e.g. "an 'istoric event" in British but "a historic event" in American.

Re:Isn't that called an... (1)

PenguSven (988769) | more than 4 years ago | (#31892468)

I've never once heard the word historic pronounced as "istoric", by anyone, English or otherwise.

Re:Isn't that called an... (1)

Z34107 (925136) | more than 4 years ago | (#31892514)

Here to bring the sample size to n=2.

I live in the American midwest, and I say "an 'istoric event." But I pronounce the "h" in "that's historic" or "historical."

Conclusion: I'm a freak.

Re:Isn't that called an... (1)

BobPaul (710574) | more than 4 years ago | (#31892500)

I thought it was the British that said 'erb. I've always said herb...

Re:Isn't that called an... (2, Insightful)

camperdave (969942) | more than 4 years ago | (#31892532)

What gets me is the song Henry the Eighth [youtube.com]

H - E - N - R- Y
Ennery! (Ennery!)
Ennery! (Ennery!)
Ennery the eighth, I am, I am.
Ennery the eighth, I am.

Re:Isn't that called an... (2, Funny)

Anonymous Coward | more than 4 years ago | (#31892496)

Since America owns the internet, it's assumed by default that Hs are NOT silent. If the UK ever makes a 21st century contribution beyond the destruction of their own car brands, we'll consider defaulting Hs to silent.

Re:Isn't that called an... (1)

BitZtream (692029) | more than 4 years ago | (#31892374)

Not when it merely *sounds* like it starts with a vowel: 'an' is used exclusively when the next word starts with a vowel; 'a' is used when it doesn't.

Any deviation from this isn't 'proper', regardless of how common it may be.

Yes (1, Interesting)

solid_liq (720160) | more than 4 years ago | (#31892028)

It's called Norton Ghost.

No, it's not (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31892088)

It's called FUCING YOUR MOTHER. Damn she knows what she's doing.

Trolls unite mothafackrel!

Re:No, it's not (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31892170)

THAT just made my day. Us mothafackrels are for sure unitin.

Re:No, it's not (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31892174)

Fackre? Jim Fackre? Hey.... I knew you in college. How's Midge?

Re:Yes (2, Interesting)

Anonymous Coward | more than 4 years ago | (#31892126)

A bit off-topic, but related to virtualization, so here's my question:

What's the best way for me to make a "snapshot" of an existing, functional Windows XP system, such that I can boot up (a copy of) this system at a later point in time?

Background: I have a computer running Windows XP, with a bevy of development tools (including databases, IDEs, build system, etc.) installed, involving loads of configuration. I have no current use for this environment, but for legacy purposes would like the option of firing it up in the future, should I need to do a demo or explain it to someone else.

I have no real experience with virtualization, but it sounds relevant / useful here. What I'm picturing is an "image" / snapshot of the system, which I can later run within a virtual machine in some other operating system. How can I do that? Or do you recommend a different approach?

Thanks!

Re:Yes (0, Informative)

Anonymous Coward | more than 4 years ago | (#31892152)

> A bit off-topic, but related to virtualization, so here's my question:
>
> What's the best way for me to make a "snapshot" of an existing, functional Windows XP system, such that I can boot up (a copy of) this system at a later point in time?
>
> Background: I have a computer running Windows XP, with a bevy of development tools (including databases, IDEs, build system, etc.) installed, involving loads of configuration. I have no current use for this environment, but for legacy purposes would like the option of firing it up in the future, should I need to do a demo or explain it to someone else.
>
> I have no real experience with virtualization, but it sounds relevant / useful here. What I'm picturing is an "image" / snapshot of the system, which I can later run within a virtual machine in some other operating system. How can I do that? Or do you recommend a different approach?
>
> Thanks!

I recommend not going with Microsoft. Since you went with Microsoft, good luck with XP in the future. Too bad you didn't pick an open system with no proprietary technologies and file formats. If you had, you'd easily be able to move your data to any more modern system. But you went with the monopolist and now you got the shaft. At this point the only thing you're good for is an example to others of what not to do. Have a nice day!

Re:Yes (4, Informative)

PenguSven (988769) | more than 4 years ago | (#31892164)

VMware has a tool to create an image from a "real" PC: http://www.vmware.com/products/converter/

Re:Yes (0)

Anonymous Coward | more than 4 years ago | (#31892228)

http://lifehacker.com/316355/create-a-backup-image-of-your-system-with-driveimage

I've actually used their previous guide and it worked well several times: http://lifehacker.com/software/geek-to-live/partition-and-image-your-hard-drive-with-the-system-rescue-cd-292972.php

OMG, it saved so much time on a briefly (2-4 month) unstable system.
Boot to Linux > overwrite volume with image > DONE!
---Instead of:---
Boot from Windows XP CD > install > install updates > install updates > ... etc. > install essential programs > done ._.
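
For reference, the "overwrite volume with image" step can be as simple as a dd pipeline run from the rescue environment. A minimal sketch, assuming a single Windows partition on /dev/sda1 and a mounted backup share (device and path names are hypothetical, and this is not the exact System Rescue CD procedure):

    # Save: image the Windows partition to a compressed file
    dd if=/dev/sda1 bs=4M | gzip -c > /mnt/backup/winxp-sda1.img.gz

    # Restore: overwrite the partition with the saved image
    gunzip -c /mnt/backup/winxp-sda1.img.gz | dd of=/dev/sda1 bs=4M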

Re:Yes (2, Interesting)

YrWrstNtmr (564987) | more than 4 years ago | (#31892534)

> What's the best way for me to make a "snapshot" of an existing, functional Windows XP system, such that I can boot up (a copy of) this system at a later point in time?

DriveImage XML [runtime.org]

Re:Yes (3, Interesting)

Z34107 (925136) | more than 4 years ago | (#31892566)

What you're looking for is ImageX. You can get it from the Windows AIK [microsoft.com] . (It says "Windows 7 AIK", but it will work on XP.)

Recipe for win:

  1. Create [mydigitallife.info] a Windows PE flash drive. This pretty much gets you a bootable Vista/7 kernel.
  2. Copy ImageX.exe from the WAIK onto the flash drive.
  3. Boot your computer from the flash drive, then create a .WIM image file:
     imagex /capture /compress fast c: z:\file_on_external.wim "description in quotes"

You can take that WIM image and re-apply it to your computer at a later date. Windows activation and all of your programs will be preserved. You can also mount WIM files like directories using imagex /mount.

However, you will not be able to take an XP install and move it to a system with different hardware. XP's drivers and HAL will throw a fit if you move it to a computer that's too different, although similar-enough hardware will "mostly work."

You can download and run Sysprep from Microsoft before you capture an image. It strips out some of the hardware and user-specific settings and returns the computer to XP's "mini setup" mode, where it will ask you for username/password/CD key/whatever. But even then, XP images are still very hardware bound; more often than not an image won't work until booting from an XP CD and doing a repair install.
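
To make the restore path concrete, here is a rough sketch of the capture-and-reapply flow described above (XP's sysprep flags vary slightly by deploy-tools version, and the drive letters and image index are assumptions):

    rem Before capture: generalize the install (XP's sysprep ships in SUPPORT\TOOLS\DEPLOY.CAB)
    sysprep -mini -reseal -quiet

    rem Later, from the WinPE flash drive: re-apply image 1 from the saved WIM onto C:
    imagex /apply z:\file_on_external.wim 1 c: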

Your legacy XP system. (2, Informative)

Anonymous Coward | more than 4 years ago | (#31892570)

Well, you probably should do a couple of things, possibly more, just to be safe / convenient for varying possible future use scenarios.

1) Make an image copy of the entire drive, and any others that are referenced by your configurations. Boot sector, partition table, C partition, other partitions / drives, the whole set. There is really no general substitute for having a copy of every single factor that could affect your ability to recreate the system exactly as it is if you need to do that on a physical machine or with some future set of virtualization tools. Since it is next to impossible to rebase / reconfigure applications that have configurations referring to paths under D:, E:, F:, DVD/CD drive as O:, whatever, you'll want to note all the mappings that could be relevant to making the applications work again and copy that data too.

2) Look at the free "disk2vhd" tool from Microsoft's Sysinternals site (see the command sketches after this list). It may help convert your physical C: partition into a VHD image which you could potentially boot with something like "Windows XP Mode", Microsoft Virtual PC, or Hyper-V. Read up on some physical-to-virtual scenarios using Virtual PC, Hyper-V, and XP Mode and see what is most likely to work for you. There are various good TechNet / Microsoft / MSDN / third-party FAQs and blog posts about the good and bad points of doing physical-to-virtual conversions like that with their various tools.

3) It is possible you could make some use of the Windows AIK or MDOP tools to help with your physical-to-virtual conversions. One thing that is commonly done before capturing an image from a physical machine is "sysprep /generalize", which takes out some of the machine-specific device drivers, licensing activation data, etc., so that the resultant image is more generically transportable to a different machine or VM. YMMV. The blogs / recipes online will guide you to the best options.

4) Check out VirtualBox, the free VM system from Sun/Oracle. Read their forums about physical-to-virtual capture scenarios. I'm often more impressed with the functionality of VirtualBox than with Microsoft's Virtual PC / XP Mode, so maybe it would be a better choice for you. Though the tools to do P2V conversions are kind of weak in both camps; nothing is truly a click-once automatic process.

5) There are probably some good ways to do physical-to-virtual conversions with a Linux host OS too; the qemu/kvm hypervisor is pretty effective at virtualizing XP in recent distributions like Fedora 13 beta or Ubuntu 10.04 beta 2, both of which are newly available, though qemu/kvm virtualization has been working well for years. openSUSE 11.2 should work too. There are various tools you can use to capture the images of the XP C: partition and other partitions into qcow2 or other such formats that can be used with the VM software to run the virtualized system (see the command sketches after this list). Again, device drivers loaded into the physical XP system will often be problematic, so either remove them manually, "sysprep /generalize" the physical OS, or just try booting the VM into safe mode and then getting rid of the old drivers. Whatever works.

6) Of course the easiest solution probably hasn't been invented yet, and next year's VM systems might not even be compatible with some of the disk formats and configurations todays VM systems use, so, again, that's why it's good to keep a full image or physical copy of the original drives handy in case you want to convert them again later.
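
To make points 2 and 5 above concrete, here are rough command sketches (all paths and image names are hypothetical; check each tool's documentation for current syntax):

    rem Point 2: capture C: into a VHD with Sysinternals disk2vhd, run on the live XP box
    disk2vhd c: d:\images\xp-legacy.vhd

    # Point 5: on Linux, convert a raw dd image of the XP disk into qcow2 for qemu/kvm
    qemu-img convert -f raw -O qcow2 xp-sda.img xp-sda.qcow2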

Re:Yes (2, Insightful)

cdrudge (68377) | more than 4 years ago | (#31892160)

How does Ghost virtualize anything? Sure it can clone their existing drive for backup purposes, but what happens when a desktop motherboard fries itself and is obsolete enough that they need to upgrade to something newer? Yeah they can get the data back, but the drive image won't match the new hardware.

Re:Yes (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31892330)

> How does Ghost virtualize anything? Sure it can clone their existing drive for backup purposes, but what happens when a desktop motherboard fries itself and is obsolete enough that they need to upgrade to something newer? Yeah they can get the data back, but the drive image won't match the new hardware.

I suppose on Windows that might be a real problem. That's okay. After you pay Microsoft money for the privilege of using Windows, there are many techies who will gladly let you pay them money so they can fix these problems that only Windows has, namely that the drivers are not a core part of the kernel. That way everybody wins! Oh yeah, except for you. Keep going with the monopolist, Sparkles; one day you'll learn.

Re:Yes (2, Insightful)

GNUALMAFUERTE (697061) | more than 4 years ago | (#31892502)

He won't. The reason M$ is still around is the huge industry around Windows' flaws. He's probably benefited from it too. It takes 10x more people to manage a Windows-based network than a Unix-based network. Think about it: all the antivirus companies; all the anti-spyware, registry cleaners, etc.; all the "technicians" that keep Joe Sixpack's computer running; all the license money around Windows. Remember, Windows is not an OS in the sense that GNU/Linux is an OS. Your average distro includes several DVDs with all the software you'll ever need. If what you want is not there, just fire up $package_management_system and search for it. Windows retails for, what, 300 dollars? OK, now add to that Office, antivirus, graphics software, a virtualization solution, disk imaging, etc., etc., etc. You are talking about a lot of money. The number of people that have a job thanks to Windows' flaws is HUGE. And it's that group of people that is keeping Windows alive. Thanks to that, it's not going away any time soon.

Thinking about it, that's how capitalism works. Accountants, lawyers, marketing droids, most managers, bankers, 90% of government employees, etc., etc. None of them do anything productive. They have a job JUST because there's a glitch in the system.

These people will keep it alive, because it's what's feeding them, and most of them don't even realize how wrong it is, and what useless and pointless lives they live.

Re:Yes (1)

GNUALMAFUERTE (697061) | more than 4 years ago | (#31892450)

And he is already using Clonezilla anyway, which is much better than Norton (and GPLed). It's the best tool out there for imaging many systems at once over the network.

There are several solutions for this guy:

1st) Get rid of Windows. Seriously. Get rid of it.
2nd) If you can't, you'll need to maintain several different images.
3rd) Even if your computers are different, they can't be all that different. I don't know how many computers you are managing, but there is a finite number of hardware configurations you can have. Say you are managing 50 computers; I'm sure you have no more than 5 different combinations. So you can get away with maintaining just 6 images: 1 with GNU/Linux, and 5 different XP images. It still sucks, but it's better than nothing.
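
As a rough illustration of the Clonezilla side, the ocs-sr script it is built on can save and restore those per-configuration images unattended. A sketch only (ocs-sr options change between Clonezilla releases; the image name and target disk are hypothetical):

    # Save a whole-disk image into the mounted image repository
    /usr/sbin/ocs-sr -q2 -j2 -z1 -p true savedisk xp-config-a sda

    # Restore it onto a replacement box with the same hardware profile
    /usr/sbin/ocs-sr -g auto -e1 auto -e2 -j2 -p true restoredisk xp-config-a sda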

Re:Yes (1)

CAIMLAS (41445) | more than 4 years ago | (#31892494)

If he does get rid of Windows, then there's no point in using images at all. It's a waste of storage space, and a headache to maintain. Just set up a locally cached storage repository (which you then maintain/keep up to date by manually clearing packages) and install from that using a package list. Use configuration management (something like puppet).

Of course, for 20 systems, that's overkill. For the time that a Linux install takes, simply having a local mirror would likely be Good Enough. If there are no issues with 3rd party apps or esoteric version requirements, that's the ticket.
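
A minimal sketch of the package-list idea on Debian/Ubuntu (the file name is hypothetical; configuration management like Puppet would layer on top of this):

    # On the reference machine: record every installed package
    dpkg --get-selections > workstation-packages.list

    # On a fresh install: replay the list against the local mirror
    sudo dpkg --set-selections < workstation-packages.list
    sudo apt-get -y dselect-upgrade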

Or in this case... (1)

SanityInAnarchy (655584) | more than 4 years ago | (#31892554)

...since you're apparently a capable Unix admin, ntfsclone.

But that doesn't apply here. That applies to viruses screwing up your Windows, but when hardware dies, particularly the motherboard, that image isn't going to help much.
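
For reference, a minimal ntfsclone round trip (device and file names are hypothetical); unlike dd, it copies only the used NTFS clusters:

    # Save just the used blocks of the NTFS partition into a special image file
    ntfsclone --save-image --output xp-c.img /dev/sda1

    # Restore it later onto a same-sized (or larger) partition
    ntfsclone --restore-image --overwrite /dev/sda1 xp-c.img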

dick sucking (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31892040)

I just got my first blow job :) Too bad it was from a dude :( Oh well, a mouth is a mouth :0

VMWare View (3, Informative)

Anonymous Coward | more than 4 years ago | (#31892042)

VMWare View [vmware.com] is what you want.

Similar Environment (1)

MyLongNickName (822545) | more than 4 years ago | (#31892044)

Using Citrix. Not sure if this is what you are looking for or not...

yes (4, Insightful)

girlintraining (1395911) | more than 4 years ago | (#31892048)

I do. The short answer: Don't.

On interactivity alone, the response is slow: you spend extra seconds loading windows and menus, and after a while those extra seconds add up to real productivity loss. Virtualization belongs on servers and in labs, where interactivity is less important than raw horsepower. For a workstation, don't virtualize. It's painful.

Re:yes (3, Informative)

MyLongNickName (822545) | more than 4 years ago | (#31892056)

I am in a virtualized environment and it works fine. I guess it really depends on your situation.

Most of my users are using basic business apps. For these things, Citrix XenApps (I think that is the name this week) works well.

Re:yes (1)

JWSmythe (446288) | more than 4 years ago | (#31892208)

Yup, it totally depends on the situation.

At one employer, I had to occasionally run Windows apps to appease the bosses. It was annoying, but I did it. For that, I had XP installed in a VirtualBox VM. It ran fine. I'd leave it minimized so it didn't bother me while I was doing real work. The hardware wasn't anything exciting: a $400 PC from CompUSA (single-core AMD64, 2 GB RAM). Everything worked fine, including the occasional request to look at something in MSIE because "hey, it doesn't work in MSIE". Of course, when *I* looked it was fine, and when I went to their desk it worked fine, and it wasn't even really my problem to fix; I was just the "go-to guy". I had to use the Windows VM, because if I did it in something like IEs4Linux (ick, taint a perfectly good Linux box), they'd say it was because it was in Linux. {sigh}

At home, it didn't work out for me, because I have the occasional urge to play a Windows game, like Microsoft Flight Simulator X. So when I want to play a Windows game, I reboot into Windows, play for an hour, and then go back to Linux for everything else.

GP is a user, P is an IT guy (3, Insightful)

snikulin (889460) | more than 4 years ago | (#31892428)

Am I right?

Re:GP is a user, P is an IT guy (1)

girlintraining (1395911) | more than 4 years ago | (#31892608)

> Am I right?

I'm both a user and an IT professional. I'm a strong proponent of using the tools I make, and of spending some time actually doing the job they were meant for before handing them back. People who are conventionally schooled have preconceptions about how things "should" be, and when they get into the field you get ideas like this. Remote desktop for one application is not what the article is about; the article is talking about wholesale virtualization of the entire workstation, not just a single application.

The reason most companies do this is that it gives the illusion of control and less work: you only need to update one image on a server, not 10,000 workstations, and you can lock it down pretty hard. But it's an illusion that sacrifices functionality and speed in many implementations.

I've used Citrix as a client. It's painful, even on a LAN, because the sessions can randomly time out or disconnect, or the server becomes oversaturated by a few users running an intensive database query that sucks all the CPU cycles from everyone else's session -- which, when the database is hosted on the same server, can create a horrible bottleneck that won't show up in the lab.

The kinds of problems this sort of wholesale virtualization creates are difficult to diagnose, and management is reluctant to upgrade or plan for scalability. It works okay for a few, maybe even a hundred, users accessing one server. But try having a thousand, or ten thousand, trying to access the same terminal server. The hardware could be made out of unobtanium, run at a million gigahertz, and have fifty terabytes of RAM, and it would count for exactly shit, because the network card is only one gigabit and the server spends so much time swapping due to time slicing that you lose most of your performance in overhead. The server starves itself to death by constantly going to main memory for every other CPU operation.

Don't centralize if you don't have to -- it creates a single chokepoint, a single point of failure.

Re:yes (2, Informative)

itzdandy (183397) | more than 4 years ago | (#31892210)

I would argue just about every point here.

Modern hypervisors are quite fast. Most of the perceived slowdown is a result of using something like VNC to access the VM.

Do a basic Linux install with KVM and the console glued to the VM. Get serious and contribute some software developers, or put out some bounties, to get a Windows video driver appropriate for your needs.

Re:yes (1)

girlintraining (1395911) | more than 4 years ago | (#31892658)

> Modern hypervisors are quite fast. Most of the perceived slowdown is a result of using something like VNC to access the VM.

It's not about the damn hypervisor, it's about system overhead. Every thread you add means more shuffling in and out of the CPU. The more threads, the more accesses to (slower) main memory instead of L1 or L2 cache. It doesn't matter what operating system you use, or whether it's virtualized or not -- modern systems can only handle so much concurrency gracefully. Exceed that limit and you incur performance penalties. And beyond a certain point, the system spends more of its time doing memory ops than actual processing.

You don't want to stuff a whole workstation, with perhaps fifty threads running, into a VM, and then multiply that by a few thousand... it won't matter what hardware you're running or how many cores it has; it's going to choke on the I/O, either in memory overhead or at the network interface.

Re:yes (1)

Mad Merlin (837387) | more than 4 years ago | (#31892254)

I second the GP's response, with the added caveat that graphical performance is by far the slowest part of current virtualization methods. To put it in perspective, your GPU (even if it's a bargain basement integrated piece of junk) has a lot more (albeit narrowly focused) horsepower than your CPU does. Virtualizing the CPU is pretty much a solved problem with vmx/svm, while there's still no performant solution for virtualizing the GPU.

Re:yes (1)

MyLongNickName (822545) | more than 4 years ago | (#31892296)

Well, if you are worried about GPU virtualization, why not go with application virtualization?

Of course, I have a hard time believing that 20 workstations are all that hard to maintain. Unless they are geographically dispersed, I am not sure virtualization is worth the effort.

Re:yes (2, Insightful)

Mad Merlin (837387) | more than 4 years ago | (#31892478)

By application virtualization I assume you mean running a single application over the network, as is possible with X11 (or many other solutions), instead of the whole desktop/machine. The problem is that it doesn't solve the problem outlined in TFS at all: he wanted to eliminate having to deal with a grab bag of random hardware, which Windows inevitably does not support (without special coaxing), every time a new machine comes through the door or some hardware explodes.

Re:yes (0)

Anonymous Coward | more than 4 years ago | (#31892482)

That's only a problem with screen-scrapers.

If you're running a real network-transparent window server, and your client program is on the local network, there will be no problem with latency.

Well, unless your client is pushing enough opengl to fill the network, but most programs don't do that...

Re:yes (1)

somenickname (1270442) | more than 4 years ago | (#31892560)

> I do. The short answer: Don't.
>
> On interactivity alone, the response is slow: you spend extra seconds loading windows and menus, and after a while those extra seconds add up to real productivity loss. Virtualization belongs on servers and in labs, where interactivity is less important than raw horsepower. For a workstation, don't virtualize. It's painful.

This is a surprising response. The rare times I've needed to work on Windows GUI projects, I've always virtualized with VirtualBox on an Ubuntu host and have never had any performance complaints at all. In fact, it was much faster than most Windows machines I've used, because once I got the guest to a good state, I snapshotted it and rolled it back every time I shut the guest off. I would almost go so far as to say that the preferred way to run Windows is as a guest OS on Linux where you roll back the guest every time. It's fast, stable, and borders on pleasant to use.

Going the other way is not the same. In a corporate environment, where your Windows workstation is likely straining to even keep up with the virus checker, trying to run an Ubuntu VM under it can be slightly painful. It's not unbearable, but it's certainly not as pleasant as the Ubuntu-host/Windows-guest situation.

Slipstream the drivers + update the .iso (5, Informative)

couchslug (175151) | more than 4 years ago | (#31892052)

It's easy enough to slipstream (lots of) extra drivers and periodically update a master install .iso using tools such as nLite.

Re:Slipstream the drivers + update the .iso (1)

Traze (1167415) | more than 4 years ago | (#31892094)

This is your best bet. And it's free!

Re:Slipstream the drivers + update the .iso (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31892098)

I did this when I had to maintain a computer lab of about 250 machines with 6-7 different hardware profiles between them. It turned the multi-stage update nightmare I inherited from my predecessor into a (relatively) pain-free couple of hours some evening after any Patch Tuesday.

not a cure-all (4, Interesting)

CAIMLAS (41445) | more than 4 years ago | (#31892060)

Virtualization is not a cure-all (and your approach is wrong, to boot).

What you're looking to do is use the latest, greatest technology for profit(!!!). You're going about it wrong. There are plenty of other, better technologies to accomplish the same basic thing: proper system imaging/installation via something like an installation server.

When you've got 20 workstations, you're at the cusp of either continuing on the path you're on (and, hopefully, resorting to a method of consistent repeatability) or deciding on a different approach: thin clients, perhaps. Or maybe virtualization is the right approach, but I can guarantee there's likely no good reason to virtualize Windows on top of each of the 20 workstations that couldn't be solved with better design.

Honestly, if you're one of multiple IT people in a place with only 20 workstations, you're seriously over-staffed. Someone, if not you, is going to figure this out and figure out a way to make themselves important and you redundant. Even with moderate consistency and controls, a single competent administrator should be able to take care of 5 times as many workstations and a handful of servers without too much sweat.

Re:not a cure-all (3, Insightful)

MyLongNickName (822545) | more than 4 years ago | (#31892266)

Who said he has multiple IT people working? My guess is that it is a smaller shop and they have one or maybe two people doing double duty as IT admin/other duties. My guess could be wrong, but so could yours :)

Re:not a cure-all (1)

CAIMLAS (41445) | more than 4 years ago | (#31892452)

At any rate, virtualization at the workstation level to abstract the primary utility is the Wrong Approach.

One thing I've learned is that simplicity is often better than complexity. KISS. This plan doesn't keep things simple: while it might save some time on deployment of a new system, it's needlessly complex and involves twice as much maintenance. It also adds additional headaches due to Windows licensing (unless they're going to sysprep the machines).

There are a lot of little "gotchas" which someone not up on such things might overlook. To be informed on them, you've either got to do copious amounts of research, test a sizable setup yourself, and/or have experience with a similar deployment. All of these things are reasons to keep each part of a system as simple as reasonably possible (without compromising its functionality).

Re:not a cure-all (2, Insightful)

catmistake (814204) | more than 4 years ago | (#31892572)

> Virtualization is not a cure-all

I respectfully disagree. When it comes to MS Windows, if ever there was a cure-all, virtualization is it. Make a short list of the problems with Windows, and one way or another, virtualization can solve them. If you're clever enough, for instance, the ubiquitous need for virus protection can be eliminated by sandboxing (just think of the gazillions of proc cycles that could be saved). Virtualization can make Windows secure in a way it will never be when it runs on bare iron. Once you have a virtualized system just right, you can zip it up and deploy it by the multitudes. What's that? Something acting wonky? Delete, unzip, redeploy in less time than it takes to scan a hard drive.

Now, I agree that virtualization isn't the absolute ideal solution in all situations, but that doesn't mean it's not a cure-all (for the inherent headaches of MS Windows). A cure-all is a generalized solution. There might be better specialized solutions, but they're specialized, not cure-alls. Virtualization is the tonic that can give a Windows desktop or server the key features that Microsoft was never able to include or patch. In fact, I'd say that if Windows is broken, and it really has been for a long time, virtualization fixes it.

Re:not a cure-all (1)

Z34107 (925136) | more than 4 years ago | (#31892668)

I'd be inclined to agree with your "proper system imaging/installation via something like an installation server" approach. If he has sufficient dollars, kittens' blood, and vespene gas, he wants to set up Windows Deployment Services (a role in Server 2008 R2). Windows 7 images are almost completely hardware-agnostic: you can build them in a virtual machine and deploy them to real hardware if you want, as long as you stick the appropriate drivers on the server.

XP is a different story; it's so hardware-bound it's not even funny. If you sysprep an XP image, ghost/dd/imagex/whatever will probably let you move it to different hardware. Except that he mentioned dual displays, so all his computers had better use the same graphics driver. This is the one business case for upgrading to 7: imaging is a lot easier in general, especially if you don't have homogeneous hardware.

Another poster mentioned using nLite with a bunch of drivers slipstreamed in. That would probably work, if you're willing to script the installation and configuration of the rest of the programs in your image. Or, even worse, do it all manually.

Xen? (2, Interesting)

the_humeister (922869) | more than 4 years ago | (#31892064)

But only if you have hardware virtualization support.

Re:Xen? (2, Informative)

CAIMLAS (41445) | more than 4 years ago | (#31892538)

Xen would be the way to do it if you had servers. Running the display on the same system that runs Xen was, last I checked, not yet possible.

Maybe (2, Funny)

MeNeXT (200840) | more than 4 years ago | (#31892068)

next year will be the year of the Windows workstation.... 8^)

NxTop - A Client Based Hypervisor (5, Informative)

kaustik (574490) | more than 4 years ago | (#31892078)

NxTop is pretty cool. It is a hypervisor that installs directly onto the client hardware, allowing you to pull and boot pre-configured images over the network. The hypervisor removes the need for specialized drivers and supports dual monitors. It also has the advantage over VMware View of allowing the OS to sync for offline use if you would like to leave the office with a laptop. Sure, VMware has that as an "experimental" feature now, but it is production with these guys. They came and did a demo for us the other day; pretty cool stuff. I think it was affordable too. You can set policies for who gets what images, remotely disable a lost or stolen laptop, etc. Check this out: http://www.virtualcomputer.com/About/press/nxtop-pc-management-launch-massively-scalable-desktop-virtualization-for-mobile-pcs [virtualcomputer.com]

Disk imaging software (4, Insightful)

SlamMan (221834) | more than 4 years ago | (#31892082)

You're just making it harder than it needs to be. Use Ghost, Acronis, KACE, or any of the other semi-hardware-agnostic imaging systems. Failing that, just take individual images of each piece of disparate hardware. It just takes a little one-time work for each piece of hardware, and a large disk drive.

What's the point? (1)

toastar (573882) | more than 4 years ago | (#31892084)

I think Ghost/Clonezilla is the way to go. You really shouldn't add extra layers of complexity for no reason.

Do you really think switching to Linux will fix your driver problems? The real solution is to use the same hardware across the network.
I mean, having a CD taped to the side of the case for machine-specific drivers might be a little low-tech, but it prevents confusion.

As much as I hate to give Microsoft praise... (5, Informative)

PenguSven (988769) | more than 4 years ago | (#31892104)

this was solved a long time ago. Sysprep allows you to bundle whatever drivers you want, and it will just load what it needs on first boot. Combine that with a network imaging solution (back when I worked in that area we used ZENworks, but there are other options) and, ideally, network installs of software (i.e. the image should be a base OS and not much else), and you should have limited problems. A new machine type will require a new image, but you can just deploy the old one, add the new drivers, run Sysprep, and re-create the image. I never had to do mass imaging of Linux machines, but surely you could take a similar approach for the Ubuntu images?
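
On XP, that driver bundling comes down to a couple of sysprep.inf lines. A minimal sketch, assuming the driver folders have been copied under %SystemDrive% on the image (the Drivers\ paths here are hypothetical):

    [Unattended]
    OemSkipEula=Yes
    OemPnPDriversPath=Drivers\Nic;Drivers\Video;Drivers\Audio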

Re:As much as I hate to give Microsoft praise... (2, Insightful)

QuantumRiff (120817) | more than 4 years ago | (#31892568)

In addition to Sysprep, if you are running Vista or Windows 7, you can use the DISM.exe tool from the Windows Automated Installation Kit to inject plug-and-play drivers into your offline image. You also might really, really want to look at Microsoft's MDT 2010 tool; it does make Windows deployments easier when it comes to drivers.
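
A sketch of that DISM driver-injection step (the mount directory, WIM path, and driver share are assumptions):

    rem Mount the offline Windows 7 image, inject every driver under D:\drivers, then commit
    dism /Mount-Wim /WimFile:D:\images\win7.wim /Index:1 /MountDir:C:\mount
    dism /Image:C:\mount /Add-Driver /Driver:D:\drivers /Recurse
    dism /Unmount-Wim /MountDir:C:\mount /Commit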

sysprep is not a 100% thing and some drivers have (1)

Joe The Dragon (967727) | more than 4 years ago | (#31892580)

Sysprep is not a 100% thing, and some drivers have their own control panels / background apps that may or may not load correctly right after sysprep.

Re:As much as I hate to give Microsoft praise... (1)

techno-vampire (666512) | more than 4 years ago | (#31892586)

> I never had to do mass imaging of Linux machines, but surely you could take a similar approach for the Ubuntu images?

Why bother? Unless your hardware OEMs refuse to cooperate with Linux, the drivers are either going to be present, or Ubuntu will download them after the first boot. Ubuntu may not be the geekiest distro around, but it does make things like that as easy and painless as possible.

Specialized requirements (3, Insightful)

purduephotog (218304) | more than 4 years ago | (#31892120)

A common solution is not always the right one. Many times it lacks the requisite low-level I/O access needed to do the job right.

Take, for instance, DDC/CI. I don't know what you're doing, and that's fine, but in my line of work we have to talk to the monitor. You ain't doin' that on a virtual machine.

Just because it's virtual doesn't mean it's better.

Driverpacks (1, Informative)

Anonymous Coward | more than 4 years ago | (#31892138)

http://driverpacks.net/ [driverpacks.net]

Re:Driverpacks (1)

Joe The Dragon (967727) | more than 4 years ago | (#31892618)

They work well. Just install the latest NVIDIA / ATI drivers after running that, and maybe a few other ones (mostly for laptops with custom drivers that you need to get from Dell / HP / others).

Ghost Solution Suite 2.5 (0)

Anonymous Coward | more than 4 years ago | (#31892144)

I use GSS 2.5 for this reason.

Create the SOE in VMware Server.
Take an image of this via GSS (as the base image).
Use DeployAnywhere to deploy this image onto foreign hardware.
Add platform-dependent drivers.
Take another image of the SOE in its machine-dependent state.

I've done this for 30 separate hardware platforms (HP, Compaq, Toshiba, Acer, etc.).

Works brilliantly!

james/logik

VMware view (4, Informative)

dissy (172727) | more than 4 years ago | (#31892146)

It's not cheap, so it might not be a viable option for a smaller shop, but VMware has been making some very interesting strides in this area.

Check out VMware View, which uses the PCoIP protocol (yes, that is PC-over-IP).

http://www.vmware.com/products/view/ [vmware.com]
http://www.vmware.com/resources/techresources/10083 [vmware.com]

Put really simply, each real workstation is loaded with a minimal system and the VMware View client.
When a user logs in to a computer on your network, after authentication their virtual workstation pops up (be it Windows or Ubuntu) and lets them work.

All of the actual 'workstations' being used are virtual machines, and thus share the unified image you are looking for, with one set of drivers.

While I have not tested it with a multi-monitor setup, they claim it is supported.

The main things you do lose are full accelerated 3D support and direct support for old, eccentric hardware (think ISA cards and non-standard PCI interfaces).
I can say USB support is simply amazing in how well it works.

Clients can even play full interactive Flash media and video, and it runs well (as well as one would expect it to in the native OS, anyway).

We did something else which was a lot more useful (5, Interesting)

Merc248 (1026032) | more than 4 years ago | (#31892156)

I used unattended [sf.net] on a FreeBSD box at one of my old jobs, since we had five or so different models of computer. It works sort of like RIS, except it's easier to extend since it's all written in Perl and it's all open source. We dumped the contents of an XP disc onto the server, then slipstreamed driver packs [driverpacks.net] into the disc directory structure; this catches almost everything but the most obscure hardware out there. Unattended allowed us to run post-install scripts, so we threw in a bunch of other software packages that would install after the OS was done installing, like Office 2007, the Adobe suite, etc.

This was substantially better than a disk image: we took care of all of the drivers in one fell swoop, so the only differentiator between computers was how the person used the computer (if it was a student lab computer, we loaded a bunch of stuff like Geometer's Sketchpad, InDesign, etc.; if it was a faculty laptop, we'd load software to operate equipment in the classroom). We save space on the server, and we save time when it comes to putting together another "image" for a different use case.

But as others said above, I wouldn't virtualize the workstation, even if it eases up on the IT dept. a little bit; just be smart about which deployment method you use. I wouldn't recommend unattended if you had only about three different models; it's likely substantially easier to just use Clonezilla.

Oh, and use a centralized software deployment system such as WPKG [wpkg.org]. Your disk images will go stale after a while, in which case you'll have to make sure that you can manage the packages installed on clients somehow.
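
For the curious, a WPKG package definition is a short XML stanza along these lines (a sketch from memory; the package id, install paths, and the %SOFTWARE% variable are hypothetical and would be defined in your own WPKG config):

    <package id="firefox" name="Mozilla Firefox" revision="1" priority="10">
      <check type="uninstall" condition="exists" path="Mozilla Firefox" />
      <install cmd='%SOFTWARE%\firefox\setup.exe -ms' />
      <remove cmd='"%PROGRAMFILES%\Mozilla Firefox\uninstall\helper.exe" /S' />
    </package>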

Bare-metal client hypervisor (2, Informative)

Anonymous Coward | more than 4 years ago | (#31892192)

What you are looking for is called a type 1, or bare-metal, client hypervisor. Bare-metal client hypervisors are a fairly new technology, with the leading ones (which are still in development) being from Citrix and VMware. They are XenClient and CVP; both are expected to be out later this year. Two of the smaller players in this field are Neocleus and Virtual Computer; both have a general-release product, but neither has been around long enough to be proven. Hope this helps. You might not have the solution you are looking for today, but by next year you should have some good options.

Re:Bare-metal client hypervisor (3, Informative)

Nutria (679911) | more than 4 years ago | (#31892316)

> Bare-metal client hypervisors are a fairly new technology, with the leading ones (which are still in development) being from Citrix and VMware.

This makes me a little distraught, since hypervisors have been around for 30+ years.

LTSP (1)

likuidkewl (634006) | more than 4 years ago | (#31892246)

I don't know exactly what you are looking to accomplish, but short of spending money to make the machines identical, you can look into LTSP or the openSUSE version, Li-f-e. These allow your normal workstations to boot over the network; then, depending on what you want to accomplish, you can have them call a Terminal Services session or just use the Linux distribution that is loaded. 20 workstations are a breeze for one 4-CPU/8 GB RAM server, especially with the progress on local apps on the client side. Have a look :)

Get the WAIK and use Sysprep (2, Insightful)

Toasterboy (228574) | more than 4 years ago | (#31892252)

Existing deployment tools from Microsoft already do this. You need the WAIK, which is a free download from Microsoft.

You need to create a generalized image. If you get all the required drivers for all your hardware into the driver store, the drivers will be found during install. You can also deploy a generalized image from PXE boot using WDS.

There are a few caveats around drivers that aren't designed properly for Sysprep, and applications that aren't designed with Sysprep in mind, but otherwise it's quite slick. You can script the installation of these exceptions to occur later during deployment using unattend.xml and RunSynchronous commands (see the sketch below). You can also supply your licence key in the unattend.xml file.

About 90% of all Windows deployments are sysprepped by OEMs or by corporate IT folks....

Please read the documentation, the tools are quite flexible.
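
The RunSynchronous mechanism mentioned above looks roughly like this in unattend.xml (a trimmed sketch: the component attributes are abbreviated with "..." and the script path is hypothetical):

    <settings pass="specialize">
      <component name="Microsoft-Windows-Deployment" ...>
        <RunSynchronous>
          <RunSynchronousCommand wcm:action="add">
            <Order>1</Order>
            <Path>cmd /c \\server\deploy\install-exceptions.cmd</Path>
          </RunSynchronousCommand>
        </RunSynchronous>
      </component>
    </settings>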

Re:Get the WAIK and use Sysprep (1)

Southpaw018 (793465) | more than 4 years ago | (#31892336)

This is the correct answer. Use Clonezilla for the Linux installs and WDS for the Windows installs (or install a third party PXE server and use the same server for both). Forget virtualization unless you specifically need it to run applications or multiple simultaneous operating systems.

WDS is how I reimage Windows PCs on my network, and going from nothing to a 100% reinstall is, start to finish, one keystroke, a standard login prompt, and two mouse clicks. Come back in a few minutes and you're booted into the system.

Acronis Universal Restore (1)

saverio911 (997619) | more than 4 years ago | (#31892260)

I just did this to use a single image across my company's multiple versions of their standard hardware (they use everything until it dies a horrible death). I used to use nLite to automate the install and just slipstreamed the drivers in, but a driver for a new model's RAID controller would not integrate, so we switched to Acronis and it worked the first time. Imaging from the Acronis-prepped DVD now takes 15 minutes, which used to take 45 minutes when we installed with nLite. Most of the savings came from not installing all the apps we use: I had nLite automatically installing a lot of stuff like .NET, hardware-support apps, and A/V, with multiple reboots until it was all completed. Now all the apps are installed in the image stored on the DVD. I know nLite is free, but we were willing to pay for the cut in deployment time and administration.

Forget Virtualization! (0)

Anonymous Coward | more than 4 years ago | (#31892310)

For the Windows side (I'm assuming XP), put DriverPacks (http://driverpacks.net/driverpacks/latest) in a folder on your disk image and use SPDrvScn (http://www.vernalex.com/tools/spdrvscn/) to add them all to the driver path. You'll need to use Sysprep and link each mass-storage driver in the SysprepMassStorage section of sysprep.ini to support RAID and AHCI storage controllers (otherwise you'll need to use IDE emulation in the BIOS, at a performance hit); see the sketch below.

Linux should be pretty resilient to multiple hardware configurations on its own, but I guess that depends on the distribution.
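
Populating [SysprepMassStorage] by hand is tedious; XP's sysprep can build the section for you. A sketch (per the XP deploy tools; double-check the flags against ref.chm):

    [Sysprep]
    BuildMassStorageSection=Yes

    [SysprepMassStorage]
    ; left empty -- running "sysprep -bmsd" fills this in from the stock .inf files

Run sysprep -bmsd once to populate the section, then reseal as usual.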

Quickest Route... (0)

Anonymous Coward | more than 4 years ago | (#31892318)

Your time/money is probably best spent standardizing your desktop hardware at this point.

Shadowprotect HIR (2, Informative)

ill1cit (730941) | more than 4 years ago | (#31892340)

Wow, such terrible advice from Slashdot. The easiest way to move a Windows OS from one machine to another when there are hardware differences is to get yourself a copy of ShadowProtect and use the HIR (hardware independent restore) option. Google it. Virtualising is not, by a long shot, the best way to do what you are trying to do.

Virtualize a workstation? (1)

EsJay (879629) | more than 4 years ago | (#31892380)

Really?

Wrong solution (-1, Troll)

noz (253073) | more than 4 years ago | (#31892382)

Virtualization is not a solution to poor operating systems.

Your Windows sucks. Enjoy your monopoly product.

Re:Wrong solution (0)

Anonymous Coward | more than 4 years ago | (#31892558)

Wow, that is so insightful. Do you have any more advice coming from that closed mind of yours?

Virtual Windows Under Ubuntu? (1)

Doc Ruby (173196) | more than 4 years ago | (#31892388)

Can I run a Windows 7 virtual image (Virtual CloneDrive) on an Ubuntu PC somehow? On a P4 2.6 GHz / 1 GB RAM machine? Fast enough to run Visual Studio 2010 and test Silverlight apps? How?

Re:Virtual Windows Under Ubuntu? (1)

CAIMLAS (41445) | more than 4 years ago | (#31892564)

Windows 7 with VS2010 won't even run on that hardware; why would you expect it to work in a virtual environment? That's somewhat unreasonable.

(Though I will say from personal experience, it's amazing how much better a Windows VM can run on the same hardware it was on before, virtualized and with slightly less RAM due to host overhead. I did this once some time ago: W2K3 with VS2008, 400 MB of RAM, with the rest left for the CentOS host. Disk access was, sadly and amazingly, faster, and swapping wasn't half as painful as it was in "just" Windows.)

Multi-monitor support (0)

Anonymous Coward | more than 4 years ago | (#31892394)

VMWare Workstation 7 has support for multiple monitors.


Several options (1)

uvsc_wolverine (692513) | more than 4 years ago | (#31892418)

We've been looking at several options for this type of thing at the university where I work. We've got one lab currently running LANDesk, and another that is running off of VMware. They each have their own advantages and drawbacks; you've just got to decide which is best for your situation.

In the multiple computer labs that I maintain, virtualization is out of the question: the performance hit is too high considering the usage in two of the labs (Photoshop, streaming video, audio decoding, some really heavy JavaScript stuff). The rest of my labs lack the funding/justification to set up a VMware or LANDesk backend, and they get the hand-me-downs when I get new hardware for the larger labs.

What I've done instead is use Altiris (now Symantec) Deployment Solution. I've put together a basic lab image (MS Office, Firefox, anti-virus, Windows updates, etc.) in VMware Fusion on my Mac. I then deploy that image out to the rest of my labs, as they only need Office and internet access. That way I only really have to maintain two lab images: one for the two Photoshop labs, and one for the rest of my labs. When important updates come out, I update the VM and have my deployment server push the updated image out to all of my machines in the middle of the night, so I don't get in the way of the students being able to do homework and other assignments.

The nice thing about Deployment Solution is it has an option for hardware-independent imaging, where it removes the existing hardware abstraction layer (HAL) and injects drivers for whatever hardware the image has been deployed to. You do have to maintain your driver database and make sure you get updated drivers for new hardware, but this has worked flawlessly for me for quite a while now, and I'm imaging against, I think, five different sets of hardware (they mostly differ in the motherboards; no video cards in the lab machines beyond the integrated video) with the one base image.

Re:Several options (1)

uvsc_wolverine (692513) | more than 4 years ago | (#31892430)

Oh... I forgot to mention that this is all network-based imaging. Deployment Solution works via PXE booting to a WinPE image. You can use Linux as your boot image, but the WinPE image Symantec supplies seems to work best, based on what I've read on their forums.

Clients (1)

not_hylas( ) (703994) | more than 4 years ago | (#31892434)

Cut to the chase.

You have client machines, and not all of them are going to be the latest or greatest in hypervisor tech (you do what you have to do to keep things afloat). Consider thin clients: out of a myriad of hardware offerings, they mean fewer headaches, and better server hardware will keep you way ahead of the curve while lessening your footprint, exposure, and budget.
The only caveat is if your clients run AutoCAD, heavy graphics-intensive programs, major databases, or programming workloads.

Windows, UNIX, or Linux - or all of them; "pick your poison", the rest is academic.
Good luck.

VMware ESXi (1)

flyingfsck (986395) | more than 4 years ago | (#31892516)

Yes, you can do it with VMware ESXi, if and only if the hardware supports it.

Mainframe (1)

inode_buddha (576844) | more than 4 years ago | (#31892522)

A small or entry-level mainframe could do it: consolidate every single one of the boxes into one. That, or find something with, say, 32 cores.

Disable the hypervisor (1)

Billly Gates (198444) | more than 4 years ago | (#31892542)

It's not worth the security risk: it will be impossible to detect and delete viruses and malware that hide within the hypervisor unless you disable it in the BIOS and do a format c:.

You have run into the problem faced by the many companies which love to lock things down. Slashdotters hate this, but it's nice to have everything on the same hardware and software for reasons like this.

Someone mentioned Citrix clients and VMware or VirtualBox players, but they really, really suck and consume incredible resources on desktops. Try to standardize on common desktops and create a script to use Windows Update for the latest drivers, or keep them on a USB flash drive. A pain, yes, but this is what your customers want. Anything else makes the setup look bad, which ultimately makes your boss look bad, which makes you look bad, if you have buggy, slow activity and responsiveness from Citrix clients or virtual machines. Consumers judge quality by responsiveness more than by raw performance. If it takes a few seconds for a menu to pop up, they will think they are on 486s.

If the hardware is nearly identical, you can set up a profile with Windows Update for drivers disabled.

one random linux fan (0)

Anonymous Coward | more than 4 years ago | (#31892544)

Dump these motherfucking windows boxes.

How this fucking OS annoys me...

VMWare View (1)

watermark (913726) | more than 4 years ago | (#31892550)

VMware makes a product called "VMware View". It's basically a thin client that connects you to a VM running on a server. Most Windows thin-client environments boot an RDP thin client that connects to a Windows terminal server, but this approach gives every workstation its own Windows environment to screw up. While not exactly what you're looking for, it will provide a driver-agnostic approach to running a workstation. (Although I would go with the nLite idea proposed above.)

Altiris Client Management (1)

jambarama (784670) | more than 4 years ago | (#31892578)

As others have mentioned, you don't want to go down the rabbit hole of virtualization just to manage 20 computers in an office.

Altiris products are worth considering. The Client Management Suite is pretty terrific for managing lots of dissimilar clients. The rebranded "Backup Exec System Recovery Solution" doesn't do as much, but also works fine with dissimilar hardware clients. I haven't bought anything from them since Symantec bought them, but we loved Altiris before that.

If that's too rich for your blood, a SOHO WHS box may be enough to cover your Windows machines. Ubuntu machines are easy: you could go as barebones as something like crontab, dd, gzip & rsync to an NFS share (see the sketch below), or you could do something fancier.
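
That barebones route can literally be a one-line cron job; a sketch (device, mount point, and schedule are hypothetical, and the partition should ideally be unmounted or quiesced for a clean image):

    # /etc/cron.d/image-backup: image the root partition to the NFS share every Sunday at 02:00
    0 2 * * 0  root  dd if=/dev/sda1 bs=4M | gzip -c > /mnt/nfs/backups/myhost.img.gz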

You're not fixing the problem correctly. (0)

Anonymous Coward | more than 4 years ago | (#31892630)

This is an excellent example of not identifying the problem correctly. You have 20 workstations, all different, but you need them to be common. Rather than investing in a software solution and all of the complications it will bring, just fix the problem: lease 20 identical workstations for 3-4 years and build one image.

Don't merge. Keep them separate. (1)

Nefarious Wheel (628136) | more than 4 years ago | (#31892636)

Stop looking for a stove+fridge combination; buy a fridge and a stove.

Seriously, you will spend less money and have a faster result by buying two machines if you need both environments. Unless you're talking about many hundreds (perhaps thousands) of machines, it's difficult to justify building a merged Windows / Ubuntu SOE in terms of delivery architecture. What would the merged SOE look like in terms of budget, after it's filtered through a bunch of consultants? Ubuntu doesn't take much in the way of hardware to drive, so you can get decent performance from lower-spec gear. YMMV, of course, but I'm willing to bet you'd get a cleaner result by keeping the two environments separate, and save a lot of money by avoiding the merge altogether.
