
Reasonable Hardware For Home VM Experimentation?

timothy posted more than 5 years ago | from the reasoning-with-hardware-always-a-risk dept.

Databases 272

cayenne8 writes "I want to experiment at home with setting up multiple VMs and installing software such as Oracle's RAC. While I'm most interested at this time in trying things with Linux and Xen, I'd also like to experiment with things such as VMWare and other applications (yes, maybe even a Windows 'box' in a VM). My main question is: what hardware should I try to get? While I have some money to spend, I don't want, or need, to be laying out serious bread on server-room-class hardware. Are there some used boxes, say on eBay, to look for? Are there any good solutions for new consumer-level hardware from someone like Dell that would be strong enough? I'd even be interested in getting some bare-bones boxes from NewEgg or TigerDirect. What kind of box(es) would I need? Would a quad-core processor in one box be enough? Are there cheap blade servers out there I could get and wire up? Is there a relatively cheap shared disk setup I could buy or put together? I'd like to have something big and strong enough to do at least a 3-node Oracle RAC, for example, running ASM and OCFS."


8 core Mac Pro (5, Funny)

MacColossus (932054) | more than 5 years ago | (#27292207)

Xeon based, easy access to multiple drive bays, dual gigabit ethernet, etc. Runs linux, Windows, Mac OS X

Re:8 core Mac Pro (4, Informative)

jackharrer (972403) | more than 5 years ago | (#27292457)

I hope you're joking... That's waaaay too expensive.

I can run up to 4 VMs on my laptop (Lenovo T60) with 3GB and a Core 2 Duo 2GHz without any problems. Often I need to work on 3 machines (design one + cluster for testing) and it works really well together. Problem is that the disk subsystem sucks, so I suggest you invest in some RAID, but processor- or memory-wise it's enough. If you run Linux, you can run more of them as they use less memory, and processor usage is also nicer. Just stay away from GUIs, as X uses an abysmal amount of processing power in a remote VM for anything more than 800x600.

You don't really need anything very expensive - most commodity hardware nowadays runs VMWare Server easily. It's also free, so even sweeter. Just choose a processor that supports virtualisation, as that speeds up everything a lot.
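For what it's worth, on Linux you can check whether a processor advertises the virtualisation extensions before committing to a box; a minimal sketch (the helper function name is just for illustration):

```shell
#!/bin/sh
# Print the hardware-virtualization flags the CPU advertises:
# "vmx" = Intel VT-x, "svm" = AMD-V. Empty output means the CPU lacks
# the extension (or the BIOS hides it, which this check can't tell apart).
check_virt() {
    # Defaults to the live /proc/cpuinfo; accepts a file path for testing.
    grep -E -o 'vmx|svm' "${1:-/proc/cpuinfo}" | sort -u
}

check_virt || echo "no VT/AMD-V flags found"
```

If the flag is present but KVM/Xen still refuses to start guests, check the BIOS, since vendors frequently ship with the extensions disabled.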

Re:8 core Mac Pro (1)

mrsteveman1 (1010381) | more than 5 years ago | (#27292769)

Expensive relative to what? An arbitrary machine with the same specs? I doubt that.

Expensive relative to what a home user (even one who wants to run VMs) actually needs? Absolutely, you can get away with a machine for a few hundred dollars and get plenty of cpu power and ram.

Re:8 core Mac Pro (0)

Anonymous Coward | more than 5 years ago | (#27293411)

You've got to be kidding about running four guest VMs on your laptop, unless you are talking about running them one at a time.

Re:8 core Mac Pro (1)

AnonGCB (1398517) | more than 5 years ago | (#27292953)

That sounds brilliant, the asker stated he doesn't want to shell out lots of cash, and you suggest paying the mac tax?

Re:8 core Mac Pro (1, Insightful)

MacColossus (932054) | more than 5 years ago | (#27293031)

Actually, the resale value of Mac Pros is unbelievable via eBay and such. You can sell the previous model for almost as much as you paid for it. Stick that in your Mac tax. Have kids in school? You can get it even cheaper. He's experimenting, which would suggest short-term.

Re:8 core Mac Pro (1, Flamebait)

AnonGCB (1398517) | more than 5 years ago | (#27293075)

The point is, it's a much higher cost of entry, considering you can build the same system for much cheaper and not have it in a shitty looking case. There's a reason your post is modded Funny, not Informative.

Quote of the day... (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27293035)

"I predict future happiness for Americans if they can prevent the government from wasting the labors of the people under the pretense of taking care of them."

-Thomas Jefferson

Re:8 core Mac Pro (1)

portalcake625 (1488239) | more than 5 years ago | (#27293077)

Way too much. I can run Mac OS X 10.5 (OSx86), Windows 2000, Ubuntu Studio and DOS all at the same time using VMware on Vista with 2GB of RAM and a 2GHz laptop.

How about... (5, Funny)

Anonymous Coward | more than 5 years ago | (#27292213)

...something like this? []

Re:How about... (1, Interesting)

Anonymous Coward | more than 5 years ago | (#27292991)

Virtualization Shmirtualization!

Just do what this guy did [] .

Great question! (5, Funny)

BadAnalogyGuy (945258) | more than 5 years ago | (#27292221)

I ran into this same situation and found the best cost/performance setup was a Beowulf cluster of netbooks.

You get the cumulative power of those Atom processors and have a huge memory pool to run the VMs within.

Re:Great question! (1)

flyingfsck (986395) | more than 5 years ago | (#27292495)

I have run Virtualbox on my Asus Eee PC 701. A beowulf cluster would be really neat.

Just about any Dual core and up. (4, Informative)

AltGrendel (175092) | more than 5 years ago | (#27292223)

Just check the BIOS to make sure that you can enable virtualization on the motherboard.

Re:Just about any Dual core and up. (1)

DigitalStone (1502161) | more than 5 years ago | (#27292489)

I run about 3-4 different VMs on a dual core with 4 gigs of RAM on any given day. Overclocking is a great way to get more for your money; of course, your mileage may vary.

A good Gigabyte board, some G.Skill RAM, a hardy PSU, a hefty heatsink and a case with open airflow, and you're set. The E6750 (stock 2.66GHz) was able to do 3.3GHz without breaking a sweat. I did some research in overclocking forums and simply put together the same system others had experience with.

This was a year ago, but I'm happy to say the system has been working great. Not sure what the best price for overclocking hardware is today, but it may be something to check into if you are looking to get faster performance for less money.

Re:Just about any Dual core and up. (1, Troll)

Anthony_Cargile (1336739) | more than 5 years ago | (#27293185)

I run about 3-4 different VM's on a dual core with 4 gigs of ram on any given day.

My dual core 2006 Gateway laptop with 2G ram did this [] - almost every version of Windows running at once on top of Ubuntu 8.04 with eye candy. It's not a 64-bit machine, either, so I've known for a while that fairly low-end computers can run virtualization software fairly well.

Re:Just about any Dual core and up. (0, Troll)

Random Destruction (866027) | more than 5 years ago | (#27293263)

you used a video camera to record a computer screen? How quaint.

Re:Just about any Dual core and up. (0, Troll)

Anthony_Cargile (1336739) | more than 5 years ago | (#27293419)

you used a video camera to record a computer screen? How quaint.

Not only does the Eye of GNOME desktop recorder software misbehave with my low-end video hardware, but those CPU cycles alone would have been greater than those of the virtual machines. Don't jump to judgments without knowing all of the details.

Re:Just about any Dual core and up. (2, Informative)

koko775 (617640) | more than 5 years ago | (#27293039)

Don't get an abit motherboard, or at least don't get their Intel P35-based boards. I can't speak to the rest of their stuff, but putting my Abit IP35-based computer to sleep and waking it back up actually *disables* the VM extensions, either freezing upon waking if any were running, or ensuring none start until I power off (reset doesn't cut it).

Other than that, I recommend a Core 2 Quad with lots and lots of RAM, and an array of 1TB SATA drives to RAID.

Also of note: Windows 7 doesn't let you use a real hard drive partition; it needs a hard disk image file, at least on KVM (which is otherwise pretty awesome).

Joke (0, Flamebait)

mother_reincarnated (1099781) | more than 5 years ago | (#27292237)

Is this a joke? Like, seriously, is it? No, there isn't. Move along. This is the exact opposite of cobbling old hardware into a pile.

need special hardware? (5, Informative)

TinBromide (921574) | more than 5 years ago | (#27292243)

I ran my first virtual machine on an Athlon 2200+ with 768 megs of RAM. If it can run Windows 7, you can run a VM or 3 (depending on how heavy you want to get). Essentially, take your computer and subtract the cycles and RAM required to run the OS and background programs; that's the hardware you have left over to run the guests. If the guest OS was compatible with your original hardware, chances are it'll work just fine in a VM.

Re:need special hardware? (1)

mother_reincarnated (1099781) | more than 5 years ago | (#27292285)

I ran my first VM on a 386SX-16, not the point really. This guy wants a SAN and to run Oracle RAC on it.

Re:need special hardware? (1)

perlchild (582235) | more than 5 years ago | (#27292389)

Some time ago, you could run RAC on twin-tailed firewire... Now I can't find the article.

Re:need special hardware? (1)

jimmyharris (605111) | more than 5 years ago | (#27293457)

One here [] and another here [] . Both are for older versions (3 and 4) of RHEL, but the same principles apply.

As someone who works with Oracle RAC and RHEL regularly, I'd recommend skipping the shared physical disk completely and using NFS instead. You could (and we do in testing) run the NFS server virtualised as well.
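To sketch that NFS approach (the paths and the filer hostname below are made up; the mount options follow what Oracle's docs commonly recommend for RAC storage over Linux NFS, so verify them against the documentation for your Oracle version):

```text
# On the NFS server (/etc/exports) -- export a directory for the RAC nodes:
/export/racdata  *(rw,sync,no_root_squash)

# On each RAC node (/etc/fstab) -- note actimeo=0, which disables
# attribute caching; clustered Oracle needs all nodes to see fresh metadata:
nfsfiler:/export/racdata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```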

Dell XPS Studio (5, Interesting)

drsmithy (35869) | more than 5 years ago | (#27292247)

Dell currently has the Studio XPS (2.66GHz Core i7, 3GB RAM, 500GB HDD) going for US$800 - for a basic home virtualisation server, it's hard to go past, especially if you spend another US$80 or so to bump the RAM up to 9GB. I can't imagine you could build it yourself for a whole lot less (depending on how you value your time, of course).

(Damn, sometimes I wish I lived in the US. Stuff is just so bloody cheap there.)

Re:Dell XPS Studio (1)

dbIII (701233) | more than 5 years ago | (#27292799)

I can't imagine you could build it yourself for a whole lot less (depending on how you value your time, of course).

It's not as if you even have to set jumpers now - do you really need more than an hour?

As for really cheap, there's bound to be somewhere near you that does direct imports from Asia. Iwill buries Dell in quality every time, and at the more expensive end, Supermicro boards come directly from there anyway, even if it is a US company.

Re:Dell XPS Studio (0)

Anonymous Coward | more than 5 years ago | (#27293063)

Damn, sometimes I wish I lived in the US. Stuff is just so bloody cheap there.

Actually, MOST of our prices are derived from something called a "Free Market" whereby goods and services are priced in response to what we call "market forces" such as "supply" and "demand". What country do you live in where prices are artificially inflated by your government? Was your government a "limited social contract" to secure life, liberty, and property? No? That's too bad. ):

Re:Dell XPS Studio (1)

Java Pimp (98454) | more than 5 years ago | (#27293427)

You can get a pretty beefy Dell PowerEdge server with a quad core processor for less than $800. Look at the Small Business section under Tower Servers. I was actually thinking about picking one up for this same reason just the other week!

Memory (5, Informative)

David Gerard (12369) | more than 5 years ago | (#27292267)

64-bit Linux host and as absolutely much memory as you can possibly install.

Re:Memory (1)

ghetto2ivy (1228580) | more than 5 years ago | (#27293023)

Seriously! That's all he needs! Why the big hubbub? Mostly he'll need RAM!

Depends how many VMs you're running. (5, Informative)

Deleriux (709637) | more than 5 years ago | (#27292273)

I personally use qemu-kvm and I'm quite happy with it. That's running on a dual-core machine with 2GB of RAM (probably not enough RAM though!).

For the KVM stuff you need a chip which supports Intel's VT or AMD's AMD-V, so your processor is the most important aspect. A quad core would probably be suitable too if you can buy that.

For just experimentation usage its a fantastic alternative to VMWare (I personally got sick of having to recompile the module every time my Kernel got updated).

On my own box I've had about 6 CentOS VMs running at once, but frankly they were not doing much most of the time. Ultimately it's going to boil down to how much load you inflict on the VMs underneath; my experience with it has not been very load-heavy, so I could probably stretch to 9 VMs on my hardware, which is probably on the lower end of the consumer range these days.

The most important bits are your CPU and RAM. If you're after something low-spec you can do a dual core with 2GB of RAM, but you could easily beef that up to a quad core with 8GB of RAM to give you something you can throw more at.

Oh, and QEMU without KVM is painfully slow - I wouldn't suggest it at all.
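To make that concrete, the same guest image can be launched with or without the in-kernel accelerator; a sketch (the helper function, image name and sizes are purely illustrative, not part of any real tool):

```shell
#!/bin/sh
# Build a QEMU command line for a guest. With accel=kvm we add
# -enable-kvm (hardware-assisted via the kernel's KVM module); without
# it QEMU falls back to pure emulation, the painfully slow mode above.
build_qemu_cmd() {
    img=$1; mem_mb=$2; accel=$3
    base="qemu-system-x86_64 -m $mem_mb -hda $img"
    if [ "$accel" = "kvm" ]; then
        echo "$base -enable-kvm"
    else
        echo "$base"
    fi
}

# Example: a 1GB CentOS guest, accelerated vs. not.
build_qemu_cmd centos.img 1024 kvm
build_qemu_cmd centos.img 1024 none
```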

Recommend a Quad Core CPU (2, Interesting)

ya really (1257084) | more than 5 years ago | (#27292293)

I currently run VMware Workstation with an Intel Q6600. VMware has a setting to use one or two of the cores. Generally, for Linux VMs, one core is enough (unless you decide on a GUI). If you go for Windows Vista/7, two is better for performance, but one works okay for XP.

RAM is dirt cheap right now on Newegg as well. I have 8GB of Corsair DDR2 RAM I got for 50 dollars after rebates. Non-GUI, you can get by with 384-512MB of RAM, but otherwise I'd go with at least 1024MB.

The nicer part of VMware Workstation is that it now supports DirectX 9.0c (but only shader model 2; they're still working on 3). Expect a 10 percent or so drop in performance for gaming, though, depending on how many resources you allocate.

Your needs look a bit bigger than mine (mostly trashing VMs and running test software before doing something crazy to the actual box). A bigger CPU such as a Xeon might be more to your liking, since you can have two of them for a total of 8 cores (leading to lots of VMs).

Re:Recommend a Quad Core CPU (1)

bastion_xx (233612) | more than 5 years ago | (#27292437)

Pretty much the base for my ESX lab. Q6600 to support SMP guests, 64-bit OSs, whitebox config (Asus P5-something-or-other, 8GB RAM, small SATA drive for the ESX install and local VMFS storage) and an iSCSI / NFS server for testing VMotion and such. It's ironic, but when my colleagues and I do testing for consulting contracts, we have better lab environments in our basements than the companies for which we're doing the work. It's actually faster to mock up a design or implementation by RDPing to home and doing the work than "requesting" resources in the development labs. For ESX, the only downside is testing desktop usability features like you mentioned (no Aero in Vista/7, etc). Check out the free offerings by VMware, Citrix and Microsoft.

Re:Recommend a Quad Core CPU (4, Informative)

MeanMF (631837) | more than 5 years ago | (#27292663)

Choosing the "dual processor" option in a VM isn't necessarily a good idea, especially if you have a lot of VMs running. It means that whenever the VM needs physical CPU time, it has to wait until two cores free up. And when it does get CPU time, it will always use two cores, even if it's not doing anything with the second one. So if there is a lot of competition for CPU, or if you're running a dual-processor VM on a dual-core host, it can actually cause things to run much slower than if all of the VMs were set to single-processor.

Re:Recommend a Quad Core CPU (3, Interesting)

Anonymous Coward | more than 5 years ago | (#27292789)

Not necessarily. Look up "relaxed co-scheduling." It's been in there since around 2006. (Another reason why VMware outperforms the others.)

Re:Recommend a Quad Core CPU (1)

MeanMF (631837) | more than 5 years ago | (#27293397)

That's only in ESX, not Workstation or Server.

Re:Recommend a Quad Core CPU (1)

Joe U (443617) | more than 5 years ago | (#27293083)

I'm also using a Q6600, Vista x64 host, 4GB (soon to be 8) of RAM and a RAID 10 array. Quad core is great for VMWare, and the Q6600 is an inexpensive workhorse. Go with quad processors for VMs (Xeon for your workload); this is one case where the extra cores will be of use.

What I use. (3, Informative)

(1195047) | more than 5 years ago | (#27292305)

My VM server rig is decidedly low-end compared to many I've seen, but it certainly gets the job done. I custom built the box, mostly from components bought on NewEgg; it has a dual-core AMD64 chip (soon to be upgraded to a quad-core), 3 GB RAM, and about 500 GB total drive space between two IDE (yeah, I know, will upgrade to SATA at some point) drives.

The machine runs Ubuntu Server with VMWare Server 2. I can easily run several Debian and Ubuntu VPS nodes on it under light load, and I use it for experimentation with virtual LANs and dedicated-purpose VMs. I periodically power up a Windows Server 2003 VM, which uses a lot more resources, but it's still fine for testing purposes.

Shared disk (1)

drsmithy (35869) | more than 5 years ago | (#27292313)

Is there a relatively cheap shared disk setup I could buy or put together? I'd like to have something big and strong enough to do at least a 3 node Oracle RAC for an example, running ASM, and OCFS."

Er, if you're running VMs, you inherently have "cheap shared disk" - the disk in the host that any of the VMs can access. :)
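For a RAC-style shared disk specifically, a commonly cited trick under VMware Workstation/Server is to point two guests at the same preallocated vmdk and turn off locking. The keys below are from memory and the path is made up, so check them against VMware's documentation before relying on them; locking is normally on for good reason, so this is strictly a test-lab setup:

```ini
disk.locking = "FALSE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vms/shared/rac_disk.vmdk"
scsi1:0.mode = "independent-persistent"
```

The same fragment goes into each guest's .vmx file, with both pointing at one preallocated (not growable) vmdk.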

Lots of deals on eBay (5, Informative)

rackserverdeals (1503561) | more than 5 years ago | (#27292329)

You can find lots of used servers on eBay that you can mess around with. Sun's v20z servers are pretty cheap and have a decent amount of power.

A lot of the stuff I've run across is rack mounted, and keep in mind that rack-mounted servers are loud [] in most cases. So it may not be the best thing to play around with in your home or office.

You don't really need any special CPU to mess around with virtualization; you won't get "full" virtualization, but I don't think that will stop you. For more info, check out this page [] .

I'm currently running a number of VMs on my desktop using Sun's VirtualBox (xVM, or whatever they're calling it now). Even within some of the Solaris VMs I'm running Solaris containers, so I'm doing virtualization upon virtualization, and my processor doesn't have virtualization technology support.

If you want to do full virtualization, look for server-class CPUs: Xeons and Opterons. Using Newegg's power search [] there is an option to filter by CPUs that support virtualization technology.

If your primary focus is Oracle RAC, you may want to look at Oracle VM [], which is Xen-based.

Re:Lots of deals on eBay (1)

CAIMLAS (41445) | more than 5 years ago | (#27292877)

If you want to do full virtualization look for server class CPUs. Xeons and Opterons. Using Newegg's power search [] there is an option to filter by CPU's that support virtualization technology.

You can do that with "desktop" class CPUs too - just fine. The only substantial difference between the Opteron and Phenom 1 and 2 is the ability to have multiple CPUs; a Phenom, or even an Athlon 64 X2, or I believe an Intel Core or Core Duo, will do the job just fine. They all (IIRC) have VT extensions.

Used or scrapped server-class machine (4, Interesting)

roc97007 (608802) | more than 5 years ago | (#27292337)

You can run virtual instances on practically anything. I use VMWare Workstation on an older AMD Athlon 3200+ (the machine on which I'm typing this) and get acceptable performance if I only have one instance booted at a time. You're not going to be playing video-intensive games on the instance, but it'll work fine.

I maintain a few websites (my blog, a gallery, couple other things) on an old server class machine in the garage. Companies often scrap servers after the 3 year warranty expires, or they've finished depreciating (depending on individual business rules) and they're often fast enough to make reasonable virtual servers. Often you can pick them up at a scrap sale or surplus store, or, if your company has an IT department, get permission to snag a machine that's about to be scrapped.

I recently brought up VMWare's free bare-metal hypervisor ESXi and was surprised at how easy it was to set up and create instances. VMWare has a free Physical-to-Virtual converter you might want to experiment with. It works great with Windows, but is kinda hit-and-miss with Linux.

It doesn't matter all that much (0)

Anonymous Coward | more than 5 years ago | (#27292361)

Any of the chips with x86 hardware virtualisation should do. >=2GB RAM would also be good.

VMware don't need much (0)

Anonymous Coward | more than 5 years ago | (#27292363)

Of course faster CPU and as much RAM as you can get are good things but you don't need high-end stuff for VMware. I was running VMware on a 450 Mhz Celeron with 512 MB RAM back 10 years ago and it worked fine (Linux host and Windows 2000 guest).

Not that much (2, Interesting)

Beached (52204) | more than 5 years ago | (#27292393)

You can do it "well" on a dual core with 4GB of RAM. Even less works, but with today's prices you can get a system for a couple hundred if you watch for sales. RAM is the biggest killer that you will notice. Then again, with quad cores with VM assistance going for under $200 CDN, that's relatively cheap. If you're worried about HD performance, a couple of 500GB drives striped will give you over 100MB/s of read speed for a relatively small investment.

Depends how many VMs... (1, Informative)

Anonymous Coward | more than 5 years ago | (#27292395)

I get along quite happily running 5 or 6 VMs on a Dell Vostro slimline desktop (Core 2 Quad, 8GB RAM and a 10k RPM disk) that cost me no more than £400 six months ago, and that's using Microsoft's Hyper-V Server (free download; it runs as the hypervisor itself, so there's no Windows Server instance underneath it - don't mistake 'Hyper-V Server' for 'Hyper-V for Windows Server 2008', they are not the same :) ).

Opteron with IOMMU (0)

Anonymous Coward | more than 5 years ago | (#27292475)

I got a dual-quad-Opteron box for my messing about, making sure to get CPUs new enough to have an IOMMU for faster I/O virtualization.

kvm/qemu is the sweetest virt solution nowadays imo - vms scheduled by linux itself, rather than some other wonky kernel.

some cpu and lots of RAM (2, Interesting)

higuita (129722) | more than 5 years ago | (#27292481)

We have about 4 machines running ESX, each with 2 quad-core CPUs and 64GB of RAM, hosting about 100 VMs (many Linux and Windows)... and we still have about 50% of resources free.

So grab one quad-core machine with lots of RAM (for Oracle RAC+ASM+DB you will need at least about 4GB for the 3 RAC nodes; the more the better).

As this is for testing, I would buy a plain quad-core PC with 6 to 8GB of RAM and install a 64-bit Linux with Xen or VMware ESXi.

If you have more money, you can buy more RAM or even CPU, but you don't really need a blade or a server; a plain PC will do.

Oh, I forgot: HDs. Buy at least 2 HDs to spread the IO load. If you want RAID, then you need 4 HDs for a RAID 10... You can also try iSCSI with an Openfiler-based NAS/SAN (another PC with lots of HDs and several gigabit network cards); of course, the server also needs several gigabit network cards to increase the IO bandwidth of iSCSI.

Have fun.
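That RAM arithmetic can be sketched as a back-of-the-envelope calculation (the per-VM and host-overhead figures below are assumptions for a test rig, not Oracle's official minimums):

```shell
#!/bin/sh
# Rough host-RAM sizing for an n-node RAC test rig:
# total = nodes * RAM-per-VM + hypervisor/host overhead.
size_host_ram_mb() {
    nodes=$1; per_vm_mb=$2; host_overhead_mb=$3
    echo $(( nodes * per_vm_mb + host_overhead_mb ))
}

# 3 RAC nodes at ~1.5GB each, plus ~1GB for the host and hypervisor:
size_host_ram_mb 3 1536 1024    # -> 5632, so a 6-8GB box fits
```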

Do your homework before purchasing White Box HW (3, Informative)

cthornhill (950065) | more than 5 years ago | (#27292583)

I strongly advise you to do your homework before spending money on non-server-class hardware (or before selecting a server, for that matter). VMWare runs on a lot of hardware, but it also fails badly on a lot of consumer-grade motherboards. There are some lists (white box hardware lists such as []) you can check. After spending some time on name-brand server HW and on white box gear, I can tell you that the name-brand server gear is a lot more compatible, easier to work with, and worth the money (if you have it).

If you are doing casual stuff and don't mind the considerable pain of getting patches and selecting disk systems and other components, consumer gear will let you play a bit. As for doing anything serious with more than one VM on a box - not likely. Xen is a commitment, as is VMWare or any other VM system. It is going to eat the box if you do anything other than dabble in it, and you are going to spend some real money if you intend to do much with VMWare (think $3K-$5K to get very serious).

Running a VM is easy. Running multiple servers, backups, external disk systems, etc. is real work and costs real money for all the extra stuff you will need. If you stick to Linux you can save a bunch, but if you intend to do any real work with MS servers, you are going to need several licenses, iSCSI targets, backup tools, and so on. You won't actually learn much before you go to that level that you can't learn with VMWare Workstation (a great product, but not anything like a production server environment). You can get your feet wet for nothing but time with most of these tools, but you can't get real, in-depth experience with what it takes to run a production cluster - replication, remote storage, live replication, and all the rest - unless you actually set up a production-like system. That means real servers (white box or name brand) and lots of hardware.

You won't be able to see much with less than 8 cores, 16GB, some local RAID and iSCSI network targets. You can get started, but if you are going to spend money, I really think you should either buy gear that builds towards a real server environment, or stick to home systems and run VMWare Workstation or some other standalone VM just to play with it. User Mode Linux (not very popular today) or some Xen setups for personal use would give you some understanding of VM concepts, but not a lot of basis for real production issues (at least they did not for me, and I was a pretty heavy development user). Production VM deployments have a lot of issues that all take real in-depth study, and lots of resources (iron) to get right.

On the other hand, you can get a Supermicro, Dell, or HP server with dual quad-core Xeons for less than $4000, with some nice disk. Get 4 or 5 containers under a VM, set up replication to another server and a remote iSCSI disk, and then you have enough to start doing real learning. Of course, the license fees will be way more than the hardware costs if you are using MS tools and VMWare. ESXi is OK, but unless you are going to go deep and do it all the hard way (hack the OS), you can't do a lot with the free version. With Xen, if all you want is to run a couple versions of Linux, just get a quad-core box and have some fun; it doesn't really give you much production knowledge, but you will have some interesting tests you can try.

What I am really saying is: with only 4 cores you can do some useful things to support development, and you might make a nice personal server for your private web sites, but you don't have enough iron to experience the real issues of production VM management. If you are going past what a developer (or a tester) does and looking at an operations-type environment, you will need 8 to 16 cores on multiple boxes. That is a lot more than a home user typically wants to spend.

IMO you also can't really expect to be really good on more than one system unless you do it day in and day out. There is way too much to learn about VMWare to do more than that as a full-time job if you are doing serious production work. This is OK, as jobs with any one of these are going to be pretty highly paid and intense - there is only so much you can stand... Bottom line: you can start with a 4-core consumer system, but you can't do much more than get your toes wet on that hardware. Size and money count in this game. You are building nothing less than the equivalent of a mainframe - don't expect it to be easy or cheap. You can learn to do valuable stuff as a single user, but learning real production configuration is going to take real production hardware. If you are going to buy server gear, go ahead and buy the real thing, not some game PC. There is a big difference.

Re:Do your homework before purchasing White Box HW (1)

Jurily (900488) | more than 5 years ago | (#27292775)

Wow. That's the first time I've seen a /. comment that completely filled my screen. Thank you.

Also tl;dr.

Re:Do your homework before purchasing White Box HW (5, Funny)

rackserverdeals (1503561) | more than 5 years ago | (#27292849)

and if you're not careful, VMWare apparently makes the Enter key inoperable :)

Much better solution (4, Informative)

codepunk (167897) | more than 5 years ago | (#27292589)

Amazon EC2 is what I use for stuff like this, both Windows and Linux boxes, everything available at the push of a button. I also use it a lot for development: fire up a machine, load and go.

Re:Much better solution (1)

JAlexoi (1085785) | more than 5 years ago | (#27292673)

Well, if the person posting wants to experiment with the infrastructure itself, then EC2 is definitely NOT an option, since EC2 is a managed infrastructure.

Re:Much better solution (2, Informative)

codepunk (167897) | more than 5 years ago | (#27292823)

Well, yes, it will not teach you how to plug an Ethernet interface into a switch. However, the poster in this case said he wants to run in a VM environment but money is limited. In this case, if he wants to play with big boxes, configuration testing, etc., there is no better option available to him than EC2.

Re:Much better solution (2, Insightful)

rackserverdeals (1503561) | more than 5 years ago | (#27293011)

Actually, if he's looking to play around with Oracle RAC, he's looking at virtualization technology as a way to do that without having to buy multiple servers. In that case, Amazon EC2 would be a good idea.

If he's more interested in playing with Xen than RAC, then no.

What does your budget allow? (2, Informative)

itomato (91092) | more than 5 years ago | (#27292595)

Reading 'cayenne8', I can't help but imagine a V8 Porsche, and because I'm a car guy, for good or bad, this shifts the focus of my comment toward resources, specifically what is available, versus what is acceptable or tolerable.

Let's say you're a one-man Lab, incorporating all the SA, Developer, and Midware functions into your 'play' with this environment. How much time will each environment spend heavily plowing into loads?

If your intent is to deploy RAC in a multitude of scenarios, in short order, with a minimum of intervention, you may be able to get away with $1500 to $2500 worth of NewEgg parts (think high throughput: RAID, max RAM, short access times, etc.) and the virtualization technology of your choice. Personally, I find VirtualBox capable of everything I need as far as virtualization and deployment go; however, you need to be able to leverage 'fencing', which likely puts you into VMWare territory.
Fortunately, VMWare Server is 'free', and CentOS and OpenSuSE support some of the more advanced features of HA on Linux. Then again, if we're looking at resources as a major factor, then Red Hat and Novell might be worth looking at, as they both offer 60 to 90-day evaluation licenses for their enterprise Linux products, which may offer a prettier and more 'honest' (per the documentation and common expectations) implementation of their respective HA features than the freely available, and in some cases in-flux, versions of the same software.

As far as RAC goes, take a look at the requirements for RAC, per Oracle's installation guidelines, and size/spec from there. I believe you can get away with 16GB total, if you have the capability to size the VM's memory access, or otherwise configure the amount of addressable memory, or put up with or hack Oracle's RAC installation pre-flight. There is also valuable documentation available on your chosen OS vendor's site, which may even be Oracle, who knows..

You may be hell-bent on performance, however, and you may be looking for the ultimate grasp of technological perfection, as it exists at Sun Mar 22 17:29:59 EDT 2009. In this case, you may want to look at Xen, which is available on Solaris as their 'xVM' technology, as well as on various Linuxes and BSDs.
On the other hand, you may be a Mac guy, with a decked-out Octo-core Xeon Mac Pro, where you have the option of Parallels and Virtual PC and something else, in addition to Sun's VirtualBox mentioned above.

Ultimately, things to keep in mind may be shared disk requirements, fencing options, and VM disk and memory access.

Re:What does your budget allow? (1)

funwithBSD (245349) | more than 5 years ago | (#27292827)

The only difficulty with putting RAC on "a" machine is that the configuration of the networking tends to be the major PITA.

All that will get sidestepped by going virtual. If you don't expect to be maintaining the hardware, it is not likely to matter.

Re:What does your budget allow? (1)

itomato (91092) | more than 5 years ago | (#27292855)

That's why I use VirtualBox. Setting up bridged and NAT'ed networks with it is easy and reliable.

With Xen or the like, there's quite a bit more that needs to be accounted for, and now that I think about it, multiple physical NICs may not be a bad idea for this one-box lab.

Basically, Yes to all. (1)

Anonymous Coward | more than 5 years ago | (#27292597)

We're running Xen heavily at my work, using mostly commodity-type hardware. One point I will make off the bat is that Xen can't run Windows on all types of Intel processors: some of the crippled dual-core Celeron variants lack the hardware virtualization (VT) bit needed to run an unmodified OS so that it can 'appear' native to the application. All Opteron processors from AMD have this bit available, so the Dell PowerEdge T105 will work well for that, or any Opteron-based system. We bought a T105 so that we could have Windows on a virtual Xen instance, and so far it's been working well.

I've been noticing that you can get older P4 based, rack-mount systems pretty cheap on eBay, and the supporting parts to set up your 'lab' environment would not be too bad either.

Two machines (4, Informative)

digitalhermit (113459) | more than 5 years ago | (#27292609)

You can do Oracle with just a single machine running multiple VMs; however, if you really want to get serious, you should consider building two physical machines. On each machine, create a VM or two with 1-2G of RAM. For the shared disk, use DRBD volumes between the two.

My test RAC cluster has two AMD X2 64-bit systems with two gigabit NICs each. CompUSA has a similar machine for about $212 on sale this week. Newegg prices are similar. You'll need to add a couple of extra gigabit NICs and some more storage. It still should cost under $400 each.

On each physical system I used CentOS 5.2 with Xen. I created LVMs on the physical machines as the root volumes, and also carved out a separate volume to back the shared disk. Then I created a Xen virtual machine on each host with 1.5G of RAM. I put the DRBD network on one pair of NICs; the other pair was used for the network and heartbeat (virtual ethernet devices).
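The DRBD shared-disk piece described above can be sketched as a minimal resource definition. The hostnames, device paths, and replication addresses below are made-up examples, not the poster's actual config:

```
# /etc/drbd.conf resource sketch (DRBD 8.x syntax; names are examples)
resource racshared {
  protocol C;                     # synchronous replication, needed for shared data
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/racshared; # the LVM volume backing the shared disk
    address   192.168.100.1:7788; # put this on the dedicated DRBD NIC pair
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/racshared;
    address   192.168.100.2:7788;
    meta-disk internal;
  }
}
```

After `drbdadm create-md racshared` and bringing the resource up on both nodes, the /dev/drbd0 device is what you hand to the guests as their shared disk.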

on 64 bit... (1)

NemoinSpace (1118137) | more than 5 years ago | (#27292643)

Make sure you are buying a processor (and motherboard) that supports VT, otherwise you may not be able to host other 64-bit systems.
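On Linux you can check for that VT support before relying on it: the CPU advertises Intel VT-x as the `vmx` flag and AMD-V as `svm`. A minimal sketch (flag parsing only; note that a BIOS can still disable VT even when the flag is present):

```shell
# Report which hardware virtualization flag a CPU flags string contains.
# Takes the flags as an argument so it can be checked against any string.
vt_support() {
  case "$1" in
    *vmx*) echo "Intel VT-x" ;;
    *svm*) echo "AMD-V" ;;
    *)     echo "none" ;;
  esac
}

# Check the first CPU on the local machine (Linux only).
vt_support "$(grep -m1 '^flags' /proc/cpuinfo)"
```

If this prints "none" on a chip that should have VT, look for a disabled virtualization option in the BIOS.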

VirtualBox (1)

Thinboy00 (1190815) | more than 5 years ago | (#27292671)

VirtualBox is fairly good even on mediocre hardware. The more RAM and CPU the better, but you don't need a quad-core with 8 gigs of RAM just to run a virtualizer. Heck, you don't even need a dual core for that. Do make sure you have lots of RAM though (I have ~2 gigs, and ~2 gigs swap as well, though Linux never uses it anyway). YMMV, so don't use this info for anything mission-critical.
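If you do want to cap what each guest takes on modest hardware, VirtualBox's CLI makes that explicit. A minimal sketch; the VM name and sizes here are made up for illustration:

```shell
# Register a small VM and cap its RAM/CPU so several guests can coexist
# on a box with ~2 gigs of RAM. "rac-node1" is a hypothetical name.
VBoxManage createvm --name rac-node1 --register
VBoxManage modifyvm rac-node1 --memory 1024 --cpus 1
```

Keeping per-guest memory small like this is what lets a non-quad-core, non-8GB machine stay usable while a couple of VMs run.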

Don't bother with 'server' hardware (3, Insightful)

Minwee (522556) | more than 5 years ago | (#27292675)

The difference between 'server class' hardware and your beige-box PC is that the more expensive 'server' is a lot more reliable and has extra remote access and hardware monitoring features. That's about it. If all you want is to run virtual machines in a test environment, just get a desktop with a hefty CPU and a whole whack of RAM and you're set. A good 'gaming' machine without the video card would be fine. You don't need to spend extra for a 'server'.

Almost anything will do (1)

kakris (126307) | more than 5 years ago | (#27292677)

Since it doesn't sound like you're planning to actually run any production software on this machine, just about anything will do. Memory will probably be your biggest need, so at least two or three gigs might be in order. Disk space is cheap, and processor power probably won't matter too much for experimenting. As far as shared disk goes, try iSCSI target mode; it's supported on most Linux distributions, and it works with most cluster software.

Dell Inspiron 530 Q6600 Models w 3-4Gig (0)

Anonymous Coward | more than 5 years ago | (#27292747)

Look for the sale.

I got mine shipped to the door for $530 USD end of 2008.

I threw in 8 gigs ($100) and an ATI 4850 ($150) and the damn thing is still cheap. Only the Q6600 models will have the 2-rail power supply with the PCI-E power connector. But you may not care, for your needs.

The Q6600 will run 64-bit OSes in VMs even on a 32-bit host.

All you need is memory (1)

Gavin Scott (15916) | more than 5 years ago | (#27292761)

Really for getting started you just need memory. Everything else is just a convenience in terms of performance and won't really buy you more functionality.

I run XP as my host OS with just 2GB of physical RAM, and then do development in a 768MB Linux partition under that using VMWare Workstation. You can do the same thing for free with Xen or VMWare Player or Server.

When 2-4GB is not enough, then either upgrade your workstation to a 64-bit OS and throw in as much memory as you can fit/afford, or bring up another box.

You can get VMWare ESXi for free and do bare metal virtualization if you want to, or run the 64-bit Linux distro of your choice and instances of VMWare Server on top of that, again all for free.

Memory is the most important resource, then having more than one CPU core is nice, but for "home experimenting" you don't really need anything fancy.

If you have money, wait a couple weeks until the Nehalem server chips come out and get a dual-CPU server with 16+GB of RAM. That will give you 8 screaming fast cores, ECC memory, and the ability to run as many virtual machines as you could possibly want.


Cheese on a bread budget (2, Interesting)

CAIMLAS (41445) | more than 5 years ago | (#27292803)

You say you want to go "cheap", that you don't want to spent too much money, yadda yadda... and then you go on to mention things like "cheap" shared disk and "cheap" blade servers?

What you realistically need and want are two different things.

I'd suggest a cheap quad-core AMD Phenom II system with 8G or so of RAM. Nothing too fancy. I assume you're going to be running a Windows host OS, or VMWare ESX. More RAM will be needed for the Windows host OS, obviously.

Absolute lowest-end hardware you'd want to look at getting is an AMD Athlon 64 X2 or Intel Core (IIRC) based system. In other words, you want/need the VT support, or it'll be a purely emulated environment, substantially slower than native (30%?) rather than just marginally (10%?).

I recommend AMD hardware because it's got a better price/performance point, and because unlike the other stuff in the "reasonable midprice" range for Intel, it's got the memory controller/north bridge integrated into the CPU (for newer gen stuff). I'd say go Phenom or Phenom II without any hesitation.

With a CPU like this, there's no reason you couldn't build a full system for around $450-500, sans storage. You could probably find a suitable "starter"/deal system for $300 from TigerDirect that'll do the job just fine with a little more RAM and another drive.

For disk, just go with a SATA RAID card (LSI are good) and three 1TB disks. That's about as cheap as you'll get and still have room to work.

Things to consider: (1)

gsporter (1507851) | more than 5 years ago | (#27292835)

I won't get into the debate on "name brand" verses "whitebox" or do it yourself.
I am a teacher so money is an issue for me.

Things I have found that work for me in a lab (rather than production environment):

dual or quad processor (AMD seems to work and lower $)

processor / mother board support for hardware virtualization

Lots of ram ( I use x64 OS with 8-16G)

Choice of VM technology - Virtual PC/Server is fine for MS-based guests, but VMWare Server all the way for anything else

I don't get too wrapped up in RAID solutions; HDs are cheap and DVD backup is even cheaper

If you need to back up or change out VMs, consider building an Openfiler NAS. It even
supports iSCSI if you have the funds.

First and foremost, vm's can be a fun learning tool.


Hardware I use (1)

itr2401 (873985) | more than 5 years ago | (#27292837)

What I would do is look at what you need in hardware terms to run something like VMware ESXi. Firstly, ESXi is at the best price point (it's free) whilst giving you most of the VI3 capabilities. The hardware that I run it on is pretty much whitebox hardware - Intel CPU (Q6600, 8GB RAM), Intel or Supermicro motherboard. The main requirement is that in order to install ESXi there needs to be a hardware controller that it can find and install to. Have a look at the VI3 HCL to find your hardware and ensure things will run. On one system I use an Adaptec 19160 to boot ESXi from, then use iSCSI to an OpenFiler machine where all the VMs are stored. The other machine is a Dell T3400 where, with the onboard SATA configured as non-RAID / AHCI, ESXi can install directly to any of the attached SATA drives. Another solution is to use an Adaptec SATA RAID card for local storage.

It depends on how much you want to spend on power. (1)

GuyverDH (232921) | more than 5 years ago | (#27292841)

Really.. it all boils down to your monthly utility fees and what you are willing to pay...

You can pick up 1-off servers being ditched by corporations (if you degauss the drives and certify that you will destroy them if you ever stop using them, you may get the drives as well) - otherwise, it will probably be sans hard drives, for next to nothing....

I picked up a test platform, Two Dual Core 2.66Ghz 64 Bit Xeons, 16GB RAM, 8 hot swap U320 72GB Drives, battery backed raid caching controller, dvd, floppy, 2 x 720 watt hot swap power supplies, in a nice deskside case, for zilch, nada, zip.. Just haul it away...

Currently running VMWare ESXi, with 16 VMs installed (I normally don't run more than 6 at a time, but it will run 12 without too much latency)...

Solaris x86 (64 bit and 32bit installs), Ubuntu, Nexenta (the v2 beta), OpenSolaris, Windows 7 Beta (32 and 64 bit) - blech, CentOS 4 & 5 (32 and 64 bit), PCLinuxOS 2007 and 2009, ReactOS, Windows XP (32 and 64) - configured as static - ie - no changes ever saved...

It's lots of fun to work with, and a great learning platform.

What is "VMware"? (0)

Anonymous Coward | more than 5 years ago | (#27292851)

What is VMware? There is no such thing as "VMware".

There's VMware Workstation, Fusion, and Server. All of which are hosted applications (Server/Workstation on Win/Lin, Fusion on OS X).

Then there's ESX 3.5 (or ESXi 3.5), which virtualizes a completely different set of hardware for the VM than any of the hosted solutions do. ESX/ESXi allows for v4 virtual hardware ONLY (based on a 440BX Intel reference platform); the hosted solutions allow for v4 or v7 virtual hardware (which is similar to v4, but supports 3D acceleration, passthrough PCI devices, etc).

Hosted solutions will run on anything that can boot the above operating systems. ESX/ESXi **REQUIRES** a specific set of hardware, and you'd be best to find something on the VMware HCL as such (a Dell T300, or an HP ML350).

So, again, what is VMware? Hosted or bare metal hypervisor? Two completely different series of products, and despite "generally" virtualizing the same hardware (from the VM's point of view), there are still heavy, radical differences between the two.


I've setup Oracle RAC on VMware (1)

Stone316 (629009) | more than 5 years ago | (#27292879)

But the question is, how many VM's do you plan on running at once?

I installed a 2-node RAC environment on VMware using my laptop, which was a 2GHz Intel Core 2 Duo with 2GB of RAM (instructions here).

So you don't need something super powerful if you don't plan on leaving them all running 24x7 and just start up the ones you are playing with at the time. A quad-core system with at least 4GB of RAM and lots of disk should be plenty.

I would stay away from running any of your environments on external USB drives. I have a 1TB USB 2.0 drive and it's too slow for anything heavy such as installing Oracle E-Business Suite. However, databases and RAC worked fine.

I've played around a fair bit with Oracle and VMware so if you have any questions feel free to ask.

I'm thinking of doing the same thing myself (1)

duplo1 (719988) | more than 5 years ago | (#27292883)

I've been thinking of maxing out my 8-core Mac Pro with 32GB RAM and gobs of disk space, installing XenServer or VMware ESX server, and booting via rEFIt. Another option I'm considering is picking up a new box altogether. I have my eye on a Dell server with two quad-core AMD CPUs. With a 300GB disk and 32GB RAM, it goes for just over $3k. Add in a few SAS drives and you're around $4k, but you have a highly capable system capable of running more VMs than you probably need.

Stand-alone "blades", multi-home Linux SAN (1)

jimbudncl (1263912) | more than 5 years ago | (#27292907)

A while back I ran across these little boxes. They were being phased out and were on sale. I bought one, found that VMware ESXi works great on them... so I got 5 more ;)

I set them up with ESXi, and put 5 1TB drives in a midtower case running Linux with 3 GigE NICs, and setup NFS shares and iSCSI targets (just to play around). Bond the NICs and have ESX use it for datastores... all for $3,600.

Tada! Instant "blade" environment w/SAN! Sure, the performance isn't quite the same, but for proving out concepts and experimenting, it's awesome. And ESX is fun to play with compared to plain old Server (1 or 2). Not to be biased, but VMware is by far the most well stocked, feature wise, virtualization solution out there. I've personally used it since pre-1.0 back in 1999-2000.

I'm mentioning this since you mentioned VMware, and I think someone above me mentioned it as well, but it's an important point: VMware ESXi is far more picky about hardware than Linux. If you want to play with it at some point, make sure whatever you buy will work with it. Check out the community whitebox HCL sites, which give you more hardware compatibility insight than VMware's documentation.

Have fun!


hmmm (1)

thatskinnyguy (1129515) | more than 5 years ago | (#27292947)

For modeling something like RAC, a dual-core anything with tons of RAM would be necessary.

However, the devil's advocate in me says not to go virtual with this project unless you have some speedy-fast fibre channel SAN at your disposal. Reason being: you aren't going to see the same performance in the VMs as you would with physical hardware, especially with a database backend that is constantly thrashing your drives depending on load.

Re:hmmm (1)

thatskinnyguy (1129515) | more than 5 years ago | (#27292973)

Also, I forgot to add, VMs defeat the purpose of clustering if the physical hardware fails. Meaning: one physical machine down, n nodes in the cluster down.

You just need enough RAM (3, Informative)

marynya (735459) | more than 5 years ago | (#27292969)

The main requirement is enough RAM for two operating systems plus some extra for the virtualization system. The CPU is less important. I run Windows XP Pro as a virtual system on a Linux host with VMware Workstation 6. It is a 5-year-old Athlon 3000+ box with 1 GB of RAM. I allocate 512 MB to Windows, which is about the minimum for XP. Current Linux distributions need at least 256 MB and VMware is something of a memory hog itself so 1 GB is about the minimum RAM for this setup. Windows is perhaps just a smidgen slower than it would be if running natively on the same hardware but the difference is minimal. It does not have much effect on the speed of Linux apps running simultaneously. Things bog down fast if you try to run more than one virtual system simultaneously but VMware is good at using multiple processors for this. I did some work which involved running up to 6 instances of FreeBSD simultaneously on an 8-core Xeon system with 4 GB RAM. Up to 6 it did not slow down much. Over 6 it got sludgy. Have fun! Mike

Memory, memory and more memory (1)

spaceyhackerlady (462530) | more than 5 years ago | (#27292983)

Buy all the memory you can afford. Then buy some more.

Virtualization is a memory pig. Cool, fun to play with, but still a memory pig.


Openfiler + USB Flash is a great way to do ESXi. (2, Informative)

johnthorensen (539527) | more than 5 years ago | (#27292985)

The biggest thing that you have to watch out for with VMWare ESXi is the hardware compatibility list. You will run into trouble with two major components: RAID controllers and network adapters.

The network adapter solution is simple: buy the most plain-jane Intel PCI or PCIe adapter that you can find. Examples of ones that are known to work right out of the box are the Intel PWLA8391GT (single-port PCI) and the Intel EXPI9402PT (dual-port PCIe). I own both of these and can personally confirm operation with the latest version of VMWare ESXi.

The drive controller situation is both complicated and -- more importantly -- expensive. Overall, Adaptec seems to be the most well-supported controller hardware out there. I have tried LSI controllers, but they often don't play well with desktop boards. Unfortunately for experimenters, the built-in RAID on practically every Intel motherboard is completely unsupported in RAID mode. Obviously no enterprise environment would be using on-board RAID like that, but it would be nice to have for experimentation.

Which brings me to my favorite storage solution for ESXi: Openfiler. Openfiler is an open-source NAS/SAN solution based on rPath Linux. It turns any supported PC into a storage appliance, and can share its storage in a plethora of ways. In the case of a virtualization effort, it has two major things going for it: it supports any storage controller that Linux supports, and it supports iSCSI and NFS.

If, say, you do have a machine sitting there with Intel on-board RAID, you can install Openfiler there. While the hardware might not work under ESXi, it'll work great for Openfiler. Even better, Openfiler also supports Linux software RAID which can be superior when it comes to disaster recovery (no need to have a specific controller card to see your data). With this in mind, you'll be able to get Openfiler running on just about any hunk of shit box you have sitting around.

Once you have Openfiler set up, you can take the next step in virtualization-on-the-cheap: installing ESXi on a USB flash drive. There are a number of tutorials on the web for this (just google 'ESXi USB flash install'), but the basic process amounts to extracting the drive image from the ESXi installation archive and simply writing it to flash with dd (on Linux) or physdiskwrite (on Windows). Once this is done, you can plug the flash drive into nearly *any* recent x86 hardware and it will boot ESXi. A really neat feature that you get along with this is the ability to substitute hardware with ease, and upgrade to later versions of ESXi simply by swapping the flash drive.
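The extract-and-dd step described above looks roughly like this on Linux. The archive and image filenames are placeholders you must match to your actual ESXi download, and the target device is a placeholder too; dd to the wrong device destroys it:

```shell
# DANGER: verify the USB device node with `fdisk -l` before writing.
# Filenames below are illustrative, not the real archive layout.
tar xzf VMware-VMvisor-InstallerCD.tar.gz      # unpack the installer archive
dd if=VMware-VMvisor-big.dd of=/dev/sdX bs=1M  # raw-write the image to flash
```

On Windows, physdiskwrite plays the role of dd, as the parent notes.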

Once you have ESXi installed, create an iSCSI volume on your Openfiler box. Then, use the VMWare management software to connect the ESXi box to your Openfiler iSCSI volume. You can then create virtual disks and machines from the actual USB-flash-booted VMWare host, all of which will be stored on your Openfiler machine. You may also want to try experimenting with NFS instead of iSCSI. There are a couple proponents of this out there that say under certain circumstances it's even faster than iSCSI. It also makes backing up your virtual machines a little simpler since an NFS share is generally easier to get to than iSCSI from most machines. Another cool aspect of the Openfiler-based configuration is that you will get access to another whiz-bang feature of VMWare called vMotion. Since the VMs and their disks are stored centrally, you can actually move the VM execution from one ESXi box to another - on the fly.

In all, this is a great way to get your feet wet in virtualization because you can have a pretty sophisticated setup with very basic commodity hardware. If you want to go the extra mile and get really fancy, put a dedicated gigabit NIC (or two, bonded) in each box and enable jumbo frames; the SAN will be more than fast enough for most anything you'd like to do.

Good luck!

Re:Openfiler + USB Flash is a great way to do ESXi (1)

BitZtream (692029) | more than 5 years ago | (#27293117)

You could also checkout FreeNAS as an alternative to Openfiler, just depends on if you want to run Linux or FreeBSD on your NAS.

Second, the cheap crap they put on motherboards and call 'RAID' is generally nothing of the sort. It's almost always handled by the CPU itself, either via the driver or the System Management Mode of the CPU, and as such is no better than using the software RAID provided by your OS. In most cases it's better to use the software RAID, as it's made to work with your OS in the most efficient manner, rather than however the MB designers wanted to do it. And since the MB designers were just looking for some cheapass way to add the word RAID to their marketing material, it's probably not exactly the best way to go about it.

VMotion does not work with ESXi. You must have ESX and Virtual Center to gain access to all the neat stuff involving more than one vmware server.

Re:Openfiler + USB Flash is a great way to do ESXi (1)

johnthorensen (539527) | more than 5 years ago | (#27293335)

Good comments, but vMotion most certainly does work with ESXi. Yes you need Virtual Center, but ESX is not a prerequisite.

In the long term, I believe that VMware will see greater uptake of ESXi vs. ESX since it is a lot thinner and plays better in a dense environment.

Generic server from Shuttle (1)

strredwolf (532) | more than 5 years ago | (#27293001)

I've bought a small Shuttle K45 system, adding my own Intel chip and extras in there. Cost about $450 for my setup. About to put VMWare Server on it. I'll let you know how it works out.

Re:Generic server from Shuttle (1)

KDingo (944605) | more than 5 years ago | (#27293173)

I wasn't expecting a Shuttle in here, but this is somewhat like my setup, at least hardware-wise. I got a Shuttle SG31G2 with an Intel quad-core and maxed it out at 4GB. This thing is very quiet and I'm very happy with it. While using it as my main desktop, I found out that Xen doesn't exactly play well with X for some reason (openSUSE 11.1 64-bit), where if I switched to console mode I couldn't switch back to X!

I ended up using KVM with Xen's network bridge setup scripts; that totally helps. The only thing I would like, from a desktop user perspective, is a management interface.

My hints (4, Informative)

kosmosik (654958) | more than 5 years ago | (#27293037)

Well, you don't clearly state what you wish to accomplish nor how much money you have, so it is hard to answer. But maybe such a setup will be OK.

Build yourself custom PCs.

Storage server:
- good and big enclosure which can fit a large amount of drives
- moderate 64-bit AMD processor (really any - you will not be doing any serious processing on the storage server)
- any amount of RAM (really 1 or 2 gigs will be enough)
- mobo with good SATA AHCI support (for RAID) and NIC (any - for management) onboard
- one 1Gb PCI-* NIC with two ports
- 6x SATA2 NCQ HDD (any size you need) dedicated for working in RAID - software based (dmraid) RAID1+0 array configuration

Virtualization servers (2 or more):
- you need the virtualization servers to have the same config
- any decent enclosure you can get
- the fastest 64bit AMD processor you can get preferably tri or quad core (it will do the processing for guests) with VT extensions
- as much RAM as you can get/fit into the machine
- mobo with VT support, one (any - for management) NIC onboard
- one 1Gb PCI-* NIC with two ports
- one moderate SATA disk for local storage (you will be using it just to boot the hypervisor) or disk-on-chip module

Network switch and cables:
- any managed 1Gb switch with VLAN and EtherChannel support, HP are quite good and not as expensive as Cisco
- good CAT6 FTP patchcords

General notes for hardware:
- make sure all of the PC hardware is *well* supported by Linux since you will be using Linux :)
- if you can, get better (quality wise) components - good enclosures, power supplies, drives etc. - since it is a semi-server setup you don't want it to fail for some stupid reason

Network setup:
- make two VLANS - one for storage, other for management
- plug onboard NICs into management VLAN
- plug HBA NICs into storage VLAN
- configure ports for EtherChannel and use bonding on your machines for greater throughput
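The bonding half of that last step can be sketched with CentOS-style ifcfg files. Addresses and interface names below are examples; `mode=802.3ad` (LACP) must match the switch-side EtherChannel config, and on older distro versions the bonding options go in /etc/modprobe.conf instead of BONDING_OPTS:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
IPADDR=192.168.50.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none

# Each physical port then enslaves itself, e.g. ifcfg-eth1:
#   DEVICE=eth1
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
#   BOOTPROTO=none
```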

Software used:
- for storage server just use Linux
- for virtualization servers use Citrix XenServer 5 (it is free, has nice management options, supports shared storage and live motion) or vanilla Xen on Linux; don't bother with VMWare Server, and VMware ESX and Microsoft solutions are expensive

Storage server setup:
- install any Linux distro you like (CentOS would not be a bad choice)
- use 64bit version
- use dmraid for RAID and LVM for volume management
- share your storage via iSCSI (iSCSI Enterprise Target is, in my opinion, the best choice)
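As a sketch of that last step, exporting a volume with iSCSI Enterprise Target takes only a couple of lines in its config; the IQN and backing device below are made-up examples:

```
# /etc/ietd.conf (sketch)
Target iqn.2009-03.home.lab:storage.racshared
    Lun 0 Path=/dev/vg0/racshared,Type=blockio  # export the LVM volume raw
```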

Virtualization servers setup:
- install XenServer5 (or any distro with Xen - CentOS won't be bad)
- use interface bonding
- don't use local storage for VMs - use the storage network instead

Well, here it is. Quite a powerful and cheap virtualization solution for you.

Dell 440SC (1)

CustomDesigned (250089) | more than 5 years ago | (#27293047)

I not only run this at home, but at lots of small business customers. It has a 3GHz Pentium D (dual core, 64-bit). Get 2 large SATA drives (500G or more) and 2G or more of ECC memory. Starting price is $400, but by the time you get the memory and disk upgraded, it is about $600, $800 with onsite maintenance. A big benefit for me for home use was that it is *quiet*. It has a single large (and therefore quiet) fan with ducting to draw air over the CPU heatsink. Look for it in the "small business" section of Dell.

Drawbacks: only 2 drive bays (upgrade to 840 for 4 bays - not as quiet). No sensors - at least that lm_sensors knows about. I just monitor the disk temperature.

Configuration: run the 2 drives with software RAID1, and LVM on top of that. Create a small (100M) RAID1 boot partition at the beginning of the disk. The RedHat/Fedora installer can create this configuration. (I also save and mirror the Dell diagnostics partition, and add it to the grub boot menu.)
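The layout described above boils down to a handful of commands. Device names here are examples, and these commands wipe whatever is on the partitions, so treat this purely as a sketch:

```shell
# Small mirrored /boot plus a big mirrored pair carrying LVM (sketch).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # ~100M /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # rest of disk
pvcreate /dev/md1                # put LVM on top of the big mirror
vgcreate vg0 /dev/md1
lvcreate -L 20G -n root vg0      # carve logical volumes as needed
```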

HCL for VMWare ESX , ESXi (1)

sniperu (585466) | more than 5 years ago | (#27293079)

Here is a compatibility list for ESXi. If you pull it off, then add a second machine and build yourself an ESXi cluster ;). You need to pay attention to the SATA controller. This way you are not only getting experience with your line-of-work applications but also with the VM software most datacenters use. VMWare Server/MS Virtual Server are nice alternatives when you don't have ESX/ESXi-compatible hardware, but they eat much more HW resources, and ESXi takes just 3 minutes to install and you don't have to patch the operating system every other week :).

Good White Box Setup for VM Test Lab (1)

gotpaint32 (728082) | more than 5 years ago | (#27293147)

You should have at least 2 of these:
- AMD-V or Intel VT capable motherboard and processor combo (for those interested in running Hyper-V or other enhanced VM setups) - I prefer the Intel branded boards for my setups, never let me down...
- At least 4GB of RAM per box
- Cheap SATA drive, 100GB maybe?
- 2 or more Intel Pro 1000 NICs (can get them for about 35 bucks on Newegg)

You should also get a storage box: any box with a P4 or similar should work for this. Set up Openfiler or FreeNAS. If you are playing with VMs, shared storage is a must. You will preferably set up RAID0 or something equally fast (assuming this is purely a test environment and you don't care about redundancy). It will be hit or miss with the network adapter support for integrated NICs, so you may want to pick up an Intel Pro 1000 for this one as well.

You will also need supporting hardware:
- 2 gigabit switches - 3Com unmanaged 5 or 8 port ones are excellent and support jumbo frames very nicely
- A cheap 4-port KVM - there is a TRENDnet 4-port USB KVM for under 75 bucks available at most shops
- 3 desktop UPSes or maybe 1 good enterprise UPS - always a smart move to protect your investment, and power failures can easily screw up a RAID0 solution

Don't forget to check out the online HCLs for the VM solutions you want to run. The above configuration should work for VMware ESX or Hyper-V; ESXi has tighter requirements, so it would be worth checking out. Happy hunting!

Storage & RAM (1)

s.revenant (1463073) | more than 5 years ago | (#27293157)

The key is going to be storage first, then RAM. As long as the hardware uses the same CPUs, that isn't a big deal. I personally have built hundreds to thousands of ESX hosts in the datacenter, both large and small deployments. At home, you just need a simple platform; I use the Dell 440, and for storage you can use iSCSI with OpenFiler. While I wouldn't recommend iSCSI for production/enterprise, it is great for small-scale stuff.

Simply put: two new cheap servers identically configured, with at least 4-8GB RAM each (really this depends upon how much you want to use - the sky is the limit), and one system to run iSCSI / OpenFiler (this can be different / older / repurposed hardware). Make sure to have enough NICs to isolate your iSCSI traffic on its own interface. TCO will probably be around ~$2000, if you have some old hardware laying around to fill in the gaps.

Core 2 Duo laptop (0)

Anonymous Coward | more than 5 years ago | (#27293195)

I use a Core 2 Duo laptop. I bought the laptop off eBay with a busted screen for $150. I bought 4G of RAM for it, which cost me another $35. Once I got CentOS installed on it with an old monitor plugged in, I closed the lid and put it on the shelf, tied into my network. I have 27 VM images on it that I use for various programming test environments. Granted, I never run more than 5 or 6 at the same time; with the Core 2 Duo I can run x86-64 images as well, to let me test on as many different types of systems as I can get my hands on.
My 2-cents.

Don't go overkill. (2, Interesting)

GiMP (10923) | more than 5 years ago | (#27293205)

I run a VPS hosting company; my job is to research, set up, and maintain a cluster/grid of servers running Xen with hundreds of guests (virtual machines). For testing, and even for deployment, we've used machines as simple as a single-core AMD 3800 with 80GB disks in RAID-1 and 1GB of RAM. These aren't the most profitable machines, since they can only support as many virtual machines as will pay for the electricity and square footage, but they work perfectly fine for up to approximately 12 guests. I do highly recommend a dual-processor or dual-core system, though.

If you want to know how much you can stress a system: for a highly dense number of guests, I try not to load more than 15 guests and 2GB of guest RAM per CPU core. Of course, if you plan to have a low density of guests (say, one guest per core), you'll need to adjust accordingly.
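Those rules of thumb are easy to turn into a quick sizing check. A minimal sketch (the 15-guests and 2GB-per-core ratios are just the numbers from this comment, not universal limits):

```shell
#!/bin/sh
# Rough Xen host sizing from the rule of thumb above:
# no more than 15 guests and 2GB of guest RAM per CPU core.
CORES=4                      # hypothetical quad-core host

MAX_GUESTS=$((CORES * 15))   # dense-packing ceiling
GUEST_RAM_GB=$((CORES * 2))  # RAM budget for guests

echo "Up to ${MAX_GUESTS} guests; budget ${GUEST_RAM_GB}GB of RAM for them"
```

Leave headroom beyond the guest RAM budget for dom0 and the hypervisor itself.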

I found that for my home office, where I often have pretty excessive needs, such as installing multiple operating systems and performing multiple large compiles at the same time, a dual quad-core system with 16GB of RAM is overkill. Right now I'm using a single quad-core workstation with 8GB of RAM; it works pretty well for me and is probably still a bit more than I need.
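For anyone going the Xen route described here, each guest is just a small config file on the host. A hypothetical paravirtualized domU config (every name, path, and the disk image below are made up for illustration):

```
# /etc/xen/guest01.cfg (hypothetical)
name    = "guest01"
memory  = 512
vcpus   = 1
kernel  = "/boot/vmlinuz-2.6-xen"
ramdisk = "/boot/initrd-2.6-xen.img"
disk    = ['file:/srv/xen/guest01.img,xvda,w']
vif     = ['bridge=xenbr0']
root    = "/dev/xvda1 ro"
```

With a config like this you start the guest with `xm create guest01.cfg` and watch its console with `xm console guest01`.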

Doesn't take much (1)

MikeB0Lton (962403) | more than 5 years ago | (#27293221)

I find that enabling hardware virtualization in the BIOS and installing enough RAM for my VM needs is sufficient on a low budget. Storing the virtual disks on a hard drive physically separate from my host operating system lets things run well without much impact on my other processes. If you need more processing power, you can offload your virtualization to a different computer and run the management client on your primary machine. I'm sure a 64-bit quad-core CPU will help too, but it isn't necessary for most things. Use whatever VM software you like the most.
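On Linux you can check whether a CPU exposes those hardware virtualization extensions before spending anything. A quick sketch:

```shell
#!/bin/sh
# Check whether a cpuinfo file advertises Intel VT-x (vmx) or
# AMD-V (svm). A "no" may also just mean the feature is
# disabled in the BIOS, so check there too.
has_hw_virt() {
    grep -qE '^flags.*(vmx|svm)' "$1" && echo yes || echo no
}

if [ -r /proc/cpuinfo ]; then has_hw_virt /proc/cpuinfo; fi
```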

Don't Skimp: Build in Stages (1)

maestro371 (762740) | more than 5 years ago | (#27293267)

I have a dual dual-core Xeon system built on a Tyan Tempest (i5000VF) motherboard with 8GB of RAM that runs XenServer 5. Right now I have it running 2 Windows 2008 domain controllers, an XP instance, an OpenSolaris instance, and several Linux VMs.

From Newegg, that RAM cost me about $160 total. The 5 500GB drives (at the time I bought them) were $150 apiece. The processors were $150 each and the motherboard was $340. I picked up a 3Ware 9550SX PCI-Express RAID controller from eBay for about $200.

It is server-class hardware, but can be built in stages (e.g., start with one processor, 4GB of RAM and 1 drive). I'd recommend not skimping; you'll appreciate the stability in the long term. I've been using this setup for about 2 years and am just now looking at starting again with new hardware (I'd like to build a shared-storage setup with OpenSolaris and ZFS).

Why not rent versus buy - use Amazon Web Services (1)

bmullan (1425023) | more than 5 years ago | (#27293291)

Amazon Web Services (AWS) is a great way to do something like this w/out laying out any money.

As low as 15 cents per hour, and ONLY when you are actually using a virtual resource.

Get an AWS account - it's free.

Learn how to use their Amazon Machine Image (AMI) management tools to launch any of the hundreds of publicly available operating system images out there (Windows, Solaris, or Linux; 32-bit or 64-bit).

Clone an AMI that you like as a base to make it "yours", and customize it however you like.

Put VMware Workstation on it and you can experiment with building virtual machines all you like.

I've done it with a 64-bit Ubuntu AMI; after installing VMware on the Ubuntu 64 instance, I have played around with creating virtual machines of other operating systems and applications, such as Ubuntu Linux, Mac OS X, and Windows XP & Vista.

It's basically using the virtual cloud environment that AWS offers to develop your own virtual appliances or machines.

Standard AWS Instances

Instances of this family are well suited for most applications.

* Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform

* Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform

* Extra Large Instance 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

* High-CPU Medium Instance 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of instance storage, 32-bit platform

* High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

EC2 Compute Unit (ECU): one EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

Oracle RAC Guide with Firewire (1)

tuttle (973) | more than 5 years ago | (#27293305)

Oracle already has a number of guides to building a cheap Oracle RAC setup. One of the more interesting ones used a FireWire device that could support multiple logins, creating a cheap and fast shared storage device to use for ASM and OCFS. The setup in that guide was only a 2-node system, and I'm not sure whether these cheap FireWire drives can handle 3 logins. There is another guide for doing it over iSCSI, although I would think the FireWire setup would be cheaper and faster.
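Whichever transport you pick, the cluster-filesystem side looks the same on every node: format the shared device with OCFS2 and mount it identically everywhere. A sketch of the fstab entry in the style of Oracle's older guides (the device name and mount point are assumptions; the `datavolume,nointr` options apply to the older OCFS2 releases those guides targeted):

```
# /etc/fstab on each RAC node (hypothetical shared device)
/dev/sdb1  /u02/oradata  ocfs2  _netdev,datavolume,nointr  0 0
```

The `_netdev` option matters for iSCSI: it delays the mount until the network (and thus the iSCSI session) is up.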

You can share CPU time but you need disk and ram (2, Informative)

BagOBones (574735) | more than 5 years ago | (#27293351)

The biggest problem I see with those getting into virtualization is that they think virtualizing things magically makes them need fewer resources.

You can share CPU time, since most apps will not drive the CPU to 100%; having said that, it is often best to have as many cores as you can afford.

Do not over-allocate your RAM: have at least as much physical RAM as the total you allocate to the VMs. If you overcommit, you will take a huge performance hit.

Sparse disks are a fairly new feature available only in some VM systems, so you will need lots of disk for all of the VMs. You will also probably want to run them on different LUNs or disk groups so you don't get lots of thrashing on the drives.

If you are only running 1 or 2 VMs as a test, then really all you need is to up the RAM a little and make sure the host meets the minimum specs of the VM application.

If you are only... (1)

glitch23 (557124) | more than 5 years ago | (#27293371)

installing stuff to run as a hobby and not pushing any major data sets through the system, you really only need to worry about RAM and disk capacity (for storing the VM files, which will house the OS and programs). Just get as much RAM as you can, so you can give each VM its own normal amount of RAM (500MB-4GB) depending on which applications are in the VMs. You probably want each VM to have at least 10GB of disk space, so calculate that into your overall disk capacity requirements. Your hardware in the end will be dictated by how many VMs you end up running. Each one will have to have an OS installed, of course, so you are going to have some wasted disk space. The worst thing you can do is skimp on RAM; you don't want your host OS to be swapping because you gave the VMs too much memory and didn't leave enough for the host.
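An easy way to spot that over-allocation on a Linux host is to watch whether the box is dipping into swap while the VMs run. A minimal sketch:

```shell
#!/bin/sh
# Report swap usage from a meminfo-format file; steadily growing
# usage while VMs are running means guest RAM was over-allocated
# relative to what the host has.
swap_used() {
    awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2}
         END {printf "swap used: %d MB of %d MB\n", (t-f)/1024, t/1024}' "$1"
}

if [ -r /proc/meminfo ]; then swap_used /proc/meminfo; fi
```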

Oracle and VM's (1)

dheltzel (558802) | more than 5 years ago | (#27293463)

Speaking as an Oracle DBA who has done a little of this, I can tell you to get a lot of RAM. I would say a motherboard that can be expanded to at least 8GB is the way to go. You might get by with only 4GB for a while, but you will eventually want more, given the relatively low cost of RAM.

Oracle is always RAM hungry, and VMs multiply that.

Having just done this (2, Informative)

boyter (964910) | more than 5 years ago | (#27293471)

I just did this myself. I ended up shooting for cheap hardware, on the theory that if it breaks in 2 years I can just replace it. I have a quad-core Phenom with 8GB of RAM and two 750GB drives. I chucked VMware on it and haven't had any issues running about 8 or so VMs on it. It also serves up media using TVersity and doubles as a network share dump.

The biggest issue I have had so far is disk I/O performance. If you are planning on running multiple concurrent VMs, then go for as many HDDs as you can. Stick the most load-intensive VMs on separate drives and you will really see the benefits.
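If you want a rough number for each drive before deciding which VMs to put on it, a simple sequential-write test with dd is enough to compare spindles. A minimal sketch (the target path is a placeholder; point it at a file on the drive you want to measure):

```shell
#!/bin/sh
# Rough sequential-write test: write 64MB and fsync so the
# reported rate reflects the disk rather than the page cache.
TARGET=/tmp/dd_testfile   # placeholder; use the drive under test

dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET"
```

For the read side, `hdparm -t /dev/sdX` gives a comparable sequential figure.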
