
Virtualization Is Not All Roses

CmdrTaco posted more than 7 years ago | from the watch-out-for-those-thorns dept.

Operating Systems 214

An anonymous reader writes "Vendors and magazines are all over virtualization like a rash, like it is the Saviour for IT-kind. Not always, writes analyst Andi Mann in Computerworld." I've found that when it works, it's really cool, but it does add a layer of complexity that wasn't there before. Then again, having a disk image be a 'machine' is amazingly useful sometimes.

214 comments

Frist psot (-1, Troll)

Anonymous Coward | more than 7 years ago | (#18291030)

GNAA trolls YOU!

Yawn (5, Insightful)

dreamchaser (49529) | more than 7 years ago | (#18291078)

This is the exact same pattern that almost every computing technology follows. First the lemmings all rush to sound smart by touting its benefits. Soon it is the be-all and end-all in "everyone's" mind. Then the honeymoon fades and people realise it's a useful tool, and toss it into the chest with all the other useful tools to be used where it makes sense.

Re:Yawn (5, Informative)

WinterSolstice (223271) | more than 7 years ago | (#18291134)

Yes - we've just put quite a bit of this in place here at my shop.

Virtualization good: Webservers, middle tier stuff, etc.
Virtualization bad: DBs, memory intensive, CPU intensive.

Biggest issue? "Surprise" systems. You might see a system and notice a "reasonable" load average, then find out once it's on a VM that it was a really horrible candidate because it has huge memory, disk, CPU, or network spikes. VMWare especially seems to hate disk spikes.

What we learned is that it's not the average so much as the high-water marks that really matter. A system that's quiet 99.99% of the time but spikes to 100% for 60 seconds here or there can be nasty.
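
A rough sketch of the kind of "average vs. peak" check that catches this, assuming a Linux box with Python and the psutil library available (both are assumptions, not something a stock install guarantees; any real monitoring package reports the same numbers):

    # peak_vs_avg.py - sample CPU and disk briefly, report average vs. high-water mark.
    # Sketch only: psutil and the ~5-minute window are illustrative choices.
    import psutil

    samples = []
    disk_start = psutil.disk_io_counters()
    for _ in range(300):                      # 300 one-second samples, ~5 minutes
        samples.append(psutil.cpu_percent(interval=1.0))
    disk_end = psutil.disk_io_counters()

    avg = sum(samples) / len(samples)
    peak = max(samples)
    mb_written = (disk_end.write_bytes - disk_start.write_bytes) / 2**20

    print(f"CPU avg {avg:.1f}%  peak {peak:.1f}%")
    print(f"Disk written during window: {mb_written:.1f} MiB")
    if peak > 10 * max(avg, 1.0):
        print("Spiky workload: the average alone will understate what this box needs.")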

Re:Yawn (2, Funny)

Anonymous Coward | more than 7 years ago | (#18291218)

re your sig:

An operating system should be like a light switch... simple, effective, easy to use, and designed for everyone.

Did you know that in the US, light switches are traditionally installed with "up" being "on", while in England they are traditionally installed with "down" being "on"?

Perhaps instead operating systems should be like nipples: everyone is born knowing how to use them, and they don't operate differently in different countries ;)

Re:Yawn (0)

Anonymous Coward | more than 7 years ago | (#18291786)

If the lights are off and you want them on, flip the switch.
If the lights are on and you want them off, flip the switch.
How is this difficult?
With the proliferation of three-way switches and touch sensors,
I don't even think this is a problem for most anymore.

Re:Yawn (0)

Anonymous Coward | more than 7 years ago | (#18292340)

What if you're blind, though?

Re:Yawn (5, Funny)

rhaas (804642) | more than 7 years ago | (#18292428)

If you're blind, then why do you care about the light switch in the first place?

Re:Yawn (1)

Stalks (802193) | more than 7 years ago | (#18292320)

Perhaps instead operating systems should be like nipples
- Does that mean if a female computer is given a baby peripheral, it secretes white resin?

If the lights are off and you want them on, flip the switch.
If the lights are on and you want them off, flip the switch....

- You would be surprised. When I visit the USA, my mind subconsciously tries to push the light switch down to turn it on; it isn't until I put conscious thought into the process that I push up instead.

Re:Yawn (4, Insightful)

vanyel (28049) | more than 7 years ago | (#18291396)

Virtualization good: Webservers, middle tier stuff, etc.
Virtualization bad: DBs, memory intensive, CPU intensive.


We're starting to do the same. It looks like the article basically says "managing them is more complex, and you can overload the host". Well, duh! They're no harder to manage (or not much) than that many physical machines, but it does make it a lot easier (cheaper!) to create new ones. And you don't virtualize a machine that's already using 50% of a real system. Or even 25%. Most of ours sit at 1%, though. Modern processors are way overkill for most things they're being used for.
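
As a back-of-the-envelope illustration of that last point, here is the kind of consolidation math involved; every number below is invented for the example, not a measurement from anyone's shop:

    # Toy consolidation math: how many mostly-idle guests fit on one host
    # if you size for spikes rather than averages. All inputs are hypothetical.
    avg_util = 0.01          # a typical guest sits at ~1% of a core
    spike_util = 0.50        # but occasionally bursts to 50% of a core
    host_cores = 4
    headroom = 0.60          # keep the host under 60% busy on average
    concurrent_spikes = 2    # assume at most 2 guests spike at the same time

    budget = host_cores * headroom
    reserved_for_spikes = concurrent_spikes * spike_util
    guests = int((budget - reserved_for_spikes) / avg_util)
    print(f"Roughly {guests} guests per host under these assumptions")   # -> 140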

Re:Yawn (5, Informative)

WinterSolstice (223271) | more than 7 years ago | (#18291568)

"Modern processors are way overkill for most things they're being used for."

Right - except like I said - watch those spikes. We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual. Great idea, right?

Except for the fact that once a day it runs a report that seems fairly harmless but caused the filesystem to go Read Only due to a VMWare bug. The report lasts only about 2 minutes, but it hammers the disk in apparently just the right way.

It's the spikes you have to be careful of. Just look for your high-water-marks. If the box spikes to 90% or 100% (though the load average doesn't reflect it) it will have some issues.

Re:Yawn (4, Informative)

cbreaker (561297) | more than 7 years ago | (#18291800)

Your bug comment is kinda moot - it's not a normal problem with virtualization.

We have over 120 VMs running on seven hosts with VI3. Most of them, as you can imagine, are not high-workload (although we do have four Terminal Servers handling about 300 terminals total), but sometimes they are, and we've really not had any issues.

It depends on what you're doing, really. Saying you WILL have problems in any situation isn't really valid.

Re:Yawn (2, Informative)

WinterSolstice (223271) | more than 7 years ago | (#18292278)

That's really awesome, and obviously your systems are a great use for VMs :D

Our web stuff virtualized *beautifully*. We had few to no issues, but we ran into major problems when mgmt wanted to virtualize several of the other systems.

And since when is a warning about an unfixed bug moot? It's an *unfixed* bug in ESX Server Update 3. When it's patched in the standard distribution, then it will be moot.

VMs are still quite a new science (as opposed to LPARs) so there are lots of bugs still out there.

Re:Yawn (1, Funny)

Anonymous Coward | more than 7 years ago | (#18292006)

If the only way you're looking at metrics on a system is running 'uptime' to get the load average...

Well, friend, your IT department has a lot more issues than just problems with VMWare.

Re:Yawn (3, Informative)

WinterSolstice (223271) | more than 7 years ago | (#18292182)

That's obviously just an example - uptime doesn't provide high-water marks, etc

Ahh, slashdot. People just *love* to split hairs :D

Ok, last time I'm saying this:
BE CAREFUL. Not every system is an ideal candidate for virtualization, and even the ones that seem perfect at first glance can fail. Don't rely on only "overview" metrics. Do thorough inspection, and make sure you load test.

VMs rule, but there are gotchas and bugs that can be showstoppers. Just cause someone else has 300 servers running via virtualization doesn't mean you can :D

Re:Yawn (1)

vanyel (28049) | more than 7 years ago | (#18292074)

Was this on the guest fs or the host fs? One of the things causing us to move slowly is that we've seen this a couple of times on the host fs, and we're not sure why. Once it happened when pre-allocating a virtual disk for a new system; we've been thinking we were triggering an obscure Linux fs bug...

Re:Yawn (5, Informative)

giminy (94188) | more than 7 years ago | (#18292472)

We took a system that according to our monitoring sat at essentially 0-1% used (load average: 0.01, 0.02, 0.01) and put it on a virtual.

Load average is a bad way of looking at machine utilization. Load average is the average number of processes on the run queue over the last 1, 5, and 15 minutes. Programs doing exclusively I/O will be on the sleep queue while the kernel does I/O stuff, giving you a load average of near zero even though your machine is busy scrambling for files on disk or waiting for network data. Likewise, a program that consists entirely of NOOPs will give you a load average of one (+1 per each additional instance) even if its nice value is all the way up and it is quite interruptible/is really putting zero strain on your system.

Before deciding that a machine is virtualizable, don't just look at load average. Run a real monitoring utility and look at iowait times, etc.
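
A minimal sketch of one way to "look at iowait" with nothing but Python on a Linux guest (in practice sar, iostat or vmstat report the same thing with less effort; the /proc/stat field layout below assumes a reasonably modern kernel):

    # iowait_check.py - rough %iowait over a short window, read straight from /proc/stat.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            fields = f.readline().split()    # "cpu user nice system idle iowait irq softirq ..."
        return [int(x) for x in fields[1:]]

    before = cpu_times()
    time.sleep(10)
    after = cpu_times()

    deltas = [b - a for a, b in zip(before, after)]
    iowait_pct = 100.0 * deltas[4] / sum(deltas)   # index 4 is the iowait column
    print(f"iowait over the last 10s: {iowait_pct:.1f}%")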

Reid

Re:Yawn (2, Informative)

T-Ranger (10520) | more than 7 years ago | (#18292508)

Well, disks may not be a great example. VMware is of course a product of EMC, which makes (drumroll) high-end SAN hardware and software management tools. While I'm not quite saying that there is a clear conflict of interest here, the EMC big picture is clear: "now that you have saved a metric shit load of cash on server hardware, spend some of that on a shiny new SAN system". The nicer way of putting that is that both EMC SANs and VMware do the same thing: consolidation of hardware onto better hardware, abstraction of services provided, finer-grained allocation of services, shared overhead - and management.

If spikes on one VM are killing the whole physical host, then you are surely doing something wrong. Perhaps you do need that SAN with very fast disk access. Perhaps you need to schedule migration of VMs from one physical host to another when your report server pegs the hardware. Or, if it's an unscheduled spike, you need to have rules that trigger migration if one VM is degrading service to others.

Re:Yawn (0)

Anonymous Coward | more than 7 years ago | (#18292564)

We had a similar situation, though it was a consultant who tried to make the mistake. We were going virtual and had a webapp that when monitored, appeared to do "nothing" all day. He suggested we put it on a single virtual machine with one processor. The app's minimum requirements were one server with four processors or two servers with two processors each. There were a few select transactions that when triggered would require massive amounts of horsepower, or they would time out and fail. Virtualizing it ended up being successful, but we gave it two virtual machines with two processors each.

Re:Yawn (2, Interesting)

dthable (163749) | more than 7 years ago | (#18291506)

I could also see their use when upgrading or patching machines. Just take a copy of the virtual image and try to execute the upgrade (after testing, of course). If it all goes to hell, just flip the switch back. Then you can take hours trying to figure out what went wrong instead of being under the gun.

Re:Yawn (1)

WinterSolstice (223271) | more than 7 years ago | (#18291590)

Very good point - and one I personally enjoy. Especially good when building a "Reference" system before imaging it out to other servers. Being able to clone 30 web boxes in minutes off a virtual is SO nice :D

Re:Yawn (1)

OldeTimeGeek (725417) | more than 7 years ago | (#18291572)

Mind if I quote you to our server support units?

We're about to migrate our 500+ server farm (webservers, Exchange and databases) to VMs and I can't seem to get them to understand that not everything can work within a VM.

Re:Yawn (1)

WinterSolstice (223271) | more than 7 years ago | (#18291792)

Hehehe - please do :D

"This dude on this web forum said DBs suck on VMs"

Let me know how that works for you

Re:Yawn (3, Funny)

OldeTimeGeek (725417) | more than 7 years ago | (#18291878)

Has to work better than what I've tried. Besides, if I say I read it on Slashdot, they gotta submit to my advanced research skillz...

Re:Yawn (1)

afidel (530433) | more than 7 years ago | (#18291868)

Exchange, Database, and busy AD controllers (all forms of database) are the worst candidates for current VM solutions due to the heavy I/O penalty. Besides, most of those systems are busy a good percentage of the time and so are already poor candidates for VM's.

Re:Yawn (1)

ergo98 (9391) | more than 7 years ago | (#18292044)

Exchange, Database, and busy AD controllers (all forms of database) are the worst candidates for current VM solutions due to the heavy I/O penalty.

The I/O penalty isn't necessarily "heavy", and sometimes could best be called marginal. Furthermore, when you aggregate servers, people often find that their budget supports buying more beefy back-end hardware, perhaps getting a much more performant SAN -- itself a virtualization layer in the storage subsystem -- rather than a marginal disk array for each machine.

This is ignoring that the overwhelming majority of machines in small, medium, and large shops across the land sit at close to 0% 24 hours a day.

Spikes happen, but if my VM database server spikes on the 4-way dual-core machine backed by an extreme performance SAN that it sits on -- usually only during batch processing at night when the other user-facing virtual servers are doing nothing -- it has far more resources to draw from than if I had partitioned each box out into its own little physical island.

Re:Yawn (3, Interesting)

afidel (530433) | more than 7 years ago | (#18292162)

Well, our Oracle servers are DL585s with four dual-core CPUs, 32GB of RAM, and dual HBAs backed by a 112-disk SAN, and they regularly max out both HBAs; trying to run that kind of load on a VM just doesn't make sense with the I/O latency and throughput degradation that I've seen with VMware. I know I'm not the only one as I have seen this advice from a number of top professionals that I know and respect. If you have a lightly loaded SQL server or some AD controllers handling a small number of users then they might be good candidates, but any server that is I/O bound and/or spends a significant percentage of the day busy is probably the lowest priority to try to virtualize. You can probably get 99+% of the benefit of virtualization from the other 80-90% of your servers that are likely good candidates.

Re:Yawn (4, Insightful)

ergo98 (9391) | more than 7 years ago | (#18292350)

I know I'm not the only one as I have seen this advice from a number of top professionals that I know and respect.

Indeed, it has become a bit of an unqualified, blanket meme: "Don't put database servers on virtual machines!" we hear. I heard it just yesterday from an outsourced hardware rep, for crying out loud (they were trying to show that they "get" virtualization).

Ultimately, however, it's one of those easy bits of "wisdom" that people parrot because it's cheap advice, and it buys some easy credibility.

Unqualified, however, the statement is complete and utter nonsense. It is absolutely meaningless (just because something can superficially get called a "database" says absolutely nothing about what usage it sees, its disk access patterns, CPU and network needs, what it is bound by, etc).

An accurate rule would be "a machine that saturates one of the resources of a given piece of hardware is not a good candidate to be virtualized on that same piece of hardware" (e.g. your aforementioned database server). That really isn't rocket science, and I think it's obvious to everyone. It also doesn't rely upon some meaningless simplification of application roles.

Note that all of the above is speaking more towards the industry generalization, and not towards you. Indeed, you clarified it more specifically later on.

Re:Yawn (1)

bberens (965711) | more than 7 years ago | (#18291680)

I've found it to be an amazing tool for development and testing. We use free VMWare at work for this sort of thing all the time. It's really a dream and has saved us a ton of cash on hardware.

Re:Yawn (2, Interesting)

herve_masson (104332) | more than 7 years ago | (#18292504)

Virtualization good: Webservers, middle tier stuff, etc.

Virtualization *insanely* good: development !

It simply changed my programmer life entirely. How else could I keep machines with any flavor and version of the Linux boxes I'm working on, each bootable in seconds? How could I have a (virtual) LAN with a dozen machines communicating with each other when developing a failover/balanced service? How could I multiply the number of machines with a cut'n'paste operation? How do I roll back a damaging crash or a faulty operation (via snapshots)? The whole thing even fits on my workstation and works beautifully.

VMware is the most beautiful and useful piece of software I've ever used, I think, even with those stupid clock problems when running certain BSD/Linux environments.

Jeez, I could not even think of working differently now. For me, this is more than a useful tool; this is a revolutionary tool that makes my job possible, which obviously does not mean it's good and rosy for everything on the planet (who thought it was?)

It's Marketing vs Technologists. (2, Informative)

khasim (1285) | more than 7 years ago | (#18291206)

No one who understands the technology believes that virtualization can perform all the miracles that the marketing people claim it can.

Unfortunately, management usually falls for the marketing materials while ignoring the technologists' cautions.

Remember, if they've never tried it before, you can promise them anything to make the first sale. Once you've sold it, it becomes a tech support issue.

Re:It's Marketing vs Technologists. (2, Insightful)

LinuxDon (925232) | more than 7 years ago | (#18292426)

While I completely agree with you on this in many other areas, I don't in this case.
The reason for this is that virtualization -simplifies- tech support in every way (except for real-time applications).

Load problems, especially in a virtualized environment are extremely easy to manage technically.
You can just add additional servers and move the virtual machine to the new machine while it's running.

It's management who will have a budget problem when this happens; tech support won't have a technical one.

Re:Yawn (4, Informative)

Anonymous Coward | more than 7 years ago | (#18291296)

First time I've ever posted anon...

A vendor just convinced management to move all of our webhosting stuff over to a Xen virtualized environment (we're a development firm that hosts our clients) a few weeks before I hired in. No one here understands how it's configured or how it works, and this is the first implementation that this vendor has performed, but management believes that they walk on water. No other tech shops in the area have even the slightest bit of expertise with it. So guess what? Come hell or high water, we can't afford to drop these guys no matter how badly they might screw up.

Whoever claims that open source is the panacea for vendor lock-in is smoking crack. Open source gives companies enough "free" rope to hang themselves with if it isn't implemented smartly. Virtualization is no different.

Re:Yawn (1)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#18291780)

This is the exact same pattern that almost every computing technology follows.

For the most part, I agree. The main difference, as I see it, is that hardware-assisted virtualization hit at the same time as several other trends, and it has been applied in ways that are upsetting some long-standing problems and roadblocks. When virtualization was being touted as the next great thing, people were thinking of it for use with flexible servers; Sun and Amazon and other players have brought that to market, and it is nice and convenient and cheap, but not the solution to all our problems. What I don't think quite as many people were expecting was how VMs on the desktop could undermine Windows by bringing the security of Linux to a Windows laptop, or the convenience of OS X to the same.

This morning an engineering manager stopped by my office and asked me what it would take to set up a Windows-on-top-of-OS-X solution. I told her she would need a new laptop from Apple and she went to write up the PO. She is locked in by proprietary Windows tools, but she needs to have some Mac programs as well, and today, right now, that does not mean she needs two separate machines. A year ago, that would have been the case and it would have been a roadblock. MS sees this and is working to stop it, but they are late to the game now. Apple also is late to the game, but got lucky, and now I don't think they have a clue as to what to do to capitalize upon this. Linux on the desktop could be the real winner that walks away from this upset, if someone is smart enough to invest in making it really usable for the average person, but I'm not sure the community can pull it off.

In summary, virtualization was touted as the next great thing, but it has made a big difference in surprising and unanticipated areas, which is what makes it a little unusual.

Is this for real? (4, Insightful)

Marton (24416) | more than 7 years ago | (#18291082)

One of the most uninformative articles ever to hit Slashdot.

"Oh, so now more apps will be competing for that single HW NIC?" Wow. Computerworld, insightful as ever.

Waste of time... (2, Insightful)

evilviper (135110) | more than 7 years ago | (#18291100)

I want those 2 minutes of my life back...

Re:Waste of time... (0)

Anonymous Coward | more than 7 years ago | (#18291540)

ok, all done. I've re-adjusted your lifespan information to add an extra 2 minutes to your life to compensate for what has been wasted here, though going by how this information says you'll die, I really don't think you'll want the extra time.

He must be talking about freeware (1)

lusid1 (759898) | more than 7 years ago | (#18291128)

His arguments don't apply to an ESX virtual infrastructure, though there is some validity for the free virtualization products.

Re:He must be talking about freeware (5, Informative)

Semireg (712708) | more than 7 years ago | (#18291308)

I'm certified for both VMware ESX 2.5 and VMware VI3. VMware's best practices are to never use a single path, whether it be for NIC or FC HBA (storage). VMware also has Virtual Switches, which not only allows you to team NICs for load balancing and failover, but also use port groups (VLANs). You can then view pretty throughput graphs for either physical NICs or virtual adapters. It's crazy amazing(TM).

As for "putting many workloads on a box and uptime," this writer should really take a look at VMware VI3 and Vmotion. Not only can you migrate a running VM without downtime, you can "enter maintenance mode" on a physical host, and using DRS (distributed resource scheduler) it will automatically migrate the VMs to hosts and achieve a load balance between CPU/Memory. It's crazy amazing(TM).

Lastly, just to toot a bit of the virtualization horn... VMware's HA will automatically restart your VMs on other physical hosts in your HA cluster. It's not unusual for a Win2k3 VM to boot in under 20 seconds (VMware's BIOS posts in about .5 seconds compared to an IBM xSeries 3850 which takes 6 minutes). Oh, and there is the whole snapshotting feature, memory and disk, which allows for point in time recovery on any host. Yea... downsides indeed.
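
For what it's worth, the DRS idea is easy to picture even without the product. Here is a toy sketch of the decision it automates; this illustrates the concept only and is not VMware's actual algorithm or API, and the host numbers are hypothetical:

    # Toy "which host should the next VM land on?" picker, illustrating the
    # load-balancing idea behind DRS.
    hosts = {
        "esx01": (78, 85),   # (cpu % used, memory % used)
        "esx02": (35, 40),
        "esx03": (55, 60),
    }

    def pressure(cpu, mem):
        # crude scalar score; a real scheduler weighs far more than this
        return max(cpu, mem)

    target = min(hosts, key=lambda h: pressure(*hosts[h]))
    print(f"Place or migrate the next VM onto {target}")   # -> esx02 here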

Virtualization is Sysadmin Utopia. -- cvl, a Virtualization Consultant

Re:He must be talking about freeware (2, Informative)

Professor_UNIX (867045) | more than 7 years ago | (#18291728)

Not only can you migrate a running VM without downtime
I'm pretty hard to please sometimes, but Vmotion is probably the single coolest feature of VMware ESX. The first time I sat there on a running VM while it was being migrated to another ESX server and didn't notice a single second of downtime while browsing the web (I had RDP'd to the box) I was in love. I was also pinging the machine from another window and it didn't drop a single packet. I really hope they eventually allow this feature to sneak into the free VMware Server and let you use it on NAS data stores for small businesses or home environments, but I doubt it.

Re:He must be talking about freeware (1)

zero time ghost (699927) | more than 7 years ago | (#18291908)

Yeah, VI3 is the start of something huge. It decouples hardware from OS almost completely -- and that has major implications in datacenter management. It's like the physical infrastructure is now just a blob of CPU, RAM, and disk space, and you can add or subtract hardware to that blob without disrupting your operations at all. Even the most jaded geek has to see how cool that is.

He must be talking about mainframes. (0)

Anonymous Coward | more than 7 years ago | (#18292230)

The most jaded geek has never dealt with mainframes. We've been doing some of this stuff for years. About time you all caught up.

Re:He must be talking about freeware (2, Insightful)

div_2n (525075) | more than 7 years ago | (#18292314)

I'm managing VI3 and we use it for almost everything. Ran into some trouble with one antiquated EDI application that just HAD to have a serial port. That is a long discussion, but for reasons I'm quite sure you could guess, I offloaded it to an independent box. We run our ERP software on it and the vendor has tried (unsuccessfully) several times to blame VMWare for issues.

You don't mention it, but consolidated backup just rocks. I have some external Linux based NAS machines that use rsync to keep local copies of both our nightly backups and occasional image backups at both sites.

Thanks to VMWare, it's like I've told management--"Our main facility could burn to the ground and I could have our infrastructure back up and running at our remote site before the remains stop smoldering much less get a check from the insurance company."

He must. ESX set up properly avoids most pitfalls (4, Insightful)

cbreaker (561297) | more than 7 years ago | (#18291910)

Indeed. If you have a proper ESX configuration - at least two hosts, SAN back-end, multiple NICs, supported hardware - you'll find that almost none of the points are valid.

Teaming, hot-migrations, resource management, and lots of other great tools make modern x86 virtualization really enterprise caliber.

I think that the people that see it as a toy are people that have never used virtualization in the context of a large environment, being used properly with proper hardware. You can virtualize almost any server if you plan properly for it.

In the end, by going virtual you end up actually removing so much complexity from your systems that you'll never know how you did it before. No longer does each server have its own drivers, quirks, OpenManage/hardware monitor, etc etc. You can create a new VM from a template in 5 minutes, ready to go. You can clone a server in minutes. You can snapshot the disks (and RAM, in ESX3) and you can migrate them to new hardware without bringing them down. You can create scheduled copies of production servers for your test environment. So much simpler than all-hardware.

I'll admit that you shouldn't use virtual servers for everything (yet) but you will eventually be able to run everything virtual, so it's best to get used to it now.

This just in... (2, Insightful)

Anonymous Coward | more than 7 years ago | (#18291132)

Really Cool Thing can have drawbacks. Popular computer technology shown not to be silver bullet. Film at 11.

Testing PXE terminals (3, Interesting)

Anonymous Coward | more than 7 years ago | (#18291144)

I've found that VMware is incredibly useful for testing network booting (PXE) systems. I rolled my own custom Damn Small Linux for PXE booting on our thin client workstations. VMware was great for testing purposes. Everybody loves DSL too; they can listen to streaming audio and MP3s while they work, since I included mplayer and Flash in Firefox. We use NX and FreeNX to connect to our terminal server.

Question: Do cards have to support it? (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18291166)

Hey, I just wanted to know from someone who has tried virtualization: do graphics cards have to support virtualization? I mean, I think that the drivers do some initialization when they start up, so will going from one machine to another cause a problem with that? I can think of a situation where one machine has an OpenGL window open and you go to the other machine to play an FPS; what will happen?

sorry for the AC,
Dan
(interesting that the word in the image is forgive lol)

Re:Question: Do cards have to support it? (3, Informative)

db32 (862117) | more than 7 years ago | (#18291336)

From what I have seen and experienced, the VM video card is the issue. The virtual machine uses the virtual hardware drivers, so the actual hardware is largely irrelevant so long as the host OS can handle it. In a desperate attempt to get FFXI installed on my Linux machine I resorted to attempting to use VMware, only to find out that VMware does not support any kind of 3D accel stuff (again, virtual hardware vs real hardware).

Re:Question: Do cards have to support it? (2)

kcbanner (929309) | more than 7 years ago | (#18292100)

Actually VMware *does* support 3D accel now... google it and you can add an option to the .vmx file to enable it.

Also, I suggest trying VirtualBox; it runs really smooth... fast too (XP Home install in 5 minutes), and it supports 3D accel, I believe.

Desperate? (1)

hellfire (86129) | more than 7 years ago | (#18292164)

In a desperate attempt to get FFXI installed on my Linux machine I resorted to attempting to use VMware

Ummm... Exactly how desperate does one have to be to attempt that???

Re:Desperate? (1)

db32 (862117) | more than 7 years ago | (#18292390)

Read the last few days' worth of XP phone home and other such stories. My wife's legitimate install got hit with that WGA shit and was determined to be pirated; when I called, their only solution was to buy a new copy. I REALLY REALLY don't want to put XP Media Edition back on my laptop and dual boot for a single game. But it mostly involves going on business trips, living out of a hotel for 2 weeks with nothing better to do than play video games, unmolested by children and day-to-day household chores :).

Re:Question: Do cards have to support it? (1)

joewhaley (264488) | more than 7 years ago | (#18292326)

<shameless-plug>
You should try out our moka5 LivePC Engine [moka5.com]. We implemented 3D graphics virtualization support [moka5.com] on top of VMware, so almost all Direct3D games run at (more or less) full speed. We often play Half Life 2 network games in the office inside of a virtual machine. (We call it "regression testing" :-).)
</shameless-plug>

Re:Question: Do cards have to support it? (1)

LordEd (840443) | more than 7 years ago | (#18291366)

I've only played a bit with virtual server 2005, but each virtual machine is given a virtual S3Trio64 video card (which does not have 3d support).

The graphics cards do not have to support virtualization because all hardware in a virtual system is virtual. It doesn't really exist. The system is just emulating how a given virtual hardware device would react.

I read about one of the other big virtual systems that did allow you to use 3D hardware support, but it had to be assigned to a single virtual system and could not be shared.

Virtualization (5, Interesting)

DesertBlade (741219) | more than 7 years ago | (#18291170)

Good story, but I disagree in some areas.

Bandwidth concerns: You can have more than one NIC installed on the server and have one dedicated to each virtual machine.

Downtime: If you need to do maintenance on the host that may be a slight issue, but I hardly ever have to do anything to the host. Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also have cluster capability with virtualization.

Re:Virtualization (1)

EvanED (569694) | more than 7 years ago | (#18291300)

Also, if the host is dying, you can shut down the virtual machine and copy it to another server (or move the drive) and bring it up fairly quickly. You also have cluster capability with virtualization.

Enterprise VM solutions allow you to migrate with essentially no (< 1 sec) downtime.

Re:Virtualization (1)

rdoger6424 (879843) | more than 7 years ago | (#18292058)

nitpicking: VMware's ESX server has a feature (VMotion) that allows you to migrate from server to server with no downtime.

Re:Virtualization (2, Insightful)

drinkypoo (153816) | more than 7 years ago | (#18291304)

Bandwidth concerns: You can have more than one NIC installed on the server and have one dedicated to each virtual machine.

Or, of course, you can use a faster network connection to the host, simplifying cabling. It might not be cost-effective for many people to even go to GigE at this point with one system per wire. For a lot of things it's hard to max that out; obviously fileserving and the like is not such an application, but those of you who have been there know what I mean. But if you're looking at multiple cables to each server and the attendant nightmares, it may be just the reason you need to justify that new switch purchase.

Re:Virtualization (2, Insightful)

jallen02 (124384) | more than 7 years ago | (#18291576)

I would say that every single one of those points in the article is being addressed in the enterprise VM arena. In the end, due to the raw extra control you get over virtual machines, it very much is the future. There is very little memory overhead. Once virtual infrastructure becomes fully developed and the scene plays out completely, I think it will actually make the things in the article easier, not harder. You have to pace yourself in how and where you use virtualization in your organization, but the benefits are huge for the right environments.

As far as current day performance goes: disk access is essentially close to if not at native speeds and CPU speed is generally 70-80% of what the native processor can do. Most instructions aren't touched by a virtual machine monitor at all. Memory is more or less untouched and you actually get memory savings. Say you have 4 VMs of Windows 2003 running. All of the pages of memory that are the same (say, core kernel pages and the like) get mapped to the same physical page. The guest operating systems never know. You can effectively scoop up a lot of extra memory if you have a lot of systems running the same software. All of those common libraries and Windows/Linux processes are only paid for once in memory. The technology is simply awesome. In a few years with more and more powerful multicore systems virtualization will make more and more sense, even on performance critical systems.
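
A quick back-of-the-envelope of what that page sharing can buy; the figures are invented for illustration, not measured VMware numbers:

    # Rough arithmetic for transparent page sharing across identical guests.
    # All inputs are hypothetical; real savings depend entirely on the workload.
    guests = 4
    per_guest_mb = 1024          # memory each Windows 2003 guest is given
    shared_fraction = 0.30       # assume ~30% of pages are identical across guests

    naive_total = guests * per_guest_mb
    shared_mb = per_guest_mb * shared_fraction        # stored once, not per guest
    deduped_total = naive_total - (guests - 1) * shared_mb
    print(f"Without sharing: {naive_total} MB, with sharing: {deduped_total:.0f} MB")
    # -> Without sharing: 4096 MB, with sharing: 3174 MB (roughly 22% saved)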

It has its problems, but I am a believer.

Re:Virtualization (0)

Anonymous Coward | more than 7 years ago | (#18292198)

Why do I need to run 4 instances of Windows Server 2003? It's a modern, pretty bullet-proof OS with protected memory. Why can't I run all the apps in your 4 instances on a single instance? (And get 100% of my CPU).

Re:Virtualization (1)

slackmaster2000 (820067) | more than 7 years ago | (#18291764)

Sometimes it's hard to wrap the mind around new concepts. It's hard to break out of the mindset that a server consists of hardware running an operating system upon which some software services are operating. If that entire server concept -- hardware, OS, software -- is bundled up into one software image that can be running on any piece of hardware on the network, then we have to re-imagine what "downtime" means, or what our hardware requirements are going to be. The ability to zip entire "servers" around various pieces of physical hardware has some pretty significant ramifications, especially if they aren't tightly bound to data storage. If imagined and implemented properly, it could indeed mean reducing downtime considerably while realizing a lot of hardware savings.

For saps like me, virtualization is presently just a really, really, really convenient way to try out server software.

This article is useless, as it exposes nothing that isn't painfully obvious. I don't think that there's an IT department out there deploying virtualization without realizing each image on a machine is going to be sharing hardware and bandwidth. These are the same considerations we are faced with any time we deploy multiple services on the same machine. What's most interesting about virtualization is its possibilities, not its drawbacks, which aren't terribly unique.

Disk contention is the big shortcoming (3, Informative)

pyite69 (463042) | more than 7 years ago | (#18291188)

It is great for replacing things like DNS servers that are mostly CPU. However, don't try running two busy database machines on the same disk - you can't divide it up nearly as well as CPU or bandwidth use.

Also, make sure to try OpenVZ before you try Xen. If you are virtualizing all Linux machines, then VZ is IMO a better choice.

why are we reading this garbage? (5, Insightful)

philo_enyce (792695) | more than 7 years ago | (#18291194)

to sum up tfa: poor planning and execution are the cause of problems.

how about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or point out some non obvious issues?

philo

Re:why are we reading this garbage? (2, Funny)

ndansmith (582590) | more than 7 years ago | (#18291434)

how about an article that makes some recommendations on how to mitigate the problems they identify with virtualization, or point out some non obvious issues?
Have it on my desk Monday morning.

it is all roses for Disaster Recovery (2, Insightful)

QuantumRiff (120817) | more than 7 years ago | (#18291196)

If your servers become toast, for whatever reason, you can get a simple workstation, put a ton of RAM in it, and load up your virtual systems. Of course they will be slower, but they will still be running. We don't need to carry expensive 4-hour service contracts, just next-business-day contracts, saving a ton of money. The nice thing for me with virtual servers is that they are device agnostic, so if I have to recover, worst case, I have only one server to worry about for NIC drivers, RAID settings/drivers, etc. After that, it's just loading up the virtual server files.

Re:it is all roses for Disaster Recovery (1)

bigredradio (631970) | more than 7 years ago | (#18291444)

Sort of... I agree that you can limit hardware needs, but you also have a central point of failure. If the host OS or local storage goes, you have now lost multiple systems instead of one. One issue I have seen is external SCSI support. At least with Xen, you cannot dynamically allocate a PCI SCSI card to each node. This may also hold true for fibre channel cards (not sure). That means no offsite tape backups for the individual nodes and no access to SAN storage through the virtual nodes.

Re:it is all roses for Disaster Recovery (1)

QuantumRiff (120817) | more than 7 years ago | (#18291606)

I'm pretty sure Fiber Channel works with Virtual servers, but I don't know about dedicating one card per host. I have played with iSCSI SAN's and virtual servers, and it works fairly well too. The lack of SCSI is a royal pain. I would love to setup my backup server as a virtual machine, and move it to any server with a SCSI card to restore from tape in an emergency.

Re:it is all roses for Disaster Recovery (1)

qwijibo (101731) | more than 7 years ago | (#18292038)

If you can go one step further and setup your backup server as a virtual machine on your primary server, you can be promoted to management. =)

excess power (3, Insightful)

fermion (181285) | more than 7 years ago | (#18291220)

I see virtualization as a means to use the excess cycles in modern microprocessors. Like over-aggressive GUIs and DRM, it creates a need for ever more expensive and complex processors. I am continuously amazed that while I can run most everything I have on a sub-GHz machine, everyone is clamoring about the need for 3 and 4 GHz machines. And though my main machine runs at over a GHz, it still falters at decoding DRM-compressed video, even though a DVD plays fine on my 500 MHz machine.

But it still is useful. Like terminals hooked up to big mainframes, it may make sense to run multiple virtual machines off a single server, or even have the same OS run for the same user in different spaces on a single machine. We have been heading to this point for a while, and now that we have the power, it makes little sense not to use it.

The next thing I am waiting for are very cheap machines, say $150, with no moving parts, only network drivers, that will link to a remote server.

Re:excess power (0)

Anonymous Coward | more than 7 years ago | (#18292194)

Who would have thought that moving to a dumb terminal would be a good idea?

Any tool can be misused (1)

lohphat (521572) | more than 7 years ago | (#18291242)

VMs are perfect for low-bandwidth tasks which would otherwise have to take up their own box (web-hosting small domains, for example). If you're trying to use VMs as a high-performance file server, you've chosen a path of pain.

Also, any memory-intensive task will have severe performance impacts. ESX's virtualization of the MMU adds 35% overhead and in some cases causes tasks to take twice as long as opposed to raw h/w. As with all vendors, don't believe the marketing hype, but test and benchmark before deployment on ALL solutions.

We're about 95% virtualized and never going back! (3, Interesting)

Anonymous Coward | more than 7 years ago | (#18291262)

The absolute only place it has not been appropriate are locations requiring high amounts of disk IO. It has been a godsend everywhere else. All of our web servers, application servers, support servers, management servers, blah blah blah. It's all virtual now. Approximately 175 servers are now virtual. The rest are huge SQL Server/Oracle systems.

License controls are fine. All the major players support flexible VM licensing. The only people that bark about change control are those who simply don't understand virtual infrastructure and a good sit-down solved that issue. "Compliance" has not been an issue for us at all. As far as politics are concerned -- if they can't keep up with the future, then they should get out of IT.

FYI: We run VMware ESX on HP hardware (DL585 servers) connected to an EMC Clariion SAN.

Re:We're about 95% virtualized and never going bac (1)

hamsjael (997085) | more than 7 years ago | (#18291694)

A lot of our customers have been convinced to run VMware ESX for their servers; our (GIS) apps are very CPU/IO intensive and perform really poorly on VMware. On our own network we run the el-cheapo "VMware Server". This also performs very badly, and I can't see it really handling anything other than light loads. And why is VMware so paranoid about performance ratings? I mean, they are REALLY ANAL about it; see this for example: http://r.vresp.com/?XenSource/374d47d120/874970/eb5243d7c7/076d584 [vresp.com] - look at all the "[REDACTED]" things.... WTF?! Furthermore, we have had a lot of problems getting the virtual (Windows) machines to keep correct time; it seems to be related to SMP on the host.

Like all technologies, you need a good plan (2, Interesting)

caseih (160668) | more than 7 years ago | (#18291270)

There's nothing wrong with the technology as such. The problems mentioned in the article are not inherent to virtualization, nor are they flaws in the technology. Virtualization just requires some basic planning. What is the average disk utilization (disk bandwidth) of a server you want to virtualize? What about CPU? How about network bandwidth? You need to know this before you start throwing stuff into a VM. VMware and Xen both allow you to take advantage of multiple hardware NICs in the host, multiple processing units, and also multiple physical disks and buses. Of course, multiple VMs running on one host will have to share bandwidth and server throughput. The article states the obvious but makes it sound like virtualization has an inherent fatal flaw and thus will fall out of favor, which makes the article rather lame.
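
A minimal sketch of gathering those baselines before you commit, assuming Python with psutil on the candidate box (an assumption; any monitoring you already run gives the same data, and a real baseline should cover days, not a minute):

    # baseline.py - crude 60-second snapshot of CPU, disk and network activity
    # for a virtualization candidate. Sketch only.
    import psutil

    d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=60)            # average over the window
    d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()

    disk_mb = (d1.read_bytes - d0.read_bytes + d1.write_bytes - d0.write_bytes) / 2**20
    net_mb = (n1.bytes_sent - n0.bytes_sent + n1.bytes_recv - n0.bytes_recv) / 2**20

    print(f"CPU: {cpu:.1f}% avg over 60s")
    print(f"Disk: {disk_mb / 60:.2f} MiB/s   Network: {net_mb / 60:.2f} MiB/s")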

Home Use (3, Insightful)

7bit (1031746) | more than 7 years ago | (#18291316)

I find Virtualization to be great for home use.

It's safer to browse the web through a VM that is set to not allow access to your main HDs or partitions. Great for any internet activity really, like P2P or running your own server; if it gets hacked they still can't affect the rest of your system or data outside of the VM's domain. It's also much safer to try out new and untested software from within a VM, in case of virus or spyware infection, or just registry corruption or what have you. It can also be useful for code development within a protected environment.

Did I mention portability? Keep backups of your VM file and run it on any system you want after installing something like the free VMware Server:

http://www.vmware.com/products/server/ [vmware.com]

or VMWare Player:

http://www.vmware.com/products/player/ [vmware.com]

And if your VM gets infected or something, just delete it and make a copy of the backup, rinse & run!

the sad thing is how much we need virtualization (0)

Anonymous Coward | more than 7 years ago | (#18291326)

I think it is sad that we need virtualization as much as we do. So many applications today require a few single purpose dedicated machines. Look at the new Microsoft Exchange 2007 architecture. Accounting systems want dedicated front-end and back-end servers. You end up with so many underutilized machines performing the same functions. Yes, there are some really neat things you can do with virtualization, but server proliferation is still a problem.

Re:the sad thing is how much we need virtualizatio (2, Interesting)

dthable (163749) | more than 7 years ago | (#18291608)

And if the software doesn't require a dedicated machine, the IT department wants one. The company I used to work for would buy a new machine for every application component because they didn't want Notes and a homegrown ASP application to conflict with each other. Seemed like a waste of hardware in my opinion.

Same old "doing it half-assed" (2, Interesting)

Jagged (2249) | more than 7 years ago | (#18291404)

From the article:

Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running.
You don't just move twenty critical servers to one slightly bigger machine. You need to follow the same redundancy rules you should follow with multiple physical servers.

Unless you are running a test bed or dealing with less critical servers, where you can use old equipment, you get a pair (at least) of nice, beefy enterprise servers with redundant everything and split the VMs among them. And with a nice SAN between them, you can move the VMs between the servers when needed.

Even better if you can, get the servers (or another pair) set up at two sites for disaster recovery.

Yes, this will cost money, but virtualization is not designed to let the bean counters save money. You need a plan to do it right and the budget to pay for all of it.

Completely trash article... here's why... (1, Informative)

Anonymous Coward | more than 7 years ago | (#18291464)

Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."


Absolute rubbish. If you don't know how to buy and install redundant hardware and implement a virtualization platform that allows hot-migration, then you should learn. If you don't want to, then you need to go back to help desk duty.

Bandwidth problems are also a challenge, Mann says, and are caused by co-locating multiple workloads onto a single system with one network path. In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.


Ohhh nooo! Sharing a single router! Sharing a single gigabit NIC!

First, regarding the NICs. When we first started working with VMware ESX, we bought four gigabit NICs thinking we'd need that much bandwidth. Guess what? We don't. We're so far from it. Even with iSCSI operations. Any basic tech article you will read about getting into VMs will explain why two gigabit NICs are probably enough. Before your NIC is flooding, your server will be. And that's not even taking into account 10-gigabit NICs.
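
To make that concrete, here is the rough arithmetic behind "two gigabit NICs are probably enough"; all the traffic figures are hypothetical, picked only to illustrate the shape of the calculation:

    # Toy bandwidth check: do N guests fit behind two teamed gigabit uplinks?
    guests = 20
    avg_mbit_per_guest = 30        # sustained average per guest (hypothetical)
    burst_mbit = 400               # one guest bursting, e.g. a backup (hypothetical)
    capacity_mbit = 2 * 1000       # two teamed gigabit NICs

    needed = guests * avg_mbit_per_guest + burst_mbit
    print(f"Worst-ish case {needed} Mbit/s vs {capacity_mbit} Mbit/s available")
    # -> 1000 vs 2000: the host's CPU or disks saturate long before the NICs do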

As far as routers are concerned... My God man, what kind of dime store router are you running that this sort of thing becomes a concern?

This article is clearly written by rank amateurs and should be completely dismissed.

For those of you that don't want the ads. (-1, Redundant)

Anonymous Coward | more than 7 years ago | (#18291470)

Andi Mann, senior analyst with Enterprise Management Associates, an IT consultancy based in Boulder, Colo., says that virtualization's problems can include cost accounting (measurement, allocation, license compliance); human issues (politics, skills, training); vendor support (lack of license flexibility); management complexity; security (new threats and penetrations, lack of controls); and image and license proliferation.

Mann says that enterprises sometimes have difficulty finding or applying adequate monitoring and management tools that work across both virtual and physical landscapes. Other issues can include support, integration and compatibility of different operating systems on the multivendor hardware being virtualized.

Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."

Bandwidth problems are also a challenge, Mann says, and are caused by co-locating multiple workloads onto a single system with one network path. In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.

This setup can increase the network traffic through this single path, "resulting in problems with bandwidth availability and throughput," Mann says. Besides these technical concerns, enterprises also must often deal with misleading and conflicting vendor claims, plus the potentially unexpected costs of additional hardware and software for a virtualized environment, Mann adds.

Also depends on the kind of virtualization (0, Offtopic)

sconeu (64226) | more than 7 years ago | (#18291484)

What kind of virtualization do you need?

Are we talking server virtualization? Are we talking storage virtualization?

There are many kinds of virtualizations.

I admit, I didn't RTFA, but based on the comments I'm assuming server virtualization.

Storage virtualization, done right, can be done with minimal overhead inside your SAN fabric.

Worst. Article. Ever. (2, Informative)

countSudoku() (1047544) | more than 7 years ago | (#18291528)

God damn, that was so not worth the RTFA. I have adblock+ running and there were still more crap panes than individual characters in the article proper. I'll think twice before venturing to craputerworld next time. From the "no shit, Sherlock" dept. would be more appropriate. That article, besides being a waste of time, was so junior admin.

Most admins have already figured out that: 1) don't put all your "eggs" into one virtual "basket", 2) spread the virts across multiple NICs and keep the global (or master) server's NIC separate, 3) use VIPs and clusters to load balance across similar virtual instances on separate physical h/w to keep unexpected downtime in check, 4) don't load up too many dissimilar virts into a single physical server, 5) learn the new environment in dev/qa and do your homework on the new commands and resource/user capping features, and 6) read more /. and less Computerworld. WTF, bring something new to the table. That was just weak.

Hype Common Sense (2, Interesting)

micromuncher (171881) | more than 7 years ago | (#18291556)

The article mentions a point of common sense that I fought tooth 'n nail about and lost in the Big Company I'm at now.

For a year I fought against virtualizing our sandbox servers because of resource contention issues. One machine pretending to be many, with one NIC and one router. We had a web app that pounded a database... pre-virtualization it was zippy. Post-virtualization it was unusable. I explained that even though you can tune virtualized servers, it happens after the fact, and it becomes a big active-management problem to make sure your IT department doesn't load up tons of virtual servers to the point it affects everyone virtualized. They argued, well, you don't have a lot of use (a few users, and not a lot of resource utilization).

My boss eventually gave in. The client went from zippy workability in an app being developed to a slow piece of crap because of resource contention, and it's hard to explain that an IT change forced under the hood was the reason for SLOW; and in UAT, SLOW = BUSTED.

That was a huge nail in the coffin for the project. The users couldn't use the app on demand, for whatever reason, and they didn't want to hear jack about tuning or saving rack space.

So all you IT managers and people thinking you'll get big bonuses by virtualizing everything... consider this... ONE MACHINE, ONE NETWORK CARD, pretending to be many...

Re:Hype Common Sense (1)

LodCrappo (705968) | more than 7 years ago | (#18291876)

Sounds like sour grapes and a piss-poor implementation to me. Why didn't you just install more NICs if that was the problem, or more RAM, more CPUs, etc., if that was the problem?

Re:Hype Common Sense (1)

lucabrasi999 (585141) | more than 7 years ago | (#18292084)

For a year I fought against virtualizing our sandbox servers because of resource contention issues.

Sandbox, Test, Development. Those are the environments that just scream FOR virtualization. Obviously, your organization needs a lesson in virtual architecture. Sounds like you purchased your services from Andi Mann. Trust me, based on what I read in the article, the guy has no idea what he is doing.

Virtualization != x86 (4, Insightful)

HockeyPuck (141947) | more than 7 years ago | (#18291580)

Why is it that all of a sudden whenever someone says "virtualization" they imply that it must be a VMware/Xen/Windows/x86 platform?

It's not like these issues haven't existed on other platforms: mainframes, minis (AS/400), Unix (AIX/Solaris/HP-UX); heck, we've had it on non-computer platforms (VLANs, anyone...).

And yes, using partitions/LPARs on those platforms required *GASP* planning, but in the age of "click once to install the DB and build the website", aka "instant gratification", we refuse to do any actual work prior to installing, downloading, deploying...

How about a few articles comparing AIX/HPUX/Solaris partitions to x86 solutions...

Virtualization Is Not All Roses? (2, Funny)

wiredog (43288) | more than 7 years ago | (#18291658)

Is some of it crocuses? Or at least daffodils?

Please tell me it's not daisies.

Virtual Roses come with Virtual Thorns! (0, Redundant)

Pohket (1033316) | more than 7 years ago | (#18291696)

Even better, now IT workers can take (virtualized) servers with them when you fire them!

Author is completely uninformed (4, Insightful)

LodCrappo (705968) | more than 7 years ago | (#18291790)

Increased uptime requirements arise when enterprises stack multiple workloads onto a single server, making it even more essential to keep the server running. "The entire environment becomes as critical as the most critical application running on it," Mann explains. "It is also more difficult to schedule downtime for maintenance, because you need to find a window that's acceptable for all workloads, so uptime requirements become much higher."

No, no, no. First of all, in a real enterprise-type solution (something this author seems unfamiliar with) the entire environment is redundant. "The" server? You don't run anything on "the" server; you run it on a server, and you just move the virtual machine(s) to another server as needed when there is a problem or maintenance is needed. It is actually very easy to deal with hardware failures... you don't ever have to schedule downtime, you just move the VMs, fix the broken node, and move on. For software maintenance you just snapshot the image, do your updates, and if they don't work out, you're back online in no time.

In a physical server environment, each application runs on a separate box with a dedicated network interface card (NIC), Mann explains. But in a virtual environment, multiple workloads share a single NIC, and possibly one router or switch as well.

Uh... well, maybe you would just install more NICs? It seems the "expert" quoted in this article has played around with some workstation-level product and has no idea how enterprise-level solutions actually work.

The only valid point I find in this whole article is the mention of additional training and support costs. These can be significant, but the flexibility and reliability of the virtualized environment is very often well worth the cost.

VMware or Windows Virtual Server? (1)

Dadoo (899435) | more than 7 years ago | (#18291856)

As long as we're on the subject, does anyone have any opinions about whether VMware or Windows Virtual Server is better and why? We're actually in the process of spec-ing out our first virtual server, as we speak, and we're having an argument over which one to use. Are there any other virtualization technologies we should be considering?

Re:VMware or Windows Virtual Server? (1)

lucabrasi999 (585141) | more than 7 years ago | (#18292034)

Are there any other virtualization technologies we should be considering?

Yes. AIX.

Re:VMware or Windows Virtual Server? (1)

Dadoo (899435) | more than 7 years ago | (#18292248)

Yes. AIX.

Sorry, I can't agree with you, there. We have a couple of AIX servers here (a pSeries 550 and a 6F1) and I can tell you, unless IBM gets their act together before we need to replace them, they will not be replaced with IBM servers. My experience with IBM is that, if you're not willing to spend $500,000 or more on a machine, they don't want to be bothered.

Re:VMware or Windows Virtual Server? (1)

jregel (39009) | more than 7 years ago | (#18292518)

I've used both VMware Server (the free one) and Windows Virtual Server and they both do the same sort of virtualisation - on top of a host OS. In the case of VMware Server, I've installed on top of a stripped down Linux install and it's working pretty well. Obviously Windows Virtual Server requires a Windows OS underneath it which has a bigger overhead.

My personal preference is to use VMware Server as the product works incredibly well. That's not to say that Virtual Server doesn't work well, but it just feels more mature.

Of course, to be serious about it, have a look at Virtual Infrastructure 3 (aka ESX server). This really is an impressive piece of software. It runs without a host OS and can do all sorts of resource management, failover etc. We're in the process of rolling out four VM host servers running ESX Enterprise and although we're at the early stages, we've managed to make a good start in getting a virtualised Altiris deployment server and a couple of Unicenter monitoring servers running in VMs, saving us a fair amount in physical hardware.

It's a bit of a mind shift if you're used to specifying physical hardware, but the flexibility is very impressive.

Screwdrivers (1)

Weaselmancer (533834) | more than 7 years ago | (#18291972)

I've found that when they work, it's really cool, but it does add a layer of complexity that wasn't there before. Then again, having screws hold items together instead of nails is amazingly useful sometimes.
