
Buying New Commercial IT Hardware Isn't Always Worthwhile (Video)

Roblimo posted about 3 months ago | from the sometimes-it's-better-and-costs-less-to-stick-with-proven-hardware dept.


Ben Blair is CTO of MarkITx, a company that brokers used commercial IT gear. This gives him an excellent overview of the marketplace -- not just what companies are willing to buy used, but also what they want to sell as they buy new (or newer) equipment. Ben's main talking point in this interview is that hardware has become so commoditized that, in a world where most enterprise software can be virtualized to run across multiple servers, it no longer matters whether you have the latest hardware technology; two older servers can often do the job of one new one -- and for less money, too. So, he says, you should make sure you buy new hardware only when necessary, not just because of the "Ooh... shiny!" factor. (Alternate Video Link)


92 comments


Duh (5, Funny)

HornWumpus (783565) | about 3 months ago | (#47510227)

Used hardware vendor says rack space is free...run your data center on Pentium 3s. News at 11.

Re:Duh (0)

Anonymous Coward | about 3 months ago | (#47510281)

Power is free and getting cheaper all the time!

Re:Duh (4, Insightful)

tsa (15680) | about 3 months ago | (#47510323)

Electrical energy is also free, apparently.

Re:Duh (1)

fibrewire (1132953) | about 3 months ago | (#47510649)

The key is to know what the hell you're doing, and let the spectators watch the show. New or used, buy for what your customer needs - NOT for the free tickets Vendor X gives you as an incentive to buy their gear. Being a professional in this field, here are some pointers.
- Contact your power company for virtualization incentives to upgrade all that old hardware.
- Use free software that is commercially supported, like XenServer, QuantaStor, and Zimbra.
- Stick with a vendor that produces quality gear like Dell (Sorry HP, but you know why)
- Buy from wholesale vendors like Stallard, or from established eBay vendors.

Re:Duh (3, Informative)

FuegoFuerte (247200) | about 3 months ago | (#47510689)

If you're saying HP doesn't produce quality gear, you have apparently not used their servers. There's a reason they're one of very few top-tier server vendors, and it's because they do produce some great gear. I came from an all-HP shop, and I'm currently in an all-Dell shop. Both manufacturers have their strengths and weaknesses, but all things considered they're approximately equivalent.

Re:Duh (1)

fibrewire (1132953) | about 3 months ago | (#47510889)

You are correct. Both have their strengths and weaknesses, but for automated systems HP equipment seems to need constant babysitting. It seems like HP support is always showing up to do SOMETHING to a server related to drive firmware or cooling fans, while the Dell equipment just keeps running ad infinitum. HP support has only cost $300K this quarter for the aforementioned issues, and caused downsizing in a department that could use a few more bright staff members. Cooling fans and drive firmware - is this for real?

Re:Duh (1)

Anonymous Coward | about 3 months ago | (#47511825)

HP and Dell and IBM do not make hard drives. If HP is updating drive firmware and Dell isn't, for likely the same model drives wrapped in custom sleds and (only sometimes) with certain operational parameters tweaked, you might be surprised to later discover Dell drives "failing" when they are actually just fine except for a Seagate/Hitachi/etc firmware bug, or worse yet, subtle, infrequent, silent data corruption that can go on for months before being discovered when something mysteriously fails to compile.

If you're spending that much on time-and-materials charges from any vendor, you either have a ton of equipment or you made the wrong choice when deciding not to go with a maintenance contract. In addition, a lot of that would be covered under warranty for 3 years, so your equipment is too old and should be replaced. Companies just capitalize it anyway. (TFA, which I didn't read, sounds like nonsense: the initial cost of the hardware is actually a relatively small part of the TCO over several years of operation. If you DO have new high-end hardware and run into temporarily tough times later, you'll be just fine postponing the hardware refresh for a year, while if you have old used crap you're going to be in a world of hurt.)

Datacenter space is expensive, electricity and cooling are expensive, support and maintenance are expensive. You can pay me now or you can pay me later, but now saves you in the long run if you can collapse your server footprint AND save power/cooling AND get hardware maintenance under warranty with new equipment, AND capitalize the hardware expenditures and take depreciation over time.

Re:Duh (1)

dbIII (701233) | about 3 months ago | (#47512651)

If you're saying HP doesn't produce quality gear, you have apparently not used their servers

They also produce such crap that they have been fined for false and misleading conduct in relation to the sale of computer products (Australia's ACCC). It's difficult for the buyer to determine what is top notch HP gear and what is not based on what HP salesfolk are spouting.

in an all-Dell shop

Dell used to be mostly ASUS until ASUS went it alone. There's plenty of white-box vendors that are very good in certain segments so are worth looking at before playing potluck with HP (or Dell) who will always have something to sell to you whether it's what you are really after or not.

Dell's strength is that they make weak servers (1)

0xdeaddead (797696) | about 3 months ago | (#47514281)

HP's weakness is their prices.

Re:Duh (1)

sjames (1099) | about 3 months ago | (#47510651)

I picked up a Sunfire v20 for $20. I would have to run it for 8 years non-stop for the electricity cost to add up to the cost of a new more efficient machine of equivalent capability.

Re:Duh (3, Interesting)

Penguinisto (415985) | about 3 months ago | (#47510787)

A cheapie SunFire v200/210 will run like a tank, but you'll be crippled by the server's top speed, and they do put out the heat if you push up the load average (and HVAC costs should always be factored in, yo.)

You'll also need to buy a lot of those pizza boxes to make up for the processing power that you can find in a box half its age, let alone the newer iron.

Sometimes you have to run the old stuff (I work in an environment where we have testbed boxes, and SunFires are a part of that, along with ancient RS/6000 gear, PA-RISC HPUX gear, etc.). I can tell you right now that the old stuff cranks out a lot more heat (and in many cases eats a lot more rackspace) than the equivalent horsepower found in just a handful of new HP DL-360's.

Re:Duh (2)

sjames (1099) | about 3 months ago | (#47511393)

It is used only for administration: serial console for a few devices, crunching some log files, etc. It used to be a backup mail server as well. All well within its capabilities. It isn't likely to run at high load very often. The run-like-a-tank feature is its primary reason to be. Since it is the machine used to diagnose problems, it's helpful that it is unlikely to be the machine with a problem.

Sometimes, old used equipment is exactly the right answer, sometimes it's a terrible idea. The production servers are much newer machines.

Re:Duh (0)

Anonymous Coward | about 3 months ago | (#47511867)

When I'm stuck in the server room for extended periods, standing behind the AIX shelves for a couple minutes always helps to bring the body temp. back up :P

Depends on the tasks (1)

dbIII (701233) | about 3 months ago | (#47512703)

You'll also need to buy a lot of those pizza boxes to make up for the processing power that you can find in a box half its age, let alone the newer iron.

It entirely depends on what you are doing with it. If the task is not CPU bound on an old box you don't need a lot of them.
I've got one old sparc box for occasional use for some legacy software from 1996 and 2002 - it flies on a machine from around 2008. Another has a pile of old tape drives of various types hooked up to it, once again, for occasional use. The only gain in either situation from replacing them is theoretically increasing longevity. Neither case lends itself to a virtual machine unless the thing running that VM has a sparc processor, in which case there's no point for a VM.

Re:Depends on the tasks (1)

Penguinisto (415985) | about 3 months ago | (#47512815)

The only gain in either situation from replacing them is theoretically increasing longevity. Neither case lends itself to a virtual machine unless the thing running that VM has a sparc processor, in which case there's no point for a VM.

Well, not entirely "no point"... [wikipedia.org] (and I didn't even have to bring up zones ;) )

Re:Depends on the tasks (1)

dbIII (701233) | about 3 months ago | (#47512993)

There would be a point if I had something else a new sparc could be doing. Until then two old bits of kit with almost zero resale value will keep things going, with no real problems unless both die at once, and even then the original SparcStation10 the original software runs on still turns on but is slowwwww.
A sparc VM on x86 that actually runs sparc solaris would be nice, and apparently such things were seen in the wild in the past but are unavailable now.

Re:Duh (2)

FuegoFuerte (247200) | about 3 months ago | (#47511129)

The Sunfire v20 has a 465W PSU, so figure about 350W under typical usage. Once you figure power and cooling in a typical datacenter environment, the cost hits somewhere around $2,900/year (at $25/watt over a 3-year lifespan). So, over 8 years, you're looking at $23,200 for that old Sunfire. I find it hard to believe your new, more efficient machine of equivalent capability will cost you nearly $23,000.

Or, you're running it in your mother's basement where things like power and cooling aren't an issue.
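The arithmetic in the estimate above can be sketched as follows (the $25-per-watt-per-3-years fully loaded power-and-cooling rate and the 350W typical draw are the commenter's assumptions, not measured figures):

```python
# Estimated power + cooling cost for the old box, using the thread's figures.
watts = 350                    # assumed typical draw for a 465 W PSU
rate_per_watt = 25             # $ per watt over a 3-year datacenter lifecycle
per_year = watts * rate_per_watt / 3
print(f"per year: ${per_year:,.0f}")       # roughly $2,900/year
print(f"8 years:  ${per_year * 8:,.0f}")   # roughly $23,000 over 8 years
```

Run your own numbers with your own measured draw; the conclusion swings heavily on that 350W guess, as the replies below point out.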

Re:Duh (1)

sjames (1099) | about 3 months ago | (#47511443)

Nice way to toss in an insult just to prove what a bright light you aren't.

Were you a brighter bulb, you would realize that PS rating and actual consumption often have little to do with each other. In fact it doesn't draw half of what you think it does and electricity doesn't cost as much as you think where I have the box.

Meanwhile, who said my usage was typical? Certainly not me.

Re:Duh (0)

Anonymous Coward | about 3 months ago | (#47512397)

Our old build server drew about 200W under load, the new E5 box only consumes about 60W and does the job twice as fast.

140W difference for ~8h/day nets us about $500/year in savings at $0.12/kWh. ~2 years for the box to pay for itself due to electricity consumption alone.

I agree though, everyone should run their own numbers.

Re:Duh (0)

Anonymous Coward | about 3 months ago | (#47514937)

Our old build server drew about 200W under load, the new E5 box only consumes about 60W and does the job twice as fast.

140W difference for ~8h/day nets us about $500/year in savings at $0.12/kWh. ~2 years for the box to pay for itself due to electricity consumption alone.

I agree though, everyone should run their own numbers.

Your calculations are off by a factor of 10.
0.14kW * 8h/day * 365day/year * $0.12/kWh = $49/year
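For anyone who wants to check the correction, the math works out as:

```python
# Re-running the electricity-savings figures from the exchange above.
kw_saved = 0.200 - 0.060      # old build server ~200 W, new E5 box ~60 W
hours = 8 * 365               # ~8 h/day duty cycle
price_per_kwh = 0.12          # $/kWh
savings = kw_saved * hours * price_per_kwh
print(f"${savings:.0f}/year")  # ~$49/year, not $500
```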

MegaDuh (1)

dbIII (701233) | about 3 months ago | (#47512725)

so figure about 350w under typical usage

It doesn't work like that.
To use a car analogy that would be like assuming each car is moving ten tons all the time just because the motor can do it.

Re:MegaDuh (1)

FuegoFuerte (247200) | about 3 months ago | (#47519339)

It depends on the device. Most manufacturers don't drastically over-spec their PSUs for a purpose-built server, because to do so is highly inefficient. In practice, most enterprise-class devices will use somewhere between 65-80% of their max PSU rating under load. In this case, that's somewhere between 302 and 372 Watts, so I settled on a nice even number sort of in the middle. Since the spec sheet I found only listed max power draw and not typical, I used a reasonable estimate based on typical enterprise equipment that I've dealt with. These numbers don't have to be exact, and in fact aren't meant to be - the point is the same; even if the server only used 200W, you'd be looking at somewhere around $13k for power and cooling in a typical datacenter environment over the 8-year lifespan cited. For that much, you can EASILY get a more capable server which will use substantially less power/cooling.
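A quick sketch of the estimate above (the 65-80% derating band is the commenter's rule of thumb, and the $25/W-per-3-years rate comes from earlier in the thread):

```python
psu_watts = 465
low, high = 0.65 * psu_watts, 0.80 * psu_watts  # estimated draw band under load
rate_per_watt_year = 25 / 3                     # $25/W per 3-year lifecycle
cost_8yr = 200 * rate_per_watt_year * 8         # even at a conservative 200 W
print(f"estimated draw: {low:.0f}-{high:.0f} W")     # 302-372 W
print(f"8-year cost at 200 W: ${cost_8yr:,.0f}")     # ~$13,333
```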

Peak (1)

dbIII (701233) | about 3 months ago | (#47520231)

The specs are for peak consumption for whatever can normally be expected to fit in the box. That design criteria can mean speccing it for a couple of dozen power hungry disks even if they are not in 95% of the servers of that type.

the point is the same; even if the server only used 200W

With respect, basing a precise number on a wild guess (e.g. $13k vs $10k or $1k) is pointless numerology, even if it is a common bad habit.
In general terms you have a point, but in specifics you are probably out by an order of magnitude or two, especially for little servers that probably draw under 100W while fully CPU bound despite having a 500W rated power supply.

Re:MegaDuh (1)

dbIII (701233) | about 3 months ago | (#47520257)

In practice, most enterprise-class devices will use somewhere between 65-80% of their max PSU rating under load

Where on earth did you get that from? Also why assume stuff is running at full capacity 24/7/365 anyway? I've got some stuff that's fully CPU bound for weeks at a time (geophysical software), but even then it adds up to only about 1/8 of a year. Other places with less time sensitive stuff can queue things up and get 100% usage out of the resources they have but it's not common outside of specific fields.

Re:MegaDuh (1)

FuegoFuerte (247200) | about 3 months ago | (#47519347)

Also, your car analogy fails, because a computer is nothing like a car.

Re:MegaDuh (1)

dbIII (701233) | about 3 months ago | (#47520349)

The analogy is fine because it describes capability versus actual usage. Reading anything more than that into it is ridiculous - it's an analogy.

In this case the PSU is capable of supplying a lot more power than a fully fitted out server can consume at maximum load, and then some. If the server doesn't have a dozen disks it's likely to still have the same model of PSU as the one that does, and even the one with a dozen disks is not going to be running them all at maximum power consumption all of the time. For example, I've got compute nodes with a single SSD and a dozen empty bays because that's the cheapest decent chassis for a multi-socket board - the max load on those PSUs would be a small fraction of what they can do.

The real answer to this is just look at the power bill to see usage or run a server via one of those now really cheap meters for a while and see what it really draws.

Apart from outliers like the Pentium IV (netburst) stuff, and storage of course, the power consumption of typical machines under typical loads hasn't dropped as much as we'd all like over the past few years. That old Sun machine sitting idle most of the time is not going to use much less power than a new Xeon sitting idle most of the time.
If you go beyond the typical and use an Atom or ARM chip and SSDs you win big time (if your software is portable enough) but comparing like to like you don't.

Re:Duh (1)

Penguinisto (415985) | about 3 months ago | (#47510743)

Electrical energy is also free, apparently.

So is HVAC - go figure.

Propaganda Machine Says: (0)

Anonymous Coward | about 3 months ago | (#47511259)

When IBM releases new hardware the best time to buy is later. When INTEL release new hardware the best time to buy is now.

Re:Duh (0)

Anonymous Coward | about 3 months ago | (#47510351)

Amazon EC2 Free Tier gives about as much performance as a Pentium 3. Run your Quake 3 like it's 1999, in the cloud for free!

Re:Duh (1)

Anonymous Coward | about 3 months ago | (#47510591)

The Free Tier is absolutely useless, and time-limited.

As for preowned hardware: any quad-core system is still usable today, but keep in mind that an 8-year-old quad-core is only half as good as today's quad-core at the same clock speed. You can buy perfectly good dual quad-core systems for $350 off eBay, two of which together equal the performance of an $8,000 new machine.

If you have your gear somewhere where you have an entire rack (e.g. he.net) but they only give you 15A, then it makes no sense to upgrade your equipment except to save energy. My client has exactly 4 dual quad-core machines in the rack that can be used to maximum capacity. If the other idled gear in the rack were also used, it would trip circuit breakers.

There is a point to having the latest systems, but that requires having rackspace with power that lets you actually use it. If I had unlimited power I could fill the rack (42 dual quad-cores = 336 cores, or 158A needed). But I can't, so only 10% of the space is used.
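The rack-power constraint described above checks out with a little arithmetic (all figures are the commenter's):

```python
amps_full_rack = 158        # estimated draw if all 42 slots were populated
servers_full = 42
amps_each = amps_full_rack / servers_full   # ~3.8 A per dual quad-core box
rack_budget = 15                            # amps actually provisioned
print(round(rack_budget / amps_each))       # ~4 boxes fit, matching the post
```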

not main servers. $1300 IP KVM $120. Storage $110 (1)

raymorris (2726007) | about 3 months ago | (#47512455)

For your primary servers, power is a very important cost consideration of course.
On the other hand, I buy Raritan 16-port IP KVMs that are BETTER than their new models at 90% lower cost. I use them a few times per year. They're better than the new ones because they have a perfectly good web interface I can use from my phone to take care of a server that is down, rather than having to drive to the office to use the proprietary control software for the new ones.

Similarly, I use some very popular 16-bay storage boxes that I get for around $100 used. It's nothing more than a metal box with a SAS expander in it. There's darn little that can go wrong with what is essentially just a case and sleds, so why would I want to pay $X000 each for them?

The people talking about tax depreciation obviously haven't thought it through. You pay lower taxes by having lower profits. Sure, spending $20,000 on equipment means you can (slowly) deduct $20,000 from your taxable profit, thereby reducing your tax by $4,000. You just spent $20,000 to "save" $4,000. That's not exactly a brilliant move, especially since that $4,000 is depreciated over at least five years. You want to spend $20,000 now to get $4,000 back, spread over the next five years? I see why you're a computer geek and not an accountant (or manager).
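The depreciation argument above can be sketched like this (the 20% tax rate and the straight-line 5-year schedule are illustrative assumptions, not tax advice):

```python
cost = 20_000
years = 5                     # straight-line over 5 years, for illustration
tax_rate = 0.20               # assumed marginal tax rate
deduction_per_year = cost / years              # $4,000 deducted annually
tax_saved_per_year = deduction_per_year * tax_rate
total_tax_saved = tax_saved_per_year * years
print(f"${tax_saved_per_year:,.0f}/year, ${total_tax_saved:,.0f} total")
# $800/year, $4,000 total: a fraction of the $20,000 outlay
```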

Re:not main servers. $1300 IP KVM $120. Storage $1 (1)

Lord Lemur (993283) | about 3 months ago | (#47514493)

The point being that the capital depreciation can be offset, making the TCO lower. It is an important factor in the math. You want to compare the TCO of your current kit vs. the TCO of optional kit, and a $4K swing might change the winner.

shocking (1)

Anonymous Coward | about 3 months ago | (#47510253)

A guy who is the CTO of a company that deals in used hardware tries to urge people to buy used servers instead of new ones.

No kidding (2)

pkinetics (549289) | about 3 months ago | (#47510527)

And the sad part is some CFO will see the video clip, override the CIO's IT Plan for updating their hardware infrastructure and then complain about a lack of 110% uptime

Re:No kidding (1)

RabidReindeer (2625839) | about 3 months ago | (#47510831)

And the sad part is some CFO will see the video clip, override the CIO's IT Plan for updating their hardware infrastructure and then complain about a lack of 110% uptime

Well, I'm burning in a refurbed Dell server right now. But I only demand 100% uptime, so I'm happy.

Re:shocking (1)

mjwalshe (1680392) | about 3 months ago | (#47510797)

Well, it might make sense for your dev/test infrastructure, if you need to build a technical copy / testbed.

What about power? (3, Informative)

MetalliQaZ (539913) | about 3 months ago | (#47510263)

I can't see the video but in the summary he mentions using two old servers to do the job of one new server. I appreciate the recycling, but it sounds like he is talking processing or I/O equivalence, and usually it is power that is the dominating factor in data center effectiveness. Are two servers really cheaper than one when you factor in electricity, cooling, and rack space?

Re:What about power? (5, Insightful)

MightyMartian (840721) | about 3 months ago | (#47510451)

For some tasks I can understand recycling. I use older hardware to build routers, anti-spam gateways, VPN appliances and the like. Normally these are fairly low-cycle tasks, at least for smaller offices. But I've learned my lesson about using older hardware in mission critical applications. I've set up custom routers that worked just great, until the motherboards popped a cap, and then they're down, and unless you've got spares sitting around, you're in for some misery.

Re:What about power? (2)

afidel (530433) | about 3 months ago | (#47510575)

This is why we've got a virtualization-first strategy. VMware HA makes sure that even if you lose a box, downtime is minimal (and for even more fun, use Fault Tolerance: so long as your switches are properly configured, you lose nothing, since the two VMs run in lock step).

Re:What about power? (1)

mjwalshe (1680392) | about 3 months ago | (#47510755)

that's why you buy telco-grade Cisco gear

Re:What about power? (1)

Ravaldy (2621787) | about 3 months ago | (#47514775)

Yep. Cost of new gear < cost of maintenance + down time.

As equipment ages, failure probability increases. A power supply in the process of failing isn't always easy to identify, as starvation to components can cause odd problems that don't look like a power supply failure. You have to troubleshoot, and it's harder if it's old equipment, because sometimes the standards have moved on and getting quick replacement components is not possible.

Re:What about power? (2, Insightful)

Anonymous Coward | about 3 months ago | (#47510535)

Posting AC as my opinion is mine alone:

Server rooms don't magically expand either. That is why the HP Moonshot, which isn't perfect, is getting a lot of attention. It is far cheaper to buy a dense rack unit than to buy another building, add some kilo-amps of 208-240 VAC UPS and PDUs, as well as the CRACs they require to move the heat out of the building.

On a microcosm, yes, one can run an old single core P3 at home... but why? A newer machine is far more power and heat efficient, and likely runs an OS that is far more recent (thus more secure against modern threats) in general.

Older computers are not old cars. You don't get an achievement for a WoW action by using a Core Duo. Move the data to a VM, donate or properly recycle the machine, and move on.

My company... (5, Funny)

Anonymous Coward | about 3 months ago | (#47510279)

...is on an "upgrade" cycle. All equipment with red LEDs needs to be replaced with equipment with blue LEDs, at least on the front face of the equipment.

The CEO toured the data center recently and wanted to see blue LEDs on everything.

Re:My company... (1)

viperidaenz (2515578) | about 3 months ago | (#47510303)

Blue LEDs are so last decade.
Everything is white LEDs now.

Re:My company... (2)

tsa (15680) | about 3 months ago | (#47510353)

I'm so happy with that. I don't like blue light and many of those LEDs were far too bright.

Re:My company... (1)

NormalVisual (565491) | about 3 months ago | (#47510863)

My Sawtooth Mac had white LEDs back in 2000. :-D

Re:My company... (0)

Anonymous Coward | about 3 months ago | (#47510949)

white is now, but the future is purple.

Re:My company... (0)

Anonymous Coward | about 3 months ago | (#47510533)

The CEO toured the data center recently

I read this as "The CEO found the data center recently"

Re: My company... (0)

Anonymous Coward | about 3 months ago | (#47517725)

The CEO then went back to the office and mentioned the data centre to the CIO .... who said "We have a data center? I thought everything was virtual now?" :-)

My company... (0)

Anonymous Coward | about 3 months ago | (#47510739)

There's more truth to that than I'd like to admit. The EMC VMAX, their Cadillac product, has a giant blue light bar across the front of the rack. The light bar is redundantly powered, so that it always looks good if a C-level does a data center walkthrough.

Re:My company... (0)

Anonymous Coward | about 3 months ago | (#47513067)

Simon Travaglia, is that you?

Re:My company... (2)

scsirob (246572) | about 3 months ago | (#47513579)

I used to work for a hardware vendor who sold equipment to IBM. IBM demanded that all red power LEDs be replaced with green ones. IBM users were used to seeing red LEDs only when there was a fault with the equipment.

Bottom line: sometimes an LED upgrade cycle makes sense.

So says (1)

Stumbles (602007) | about 3 months ago | (#47510299)

the guy whose whole business model is dependent on companies buying used hardware.

Re:So says (2)

NoNonAlphaCharsHere (2201864) | about 3 months ago | (#47510409)

Anyone denigrating "Oooooooh! Shiny!!" CLEARLY doesn't understand Slashdot.

What exactly is the news here? (2)

mi (197448) | about 3 months ago | (#47510311)

So, he says, you should make sure you buy new hardware only when necessary, not just because of the "Ooh... shiny!" factor"

What's new about this advice? Was it not as useful and applicable 50, 100, and 1000 years ago?

Slashvertisement? (4, Insightful)

nine-times (778537) | about 3 months ago | (#47510347)

Guy who sells used computer hardware claims that buying new computer hardware is a bad idea, and that you should buy used gear instead. News at 11.

Not that what this guy is saying is wrong, but there are other unaddressed issues. They cover issues like "power savings", but not the much more important issue of buying an unknown piece of hardware from an unknown vendor, without a warranty. Aside from that, sometimes there are issues of physical constraints -- like I have limited space, limited ventilation, and one UPS to supply power. Do I want to buy 5 servers, or one powerful one?

Also, it's not true that hardware isn't advancing. In the past few years, USB has gotten much faster, virtualization support has improved, drives and drive interfaces have gotten faster, etc.

And sometimes, buying "new" is more about getting a known quantity with support, rather than wagering on a crap-shoot.

Re:Slashvertisement? (5, Insightful)

Anonymous Coward | about 3 months ago | (#47510429)

One new server crammed with RAM, with a support contract, and with readily available power supplies is preferable by FAR to me and my organization versus 6 old units. Especially considering per-processor licensing fees for Windows and VMWare.

Re:Slashvertisement? (5, Insightful)

afidel (530433) | about 3 months ago | (#47510657)

Amen to this. We run ~400 VMs on 14 hosts, using less than 1/3rd the power we did when we ran 160-180 physical boxes, and everything is easier to manage; new deployments take minutes instead of weeks. We've saved a few million by not needing to grow our datacenter, probably over a million on Microsoft licensing, and made both my staff and my customers happier. There's no way I'd run things on old physical boxes just to save a few dollars on capital expenses.

No shit (2)

Sycraft-fu (314770) | about 3 months ago | (#47510777)

We consolidated about 20ish old servers (and added new systems) into two Dell R720xds that are VM hypervisors. Not only does this save on power n' cooling, but it is way faster, more reliable, and flexible. It is much easier and faster to rebuild and stand up a VM, you can snapshot them before making changes, and if we need to reboot the hypervisor or update firmware we can migrate VMs over to the other host so there's no downtime. Plus, less time is wasted on admining them since there are fewer systems, and they are newer.

On top of that they have good support contracts, and some excellent reliability features that you didn't get on systems even 5ish years ago (like actively scanning HDDs to look for failures).

Big time win in my book. Now does that mean we rush out and replace them with new units every year? No, of course not, but when the time comes that they are going out of support, or more likely that usage is growing past what they can be upgraded to handle, we'll replace them with newer, more powerful, systems. It is just a much better use of resources.

Re:Slashvertisement? (1)

Anonymous Coward | about 3 months ago | (#47511103)

Fuck Windows and VMWare, have you seen Oracle's utterly fuckwit virtualisation licensing model?

Stick an Oracle database on just one VM and you have to licence the entire fucking cluster. Total lunacy, and it's costing them sales.

Posting anonymously to dissociate this post from my employer and their Oracle relationships.

Re: Slashvertisement? (1)

dremspider (562073) | about 3 months ago | (#47511371)

Is it really still like this? I remember this was an issue 8 years ago... I would have never thought it was still like this.

Agreed but NEVER just one, you need two or more (0)

Anonymous Coward | about 3 months ago | (#47511873)

Otherwise you get no HA. If you mean X new modern servers crammed with RAM, etc, versus 6X old servers though, then yes, absolutely and without question. Better, more reliable, easier to support, smaller footprint, more efficient, and quite frankly, cheaper in the long run.

Re:Slashvertisement? (1)

nine-times (778537) | about 3 months ago | (#47514977)

That is also a very good point. Licensing fees are often per-processor or per-machine. If I buy 20 old servers and want to buy Windows Server licensing to go with it, I have to buy a separate version of Windows Standard for each. If I buy a single new, extremely powerful server, I might be able to set up 20 virtual servers, and only have to buy 1 copy of Windows Datacenter. And that's just talking about the OS.
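The licensing point above can be sketched with placeholder numbers (the prices below are hypothetical, purely to illustrate the per-machine effect, not actual Microsoft list prices):

```python
# Hypothetical license prices, for illustration only.
standard_license = 900        # assumed price per Windows Standard license
datacenter_license = 6_200    # assumed price for Windows Datacenter on one host
servers = 20
physical_cost = servers * standard_license   # one Standard per old box
virtual_cost = datacenter_license            # one Datacenter on one big host
print(physical_cost, virtual_cost)           # 18000 6200
```

Whatever the real prices, the structure of the saving is the same: per-machine fees scale with box count, so consolidation cuts them.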

Re:Slashvertisement? (0)

Anonymous Coward | about 3 months ago | (#47510441)

As AC to preserve modding you up...

I'd like to point out that there is a business case to be made for using older hardware. Even though some operating and maintenance costs can be higher, buying the systems used offers significant cost savings.

Yeah, sorry... he IS wrong..... (2)

King_TJ (85913) | about 3 months ago | (#47510483)

What I mean is, most businesses keep everything of importance on their servers. Think of the total salaries paid to all of their employees who spend time in front of computers each day. Everything you pay them to do is, essentially, tied to those servers. If the server runs a hosted application slowly, then all of your people using that application are forced to work more slowly -- making them less efficient. If a server crashes and people lose access to information until it's brought back up -- even more inefficiencies result.

Now, WHY would you cheap out over what's probably a $10,000-or-less price difference between buying a new server with a warranty and some guy's used one that's less powerful (but probably uses just as much electricity and requires just as much cooling)?

As it is, I've never worked anyplace where servers get swapped out all that often. When it's time to shop for a new one, they've typically gotten a good 4 or 5 years of 24 hour/7 day use out of the old one already. (In bigger places where they get upgraded more often, I suspect they do a higher volume of business too -- and make more profit with each server than the places I worked.)

Used servers are nice to resell at steep discounts so "end users" get the opportunity to tinker with them. They're probably great as someone's home media server, or for the software developer who wants to experiment with hosting his/her own software app. They're probably even an option for the people who couldn't ever afford new systems in the first place (like charities running on shoestring budgets). But for most of corporate America -- IMO, servers should be purchased new and used no longer than their warranty period.

Re:Slashvertisement? (1)

TeknoHog (164938) | about 3 months ago | (#47510503)

In the past few years, USB has gotten much faster

I agree with most of your post, but this is simply false. USB 3.0 is a completely new interface, bolted on USB 1/2 to make it seem like a seamless transition.

I used to think USB is all about selling a new interface with an old name. For example, in a few years we'd have a CPU socket called USB 14.0, but hey, at least it's USB. Now I have a USB 3.0 hard drive, and the mini plug/socket in particular shows how it's just USB 1/2 + 3.0 bolted together. So my new future prediction is USB 17.0 where you have this fist-sized lump of connectors from different ages, all tied into one bunch to ensure backwards compatibility.

BTW, I have two Intel Core CPUs here, Core 2 Duo T7200 (released 2006) and Core i5 520M (2010), both "mobile" CPUs. The former is a lot faster under certain workloads. In practice, they are roughly equal, and the new one probably has better power efficiency, but it's not exactly the level of progress I'd expect.

Re:Slashvertisement? (1)

dave562 (969951) | about 3 months ago | (#47510677)

What are you talking about? USB 3.0 is significantly faster than USB 2.0. I work in a business where we have to transfer data on physical media due to the volumes involved. We ship hundreds of drives a month. Our clients refuse to accept anything other than USB 3.0 anymore because the previous generation is too slow.

Re:Slashvertisement? (1)

queazocotal (915608) | about 3 months ago | (#47510727)

Sometimes.
If you plug a USB3 hub into a USB3 port and attach two USB2 devices, they will go just as fast -- and no faster -- than on a USB2 hub.
Because that's what it is.
There are no transaction translators at all.
None are even specced as optional, for high-end vendors to aspire to.

Re:Slashvertisement? (1)

TeknoHog (164938) | about 3 months ago | (#47510733)

I'm sorry if you missed my point. I agree that USB 3.0 is faster than USB 2.0 — in the same way that PCI Express is faster than PCI. Does that mean PCI has gotten much faster, or is PCIe a new interface that has replaced PCI?

Re:Slashvertisement? (1)

Bitmanhome (254112) | about 3 months ago | (#47511305)

I agree with most of your post, but this is simply false. USB 3.0 is a completely new interface, bolted on USB 1/2 to make it seem like a seamless transition.

I've been wondering about that -- since a USB 3 port has separate pins for SuperSpeed and high-speed, shouldn't I be able to plug two devices into the same port?

Re:Slashvertisement? (1)

tlhIngan (30335) | about 3 months ago | (#47511143)

Not that what this guy is saying is wrong, but there are other unaddressed issues. They cover issues like "power savings", but not the much more important issue of buying an unknown piece of hardware from an unknown vendor, without a warranty. Aside from that, sometimes there are issues of physical constraints -- like I have limited space, limited ventilation, and one UPS to supply power. Do I want to buy 5 servers, or one powerful one? ...

And sometimes, buying "new" is more about getting a known quantity with support, rather than wagering on a crap-shoot.

And that is the main reason why people buy new: to get the support contract, because they know that if the equipment goes down, they can start losing money fast. Sure, they can do redundancy and stuff, and they often do, but they generally want both units under service contracts, so that when one fails and the other is handling the load, the failed one gets prompt service to minimize the likelihood of a complete stoppage should the other fail.

I've seen perfectly functional equipment force-upgraded because the company making it stopped supporting it. Essential equipment like filers and such? They actually see the end of support coming 6 months ahead and plan the migration well before the contract expires for good, so they can fall back to hardware still under support.

Running old servers is perfectly fine, especially for home use where the user can benefit from the low cost of what was very expensive equipment a few years ago. But until those companies are willing to provide support in case of failure that's better than "here's a spare, fix it yourself", well, there are very valid reasons to go with new. Even if new is barely an upgrade from the old.

Really? this is news in 2014? (0)

Anonymous Coward | about 3 months ago | (#47510367)

Seriously? I don't think any self-respecting, halfway competent IT department has purchased new hardware because of "oooh shiny" in I don't know how many years.

You buy new hardware for three reasons:
1) Warranty coverage
2) Need for processing power
3) ROI from depreciation

Yeah, you can keep old, fully depreciated equipment around, but you're giving up a lot in terms of tax writedowns from depreciation over the lifetime of the equipment. You also don't want to keep old gear around because if it doesn't have a service contract attached to it, and it breaks in production, you are FUBAR. And finally, yeah, you could keep those old servers around and make a big cluster for VMs, but they eat up rackspace, cooling, and power.

If you can afford new, you always buy new.

Re:Really? this is news in 2014? (2, Interesting)

Anonymous Coward | about 3 months ago | (#47510559)

If you can afford new, you always buy new.

I would add "size" to your list; sometimes newer hardware can allow you to expand your resources without taking up as much space. However.....

I don't think your rule is universally true. Buying new is like buying a new car. Yeah, it's great to drive off the lot, get better gas mileage, and chances are slim you'll find yourself with a broken car sitting on the side of the road, but making the payments forever can be a bitch.

You may not get depreciation, but that just means it's an expense, which comes off the top of your net before taxes, so you get it all this year.

You may get a warranty, but what's a 24 hour SLA worth to a mission critical system? What you need is a fully capable hot standby, which is easier to afford when you are buying used.

Processing power? I don't know many applications that *really* need more processing power which cannot be (and likely SHOULD be) effectively split up over multiple servers. Many people think that faster processing means a faster application, but this is extremely rare. What's usually needed is more memory or I/O performance, which can be had for a song if you buy old hardware and upgrade the memory and disk drives with SSDs.

So, while some folks with money go with new, it's not universally true that this is the best use of one's money. Sometimes, if you think about the problem, cheaper, used hardware can serve you well for a lot less coin.

Nail in the coffin... (1)

Anonymous Coward | about 3 months ago | (#47510403)

Oh, great, it's hard enough to replace obsolete equipment as it is. Once management sees this, they'll wonder why we can't keep that old Dell server going a few more years - after all, other companies are buying the same server for this guy. IT will never get another upgrade approved ever again if this gets out. Forget the cost savings of lower-power equipment, and the massive throughput increases in newer drives.

There is one business case for buying old - when a production machine is gone and you can't get parts for it any longer, but you have to use it, you can buy a spare machine or two or three of the exact same model and swap out parts. Used computer brokers are good for something like this.

Re:Nail in the coffin... (2)

LinuxIsGarbage (1658307) | about 3 months ago | (#47511967)

Oh, great, it's hard enough to replace obsolete equipment as it is. Once management sees this, they'll wonder why we can't keep that old Dell server going a few more years - after all, other companies are buying the same server for this guy. IT will never get another upgrade approved ever again if this gets out. Forget the cost savings of lower-power equipment, and the massive throughput increases in newer drives.

Lucky you getting to keep that old Dell server... We have to keep that old 1983 PDP-11 going.

Re:Nail in the coffin... (1)

Anne Thwacks (531696) | about 3 months ago | (#47513639)

We have to keep that old 1983 PDP-11 going.

Ebay the PDP11 and buy a dozen Pentium 4s with the money. Assuming your PDP11 services 12 users, the performance will be similar, and the electric bill less than half. You may need an extra P4 to act as a tape server, a couple more to act as disk servers, and some others as terminal servers.

OK, maybe the PDP11 IS more power efficient!

Just retired a 12 year old server (1)

Anonymous Coward | about 3 months ago | (#47510439)

Little Dell running SQL 2000 on Windows 2000. A whole gigabyte of RAM. Made it through 12 years without an issue. I've been told it's off living on a farm now.

Question for Roblimo (1)

gazbo (517111) | about 3 months ago | (#47510447)

Do you actually feel embarrassed about having posted this, or did you do it willingly?

Re:Question for Roblimo (1)

Roblimo (357) | about 3 months ago | (#47511093)

Thousands of viewers, 10 or 20 complaints. That seems like a pretty good ratio to me.

And, of course, if you have good ideas for video interviewees, why don't you send them to me instead of complaining? Please make sure to include contact info. My email is robinATroblimo-com.

Thanks!

Should be filed under (0)

Anonymous Coward | about 3 months ago | (#47510523)

"No shit Sherlock"

In other news (0)

Anonymous Coward | about 3 months ago | (#47510539)

Captain fucking obvious is looking for a sidekick.

Really? (1)

Bernard Gonzalez (3762317) | about 3 months ago | (#47510579)

I had to sign up just for this, is this a press release of sorts ?

TCO (0)

Anonymous Coward | about 3 months ago | (#47510619)

I know it's a TLA and a corp buzzword but I think it applies here. For most scenarios the actual cost of the hardware is a small part of the total cost of ownership when you factor in things like space, power, cooling, maintenance, support staff, purchasing a warranty, etc.

I'd expect most of those other factors to be higher for used hardware, since I expect it to be less efficient, requiring more space, power, and cooling. It's used equipment, so I'd expect more failures and more maintenance. Overall, I'd expect these other costs in TCO to eat up any savings on the initial hardware price, and you'll likely end up spending more in the long run. Especially when you factor in that many of the companies making this stuff require support/maintenance contracts to even get access to things like updated drivers or firmware updates -- often including critical security fixes -- so a contract may not be optional if you want to have a secure environment.

Sure, there may be cases where it does save you money, but you need to look at all of your costs, and that isn't easy when you have so many aspects to consider and are forecasting out into the future.

How much does a front page ad cost? (2)

dave562 (969951) | about 3 months ago | (#47510641)

I swear we saw an identical article a few months ago.

Go away.

We do not want your advertisements. Nobody wants your old gear. I pay you guys to haul it away, not sell it back to me on Slashdot.

Really Slashdot? Really? (4, Funny)

Tomsk70 (984457) | about 3 months ago | (#47510715)

Next Week; Linux rubbish at server tasks says Microsoft Reseller

Paid Ad ? (2)

Tsiangkun (746511) | about 3 months ago | (#47510813)

Roblimo has thick skin. He said so in March when he posted an advertisement for this same service disguised as a story. This still looks like front-page placement of an ad for a friend's company. How much does it cost for front-page placement?

This is more valid than not (2, Insightful)

Anonymous Coward | about 3 months ago | (#47510959)

Not everyone can buy used hardware, but for those who can, doing so is a huge money saver ($75K worth of hardware six years ago is now selling for thousands). Case in point: we bought a fully stocked 16-blade system for about $4K with quad-core Xeons and 4GB of RAM. People might say that is crap, but not when what you're replacing is already the crap of the crap and upgrades are cheap as well. When factoring in clustering, etc., running on used equipment is hardly risky. Support-wise, this stuff usually has software available that is rock-solid because it's been around for a while. So yes, you wouldn't want to run mission-critical apps on it, but you can get away with running a ton of auxiliary infrastructure. Never mind making for great test systems.

Refurbished is great! (2)

thatkid_2002 (1529917) | about 3 months ago | (#47512735)

When you have SMB-type customers, refurbished hardware is great value. They're usually not willing to fork out for a new server. When there is refurbished hardware for a fraction of the price -- still new enough to be reasonably efficient, and still eligible for an HP Care Pack or whatever -- why not? Having hardware that is up to scratch is good for both you and your customer. Out of dozens of customers of this nature we've never been bitten (and yes, the customer knows the server is refurb + Care Pack).

It's really great when you get a strong business relationship going with your local refurb business. Getting the pick of the litter really gets your geek juices flowing!

We did have a reasonably strong virtualization setup too, and that helps as the article suggests.

The laptop I am typing this on right now is a refurb model that I got for an excellent price a year and a half ago. It's probably the best laptop I've ever had including brand new ones.

Interviewer is extremely ignorant on power (3, Informative)

George_Ou (849225) | about 3 months ago | (#47512869)

At one point the interviewer asks, "how much money you gonna save on electricity for 50 computers, $50/year?" It's clear he's never even attempted to do the math. An extra 100 watts running around the clock in California costs $314.91 per year at the typical above-baseline rate of 35.949 cents per kilowatt-hour. And that's the savings on just one computer system, much less 50.
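The commenter's figure checks out; a quick sketch of the arithmetic, using the rate quoted above:

```python
# Cost of an extra 100 W of continuous draw at California's
# above-baseline rate of 35.949 cents per kWh.
extra_watts = 100
hours_per_year = 24 * 365            # 8760 hours
rate_per_kwh = 0.35949               # dollars per kWh

kwh_per_year = extra_watts / 1000 * hours_per_year   # 876 kWh/year
cost_per_server = kwh_per_year * rate_per_kwh        # dollars per year

print(f"Per server:     ${cost_per_server:.2f}")     # $314.91
print(f"For 50 servers: ${cost_per_server * 50:,.2f}")
```

So across 50 machines, a mere 100 W difference per box is roughly $15,700 a year -- not $50.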

Ignoring important factors (1)

brunes69 (86786) | about 3 months ago | (#47514157)

This guy is ignoring two very important factors here involved in purchasing of IT hardware in any enterprise.

- Hardware is a capital cost whose depreciation can be written off every year on your corporate income tax. After 4 years or so, your hardware has near-zero capital value to the company. Thus, as long as a company believes it will be around to see the asset's depreciation fully written down, there is little advantage in sacrificing performance to save some inconsequential amount on the hardware. This is why companies always buy the latest and greatest.

- The money you spend on the system is just the one-time capital cost. The ongoing costs -- the electricity used, the maintenance costs, the costs of extending the warranty -- will all be substantially higher per unit of compute with older systems than with newer ones.

How much did MarkITx pay Slashdot for this? (0)

Anonymous Coward | about 3 months ago | (#47515617)

To me this sounds like blatant advertising and marketing for a gray-market vendor. When did Slashdot turn into a sales rag?
