
Sun Unveils Thumper Data Storage

ScuttleMonkey posted more than 7 years ago | from the named-by-the-sound-it-makes-when-hitting-the-ground dept.


zdzichu writes "At today's press conference, Sun Microsystems is showing off a few new systems. One of them is the Sun Fire x4500, known previously under the 'Thumper' codename. It's a compact dual Opteron rack server, 4U high, packed with 48 SATA-II drives. Yes, when the standard for a 4U server is four to eight hard disks, Thumper delivers forty-eight HDDs with 24 TB of raw storage. And it will double within the year, when 1TB drives will be sold. More information is also available at Jonathan Schwartz's blog."
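
For a quick sanity check of the headline numbers, here is a minimal sketch, assuming the 24 TB configuration uses 500 GB drives (which is what 24 TB across 48 bays implies):

    # Sanity-check the summary's capacity figures (Python).
    drives = 48
    drive_gb = 500                      # assumed per-drive size for the 24 TB config
    print(drives * drive_gb / 1000)     # 24.0 TB raw today
    print(drives * 1000 / 1000)         # 48.0 TB raw once 1 TB drives ship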

285 comments

:O (3, Funny)

joe 155 (937621) | more than 7 years ago | (#15700941)

24TB... that's almost enough to hold all my pr0n!

Twitterpated. (3, Funny)

Tackhead (54550) | more than 7 years ago | (#15701053)

> 24TB... that's almost enough to hold all my pr0n!

Orly Owl: Why, don't you know? He's twitterpated.
Thumper: Twitterpated?
Orly Owl: Yes. Nearly everybody gets twitterpated in the Thumper room. For example: You're walking along, minding your own business. You're looking neither to the left, nor to the right, when all of a sudden you run smack into a pretty rack holding 24 TBs of pretty racks! Woo-woo!

Re::O (1)

UnknowingFool (672806) | more than 7 years ago | (#15701123)

Don't lie. It's really full of Star Trek episodes.

Re::O (4, Interesting)

FuturePastNow (836765) | more than 7 years ago | (#15701223)

28 seasons of Star Trek + all the movies = 250GB.

Re::O (4, Funny)

Don_dumb (927108) | more than 7 years ago | (#15701320)

28 seasons of Star Trek + all the movies = 250GB.
That is the most geeky post I have seen on this site for a while, you should be proud.

Re::O (2, Insightful)

Chris Burke (6130) | more than 7 years ago | (#15701545)

I nominate "Star Trek Collections" as a new unit of storage measurement! Quick, somebody work out the conversion into the standard Libraries of Congress.

Re::O (1)

joe 155 (937621) | more than 7 years ago | (#15701380)

What season are you missing? By my count there are 29 seasons of Star Trek... next you'll be saying that you have all the movies except The Wrath of Khan!

Thumper: If you... (0, Offtopic)

avirrey (972127) | more than 7 years ago | (#15700950)

can't say something nice, don't say nothin' at all. Keep talking, Thumper, because I like the niceties (specs). ------ Drools @ Technology

I want one! (2, Interesting)

andrewman327 (635952) | more than 7 years ago | (#15700952)

This is perfect for the space constraints in many server rooms nowadays. I wonder how they managed to control the heat output. My laptop only has one HDD and it gets pretty warm. I am very impressed that (according to Sun) it costs $2 per gig! As always, I hope it works as promised.

Re:I want one! (5, Informative)

cyanics (168644) | more than 7 years ago | (#15700972)

and they are especially showing off the low power usage in that kind of space...

48 HDs, 2 CPUs, and still less than 1200 Watts.

Oh man. Datafarm in a single rack.

Re:I want one! (1)

ScottLindner (954299) | more than 7 years ago | (#15700978)

Maybe they didn't do much at all to control the heat? It wouldn't be the first time a vendor left heat issues to the end user to resolve.

I doubt this is the case though. Sun tends to make pretty good hardware. At least that's my limited experience.

Re:I want one! (0)

bcat24 (914105) | more than 7 years ago | (#15700995)

Didn't you hear? They're specially designed for use in cold climates. They store tons of data and keep your building at a comfortable temperature at the same time!

Re:I want one! (1)

andrewman327 (635952) | more than 7 years ago | (#15701073)

"Didn't you hear? They're specially designed for use in cold climates. They store tons of data and keep your building at a comfortable temperature at the same time!"


I am suddenly beginning to doubt that greenhouse gases are responsible for global warming. Al Gore needs to make a movie about 48 drive servers.

Re:I want one! (2, Informative)

Jeff DeMaagd (2015) | more than 7 years ago | (#15701041)

It's not that big of a problem. A 7200RPM drive might take 15W max. 48 drives bring the total up to 720W. Not that bad in the server world, especially given the capacity.

Re:I want one! (1)

andrewman327 (635952) | more than 7 years ago | (#15701110)

7200 rpm? What fun is that? I want 48 15k rpm drives running in RAID 0! Now THAT is fast! (Drive failure? What's that?)

Re:I want one! (2, Insightful)

andrewman327 (635952) | more than 7 years ago | (#15701048)

From TFA (the last one): "We're still figuring out what to call the product, 'open source storage' or 'a data server,' but by running a general purpose OS on a general purpose server platform, packed to the gills with storage capacity, you can actually run databases, video pumps or business intelligence apps directly on the device itself, and get absolutely stunning performance. Without custom hardware (ZFS puts into software what was historically done with specialized hardware). All for around $2.50/gigabyte - with all software included."


This device is very interesting. It is poised to slash costs in data centers. It consumes less space, uses less power, costs $2/gig, and is managed just like any other server. Instead of calling it a data storage device, they should be marketing it as "DAS UBER SERVER!"

Apple did it first. (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#15701328)

Like all other wicked developments in computing, Apple did it first. Welcome to the club, Sun.

Re:Apple did it first. (1)

DAldredge (2353) | more than 7 years ago | (#15701482)

Name three (3) major things Apple did first. Please note that first means no one else did them before Apple.

Interesting (2, Funny)

Dark Paladin (116525) | more than 7 years ago | (#15700959)

I've been talking to the wife about getting a NAS for the house - but now a 1 to 2 terabyte system seems so...puny.

Hey, honey - remember how I said I wanted to store *all* the movies on the server? Get a load of this ;).

Re:Interesting (0)

Anonymous Coward | more than 7 years ago | (#15701337)

You misspelled PR0N as "movies". In addition, I did get the double meaning of the last sentence.

Holy SHIT! (1)

gasmonso (929871) | more than 7 years ago | (#15700969)

Did you see how tightly packed the drives were? Is heat a concern or is there a tornado cooling system in place?

http://religiousfreaks.com/ [religiousfreaks.com]

Re:Holy SHIT! (2, Insightful)

Anonymous Coward | more than 7 years ago | (#15701096)

Could you please put the link to your stupid website in your sig, so those of us who are uninterested don't need to read it a dozen times in every story? KTHX...

Re:Holy SHIT! (4, Informative)

imsabbel (611519) | more than 7 years ago | (#15701282)

Why does everybody here get so worked up about "The HEAT!!111"?
It's 48 HDs in a 4U case. 48 HDs is about 600W under full load.
If you compare this to the fact that there are dual-socket, dual-core servers out there that push 300W through a 1U case, that's nothing.

Also, a 4U case allows the use of nice fat 12cm fans in the front, while the horizontal backplane allows for free airflow (in contrast to the vertical ones used before).

$42,000 (1)

enitime (964946) | more than 7 years ago | (#15700996)

"starting as low as $2 per GB"


Doesn't sound like much... but that's $42,000 for the top 24TB model.

Perhaps it's time to start using "per TB" costs for these things. Surely no one sells sub-terabyte storage servers anymore.

Re:$42,000 (1)

Fredwick.com (975970) | more than 7 years ago | (#15701031)

$42,000 for 24TB is dirt cheap.

Re:$42,000 (1)

avirrey (972127) | more than 7 years ago | (#15701109)

Remember the telcos are going to want to prioritize the HD usage... may not be that sweet of a deal. LOL.

Re:$42,000 (2, Interesting)

bryerton (524453) | more than 7 years ago | (#15701135)

Is it? I recognize that the system itself is impressive. But buying 48 750GB SATA-II 3.5" drives costs around $24,000 and gives you ~36TB. If you look at the pricing, it becomes obvious Sun is drastically over-pricing the drives. The only difference I noticed at first glance between the $40k and the $90k option was the size of the drives. Perhaps I missed something...

If I didn't, only a fool would buy the more expensive version. Just go in for the cheap array, and purchase 750GB drives yourself, re-sell the original 48x250GB ones, and you'll save yourself a rather large sum of money.

Re:$42,000 (0)

Anonymous Coward | more than 7 years ago | (#15701391)

If I didn't, only a fool would buy the more expensive version. Just go in for the cheap array, and purchase 750GB drives yourself, re-sell the original 48x250GB ones, and you'll save yourself a rather large sum of money.


Pretty friggin' obvious you haven't worked at a datacenter. I know no administrator who would do that to save a tiny amount of money and lose the warranty. No, even if these systems use the same drives as your home computer, they are not toys.

Re:$42,000 (1)

OldeTimeGeek (725417) | more than 7 years ago | (#15701478)

Perhaps I missed something...

If you were looking at putting one in your basement, yeah, you could do it cheaper. For a datacenter, however, it's more than just a bunch of drives. Stuff like maintenance, 24-hour support, firmware upgrades, monitoring software, making sure it works with all of your systems and being there to help when it doesn't are important to large enterprises...

Indeed, Sun's list prices are way too high (1, Informative)

this great guy (922511) | more than 7 years ago | (#15701617)

The 12 TB config [sun.com] is sold at $33k, or $2.75/GB, but assembling such a server yourself is possible and can be done today for 1/3rd of this price:

  • 1 x dual-Opteron mobo = 1 x $500
  • 2 x Opteron 285 = 2 x $1100
  • 8 x 2 GB DDR400 registered DIMM = 8 x $300
  • 6 x 8-port PCI-X Marvell SATA card = 6 x $100
  • 48 x 250 GB 7.2kRPM SATA disks = 48 x $110
  • 1 x Chassis+PSU+Rails = 1 x $1000
  • Total = $11980 or $1.00/GB

(I have actually slightly overestimated the above prices.) Of course people are going to say that such a server is not as reliable as a Sun server, that it does not come with technical support, etc. But in most cases such arguments are invalid because you save so much money that you can afford to assemble/maintain the server and replace faulty hardware parts yourself. Time is money, but by having saved money you can now afford time ;-) The living proof that such a model can be successful is Google: instead of buying Sun servers like most startups of their time, they built their servers themselves to save money.
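
For anyone who wants to re-run the arithmetic, here is a minimal sketch using the quantities and estimated prices from the list above (the figures are the poster's estimates, not vendor quotes):

    # Recompute the DIY build total and cost per GB (Python).
    parts = [
        (1, 500),    # dual-Opteron motherboard
        (2, 1100),   # Opteron 285 CPUs
        (8, 300),    # 2 GB DDR400 registered DIMMs
        (6, 100),    # 8-port PCI-X Marvell SATA cards
        (48, 110),   # 250 GB 7.2kRPM SATA disks
        (1, 1000),   # chassis + PSU + rails
    ]
    total = sum(qty * price for qty, price in parts)
    capacity_gb = 48 * 250
    print(total)                   # 11980
    print(total / capacity_gb)     # ~1.00 ($/GB)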

Re:$42,000 (1)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#15701404)

$42,000 for 24TB is dirt cheap.

It is a pretty good price, but not insane. Apple's cheap RAID servers cost about $1.85 per gig. (I know, Apple + low price = crazy.) Mind you, you'd need 8U not 4U to fit them. That is really where I see the advantage here for companies that need more storage but have real space constraints.

Re:$42,000 (1)

Homology (639438) | more than 7 years ago | (#15701121)

> Surely no one sells sub-terabyte storage servers anymore.

Most "storage" servers sold today have less than a terabyte
of capacity.

Okay... (4, Funny)

Vo0k (760020) | more than 7 years ago | (#15701009)

...but how good is it at repelling the antlions?

Ah, yes...a machine labelled "Sun"... appropriate! (1)

soren42 (700305) | more than 7 years ago | (#15701015)

No doubt that fully loaded it generates almost as much heat as the sun, too!

Really snazzy tech, but that's a lot of moving parts in a little space... and probably too hot to touch. Could you imagine the cooling required for a densely-packed data center of these things?

Or am I way off base here?

Re:Ah, yes...a machine labelled "Sun"... appropria (2, Interesting)

geoff lane (93738) | more than 7 years ago | (#15701314)

The (redundant) power supply is rated at 1800 Watts, which implies about 6300 BTU/hr of heat out of the box. For 24TB and a server, that is remarkably low.
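
For reference, the watts-to-BTU conversion is easy to reproduce; a minimal sketch using the standard 3.412 BTU/hr per watt factor and the 1800 W rating quoted above:

    # Convert the rated power draw into a heat load (Python).
    watts = 1800                     # redundant PSU rating from the parent post
    btu_per_hr = watts * 3.412       # 1 W ~= 3.412 BTU/hr
    print(round(btu_per_hr))         # ~6142 BTU/hr

That lands in the same ballpark as the ~6300 BTU/hr quoted above; the exact figure depends on the conversion factor used and on how close to its rating the supply actually runs.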

Dune.. (4, Funny)

WizADSL (839896) | more than 7 years ago | (#15701018)

Thumper? I hope the sand worms stay away...

Re:Dune.. (1)

creimer (824291) | more than 7 years ago | (#15701074)

That's the whole beauty of this new product. It's in an air conditioned room and outside the sand worms keep all physical intruders away. The only thing you need to worry about are the virtual sand worms coming over the Net.

cooling (3, Interesting)

Zheng Yi Quan (984645) | more than 7 years ago | (#15701027)

Heat output from all those drives is a concern, but if you look at the photo on the ponytailed hippie's blog, you can see that the box has 20 fans in the front and probably more in the back. Makes you wonder what the thrust-to-weight ratio is. This box is going to make a screaming database server. 2GB/sec throughput to the internal disk beats anything out there, -and- the customer doesn't need to invest in SAN hardware to do it.

Re:cooling (1)

eln (21727) | more than 7 years ago | (#15701063)

Great, so it's throwing all that heat out to raise the ambient temperature in the rack and force you to invest in more air conditioning power and more specialized airflow in the rack to keep this thing from damaging your other systems.

I'd be interested to see how your actual overall power consumption within a rack and within a data center is affected by this thing.

Re:cooling (1, Insightful)

Anonymous Coward | more than 7 years ago | (#15701132)

So 24TB of storage servers otherwise doesn't generate heat?

Put doors on the rack and chimney the damn thing. Use plastic wrap if you have to. Ain't nothing any data center doesn't do for blades anyway. Drives just don't generate the amount of heat you think they do, though.

Re:cooling (1)

Wesley Felter (138342) | more than 7 years ago | (#15701101)

Database performance is generally more related to IO/s, not GB/s. Thumper may still win an equal-cost comparison against enterprisey SAN equipment because it gets more spindles.

Re:cooling (1)

EbbTide (966055) | more than 7 years ago | (#15701305)

This box is going to make a screaming database server. 2GB/sec throughput to the internal disk beats anything out there, -and- the customer doesn't need to invest in SAN hardware to do it.


Yes, but if you want any sort of redundancy in your database, you're going to have to go to the SAN. Unless you choose replication, but that's not an optimal solution for availability.

Re:cooling (0)

Anonymous Coward | more than 7 years ago | (#15701490)

.... and what happens when a disk fails? You have to stop the whole array to open the top to replace one disk - not very hot-swap! I seem to remember a very similar product called Kashia (Cashia, Cassia?... sorry, old brain in serious decline) that died a death as soon as datacenter managers asked "And what about online drive replacement?"

Imagine the scene - DB admin is on the phone to the FD:
ADMIN: "Yes, sir, we've found the problem with the order system, but I'm going to have to shut down all the applications - billing, shipping, payroll, all of 'em - with databases on the same storage array so I can swap out the one faulty disk..."
FD:
FD's ASSISTANT: "So, which address do you want me to send your pink slip to?"

Re:cooling (0)

Anonymous Coward | more than 7 years ago | (#15701575)

erm... you just swap it out?

Who said the machine needs to be powered down?
You unmount the drive or remove it from the ZFS pool, remove, replace and reintroduce the new disk into the pool.

ZFS (using RAID-Z) will carry on as if nothing has happened

Re:cooling (1)

oliana (181649) | more than 7 years ago | (#15701342)

Picture gallery shows 5 hot-swappable dual fans for a total of 10. And the back doesn't appear to have any (unless there are some hiding in the power supplies, but it didn't look like it.)

But there's a really cool 360 tour [sun.com] complete with the ability to take out drives, fans, CPUs, power supplies.

Ok. Now if I could just afford it. (1)

kabocox (199019) | more than 7 years ago | (#15701032)

I would love to be able to spend $33K on that. I'd be lucky to get something in the $3-$4K range approved though. Do you have anything in that price range that I might actually get past my boss?

Re:Ok. Now if I could just afford it. (1)

bearl (589272) | more than 7 years ago | (#15701205)

I don't work for these guys, but we've bought stuff from them before and they've been reasonable.

NAS servers [elitepc.com]

Nowhere near those Suns in capacity or performance, but they are less expensive.

Re:Ok. Now if I could just afford it. (1)

DAldredge (2353) | more than 7 years ago | (#15701502)

A few GPS trackers, a film camera with zoom lens and some remote audio equipment + time should allow you to get enough information on your boss so that getting anything approved will not be a problem.

Software RAID only, plus 7200 RPM no10k or 15k (-1)

belrick (31159) | more than 7 years ago | (#15701068)

This is a more low-end or small-business grade disk configuration. Mid-range or high-end would have HW RAID. I've only seen 15,000 RPM in 146 GB disks and smaller, and 10,000 RPM in 300 GB disks, but I'd hoped the 250 GB option would have had 10,000 RPM disks.

Re:Software RAID only, plus 7200 RPM no10k or 15k (3, Informative)

ratboy666 (104074) | more than 7 years ago | (#15701289)

Actually, software RAID is an advantage, performance-wise.

The old-time "big-ticket" was checksum calculation, but that is now an "also-ran". Distributing the i/o? Software can do it as well as hardware.

Both hardware and software have to be familiar with the blocking factor.

Where software wins is that it can be aware of whether a block has ever been used (or is presently in use) and skip the read needed to fill it, something hardware RAID controllers cannot avoid doing.

The idea is to tie the RAID more tightly into the filesystem.

As to lower speed drives -- did you count the heads? Each is active at the same time. Yes, an individual i/o would complete faster with 10k or 15k spin, but the total throughput is based on the number of heads. For RAID5, reading multiple blocks will give you pretty much all the read performance you can stomach.

Write performance for an individual write operation would be improved, but generally application buffering deals with it. The tradeoff is number of heads, spin rate, and heat. The right balance? For you, write performance up and, keeping heat constant, number of heads down (I presume that you are dealing with transactional loads, with commits). For me, it tends to go the other way (my workload is general storage, with a bit of database).

As always, YMMV
Ratboy

Re:Software RAID only, plus 7200 RPM no10k or 15k (3, Informative)

E-Lad (1262) | more than 7 years ago | (#15701335)

This box is 100% designed to be used in mutual full advantage with ZFS. Thumper is what you would call a modern RAID array, as ZFS in this case blurs the distinction between hardware and software RAID. The CPU and memory horsepower is there for RAID-Z.

From this box, one can serve out file systems with NFS and/or SMB/CIFS (aka a traditional NAS), and in future releases of Solaris 10, also serve out LUNs over iSCSI and FCP while having all that data backed by the performance, reliability, and features of ZFS. The only thing it's missing is a consolidated, centralized CLI for manipulating storage, a la NetApp and ONTAP... but all the requisite pieces are there to turn Solaris, and especially Solaris-on-Thumper, into a NetApp killer at less cost.

ATA over Ethernet (1)

bhima (46039) | more than 7 years ago | (#15701071)

Am I wrong in thinking that AoE is a little more flexible / interesting than this?

Re:ATA over Ethernet (1)

DavidRawling (864446) | more than 7 years ago | (#15701153)

Possibly, but I'm certain you could add iSCSI target software to this and you get the benefit of multiple spindles and SAN-like behaviour with cheap disks, clustering and shared storage, heterogeneous host systems (Windows, Linux, Unix, etc), single point of backup ...

And here I was salivating over 24 spindles in 4RU ... damn!

Re:ATA over Ethernet (1)

dreddnott (555950) | more than 7 years ago | (#15701369)

I suspect that the X4500 is just a vehicle for Sun's much-vaunted ZFS filesystem (hence RAID-Z in the specs).

Wow (2, Funny)

bepolite (972314) | more than 7 years ago | (#15701120)

If my math is right... that's 50,331,648MB / 295,734,134 (US Population) = 174.27683 kilobytes for every man, woman and child in the US. In one box!
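
The division checks out; note that 50,331,648 MB corresponds to 48 TB (the doubled capacity mentioned in the summary) rather than 24 TB. A quick sketch of the arithmetic:

    # Reproduce the per-capita storage figure (Python).
    capacity_mb = 48 * 1024 * 1024          # 50,331,648 MB, i.e. 48 TB
    population = 295_734_134                # US population figure used above
    print(capacity_mb / population * 1024)  # ~174.27683 KB per person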

And if MY math is right (1)

jeffmeden (135043) | more than 7 years ago | (#15701199)

296,344,308,438,456,234 * 349,000,000 = Who the hell cares about that statistic??? Seriously, let's compare the bit count per 1U to the number of chicken eggs laid per year in the US.

You Must Be New Here (1)

darthservo (942083) | more than 7 years ago | (#15701457)

296,344,308,438,456,234 * 349,000,000 = Who the hell cares about that statistic??? Seriously, let's compare the bit count per 1U to the number of chicken eggs laid per year in the US.

Welcome to Slashdot...News for nerds

Re:Wow (0)

Anonymous Coward | more than 7 years ago | (#15701240)

Wow, that much? 174.27683k should be enough for anyone.

That brings me back... (1)

bennomatic (691188) | more than 7 years ago | (#15701440)

I remember when I got my 1541-compatible MSD SuperDrive for my Commodore 64. Those 5 1/4" floppies held an amazing 170KB of data. That was like the equivalent of 20 5-minute tapes!

more insightful than funny (0)

Anonymous Coward | more than 7 years ago | (#15701458)

Not that funny - that's pretty much what we're evaluating using one for: a record on nearly every person in a state, with hopes of going nationwide.

Must ... fight ... compulsion... (0)

Anonymous Coward | more than 7 years ago | (#15701550)

174.27683 kilobytes ought to be enough for everybody.

Variable redundancy? (4, Insightful)

Jerk City Troll (661616) | more than 7 years ago | (#15701224)

It would be nice if the system had a setting where you could transparently specify a redundancy factor at the expense of capacity. For example, I could set a ratio of 1:3 where each bit is stored on three separate disks. This ratio could increase up to the number of disks in the system. And of course, little red lights appear on failed disks, at which point you simply swap it out and everything operates as if nothing happened (duh). Sure, we have a degree of this already, but managing redundant arrays is still a very manual process, and when we start talking about tens or soon hundreds of terabytes, increased automation becomes a necessity.

Re:Variable redundancy? (4, Informative)

Anonymous Coward | more than 7 years ago | (#15701273)

Check out ZFS-- http://www.opensolaris.org/os/community/zfs [opensolaris.org]

It makes managing this sort of storage box a snap, and allows you to dial up or down the level of redundancy by using either mirroring (2-way, 3-way, or more) or RAIDZ. And soon, RAIDZ2.

Additionally, Solaris running on the machine has fault management support for the drives, can work with the SMART data to predict drive failures, and exposes the drives to inspection via IPMI and other management interfaces. Fault LEDs light when drives experience failures, making them a snap to find and replace.

Re:Variable redundancy? (3, Informative)

Wesley Felter (138342) | more than 7 years ago | (#15701296)

ZFS can provide anywhere between 200% and 10% redundancy depending on what mode and stripe size you use. It should also automatically repair when failed disks are replaced.
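
As a rough illustration of where those two endpoints come from, here is a simplified sketch that ignores ZFS metadata and assumes single-parity RAID-Z (one parity disk per stripe):

    # Space overhead for different redundancy layouts (Python).
    def mirror_overhead(ways):
        # An n-way mirror stores (n - 1) extra copies per usable byte.
        return (ways - 1) * 100

    def raidz_overhead(stripe_width):
        # Single-parity RAID-Z: one parity disk per stripe of N disks.
        return 100 / (stripe_width - 1)

    print(mirror_overhead(3))         # 200 (%) -- 3-way mirror
    print(mirror_overhead(2))         # 100 (%) -- 2-way mirror
    print(round(raidz_overhead(11)))  # 10  (%) -- 11-disk single-parity stripe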

Re:Variable redundancy? (1, Informative)

Anonymous Coward | more than 7 years ago | (#15701333)

It would be nice if the system had a setting where you could transparently specify a redundancy factor at the expense of capacity. For example, I could set a ratio of 1:3 where each bit is stored on three separate disks. This ratio could increase up to the number of disks in the system. And of course, little red lights appear on failed disks, at which point you simply swap it out and everything operates as if nothing happened (duh).

with ZFS it's as easy as that.

I saw a demonstration at LinuxTag in Wiesbaden, Germany. They used files instead of hard disks, and just filled one of them with random bytes. Everything worked as if nothing had happened...


--
http://moritz.faui2k3.org/ [faui2k3.org]

Pfft... (1)

Miguelito (13307) | more than 7 years ago | (#15701235)

I saw at least 2 different companies offering almost identical ideas at least 2 years ago. Sure, the total storage wasn't as high since the disks weren't as big yet, but a big 4U chassis with a ton of disks installed vertically isn't anything new.

Re:Pfft... (2, Interesting)

spun (1352) | more than 7 years ago | (#15701531)

You can also buy commodity 3U server chassis that hold 16 drives. We built a number of these as ROCKS cluster head nodes for Los Alamos National Labs. Two 3ware SATA RAID cards running 8-drive RAID 5 arrays, bonded together in software as a RAID 0 array. Decent performance, relatively inexpensively. Which is, after all, what the I in RAID is supposed to stand for. If you do this, get the SATA backplane that uses 4 Infiniband cables instead of 16 SATA cables and the cards that support that. I've done it both ways, and trust me, your knuckles will thank you for the four-fold reduction in cables. As an interesting aside, the chassis we used has a space up top for a 2.5" laptop hard drive to use as the system disk. It's the only way to fit a system disk in that chassis.

The return of Sparc Storage Array (1)

Wiseleo (15092) | more than 7 years ago | (#15701239)

I have an SSA 1000 in storage with a matching SparcStation 10 that has a fibre channel host bus adapter. The SSA had 30 SCSI disks in groups of 10 and a fibre channel interface. Once upon a time it made for a lot of fun doing some database performance modeling with 30 1GB SCSI drives. The size was roughly the same.

Blades are more important! (0)

Anonymous Coward | more than 7 years ago | (#15701249)

I think the bigger news is the Sun blade system [sun.com]. They were missing it for so long and now they have something to compete with IBM [ibm.com] and HP [hp.com] blade servers. And although Sun's low end servers are something similar to blades ($1300 per unit, duh!) they're not quite the same.

sun infatuated with sw-raid ? (1)

TobiasS (967473) | more than 7 years ago | (#15701295)

Why does Sun hate hardware RAID solutions? I guess they don't want it to compete with their SAN'ish products. I just can't see using this thing for anything other than workgroup file servers and disk-to-disk backups. It will probably do fine on linear reads; I can't imagine the random access and write performance being all too hot.

Re:sun infatuated with sw-raid ? (0)

Anonymous Coward | more than 7 years ago | (#15701398)

> Why does Sun hate hardware RAID solutions?

Do you think hardware RAID controllers have their transistors configured for the purpose of RAID alone? Know what they're running? SOFTWARE.

This thing IS a RAID controller.

hardly... (1)

dreddnott (555950) | more than 7 years ago | (#15701436)

ZFS is probably a step ahead of most hardware RAID solutions. I believe Ratboy666 wrote an excellent post above that details Sun's likely reasoning.

Re:sun infatuated with sw-raid ? (1, Informative)

Anonymous Coward | more than 7 years ago | (#15701503)

Why does Sun hate hardware RAID solutions?

Remember that "hardware" RAID is simply a dedicated low-end microcontroller (compared to an Opteron) running pretty much the same software that "software" RAID runs -- but in an environment where it's harder to apply patches and bug fixes (you need to re-flash the firmware).

This extra dedicated microcontroller is nice when it's in a server that's doing a lot of other CPU-intensive stuff (rendering); but on a dedicated file server you're far better off having the powerful main CPUs perform the RAID logic == software RAID.

Considering.... (1)

cbiltcliffe (186293) | more than 7 years ago | (#15701304)

Considering you can get 750GB drives now, shouldn't this thing be currently capable of 36 TB raw capacity?

Re:Considering.... (1)

DAldredge (2353) | more than 7 years ago | (#15701515)

Sun has to have time to test and certify the drives - they don't run down to Fry's and buy the biggest drives they can find. People who buy this sort of thing kind of look down on the use of untested/unproven drives...

Is it even possible (even on Slashdot) (1)

The_REAL_DZA (731082) | more than 7 years ago | (#15701315)

to imagine a Beowulf cluster of these?

Sorry, couldn't resist; I'm usually about a day late for that particular well-worn meme.

24TB for $70k (Sun) or 24TB for $16k (generic) (2, Interesting)

linuxbaby (124641) | more than 7 years ago | (#15701353)

We were waiting anxiously for this item to be announced, because we have about 100TB of storage (now) and add about 8TB per month. Perfect customer for these.

But, unfortunately, they're not quite as cheap as I had thought. (Friend on the inside thought Sun was going to price them at $1.25 per GB, not $2 per GB)

Instead, we've been using these. Very good cooling:
http://www.rackmountpro.com/productpage.php?prodid=2348 [rackmountpro.com]

32 SATA-II 750GB drives = 24TB, same as the Sun X4500, but for only $16,000 for the entire system (chassis, mobo, ram, drives) instead of $70,000 for the Sun Thumper. Huge difference, especially if you're ordering many of them.

Re:24TB for $70k (Sun) or 24TB for $16k (generic) (1)

JungleBoy (7578) | more than 7 years ago | (#15701577)

I agree, this Sun box is way overpriced. In May, I received a similar box for our lab from Atipa [atipa.com]. Dual Opteron, 24 x 500GB SATA-II drives. I'm testing it in a RAID 60 configuration right now. I'm pulling over 350 MB/s at the application level. I'm using a pair of Areca [areca.us] RAID 6 controllers (with real Open Source kernel support, thanks Eric Chen!) and striping them together with mdadm. It's amazingly fast. And with 500GB platters, I'm relieved to have N+2 redundancy.

Re:24TB for $70k (Sun) or 24TB for $16k (generic) (2, Interesting)

larien (5608) | more than 7 years ago | (#15701616)

If you buy 10 at a time, it comes down to around $47k each (http://store.sun.com/CMTemplate/CEServlet?process=SunStore&cmdViewProduct_CP&catid=151017). Also, if you're paying list price on Sun kit, you're doing something wrong.

Congratulations, Jonathan! (1)

Reverend528 (585549) | more than 7 years ago | (#15701359)

You got your Blog linked on the front page of Slashdot. Now get your butt upstairs, Mom needs help with the dishes!

ZFS (4, Insightful)

XNormal (8617) | more than 7 years ago | (#15701385)

This fits nicely with Sun's new ZFS [opensolaris.org] file system.

ZFS blurs the traditional boundaries between volume management, RAID and file systems. All disks are added into one big pool that can be carved out into either the native ZFS filesystem format or virtual volumes that can be formatted as other filesystem formats. It has many other interesting features like instantaneous snapshots and copy-on-write clones.

True size? (1)

drkfce (932602) | more than 7 years ago | (#15701397)

Is this 24TB where 1TB = 1,000,000,000,000 bytes or 1,099,511,627,776 bytes? If it is the former, that sure cuts down on the true size.
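
For the record, the difference is about 9%; a quick sketch of the arithmetic:

    # Decimal vs. binary terabytes (Python).
    decimal_tb = 24
    total_bytes = decimal_tb * 10**12
    print(total_bytes / 2**40)    # ~21.83 "binary" TB if the 24 TB is decimal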

Two and a Half Libraries of Congress (2, Funny)

monopole (44023) | more than 7 years ago | (#15701464)

Or the complete text content of the Library of Congress, coupled with 6 Academic Research Libraries, with the capacity to dump the equivalent of 2 pickup trucks' worth of books every second. In a 4U rack. For the price of several cars. Now that's my type of bookshelf system!

Huh? (0)

Anonymous Coward | more than 7 years ago | (#15701512)

Its compact dual Opteron rack server, 4U high, packed with 48 SATA-II drives.


OK, what does its compact dual Opteron rack server, 4U high, packed with 48 SATA-II drives do?

Crazy (2, Funny)

Reality Master 101 (179095) | more than 7 years ago | (#15701517)

<old_man_mode>Yeh dern kids today are gawdamn spoiled. Back in mah day, we didn't have these FANcy tahrabyte arrays! My TRS-80 had 128K -- that's right, KAY-uh -- on a floppy! And the operating system took about 40K of that, leavin' me about 85K left! And I was happy to have it! I had tuh use a paper hole-puncher and cut a write-protect tab so I could flip the floppy over tuh get more space! Damn kids these days... -mumble- -grumble-</old_man_mode>

Beware of the 2.5" disk drives (2, Informative)

xenophrak (457095) | more than 7 years ago | (#15701522)

I'm glad that they are at least offering a server in this class with 3.5" disks. The 2.5" 10K RPM SAS disks that are on the x4100 and x4200 are just junk, pure and simple.

Re:Beware of the 2.5" disk drives (1)

bencc99 (100555) | more than 7 years ago | (#15701595)

The disks are ok - the controllers are a bit pap. It's the same in the T2000, sadly :(

See my benchmarks [spod.cx] - I might get around to doing one of the X4100s we've got for comparison...

You call 24 TB storage? Bah! (1)

Sodki (621717) | more than 7 years ago | (#15701563)

Thumper delivers forty-eight HDDs with 24 TB of raw storage. And it will double within the year, when 1TB drives will be sold.

Probably they'll just stick a Bacterial DVD on it.
