
Intel Intros 310 Series Mini SSDs

samzenpus posted more than 3 years ago | from the judge-them-by-their-size-do-you dept.

Data Storage

crookedvulture writes "Intel has added a couple of tiny 310 Series solid-state drives to its storage lineup. Measuring just 51 x 30 x 5.8mm, the mini-SATA SSDs are about a tenth the size of a standard notebook hard drive. Impressively, their performance ratings track with full-sized SSDs. Intel is pushing the 310 Series as a solution for dual-drive notebooks that combine solid-state and mechanical storage to give users the best of both worlds. Next-gen notebooks just got a little more interesting."


122 comments


kkkkkkkkkk (-1)

Anonymous Coward | more than 3 years ago | (#34706780)

lolwut

"Mini Series" (1)

Anonymous Coward | more than 3 years ago | (#34706826)

Hopefully I'm not the only one who read the title as "Intel intros 310 mini series".

Drat (4, Interesting)

DurendalMac (736637) | more than 3 years ago | (#34706832)

I was excited because these appear to be Mini PCIe cards, but then I was disappointed: it looks like it's a SATA connection that merely shares the form factor. It's not entirely clear, though.

Re:Drat (1)

oldspewey (1303305) | more than 3 years ago | (#34706842)

Why is SATA a disappointment?

Re:Drat (2)

dreamchaser (49529) | more than 3 years ago | (#34706898)

Why is SATA a disappointment?

Because slightly older laptops might have an empty Mini PCIe slot but not an extra SATA connector? To me it's not a disappointment, but perhaps to the poster you replied to it is.

Re:Drat (1)

stevel (64802) | more than 3 years ago | (#34710206)

The MiniPCIe standard includes SATA lines, as well as USB. So if you have an open full MiniPCIe connector, it probably has SATA capability. What you have to watch for, though, are slots that are physically MiniPCIe but which are wired for USB only (many notebooks and netbooks with WWAN connectors), or use non-standard pinouts for PATA (Dell Mini 9, for example.)

What is not clear, for the add-on user, is whether the SATA lines are visible to the chipset. Mobile chipsets usually have only one or maybe two SATA ports.

In any event, these Intel cards are interesting for notebook and netbook manufacturers, less so for the end-user interested in a DIY upgrade.

Re:Drat (5, Insightful)

Rockoon (1252108) | more than 3 years ago | (#34706908)

SATA 1.0 (1.5 Gb/s) can't keep up with any modern SSD.

SATA 2.0 (3.0 Gb/s) is currently holding the industry back.

SATA 3.0 (6.0 Gb/s) isn't widely adopted yet, but even when it's finally popular enough, it too will hold the industry back.

SATA-IO should be ashamed of itself for implementing 3.0 with such bullshit specs given the obvious reality of the situation.

That's why many people want PCIe to become a standard interface for SSDs. That won't happen until low-cost, low-capacity SSDs use it.

Re:Drat (2)

afidel (530433) | more than 3 years ago | (#34707038)

Even the Fusion I/O cards with SLC only push 500-700MB/s depending on the workload, and they cost $7,500 for a 160GB card; SATA 6Gb should be plenty fast for a consumer standard.

Re:Drat (1)

AHuxley (892839) | more than 3 years ago | (#34707152)

Yes, SATA 6Gb SSD RAID via a good SandForce-like solution or better.
Or pack the PCI slots :)

Re:Drat (2)

Khyber (864651) | more than 3 years ago | (#34707936)

RAID

Good SandForce

As long as it's not on Sandy Bridge, with its gimped PCI-E, maybe you've got a point.

Re:Drat (1)

AHuxley (892839) | more than 3 years ago | (#34708246)

And one day we'll get light :)

Re:Drat (3, Informative)

Rockoon (1252108) | more than 3 years ago | (#34707154)

OCZ has 740MB/s cards for an order of magnitude less than Fusion I/O's offering (save $7,000 and spend only $650), and with 50% more capacity too (a 240GB card).

For cards in the price range you are talking about, OCZ delivers 1400MB/s on its 512GB card.

You seem to be less informed than you realize.

Re:Drat (2)

afidel (530433) | more than 3 years ago | (#34707208)

Are those best-case numbers or worst-case? OCZ has a history of claiming huge numbers and terribly under-delivering. Oh, and at least for my use case, MLC is a non-starter, so the only OCZ card I'd be interested in is the Z-Drive e88 R2, which is ~$10k, so 30% more for a two-card solution (RAID 1), and I only need ~120GB for the OLTP tables.

Re:Drat (3, Informative)

sr180 (700526) | more than 3 years ago | (#34707348)

Yes, we've been evaluating the OCZ cards, and they are much slower in real life than the benchmarks suggest. Note that FusionIO has a FusionIO Duo, which pulls 1.5GBytes a sec. This seems to be the holy grail of speed atm.

Re:Drat (1)

jon3k (691256) | more than 3 years ago | (#34710444)

Just curious, why is MLC not an option? Is it just a matter of sheer IOPS requirements or do you have longevity concerns? Can I ask what the workload is?

Re:Drat (1)

afidel (530433) | more than 3 years ago | (#34710540)

Longevity concerns: worst-case numbers based on our workloads put the minimum life for MLC at ~6 months if the controller isn't very smart about write amplification; the 10x improvement for SLC makes that a much more acceptable ~60 months. The load is a mix of OLTP and reporting against a JD Edwards database. When you have lots of 8KB random writes, you can wear out cells pretty quickly.
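The lifetime estimate above can be reproduced with back-of-the-envelope endurance arithmetic: total endurable writes are capacity times P/E cycles, consumed at the host write rate times write amplification. The function and every number below are illustrative assumptions, not figures from this thread; they are chosen only to reproduce the ~10x MLC/SLC gap described:

```python
def drive_life_months(capacity_gb, pe_cycles, host_gb_per_day, write_amp):
    """Rough SSD lifetime: total endurable writes / effective daily writes."""
    total_endurance_gb = capacity_gb * pe_cycles
    days = total_endurance_gb / (host_gb_per_day * write_amp)
    return days / 30.4  # average days per month

# Hypothetical 160 GB drive under a heavy 8 KB-random OLTP load with a
# naive controller (write amplification ~10). MLC at ~5k P/E cycles vs
# SLC at ~50k reproduces the 10x lifetime gap described above.
mlc = drive_life_months(160, 5_000, 450, 10)
slc = drive_life_months(160, 50_000, 450, 10)
print(f"MLC: {mlc:.0f} months, SLC: {slc:.0f} months")
```

Since SLC's P/E rating is the only term that changes, the lifetime ratio is exactly the P/E ratio.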

Re:Drat (3, Informative)

bobcat7677 (561727) | more than 3 years ago | (#34712306)

Having worked with sets of comparable cards from Fusion IO and OCZ (IOXtreme and Zdrive), I can give this assessment:

Neither card met its published performance numbers. But the Fusion I/O card came closer to its published numbers than the OCZ card in basic benchmarks, making the Fusion I/O card quite a bit faster for raw throughput. Both cards were blazingly fast, though, pushing MBps and IOps like no tomorrow.

Real-world performance suffered greatly with the Fusion I/O cards due to their software-driven architecture. The CPU overhead was significant, even on a powerful multi-CPU Xeon server. The OCZ cards did not have this problem.

The price/performance ratio in the real world made OCZ the winner overall. The competition was closest when excluding CPU overhead, but once you include CPU overhead, the OCZ cards win hands down.

Support from Fusion I/O was highly disappointing. With OCZ you expect minimal support, but I expected something better from the "premium" Fusion I/O brand (and price point). Unfortunately, their support was no better than OCZ's.

We originally evaluated the original Zdrive model, which was kind of a rough implementation of the technology. If you are going to buy one now, avoid the old Zdrives...there are several problems with their design. The new R2 Zdrives have fixed these problems and are sold at basically the same price point for similar specs.

We eventually returned the Fusion I/O cards due to their ridiculous CPU penalty. We still have the OCZ cards, but have stopped using them in favor of normal SAS controllers with hot-swap SSD drives. It's just not convenient to shut down a server and crack open the case just to replace a failed SSD...and SSDs do fail. :) At this point, PCIe SSD cards seem better suited to high-end workstation applications, where it's not as big of a deal to crack open the box for maintenance.

Re:Drat (1)

nabsltd (1313397) | more than 3 years ago | (#34707398)

No SATA SSD pushes even 250MB/sec for continuous reads in the real world, even when connected to a 6Gbps SATA controller. See the latest comparison benchmarks [techreport.com] .

This is because the entire SATA controller typically gets a single PCI Express lane, which maxes out at 500MB/s. The OCZ cards use 4 lanes, so the 550MB/sec or so they actually deliver in benchmarks [guru3d.com] is pretty poor use of a 2GB/sec maximum bandwidth.
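The lane arithmetic here is easy to check: after 8b/10b overhead, a PCIe 1.x lane delivers roughly 250MB/s of payload and a PCIe 2.0 lane roughly 500MB/s. A minimal sketch (the function name is mine):

```python
# Effective per-lane payload bandwidth in MB/s, after 8b/10b encoding overhead.
PCIE_PER_LANE_MB_S = {1: 250, 2: 500}

def pcie_bw_mb_s(gen: int, lanes: int) -> int:
    """Aggregate effective bandwidth for a PCIe generation and lane count."""
    return PCIE_PER_LANE_MB_S[gen] * lanes

print(pcie_bw_mb_s(2, 1))  # a single PCIe 2.0 lane: 500 MB/s
print(pcie_bw_mb_s(2, 4))  # a four-lane slot: 2000 MB/s
```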

Re:Drat (1)

Rockoon (1252108) | more than 3 years ago | (#34707508)

None of the SSDs they tested in your first link were SATA 3.0 (6 Gbps), so obviously they were restricted to less than 300MB/sec....

As for your second link: since they're benchmarking the slowest OCZ card (and showing that the benchmarks agree with the advertised speed), why are you declaring that it makes poor use of PCIe x4?

It's the slowest card that OCZ offers. Think about it.

Don't be so dishonest with your presentation.

Re:Drat (0)

Anonymous Coward | more than 3 years ago | (#34709128)

No SATA SSD pushes even 250MB/sec for continuous reads in the real world, even when connected to a 6Gbps SATA controller.

Lol?

Mine (OCZ Vertex 2) pushes ~280MB/s, non-cached, streaming large reads (~64GB test-file, way bigger than main memory, from cold start.)

You're wrong.

Re:Drat (1)

jon3k (691256) | more than 3 years ago | (#34710466)

Really? Because here's a SATA 6G drive reading over 350MB/s [tweaktown.com] .

Re:Drat (1)

Rudeboy777 (214749) | more than 3 years ago | (#34710696)

You're going to have to provide some evidence of such speeds in a real-world usage scenario.

Hardware review site benchmark porn is less than useless.

Re:Drat (1)

jon3k (691256) | more than 3 years ago | (#34710428)

Today, yes, and Fusion I/O's total throughput isn't that impressive; just look at the OCZ RevoDrive X2 for comparison. The Crucial RealSSD [newegg.com] ($2.18/GB) drives are SATA 6G and currently push over 350MB/s. And we're talking current-generation controllers. You really think we'll see SATA 12G before we completely saturate SATA 6G? All we need to do is take a Crucial RealSSD, add a second controller and internal RAID, and we're looking at nearly 700MB/s of non-sequential peak read throughput. We could overshoot SATA 6G that fast!

The reason drive manufacturers aren't pushing new controllers and more exotic controller designs is that the VAST majority of computers still don't support SATA 6G.

Re:Drat (1)

Twinbee (767046) | more than 3 years ago | (#34707970)

So why are they having so much difficulty making SATA decently fast?

And why do you think SSDs for PCIe (or indeed plain PCI for standard desktops) haven't caught on yet?

Re:Drat (1)

DigiShaman (671371) | more than 3 years ago | (#34708046)

Notwithstanding signaling issues across the cable, SATA can be faster. It's just that standards must be met, for obvious cross-compatibility reasons. This holds true for both the host and device chipsets.

As for SSD over PCIe, it's a niche market. You also have to factor in cost and potential installation issues. But if you must have faster I/O, you could always leapfrog the issue by implementing the controller right on the CPU, in the same manner that RAM is addressed today.

Re:Drat (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34706964)

Until you hit the really high end (where SATA is a bottleneck), there isn't much wrong with SATA. It's more the fact that mini-PCIe slots, sometimes several, are downright standard in notebooks and similar small devices, while these strange hybrid 'electrically SATA, but mini-PCIe connector' things are not. A SATA device isn't going to do anything useful plugged into a conventional mini-PCIe slot, and it will require a mechanical adapter to connect to any reasonably normal SATA connector.

For the moment, unless these things really take off, this particular combination of form factor and bus type screams "OEMs only", whereas both PCIe and SATA are common, standard, and amenable to individual use and upgrade...

Re:Drat (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34706892)

It isn't a vice exclusive to Intel; but that is indeed what you are seeing.

For reasons that I can only imagine had something to do with "somebody pinching pennies until their pecuniary ichor flows", the trend somehow started of using the mini-PCIe connector, without so much as the decency of different keying or anything, to handle what are, electrically, SATA signal lines plus power. There would be nothing wrong with this if these things were actually storage-oriented mini-PCIe cards (like the HDD PCI cards of yore, with a controller chip + flash, capable of acting like a normal PCIe device), or if they were just using some 'sub-mini SATA' connector; but using a straight mini-PCIe connector for something electrically and logically completely different is plain hostile.

I get the sense that users aren't really supposed to touch these things, or the innards of the devices in which they will end up, or such a confusing and potentially damaging connector misuse would likely not have taken place...

Re:Drat (1)

iamhassi (659463) | more than 3 years ago | (#34707006)

RTFA: "Otherwise known as mSATA, the diminutive SSD form factor pipes Serial ATA signaling over a mini PCI Express connector."

So that's a Mini PCIe connector, not SATA

Re:Drat (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34707074)

Worse, it's both. Mechanically identical to a mini PCIe connector, but electrically/logically identical to SATA. It won't work if plugged into a PCIe bus, because it isn't a PCIe device; but it won't plug into virtually any SATA connector, because it has the form factor of a mini PCIe card.

Re:Drat (1)

MartinSchou (1360093) | more than 3 years ago | (#34708760)

Yes ... won't plug in to virtually any SATA connectors, apart from the mini-SATA connectors. Just like mini-USB doesn't connect with virtually any USB connectors (apart from mini-USB), right?

And it's not like mini-SATA is a new connection either.

September 21, 2009 08:00 AM Eastern Time ; SATA-IO to Develop Specification for Mini Interface Connector mSATA Extends Benefits of SATA Interface for Small Form Factor Applications [businesswire.com]

Obviously Intel is trying their best to screw over people by dreaming up some completely non-standard and completely new interface for their tiny SSDs

Re:Drat (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34711622)

As I noted there [slashdot.org], this form factor has nothing to do with Intel particularly, nor did they come up with it.

I just strongly object to the use of an identical connector for two completely different, non-interoperable protocols. Were it some chintzy one-off by a bottom-feeding netbook monger trying to pinch every last nickel off production costs, it would be understandable, if distasteful; but the fact that they've gone and made a standard out of it, without adding so much as a cheap keying change to the mSATA version of the mini PCIe connector, pisses me off.

My displeasure isn't Intel specific; but aimed at the unmodified reuse of a connector intended for a completely different protocol. It's sloppy and user hostile.

Re:Drat (2)

owlstead (636356) | more than 3 years ago | (#34708898)

Well, that should at least allow notebook manufacturers to use the same physical design if they decide to switch to a PCIe interface. For the current generation (and probably the next SATA-3 generation as well), the SATA standard is fast enough. End users won't notice and more importantly, it won't influence the BIOS or operating system at all.

Re:Drat (1)

Anonymous Coward | more than 3 years ago | (#34707078)

This appears to be a "PCI Express Mini Card".

http://en.wikipedia.org/wiki/Pci_express

This form factor has one PCI-e lane, so it's either 2 Gb/sec or 4 Gb/sec.

From the article:
"pipes Serial ATA signaling over a mini PCI Express connector."

Neither is all that shabby for such a tiny card.

The 200 MB/sec read bandwidth is probably limited by this bus.

Really, this is QUITE a nice number if you compare it to a standard notebook rotating-media drive; I would not complain.

As others have posted, this thing is tiny! Put two in RAID-0 and you've got server-level I/O performance in your laptop.

Re:Drat (0)

Anonymous Coward | more than 3 years ago | (#34708950)

But it's not that new, because Asus has been using SSDs of this form factor in the EEEPC for some time, now. Here [mydigitaldiscount.com] is an example and I think OCZ are even making some properly fast drives.

Re:Drat (0)

Anonymous Coward | more than 3 years ago | (#34708884)

It looks like a real PCIe mini card from the picture, the article is probably just confused. This isn't all that new, though. Asus has been using SSDs of this form factor in the EEEPC for some time, now. Here [mydigitaldiscount.com] is an example.

The DDRDrive X1 - I wish it came out! apk (0)

Anonymous Coward | more than 3 years ago | (#34709714)

"I was excited as these appear to be Mini PCIe cards" - by DurendalMac (736637) on Wednesday December 29, @10:25PM (#34706832)

I hear you: the last time I was "excited" about an SSD product, though, was THIS one (it never came to market):

DDRdrive uses PCIe to increase speed of mainstream solid state disks

http://www.tomshardware.com/news/ddrdrive-ssd-announced-q1,2195.html [tomshardware.com]

This was "in the making" back in 2006, but it never came to market... I was BADLY disappointed! I already have & use:

---

1.) A CENATEK RocketDrive: 2GB PC133 SDRAM-based "true SSD" on the PCI bus (133MB/sec)

2.) A GIGABYTE i-RAM: 4GB DDR2 RAM-based "true SSD" on the SATA 1 bus (150MB/sec)

Both drives are used for:

  A. pagefile.sys placement
  B. %temp/tmp% ops via environment variable set
  C. %comspec% location
  D. system logs (like eventlogs, which ARE moveable)
  E. application logs (app logging)
  F. all webbrowser caches
  G. print spooler location

---

(I say 'true SSD', because none of these units use FLASH memory on them, which has slower write cycles typically)

The PCI-e based DDRDrive X1 however?

It was going to use the immensely FASTER PCI-e bus... & faster RAM than my CENATEK RocketDrive (DDR memory): I was "saving my pennies" for it, in fact, but again, it NEVER came to market!

What a shame!

APK

P.S.=> One of these days though, you KNOW someone's going to make a SSD that doesn't use FLASH RAM, & thus has instantly FAST writes too (faster than FLASH RAM does @ least), as well as screaming fast reads + access times, & one that uses the PCI-e bus too, only a matter of time... apk

Re:The DDRDrive X1 - I wish it came out! apk (0)

Anonymous Coward | more than 3 years ago | (#34710074)

There's no such thing as "FLASH RAM". It's either FLASH, or RAM, not both.

raid? (1)

Anonymous Coward | more than 3 years ago | (#34706836)

10 of them in a raid in a laptop?

Re:raid? (1)

Joe The Dragon (967727) | more than 3 years ago | (#34706854)

Try finding a laptop with 10 PCI-e links. Maybe if you add a PCI-e switch to the x16 lanes for the video chip...

Re:raid? (2)

Atriqus (826899) | more than 3 years ago | (#34706910)

I'm pretty sure that's a SATA interface, but as it stands, your statement is valid for either.

Performance vs size (4, Interesting)

BradleyUffner (103496) | more than 3 years ago | (#34706856)

Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

Re:Performance vs size (0)

Jah-Wren Ryel (80510) | more than 3 years ago | (#34706896)

Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

Because the bigger it is the more smoke it can hold and we all know that letting the smoke out totally kills performance.

Re:Performance vs size (2)

dsginter (104154) | more than 3 years ago | (#34706936)

What does the size have to do with anything relating to these performance benchmarks?

Perhaps because of the whole decades of history related to rotating bulk storage? Without increases in spindle speed (and, thus, price), larger storage has always been faster.

Don't you remember the Quantum Bigfoot?

Get off of my lawn!

Re:Performance vs size (4, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34706938)

It isn't wildly impressive, since many of the larger SSDs are either smaller boards padded out with aluminum or plastic to meet 2.5-inch size standards, or 2.5-inch boards taking advantage of relatively lax density requirements to save on board layers and fabrication expenses. But it is the case that most high-performing SSDs do somewhat RAID-esque striping across their multiple flash chips. Thus, unless the design is severely gimped by either incompetence or cost constraints, a larger device means space for more chips, which means more opportunity to spread operations across multiple flash chips, which means higher overall apparent speed.

For a very small device to hit high speeds, the maker is either (1) doing some clever packaging to get a competitive number of dice in that space, or (2) implementing a nice controller that can compensate for not having substantial parallelism to play with, or (3) using comparatively pricey flash that is high on the speed and density curves, rather than just doubling up on whatever is available at mainstream price points and taking advantage of the available board space.

Given Intel's formidable fab expertise and capital resources, it would not surprise me if two and three are at play here...

Re:Performance vs size (0)

Anonymous Coward | more than 3 years ago | (#34710774)

It isn't impressive.

Fixed

Re:Performance vs size (2)

masterwit (1800118) | more than 3 years ago | (#34707142)

Why is it impressive that a smaller solid state drive performs as well as a standard size one?

Is it the size of the ship or the motion of the ocean? (Sorry couldn't help myself.)
Otherwise, good point!

Re:Performance vs size (4, Interesting)

Rockoon (1252108) | more than 3 years ago | (#34707216)

Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

The speed of SSDs is linearly correlated with the number of flash chips they contain, because the flash chips are operated in parallel (think RAID 0, only it's implicit in the design).

Smaller usually means fewer flash chips, so less parallelism.
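That chip-level parallelism can be sketched as a toy model: aggregate bandwidth scales with the number of chips until the host interface clips it. All names and numbers below are illustrative assumptions, not measurements:

```python
def ssd_seq_read_mb_s(chips: int, per_chip_mb_s: float,
                      interface_cap_mb_s: float) -> float:
    """Sequential read: chips striped in parallel, clipped by the host link."""
    return min(chips * per_chip_mb_s, interface_cap_mb_s)

# Ten hypothetical 40 MB/s chips saturate a SATA 2.0 link (~300 MB/s
# effective); a small four-chip drive does not.
print(ssd_seq_read_mb_s(10, 40, 300))  # 300
print(ssd_seq_read_mb_s(4, 40, 300))   # 160
```

This is why a physically tiny drive needs faster chips or a smarter controller to match a full-sized one.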

Re:Performance vs size (1)

owlstead (636356) | more than 3 years ago | (#34708930)

It does seem from the picture that they have used new packaging for the chips. If I remember correctly, there are more chips on my Intel SSD than there are in the picture, so they've probably paired them. That is quite a bit of effort to go through just to introduce a smaller form factor. This may also be a disadvantage for competitors that don't have direct influence over the production facilities. Note that this is pure speculation based on what I see in the picture.

Re:Performance vs size (1)

TheRaven64 (641858) | more than 3 years ago | (#34709252)

Smaller size also means less cooling, which may be a factor in performance of flash.

Re:Performance vs size (1)

sam0737 (648914) | more than 3 years ago | (#34708184)

Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

Ask a woman and they might be able to tell...

Windows (-1, Troll)

antifoidulus (807088) | more than 3 years ago | (#34706900)

I wonder how much that primitive joke of an "operating system" will derail the widespread adoption of these hybrid technologies. With grown-up OSs that aren't stupid enough to map the physical drive layout to the logical file layout, these hybrid drives are a no-brainer: just change the fstab to point /home (/Users for Macheads :P) to the HD and / to the SSD. Done! In Windows, however, you would have to contend with your drive being divided amongst two drive letters and all the registry hell that goes along with it. Not to mention the fact that a large number of applications simply fail if everything isn't on C:\

Again, Windows will probably hold the rest of us back from evolving long enough for them to write another hack to make their shitty "operating system" work. Why, why are people still putting up with such hard-to-use, primitive bullshit? Linux is infinitely easier to use than Windows ever was.
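The fstab split being described looks something like this minimal sketch (device names, filesystem types, and mount options are illustrative placeholders, not from the thread):

```
# /etc/fstab -- root filesystem on the small SSD, /home on the big spinning disk
/dev/sda1  /      ext4  defaults,noatime  0  1
/dev/sdb1  /home  ext4  defaults          0  2
```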

Re:Windows (5, Interesting)

jonbtn (530417) | more than 3 years ago | (#34706960)

Perhaps you don't know that Windows (Vista confirmed; 7 should too) can map a separate drive to a folder instead of a drive letter, if you tell it to. It is rather easy to do. You can even set up multiple paths for a single drive if you want.

Re:Windows (3, Informative)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34707048)

It goes back at least as far as XP, probably 2000 if you don't need the Fisher-Price skin...

Now, just to get back to the bigotry and one-upsmanship, any setup that forces the user to think about how best to allocate filesystem stuff between block devices, or forces them to commit to one inflexible configuration, is arguably underutilizing the capabilities of this sort of technology.

Machines are, unless the human really wants to intervene, supposed to handle the grunt work (not to mention that keeping accurate track of file accesses, and the speed and latency of multiple devices, is really beyond the capabilities of a human, at least in realtime).

What you really want is an FS arrangement that can seamlessly present you with a single logical volume, silently handling the details of what to commit to flash and what to platter, for optimal performance and responsiveness without the cost of going all Flash.

Re:Windows (1)

afidel (530433) | more than 3 years ago | (#34707132)

Yep, like ZFS with L2ARC or EMC's FAST Cache; I don't want to have to think about which blocks are hot, I just want the hot blocks to almost always be in cache. This is definitely where the high-end storage market is going (and the really high end has always kind of been there, with largish NVRAM caches).

Re:Windows (2)

Rockoon (1252108) | more than 3 years ago | (#34707186)

This goes back to DOS, for Christ's sake.

Re:Windows (1)

hedwards (940851) | more than 3 years ago | (#34707526)

In the MS world, perhaps. But I know FreeBSD supports it and I don't think they added that after they split from UNIX.

Re:Windows (0)

Anonymous Coward | more than 3 years ago | (#34709318)

Now, just to get back to the bigotry and one-upsmanship, any setup that forces the user to think about how best to allocate filesystem stuff between block devices, or forces them to commit to one inflexible configuration, is arguably underutilizing the capabilities of this sort of technology.

ZFS pools to the rescue! :)

Re:Windows (1)

pradeepsekar (793666) | more than 3 years ago | (#34710170)

The "join" command in DOS could do this too... way back in the 90's...

Re:Windows (1)

afidel (530433) | more than 3 years ago | (#34707090)

Directory linking goes back to Windows 2000, but mapping C:\Users to it is a bit more difficult, as the currently logged-in user's profile is always in use, thus locking the folder. I guess you might be able to do it remotely, though, if none of the system processes have it open. Alternatively, on a single-user workstation you could log in as admin and just link that user's folder to the drive. Personally, I just put temp, the pagefile, and the ReadyBoost cache on my SSD, as my general files are not the thing that needs the speedup.

Re:Windows (3, Insightful)

nabsltd (1313397) | more than 3 years ago | (#34707452)

Directory linking goes back to Windows 2000, but mapping C:\Users to it is a bit more difficult, as the currently logged-in user's profile is always in use, thus locking the folder.

There are quite a few ways to deal with this issue:

  • You can schedule the mapping to take effect during a reboot (after copying the files)
  • You can boot off another disk, copy the files and create the mapping. If you do this, you have to make sure to map to the drive letter that will be used when you boot from the first drive.
  • Log in as the first created user, enable log in as "Administrator", log in as "Administrator", delete the first user, then set "D:\Users" as the profile directory. Every user created after that point will have their profile in the new directory, while "Administrator" will still be on C:, which is very similar to Unix, where root's home dir is on /, not /home.

There are also tools from Microsoft designed to automate installs that will allow the mapping to be set at install time.

Re:Windows (2)

hedwards (940851) | more than 3 years ago | (#34707550)

You don't really need tools for it. MS allows you to use the WINNT.SIF file for that purpose. It's also a convenient way to do all sorts of other adjustments that wouldn't work properly when done post install. It goes back to at least Windows 2000.

Re:Windows (1)

Z34107 (925136) | more than 3 years ago | (#34707566)

You should be able to do what you're describing with group policy [tech-faq.com] . It's designed more for roaming profiles, but it should work for moving c:\users off of an SSD.

Re:Windows (1)

ascendant (1116807) | more than 3 years ago | (#34707410)

This has been a feature since Microsoft introduced NTFS, which was long before Vista.

Re:Windows (1)

Gadget_Guy (627405) | more than 3 years ago | (#34707438)

Correct. They are called Volume Mount Points [wikipedia.org], and they were introduced in Windows 2000 (ten years ago). You can mount non-NTFS drives as a folder on an NTFS drive. It even works on USB drives and CD/DVD drives (so you could have /dev/cdrom).

I have a feeling that it may have been possible to do with the filesystem in NT 6, but there was no user interface for it.

Re:Windows (2)

fast turtle (1118037) | more than 3 years ago | (#34707648)

The only thing is, it doesn't work that way in Windows, simply because the damn / folder is part of the /user data folder. Because of that, you have to map each user's /home folder on an individual basis, and mapping more than a couple of folders/drives will slow boot/shutdown times considerably. Another issue is that, unlike *nix, MS didn't see fit to isolate the damn /root ("/admin") folder from the /home ("/user data") folder, meaning you simply can't relocate /home ("/user data") to another drive.

Another issue is that the folder/drive mapping is on a user-by-user basis, and the reason it slows things down is that Windows wants to cache the data from those mapped locations in the offline cache location. So it copies all of that data back onto the C: drive, because the idiots at MS couldn't take a page from the *nix folks and use a design that's truly engineered for multiple users. I've got Vista (32/64) and Win7 (32/64), and neither Pro nor Ultimate makes any of that easy.

Oh, I'll admit there may be some third-party tools that could do it, but they'll probably cost more than I can afford, or than the local mom-and-pop shop is willing to spend on IT infrastructure.

Re:Windows (0)

Anonymous Coward | more than 3 years ago | (#34708888)

Perhaps you don't know that Windows (Vista confirmed, 7 should too) can map a separate drive to a folder instead of a drive letter, if you tell it to. It is rather easy to do. You can even set up multiple paths for a single drive if you want.

I don't remember if third-party software was involved, but you could do this in Windows 95 and even DOS (or was it DR-DOS, I'm not sure). Unfortunately the drive would still occupy a drive letter as well as the folder, so it wouldn't fix drive-letter hell. Being able to do something does not always mean that it is easy enough to bother with. "mount" is extremely easy to use and a very versatile everyday tool for me, and I'm just a humble home computer user. I haven't used Windows recently (since W98), but I would guess it isn't as simple nor as useful as it is in unix-like operating systems, even in more recent versions of Windows.

Re:Windows (1)

klui (457783) | more than 3 years ago | (#34709258)

There are limitations to mounting drives in a separate directory under XP and Server 2003; not sure if they also affect Vista/7/Server 2008. Some software insists that the destination directory correlates only to the boot drive and not to the physical disk, forcing a duplicate mount as another drive letter so the software can see that I have enough space. Overall, it's better than having many drive letters.

Re:Windows (1)

MichaelSmith (789609) | more than 3 years ago | (#34706994)

It's funny, because years ago MSFT briefed us DEC guys on their brand-new WNT OS, which had so much DEC technology in it. And I did ask why the disk device names had to follow Windows 95 (and DOS). I didn't get a good answer, but I suppose the reason was backwards compatibility.

Re:Windows (1)

Kjella (173770) | more than 3 years ago | (#34707288)

With grown up OSs that aren't stupid enough to map the physical drive layout to the logical file layout, these hybrid drives are a no brainer, just change the fstab to point /home(/Users for macheads :P) to the hd and / to the ssd. Done! However in Windows you now would have to contend with your drive being divided amongst 2 drive letters and all the registry hell that goes along with it.

Except that your / is full of small files, and your /home/[user]/Documents is also full of small files that'd be much better off on an SSD, while all the help files on / that I hardly ever use and the media files in your home folder should go on the HDD.

P.S. While the TARDIS tricks you can pull off on "grown up" OSs can be useful, they're hell to make sense of and make very simple questions have very complicated answers. Like, for example: do I have the space to copy in these 30 GB of files? Well, that depends; you only have 10 GB free on /, but it's bigger on the inside, and there may even be more disks mounted somewhere under /home again. So you can copy a bunch of files into a subdirectory and still have just as much free space, as if you were a magician pouring water into a handkerchief. Windows is simple: you have your drives, and the more you put on them the fuller they get, which is very straightforward to understand.

P.P.S. Your setup describes how most Windows machines are set up in a corporate setting, the apps on C:\ and your profile and/or my documents redirected to the network. Kinda silly to pretend that's not possible.

Re:Windows (1)

SuperQ (431) | more than 3 years ago | (#34707808)

Figuring out free space inside a *NIX system isn't that hard. Just because you lack algorithmic imagination doesn't mean it's difficult.

Re:Windows (1)

StayFrosty (1521445) | more than 3 years ago | (#34711412)

Like for example, do I have the space to copy in these 30 GB of files? Well that depends, you only have 10 GB free on / but it's bigger on the inside and there may even be more disks being mounted somewhere under /home again.

The df command will tell you how much space is available on each block device and lists the mount point for the device. If you pass it the "-h" argument, it conveniently gives you the sizes in the more human-readable MB, GB, etc. abbreviations instead of listing the number of 1K blocks.
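The question posed upthread ("do I have the space to copy in these 30 GB?") really is a one-liner; nothing below is hypothetical, since df is standard on every unix-like system:

```shell
# Free space per mounted filesystem, in human-readable units
df -h

# Or scope the report to whichever filesystem actually holds the target path
df -h /home
```

The second form answers the space question directly, no matter how many extra mounts are stacked under / or /home.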

Re:Windows (4, Insightful)

GNUALMAFUERTE (697061) | more than 3 years ago | (#34707306)

Sorry to repost this, but I accidentally posted it as AC, and nobody is going to see it at -1.

I can't believe it either ... but there is a whole industry dedicated to dealing with windows. But it's the way our world works, sadly.

We create artificial scarcity, force people to use an inferior and limited technology, that has ridiculous drawbacks, and requires a tremendous workforce around it just to keep it functional. And we keep people using it even when there are cheaper, infinitely better, more reliable and future-proof technologies. The reason is simple: Through artificial scarcity, we keep the money flowing in a certain direction, we keep control in the same hands, and we create hugely profitable but completely pointless industries.

Think about it, we could be running 100% on clean, future-proof, secure and cheap nuclear energy. Instead, we rely on oil. The infrastructure that oil demands is huge, the drawbacks are incredible, we are polluting the environment, drilling the oceans to get some more black juice out of the earth at a huge risk.

We could also have moved all of our communications to ip-based networks, cutting down costs, and removing the need for so many different networks. We could have a single infrastructure that would provide us with high-bandwidth, low-latency internet everywhere, and put everything from phone calls to TV through that network. Instead, we are running different networks for each purpose, and within each purpose different networks for each provider. If we re-purposed all cellphone towers from all providers to give us just internet access, we could have 100% coverage everywhere in the world. Instead, we have huge overlapping (areas serviced by several providers), and huge areas with no coverage at all.

We could also be using just Free Software. It's open, transparent, reliable, cheap, and ethical. Instead, most people use Windows. That means triplicating new hardware purchases, cutting 70% off hardware's lifespan, spending incredible resources on pointless activities like antivirus production/sale/deployment, and maintaining an IT structure several times bigger than required, not to mention all the lost time and profit due to preventable downtime.

But it's the way the economy works. It's the way the usual people keep getting richer, while keeping the majority of the world in line, quiet and productive.

It's absolutely sad, but it's not just something that happens only in software, and it's certainly no accident.

Re:Windows (0)

Anonymous Coward | more than 3 years ago | (#34707392)

Yeah, and everyone who disagrees with you is stupid. It's all so beautiful in your head!

You've gotta be at least partially retarded to think it's some sort of conspiracy though.

Re:Windows (1)

RicktheBrick (588466) | more than 3 years ago | (#34712128)

I have a computer that will boot just fine but will not communicate with either the mouse or keyboard. It does not matter if they are PS/2 or USB. I tried a USB PCI board and even tried an external USB device. Once I boot, I see the computer on my network, so it must be some sort of motherboard problem. Why shouldn't I be able to purchase a bare-bones computer, transfer the hard drive, optical disk and memory, and just turn it on and be back to where I was before the problem? I cannot do that, since it was a Vista computer and the hard drive will not boot in a new computer. So I have to purchase another computer with a new copy of Vista or 7, and then remove the hard drive from the old computer and set it up as a secondary drive to the new computer's hard drive so I have my old data. But I still have to reinstall all of my programs. That is a lot of work so Microsoft can sell two copies of their operating system. I also have a Lexmark printer that still works fine, but Lexmark will not update the driver for Windows 7 or Vista, so either I throw away the printer or maintain a Windows XP system just to use the printer. There are no drivers for Ubuntu either. So I see your point and agree with you.

Re:Windows (1)

Archangel Michael (180766) | more than 3 years ago | (#34707360)

Actually, what we really need is an OS that maps all memory into one contiguous map, from fastest to slowest, and puts rarely used files on the slowest media and frequently used ones towards the fastest. But it should also include knowledge of storage that is temporary and fast versus slow (tape) and/or even unavailable (network shares), seamlessly, as one huge pile.

Windows supported TRIM before anyone else (3, Interesting)

SuperBanana (662181) | more than 3 years ago | (#34707444)

I wonder how much that primitive joke of an "operating system" will derail the widespread adoption of these hybrid technologies.

The primitive joke of an operating system that introduced USB-flash-based application acceleration (no similar feature exists for any free operating system), and that supported SSD TRIM commands before any other operating system? (OS X still doesn't, and there are no announced plans to; Linux 2.6.32+, I believe, does, but only at the kernel level, and support amongst the various filesystems seems inconsistent or absent; it's hard to tell. hdparm supports manually running TRIM over areas reported by the filesystem as free, but that's hardly equivalent to Windows, where it "just works".)

Re:Windows supported TRIM before anyone else (1)

dbIII (701233) | more than 3 years ago | (#34708110)

The primitive joke of an operating system that introduced USB-flash based application acceleration (no such similar feature for any free operating system

You can tell just about every operating system in use today where to put a swap file. Your "new feature" is as relevant as "it also comes in pink". It was also a horrible kludge to get around memory usage issues and disk space limitations, since most USB flash disks at the time (and many now) are horribly slow. I've turned "stupidfetch" off on some Vista systems to improve performance.
TRIM is in a lot of drivers on a lot of operating systems, depending upon vendor support for each product; so it's not just on MS platforms, it's not from MS in the first place, and some implementations were available on other platforms at release.
Your examples are just as misplaced as those of the idiot who called it a "primitive joke".

Re:Windows supported TRIM before anyone else (1)

drsmithy (35869) | more than 3 years ago | (#34710184)

You can tell just about every operating system in use today where to put a swap file.

Not the same thing. At all.

Your "new feature" is as relevent as "it also comes in pink". It was also a horrible kludge to get around memory usage issues and disk space limitations since most USB flash disks at the time (and many now) are horribly slow.

No, it was adding a caching layer to improve performance. Exactly the same principle used by NetApp, EMC, Sun, et al. *Exactly* how flash/SSD disk _should_ be being used, not with horrendous manual hacks requiring the user to understand and manually relocate data depending on what they think its performance requirements might be.

Re:Windows supported TRIM before anyone else (1)

dbIII (701233) | more than 3 years ago | (#34710514)

Yes. It. Is. Exactly. The. Same. Thing.
The only difference is that the default behaviour was changed (I suppose avoiding "horrendous manual hacks" such as, OMFG, ticking a box). There were some other caching changes that made a mess of Vista for a while, but they should have been patched out by now. We can argue about this all day without even taking a step off the Microsoft platforms; I'm pretty sure that even includes Windows CE and flash devices.

Re:Windows supported TRIM before anyone else (0)

Anonymous Coward | more than 3 years ago | (#34708158)

"It just works" is an Apple slogan, not Windows.

http://en.wikipedia.org/wiki/List_of_Apple_Inc._slogans#Mac_OS_X

Re:Windows supported TRIM before anyone else (0)

Anonymous Coward | more than 3 years ago | (#34708484)

The primitive joke of an operating system that introduced USB-flash based application acceleration

Let me get this straight: you are trying to claim that the "primitive joke of an operating system" is in fact cutting edge technology because it implemented in 2006 a way to cache files in a particular mount point?

Re:Windows supported TRIM before anyone else (3, Insightful)

bryonak (836632) | more than 3 years ago | (#34708812)

I know you're replying to a rather trollish parent, but still I'd like to remind you not to let facts get in the way of your biased presentation.

Presumably you refer to ReadyBoost (which was introduced in Windows only around 2006): isn't that about the fastest way to trash your USB drive? Further assuming you are inclined to do so on a UNIX-like system, say Ubuntu:

- unmount the USB volume
- sudo mkswap /dev/sdX1
- sudo swapon -p 32767 /dev/sdX1
- increase swappiness to be on Windows levels so your disk gets aggressively cached (may need to tune the VFS caching too)

This has been available for decades, and it shows that ReadyBoost is mainly the marketing department "boosting" a simple technique.
Why has no one bothered to automate the above steps (as ReadyBoost does)? First, there is usually no need to, at least on Linux-based systems (compare memory requirements); secondly, having a pen drive sticking out of your laptop all the time just to make it a bit faster is both cumbersome and wasteful; thirdly, there are much better techniques for RAM-constrained machines.

As for TRIM... well, the 2.6.32 kernel was released in 2009, there have been two major Ubuntu releases with that kernel or a newer one, and 'discard' (TRIM) support takes 5-10 minutes of additional setup (I installed Ubuntu on an SSD MBP a few weeks ago). Granted, it doesn't "just work out of the box" (point for Windows!), but it works well enough.
Concerning file system support: the current standard ext4 and the future standard btrfs are discard-capable, as are a number of the more obscure ones.
Others don't support it, but we have the same situation on Windows... only 50% of the commonly used file systems know TRIM (NTFS does, FAT32 doesn't). See, just a matter of presentation ;)
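The "5-10 minutes of additional setup" above is, in essence, one mount option. This is a sketch only: the UUID is a placeholder (find yours with blkid), and some people prefer running fstrim periodically instead of mounting with discard:

```shell
# /etc/fstab — ext4 root on an SSD with online TRIM (discard) enabled
# The UUID below is a placeholder
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime,discard  0  1
```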

Re:Windows (0)

Anonymous Coward | more than 3 years ago | (#34707500)

There's a lot of things I don't like about Windows, but have you actually tried Windows in the past 10+ years or are you really that ignorant? Almost all applications work just fine running somewhere other than C:\Program Files. I've been doing it for 12+ years and have never hit an application that had issues running off another drive.

I've had plenty on many machines (1)

dbIII (701233) | more than 3 years ago | (#34708144)

There always seems to be one showstopper that actually needs to run from C:, which in my case has often been the very application the computer was purchased to run in the first place. It's meant things like re-installing Windows 7 with only a single partition so that software written in 2010 ends up on the system drive. It should not happen, but it frequently does; it is very annoying, and it will be a few years before developers grow out of hard-coding things to the C: drive and requiring programs to run as Administrator.
MS Office, OpenOffice etc. don't care, but there are still a lot of badly written applications out there that depend on the single-user, non-networked mindset that was out of date before MS-DOS existed.

Re:Windows (0)

Anonymous Coward | more than 3 years ago | (#34707806)

I wonder how much that primitive joke of an "operating system" will derail the widespread adoption of these hybrid technologies. With grown up OSs that aren't stupid enough to map the physical drive layout to the logical file layout, these hybrid drives are a no brainer, just change the fstab to point /home(/Users for macheads :P) to the hd and / to the ssd. Done! However in Windows you now would have to contend with your drive being divided amongst 2 drive letters and all the registry hell that goes along with it. Not to mention the fact that a large # of applications simply fail if everything isn't on C:\

Again, windows will probably hold up the rest of us from evolving long enough so that they can write another hack to make their shitty "operating system" work. Why, why are people still putting up with such hard to use primitive bullshit? Linux is infinitely easier to use than Windows ever was

If you're going to make fun of Windows, at least have a logical argument and get your facts straight.

You can do a volume set in windows to combine space on multiple drives into one logical partition. You can also do mount points where separate partitions are folders under the c: drive the same as linux.

Your comment regarding physical vs. logical separation makes no sense, because on both systems you are separating disk space into containers; it is just the trivial representation (folder vs. drive letter, or folder vs. folder) that can be the same or different... there is still no unified access without a volume/span set on either platform.

Re:Windows (1)

hairyfish (1653411) | more than 3 years ago | (#34708178)

Wow! You can't make that stuff up...

Performance (0)

Anonymous Coward | more than 3 years ago | (#34706948)

A couple of these suckers in a RAID 0 would certainly be pretty speedy.

Re:Performance (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34707126)

If mounted upright, these guys would be just a little too tall for a 1U, but a 2U could fit several hundred... With the economies of scale enjoyed by something designed to be shoved into consumer laptops, a shelf or two of these little puppies could, with the right controller, make fibre channel stuff that costs a factor of ten or two as much wet itself...

Re:Performance (1)

LoudMusic (199347) | more than 3 years ago | (#34707832)

You could mount them flat in a 1U, configured five tall and fifteen wide, for a grand total of 75 hot-swappable units. Using the current 80 GB unit you'd have 6000 GB. With a simple RAID 5 array with, let's say, two hot spares for good measure, you'd have 5760 super-redundant usable GB at a theoretical sustained write speed of 5760 MB/s. If one dies, it's replaced by a hot spare in 17 minutes (80 GB at 80 MB/s), and you replace the failed one with a new 'chip' at your earliest convenience.
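For anyone who wants to double-check the arithmetic in the post above, it holds up (shell arithmetic, using the post's own 75-bay / 80 GB / 80 MB/s figures):

```shell
# RAID 5 across 75 bays with 2 hot spares: usable capacity and resync time
drives=75; spares=2; size_gb=80; speed_mbs=80

# One drive's worth of capacity is lost to parity; two more sit idle as spares
data_gb=$(( (drives - spares - 1) * size_gb ))
echo "usable capacity: ${data_gb} GB"        # 5760

# Resyncing one failed 80 GB drive at 80 MB/s
rebuild_min=$(( size_gb * 1024 / speed_mbs / 60 ))
echo "rebuild time: ~${rebuild_min} min"     # ~17
```

The 5760 MB/s figure is the same product (72 data spindles x 80 MB/s) and is of course a theoretical ceiling; a real controller would bottleneck well before that.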

I've never been in charge of something that needed that kind of speed or availability, but it sure sounds cool!

Re:Performance (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34709416)

Even better, with the availability of sliding rails and matching cable-management arms on the back, you could get multiple rows of what you describe in a single 1U, if you were willing to make the user slide the device out and pop the top to replace parts that aren't in the front row (certainly not as convenient; there is a reason that HDD bays are generally mounted on the front, but not unprecedented in high-density storage applications...)

Even if you were shooting for '2 post' friendly depth you could likely get two or three rows in, while a full 4 post requiring depth might be good for 8ish. It wouldn't be cheap; but it would be dense, fast, quiet, relatively cool-running, and cheaper than some alternatives...

Re:Performance (1)

Junior J. Junior III (192702) | more than 3 years ago | (#34709596)

What really makes me excited about that is the smaller chunks of RAIDed disk. Recovery by hot spare has always made me nervous due to the length of time it takes to repair a 1-2 TB hole in your redundancy. As long as the failure rate of the individual drives isn't such that you're incurring multiple failures or encountering them very frequently, this faster return-to-full-health time would be a real boon.

Re:Performance (2)

WuphonsReach (684551) | more than 3 years ago | (#34710008)

What really makes me excited about that is the smaller chunks of RAIDed disk. Recovery by hot spare has always made me nervous due to the length of time it takes to repair a 1-2 TB hole in your redundancy.

Depends on the RAID type.

RAID 5 (and 6?) rebuild/recovery windows tend to scale linearly with the number of drives in the array, so a very large array with very large drives can take hours or days to rebuild.

RAID 1 and RAID 10 rebuild/recovery windows scale with the size of an individual spindle, not the number of drives. So a drive failure in a RAID-10 array generally results in a very short recovery window, determined by the size/speed of a single drive within the array. Even on bigger 10-20 disk RAID-10 arrays, replacing a failed drive is only an hour or two of resync.

(I much prefer the predictability of RAID-10 rebuild times.)

Re:Performance (1)

oodaloop (1229816) | more than 3 years ago | (#34709020)

Yeah, and imagine a beowulf cluster of those!

OK, I'm sorry. I just hadn't heard that one in a while. I'll be on my way now.

Re: Performance vs. size (1)

Calgary Computer (1967696) | more than 3 years ago | (#34706962)

I suppose a small size that performs well is impressive because smaller and lighter are prized attributes in laptops, along with performance. Although performance is still the main quality we all want.

Wow that almost as small (0)

Anonymous Coward | more than 3 years ago | (#34707166)

as my pecker.

the mini-SATA SSDs are about a tenth the size... (1)

fotoguzzi (230256) | more than 3 years ago | (#34707834)

By size, do they mean volume?

Hard drive caddy (1)

owlstead (636356) | more than 3 years ago | (#34708998)

Just a quick note for you guys who like to fiddle with miniature screwdrivers and such: you can always replace your optical drive with an SSD or HDD. It seems that newmodeus has had this market cornered for a while, restricting you to a higher-priced product, but it is certainly a viable option. I've left my HDD where it is because of possible heat issues (although there is quite a lot of spare room in the caddy) and possible problems with warranty. The only drawback is that you have to put your movies on the HDD pre-flight, or you will have to take an external optical drive with you.

The real benefit is an SSD and HDD in a laptop (1)

George_Ou (849225) | more than 3 years ago | (#34709462)

I hate having to choose between an SSD and an HDD for a laptop and really want one of each. I want a nice big 500+ GB HDD, but I always want a 40+ GB SSD for a boot/OS/applications/page partition, i.e. the "C" drive. Then you really get the best of both worlds, because you get the insanely fast IO speeds of an SSD but still have somewhere to put large data files.
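On the unix side, the split described here is a few fstab lines. The device names below are hypothetical and this is only a sketch of the layout, not a recommendation; on Windows the equivalent would be redirecting the profile/data folders to the HDD:

```shell
# Hypothetical dual-drive notebook: mini-SATA SSD (sda) for OS, apps and swap,
# mechanical HDD (sdb) for bulk data
/dev/sda1  /      ext4  defaults,noatime,discard  0 1
/dev/sda2  none   swap  sw                        0 0
/dev/sdb1  /home  ext4  defaults                  0 2
```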