Slashdot: News for Nerds



IBM Speeds Storage With Flash: 10B Files In 43 Min

timothy posted about 3 years ago | from the you-sure-have-a-lot-of-mp3s dept.

Data Storage 76

CWmike writes "With an eye toward helping tomorrow's data-deluged organizations, IBM researchers have created a super-fast storage system capable of scanning in 10 billion files in 43 minutes. This system handily bested their previous system, demonstrated at Supercomputing 2007, which scanned 1 billion files in three hours. Key to the increased performance was the use of speedy flash memory to store the metadata that the storage system uses to locate requested information. Traditionally, metadata repositories reside on disk, access to which slows operations. (See IBM's whitepaper.)"



Blaargh!! (-1)

Anonymous Coward | about 3 years ago | (#36855186)

Ha ha ha ha!

File Sizes? (0)

Anonymous Coward | about 3 years ago | (#36855192)

But how big was each file? 1kb? 1mb? 1gb?

Re:File Sizes? (4, Informative)

GuldKalle (1065310) | about 3 years ago | (#36855218)

As far as I can see, the files themselves were not read, only the metadata (who has access, modification time, position on the spinning platter, etc.).

Re:File Sizes? (1)

TheRaven64 (641858) | about 3 years ago | (#36855548)

It says in the heading. They copied 10 byte files in 43 minutes. Not very impressive, even the old Mac troll copied files faster than that...

10B (0)

Anonymous Coward | about 3 years ago | (#36855196)

Did anyone else read that as "10 byte files?" that seemed mighty slow lol

Re:10B (-1)

Anonymous Coward | about 3 years ago | (#36855244)

> Did anyone else read that as "10 byte files?" that seemed mighty slow lol


Re:10B (0)

Anonymous Coward | about 3 years ago | (#36855300)

I counter with a Yes.

Re:10B (1)

isorox (205688) | about 3 years ago | (#36855726)

Did anyone else read that as "10 byte files?" that seemed mighty slow lol

Nope, I read 267

Re:10B (0)

Anonymous Coward | about 3 years ago | (#36857584)

Still faster than Flash on my OS X machine.

Sounds Original (0)

Anonymous Coward | about 3 years ago | (#36855234)

Feb 2010: Isilon senior product manager Gautam Mehandru said Isilon has added solid-state drives to its nodes for a specific purpose: metadata storage and management. "The OneFS file system will automatically identify metadata and place it on the SSD capacity of the cluster," he said. "Regular data will remain on hard disk drives; this will allow faster namespace operations for design and simulation workflows, to accelerate replication and performance in server virtualization environments."

43 min for 10 bytes? (5, Insightful)

Tei (520358) | about 3 years ago | (#36855256)

That's very slow.

Also, please, find writers with better technical expertise for these articles.

Re:43 min for 10 bytes? (4, Funny)

impaledsunset (1337701) | about 3 years ago | (#36855274)

Come on! Adobe Flash has always been slow, that's a massive improvement!

Re:43 min for 10 bytes? (0)

Anonymous Coward | about 3 years ago | (#36855278)

Make 10 Billion files on your ext3 filesystem and see how long an ls takes you
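A scaled-down version of that experiment is easy to try. This is a sketch only: the file count is cut from 10 billion to 10,000, and the /tmp paths are arbitrary.

```shell
# Create 50 directories of 200 empty files each, then time a raw listing.
mkdir -p /tmp/lstest
for d in $(seq 0 49); do
  mkdir -p "/tmp/lstest/dir$d"
  # touch accepts many names at once, so create a whole directory per call
  (cd "/tmp/lstest/dir$d" && touch $(seq 1 200 | sed 's/^/f/'))
done
time ls -fR /tmp/lstest > /dev/null   # -f disables sorting: closer to a pure metadata scan
```

Even at this tiny scale, re-running the `ls` a second time is dramatically faster because the dentry/inode caches are warm, which is exactly the effect IBM is after by keeping metadata on flash.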

Re:43 min for 10 bytes? (2)

maxwell demon (590494) | about 3 years ago | (#36855370)

Make 10 Billion files on your ext3 filesystem and see how long an ls takes you

Ext3 can store 10 billion files in 10 bytes? Must be the new Whoosh feature, which avoids reading metadata like the comment title.

Re:43 min for 10 bytes? (1)

kno3 (1327725) | about 3 years ago | (#36855594)


Re:43 min for 10 bytes? (1)

maxwell demon (590494) | about 3 years ago | (#36855636)


Well, for burning I'd prefer ISO9660 with RockRidge extension to ext3. :-)

Re:43 min for 10 bytes? (0)

Anonymous Coward | about 3 years ago | (#36855294)

TFA isn't much better. And it says that they have 6.8TB on just 4 SSDs and that it reaches 40Gbps read throughput. No one really thinks IBM could sell this today with a straight face.

Re:43 min for 10 bytes? (1)

MichaelSmith (789609) | about 3 years ago | (#36855364)

IBM are selling ClearCase with a straight face.

Re:43 min for 10 bytes? (0)

Anonymous Coward | about 3 years ago | (#36855434)

It should be written "10 billion files", not "10B files", since that easily implies files of 10 bytes.
However, I still don't know whether that feat is considered fast or not. How many files do we have on our computers, on average, by the way?

Re:43 min for 10 bytes? (1)

physicsphairy (720718) | about 3 years ago | (#36855480)

43 min for 10 bytes.

I see they've copied the poorly cobbled-together config for my SAMBA server.

43 min for 10 beers! (0)

Anonymous Coward | about 3 years ago | (#36859320)

Isn't it below the required minimum for one to drink at a fraternity party?
IBM seems to lag behind standards.

Define Scanning in... (1)

Vecanti (2384840) | about 3 years ago | (#36855258)

I read the article, but I don't really understand what they mean by "scanning in" 10 billion files in 43 minutes. Is this just copying? Is this "scanning in" in the traditional sense, like from paper? Maybe I missed something reading through it, I guess.

Hard to be impressed otherwise.

Re:Define Scanning in... (0)

Anonymous Coward | about 3 years ago | (#36855270)

Maybe they are scanning them to see if they contain a 1 or a 0. That way they can claim insane numbers like 10B. Whatever a 10B is.

Re:Define Scanning in... (1)

Sulphur (1548251) | about 3 years ago | (#36855392)

Maybe they are scanning them to see if they contain a 1 or a 0. That way they can claim insane numbers like 10B. Whatever a 10B is.


Re:Define Scanning in... (0)

Anonymous Coward | about 3 years ago | (#36855538)

lol, how did it go,
there are 10 types of people that can read binary.
those that can and those that can't

Re:Define Scanning in... (2)

MichaelSmith (789609) | about 3 years ago | (#36855368)

I wonder how google would go indexing the contents of 10 billion files.

Need more information. (0)

Anonymous Coward | about 3 years ago | (#36855262)

How big are these files and what are they scanning them for?

Re:Need more information. (0)

Anonymous Coward | about 3 years ago | (#36855304)

The files are zero length. The files were spread evenly across approximately ten million directories. The scan is roughly similar to what's necessary to load the information for an "ls -lR", or "dir /s" if you prefer.

Re:Need more information. (0)

Anonymous Coward | about 3 years ago | (#36855308)

or "dir /s" if you prefer.


Re:Need more information. (0)

Anonymous Coward | about 3 years ago | (#36855998)

Hey, it's still valid...

$ dir /s
dir: cannot access /s: No such file or directory

If your boxen aren't Windows you don't need it. (0)

symbolset (646467) | about 3 years ago | (#36855264)

It would be nice if there were some text here, but there isn't.

Huh.... (1)

Demena (966987) | about 3 years ago | (#36855296)

Traditional filesystems hold their metadata on disc? Ermmm... exactly what do you think the 'sync' command does? Traditionally, metadata is held in memory and periodically written to disc for storage.

Re:Huh.... (0)

Anonymous Coward | about 3 years ago | (#36855318)

It still takes an awful lot of time to find a file by name starting from the root directory, for some reason.

Re:Huh.... (1)

Demena (966987) | about 3 years ago | (#36855484)

That really depends on the directory layout and directory sizes. Study the i-node structure to understand why.

Re:Huh.... (1)

SuricouRaven (1897204) | about 3 years ago | (#36855342)

Not all of it. Just that which has been recently accessed. Enough for most purposes, as usually only a tiny bit of the stored data is ever needed at once. Doesn't hold up well in some scientific and engineering uses though, and if you need fast response times even on files that haven't been accessed in weeks then it becomes a potential problem.

Re:Huh.... (1)

Demena (966987) | about 3 years ago | (#36855502)

There is a difference between filesystem metadata and file metadata. You mention scientific and engineering uses as being particularly bad, when it is my belief that the system architects are the cause of this. It is common to find bad architects in those fields. Directory structure is important. If you do not understand the particular filesystem architecture, you cannot design for good and fast access. If you want a good, fast access system, it is absolutely necessary to understand things at that level. Most delays are not in accessing the file itself, or the data within it, but in actually finding the file (or the bit you want) in the first place.

Re:Huh.... (1)

smallfries (601545) | about 3 years ago | (#36858408)

Are you confusing a system that stores something in memory, and a system that caches a copy of a small part in memory for fast access?

Re:Huh.... (1)

Demena (966987) | about 3 years ago | (#36859578)

I'm not confusing anything. I know exactly how it works.

Re:Huh.... (1)

smallfries (601545) | about 3 years ago | (#36859794)

It doesn't sound like you do. Sync is used to flush the cache of metadata back out to the disk. The metadata is actually stored on disk.

Re:Huh.... (1)

Demena (966987) | about 3 years ago | (#36860250)

Which is precisely what I said. The filesystem metadata that is _used_ is in memory. It is periodically _saved_ to disk iff there have been changes (i-node 0 for standard unix filesystems).

Re:Huh.... (1)

smallfries (601545) | about 3 years ago | (#36861444)

So now you are shifting in your claims. Yes, when metadata is used it is in memory - the same is true of any data. But it is held (to use your term) on disk, where it is loaded into memory on use, changed and saved back to disk. The primary store of metadata, the one that persists between boots, is held on the disk. A small local cache is changed, as with any data. So going back to your original (erroneous) claim: traditional file-systems *do* hold their metadata on disk, even if they cache a portion of it in memory as an optimisation.

Re:Huh.... (1)

Demena (966987) | about 3 years ago | (#36861754)

No. There is no need to retract anything. I made no erroneous claim. Stop trolling.

Re:Huh.... (1)

anamin (796023) | more than 2 years ago | (#36906808)

You be trollin.

Adobe Flash is the future! (-1)

Anonymous Coward | about 3 years ago | (#36855312)

Come on... it might not be able to play YouTube videos without crashing every 5 videos...

But you have to give the Adobe guys a thumbs up for this one... doing something for 43 minutes without crashing is a new achievement for Adobe Flash...

In other news: Slashdot is now the INCOMPETENT FOX NEWS of tech!

Re:Adobe Flash is the future! (0)

Anonymous Coward | about 3 years ago | (#36856024)

Let me correct that...

doing nothing for 43 minutes without crashing is a new achievement for Adobe Flash...

Demand (1)

wesleyjconnor (1955870) | about 3 years ago | (#36855380)

Is this kind of performance in scanning in high demand?

Re:Demand (0)

Anonymous Coward | about 3 years ago | (#36860264)

Yes. I was involved in the work they did a few years back to scan 100 million files. The use case was specifically scanning through massive archival systems (GPFS/HPSS).

cost/performance (1)

maxwell demon (590494) | about 3 years ago | (#36855386)

They noted that while solid-state storage can cost 10 times as much as traditional disks, they can offer 100 percent performance boost.

So you get 2 times the performance for 10 times the price? I'd say that's still 5 times as expensive. What would be the performance boost with a RAID of 5 disks?

Re:cost/performance (1)

FishTankX (1539069) | about 3 years ago | (#36855468)

I think you misunderstood the point of the statement in that article.

It's referencing using solid state as a cache: even though solid-state memory costs 10x as much, when used for caching duty it can increase the performance of the disk array by 100%. This is in line with the numbers a lot of sites are getting from Intel's new SSD caching tech for hard disks.

You can DIY it in linux. (1)

elsJake (1129889) | about 3 years ago | (#36855520)

Some filesystems allow you to store the journal on a different disk, such as an SSD.
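For instance, ext4 can keep its journal on a dedicated SSD partition. A sketch only: the device names below are hypothetical, and both commands destroy whatever is on those partitions.

```shell
# Turn the SSD partition into a dedicated ext4 journal device,
# then build the data filesystem on the spinning disk pointing at it.
mke2fs -O journal_dev /dev/ssd1           # hypothetical SSD partition -- WIPES IT
mkfs.ext4 -J device=/dev/ssd1 /dev/hdd1   # hypothetical HDD partition -- WIPES IT
```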

Re:You can DIY it in linux. (0)

Anonymous Coward | about 3 years ago | (#36857832)

That's something wholly different. If you want to experience the benefits of SSDs caching filesystem metadata (and data), then play around with ZFS + L2ARC or with swapcache on dragonflybsd.

Numbers (1)

kramulous (977841) | about 3 years ago | (#36855528)

Now, some of my maths might be (a little) off, but ...

I've just spent half the day processing financial files: 133KB average file size, processed (by "process" I mean every byte is looked at, in C++ code) at 4000 files per second. I did this on a single compressed file (tar.gz) that expands to 7857 files and just over 1GB. The compressed file is temporarily stored in /dev/shm. The parallelisation is one thread processing the ram-drive file while another thread copies the next file (1GB uncompressed, 65MB compressed) from a 5400rpm notebook drive (Thinkpad X60) to the ram drive.

Now, this latest in file processing by a giant of the industry has 'achieved' 3.55 million files per second 'processed' (and what 'processed' means is never said, but I'll assume the same as me) on files that are 650 bytes in size (the PDF says the dataset was 6.5TB).

I was processing on a notebook that is about 7 years old architecturally and achieved 544MB processed per second, while the latest from IBM does 2.3GB per second.

Is this a *big* step forward? I should log into our cluster and do a test on memory a little more advanced and see how their numbers stack up.
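The headline rates can be sanity-checked with a line of awk. Pure arithmetic, using the 43-minute, 6.5TB, and 133KB/4000-files-per-second figures quoted above; note the file rate actually works out nearer 3.9M/s than 3.55M/s.

```shell
awk 'BEGIN {
  secs = 43 * 60                                        # IBM run: 43 minutes
  printf "IBM: %.2fM files/sec\n", 1e10 / secs / 1e6
  printf "IBM: %.2f GB/sec\n", 6.5e12 / secs / 1e9      # 6.5TB dataset
  printf "Notebook: %.0f MB/sec\n", 4000 * 133 * 1024 / 1e6
}'
```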

I guess what I'm saying is, there is just no substitute for writing software properly.

Re:Numbers (1)

smallfries (601545) | about 3 years ago | (#36858394)

Your lack of understanding is quite simply astounding. You have completely missed the point of their research, which is to reduce the latency of randomly accessing information in a large dataset. They are not measuring throughput (or bandwidth), although the article does state that they hit 4.9GB/s. If you made your files much, much smaller and then repeated your test, you would find that your performance drops drastically as your program becomes limited by a different I/O bound: instead of bandwidth, the number of separate I/O operations becomes the bottleneck. The reason SSDs have provided such a large increase in performance is not the 5-10x bandwidth increase over spinning disks; the decrease in latency has increased the number of I/O operations by four or five orders of magnitude.

Using a RAM disk you should be able to hit a much higher number of I/O operations than even an SSD, but it won't come close to the 3.9M+/s that IBM have reported. For database transactions, where the amount of information in each record is small but the number of records accessed is high, this measure is a much better indicator of performance than bandwidth alone.

I guess what i'm saying is, there is just no substitute for writing software properly.

Well, if you have a complete lack of understanding of a subject, but you continue on regardless talking absolute bollocks then perhaps the world would seem like that to you.

Re:Numbers (0)

Anonymous Coward | about 3 years ago | (#36858740)

Then the article and the summary should not have used the term "files".

They should have used something like "I/O operations" and then specified the average size of the data used per operation.

This would have clearly presented and conveyed the idea of what this research was attempting to improve.

It is a bad article, because it was easy to miss that point.

And seriously, I appreciate you taking the time to explain that. It should have been in the article, or even the summary.

Re:Numbers (1)

Salamander (33735) | more than 2 years ago | (#36881634)

Doing something for 7857 files and doing it for 10 billion are very different situations. 7857 files, including metadata, can easily be sucked into memory in one big chunk and unpacked/examined from there. That simply doesn't work for datasets larger than memory. At the higher scale, modern filesystems do tend to fall apart, badly, so different approaches are needed. Comparing your paper airplane to an F-22 doesn't make it look like you know anything about writing software properly. Quite the opposite.

Re:Numbers (0)

Anonymous Coward | more than 2 years ago | (#36881736)

I agree ... there is a major difference of scale that makes it difficult to compare. I freely acknowledge that. Six (6) orders of magnitude is a difficult beast to predict.

However, I processed 1 billion files (~100,000 compressed files) that scaled linearly from my quoted test size (the average file size was the same as I quoted), scaling both in number and across processors (512 available to me at the time). Mine was designed purely because the original file sizes were too big for memory (not zero length; I had to carve them up appropriately)... currently the biggest processing node I have is 96GB (well, I have a 192GB one, but its processors are too old). My major bottleneck was a single Panasas unit that was reading the filestore at about 800MB/s (the best I have available to me).

I have just found that the container files (tar) really help to reduce the number of file seeks (clearly). I stuff as many as I can into there based on the compression method, whether the compression is parallel so that IO still remains the bottleneck and that the file size is an even multiple of the block size.

Still playing and still have more to do. Don't get me wrong, I really enjoyed reading the white paper it is just that I would really like to see them push everything to the limit, not just a single item (inodes). A single problem is easier to design for than many problems.

Scan? (0)

Anonymous Coward | about 3 years ago | (#36855550)

What does it mean by "scan"?

Try it for yourself (1)

BlackPignouf (1017012) | about 3 years ago | (#36855562)

time sudo ls -lAR / | grep -E '^[ld\-]+' | wc -l

It should give you the number of files, directories, and symlinks on your filesystem, and the time it took to "scan" them all.

Re:Try it for yourself (0)

Anonymous Coward | about 3 years ago | (#36855642)

I am one of the rare vaguely tech competent people on slashdot, so I'm using Windows here.

I decided to just do one of them right click, properties on a folder that I know is filled with quite a few files and timed it.

~90 seconds for 93,000 files total of 90 gigs.

That's...about 1030 files per second and would take 9,708,737.9 seconds = 161,812.3 minutes = 2696.9 hours =112.4 days to scan 10 billion files.
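That extrapolation checks out; here it is as a one-liner, using the same rounded 1030 files/sec figure:

```shell
awk 'BEGIN {
  secs = 1e10 / 1030    # 10 billion files at ~1030 files/sec
  printf "%.1f seconds = %.1f minutes = %.1f hours = %.1f days\n",
         secs, secs / 60, secs / 3600, secs / 86400
}'
```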

If this is relevant to the article in that my right click properties is analogous to their scanning of files, then I guess at very least it shows how huge a gap there is between personal computers and supercomputers.

Now, if this is NOT relevant to the article and what I did was cute in the same way a child pretending he's an adult is cute, then I await the pointing and laughing.

Re:Try it for yourself (1)

pakar (813627) | about 3 years ago | (#36856072)

Well, you probably need to make sure you don't have any of the files or metadata in the buffer cache before starting. Also limit the search to the actual filesystem you want to test:

# echo 3 >/proc/sys/vm/drop_caches
# time find / -xdev -printf "%p %y %s %n %i %m %G %U %c %b %a\\n" |wc -l

real 0m36.738s
user 0m6.031s
sys 0m12.737s

This is on a simple 40GB Intel SSD with an ext4 fs.

Re:Try it for yourself (1)

reset_button (903303) | about 3 years ago | (#36861884)

FYI: "drop_caches" only drops clean pages, so you need to run "sync" first if you want to properly flush your cache.
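Combining the two corrections, a cold-cache scan looks something like this (a sketch: it needs root, and /mnt/test is a hypothetical stand-in for whatever filesystem you want to measure):

```shell
sync                                  # write dirty pages so drop_caches can evict everything
echo 3 > /proc/sys/vm/drop_caches     # drop pagecache plus dentry/inode caches
time find /mnt/test -xdev -type f | wc -l
```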

10B Files = 16 dec Files (1)

allo (1728082) | about 3 years ago | (#36855612)

not that impressive

Re:10B Files = 16 dec Files (0)

Anonymous Coward | about 3 years ago | (#36856354)

So to you 10 in binary is 16 in decimal?

Re:10B Files = 16 dec Files (0)

Anonymous Coward | about 3 years ago | (#36867034)

Now that *might* have been funny, assuming it was a binary-to-decimal conversion joke. But, before making such a joke on a site frequented by computer nerds and alpha geeks, you may want to get your bases in order. 10B would be *binary*, which would convert to 2 dec, not 16 dec. 10H would be 16 dec. Best you don your asbestos underwear, this is going to get rough :-)

Re:10B Files = 16 dec Files (1)

allo (1728082) | more than 2 years ago | (#36951724)

Yeah, my error. It's embarrassing. I blame the lack of coffee...

Challenge (0)

Anonymous Coward | about 3 years ago | (#36855624)

I'm assuming that the files are 4KB in size.

This is just 1 percent of the capacity of the human brain. I challenge IBM to make a machine with 100 times the performance.

Alternative summary (2)

tulcod (1056476) | about 3 years ago | (#36855762)

IBM throws a lot of hardware at a problem; problem gets solved.

SUN were doing this in 1990 (1)

petes_PoV (912422) | about 3 years ago | (#36855910)

I have a vague memory of Sun producing an NFS accelerator about 20 years ago. This worked by caching remote file data in non-volatile memory.

Details, please? (0)

Anonymous Coward | about 3 years ago | (#36855956)

How big is a file on average and what constitutes "scanning" that file?

Re:Details, please? (0)

Anonymous Coward | about 3 years ago | (#36860380)

It's not about READING the files! The size of the file is irrelevant; it could even be 0 bytes. Scanning a file means reading its name and other metadata from the filesystem. Think about what that means for a second: on a filesystem with a billion files, with each file name being ten characters long, you would need at least ten bytes per file, or ten gigabytes' worth of metadata. Most filesystems would choke on directories with that many files. I once had a log directory that rotated logs hourly; not having touched it in four years, just doing an ls on the 35,000 files it contained took almost a minute before the ls command even produced any output (using ext3). Extrapolating that to a billion files on a similar setup, assuming ext3 could take it (which it probably can't), it would take almost 70 days to read all that metadata.

I did pretty much the same thing w/ SQLServer (0)

Anonymous Coward | about 3 years ago | (#36857022)

Years ago, circa 2000-2002 @ MS' Tech-Ed in fact, I increased DB performance by MANY ORDERS OF MAGNITUDE simply by using software-based RamDisks/RamDrives to put DB devices into RAM (before they began doing it "natively" in SQLServer), or if the DBs were too large, just their indexes &/or temp/scratch tables!


I was also doing "temp/scratch" table work the SAME WAY on smaller DB engines like Access &/or DBase III, circa 1991-1999, as well...

Why? Because, it works.

I.E./E.G.-> Lower 'seek/access' for starters (which is step #1 of the File Open/Read-Write/Flush/Close I/O cycle), & NO std. HDD read/write mechanical head-movements latencies being another.

Between the 2 of those alone, alongside B-Tree indexing?

You have a "HAUL A$$" DB engine...

This can also be applied to DB driven websites (or not), Terminal Servers, & far, Far, FAR MORE also! Creativity's your ONLY limitation really!

* Yes - The future IS doing Ramdisk/Ramdrives folks, & that "future IS now"...

(Albeit I was doing that decades ago, & only now are you seeing it as more "mainstream", & imo, only REALLY mainly due to co$ts of course - because there were SSD's that worked, Quantum had them iirc, rushmore drives iirc but they cost a fortune!)

I'll also admittedly state that "accomplishment" @ Tech-Ed for myself & EEC Systems/ isn't exactly "brain surgery" to figure out that using a faster media along with good algorithms on the datasets you have will yield a better, faster, & more efficient way of doing things!

HOWEVER? Hey - Nobody else did it before we did that I knew of & received a good deal of "notoriety/press/ink" for it @ least...

I later moved on to actual "TRUE SSDs" as I call them, not based on flash RAM:

1.) Gigabyte IRAM: 4gb DDR2 RAM + PCI-Express x4 bus & SATA II 300MB/sec access circuit

2.) CENATEK "RocketDrive": 2gb PC-133 SDRAM + PCI 2.2 bus 133MB/sec access circuit

They'll do the SAME for DBs, websites, Terminal Servers, & far, Far, FAR more also... but typically faster on writes than FLASH was initially @ least (that's changed, but these have better longevity).

For home use/performance-gains? I use them for:

A.) Pagefile.sys placement (1/2 of 4gb IRAM in own partition)
B.) WebBrowser cache, history, & actual browser program placements
C.) Print Spooler location
D.) %Comspec% location
E.) %TEMP% and %TMP% ops for OS + Apps
F.) Operating System & Application Event Loggings & logging in general

... and more!



Hey - They truly are, "The good stuff"... period!

( I've known & actually used them, & right after software-based Ramdisks/Ramdrives, for ages, & simply because THEY WORK for practical & better, noticeable, and effective performance gains (mostly)).


P.S.=> I just like seeing & knowing that ideas myself & others used decades ago & we were often laughed at by the "wannabe's" in this art & science of computing are only NOW becoming "the performance wave of the future" in the mainstream... funny that, eh? Not...

... apk

I don't know why people make fun of it. (0)

Anonymous Coward | about 3 years ago | (#36858340)

Well, it was a good read; the researchers spent a lot of precious time building this up, and it's a valid thing.
I see a lot of negativity on Slashdot, and the real tragedy is that the funny posts appear before the insightful ones.
People, what's the matter? Whether it's Apple, Nokia, Google, MS, or IBM, every time you just make cowardly jokes.
fuck face.


Anonymous Coward | about 3 years ago | (#36858690)

God. Really?

10 billion files at one byte each is a transfer rate of ten gigabytes per 43 minutes. Slow.

10 billion files at one gigabyte each is a transfer rate of ten exabytes in 43 minutes. Incredibly fast.

Do you see why it's important to include within the summary the average file size they used?
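The same point as sustained rates (43 minutes = 2,580 seconds):

```shell
awk 'BEGIN {
  secs = 43 * 60
  printf "1-byte files:     %.2f MB/sec\n", 1e10 * 1   / secs / 1e6
  printf "1-gigabyte files: %.2f PB/sec\n", 1e10 * 1e9 / secs / 1e15
}'
```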

something strange in the title? (1)

Anonymous Coward | about 3 years ago | (#36859006)

I was wondering what "10B files" means... OK, the article talks of 10 billion files. But is 1 billion 10^9 or 10^12? So if you have to use a symbol, use a sensible one... What about 10G files? :D
