
Intel and Micron Unveil 128Gb NAND Chip

Unknown Lamer posted more than 2 years ago | from the rotating-disks-are-so-90s dept.

Data Storage 133

ScuttleMonkey writes "A joint venture between Intel and Micron has given rise to a new 128 Gigabit die. While production won't start until next year, this little beauty sets new bars for capacity, speed, and endurance. 'Die shrinks also tend to reduce endurance, with old 65nm MLC flash being rated at 5,000-10,000 erase cycles, but that number dropping to 3,000-5,000 for 25nm MLC flash. However, IMFT is claiming that the shrink to 20nm has not caused any corresponding reduction in endurance. Its 20nm flash uses a Hi-K/metal gate design which allows it to make transistors that are smaller but no less robust. IMFT is claiming that this use of Hi-K/metal gate is a first for NAND flash production.'"


133 comments


Get ready for a new wave of poorly coded software (4, Insightful)

malakai (136531) | more than 2 years ago | (#38292662)

I love SSDs, especially for development work. Nothing like having a dev VM per client, each on its own little SSD, isolated from your non-work default operating system. But SSDs are dangerous...

SSDs are like crack to bad applications. They magically make them feel better while masking the underlying problem. I'm worried about what the future holds when the average desktop comes with an SSD. I've already seen development companies demo financial software on striped SSDs as if that's what everyone runs these days. I guess it's no different than an abundance of RAM and an abundance of CPU power. < Insert in-my-day rant here >

Re:Get ready for a new wave of poorly coded softwa (5, Insightful)

ksd1337 (1029386) | more than 2 years ago | (#38292742)

Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumer understands much about efficiency, seeing as eyecandy is so popular, even a selling point.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38292894)

>> I don't think the average consumers understand much about efficiency, seeing as eyecandy is so popular, even a selling point.

Buying and investing for tomorrow is much better than short term gains.

Re:Get ready for a new wave of poorly coded softwa (2, Informative)

AmiMoJo (196126) | more than 2 years ago | (#38293076)

Not all programmers are doing that. Android and Windows have both been getting faster on the same hardware.

Re:Get ready for a new wave of poorly coded softwa (2)

A12m0v (1315511) | more than 2 years ago | (#38295120)

So do iOS and most browsers. Usually the main offenders are application software developers, not system software developers.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38296372)

>>[...] been getting faster on the same hardware

>So does iOS [...]

My iPhone 3G running iOS 4 disagrees.

Re:Get ready for a new wave of poorly coded softwa (1, Interesting)

David_Hart (1184661) | more than 2 years ago | (#38293180)

Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumers understand much about efficiency, seeing as eyecandy is so popular, even a selling point.

Of course, the biggest offender in relying on Moore's Law is Java...

Re:Get ready for a new wave of poorly coded softwa (1)

marnues (906739) | more than 2 years ago | (#38293942)

Care to back up your claim? I'll understand if you haven't used Java since 1.2, but that's hardly relevant today.

Re:Get ready for a new wave of poorly coded softwa (4, Funny)

Neon Spiral Injector (21234) | more than 2 years ago | (#38294232)

Yeah, Java 1.2 ran like crap on my 133 MHz 5x86 with 16 MB of RAM in 1998, but today's Java isn't too bad on my dual six-core CPUs with 32 GB of RAM.

Re:Get ready for a new wave of poorly coded softwa (1)

Joce640k (829181) | more than 2 years ago | (#38294716)

Care to back up your claim? I'll understand if you haven't used Java since 1.2, but that's hardly relevant today.

Ummm, isn't that exactly what he was claiming?

Re:Get ready for a new wave of poorly coded softwa (4, Insightful)

timeOday (582209) | more than 2 years ago | (#38293218)

This moralistic spin ("lazy" programmers) is absurd. The tradeoff between development cost and hardware requirements is obviously affected by cheaper yet higher-spec hardware. If you want to run WordPerfect for DOS at insane speeds on modern hardware, go right ahead. That piece of software cost $495 in 1983 (cite [answers.com]) and was written in assembly language for speed. (I hope the connection there is not lost on anybody.)

Re:Get ready for a new wave of poorly coded softwa (4, Insightful)

billcopc (196330) | more than 2 years ago | (#38293988)

There are some of us who are quite proficient with assembly language. We also had some very sloppy compilers back then, so the two went hand-in-hand.

Back then, I would build a first prototype in straight C (or whatever), then identify the bottlenecks and rewrite those functions in assembly. Heck, in school I wrote a few QBasic games/apps that linked in some assembly calls. Sometimes I'd get cocky and copy the assembled code directly into a QBasic variable, then execute it. For common stuff like blits and mouse calls, I could type those opcodes from memory. You wouldn't think a QB game could handle 3D graphics at 320x200 on a 386, with sound effects and digital (MOD) music, but with a modest application of hand-tuned code, you can write the script-like glue in whatever language you want with only a minimal impact on final performance.

I'm not saying we need to write all apps in raw assembly, that's absurd. We rarely did that back in the day, except for extreme situations and bragging rights. Today's compilers seem to do a good-enough job, but the faster they get, the more our so-called developers push into truly wasteful practices like nested script interpreters; most PHP and Ruby frameworks fall into that category. Do we really need 16-core machines with 48 GB of RAM to push a few pages of text? Not if we were writing actual computer code, and not this navel-gazing techno poetry that's more for humans than machines.

Re:Get ready for a new wave of poorly coded softwa (1)

timeOday (582209) | more than 2 years ago | (#38294362)

In light of the grandparent, my questions for you are:

1) Do you still use assembler as often as you did back then?
2) If not, is it because you weren't "lazy" then but now are?

Re:Get ready for a new wave of poorly coded softwa (2)

Joce640k (829181) | more than 2 years ago | (#38294908)

In light of the grandparent, my questions for you are:

1) Do you still use assembler as often as you did back then?
2) If not, is it because you weren't "lazy" then but now are?

No, it's because I write much larger programs.

The amount of time/effort needed to write assembly language programs grows exponentially as they grow larger. It's simply not worth it to gain a few percent of speed compared to a good compiler.

Much better to learn to disassemble critical code every now and again and learn what makes your compiler happy.

People still make Dendy games (1)

tepples (727027) | more than 2 years ago | (#38296020)

Do you still use assembler as often as you did back then?

There are still people coding video games for 8- and 16-bit platforms that don't lend themselves well to the abstractions of C. These platforms include retro consoles, Chinese SoCs, and Chinese SoCs compatible with retro consoles.

Re:Get ready for a new wave of poorly coded softwa (1)

Joce640k (829181) | more than 2 years ago | (#38294944)

I'm not saying we need to write all apps in raw assembly, that's absurd. We rarely did that back in the day, except for extreme situations and bragging rights.

Speak for yourself.

I started out writing programs in hexadecimal. Assembly language simply wasn't practical on the machines I was using.

Then we got floppy disks...but I still did another four or five years of assembly language programming before I ever saw a compiler.

Re:Get ready for a new wave of poorly coded softwa (1)

izomiac (815208) | more than 2 years ago | (#38294292)

The tradeoff between development cost and hardware requirements

This is exactly why I like free software. If you're writing code as a hobby then you take pride in your work and release the best version you can conceive of, and only when it's ready. If you're selling it, then everything is a tradeoff designed to maximize profit. Or at least that's my theory. In reality, a lot of free software developers are commercial software developers by day and their "bad" habits carry over, especially with less interesting parts of the codebase. And there are commercial software developers who legitimately take pride in their work and aren't so focused on maximizing profit, but they're a rarity in this day and age.

Re:Get ready for a new wave of poorly coded softwa (3, Insightful)

blair1q (305137) | more than 2 years ago | (#38293450)

They aren't lazy, they're productive, and taking advantage of the resources available.

When they're tired of putting the first-to-market markup and the bleeding-edge markup in their bank accounts, then they'll address reports of sluggishness or resource starvation in less-profitable market segments.

Right now, though, the fruit that are hanging low are fat and ripe and still fit in their basket.

Re:Get ready for a new wave of poorly coded softwa (3, Insightful)

ArcherB (796902) | more than 2 years ago | (#38293482)

Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumers understand much about efficiency, seeing as eyecandy is so popular, even a selling point.

Most of the programmers I know don't care about timelines, eyecandy, popularity or selling points. These guys are computer nerds. Just as car nerds want their hot rods to purr at idle and roar when pushed, most programmers want their code to run fast, efficient and clean. The problem is that programmers are under the thumb of timelines and feature bloat imposed on them by management and sales.

This is not necessarily a bad thing: if it were not for deadlines, no programs would ever be finished. Yes, code is more inefficient, but only because the hardware has allowed it to be. It does not hurt the bottom line if a customer has to wait 1.5 seconds for a program to launch instead of 3. The bottom line is what management cares about, and to be fair, it is what drives business and keeps Red Bull stocked in the break room fridge.

App Store launch speed requirement (1)

tepples (727027) | more than 2 years ago | (#38296090)

It does not hurt the bottom line if a customer has to wait 1.5 seconds for program to launch or 3.

What does hurt the bottom line is device makers who reject a slow-launching application from the device's official application repository because the repo curator feels slow-launching applications reflect poorly on the platform. In such a case, an application developer has to at least put up a splash screen while the program is launching.

Re:Get ready for a new wave of poorly coded softwa (1)

TheCouchPotatoFamine (628797) | more than 2 years ago | (#38293542)

you know, if done correctly, as apple did it, "eye candy" can be quite efficient. Unless you like your video card sitting there, doing nothing, being /efficient/ (rolls eyes)

Limiting your market (1)

tepples (727027) | more than 2 years ago | (#38296130)

"eye candy" can be quite efficient. Unless you like your video card sitting there, doing nothing, being /efficient/ (rolls eyes)

A line like "System requirements: Intel integrated graphics not supported" means lost sales unless an application is such a killer app that people will buy a new laptop just to get AMD or NV graphics.

Re:Get ready for a new wave of poorly coded softwa (3, Insightful)

GreatBunzinni (642500) | more than 2 years ago | (#38294134)

Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumers understand much about efficiency, seeing as eyecandy is so popular, even a selling point.

Your comment is either naive or disingenuous. There are plenty of reasons that lead a piece of software to do a good job under one scenario but poorly under a completely different one, all without incompetence being a factor. Let me explain.

Consider one of the most basic subjects which is taught right at the start of any programming 101 course: writing data to a file. For that task, a programmer relies on standard interfaces, either de-facto standards such as platform-specific interfaces or those defined in international standards such as POSIX. This means that a programmer tends to not be aware of any specification regarding the file system or even the underlying hardware when developing a routine that dumps data to a file. Basically, what tends to be taught is to open a file, write to it and then close it. This tends to be acceptable in most scenarios, but this is a dangerous thing to do. After all, just because some data is written to a file it doesn't mean the data is immediately written to that file. The underlying platform may rely on IO buffers to be able to run things with a bit more efficiency. This means that even though your call to write() does succeed, and even though your program can successfully read your data back, that data isn't in fact stored in your file system. This means that if your program is killed/crashed or if your computer dies then you risk losing your data and corrupting the file. If this happens, does it mean that the programmer is incompetent?

This problem can be mitigated by flushing the data to a file. Yet, calling flush() doesn't guarantee that every single bit of your data will be successfully stored in your file system. The thing is, this only guarantees that, when flush() returns, the data is flushed. If the system dies while your program is still writing away your data then you quite possibly lose all your data, and no call to flush() can save you from that. If this happens, does it mean that the programmer is incompetent?
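The gap described above, between a write() call succeeding and data actually reaching stable storage, can be sketched in Python. This is a minimal illustration under stated assumptions, not a durability guarantee: the name `durable_write` is mine, and even `os.fsync` only asks the kernel to push the page cache to the device; a drive with a volatile write cache can still lose the tail of a write on power failure.

```python
import os

def durable_write(path, data):
    """Write data, then push it through both buffering layers."""
    with open(path, "wb") as f:
        f.write(data)         # data may sit in the process's user-space buffer
        f.flush()             # move it into the kernel page cache
        os.fsync(f.fileno())  # ask the kernel to commit the page cache to the device
```

Note the layering: flush() alone only crosses the first boundary, which is exactly why a crash after a "successful" write can still lose data.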

Some clever people took a bit of time to think about this and came up with techniques that avoid the risk of corrupting your data. One technique is to dump the data to a temporary file and then, after the save succeeds, delete or rename the old file to a backup name and rename the newly created temporary file back to the original name. With this technique, even if the system dies, the only file which might be corrupted is the newly created temporary file, while the original file is kept in its original state. With this approach, the programmer guarantees that the user's data is preserved. Yet this also has the nasty consequence of storing what's essentially the same file in entirely different inodes. This screws with a lot of stuff. For example, it renders hard links useless and interferes with the way versioning file systems work. If this happens, does it mean that the programmer is incompetent?
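The temp-file-and-rename technique described above can be sketched in Python. A hedged sketch: the name `atomic_save` is illustrative, and `os.replace` is atomic on POSIX file systems. Note that the saved file really does end up in a new inode, which is precisely the hard-link caveat raised here.

```python
import os
import tempfile

def atomic_save(path, data):
    """Write to a temp file in the same directory, fsync it, then rename it
    over the original. A crash leaves either the old file or the new one."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)  # same directory, so the rename stays on one file system
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # make sure the temp file's contents are durable first
        os.replace(tmp, path)          # atomic rename; the original inode is replaced
    except BaseException:
        os.unlink(tmp)                 # clean up the temp file on any failure
        raise
```

Usage is the same as a plain save; the old contents remain intact until the final rename commits the new version.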

So, no. This has nothing to do with what you arrogantly referred to as "lazy programmers", or even incompetence. Times change, technical requirements change, hardware requirements change, systems change... And you expect that software someone designed a couple of years ago will run flawlessly and avoid each and every issue that is only being discovered today, or might only be discovered tomorrow. How can programmers avoid these issues if they don't even have a working crystal ball? This isn't realistic, and you can only make such claims by being completely clueless and out of touch with reality. So please tone down your arrogance and spend a moment thinking about this issue.

Re:Get ready for a new wave of poorly coded softwa (1)

ksd1337 (1029386) | more than 2 years ago | (#38295968)

Yeah, I guess it is pretty naive. I'm just tired of new versions of software running slower and slower on my older hardware. (Web browsers especially. And operating systems.)

Re:Get ready for a new wave of poorly coded softwa (1)

Belial6 (794905) | more than 2 years ago | (#38294904)

Writing inefficient bloated code more often than not has nothing to do with being lazy. It frequently has to do with looking at the big picture. Spending $100k in developer time to optimize code that could run unoptimized on $100 worth of extra hardware isn't being industrious. It is wasting resources.

There are plenty of places that optimization makes sense, but more often than not, ease of future maintenance should be much higher on the list of priorities.

$100 of HW times how many copies? (1)

tepples (727027) | more than 2 years ago | (#38296148)

Spending $100k in developer time to optimize code that could run unoptimized on $100 worth of extra hardware isn't being industrious. It is wasting resources.

Not if at least 1,000 copies of the software will be sold.

Re:Get ready for a new wave of poorly coded softwa (1)

VitaminB52 (550802) | more than 2 years ago | (#38295778)

Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through.

Wrong.

It's not about lazy programmers, it's all about calculating managers.

To cite one of my managers during a project review:
Next time, you're not going to perfect our code like you did on this project. If we measure software quality on an A-to-F scale, and if our client is satisfied with quality level C, then stop improving your code once you hit that C level. Going for the A will only increase project cost, not our revenue.

We have another client that wants B (1)

tepples (727027) | more than 2 years ago | (#38296186)

if our client is satisfied with quality level C, then stop improving your code once you hit that C level.

Comeback: "We have another possible client that wants B." Or "We have a competitor offering B."

(Not that B [wikia.com] , or that /b/ either [wikipedia.org] .)

Re:Get ready for a new wave of poorly coded softwa (2)

ByOhTek (1181381) | more than 2 years ago | (#38292750)

I'm worried what the future is going to hold when the average desktop comes with an SSD drive.

Same thing that has happened with every change that has provided a significant performance improvement with a given resource...

Applications that have a little more functionality, and a lot more waste.

Re:Get ready for a new wave of poorly coded softwa (0)

0123456 (636235) | more than 2 years ago | (#38292774)

Same thing that has happened with every change that has provided a significant performance improvement with a given resource...

Applications that have a little more functionality, and lot more waste.

Except with SSD write lifetimes falling with every generation, in this case 'a lot more waste' could mean trashing your drive in a few months.

Re:Get ready for a new wave of poorly coded softwa (3, Informative)

Anonymous Coward | more than 2 years ago | (#38293082)

No, it couldn't. Most drives - even those with bad write lifetimes - could be continually overwritten for a period of many years before needing to be replaced. Reference: http://www.storagesearch.com/ssdmyths-endurance.html [storagesearch.com]

As a sanity check, I found some data from Mtron (one of the few SSD OEMs who quote endurance in a way that non-specialists can understand). In the data sheet for their 32G product, which incidentally has 5 million cycles of write endurance, they quote the write endurance for the disk as "greater than 85 years assuming 100G / day erase/write cycles", which involves overwriting the disk 3 times a day.

That's for old-ish tech and a smallish drive. For consumers, large drives get written to far less. Consider: the vast bulk of what fills consumer "big" drives is music and movies. These are big, chunky files that don't get overwritten very much, so the vast majority of your drive stays clean. Most people will want or need to buy a new hard drive long before the old one wears out. Please read up on the facts before spouting nonsense.

Re:Get ready for a new wave of poorly coded softwa (2)

Tanktalus (794810) | more than 2 years ago | (#38293644)

I'm not sure that these "large files" prove what you think they do. Then again, I'm not entirely confident that the opposite is true, either. Even if they don't prove what you think they do, I think your point still largely stands.

Let's say I have a 32G SSD device. If I put some movies on here, I'm left with, say, 16G, of "active" disk space usage. If I then go and use what's left very actively, the write endurance for that section falls given the same usage patterns. For example, at 100GB/day erase/write cycles, I've cut it down to ~43 years. If I can interleave the deletion/replacement of the movies into the mix, then I should trend back up to around 85 years, but I don't think that the life span will be extended beyond that, again, assuming constant usage.

Where your point largely still stands is how much less than 100GB/day a consumer will use their disk. By storing 16G of movies/music and only going through 50GB of erase/write cycles, we get back up to 85 years. And both of these numbers are still way too high for consumer usage - 3GB/day might be stretching it as an average for consumers.
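The scaling in the paragraphs above can be checked with a simple linear wear model. This is an assumption for illustration (real drives complicate it with wear leveling and write amplification): lifetime scales with the fraction of the drive absorbing writes, and inversely with the daily write volume.

```python
def endurance_years(base_years, total_gb, active_gb, base_daily_gb, daily_gb):
    """Linear wear model: lifetime scales with the fraction of the drive
    receiving writes and inversely with the daily write volume."""
    return base_years * (active_gb / total_gb) * (base_daily_gb / daily_gb)

# 16 of 32 GB active, still at the full 100 GB/day: half the lifetime
print(endurance_years(85, 32, 16, 100, 100))  # -> 42.5 (the "~43 years" above)

# same active area, but only 50 GB/day of writes: back to the baseline
print(endurance_years(85, 32, 16, 100, 50))   # -> 85.0
```

Both numbers line up with the estimates in the comment, which suggests the back-of-the-envelope reasoning there is internally consistent.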

Re:Get ready for a new wave of poorly coded softwa (1)

0123456 (636235) | more than 2 years ago | (#38293726)

And don't forget that if you overwrite a single byte in a block, then that's going to result in a full block write.

Given that people who use their SSDs for compile jobs have said they can wear them out in under a year with older-generation drives that supported many more write cycles, a crappy application could easily wipe out the next generation of SSDs in a few months by continually writing to the disk.
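The single-byte point above is about erase-block granularity: flash is erased in large blocks, so in the worst case a tiny logical change costs a whole block of physical wear. A small sketch, where the 512 KiB erase-block size is an assumed example, not a spec:

```python
def worst_case_amplification(bytes_changed, erase_block_bytes=512 * 1024):
    """Worst-case write amplification when a change of `bytes_changed` bytes
    forces a full erase block to be rewritten (block size is an assumption)."""
    return erase_block_bytes / bytes_changed

# one changed byte can cost an entire erase block of wear
print(worst_case_amplification(1))  # -> 524288.0
```

In practice write caching and the flash translation layer soak up most of this, which is what the replies below argue.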

Re:Get ready for a new wave of poorly coded softwa (4, Interesting)

KingMotley (944240) | more than 2 years ago | (#38294020)

Not exactly. The older SSDs didn't do wear leveling. Also, most OSes don't force a write to disk when a single byte in a sector changes (perhaps Linux does, I don't know). Most SSDs also have write caching today, so even if the OS were silly enough to request a write to disk, it would quickly get invalidated by the next request to write the same sector before it even hit the flash portion of the SSD.

Lastly, even if you disregard all of that, then you also must realize that you don't need to do an erase if all the changes you are making are turning bits on. In that case, you just do a write instead of erase and write, and that doesn't wear out the SSD at all (I believe).

Also, there is nothing keeping the SSD from periodically moving data with low write counts to the high write count portions of the disk in the background in hopes that the semi-static data will remain semi-static.
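The background relocation described above is usually called static wear leveling. A toy sketch of the idea follows; every name here is hypothetical and this is nothing like a real flash translation layer, just the selection logic: move cold data off the least-worn block so that block can absorb future writes.

```python
def rebalance(erase_counts, holds_static):
    """Pick one relocation for static wear leveling.
    erase_counts: block id -> erase count; holds_static: blocks holding cold data.
    Returns (source, destination) or None if the pool is already balanced."""
    static = [b for b in erase_counts if b in holds_static]
    free = [b for b in erase_counts if b not in holds_static]
    if not static or not free:
        return None
    cold = min(static, key=erase_counts.get)  # least-worn block, pinned by cold data
    hot = max(free, key=erase_counts.get)     # most-worn block available for reuse
    if erase_counts[hot] - erase_counts[cold] < 2:
        return None                           # wear already spread evenly enough
    return (cold, hot)

counts = {"A": 10, "B": 950, "C": 40, "D": 900}
print(rebalance(counts, holds_static={"A", "C"}))  # -> ('A', 'B')
```

Here block A holds semi-static data on a barely-worn block, so its contents get moved to well-worn block B, freeing A for hot writes.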

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38295118)

That was sort of what I was getting at with the large files. If you only use half your disk regularly and the rest is reserved for lots of ~1GB files, you can always move around those files to expose some more long lasting disk space.

Re:Get ready for a new wave of poorly coded softwa (1)

viperidaenz (2515578) | more than 2 years ago | (#38295142)

I believe both writing and erasing a cell in flash degrades it. It's caused by the charge tunneling through the insulating gate of the transistor; each time that happens, the glass that forms the insulation breaks down a bit. Smaller transistors have less insulation to break down, so fewer write cycles. I guess the Hi-K/metal gates are more robust.

Re:Get ready for a new wave of poorly coded softwa (1)

symbolset (646467) | more than 2 years ago | (#38294326)

With a decent amount of write cache this is not a problem. You can force this with a pathological worst-case utility, but in the real world this is not how it works.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38294640)

I'm not sure that these "large files" prove what you think it does. Then again, I'm not entirely confident that the opposite is true, either. Even if it doesn't prove what you think it does, I think your point still largely stands.

Let's say I have a 32G SSD device. If I put some movies on here, I'm left with, say, 16G, of "active" disk space usage. If I then go and use what's left very actively, the write endurance for that section falls given the same usage patterns. For example, at 100GB/day erase/write cycles, I've cut it down to ~43 years. If I can interleave the deletion/replacement of the movies into the mix, then I should trend back up to around 85 years, but I don't think that the life span will be extended beyond that, again, assuming constant usage.

Where your point largely still stands is how much less than 100GB/day a consumer will use their disk. By storing 16G of movies/music and only going through 50GB of erase/write cycles, we get back up to 85 years. And both of these numbers are still way too high for consumer usage - 3GB/day might be stretching it as an average for consumers.

Don't worry about "active" data and "mostly read-only" data on your SSD. That is not how SSDs work. Modern SSDs will perform wear-leveling across their entire internal flash memory. That includes across partitions, both primary and extended. You didn't think your SSD actually knew about all the different file systems and all the different partitioning schemes, did you? SSDs do not even know which space on the drive your operating system considers empty or full unless they are told via TRIM command.

Re:Get ready for a new wave of poorly coded softwa (1)

viperidaenz (2515578) | more than 2 years ago | (#38295094)

5 million cycles? Modern NAND chips are rated for 3,000-5,000 cycles, so by those calculations, overwriting the disk 3 times a day gives 2.7-4.5 years.
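The arithmetic behind that range is just rated erase cycles divided by full-drive overwrites per year:

```python
def years_at(cycles, overwrites_per_day):
    """Upper-bound lifetime: rated erase cycles spread over daily full-drive overwrites."""
    return cycles / (overwrites_per_day * 365)

print(round(years_at(3000, 3), 1))  # -> 2.7
print(round(years_at(5000, 3), 1))  # -> 4.6
```

This reproduces the comment's figures (the "4.5" there is the same number rounded down), and it ignores write amplification, so real lifetimes at that workload would be shorter still.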

Re:Get ready for a new wave of poorly coded softwa (1)

tabrisnet (722816) | more than 2 years ago | (#38295632)

For as much as anecdote != data: I took a recent (installed it Thanksgiving 2010) 30 GB OCZ MLC SSD and gave it a database workload in which a commit was made every 5 seconds. Wore it out in a year. Replaced it in August.

Relevant SMART data (cleaned up so /. doesn't hate on it)
205 Max_PE_Count_Spec       0x0000   -   -   -    Old_age   Offline      -       10000
206 Min_Erase_Count         0x0000   -   -   -    Old_age   Offline      -       8948
207 Max_Erase_Count         0x0000   -   -   -    Old_age   Offline      -       9930
208 Average_Erase_Count     0x0000   -   -   -    Old_age   Offline      -       9655
209 Remaining_Lifetime_Perc 0x0000   -   -   -    Old_age   Offline      -       4

You can't tell me it can't be done with modern SSDs. It merely isn't the case for MOST consumers. But when has Slashdot been made up of MOST consumers?
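The counters in the SMART dump above tell the wear story directly: the average erase count against the rated program/erase spec gives a rough estimate of life consumed (rough because drives weight this their own way, as the reported 4% versus the computed ~3.5% shows).

```python
def remaining_pct(average_erase_count, max_pe_count_spec):
    """Estimated remaining life, as percent of rated P/E cycles not yet consumed."""
    return 100 * (1 - average_erase_count / max_pe_count_spec)

# Values from the SMART dump above: 9655 average erases against a 10000-cycle spec
print(remaining_pct(9655, 10000))  # roughly 3.5, in line with the drive's reported 4%
```

At a commit every 5 seconds, nearly the entire rated endurance was consumed in a year, which is exactly the non-consumer workload being described.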

Re:Get ready for a new wave of poorly coded softwa (5, Informative)

Rockoon (1252108) | more than 2 years ago | (#38295890)

Except with SSD write lifetimes falling with every generation

Except this isn't true. Flash lifetimes are dropping due to process shrinks, but SSD lifetimes are remaining steady thanks to the increased capacity those same shrinks make possible.

This is the problem with you SSD critics. You get that one nugget of information and then gleefully go on spitting bullshit at everyone on forums like this one. To be quite clear, YOU DO NOT KNOW WHAT YOU ARE TALKING ABOUT.

Why do you volunteer to talk about a subject we both know you are poorly informed about? You don't see me talking about Java performance because, even though I know a couple of things about Java, I refuse to make declarative statements about topics I only know a couple of things about.

If you are an expert in something... wait for that topic before you act like an expert.

Re:Get ready for a new wave of poorly coded softwa (1)

William Robinson (875390) | more than 2 years ago | (#38292800)

SSD's are like crack to bad applications.

I disagree that SSDs have anything to do with it. Average desktops already come with 500 GB or larger SATA drives. Nobody is stopping a poor coder from pushing inefficient code to market as long as people are ready to throw money at it.

Re:Get ready for a new wave of poorly coded softwa (1)

GameboyRMH (1153867) | more than 2 years ago | (#38292848)

Average desktops come with the slowest bargain-basement hard drives the manufacturer can buy by the truckload. There is an excess of processing power and RAM, which apps haven't wasted. There is sure as hell no excess of disk speed.

Re:Get ready for a new wave of poorly coded softwa (1)

MightyYar (622222) | more than 2 years ago | (#38292952)

RAM is certainly not the bottleneck in terms of speed, but quantity is never excessive. In fact, most of the performance penalty you feel on low-end machines is when you run out of RAM and start swapping more heavily to the slow fixed disk.

Re:Get ready for a new wave of poorly coded softwa (1)

GameboyRMH (1153867) | more than 2 years ago | (#38293032)

It doesn't help that Windows uses swap regardless of how much RAM is free (on my gaming machine, if I enable the swap file it gets used even when there is MORE THAN 10GB OF RAM AVAILABLE). Switch to Linux and that limitation disappears: it will only start swapping when RAM is somewhere between 2/3 full and completely full, depending on the (much more sane) default setting. 4GB of RAM is actually pretty hard to use up, especially without help from a Firefox instance with a zillion tabs open. Most average users will always have plenty to spare.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38293156)

How do you tell it's using your swap? The OS will commit memory to swap even when it is not under pressure, in order to free up more real memory for other applications.

Re:Get ready for a new wave of poorly coded softwa (1)

GameboyRMH (1153867) | more than 2 years ago | (#38293212)

Task manager --> Performance?

Re:Get ready for a new wave of poorly coded softwa (1)

Culture20 (968837) | more than 2 years ago | (#38293282)

How do you tell it's using your swap?

Um, by using the system utilities to see disk activity and the associated files (the windows equiv of lsof).

The OS will commit memory to swap when it is not being used yet to free up more real memory for other applications.

Even with 10GB free? Why? Since this is his gaming machine, I'm assuming he's talking about enabling the "minimum suggested" pagefile of 512MB. Even with 16GB of RAM, Windows will waste time on I/O to this pagefile.

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38294164)

Disable the pagefile entirely then. A gaming machine doesn't need it. Heck, it doesn't need 16GB of RAM either; 6 or 8 would be more than enough.

Re:Get ready for a new wave of poorly coded softwa (1)

AdamHaun (43173) | more than 2 years ago | (#38293982)

Can't you just turn off the swap file? That's what I do. It speeds up application switching remarkably.

Re:Get ready for a new wave of poorly coded softwa (1)

GameboyRMH (1153867) | more than 2 years ago | (#38294058)

That's what I did. I still have to turn it on for some apps though. ScanDisk crashes without it and Spider-Man: WoS wouldn't run.

ScanDisk (1)

tepples (727027) | more than 2 years ago | (#38296574)

I thought ScanDisk was in Windows 95, Windows 98, and Windows Me, and it was replaced with CHKDSK (for Windows NT) [wikipedia.org] in Windows XP.

Re:Get ready for a new wave of poorly coded softwa (3, Informative)

billcopc (196330) | more than 2 years ago | (#38294156)

If you have 12GB in your PC, and you're using it normally, you can disable swap entirely. Sure, your commit rate will jump a bit, but you still have several times more Ram than you need. Swap space has a usefulness even when you have memory available, because a properly tuned VMM will treat it as low-priority commit fodder - meaning if an app requests 10 gigs of buffer space, but has not yet put anything in there, the VMM will earmark swap first, so as to not tie up physical RAM until it is actually needed (if at all). In a sense, it's an accounting trick that allows the OS to "borrow" memory without necessarily using it. It's like a line of credit for memory; you're best to avoid using it, but if you need a security deposit for something, that mastercard is ideal. Swap is like that mastercard. It can help swing you through tight spots, but if you abuse it, you enter a world of pain...
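The "earmark swap first, commit physical pages later" behavior described here can be seen with an anonymous memory mapping: address space is reserved immediately, but physical pages are only faulted in when touched. A small Python sketch (the lazy-faulting behavior is OS-dependent; this is how Linux handles anonymous mappings):

```python
import mmap

SIZE = 64 * 1024 * 1024    # reserve 64 MiB of address space

buf = mmap.mmap(-1, SIZE)  # anonymous mapping: little physical RAM committed yet
buf[0] = 1                 # touching a page faults it in, committing real memory
assert buf[0] == 1
buf.close()
```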

Re:Get ready for a new wave of poorly coded softwa (1)

GameboyRMH (1153867) | more than 2 years ago | (#38294226)

I did disable swap entirely, but a few apps can't run without it, such as ScanDisk which will just fill the RAM and crash.

Re:Get ready for a new wave of poorly coded softwa (1)

badboy_tw2002 (524611) | more than 2 years ago | (#38294286)

Just turn off swap if you've got that much. Once I went to 16GB on my workstation I turned it off and haven't had a problem yet. (And it's not like you won't figure out quickly if you are running into issues).

Re:Get ready for a new wave of poorly coded softwa (1)

viperidaenz (2515578) | more than 2 years ago | (#38295182)

Firefox instance with one or two tabs open

FTFY

Re:Get ready for a new wave of poorly coded softwa (1)

mehrotra.akash (1539473) | more than 2 years ago | (#38293068)

You haven't had the pleasure of using VS 2010 with an IIS and SQL Express server instance (and McAfee) running on a 2GHz C2D with 256MB RAM and WinXP, have you?

Re:Get ready for a new wave of poorly coded softwa (2)

Microlith (54737) | more than 2 years ago | (#38293216)

I haven't had a desktop system with only 256MB of RAM in 10 years. Even my Athlon 64 system had 1GB to start. Sounds like you were being punished or something.

Re:Get ready for a new wave of poorly coded softwa (1)

mehrotra.akash (1539473) | more than 2 years ago | (#38293304)

My college labs, getting upgraded next semester to i3's with 4GB though
Those were toy programs, but the experience was still painful

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38294204)

No, we haven't.

Anyone pairing a C2D with less than a gig of Ram should be tarred, feathered, then chucked into a wood chipper. I haven't built a desktop with less than 2 gb of Ram in at least 6-7 years - and I don't mean my own, I'm talking about PCs I've built for non-techy friends and family.

Re:Get ready for a new wave of poorly coded softwa (4, Interesting)

malakai (136531) | more than 2 years ago | (#38293026)

This isn't about storage size. That war is lost for the desktop (see Bloatware). If it weren't for smartphones and tablets causing people to still think about storage size for some applications, it would be even worse.

When we talk about SSDs vs. HDDs, we are primarily talking about drive bandwidth and access times. SSDs have no seek time, no spin-up time, and their read/write bandwidth is at least 2x as fast, and can be up to 4x or 5x as fast.

Think of the engineering and time that goes into making an application 'snappy' to load, like say Chrome or Word or Photoshop. Now weigh that engineering cost against simply installing an SSD. Now you see how this is going to affect future software development.

But GP (or uncle, or 2nd cousin) is right, this is a rant. Each of these Moore's law watermarks tends to have similar effects on software development. I think bleeding edge apps (including games) generally herald what is to come....

Buy stock in SSD manuf I guess.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38294128)

and their read/write bandwidth is at least 2x as fast, and can be up to 4x or 5x as fast.

My 4TB 6 drive RAID array was like $500 and I get over 700MB/sec transfer rates.

HDs suck on random IO... (0)

Anonymous Coward | more than 2 years ago | (#38294518)

Yup, and the throughput you talk about applies only to streaming (i.e. large, contiguous) reads and writes. You get a sucky 100 IOPS out of that array (assuming it's a 6 x 1TB in a RAID-6 or similar construction). That means your throughput drops to under 10MB/s when doing random reads or writes.

Compare that to even a mid-grade SSD: 40,000+ IOPS read, and 4,000 IOPS write. Sure, a SSD isn't really any faster on sequential read/write, but it blows the doors off of any HD + RAID configuration for random I/O. And, guess what - the vast majority of your workload is random I/O. Practically the only sequential I/O you'll do is read a movie file for playing (or writing it during ripping). Virtually everything else for a desktop is going to be random I/O.

Average access time (stroke + seek) on an HD is about 10-15 milliseconds. It's around 10-15 microseconds in an SSD. That's 1,000 times faster.
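The throughput claims in this thread follow directly from IOPS times transfer size. A quick back-of-the-envelope in Python (the 4 KiB block size and the IOPS figures are taken from the comment above, not from measurements):

```python
BLOCK = 4 * 1024  # assume 4 KiB random I/O per operation

def throughput_mb_s(iops, block=BLOCK):
    """Sustained throughput in MB/s for a given random-I/O rate."""
    return iops * block / 1e6

print(f"HDD array @ 100 IOPS: {throughput_mb_s(100):.2f} MB/s")
print(f"SSD @ 40,000 IOPS:    {throughput_mb_s(40_000):.2f} MB/s")
```

With 4 KiB requests the spinning array manages well under 1 MB/s on random I/O; the "under 10MB/s" figure above assumes somewhat larger request sizes.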

Re:Get ready for a new wave of poorly coded softwa (1)

elsurexiste (1758620) | more than 2 years ago | (#38292854)

So? Who cares how inefficient an app is, as long as it works? If better hardware, instead of better software, makes the switch from 10 seconds of swapping to just 1, the problem is solved.

Sorry, this is my knee-jerk reaction to rants... :P

Re:Get ready for a new wave of poorly coded softwa (1)

MightyYar (622222) | more than 2 years ago | (#38292998)

You are absolutely right... on the desktop. On laptops/tablets/phones you have the battery to contend with. Most of the time when I notice my battery bar go down really fast on my laptop it is some stupid flash thing hogging the CPU in an open web page. Sometimes it is some other poorly written application running wild. I can close it, but by then I've already lost time.

Re:Get ready for a new wave of poorly coded softwa (1)

Eponymous Coward (6097) | more than 2 years ago | (#38293028)

Who cares? It might be a truism, but the people who care, care. As a programmer, there's a balance between maintainability, performance, and ease of development that I care deeply about. I'm proud of my work and I'm not willing to slap something together with no regard for efficiency or longevity.

If I were making a game, I would probably be willing to sacrifice some ease of development for the sake of better efficiency. If I were making a prototype to demo UI concepts, ease of development might be paramount. Whatever trade-offs I'm making, I try to be aware of them and make good decisions because I care about my work.

Re:Get ready for a new wave of poorly coded softwa (1)

geekoid (135745) | more than 2 years ago | (#38293036)

People who like efficiency care. People who understand a computer has many apps working alongside and with each other care.
Efficiency saves money in the long run.

Your attitude is why the industry is a shithole compared to other engineering disciplines.

Sorry, this is my knee-jerk rant to small-minded, unthinking responses.

Re:Get ready for a new wave of poorly coded softwa (1)

elsurexiste (1758620) | more than 2 years ago | (#38293344)

Ha! My reaction is opposition, not really debate ;) . Most rants are overreactions, so I don't take them seriously.

Of course I care about efficiency, and I can tell a software patch that brings O(n) to O(sqrt(n)) is instantaneously better and cheaper than spending extra bucks in chips and power.

My point still applies, though: adding horsepower to your machine can be a faster solution than optimizing your code, if you have the resources available and need a solution *today*. I did it one or two times, when I had deadlines. But, as I said, I'm not looking for a debate, just a little cynicism in the face of exaggeration.
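As an illustration of the kind of O(n) to O(sqrt(n)) patch mentioned above (my example, not the poster's): primality testing by trial division only needs to check divisors up to the square root of n, since divisors come in pairs.

```python
import math

def is_prime_slow(n):
    """O(n): try every candidate divisor below n."""
    return n > 1 and all(n % d for d in range(2, n))

def is_prime_fast(n):
    """O(sqrt(n)): if n = a*b, one of the factors is <= sqrt(n)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

# Same answers, roughly 100x fewer iterations for n around 10,000.
assert all(is_prime_slow(n) == is_prime_fast(n) for n in range(2, 1000))
```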

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38294440)

Agreed. It is even more frustrating when you know how little effort would be required to drastically improve performance in many of these apps.

There's one app I love to pick on, because it is legendary in its inefficiencies, and that is Cubase - the multitrack audio production tool. Just to launch it on this $10k PC takes about 15 seconds, because it looks through a few directories for a number of plugins, does god-knows-what with each of them, and eventually presents a blank workspace for me to actually start working. It doesn't actually enable any of the plugins at launch, and each one still takes a second or two to load when you select it. Is it querying for the plugin's name, version and other metadata ? That shouldn't take 2 million CPU cycles per file... Is it preloading them all to speed things up later ? No, because the app still pauses and the audio stutters when you load a new effect or instrument. So what the fuck is it doing that takes so long to launch the host app ? It's doing bullshit, that's what.

I say this as a programmer who was writing assembly back in the day, and who has written audio apps and plugins and plugin hosts. Cubase just wastes CPU time and memory doing nothing of value, as do most modern apps. Why? Because today's developers don't give a crap. Their bosses don't give them the time or resources to do things right. Their workstations probably suck and take too long per build/test/debug iteration. And maybe, just maybe, the developers themselves don't know what they're doing, thanks to the many colleges cranking out useless degrees and the industry-wide abomination that is Java, teaching people to write prose rather than code.

Re:Get ready for a new wave of poorly coded softwa (3, Insightful)

Nethemas the Great (909900) | more than 2 years ago | (#38293632)

I understand what you're saying, but at the same time software that wastes compute resources is also wasting dollars: dollars needlessly spent on employee hours (waiting for operations to complete), on new/upgraded hardware to cope, and, something many people might not realize, on extra software development, maintenance and support costs. Inefficient software quite often reflects a poor implementation under the hood and frequently behind the wheel. One thing nearly every engineering discipline recognizes is that the fewer moving parts a system has, the inherently more reliable and maintainable that system becomes. This is no different with software. Software bloat is the bane of those trying to implement features to support new requirements and a nightmare for those trying to ensure quality control. Software bloat often shows up in the user interface as well, in poorly implemented workflows that further slow down productivity.

Contrary to popular opinion, fancy GUIs replete with eye-candy generally aren't the problem (normally they're built on top of highly abstracted, well optimized and tested frameworks); the problem is evolution. One of the more common sources of inefficiency is software bloat, and bloat can plague even software that was initially well constructed. Over time, after several iterations of evolution, the feature requests, the various modifications and the resulting baggage train required to support them can grow substantially and weigh down a system. It isn't that the bloat is a requirement of a given feature set per se; rather, it reflects a set of compromises made necessary by an initial architecture that wasn't designed to support them. Management, and sometimes even the engineers, have a hard time accepting that a significant or even complete tear-down and reconstruction with a new architecture is the best and most appropriate choice. One of the easiest and most notable places this problem can be recognized is in web browsers: take a trip down memory lane and compare the features, bloat, and usability of the various web browsers throughout time.

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38294302)

It's not though, because were that software properly written in the first place, we would already be at the 1 second swap on the "old" hardware, and the SSD would make it 1/10th of a second. If all these dramatic improvements in hardware performance are met with equal regressions in the software, then why the heck are we upgrading in the first place ?

Does Firefox need 10 seconds and 400mb of Ram just to launch ? Yes, it does, but according to the functionality it provides, this should be possible with less than 1/10th of the resources, and no, it wouldn't require writing everything in assembly language. It wouldn't require ANY assembly, just a modicum of sanity and restraint from the developers. The OS, which provides all the services Firefox consumes, uses less memory than that. It's not working with massive data sets, I certainly don't have 400mb of text and images loaded in the one tab, so where the heck is all that excess being used ?

Re:Get ready for a new wave of poorly coded softwa (1)

Belial6 (794905) | more than 2 years ago | (#38294986)

Don't say sorry. You're right. Spending $100k on code optimization to avoid spending an extra $100 on hardware makes no sense at all. Code optimization makes sense if $1000 worth of development time will save $10000 in new hardware, sure. But using the skyrocketing of hardware speed and plummeting of hardware prices to avoid large amounts of development cost is certainly a valid path.

Re:Get ready for a new wave of poorly coded softwa (2)

Unknown Lamer (78415) | more than 2 years ago | (#38292860)

It could be argued that the problem lies with hard disks and not the applications. SSDs are nice because you aren't forced to artificially contort data access to fit the slow-seek/fast-linear-throughput model of magnetic hard disks. Removing an arbitrary restriction on program style is a good thing.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38292862)

Already I've already seem some development companies demo financial software on striped SSD's as if that's what everyone runs these days.

Have you seen prices in the financial industry? A Bloomberg terminal costs thousands per month. You can pay $50,000 or more per year for access to some data feeds.

Spec'ing a "disk" that costs $5000 is no big deal.

Unless you are talking about software targeted at individual people that are not rich... The cost of the hardware is miniscule compared to the cost of the access to the data.

Re:Get ready for a new wave of poorly coded softwa (2)

Charliemopps (1157495) | more than 2 years ago | (#38292968)

This is nothing new. There was a time that I vividly remember in which memory cost over $200 a meg (and it cost even more before that.) A single line of redundant code was considered a sin. The price of memory and hard drive space came down and now software is more bloated as programmers focus on other things like security and usability. Is that bad? Yes and no. Like all things, the effect of an improvement in something is many fold. There are positives and negatives... we just hope it's mostly positive.

Re:Get ready for a new wave of poorly coded softwa (1)

geekoid (135745) | more than 2 years ago | (#38293066)

"Is that bad?"
Yes.

The fact that people in the software industry think bloat, security and usability are separate, independent things and not related is horrible thinking. People keep doing designs like they aren't related to each other. This is sloppy engineering.

Re:Get ready for a new wave of poorly coded softwa (1)

Kjella (173770) | more than 2 years ago | (#38293118)

On the other end of the scale you have this [thedailywtf.com] . Why solve a problem the really, really hard way?

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38294606)

If it's a one-off, and the upgrade route actually does only cost $50 for more memory, yeah, sure, add the RAM. The problem with the Wilhelm anecdote is that nobody sane would have overlooked that simple detail, unless it's one really stupid, deaf-dumb-and-blind company.

If your software is being deployed to thousands of machines, which would each require an upgrade, you'd better optimize the crap out of it.

If your software is being deployed to a group of servers, where an 8% reduction in CPU and memory usage saves you from adding another $15k node and possibly other infrastructure upgrades, you should spend up to $30k in man-hours optimizing that thing. Scalability today avoids a lot of headaches tomorrow.

Re:Get ready for a new wave of poorly coded softwa (2)

ccguy (1116865) | more than 2 years ago | (#38293154)

Already I've already seem some development companies demo financial software on striped SSD's as if that's what everyone runs these days.

I think it's a fair assumption if you are selling financial software to a financial company that they will buy a SSD if that's a requirement. Just because developers aren't optimizing for a small footprint these days it doesn't mean there's no optimization being done. It just means that they optimize for something else (development cost, feature set, or whatever their business plan says is most important).

By the way when you see a computer game demo these days, do you think "These guys are on crack if they think everyone's got one of those cards?", or "With these recommended specs what is this written in, VisualBasic?" ?

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38294666)

By the way when you see a computer game demo these days, do you think "These guys are on crack if they think everyone's got one of those cards?", or "With these recommended specs what is this written in, VisualBasic?" ?

Sometimes, yes. There's one game in particular, that I play a fair bit. It has roughly Xbox (1) era graphics, yet manages to stutter on my balls-to-the-wall quad-SLI GTX 295 system. Even just one of those GPUs should be able to run the stupid game at 300 fps, but the engine is such a mess that it somehow wastes all that time doing nothing useful. I'm also going to mention that the entire game is only 2.5 gigabytes, yet takes about a minute to load one level. With my absurd hardware, I can copy the entire folder in about 5 seconds... So yes, this thing is an insult to Visual Basic. I could write this game in Perl and it would still run ten times faster.

Re:Get ready for a new wave of poorly coded softwa (1)

GreatBunzinni (642500) | more than 2 years ago | (#38293470)

I'm worried what the future is going to hold when the average desktop comes with an SSD drive. Already I've already seem some development companies demo financial software on striped SSD's as if that's what everyone runs these days.

By any chance do those companies also sell the hardware where their software is supposed to run at peak efficiency? And do those companies also sell support contracts to maintain and tweak those systems? Because if they do then I bet dollars to doughnuts that their main motivation isn't technical perfection but the maximization of their company's bottom line.

Re:Get ready for a new wave of poorly coded softwa (1)

billcopc (196330) | more than 2 years ago | (#38293780)

This is what I thought when I bought a Mac, not long ago. My initial impression was that this overpowered laptop was running a lot slower than it should. I don't know what OSX does under the hood, but goddamn that thing was slow. Then I threw one of those hybrid drives at it, with the 4GB of SSD cache, and now it feels "normal". Maybe it's because I'm spoiled by the truly ridiculous SSDs in my PC, or maybe Mac users are accustomed to things being a bit laggy, but to me it screamed of excess background I/O.

It does cut both ways though. An SSD can hide excessive seeking, but it can also underline scalability issues. I have one particular tool that fails to maximize the SSD's throughput, which is quite shameful considering it's a video file (de)muxer. It peaks at about 60MB/sec, regardless of the I/O subsystem, whether it's a spinning disk, a Velodrive, or a freaking ramdisk. It is neither CPU nor I/O bound, but there's something fishy in the loop that causes it to idle a lot. For all I know, the guy may have inserted a sleep() in there for some random reason. And no, I haven't found the time to look at the code yet.

I see the SSD as one tool in the arsenal. It has its pros and cons, and is not a magic "solve everything" device. It certainly cannot entirely make up for developer ignorance.

Just move up the "value" chain (1)

Idou (572394) | more than 2 years ago | (#38294374)

Even though it sounds like "market speak," there are lots of hard problems out there that require efficient coding skills (DNA sequencing/analysis, anyone?). Your efficient coding will just shine that much more now.

Re:Get ready for a new wave of poorly coded softwa (1)

19thNervousBreakdown (768619) | more than 2 years ago | (#38295098)

How do you know those are bad applications? Is optimizing for rotational media really the measure of a well-written program? What about programs that run well on spinning disks, but destroy SSDs? What about when SSDs are our primary storage medium, or arrays of squid with eidetic memories and piezoelectric tentacles? Should every program be optimized for rotational media, SSDs, and EMPTS? Or is rotational media the end-all-be-all of a program's optimal storage environment?

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38295562)

Gotta disagree.

Yes, software needs to be TESTED on typical hardware configurations and, if necessary, tuned to run efficiently on those configurations.

That does not mean that software must be DEVELOPED on typical hardware configurations. The act of developing software is NOT typical use of typical hardware. A bleeding edge hardware configuration for me as a developer more than pays for itself well within the lifetime of that hardware configuration. For example, on my current project, I figure my third 22 inch monitor paid for itself in 6 months. Three monitors is definitely not a typical hardware configuration for most computer users. But I expect it will pay for itself 5 times over during its lifetime. Sounds like a good investment to me.

Re:Get ready for a new wave of poorly coded softwa (0)

Anonymous Coward | more than 2 years ago | (#38296298)

While we're at it, let's not make safer cars, because then people will take more risks driving them.

That argument has been placed countless times. It's complete rubbish.

"Next year" is three weeks away... (0)

Anonymous Coward | more than 2 years ago | (#38292692)

Yay!

Oblig XKCD (0, Offtopic)

Anonymous Coward | more than 2 years ago | (#38292726)

Rectum Storage = teh win! (-1)

Anonymous Coward | more than 2 years ago | (#38292736)

Finally, I can upgrade the storage bearing capacity in my rectum!

Blah, Blah (1)

captinkid (1224428) | more than 2 years ago | (#38292744)

Let me know when the real SLC SSD's are out, not this half arsed slow and unreliable "MLC" crap.

Re:Blah, Blah (0)

Anonymous Coward | more than 2 years ago | (#38292804)

Sounds like somebody needs a little TLC...

Re:Blah, Blah (0)

Anonymous Coward | more than 2 years ago | (#38292870)

Let me know when the real CHEAP SLC SSD's are out, not this half arsed slow and unreliable "MLC" crap.

Here, fixed that for you. Btw, all Intel's X-E(xtreme) series are SLC.

Re:Blah, Blah (2)

Microlith (54737) | more than 2 years ago | (#38292890)

They're available. They just cost significantly more and are way lower density. Search the Micron P300/P320.

Re:Blah, Blah (1)

citizenr (871508) | more than 2 years ago | (#38294520)

Let me know when the real SLC SSD's are out, not this half arsed slow and unreliable "MLC" crap.

Manufacturers have got something better just for you, it's called TLC!

wont (0)

Anonymous Coward | more than 2 years ago | (#38292964)

I suppose some call that language evolving. I call it ignorance.

A question about flash and SSDs (3, Interesting)

Anonymous Coward | more than 2 years ago | (#38293808)

A lot of the tablets, etc. are coming out with eMMC type flash instead of raw flash for internal nonvolatile memory. How come?

I would think eMMC would be more expensive (has a built-in controller) than raw flash chips. And slower, too, because eMMC has no concept of file systems and therefore cannot do optimal space selection or wear-levelling. I'm sure the teeny, tiny controller in the eMMC does the best that it can, but I'm also sure that JFFS2 and YAFFS manage flash chips a lot better. The only savings I see is that the device manufacturer has to lay out and route fewer traces on a circuit board when using eMMC.

Does anyone really know why eMMC is being used?

Re:A question about flash and SSDs (1)

tlhIngan (30335) | more than 2 years ago | (#38294170)

A lot of the tablets, etc. are coming out with eMMC type flash instead of raw flash for internal nonvolatile memory. How come?

I would think eMMC would be more expensive (has a built-in controller) than raw flash chips. And slower, too, because eMMC has no concept of file systems and therefore cannot do optimal space selection or wear-levelling. I'm sure the teeny, tiny controller in the eMMC does the best that it can, but I'm also sure that JFFS2 and YAFFS manage flash chips a lot better. The only savings I see is that the device manufacturer has to lay out and route fewer traces on a circuit board when using eMMC.

Does anyone really know why eMMC is being used?

Many reasons.

First, managing the mapping between filesystem blocks and physical flash blocks is the job of the Flash Translation Layer (FTL). An FTL is an extremely heavily patented piece of software used to manage the logical-to-physical mappings, wear levelling, dirty page tracking, etc.

And yes, it's extremely heavily patented. A good FTL can provide significant speed improvements and longevity compared to a crappy one.

Second - eMMC makes development easy. eMMC is just MMC in a chip formfactor. It only requires a few lines (16 at most, compared to 24 or so for NAND), so it's easy to wire up on a PCB. And for early development, you won't believe how good it is to not populate the eMMC part, and stick in an SD/MMC socket instead. Then you can simply use a common SD card to store your filesystems and kernel/bootloader. Especially handy for Android where installing Android on an SD can be done in minutes, while doing it via JTAG or serial or USB download can take forever and take many more steps.

As for cost - eMMC may or may not cost more than raw NAND. Samsung/Toshiba/etc ship so many eMMC chips that the cost of the extra logic may be only a penny or so more. And if you ship enough stuff, they'll give killer discounts.

Finally, it also allows for easy componentization without extra work. Building one piece of software to handle 16 GB of NAND is easy. If you add a 32GB option, you have to modify the software to handle it. And if you switch flash vendors or the layout (page/block size) changes, you have to re-do the flash layer again with new timings and layout.

With eMMC, building a 32GB unit costs almost nothing in the software since it doesn't care about the lowlevel details. Just replace the 16GB part with a 32GB part, test, and ship. And if you switch eMMC vendors, same deal.
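The logical-to-physical remapping an FTL performs, as described above, can be sketched in a few lines. This toy version (my illustration, nothing like a production FTL) shows why rewrites naturally spread wear: every update is programmed into a fresh page, and the old copy becomes garbage to be erased later.

```python
class ToyFTL:
    """Grossly simplified flash translation layer: logical block -> physical page."""

    def __init__(self, num_pages):
        self.mapping = {}                  # logical block address -> physical page
        self.free = list(range(num_pages)) # pages available for programming
        self.garbage = []                  # stale pages awaiting erase

    def write(self, lba, data):
        page = self.free.pop(0)            # always program a fresh page
        if lba in self.mapping:
            self.garbage.append(self.mapping[lba])  # old copy is now stale
        self.mapping[lba] = page
        # a real FTL would program `data` into `page` and track per-block wear

ftl = ToyFTL(num_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")  # rewrite: consumes page 1, page 0 becomes garbage
print(ftl.mapping, ftl.garbage)  # {0: 1} [0]
```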

Re:A question about flash and SSDs (0)

Anonymous Coward | more than 2 years ago | (#38294950)

AFAIK, JFFS2 is a journalling, log-structured file system that gets around the FTL patents by not doing mapping, or at least not doing mapping in a way that runs afoul of the patents.

I do embedded systems programming and I find that JFFS2 is typically a lot faster than eMMC or MMC flash. I find JFFS2 makes dealing with raw NAND chips no more difficult than any other scheme. Ie. the amount of s/w development work involved when swapping NAND flash is comparable with swapping MMC "class" cards - it's all pretty much automatic in the lower levels of the drivers.

Maybe there are some security features or other features in eMMC?

Re:A question about flash and SSDs (5, Informative)

Xygon (578778) | more than 2 years ago | (#38294180)

Speaking as someone in the NAND industry...

NAND does not have its own reliability controls on-die. Items such as wear-leveling, file management, and ECC mechanisms need to be handled somewhere. So the options are: in software, which would then need to be validated and designed for each NAND manufacturer, die, and process, and would consume CPU and battery power from the tablet OS; or via a separate off-die controller.

And as to the choice of eMMC, it's a cost/performance/reliability trade-off. eMMC is relatively inexpensive (very small die), and includes all of the aforementioned reliability mechanisms at a low-power, and low-cost method, in an I/O language supported by most mobile architectures (SD/MMC). However, it severely lacks in relative performance to an SSD. The other option is an optimized SSD controller, which may cost many times more, but has much higher performance. The problem is how to include a $100 SSD in a $100-200 tablet BOM... impossible.

Suck it Trebek! (-1)

Anonymous Coward | more than 2 years ago | (#38296488)

iPad 4 here I come!
