
CPU Competition Heating Up In 2012?

Unknown Lamer posted about 2 years ago | from the lots-of-fancy-devices dept.

AMD

jd writes "2012 promises to be a fun year for hardware geeks, with three new 'Aptiv-class' MIPS64 cores being circulated in soft form, a quad-core ARM A15, a Samsung ARM A9 variant, a seriously beefed-up 8-core Intel Itanium and AMD's mobile processors. There's a mix here of chips actually out, ready to be put on silicon, and in the last stages of development. Obviously these are for different users (mobile CPUs don't generally fight for market share with Itanium dragsters) but it is still fascinating to see the differences in approach and the different visions of what is important in a modern CPU. Combine this with the news reported earlier on DDR4, and this promises to be a fun year with many new machines likely to appear that are radically different from the last generation. Which leaves just one question — which Linux architecture will be fully updated first?"

100 comments


We want our 8-core i7 desktop CPU now!! (1)

Taco Cowboy (5327) | about 2 years ago | (#40024709)

Intel has been dragging its feet on releasing an 8-core i7 desktop CPU to users.

It took merely 2 years to upgrade from a uni-processor machine to a 4-core CPU, and then it stopped.

It has been 10 years and counting, and there is still no 8-core i7 desktop CPU.

Evolutionary! (4, Insightful)

Sponge Bath (413667) | about 2 years ago | (#40015743)

Evolutionary upgrades to Intel processors and memory standards, Itanium is not dead yet, AMD still can't keep up, and ARM rules low-power applications. Yes, it will be a landmark year for processors.

Re:Evolutionary! (4, Funny)

TheRaven64 (641858) | about 2 years ago | (#40015973)

I think the news is that MIPS is not dead, it's just pining for the fjords.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40017239)

is not dead, it's just pining for the fjords.

That would be the Norwegian Rangifer tarandus tarandus. ;)

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40038175)

Congratulations on winning Slashdot Comment of the Day #4! (reading that back to myself I can see it looks like spam, but it's not I promise, heh)

http://www.dailycircuitry.com/2012/05/slashdot-comment-of-day-4.html

Re:Evolutionary! (5, Interesting)

QuantumRiff (120817) | about 2 years ago | (#40016003)

Why do people keep saying AMD can't keep up? because they don't compete in a market you care about?

My wife's laptop has an AMD E-350. It's got an ATI video card built onto the CPU, and it sucks down a whopping 9 watts, making her super-light 10.6" laptop last about 7 hours. 4GB of RAM, 500GB hard drive, can stream HD video without a hiccup, and it was $350, about what you would pay for a nice video card. I would say AMD is competing rather well.

In the server space, we're ditching Intel as fast as we can, because for our loads a 16-core Opteron runs Oracle at the same speed as a 12-core Intel (CPU usage is not our limiting factor; disk IO is, for our databases), and the difference in price last time we looked was about $7k for a Dell R815 spec'd the same as a Dell R810 with dual CPUs. That difference is a Fusion-io card, or almost another tray of drives, which would really help IO.

Re:Evolutionary! (4, Interesting)

msobkow (48369) | about 2 years ago | (#40016137)

Unfortunately most tests aren't covering anything business related like calculating join tables and processing large volumes of relational data. Instead, they report on things business could care less about, like the time it takes to transcode a video file or its ability to render videogame graphics.

The simple truth is that there are very few CPUs currently on the market which aren't perfectly capable of handling business application processing like document editing in a very acceptable fashion. In fact, the issue with even the "slow" CPUs is the time it takes to load and initialize an application, not its responsiveness once the application is loaded. That would seem to be more a question of storage bandwidth than of processor horsepower, but reviewers still blame the CPU for the performance.

For that matter, even the video playback reviews are kind of pointless. Once you have enough snort to render video without dropping frames or tearing, any extra power is pretty much pointless for video processing. While you can start turning on options in the video pipeline, the truth is the effects of those options are virtually unnoticeable unless you use a super-high resolution screen to display expanded video.

I think Windows RT is going to wake up a significant portion of the population to the benefits of low-power ARM processors in the real world.

The business market requirements are not the same as the general gaming/video market's requirements.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40023793)

business couldn't care less about

There, fixed that for you.

Re:Evolutionary! (2)

mhajicek (1582795) | about 2 years ago | (#40028665)

In the CAD/CAM world we're still aching for faster single-thread execution. Adding cores is a nice side benefit, meaning I can surf with a bajillion tabs open to various forums without slowing down the toolpath verification running on the other screen, but there isn't much I can do right now to make that verification process faster. Some functions simply need to run sequentially, which means I need a faster clock to make any significant improvements.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40016321)

because AMD *can't* keep up. Their chips are slow, and have been for years. Price? You think it's more expensive for Intel to manufacture their processors?

The only reason AMD is still around is because Intel lets them. All Intel needs to do is drop their prices and then who would buy AMD? The E-350 you mention is an interesting product, probably the only one they have. Doubt that it alone would be enough to keep them alive.

AMD is alive and well, but not by virtue of their products, which are clearly inferior to Intel's and have been for a long time.

Re:Evolutionary! (1)

kermidge (2221646) | about a year ago | (#40020613)

Well, it depends. Admittedly, being essentially a luser - surf, email, watch a few vids, read too much, play a few games, and run the occasional vm - I'm part of a very small market share as distinct from any big biz or serious user stuff.

AMD is relevant to me because I could put together a box at prices that I could save up for to let me better do what I wanted. Further, I could move from a Phenom quad-core to a Phenom II hexa-core on the same mobo with only a BIOS update. Also, with the six-core I was able to contribute more work units to World Community Grid in six months than in the previous six years.

For my next build I'll probably use AMD; what I assemble for others will depend on what they need done and what they can afford.

Intel _needs_ AMD to stick around, for engineering-approach competition and to maintain the semblance of an honest market. Intel have no need or desire to lower prices since they're making money hand over fist as it is.

Re:Evolutionary! (1)

reve_etrange (2377702) | about a year ago | (#40023179)

Also, because Intel licenses the x86_64 architecture from AMD. If AMD were going away - and didn't need to keep cross-licensing x86 - they could sink Intel's 64-bit lines. Unless the license terms prevent something like that... but if AMD were dismantled one day, I can imagine a troll getting hold of the IP.

Re:Evolutionary! (3, Interesting)

serviscope_minor (664417) | about 2 years ago | (#40016651)

My wife's laptop has an AMD E-350. It's got an ATI video card built onto the CPU, and it sucks down a whopping 9 watts, making her super-light 10.6" laptop last about 7 hours. 4GB of RAM, 500GB hard drive, can stream HD video without a hiccup, and it was $350, about what you would pay for a nice video card. I would say AMD is competing rather well.

Which laptop is it? There seems to be a distinct lack of super-light laptops of late. My eee 900 at 935g is currently lighter than any netbook on the market and the Asus UX-21 seems (at 1.1kg) to be the joint lightest non-netbook.

What is it? It sounds pretty good and I want one...

In the server space, we're ditching Intel as fast as we can, because for our loads a 16-core Opteron runs Oracle at the same speed as a 12-core Intel

That's what I found a while back, when everyone said Intel was faster: the 12 core 6100s could cram more FLOPS into 1U than the best Intel 4 socket boxes, and at a considerably lower price. Intel have substantially improved their offerings since then (AMD has not by quite so much), but the price of RAM crashed, making the CPUs and system boards a much larger fraction of the cost, increasing the advantage of AMD further.

I actually did the calculation, since I had to budget the 5-year cost, including electricity, cooling and rack space. The AMD systems chew more power, though not by as much as the raw CPU differences suggest, since with RAM maxed out, the RAM is a significant fraction of the power draw.

End result was that the AMD systems were substantially cheaper in compute peak performance per $ and in 5 year compute power per $.
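That kind of comparison is easy to sketch. A minimal 5-year cost model in the spirit of the one described above (the hardware prices, wattages, electricity rate and cooling overhead below are made-up placeholders, not figures from this thread):

```python
# Rough 5-year total cost of ownership: purchase price plus electricity,
# with cooling modeled as a fixed overhead on top of the power bill.
# All inputs are hypothetical placeholders.
def five_year_tco(hw_cost, avg_watts, price_per_kwh=0.12, cooling_overhead=0.5, years=5):
    kwh = avg_watts / 1000.0 * 24 * 365 * years
    return hw_cost + kwh * price_per_kwh * (1 + cooling_overhead)

# A cheaper box that draws more power can still win over 5 years:
amd_box = five_year_tco(hw_cost=12_000, avg_watts=550)
intel_box = five_year_tco(hw_cost=19_000, avg_watts=450)
print(amd_box < intel_box)
```

The point of the sketch: once the purchase-price gap is large enough, a modest extra power draw doesn't close it over the machine's lifetime.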

Actually, the performance is very application dependent. Some codes suck on AMD; on others (rarer) the Phenom II matched an i7 for speed.

Intel still have the single-thread performance crown, which is really useful on the desktop and laptop. If you're buying a 4-socket machine, it's a fair bet that your task is parallelizable, which reduces the advantage of Intel in the 4-socket market.

For single socket stuff that isn't too performance sensitive, AMD has the additional advantage that cheap consumer level boards support the otherwise expensive enterprisey features of ECC memory. If you care about that sort of thing, then getting a Phenom II is much cheaper than a 1 socket Xeon.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40017319)

The OP was bullshitting - there aren't any E-350 machines with a 10.6" screen. Look it up if you want.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40017359)

My wife's laptop has an AMD E-350.

Which laptop is it?

It sounds pretty good and I want one...

Me too. I can't tell you for sure what the model was, but it's got to have been purchased as a coupon/sale item from CostCo. Maybe something from the HP dm1z line.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40017427)

The HP DM1Zs have an 11.6" screen, not the 10.6" the OP was claiming.

Re:Evolutionary! (1)

QuantumRiff (120817) | about 2 years ago | (#40017811)

Answered below. Sorry, it was an AMD C-60, and 11.6 inch, an Acer Aspire One. I don't use it, so I don't pay attention to the specs. I just needed to replace her old, old laptop that was heavy (17") with something small and faster/newer, and with two little kids in the house I am kind of figuring it's going to be amazing if it lasts two years, so dirt cheap is best.

Re:Evolutionary! (2)

serviscope_minor (664417) | about a year ago | (#40018181)

I am kind of figuring it's going to be amazing if it lasts two years, so dirt cheap is best.

My experience is limited to Asus (eee 900), but the interesting thing about those netbooks was that the only thing sacrificed was speed. The build quality was excellent, surprisingly so.

Over the years, I've been kind of smug when others' vastly more expensive laptops have started to develop cracks, bad hinges, failing screens, broken power connectors, etc.

I've used mine a lot: it's about 4 years old and I've done a lot of travelling with it (though less now than I used to), and it's still in excellent condition (with the exception of the right mouse button, but multi-touch is a perfectly fine workaround). The PSU also broke recently. They use a standard connector, so an old Maplin generic supply worked until the £10 replacement arrived.

Actually, curiously, the improvements in Linux power management have been more than matching the age related degradation of the battery, so the battery life is still better now than it was new.

The build was nothing like the super-cheap nasty brick laptops it was contemporary with.

IOW you may be pleasantly surprised.

AMD website lists 84 of them (0)

Anonymous Coward | about 2 years ago | (#40017727)

AMD lists 84 notebooks with E-350 chips starting at $299.00. Here is the link: http://shop.amd.com/us/All/Search?NamedQuery=visione2&SearchFacets=category:Notebook&promoid=vse201

Re:Evolutionary! (2)

QuantumRiff (120817) | about 2 years ago | (#40017729)

Sorry, I lied a bit. It's an 11.6" screen and it has a C-60 CPU, not an E-350. Looking at the specs, it looks like it's half the power and a bit slower than the "E" class, but still perfect for what she does.

There are several models of the Acer Aspire One that carry them; I think the one I picked up was this: http://www.newegg.com/Product/Product.aspx?Item=N82E16834215172 [newegg.com]
I picked hers up at Costco for $350.

Re:Evolutionary! (1)

Rudeboy777 (214749) | about a year ago | (#40018089)

For single socket stuff that isn't too performance sensitive, AMD has the additional advantage that cheap consumer level boards support the otherwise expensive enterprisey features of ECC memory. If you care about that sort of thing, then getting a Phenom II is much cheaper than a 1 socket Xeon.

Phenom II is almost completely gone from the channel now, have you run the numbers or done any performance testing with the AMD FX- line?

Our JBoss/MySQL-based app server is one application where Phenom II indeed performed as well as or better than an i7. Testing on a new 8-core FX system we just got in showed a much bigger improvement over Phenom II than I expected.

Re:Evolutionary! (1)

Anonymous Coward | about a year ago | (#40022521)

End result was that the AMD systems were substantially cheaper in compute peak performance per $ and in 5 year compute power per $.

Actually, the performance is very application dependent. Some codes suck on AMD; on others (rarer) the Phenom II matched an i7 for speed.

Intel still have the single-thread performance crown, which is really useful on the desktop and laptop. If you're buying a 4-socket machine, it's a fair bet that your task is parallelizable, which reduces the advantage of Intel in the 4-socket market.

I bolded the important bits. Not all parallel applications are created equal. There are lots which scale well to a small number of cores, but see diminishing returns as the core count goes up. Algorithms which are "embarrassingly parallel" (that is, they scale effortlessly to very high core counts) are the exception rather than the rule. So, the advantage of the AMD approach depends a very great deal on the type of code you're running. As you found, that approach (lots of weak cores) kinda sucks for many things. (If it didn't, Sun would've done a lot better with their Niagara CPU family, which was even more about cranking up the core and thread count without caring that each thread was slow.)
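The diminishing returns described above are just Amdahl's law: if a fraction s of the work is inherently serial, n cores can never deliver more than a 1/s speedup. A quick sketch (the 10% serial fraction is an arbitrary example, not a measured workload):

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup for a workload whose serial fraction can't be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With 10% serial work, piling on cores flattens out fast...
for n in (2, 4, 16, 64):
    print(f"{n:3d} cores -> {amdahl_speedup(0.10, n):.2f}x")
# ...and no core count can ever beat 1 / 0.10 = 10x.
```

Which is exactly why "lots of weak cores" only pays off for the embarrassingly parallel minority of workloads.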

For single socket stuff that isn't too performance sensitive, AMD has the additional advantage that cheap consumer level boards support the otherwise expensive enterprisey features of ECC memory. If you care about that sort of thing, then getting a Phenom II is much cheaper than a 1 socket Xeon.

As of Sandy Bridge Intel is finally dabbling with enabling ECC in a low-end non-Xeon chip:

http://ark.intel.com/products/61272/

Hopefully they'll start expanding the options, as this one is an odd combination not suitable for everyone (low clock speed and no turbo, but hyperthreading is enabled, and TDP is only 15W). If you use the advanced search features on ark.intel.com you can find many other ECC enabled Sandy Bridge Pentiums, Celerons, i3s, etc., but they're all BGA package "embedded" parts not suitable for building your own system. Right now the Pentium 350 is the only socketed non-Xeon with ECC.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40017145)

I'm a little surprised actually.

We, like many/most, pay for Oracle's DBMS on a per-core basis.

For those who don't know, an Oracle database 'license unit' costs about the same as a basic, commodity, 2-4 socket server. Seriously. You need 0.5 OLUs per x86 core, whether it's Intel, AMD or (conceivably) VIA.

The annual support cost per OLU is about the same as the depreciation and hardware support costs of such a server too.

Hence, we've found that fewer, faster Intel cores end up saving us a bucketload of cash for the same or better performance.

TBH the CPU costs from either camp dwindle to relative insignificance once you put the cost of the rest of a decent machine into the equation. I'm thinking here about the chassis/mainboards, 256GB RAM, 2x 10GbE NICs, 4x 8Gb HBAs, etc. and your storage subsystem of choice. Most of these things actually are the same, or at least cost about the same, regardless of the Intel/AMD CPU choice.

Intel have a current per-core, per-clock performance advantage of arguably ~50%, and offer SKUs with fewer cores than the comparably-performing Opterons. It's also worth remembering that all code has serial sections that benefit from faster single cores - a la Amdahl.

So if the hardware costs pretty much the same, and you're paying way more than that for the OLUs, then getting the fewest, fastest cores available commensurate with your workload simply makes the most sense.
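The arithmetic behind that argument is simple to sketch. A toy per-core licensing model (the price per license is a hypothetical placeholder, not an actual Oracle quote; the 0.5-license-per-x86-core factor is the one described above):

```python
# Toy model: database licensing is billed per core, so extra cores cost
# real money even when the hardware itself is cheap. Price is hypothetical.
def license_cost(cores, licenses_per_core=0.5, price_per_license=47_500):
    return cores * licenses_per_core * price_per_license

fast_12_core = license_cost(12)     # fewer, faster cores
slow_16_core = license_cost(16)     # more, slower cores
print(slow_16_core - fast_12_core)  # licensing premium for the 4 extra cores
```

At per-license prices anywhere near a commodity server's cost, the premium for 4 extra cores dwarfs any hardware saving, which is the whole point of the parent's argument.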

When AMD had the performance edge, I argued for them too. I would again, if they ever reclaim it. Right now though, Xeon rules.

Which would you choose if it was your money?

Re:Evolutionary! (1)

QuantumRiff (120817) | about 2 years ago | (#40017757)

If I remember right, Oracle Enterprise Edition and Standard Edition have much different pricing on how the CPUs are counted (but Standard is locked to two physical processors). I'm not our DBA, and don't ever have to deal with Oracle or the contracts, so I might be wrong on that.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40017183)

Hmm, Sandy Bridge does 50% better than Bulldozer with half the number of cores on TPC-E, and 40% better on SAP with half the number of cores.

Also don't those 4 extra cores run you up about 20 grand with the cheapest Oracle license, or closer to 200 grand for the base enterprise license?

Re:Evolutionary! (1)

DarthVain (724186) | about a year ago | (#40019395)

I have heard AMD is very competitive in the server market. I doubt anyone would make an argument otherwise.

Typically when we talk about this stuff, we are all talking about "Consumer Products"...

That does not include embedded. Many of these low power chips are one degree separation from embedded chips.

I would not call some of these devices a "computer", strictly speaking, insofar as modern computing devices are concerned.

Is an iPhone a computer? How about a Tablet? How about an ultralight laptop with a low power cpu?

Limited functionality and slow might be acceptable or OK for some, but it is apples and oranges.

It's like saying: well, I can make do with my old 286, it does everything I want it to do, it has totally kept up with that Intel 2500K CPU, I don't need anything more.

Well, good for you, but that would be a stupid argument to make.

So to be clear (as a lot of this is just assumed knowledge): sure, AMD has kept up in certain segments, but in the mainstream-and-up consumer markets they have not kept up for many, many years.

Hell, VIA or ARM could be said to have kept up in specific segments, but am I going to buy a VIA mainstream CPU? No. The last one I recall off the top of my head was the C3, which was a flop.

Re:Evolutionary! (0)

Anonymous Coward | about a year ago | (#40021149)

In the server space, we're ditching Intel as fast as we can, because for our loads a 16-core Opteron runs Oracle at the same speed as a 12-core Intel (CPU usage is not our limiting factor; disk IO is, for our databases), and the difference in price last time we looked was about $7k for a Dell R815 spec'd the same as a Dell R810 with dual CPUs. That difference is a Fusion-io card, or almost another tray of drives, which would really help IO.

TCO, dude, TCO. The increased power consumption and cooling over the server's lifetime probably screwed you out of a good chunk of that savings.

Re:Evolutionary! (1)

KingMotley (944240) | about a year ago | (#40021695)

One major difference is that the R815 comes with a crap service plan, vs. the $1300 plan the R810 comes standard with. And if I/O is your bottleneck, shouldn't you be considering an R820 anyhow, with its PCIe Gen 3 interfaces and double the drive bays? Of course this will add to the cost, but there is no AMD alternative.

Re:Evolutionary! (0)

Anonymous Coward | about 2 years ago | (#40016065)

... Yes, it will be a landmark year for processors.

Landmark? Are you serious?

Re:Evolutionary! (1)

na1led (1030470) | about a year ago | (#40018065)

It's going to mean more fragmented support. You buy the latest and greatest just to find that your hardware/software doesn't support it. Developers and manufacturers are already having a difficult time supporting so many different platforms.

Too many other bottlenecks (5, Insightful)

na1led (1030470) | about 2 years ago | (#40015753)

A fast CPU and RAM are great, but we are still limited by slow, crappy hard drives (SSDs are too expensive) and OSes/software that don't take advantage of current technology, let alone the next generation.

Small SSDs are cheaper (5, Insightful)

tepples (727027) | about 2 years ago | (#40015847)

slow, crappy hard drives (SSDs are too expensive)

SSDs aren't too expensive if you don't need to keep your library of videos available at a moment's notice at all times. There exist affordable SSDs that are big enough to hold an operating system, applications, and whatever documents you happen to be working on at a given time.

Re:Small SSDs are cheaper (2)

Pewpdaddy (1364159) | about 2 years ago | (#40015997)

SSDs can currently be had easily at $1 a gig, and even less in some cases. That being said, a 120GB drive is more than enough for your OS and the applications/games you run. Edit the registry to load your profile off a secondary drive and voilà: SSD hawtness... I saw a marginal increase when I moved to my SSD. I'm a gamer so that's my benchmark mostly; SWTOR, for instance, loaded a good bit faster.

Re:Small SSDs are cheaper (1)

Anonymous Coward | about 2 years ago | (#40017739)

"Edit the registry to load your profile off an secondary drive and viola."

I have a similar setup (128GB SSD). You make it sound easy. I tried several ways to move everything over to a hard drive so that nothing user-related was stored on the system/boot SSD. I tried hard links, fiddling with the registry, changing environment variables, but in the end I gave up and kept the stub of my user directory on C: where Windows seemed to want it, and moved all the individual directories (Documents, Music, Pictures, etc.) using the "right-click and use Location tab" approach. Microsoft does NOT make it easy to do this elegantly for all users. Everything I tried either didn't work in the end (e.g., trying to do it via a hard link for the whole Users directory), or was a horrible kludge that left dribbles of user profile stuff on C:, or it didn't recognize things had moved when you try to create new users, or similar "close but not quite" breakage. Either I'm dumb, or you found a magical way to do it easily and effectively. Do share.

Re:Small SSDs are cheaper (1)

jjjhs (2009156) | about a year ago | (#40018159)

I have a hard drive, it can last indefinitely without doing all this nonsense to reduce wear.

Re:Small SSDs are cheaper (2)

wbo (1172247) | about a year ago | (#40018685)

I have a similar setup (128GB SSD). You make it sound easy. I tried several ways to move everything over to a hard drive so that nothing user-related was stored on the system/boot SSD. I tried hard links, fiddling with the registry, changing environment variables, but in the end I gave up and kept the stub of my user directory on C: where Windows seemed to want it, and moved all the individual directories (Documents, Music, Pictures, etc.) using the "right-click and use Location tab" approach. Microsoft does NOT make it easy to do this elegantly for all users. Everything I tried either didn't work in the end (e.g., trying to do it via a hard link for the whole Users directory), or was a horrible kludge that left dribbles of user profile stuff on C:, or it didn't recognize things had moved when you try to create new users, or similar "close but not quite" breakage. Either I'm dumb, or you found a magical way to do it easily and effectively. Do share.

It is actually fairly easy to configure Windows to put user profiles for all users on a different drive than the boot (system) drive; however, it must be done when first installing Windows.

As far as I am aware Microsoft does not officially support moving user profiles after the OS is installed. There are ways to do it, but as you found they all have their problems. By doing it during the OS install pretty much all of the problems are avoided.

The option to change the location for user profiles is not exposed through the graphical installer for Windows but it can be configured by using an Unattend.xml file by setting the FolderLocations key to point to the drive you want the data to reside on. You can either create the file manually or use the free Windows AIK to do it.

You can use the FolderLocations key to move several key folders and one of which is the User profile folder.

If you have never installed Windows using the unattended setup before you may want to experiment using a VM before doing it on real hardware just to make sure you have your Unattend.xml file doing what you want it to.

The only problem with this is some of the junctions created on the system drive will continue to point to the wrong location. You can find more information on this in MS KB929831 [microsoft.com]. An easy way to fix this is to create a junction at c:\Users pointing to the new user profile location. (For instance on one of my systems I have a junction at C:\Users pointing to F:\Users). This neatly fixes this issue and as a bonus fixes any applications that have hard-coded paths for user profiles.

I have been running my home system like this for the past 2 years and have not encountered any problems.
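For reference, a minimal sketch of the kind of Unattend.xml fragment being described (the F:\Users target and the amd64 architecture are example values; check the Windows AIK documentation for the exact component attributes your image needs):

```xml
<settings pass="oobeSystem">
  <component name="Microsoft-Windows-Shell-Setup"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral"
             versionScope="nonSxS">
    <FolderLocations>
      <!-- Example target drive; user profiles land here instead of C:\Users -->
      <ProfilesDirectory>F:\Users</ProfilesDirectory>
    </FolderLocations>
  </component>
</settings>
```

Because setup creates the profiles there from the start, new users created later also land on the target drive, which is what the post-install hacks fail to achieve.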

Re:Small SSDs are cheaper (1)

darrylo (97569) | about a year ago | (#40019095)

I've moved user directories after installation using these basic instructions [lifehacker.com], without having to resort to installation foo. I've actually done this 3-4 times over the past year, due to stupidity on my part trashing my system drive (and not having any backups, which I now do have). I've never seen any junction issues, but that's probably because I have c:\users\spoo pointing to d:\users\spoo (c:\users still exists and is valid).

Re:Small SSDs are cheaper (1)

Anonymous Coward | about 2 years ago | (#40016015)

Indeed, you can get a reasonably big SSD (big enough for a normal single-OS work laptop) for less than $200, and if you shell out $400, you'll be over 256GB.

When the cost of labor in western countries is what it is, an SSD is an investment well worth the money, with payback time measured in months even if it only saves a mere 5 minutes per day. Oh, it also acts as nice extra protection against shocks (I'm probably not the only one who's online with a toddler on my lap).

Re:Small SSDs are cheaper (1)

AngryDeuce (2205124) | about 2 years ago | (#40016265)

Exactly. Most people buying SSDs are using them to store OS and regularly opened programs, but have standard HDDs to store the stuff that doesn't "need" the added performance. Media is a perfect example of something that is silly to put on a SSD (unless you're actually editing said media, not merely consuming it).

The bulk of the data being generated by most people today does not need to be stored on SSDs, really, nor should it be. It's the equivalent of buying a Ferrari to use as your daily driver.

Re:Small SSDs are cheaper (1)

tlhIngan (30335) | about 2 years ago | (#40017555)

AngryDeuce (2205124) | about 2 years ago | (#40016265)

Exactly. Most people buying SSDs are using them to store OS and regularly opened programs, but have standard HDDs to store the stuff that doesn't "need" the added performance. Media is a perfect example of something that is silly to put on a SSD (unless you're actually editing said media, not merely consuming it).

Even editing media doesn't require SSDs - media is huge, and even when buffering a "small" amount like 1MB the seek-time advantage of an SSD is minimal. Sure, the transfer rate is impressive (just under 200MB/sec reads if you go "conservative" instead of "cutting edge" and thus avoid most of the firmware issues), but RAID 0 is actually plenty fast for it (and "good enough", considering you need terabytes of storage more than a single fast drive feeding the entire chain).

An SSD's benefit is near-zero seek time - hard drives are basically stuck at 100-200 seeks/second, while an SSD can easily increase that by 2 orders of magnitude (10-20,000 IOPS is common). It makes little seeks so much faster (like when the OS is paging or loading executable images/resources from disk, where you're doing little 4K reads rather than massive 1MB reads).

Hell, IT upgraded my laptop with a spare SSD (it was destined for someone else, but their laptop had a 1.8" drive), and even though it's 2007-era "Vista Ready", the darn thing flies.
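The seek-rate gap above is easy to put in time terms (the IOPS figures are the rough ones quoted in the thread, not measurements, and the 5,000-block app launch is a hypothetical workload):

```python
def load_time_seconds(small_reads, iops):
    """Time to service a burst of small random reads at a given IOPS rate."""
    return small_reads / iops

# A hypothetical app launch touching 5,000 scattered 4K blocks:
hdd_secs = load_time_seconds(5_000, 150)     # ~100-200 seeks/s spinning disk
ssd_secs = load_time_seconds(5_000, 15_000)  # ~10-20k IOPS SSD
print(round(hdd_secs, 1), round(ssd_secs, 2))
```

Tens of seconds versus a fraction of a second, which matches the everyday experience that old laptops "fly" after an SSD swap.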

Re:Small SSDs are cheaper (1)

Hadlock (143607) | about 2 years ago | (#40017643)

I would really like to see smaller (32GB) SSDs hard-wired to the motherboard in laptops, with the option to add a second spinning drive in the open HD bay. Unfortunately, two hard drive bays are reserved only for the uppermost echelons of business laptops.

Re:Small SSDs are cheaper (1)

toddestan (632714) | about 2 years ago | (#40036929)

You could always buy one of the laptops that allows you to remove the optical drive and put a hard drive in its place.

Re:Small SSDs are cheaper (1)

Hadlock (143607) | about 2 years ago | (#40037795)

I'm relatively sure that feature is only found in business class laptops, as well.

Re:Small SSDs are cheaper (0)

Anonymous Coward | about 2 years ago | (#40016995)

There exist affordable SDRAM chips that can do that as well. The problem is an OS which is incapable of taking full advantage of them.

Re:Small SSDs are cheaper (1)

sociocapitalist (2471722) | about a year ago | (#40018767)

Yep - I've got a 512GB hybrid Seagate (32GB of which is SSD) in one bay and a 1TB Western Digital in what used to be my optical bay (MacBook Pro), giving me plenty of storage for not much money at all, relatively speaking, and the 32GB does seem to help with performance, especially for what I run often.

Re:Small SSDs are cheaper (0)

Anonymous Coward | about a year ago | (#40019289)

check out the NEW SSD!

now holds 20% more documents!

Re:Too many other bottlenecks (4, Informative)

AngryDeuce (2205124) | about 2 years ago | (#40016199)

SSD's too expensive

Regular hard drives were just as expensive (if not more so) when they were at a comparable point in their development and life-cycle

Here is an awful-colored chart [ns1758.ca] showing price per MB over the years. It's not so much that SSDs are really that expensive, it's that traditional HDDs have gotten ridiculously cheap, and capacities have grown beyond the storage needs of most average people. I remember actually filling up hard drives and having to buy larger and larger disks to hold my shit every couple years, but the 500 GB WD in my most current build is running at 40% capacity and I've got a lot of media on there.

Re:Too many other bottlenecks (1)

Kjella (173770) | about 2 years ago | (#40023395)

but the 500 GB WD in my most current build is running at 40% capacity and I've got a lot of media on there

No you don't, but I'm finally starting to figure out what the people that lived on campus and had 100 Mbit around y2k was talking about when they said streaming was the future while I was still fighting with 64 kbps ISDN hoping to get a 1 Mbps ADSL line. Or even download and delete, which is a lesser form. Right now there's ~20.000 BluRays on Amazon and there's no reason for me to have a petabyte array to store them on. That's 100 people with a 10 TB server or 1000 people with a 1 TB disk or 10000 people with 100 GB in a corner of their disk, as long as we're all connected by a ridiculously fast Internet. Keeping a local "cache" is just becoming less and less necessary.

Re:Too many other bottlenecks (0)

Anonymous Coward | about 2 years ago | (#40016235)

Reviewers think that it's an option that appeals neither to the budget market nor the high end market, but the Intel Smart Response technology on z68 and other recent boards is a feature I couldn't do without.
 
  I can't be stuffed managing everything to work off a separate HDD or SSD manually, so I put two HDDs in RAID 0 and have 60GB of a 120GB SSD as the Smart Response cache, and everything that I really use learns to boot quickly and run quickly in a cycle or two. The other 60GB on the SSD I use to install virtual machines or games, but to be honest I don't think I'd notice that much if I shifted them to the main drive.
 
So screw the reviewers; just because I can afford a $240 motherboard for overclocking and a $120 SSD does not mean I may as well be using a $500 SSD, or that I can't see the benefits of the system learning what I want accelerated from a large drive array.

Re:Too many other bottlenecks (1)

bky1701 (979071) | about 2 years ago | (#40017569)

"OS's / Software that don't take advantage of current technology"

And bloat. Let's not forget bloat. It says a lot that a modern computer running modern programs and OSes is only slightly more responsive than its 90s equivalent.

Re:Too many other bottlenecks (1)

rrohbeck (944847) | about a year ago | (#40019813)

With the proper software (e.g. Flashcache) you can run a write back cache on your SSD so as long as your file working set is smaller than the SSD everything will be cached.
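For the curious, setting up a Flashcache write-back cache looks roughly like this. The device names are placeholders and the syntax is a sketch from the Flashcache administration guide; check the docs for your version:

```shell
# Create a write-back (-p back) cache device named "cachedev",
# with the SSD fronting the spinning disk.
# /dev/ssd and /dev/disk are placeholder device names.
flashcache_create -p back cachedev /dev/ssd /dev/disk

# The combined device then mounts like any other block device:
mount /dev/mapper/cachedev /mnt/data
```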

All that's great but (2)

Rosco P. Coltrane (209368) | about 2 years ago | (#40015785)

When is the battery problem going to be solved? Yes, I know batteries have been getting better over the years, but devices these days have a hard time staying alive more than 24 hours doing anything useful.

All these wonderful gadgets all end up sucking pond water from the bottom because you need to tether them to a mains socket every few hours...

Never (2)

stooo (2202012) | about 2 years ago | (#40016053)

>> When is the battery problem going to be solved?

Never. How do you want to "solve" that "problem" ?
System power is a design issue, but the current state of the art is not really problematic. Of course, if you want turbo-gaming for 12 hours, it's heavy. But else ....

Re:All that's great but (2)

Idbar (1034346) | about 2 years ago | (#40016115)

Batteries improve at a slower pace. But the more power budget you give to designers, the more features they add and the more power they consume. If people are fine with having a device recharged every 24h, many designers will work with that budget in mind.

Re:All that's great but (2)

msobkow (48369) | about 2 years ago | (#40016189)

And how many people stay up 24 hours at a stretch using a battery powered device?

Sure, I can see the need for longer than a 12-hour lifetime in a few cases, like someone who's "off roading" and can't plug in while they're sleeping, but the vast majority of the population just need it to function while they're awake and charge while they're sleeping.

As I've never seen a battery powered portable device that requires you to shut down in order to plug in the power supply to recharge, and I've never seen one that couldn't charge while still running, your complaint is pointless nitpicking about numbers that have no practical meaning in the real world.

some people still need long life (1)

Chirs (87576) | about a year ago | (#40020905)

My mom's a midwife. Her work replaced her blackberry with an iPhone and she went from multiple days without recharging to under a day. Given that she can be away for 30hrs straight, this meant getting multiple phone chargers (home, car, work, etc) and she basically needs to plug it in whenever she can.

Re:All that's great but (1)

DigiShaman (671371) | about 2 years ago | (#40016527)

The backlight of a modern LCD uses quite a bit of power most of the time. The CPU and the rest of the chipset have gotten pretty efficient at conserving power while mainly in idle usage mode (reading, checking e-mail, composing a word doc, etc). It's only when the GPU (Flash, video, HTML5 heavy stuff) kicks in that your battery life really starts to suffer.

None of this is news BTW. Unless you want something with the energy density of thermite, I suggest getting used to tethering power to your mobile device for extended usage and heavy work loads.

Re:All that's great but (3, Insightful)

serviscope_minor (664417) | about 2 years ago | (#40016947)

Unless you want something with the energy density of thermite

Thermite doesn't have an especially high energy density. See here: http://en.wikipedia.org/wiki/File:Energy_density.svg [wikipedia.org]

Pure aluminium has a moderate energy density. Once you mix in the iron oxide in stoichiometric quantities, the energy density goes down by quite a bit (a factor of 4). That still puts it better than any known battery technology, but only by a factor of 2 over zinc-air and 5 over li-poly. All the common fuels have a much higher energy density.

The reason that thermite burns so hot is that the products of combustion have a fairly low specific heat capacity and there is no need to heat up a huge bunch of useless nitrogen (compared to burning fuel in air).

Bottom line is that thermite beats existing battery tech by a wide margin, but falls very far short of common fuels.
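Those ratios can be sanity-checked from the reaction enthalpy. A rough sketch — the 852 kJ/mol enthalpy and the battery/fuel figures below are textbook approximations, not numbers from the post:

```python
# Energy density of thermite from the reaction enthalpy.
# Fe2O3 + 2Al -> 2Fe + Al2O3, delta-H ~ -852 kJ per mole of Fe2O3.
dH_kj = 852.0                   # kJ released per mole of Fe2O3 reacted
m_fe2o3 = 159.7                 # g/mol
m_al = 26.98                    # g/mol
reactant_mass_g = m_fe2o3 + 2 * m_al

mj_per_kg = dH_kj / reactant_mass_g   # kJ/g is numerically MJ/kg
print(f"thermite: {mj_per_kg:.1f} MJ/kg")   # ~4 MJ/kg

# Rough comparison points (MJ/kg), textbook ballpark values:
li_poly = 0.8       # lithium-polymer battery
zinc_air = 1.6      # zinc-air battery
gasoline = 46.0     # a common fuel, for contrast

print(f"vs li-poly:  {mj_per_kg / li_poly:.0f}x")    # ~5x
print(f"vs zinc-air: {mj_per_kg / zinc_air:.1f}x")   # ~2.5x
print(f"vs gasoline: {mj_per_kg / gasoline:.2f}x")   # well under 1x
```

Which matches the ratios above: thermite comfortably beats batteries but falls an order of magnitude short of hydrocarbon fuels.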

Re:All that's great but (0)

Anonymous Coward | about 2 years ago | (#40017663)

Thanks for reminding me I'm on Slash dot. And that's a good thing.

Re:All that's great but (1)

justthinkit (954982) | about a year ago | (#40018185)

That is a good start, but there is a bit more to it than that.

Fe2O3 + 2Al --> 2Fe + Al2O3 is an "extremely intense exothermic reaction" [wikipedia.org]

Also, aluminum has quite a number of useful properties that enhance the reaction -- "at least 25% oxygen, have high density, low heat of formation, and produce metal with low melting and high boiling point", etc.

FWIW I am not sure how "fairly low specific heat capacity" helps (or even if Fe2O3 & Al have it).

Re:All that's great but (1)

serviscope_minor (664417) | about a year ago | (#40018473)

FWIW I am not sure how "fairly low specific heat capacity" helps (or even if Fe2O3 & Al have it).

The reaction inputs and outputs all have to be raised to the final temperature. If the heat capacity is high, the final temperature will be lower, since more energy is required for each degree of temperature rise.
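A toy calculation makes the point. The heat-capacity values here are purely illustrative placeholders, not real thermochemistry (a real flame-temperature calculation would use temperature-dependent heat capacities and account for phase changes):

```python
# Adiabatic temperature rise: all released energy heats the products,
# so delta-T = Q / (m * c). Lower c (or less mass to heat, e.g. no
# inert nitrogen along for the ride) means a hotter result.
Q = 4.0e6   # J released per kg of reactants (~4 MJ/kg for thermite)

# Illustrative effective heat capacities, J/(kg*K):
c_products_only = 800    # hypothetical: just the condensed products
c_with_nitrogen = 1800   # hypothetical: energy also heating N2 ballast

for c in (c_products_only, c_with_nitrogen):
    print(f"c = {c:4d} J/(kg*K) -> delta-T ~ {Q / c:,.0f} K")
```

Same energy release, very different temperatures, which is the poster's point about why thermite burns so hot despite its modest energy density.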

Locked down (4, Interesting)

tepples (727027) | about 2 years ago | (#40015813)

How many of these CPUs will appear only in devices with cryptographically locked bootloaders? The license agreement for Microsoft's forthcoming Windows RT operating system, for example, explicitly bars device manufacturers from allowing the end user to install a custom signing certificate. And even on devices that do allow homemade kernels, how many devices incorporating these non-x86 CPUs will have driver source (or even proper data sheets) that allow support for all the SoC's features in a freely licensed operating system?

Re:Locked down (1)

L4t3r4lu5 (1216702) | about 2 years ago | (#40017247)

The license agreement for Microsoft's forthcoming Windows RT operating system, for example, explicitly bars device manufacturers from allowing the end user to install a custom signing certificate.

I missed the part where you were forced to buy Microsoft devices, instead of employing a little forward-thinking and buying a device without a locked bootloader.

It's only an issue if you make it one. WinRT will die a death, as long as nobody buys it.

Re:Locked down (0)

Anonymous Coward | about a year ago | (#40019129)

> It's only an issue if you make it one. WinRT will die a death, as long as nobody buys it.

Tell that to Joe Six-pack who is getting a BJ from Barbara Booty Babe - IF he buys the next Windows crapware.

Re:Locked down (0)

Anonymous Coward | about a year ago | (#40019411)

Yeah, that argument works except in the real world, where the vast majority of people don't know anything else but Windows, so when they see a phone or tablet running Windows RT, they think "oh cool, I can run all my things on my tablet now instead of my computer". It isn't until they get home and try to load x86 programs on their ARM tablet that they realize it doesn't work. But by that point, they've already bought the device, and they would have to bring it back to the store and ask for help, or they can suck it up, use all of the Microsoft "technologies" on their tablet (like full-blown MS Office, Internet Explorer, a Facebook, Twitter, and Hotmail app, and that's it). That means they are further dependent on Microsoft, and it gets more and more difficult for anybody else to compete directly with Microsoft, because if YOU don't interoperate then YOU are the problem to them, not Microsoft.

Allowing people to install Android or Ubuntu-ARM on their devices would go a great deal towards making Microsoft actually compete instead of continuously abusing their monopoly to push competitors out of every market they are interested in. The same goes for Apple, but they aren't selling a competing operating system or web browser, they are selling an appliance. Microsoft is not, they are selling software, and they have a much stronger monopoly than people think.

Nope (0)

Anonymous Coward | about a year ago | (#40020563)

WinRT will make the Average Joe very, very angry at MS. Good for everybody else.

First you have to know that unlocked devices exist (2)

tepples (727027) | about 2 years ago | (#40026453)

The license agreement for Microsoft's forthcoming Windows RT operating system, for example

I missed the part where you were forced to buy Microsoft devices

That or you missed the "for example".

instead of employing a little forward-thinking and buying a device without a locked bootloader.

To employ forward-thinking and buy an unlocked device, first you have to know that unlocked devices exist. For example, in the United States market, the most popular handheld gaming devices with physical buttons are the DS series and PSP series. Only hardcore geeks ever mail-order a GP2X product, for example; non-geeks don't even know they exist.

Re:Locked down (1)

rrohbeck (944847) | about a year ago | (#40020065)

There's a very simple solution: Don't buy a device that has a locked-down bootloader that hasn't been cracked yet.

Re:Locked down (1)

tepples (727027) | about a year ago | (#40020149)

The question remains the same: How many of these CPUs will appear only in devices that one should not buy by that metric?

Re:Locked down (1)

jd (1658) | about a year ago | (#40020357)

The MIPS won't, since Microsoft doesn't write for it, so that's 3 of the CPUs. Same for the Itanium, since Microsoft has abandoned that. It's very unlikely Microsoft is developing for both the A9 and A15, so that eliminates half of what's left. Most ARMs won't be running a MS OS, it'll be a minority OS for a long time on that chip. So really only the AMD CPU even has the potential for vendor lock-in by Microsoft.

Microsoft is not the issue (1)

tepples (727027) | about a year ago | (#40020749)

Microsoft's forthcoming Windows RT operating system, for example

The MIPS won't, since Microsoft doesn't write for it

I wasn't referring to Windows RT as the only example of a locked bootloader. For MIPS, I'd be more inclined to use the examples of PlayStation 2, PlayStation Portable, TiVo DVR, and various companies' set-top boxes.

Doesn't that defeat the purpose? (0)

Anonymous Coward | about 2 years ago | (#40015871)

I thought the point was to keep them cooled, heating up CPUs tend to cause them to malfunction.

Yo mama (-1)

Anonymous Coward | about 2 years ago | (#40015947)

has a stanky vagina. Bye.

omg (0)

Anonymous Coward | about 2 years ago | (#40015979)

Itanium is still around? Why!? Put it out of its misery already!

Itanic (1)

unixisc (2429386) | about a year ago | (#40020209)

That's what I thought, reading that. Who buys it? If they're continuing it, Intel & HP might as well make lower-end servers and workstations, stop pretending that it's an ultra-high-performance CPU, and instead promote a variety of platforms that use it. The list of OSs that support it has shrunk, but they could offer servers w/ options of FreeBSD or Debian, workstations/laptops w/ a specially compiled PC-BSD or Debian, and try promoting it that way.

If they're not killing the CPU line, why keep it stuck to a niche product that nobody is buying?

64 bit ARMv8 (3, Interesting)

Lazy Jones (8403) | about 2 years ago | (#40015989)

The 64 bit ARM architecture for server CPUs is much more interesting [eetimes.com] ...

Re:64 bit ARMv8 (2, Interesting)

Anonymous Coward | about 2 years ago | (#40016609)

No, it's not.

ARM cpus are actually pretty lousy when it comes to computations/watt. That crown goes to low-end celeron CPUs, by a massive margin. It's just that ARM can operate in the very low-end power sipping envelope that a smartphone/tablet demands.

You have to remember that these new arm SOCs are actually not very fast when compared to desktop CPUs. The lowest-end single core celeron murders the highest-end quad core arm SOC in terms of computational power. This is the real reason you don't see ARM based desktops.

Now, it's interesting that you can cram hundreds of arm cores in to a small rack mount chassis.. But how useful that is has yet to be proven.

Re:64 bit ARMv8 (0)

Anonymous Coward | about 2 years ago | (#40017505)

Let's see a source for your claim about power efficiency.

Re:64 bit ARMv8 (0)

Anonymous Coward | about 2 years ago | (#40017657)

Now, it's interesting that you can cram hundreds of arm cores in to a small rack mount chassis.. But how useful that is has yet to be proven.

Uh, that's how the human brain works - you have tens of billions of very simple processing units (neurons), each of which has as many as 10,000 inputs and a handful of outputs. All running asynchronously. Floating point and integer values aren't represented as a parallelized bit-pattern, but rather as the inverse time between neuron pulses. Shorter time => higher value. Since there is a minimum time limit between pulses, there is an upper limit on what a single neuron connection can represent, but that is compensated for by having more connections.

Power consumption is 30 watts/hour. (Humans consume the equivalent of 100 watts/hour with 10% going on brain-power, and 30% of that is on vision processing).

Re:64 bit ARMv8 (0)

Anonymous Coward | about a year ago | (#40020469)

Power consumption is 30 watts/hour. (Humans consume the equivalent of 100 watts/hour with 10% going on brain-power, and 30% of that is on vision processing).

What? 30 watts/hour is 8.3 mJ/s^2. How is that a unit of power consumption?

Re:64 bit ARMv8 (0)

Anonymous Coward | about a year ago | (#40022821)

Power consumption is 30 watts/hour. (Humans consume the equivalent of 100 watts/hour with 10% going on brain-power, and 30% of that is on vision processing).

Er, friend? Watts per hour is not a unit of power consumption. Watts are the unit of power consumption, and it doesn't make sense to divide them by hours. They already have a unit of time in the denominator:

1 W = 1 J / s

(The Joule (J) is the SI unit for energy.)

Maybe you're thinking of the way that utilities bill you for kilowatt-hours? But those are kilowatts multiplied by time, not divided by it. What they're billing you for is the total energy they've delivered to you, not the rate they delivered it at. The only reason they bill in kWh instead of joules or kilojoules is that it makes it easier for people to relate the numbers on their bills to the power ratings on their appliances. If you run a 1kW device for one hour, you get billed for 1 kWh.
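The arithmetic above can be sketched quickly (the 1 kW appliance is a hypothetical example):

```python
# Units sanity check: a watt is already a rate (1 W = 1 J/s), so
# "watts per hour" is not a power. What utilities bill is energy,
# i.e. power multiplied by time.
power_w = 1000   # a hypothetical 1 kW appliance
hours = 1

energy_j = power_w * hours * 3600       # joules delivered
energy_kwh = power_w / 1000 * hours     # same energy in kilowatt-hours

print(f"{energy_j:,} J")    # 3,600,000 J
print(f"{energy_kwh} kWh")  # 1.0 kWh

# And "30 watts per hour" parsed literally is a rate of change of
# power (J/s^2), not a power at all:
w_per_h = 30 / 3600
print(f"{w_per_h * 1000:.2f} mW per second")  # ~8.33
```

Which is why "the brain uses 30 watts" is meaningful and "30 watts/hour" is not.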

"Heating up"? (2)

ewg (158266) | about 2 years ago | (#40016129)

Please don't use the phrase "heating up" referring to CPUs, even as a metaphor!

Itanium Dragster!!! (0)

Anonymous Coward | about 2 years ago | (#40016183)

Oh, much thanks for calling Itanium a Dragster. I've printed out this post and pinned it against my cubicle wall.

Re:Itanium Dragster!!! (1)

jd (1658) | about a year ago | (#40021327)

It goes very fast in a straight line, corners horribly and requires enormous amounts of energy. :)

I'm shocked... (2)

macromorgan (2020426) | about 2 years ago | (#40016409)

The Itanic hasn't sunk just yet.

Re:I'm shocked... (0)

Anonymous Coward | about 2 years ago | (#40016817)

I'm always surprised when people bad-mouth the Itanium nowadays. Admittedly, it had a troubled beginning, but it's become quite a capable platform. HP has HP-UX, OpenVMS, and NonStop running on it. Bull is using it for their high-end mainframes. China's Huawei and Inspur are building servers with it. Itanium is pretty solid, technically speaking.

Itanic OSs (1)

unixisc (2429386) | about a year ago | (#40020313)

OpenVMS and NonStop are dead OSs. The only people interested in them are former DEC and Tandem shops with too much sunk in, but even they would have been better off staying with their AlphaServers or Himalaya servers rather than sinking cash into Itanium.

Not only that, you know that the platform is a loser when even Linux brands choose to drop it - talking specifically about Red Hat, Oracle and Canonical. Even Microsoft, which previously dropped Windows NT on RISC platforms, has dropped Windows Server support for this platform. The only major Linux that supports it is Debian, and the only BSD that supports it is FreeBSD (not yet NetBSD). Those are one's only choices if one happens to run Itanium.

The only other good use for this CPU would be in supercomputers. Maybe make a few such submarines based on the Itanic, port DragonFly BSD or some massively parallel BSD or Linux to it, and then sell it to whoever is interested.

CPU Wars? New Boxes? What? Why? (2)

David_Hart (1184661) | about 2 years ago | (#40016711)

I have a quad core i5 desktop and I rarely use it now except for home video encoding/decoding and editing and to stream media to my TV, and most of that is offloaded to the GPU. I use my PS3 and Wii for game playing. Even my relatively new HP DM4T (2010) laptop has been gathering dust lately. I've been spending most of my time, like most people, on my tablet, a HP Touchpad running CM9 android.

For personal use, CPUs simply do not matter any more, just battery life...

For corporate use, CPUs matter as we keep trying to pack more application servers on VM machines.

Tablet devices? Touchscreen Interfaces? What? Why? (1)

L4t3r4lu5 (1216702) | about 2 years ago | (#40017329)

I have a quad core gaming PC and I use it daily for internet access, gaming obviously, and streaming media to my TV. I don't own a games console or tablet, as I don't believe in having three devices to badly do the job of one. I've been spending most of my time, like some people (I won't presume to apply my anecdote to most people), on my PC, a Q6600 which I'm about to upgrade. It's done 5 years; it's time for something new.

For personal use, CPUs matter a whole lot to me. My PC is my entertainment centre, so I spend a lot of time and money making it just how I like.

I like this game! Let's play again sometime :)

personal use varies... (1)

Chirs (87576) | about a year ago | (#40020929)

I find it hard to use a desktop late at night with a toddler in my lap, but a tablet (touchpad for me too, actually) works fine.

Re:personal use varies... (1)

L4t3r4lu5 (1216702) | about 2 years ago | (#40026805)

I would find it hard to use a desktop late at night with a toddler in my lap too.

I decided to not have the baby. Horses for courses.

Re:CPU Wars? New Boxes? What? Why? (-1)

Anonymous Coward | about 2 years ago | (#40017901)

OK, that's just great, pops. Thanks very much, old fella. You can go home now, it must be past your bed time.

You were the one complaining when the 486 came out because your trusty old 386 did everything you ever needed, so what are all these young punks in such a hurry for, eh? And why don't they stay the hell off your lawn, for that matter.

Competition heating up! (0)

Anonymous Coward | about 2 years ago | (#40017589)

Bigger heat sinks weigh more. Fans drag down battery life. You're going the wrong way guys.

No, not really... (0)

Kjella (173770) | about 2 years ago | (#40017705)

Look at AMD's client roadmap for 2012 [anandtech.com] and 2013 [anandtech.com]. Did you see the recent Trinity benchmarks? Sucky CPU, decent GPU. Well, look at the roadmap: those Piledriver cores are all you're going to get in AMD's "high-end" all the way through 2013. I'm sure you'll get more power in a cell phone or tablet format, but if you just want CPU power and don't care that it burns 100W because it's plugged into the wall, then the future is mostly depressing. To use a car analogy, better MPG is great, but it's not exactly what's going to get cheers from the Top Gear crowd. Sure, a good soccer mom car sells, and it's the same for CPUs, but they don't excite anybody.

Re:No, not really... (1)

rrohbeck (944847) | about a year ago | (#40020115)

I think you missed the point where the 8150 beats the 2600k when you're using the right software, at half the price.
Yes, with single threaded code or code (especially benchmarks) compiled with Intel compilers, Intel CPUs are faster.

Re:No, not really... (1)

tyrione (134248) | about 2 years ago | (#40023401)

Look at AMD's client roadmap for 2012 [anandtech.com] and 2013 [anandtech.com]. Did you see the recent Trinity benchmarks? Sucky CPU, decent GPU. Well, look at the roadmap: those Piledriver cores are all you're going to get in AMD's "high-end" all the way through 2013. I'm sure you'll get more power in a cell phone or tablet format, but if you just want CPU power and don't care that it burns 100W because it's plugged into the wall, then the future is mostly depressing. To use a car analogy, better MPG is great, but it's not exactly what's going to get cheers from the Top Gear crowd. Sure, a good soccer mom car sells, and it's the same for CPUs, but they don't excite anybody.

You write like a clueless shill. More and more consumer software will be leveraging the design of what was once Bulldozer, now replaced by the much-improved Piledriver. Even the FOSS world has a lead on the Windows world when it comes to concurrency development. LLVM/Clang/Libc++/Compiler-RT/LLDB/Libclc and more are being optimized with target hardware from AMD, ARM, Nvidia, Intel and others to take advantage of their various design tradeoffs. AMD bit the bullet, and in the next 12 months it will heavily pay off. More and more of those tests are useless as applications work to task both the multicore CPUs and the streams/cores on GPGPUs from AMD, Nvidia and ImgTec. AMD's GCN architecture will be releasing its 8000 series shortly and once again leapfrog Nvidia, but will come with OpenCL 2.0 support, fully optimized. It's just up to user-space apps — Adobe, GIMP, Blender and Handbrake — to work with AMD and leverage it all.

MIPS Aptiv (1)

unixisc (2429386) | about a year ago | (#40020393)

I found the MIPS Aptiv line interesting, and I hope they have some success in regaining market share. They've already made some inroads into the Android tablet market, and their specs seem to suggest that they hold their own against ARM on power consumption, while being far more advanced in terms of 64-bit processing (MIPS has had it since the 90s, whereas ARM is only thinking about it now).

I hope MIPS regains some of its marketshare in games, and becomes key in new IPv6 gear. Some more tablets based on MIPS would be nice as well - not just Android, but also using Plasma Active and any other options that might be available.
