
Intel To Debut Limited-Run Ivy Bridge Processor

samzenpus posted about a year ago | from the low-energy dept.


abhatt writes "Intel is set to debut the most power efficient chip in the world — a limited edition 'Ivy Bridge' processor — at the upcoming annual Consumer Electronics Show in Las Vegas. Only a select group of tablet and ultrabook vendors will receive the limited Ivy Bridge chips. From the article: 'Intel did not say how far below 10 watts these special "Y" series Ivy Bridge processors will go, though Intel vice president Kirk Skaugen is expected to talk about the processors at CES. These Ivy Bridge chips were first mentioned at Intel's annual developer conference last year, but it wasn't clear at that time if Intel and its partners would go forward with designs. But it appears that some PC vendors will have select models in the coming months, according to Intel.'"


86 comments

Why not servers? (4, Insightful)

jackb_guppy (204733) | about a year ago | (#42470085)

We need to cut the power and heat of NOCs. Why only build these for the junk market of throw-away toys?

Re:Why not servers? (2)

WarJolt (990309) | about a year ago | (#42470215)

Just because it's efficient in a tablet/laptop doesn't make it efficient in a data center.

Re:Why not servers? (4, Interesting)

TheGratefulNet (143330) | about a year ago | (#42470309)

a bay area company (that got bought by AMD) makes its business using atoms and atom-like cpus in datacenter 'io clusters'.

not all DCs need compute power. often, it's about io, and you don't need fast cpus for io-bound tasks.

Re:Why not servers? (1)

BitZtream (692029) | about a year ago | (#42472737)

You're just looking at the wrong CPU. An Atom processor may be controlling the general functions, but some other processor(s) are handling the hard work.

High speed IO still requires high speed processing.

Re:Why not servers? (1)

gl4ss (559668) | about a year ago | (#42472781)

high-width io.

shitloads of connections, not much processing per connection.

Re:Why not servers? (1)

TheRaven64 (641858) | about a year ago | (#42473717)

Most high-speed connections these days are serial, not parallel. As you ramp up the speed, the complexity of keeping the signals from the different wires synchronised gets harder. Above a certain threshold, it's easier to make a serial connection ten times faster than to keep ten wires synchronised.

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42471013)

But the opposite is true: If it's efficient in a tablet/laptop then it's efficient in a data center.
Now, back to my raspberry pi beowulf cluster ;)

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42471761)

But the opposite is true: If it's efficient in a tablet/laptop then it's efficient in a data center.
Now, back to my raspberry pi beowulf cluster ;)

Maybe, maybe not. In mobile, a lot of the power savings comes from powering down all the systems when not in use. That doesn't come into play so much in the data center. Even if you say your CPUs are 99% idle, that's still not on the same level as mobile devices, where idle systems may only wake up for a few ms each minute.
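To put rough, purely illustrative numbers on that duty-cycle point (a minimal sketch; the wattages and wake times below are hypothetical, not measurements of any real chip):

    # Time-weighted average power over a repeating period (hypothetical numbers).
    def average_power(active_w, idle_w, active_s, period_s):
        idle_s = period_s - active_s
        return (active_w * active_s + idle_w * idle_s) / period_s

    # Mobile-style: wake for ~10 ms per minute at 2 W, deep sleep at ~5 mW otherwise.
    mobile = average_power(active_w=2.0, idle_w=0.005, active_s=0.010, period_s=60.0)

    # Server-style "99% idle": 1% of each minute at 10 W, but idle still burns ~3 W.
    server = average_power(active_w=10.0, idle_w=3.0, active_s=0.6, period_s=60.0)

    print(f"mobile average: {mobile * 1000:.1f} mW")  # ~5.3 mW
    print(f"server average: {server:.2f} W")          # ~3.07 W

Even at 99% idle, the server's floor power dominates; the mobile part's average is dominated by how deeply and how long it sleeps.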

Re:Why not servers? (1)

TheRaven64 (641858) | about a year ago | (#42473833)

It's not quite so clear cut. On a modern chip, to get power usage that corresponds to an amount of heat you can actually dissipate, you need to leave a lot of the chip idle at any given time. This is true for data centres as well as mobile, although the amount of heat is more limited in mobile than in a well-ventilated rack in an air-conditioned room. This is why modern chips have a lot of dedicated circuitry for specialised uses.

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42471677)

sure it does.

I think they need more cpu power and maybe more IO (1)

Joe_Dragon (2206452) | about a year ago | (#42470251)

I think they need more CPU power and maybe more IO than some of the very low end chipsets.

Also, what about ECC RAM?

Re:Why not servers? (1, Troll)

SpazmodeusG (1334705) | about a year ago | (#42470501)

Probably because the price vs. performance vs. power triangle is focused almost entirely on power, which is OK for something like an overpriced MacBook Air, but datacentres would want more of a balance.

Re:Why not servers? (1)

jd2112 (1535857) | about a year ago | (#42470549)

Because it is still more efficient to buy a few big honkin' servers and virtualize as many of your workloads as possible.

Re:Why not servers? (2)

Charliemopps (1157495) | about a year ago | (#42470555)

We have hundreds of NOCs all over the country and power isn't our problem. Heat, however, definitely is. But most of the equipment we have does not have Intel chips in it. I think most of the heat comes from Cisco gear. A large Cisco core router puts out orders of magnitude more heat than any Intel chip could possibly put out. When our guys bring those things up to do testing on their desk, the fans sound like large vacuum cleaners.

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42470879)

there is a reason you don't put a fuel efficient 4 cylinder engine in a top fuel car.
car analogies never get old.
And only an idiot builds a DC in a hot climate with exorbitant utility rates (not necessarily in that order)
MFlops/m^2 get it?
The cloud is so you don't have to put your DC in the middle of prime real estate on top of all the rest.
If you don't have max processing power, you went to the wrong CS school.

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42471347)

You certainly went to the wrong school for your English education...

Re:Why not servers? (2, Insightful)

Anonymous Coward | about a year ago | (#42471031)

Well, why do a limited run at all?

Maybe they get crap yields, or have to do aggressive binning to meet the specs.

Either way it looks like a warning shot to stave off the growth of ARM in server and netbook sectors. "We don't want to serve this market because it would hurt our profits. But, y'know... we could."

Re:Why not servers? (1)

R3d M3rcury (871886) | about a year ago | (#42471245)

As I understand it--and I'm not up on the latest and greatest, granted--Intel is coming out with a new family of processors sometime this spring (Haswell, I believe they're calling it) which are better than the Ivy Bridge CPUs regarding power/heat.

So I would imagine that this is a stop-gap type of thing.

Re:Why not servers? (2)

Sockatume (732728) | about a year ago | (#42473405)

If Sandy Bridge to Ivy Bridge was any indication, Haswell will be a 22nm "tock" aimed at performance but it'll be followed by Broadwell that takes it down to 14nm and gives remarkable power and thermal performance improvements.

Re:Why not servers? (1)

beelsebob (529313) | about a year ago | (#42473675)

You're right that Haswell is a tock aimed at boosting performance... but the resulting chips are expected to deliver only about 5% more performance than Ivy Bridge, with significantly reduced power consumption.

Everything is about power consumption these days.

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42471367)

Yeah I know ! I'm waiting for the director's cut myself...

But you gotta wonder... "Limited", in what way ? Is it, you know, "special" ?

Captcha: redneck

Limited Edition is a Euphemism (3, Interesting)

hamjudo (64140) | about a year ago | (#42471039)

If they could make enough of these wonder chips to satisfy the projected demand, they wouldn't bother with a "Limited Edition". They're limiting sales to match their manufacturing capacity. They don't want to cannibalize potential Atom design wins with this chip that they can't yet make in high enough quantity. Expect the "Limited Edition" moniker and associated high price to go away "real soon now".

Once they can make the things in sufficient quantity they will undoubtedly make versions with server features. Most server buyers don't need or want on chip graphics, but do want ECC.

Re:Limited Edition is a Euphemism (2)

nanoflower (1077145) | about a year ago | (#42471671)

If I had to guess, this is probably being done to test out production facilities that will be used for Haswell. They can make limited runs with this special version of Ivy Bridge and start to generate more interest in the low power CPUs while also getting some good data for the real production runs of Haswell in a few months.

Re:Limited Edition is a Euphemism (1)

Anonymous Coward | about a year ago | (#42482353)

If I had to guess, this is probably being done to test out production facilities that will be used for Haswell. They can make limited runs with this special version of Ivy Bridge and start to generate more interest in the low power CPUs while also getting some good data for the real production runs of Haswell in a few months.

You're almost right, but missing the whole picture.

All Ivy Bridge CPUs are made on the exact same production facilities which will be used for Haswell. Intel's "tick-tock" strategy is to alternate between doing process shrinks and new designs. When they bring a new process online, at first they manufacture a shrunk version of an older processor design, to reduce the amount of unknown problems they might have to deal with. After about a year of learning, they switch over to a new design which is fully optimized for the process.

Ivy was the 22nm shrink of Sandy Bridge, a 32nm optimized design. Haswell is the new 22nm-optimized design. (And in the future, Broadwell is the Haswell shrink which will be used to kick off Intel's 14nm process.)

Limited editions like the Ivy CPUs this article's talking about aren't made in separate facilities. Intel is just taking advantage of statistical process variation. When you set up a fab to make a chip design, they don't all come out identical. Even ignoring defects, some of the production will be too far out of spec (slow, hot, whatever) to be sold as good product. The flip side is that some of it will be much better than spec. (The distribution for any given parameter usually looks like a bell curve.) Want to sell an ultra low power variant? Create test programs which enable your test equipment to find all the chips which are on the better-than-average tail of the bell curve for power-related parameters. But you won't be able to sell all that many cherry picked parts: there isn't much area under the tails of a bell curve.

You can choose up front in the design phase where to target the center of the bell curve, and if your designers and process engineers are good at their jobs you might even hit that target. Intel is very good at this. The reason Haswell will have volume production of 10W (or less) chips is simply that Intel deliberately chose to target a 10W TDP in their ultramobile Haswell design (instead of the previous 17W target for Sandy and Ivy). This implies that if Intel wants to, they'll probably be able to offer ~5W Haswells by cherry picking.
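A rough Monte Carlo sketch of that cherry-picking argument (a toy model; the target TDP, spread, and cutoff below are made-up numbers, not Intel's):

    # Toy parametric-binning model: die power comes out roughly normally
    # distributed around the design target; the ultra-low-power SKU is
    # whatever falls in the lower tail, so its volume is inherently small.
    import random

    random.seed(1)
    TARGET_TDP = 17.0   # design-centre power in watts (illustrative)
    SIGMA = 2.0         # process spread in watts (illustrative)
    ULP_CUTOFF = 13.0   # threshold for the hypothetical low-power bin

    chips = [random.gauss(TARGET_TDP, SIGMA) for _ in range(100_000)]
    low_power_bin = [p for p in chips if p <= ULP_CUTOFF]

    print(f"dies good enough for the low-power bin: {len(low_power_bin) / len(chips):.1%}")
    # With these made-up numbers only ~2% of production qualifies, which is
    # why such a part stays "limited edition" until the design centre moves.

Move the design centre down (as the parent says Haswell does) and the same cutoff captures most of the curve instead of just the tail.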

Re:Why not servers? (1)

toddestan (632714) | about a year ago | (#42471695)

Because for servers you'd be better off buying a handful of faster, higher-power chips than a pile of slower, low-power chips?

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42472637)

You say that as if NOCs aren't filled with commodity devices.

Re:Why not servers? (1)

Sockatume (732728) | about a year ago | (#42473399)

Presumably they can't produce these chips at that scale yet. I would be stunned if Intel didn't announce a CPU with a similar profile in a later microarchitecture.

Re:Why not servers? (0)

Anonymous Coward | about a year ago | (#42473849)

The luxury junk market probably gives them larger margins, with no need to increase production (testing) volume rapidly like it would be with a server product. Cos' Y is da new Bling in CES.

Uhhh, static power is meaningless. (0)

Anonymous Coward | about a year ago | (#42470155)

I want to know how many Joules for a given workset. I design video accelerators and the difference in power between QCIF and 1080p is over 100x.

Re:Uhhh, static power is meaningless. (1)

kestasjk (933987) | about a year ago | (#42470305)

Yeah, and I need to know the number of joules needed for this processor to compute SHA-256->WHIRLPOOL rainbow tables per chain, for a chain length of 1000. Why do they give such generalized figures anyway?

Re:Uhhh, static power is meaningless. (0)

Anonymous Coward | about a year ago | (#42470341)

Wow, somebody commenting on Slashdot who actually knows something. I thought all you guys ran back to IRC long ago.

-- Ethanol-fueled

Re:Uhhh, static power is meaningless. (1)

LordLimecat (1103839) | about a year ago | (#42477687)

Static power gives you the upper bound for how much power will be consumed over a given period. Benchmarks will give you the workload per period. Math will help you bridge the two.

Wattage has been a standard way of comparing the power usage of chips for a long time.
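The math the parent alludes to is short (a minimal sketch with hypothetical wattages and benchmark rates; energy per task is just average power divided by task throughput):

    # Energy per unit of work: joules = watts * seconds, so
    # joules-per-task = average power / tasks-per-second (hypothetical numbers).
    def joules_per_task(avg_power_w, tasks_per_second):
        return avg_power_w / tasks_per_second

    chip_a = joules_per_task(10.0, 2.5)    # 10 W part transcoding 2.5 frames/s -> 4.0 J/frame
    chip_b = joules_per_task(35.0, 10.0)   # 35 W part at 10 frames/s           -> 3.5 J/frame

    print(chip_a, chip_b)

With these made-up figures the higher-wattage chip actually spends less energy per frame, which is exactly why the grandparent wanted joules per workset rather than a bare wattage figure.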

Surface Pro anyone? (0)

Anonymous Coward | about a year ago | (#42470241)

Bring it home Intel

"Most power efficient chip in the world" (1)

Anonymous Coward | about a year ago | (#42470259)

I wonder how true this ACTUALLY is? Are we talking x86 flop/watt comparisons, or...?

what's the bet... (1)

smash (1351) | about a year ago | (#42470411)

... that Apple buys all of them for the MacBook Air?

Re:what's the bet... (0)

Anonymous Coward | about a year ago | (#42471103)

That's happened before, but at that time Apple made the announcement. Seems not this time. Maybe it'll be something more interesting.

Welcome to the new Value Add (0)

dpilot (134227) | about a year ago | (#42470517)

Intel has always been about Value Add... There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part. Sometimes it's so a crippled system can be sold, and then for an upgrade fee, be "enhanced" in the field. But in any case, it's all about revenue. The annoying thing about this is that they've gone to extra expense and effort to produce the crippled part - the premium part would actually cost less without the extra crippling capability.

As a different perspective, Intel has also evolved into a performance-oriented company. I don't think that as a company they're very comfortable with this whole "power thing", and I think a limited production like this is probably the way to sell it to management and marketing.

There's also a chance that the low power parts may be a deep sort out of the distribution, and there aren't many.

Re:Welcome to the new Value Add (1)

Anonymous Coward | about a year ago | (#42470779)

Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse

Yea a fuse. Wait what?

I don't think that as a company they're very comfortable with this whole "power thing"

That's why Ivy Bridge already basically kills anything remotely comparable in terms of power usage?

I will never understand the brand hate people exhibit, in this case against Intel by an AMD fan.

Re:Welcome to the new Value Add (5, Insightful)

Pinhedd (1661735) | about a year ago | (#42471135)

By a 'fuse' he's talking about the selective factory or post factory programming of a chip.

Intel has only 5 different pieces of silicon serving 150+ different Sandybridge and Sandybridge-E processors; the same is true for Ivybridge.

When the fabrication process is finished, the chips on each wafer are tested for quality. Chips that fail completely are discarded. Chips that have flaws in a core or cache segment will have that core or cache segment disabled. This allows a faulty chip to be sold as a lower end model.

Similarly, if demand for a lower end model is higher than the supply of the lower end models, higher quality chips can have parts disabled so that they may be repackaged as a lower end product for marketing purposes.

All of this is done at the factory before the chip is inserted into a processor package. An additional step invented by IBM allows firmware upgrades themselves to reprogram the chip, possibly reactivating parts that were deactivated at the factory, or changing CPU parameters so that older firmware revisions cannot be installed (this is done with the PS3).

Re:Welcome to the new Value Add (0)

Anonymous Coward | about a year ago | (#42471171)

Yea a fuse. Wait what?

A fuse is an electrical connection that can be permanently opened by running excess current through it. In the context of GP's post, a chip maker can add electrical connections that reduce the functionality of a chip, eg. disabling a set of features when the connection is present, and then perform a "field upgrade" of the chip by cutting the fuse through some secret proprietary method, probably by executing some bit of code with the right secret data key, so that it can be done without using any special hardware.

Re:Welcome to the new Value Add (5, Informative)

Gadget_Guy (627405) | about a year ago | (#42471325)

There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part.

Sometimes it actually turns out cheaper to do it this way. They could design a low end CPU to sell cheaper than their premium product, but the cost of running an entirely different fabrication line for that CPU might actually be more than just including a switch in their higher end processor to cripple the chip. The costs include having to reconfigure the production line to a different line of wafers.

Using the same chip design means that they can still sell the CPUs that fail quality control testing. If one of the cores fails in a quad core CPU, they can just turn that one off and sell it as a dual core part. So instead of increasing the price of the premium chip by having the "fuse" as you put it, they are making the chip cheaper because it reduces the wastage of having to discard the failed processors.
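A toy yield calculation illustrates how much that salvage matters (the per-core defect probability here is invented purely for illustration):

    # Toy yield model: with independent per-core defects, many dies that
    # can't ship as quad cores can still ship as dual cores with two cores
    # fused off (hypothetical defect rate).
    from math import comb

    P_CORE_GOOD = 0.93   # illustrative probability that one core is defect-free
    N = 4

    def p_exactly(k):
        # Binomial probability that exactly k of the N cores are good.
        return comb(N, k) * P_CORE_GOOD**k * (1 - P_CORE_GOOD)**(N - k)

    quad = p_exactly(4)
    dual = p_exactly(3) + p_exactly(2)   # fuse down to 2 working cores

    print(f"ships as quad core:    {quad:.1%}")   # ~74.8%
    print(f"salvaged as dual core: {dual:.1%}")   # ~25.1%

With these made-up numbers, roughly a quarter of the dies that would otherwise be scrap become sellable product.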

Re:Welcome to the new Value Add (1)

DigiShaman (671371) | about a year ago | (#42472289)

The theory holds that even if you have 100% production rate where each CPU is flawless, you still have to segment the market based on a supply/demand curve. The ability to generate as much profit as possible is necessary to pay off R&D and move on to the project all while growing the business. It's not all that uncommon to "cripple" a perfectly good CPU in order to sell it at reduced cost (loss made up at the high end segment). The idea being that reduced profit is better than no profit earned for any given product sold.

Welcome to the concept of chip manufacturing (5, Informative)

Sycraft-fu (314770) | about a year ago | (#42471351)

Intel is NOT crippling Ivy Bridge processors. Rather, what happens is that minor variations in the silicon wafer mean that different chips come out with different characteristics. It doesn't take much to change things either; we are talking about parts with features just 22nm wide, so little things have large effects.

When you get a wafer of chips, you have to test and bin them. Some just flat out won't work: there'll have been some kind of defect on the wafer and it screws the chip over. You toss those. Some will work, but not in the range you want; again, those get tossed. Some will work but not completely, because parts will be damaged. For processors you usually have to toss them; GPUs will often disable the affected areas and bin the chip as a lower end part.

Of the chips that do work, they'll have different characteristics in terms of what clock speed they can handle before having issues and what their resistance is, and thus their power usage.

What's happening here is that Intel is taking the best of the best, resistance-wise, and binning them for a new line. They discovered that some IB chips have much lower power usage than they thought (if properly frequency limited) and thus are selling a special line for power critical applications.

They can't just "make all the chips better" or something. This is normal manufacturing variation and as a practical matter Intel has some of the best fab processes out there and thus best yields.

CPU speeds are sometimes an artificial limit (though often not, because not only must a chip be capable of a given speed, it has to do it within its TDP spec), but power usage is not. It uses what it uses.

Re:Welcome to the concept of chip manufacturing (1)

dtdmrr (1136777) | about a year ago | (#42478939)

I highly doubt that's the case. I suspect the defect/variation distributions of few if any generations of Intel chips have actually matched the distribution of market demand. Going back at least to the 486, they have artificially crippled higher end models (beyond what was necessary from defects) to provide different price/feature/performance points for consumers. The SX line was just DX chips with the internal floating point unit disabled.

We might feel a little less cheated if Intel actually designed and fabricated different products, so that we got what we paid for rather than a new CPU that is artificially crippled. Realistically, though, the production cost of the extra silicon is far less than the cost of designing different chips. And yes, some chips will naturally fall into the lower end models because of defects and variations. So whether or not we feel cheated, they are actually delivering better value to the customer by artificially differentiating the models. People are accustomed to market segmentation, airfares being one major example; I suppose it just feels a little different knowing that you get a black box that is a first class seat with extra blocks added to squish you because you only wanted to pay for economy class.

As to why the segmented market is reasonable: the fact of the matter is that people have different values and needs, and they want a price tag that matches those needs. Intel could charge a uniform price for all CPUs, but then they would have to decide between alienating a huge number of customers by setting too high a price, or drastically reducing their profits. Say what you will about corporate greed, and even Intel's stagnation; they do reinvest huge amounts of their profits into building new fabs and other aspects of producing subsequent generations. Market segmentation enables them to put more of the cost burden on the customers that have more money to play with and really care about getting the performance now. Really, it most benefits the consumers who will feel cheated (I haven't heard people complaining about the higher prices of the higher end chips). If chip makers were acting with more bad faith (just look at the telecom and cable industry), then this would be more upsetting (and less about consumer value).

As for the lying issue: I don't think this has been much of a secret for the last 25 years. It's probably just that more people are becoming aware of it now, people who are less familiar with the literature and issues. Also, I think Intel has been fairly stupid about marketing. The post-sale "upgrades" drew people's attention to issues that most people just don't want to know about, and probably didn't really appeal to all that many consumers. It's also a little sickening to see them put effort into developing a "secure" system that lets them sell hardware upgrades. Perhaps something like that would work better for consumer relations if they provided a trade-up program, even if it does mean it would cost more for consumers to get the same upgrade (assuming the same profit for Intel).

Re:Welcome to the new Value Add (2)

gman003 (1693318) | about a year ago | (#42471395)

Intel has always been about Value Add... There are "crippled" products on the market, sold by others as well as Intel. Sometimes it's so they can build one part in their fab, cripple the mainstream part with a fuse, and then charge a premium for the un-crippled part. Sometimes it's so a crippled system can be sold, and then for an upgrade fee, be "enhanced" in the field. But in any case, it's all about revenue. The annoying thing about this is that they've gone to extra expense and effort to produce the crippled part - the premium part would actually cost less without the extra crippling capability.

While you're correct that Intel relies heavily on testing chips, disabling whatever doesn't work/lowering the clock speed until it works, and selling it as a cheaper product, that's really a cost-saving measure, not a revenue-boosting one. When a chip rolls out with half the cores broken, they'd much rather sell it as a cheap processor than throw it away. AMD does the same thing - even more, actually. As does Nvidia, and pretty much any company that produces enough chips.

These chips are likely a high bin, not a low bin. They're testing for stability at low clock speeds and low voltages, and these are probably the best they have. As you said, there's also a chance that the low power parts may be a deep sort out of the distribution, and there aren't many.

As a different perspective, Intel has also evolved into a performance-oriented company. I don't think that as a company they're very comfortable with this whole "power thing", and I think a limited production like this is probably the way to sell it to management and marketing.

That was the Intel of the early- to mid-2000s. Surprisingly enough, when they got their ass kicked in the Athlon XP/64 vs. Pentium IV wars, they learned from it. They obsess over instructions-per-clock now, not clock speed (old Intel) or core count (current AMD). They've actually been designing their microarchitectures for the laptop, not desktop, for years now, ever since Core 2. They build a 25W laptop chip, then scale it up to meet the 50W-130W desktop and server market (and scale it down to 10W for the even more power-conscious computers). And they've been maintaining a separate microarchitecture, Atom, for the sub-10W range for years now. It's worked pretty well for them.

I wish I could say the same for AMD - they bet pretty heavily on high core counts, disregarding power consumption, and they seem to be faltering on the desktop and laptop because of it. Servers seem to be holding up better, since 130W processors aren't unusual and server applications scale better to more cores, but for the consumer market, the *only* thing they have going for them right now is Fusion, having a powerful GPU on the same die as a half-decent CPU.

Re:Welcome to the new Value Add (0)

serviscope_minor (664417) | about a year ago | (#42473599)

but for the consumer market, the *only* thing they have going for them right now is Fusion, having a powerful GPU on the same die as a half-decent CPU.

Really?

The top end non fusion CPU generally comes in between the i5 and the much more expensive i7, sometimes beating out the i7 in multithreaded benchmarks. It's over 75% of the speed of the i5 single threaded now.

Piledriver is much better, and the performance is much more on a par with Intel now for many things.

These days an awful lot of stuff is multithreaded. Compiling, web surfing, media transcoding, picture editing, compression, decompression, games are getting there, and so on.

Power draw is higher, but unless you're flogging the CPU 24x7 it will take you a very long time for that to be a cost advantage.
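Roughly how long, with some invented numbers (the prices, wattages, and electricity rate below are hypothetical, purely to show the shape of the calculation):

    # Back-of-envelope payback: how long a cheaper-but-hungrier CPU takes to
    # lose its purchase-price advantage through the electricity bill
    # (all figures hypothetical).
    def payback_years(price_delta_usd, power_delta_w, hours_per_day, usd_per_kwh=0.12):
        kwh_per_year = power_delta_w / 1000.0 * hours_per_day * 365
        return price_delta_usd / (kwh_per_year * usd_per_kwh)

    # A chip that is $60 cheaper but draws 60 W more under load:
    print(f"4 h/day of load:  {payback_years(60, 60, 4):.1f} years")   # ~5.7 years
    print(f"24x7 flogging:    {payback_years(60, 60, 24):.1f} years")  # ~1.0 year

Under light use the break-even point is years away; hammer the chip around the clock and the power bill catches up within about a year.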

Re:Welcome to the new Value Add (2)

beelsebob (529313) | about a year ago | (#42473715)

The top end non fusion CPU generally comes in between the i5 and the much more expensive i7, sometimes beating out the i7 in multithreaded benchmarks. It's over 75% of the speed of the i5 single threaded now.

No, no it doesn't. It gets beaten by the i3 3220 for everything except for very multithreaded tasks, where it roughly draws with it:
http://www.anandtech.com/bench/Product/675?vs=677 [anandtech.com]

Piledriver is not much better - it's the same architecture as trinity, but with the GPU stripped off.

There actually really aren't that many tasks where multithreading makes up the difference, as you can see from a comparison of a top end piledriver, against a cheaper i5 that consumes about half the power:
http://www.anandtech.com/bench/Product/697?vs=701 [anandtech.com]

As you can see, even in a lot of the parallel benchmarks the i5 wins, and that's ignoring the i5's hardware video encoder, which utterly demolishes software mode. Overall, the i5 is a significantly better chip, at significantly lower power, and cheaper to boot.

Re:Welcome to the new Value Add (-1, Troll)

serviscope_minor (664417) | about a year ago | (#42474713)

No, no it doesn't.

Yes it does!

It gets beaten by the i3 3220

No it doesn't!

Oh wait, you've compared a fusion processor to an i3 rather than the non fusion ones I was talking about.

Firstly, let's try here, for some multithreaded benchmarks:

http://www.phoronix.com/scan.php?page=article&item=amd_fx8350_visherabdver2&num=4 [phoronix.com]

If you look, the reports are generally exactly what I said, with the addition that the A10 is much slower than the 8350. Generally, the 8350 is between the i5 and i7, occasionally a bit slower than the i5, sometimes much faster than the i7. That's a 100% multithreaded benchmark. It includes things like compiling, rendering, compression, image manipulation, media transcoding and some scientific work. Scientific codes are actually used inside modern games and modern filters in things like Photoshop, so you can't dismiss them as "not for normal people".

In fact, the conclusion of that benchmark is that the 8350 is competitive with the 3770K, which is much more expensive.

For a nice extreme example look at the CRay benchmark, where the FX8350 runs in under 2/3 of the time of the i7 3770K. There are extremes in the opposite direction, too.

There actually really aren't that many tasks where multithreading makes up the difference,

Apart from all the cases I listed. Running 200 single threaded benchmarks and 10 multithreaded ones doesn't imply single threaded tasks are more common.

And if you want a more mixed benchmark, go here:

http://www.tomshardware.com/reviews/fx-8350-vishera-review,3328-12.html [tomshardware.com]

Where for a lot of tasks, like single threaded media encoding, the AMD processors are about 75% as fast.

And... back to your benchmarks.

So, I looked at the benchmarks, and the results were pretty mixed. Sometimes one processor wins by a large margin, other times the other one does. No graphics intensive or OpenCL benchmarks are included for fusion versus i3, I note. Those would be a no-contest win for the fusion part.

Re:Welcome to the new Value Add (1)

beelsebob (529313) | about a year ago | (#42475657)

The top end non fusion CPU generally comes in between the i5 and the much more expensive i7

Oh wait, you've compared a fusion processor to an i3 rather than the non fusion ones I was talking about.

Score: -1 Lying

Re:Welcome to the new Value Add (1)

serviscope_minor (664417) | about a year ago | (#42475693)

Score: -1 Lying

For you? I agree. After all, the first link you posted was this:

http://www.anandtech.com/bench/Product/675?vs=677 [anandtech.com]

Which compares the Fusion A10-5800K to an i3. You also claimed "piledriver is not much better" where it actually wins by a wide margin in every single benchmark.

http://www.anandtech.com/bench/Product/675?vs=697 [anandtech.com]

Re:Welcome to the new Value Add (2)

beelsebob (529313) | about a year ago | (#42475819)

No, I claimed that the A10-5800k *is* a piledriver. Which it is.

You made three assertions.

1. That fusion was between an i5 and i7 in terms of speed
2. That piledriver was faster still.
3. That you were talking about the FX line, not the fusion line all along.

All are false.
1. Is false because the fastest fusion chip (the A10-5800K) is only roughly as fast as the i3-3220, nowhere near as fast as an i5 or i7.
2. Is false because the A10 *is* a piledriver chip. There are faster piledriver chips out there; I compared the fastest available piledriver chip to Intel's slightly cheaper i5. The i5 is faster than the FX-8350 in 23 of the 37 tests, some by a very significant margin. Also notably, the other tests include things like h264 encoding, which the i5 has a hardware unit for, which was not tested.
3. Is false as you can observe from the above quote.

Re:Welcome to the new Value Add (1)

serviscope_minor (664417) | about a year ago | (#42476435)

1. That fusion was between an i5 and i7 in terms of speed

No, read my post. I said "non fusion".

2. That piledriver was faster still.

Than fusion, but not than the i5 or i7.

The phrase "piledriver is much better" is in comparison to bulldozer. It does not make any sense in relation to fusion, since the A10 core is a piledriver microarchitecture based core.

3. That you were talking about the FX line, not the fusion line all along.

Yes. Actually read my post. Of course, I said "non fusion" so I could have been talking about bobcat. That's pretty unlikely though.

All are false.

You lack basic powers of observation.

1. Is false because the fastest fusion chip (the A10-5800K) is only roughly as fast as the i3-3220, no where near as fast as an i5 or i7.

Good job I was talking about "non fusion" then, rather than about the A10 which is a fusion processor.

2. Is false because the A10 *is* a piledriver chip.

Um, yeah. Then perhaps I was not referring to the A10, then?

I compared the fastest available piledriver chip to intel's slightly cheaper i5. The i5 is faster than the FX-8350 in 23 of the 37 tests, some by a very significant margin

Yeah, and I compared the FX-8350 to the much more expensive i7 3770K and it was significantly faster in a number of tests and between the i5 and i7 in most others. Your point?

Also notably, the other tests include things like h264 encoding, which the i5 has a hardware unit for, which was not tested.

That's because the software is exceptionally flakey. It very quickly produces corrupt or poorly compressed files.

Also notably, there were no tests like graphics performance or OpenCL, where the fusion processor wins by a huge margin.

3. Is false as you can observe from the above quote.

The only thing that proves that is my original post. If you actually try reading it rather than mindlessly making things up, then you will see that you are being very, very silly.

Re:Welcome to the new Value Add (1)

LordLimecat (1103839) | about a year ago | (#42477801)

The top end non fusion CPU generally comes in between the i5 and the much more expensive i7, sometimes beating out the i7 in multithreaded benchmarks

Wish it were true, but it's not. Very little that AMD has is "close" to the i7s. What AMD has is value and pretty decent graphics cores, as well as top end core counts.

What number is limited edition? (0)

Anonymous Coward | about a year ago | (#42470693)

Is it millions of CPUs? Did the article say 10 watts or 0.10 watts? I don't know if my tablet that runs on AA batteries is going to work with 10W.

Intel needs to embrace 3D to remain relevant (2)

CuteSteveJobs (1343851) | about a year ago | (#42470705)

Intel really needs to get its act together: its Atom processors are a decent low power x86 solution, but as usual Intel has delivered them with crappy 3D graphics, to the point that graphical benchmarks can't even run on them, let alone any recent computer games. For the Atom Cedar Trail release they didn't even do DX10 drivers, and sheepishly back-specced it to the now outdated DX9. ARM tablets can deliver decent 3D, so why can't Intel? Even AMD can provide 3D graphics for low-power PCs. Why can't Intel? And Intel wonders why it's becoming irrelevant to the future of computing!?

No DX10 for you!
http://semiaccurate.com/2012/01/03/intel-thinks-cedar-trail-is-a-dog-reading-between-bullet-points/#.UOY58uRJNxA [semiaccurate.com]

Windows must live with DX9. Linux can't do anything at all...
http://tanguy.ortolo.eu/blog/article56/beware-newest-intel-atom [ortolo.eu]

Oh and did I mention it doesn't work on Windows 8.
http://communities.intel.com/message/175674 [intel.com]
http://www.eightforums.com/hardware-drivers/12305-intel-gma-3600-3650-windows-8-driver.html [eightforums.com]
http://answers.microsoft.com/en-us/windows/forum/windows_8-hardware/windows-8-on-intel-atom-d2700dc-graphics-driver/2a6015d3-af92-453d-b0c2-20cc56b764de [microsoft.com]

Re:Intel needs to embrace 3D to remain relevant (1)

Gadget_Guy (627405) | about a year ago | (#42471101)

So what is your solution then? Does Intel need to come out with a range of very low powered CPUs based on their main Ivy Bridge processors with better performance than their Atom line? Do you think that they could announce this, and then we could discuss the story here on Slashdot?

You can see where I am smugly going here. That is exactly what TFA was all about. In fact, it also said:

Atom chips will move to an entirely new design later this year that is expected to get them closer to Intel's mainstream processors in performance.

Re:Intel needs to embrace 3D to remain relevant (0)

Anonymous Coward | about a year ago | (#42471227)

See this is the kind of thing i come to /. for.

People RTFA for me so I don't have to read the misleading summary and headline.

Re:Intel needs to embrace 3D to remain relevant (1)

bill_mcgonigle (4333) | about a year ago | (#42471759)

That is exactly what TFA was all about

Thank you for this.

It's true, though, that the current Atom chipset is poorly considered. They even broke VGA text mode, for Pete's sake. FreeBSD 9.1 has the patch, at least, but boy was I surprised last spring!

Re:Intel needs to embrace 3D to remain relevant (1)

toddestan (632714) | about a year ago | (#42471793)

Well, one solution would be for Intel to lift some of the restrictions on the Atom CPU, which are mainly in place because Intel fears that they could otherwise cut into their other, more profitable CPU lines. Though I see that you can now buy an Atom board with a PCIe x16 slot, so I guess Intel may be seeing the light, or perhaps just feeling some pressure from AMD's Bobcat line in that segment.

Re:Intel needs to embrace 3D to remain relevant (1)

CuteSteveJobs (1343851) | about a year ago | (#42472101)

> You can see where I am smugly going here. That is exactly what TFA was all about. In act, it also said:
>> Atom chips will move to an entirely new design later this year that is expected to get them closer to Intel's mainstream processors in performance.

I own quite a few Atom PCs and in terms of performance, I think Atoms are quite okay. They're not big on grunt, but still sufficiently powerful to do anything you throw at them EXCEPT 3D. That's their big weakness. Office, web services, software development: yeah, fine. Even video transcoding: slower than a desktop, but it will get there. But 3D? No. 3D is the new black, so that's a deal breaker for many. A wise man summed it up nicely: "I'd rather have a mediocre CPU and a fast GPU than the other way around."

Here's an Atom-powered Netbook. Does nicely in every category except 3D: http://www.notebookcheck.net/Review-Acer-Aspire-One-D270-26Dbb-Netbook.73534.0.html [notebookcheck.net]
Now compare that to a similar Netbook with the AMD chipset: http://www.notebookcheck.net/Review-Asus-Eee-PC-1015B-Netbook.56840.0.html [notebookcheck.net]
AMD delivers over twice the performance, and that's my experience: Open up a 3D app on an Atom device and you get three mouldy grey triangles before it crashes. Intel don't take 3D seriously. Never have.

Re:Intel needs to embrace 3D to remain relevant (1)

Gadget_Guy (627405) | about a year ago | (#42472317)

That is true. When I was looking for a netbook, I chose an AMD based one precisely for the better 3D. I now use it as a gaming system. As long as you are happy to play games from around 2005, then it performs fine. But it is far beyond what the Atom based netbooks can do.

There are occasions where I think that it is the CPU that is limiting a game rather than the GPU. I wonder in those situations how the Atom systems would fare.

I just wish that you could still find these AMD netbooks around. Netbooks are getting rare (and will die out soon), and the only ones that I ever find these days are Atoms.

Re:Intel needs to embrace 3D to remain relevant (0)

Anonymous Coward | about a year ago | (#42471169)

I concur; you can't do anything that involves 3D graphics or even video on an Intel Atom.

I'll admit I'm only speaking from experience with a D410 based system.

Re:Intel needs to embrace 3D to remain relevant (0)

Anonymous Coward | about a year ago | (#42471699)

Cedar Trail drivers were done externally, and it was their first DirectX driver. That same company usually does the OpenGL drivers for the chips.

Re:Intel needs to embrace 3D to remain relevant (0)

Anonymous Coward | about a year ago | (#42473099)

Umm, Clover trail (as used in the Windows 8 tablets, Atom Z2670) has a PowerVR GPU core. IIRC, it's DirectX 11 level and quite fast. Well, at least faster than the Tegra 3 in my Nexus 7. The world's moved on from your old links. It ain't granddad's N450 netbook processor in these puppies.

Z2760 only supports DirectX 9 (1)

CuteSteveJobs (1343851) | about a year ago | (#42473567)

Those "old" links are dated Nov 2012. If you have something more recent suggest you offer it, because you don't give any links at all. I can't see a single reference anywhere on the web that GMA 3650 supports anything other than DirectX 9. The links referred to the N2600. You made up the N450 stuff. Even Intel's own web site says it only supports DirectX 9. http://ark.intel.com/products/36331/Intel-Atom-Processor-N270-512K-Cache-1_60-GHz-533-MHz-FSB#infosectiongraphicsspecifications [intel.com] .

Here's a review of a Z2670 tablet which you claim runs DirectX 11. "One of the limitations of the W510's Intel Graphics Media Accelerator GPU is that it's not DirectX 11 compatible, so our standard 3DMark11 benchmark wouldn't run. You can forget about playing "World of Warcraft," too. Even when effects were set to low, the W510 averaged just 12 fps, and even hung up during our test flights." 'Quite Fast' my ass. http://www.laptopmag.com/acer-iconia-w510.aspx [laptopmag.com] Posted Dec 28.

The spec sheets for the Z2760 tablets I googled either say DirectX 9: http://www.tipidpc.com/viewtopic.php?tid=279073 [tipidpc.com] http://www.pinoytechblog.com/archives/acer-iconia-w510-the-windows-8-tablet-netbook-hybrid [pinoytechblog.com]

Or, in the case of Dell, don't give the version at all: http://www.dell.com/uk/enterprise/p/latitude-10-tablet/fs

No wonder you're posting as AC. Google Moar.

Re:Z2760 only supports DirectX 9 (0)

Anonymous Coward | about a year ago | (#42473879)

Go have a look at Anand's. He calls it dx10 but whatever. I bet it means nothing to you although you're spreading fud about something you know nothing about. Yeah, you had to find some pretty obscure links there. Not interested in link spamming. Don't have an account. What I am doing is buying a Clover trail tablet when they eventually turn up in the world's most isolated capital city. Google that.

FUD (0)

Anonymous Coward | about a year ago | (#42480299)

Post your own link or it didn't happen.

Re:Z2760 only supports DirectX 9 (1)

CuteSteveJobs (1343851) | about a year ago | (#42484537)

I looked at Anand's and found nothing claiming the Z2670 was capable of anything other than DirectX 9. I used links to back up my assertions, even Intel's own damned spec sheet which says DirectX 9. You accuse me of FUD and didn't offer any links at all.

PS. If you have been trolling me, well played! :-) But your Atom still only runs DX9. ;-)

Re:Z2760 only supports DirectX 9 (0)

Anonymous Coward | about a year ago | (#42485183)

How about you look at the Z2760 datasheet?

2D/3D Graphics Core
— DirectX* 9.3, OpenVG* 1.1, OpenGL-ES* 2.0, OpenGL* 2.1 support

So what's next? Claiming Intel is lying in their own datasheets?

Re:Intel needs to embrace 3D to remain relevant (0)

Anonymous Coward | about a year ago | (#42473473)

That's because the PowerVR GPUs they use are shit.

Re:Intel needs to embrace 3D to remain relevant (2)

LordLimecat (1103839) | about a year ago | (#42477871)

Intel's onboard SB / IB graphics are pretty darn competitive for an "integrated" solution. I believe they support DX10, and certainly are sufficient for most games on a "modest" setting.

Smoke and Mirrors (1)

Anonymous Coward | about a year ago | (#42471041)

This is an Intel parlor trick to draw attention away from other vendors who have something new and interesting to offer in the sub 10W power envelope. The fact that they are pulling these shenanigans leads me to suspect AMD will have something interesting to show off at CES.

Re:Smoke and Mirrors (1)

Anonymous Coward | about a year ago | (#42471951)

Yep. January. The time of year when Intel trots out "100s of design wins" for "fabulous consumer technology" you're never going to see. It's part of the run-up to CES. This year they have an imaginary cable box, which is new, and imaginary tablets, which are not. It's in the autumn that we get amazingly cool server technologies we're never going to see. If I were them, I'd reverse this, as autumn is when the ramp-up to a big consumer holiday happens and winter is the best time to be warning enterprise customers off of competitor technologies.

Re:Smoke and Mirrors (1)

LordLimecat (1103839) | about a year ago | (#42477905)

I'm not aware of other players in the sub-10W region who can provide performance competitive with ANY of Intel's Core line.

Before responding, possibly go take a look at performance-per-watt comparisons between a modern Intel and anything ARM; they're in different leagues.

Limited run? (0)

GumphMaster (772693) | about a year ago | (#42471581)

Did anyone else read "Intel To Debut Limited-Run Ivy Bridge Processor" and ponder why anyone would want a processor that was guaranteed to run only a limited number of times? Perhaps this is the new monetization (I hate that word) strategy, where you are forced to buy desktop processor hours on subscription. Or perhaps the limited-runs could be along the lines of "5 runs with DVD/Bluray player or non-Microsoft OS running" to give the MPAA/Microsoft something else to prop up profits with?

Efficient my ass! (0)

Anonymous Coward | about a year ago | (#42472155)

Add the smoking-hot north bridge to the calculation, and you see that the actual values are complete shit. Intel is well known for telling this deliberate lie.
After all, they are a criminal company. Their "success" depends heavily on monopolistic behavior, blackmail, harassment, revolving doors and general lobbying. Never forget that.

This thing still can't hold a candle to ARM (also with NB) in comparison.

Re:Efficient my ass! (0)

Anonymous Coward | about a year ago | (#42473511)

The northbridge has been integrated into the CPU since Sandy Bridge. You are a fucking moron.

Intel's success depends heavily on high performance and reliability, something that neither AMD nor ARM CPUs have been able to match. Again, you are a fucking moron.

Re:Efficient my ass! (1)

LordLimecat (1103839) | about a year ago | (#42477931)

Their success is mostly because they produce some of the best and most efficient chips in any market.

You can argue that they got there by virtue of their HUGE R&D budgets, which were funded by the shady behavior you mention, but at this point choosing someone other than Intel would be a principled, rather than technical, decision (unless you need high core counts or extremely low power usage).
