The Economics of Chips With Many Cores

kdawson posted more than 6 years ago | from the please-insert-25-cents dept.

Upgrades 343

meanonymous writes "HPCWire reports that a unique marketing model for 'manycore' processors is being proposed by University of Illinois at Urbana-Champaign researchers. The current economic model has customers purchasing systems containing processors that meet the average or worst-case computation needs of their applications. The researchers contend that the increasing number of cores complicates the matching of performance needs and applications and makes the cost of buying idle computing power increasingly prohibitive. They speculate that the customer will typically require fewer cores than are physically on the chip, but may want to use more of them in certain instances. They suggest that chips be developed in a manner that allows users to pay only for the computing power they need rather than the peak computing power that is physically present. By incorporating small pieces of logic into the processor, the vendor can enable and disable individual cores, and they offer five models that allow dynamic adjustment of the chip's available processing power."
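
The summary describes cores that the vendor can switch on and off after sale, with the customer billed for what is actually enabled. As a purely illustrative sketch of that billing shape (the class, prices, and enforcement mechanism below are hypothetical and do not reproduce the researchers' five models):

```python
# Toy sketch of "pay for enabled cores" billing. Everything here (names, prices,
# the enforcement mechanism) is invented for illustration.

class ManyCoreChip:
    def __init__(self, physical_cores, price_per_core_hour):
        self.physical_cores = physical_cores
        self.enabled = 0
        self.price_per_core_hour = price_per_core_hour
        self.core_hours_billed = 0.0

    def set_enabled(self, n):
        # Vendor-controlled logic clamps requests to what is physically present.
        self.enabled = max(0, min(n, self.physical_cores))

    def run(self, hours):
        # The customer pays only for cores that were switched on, not idle silicon.
        self.core_hours_billed += self.enabled * hours

    def bill(self):
        return self.core_hours_billed * self.price_per_core_hour

chip = ManyCoreChip(physical_cores=64, price_per_core_hour=0.02)
chip.set_enabled(8);  chip.run(hours=700)   # steady-state workload
chip.set_enabled(48); chip.run(hours=20)    # short burst of peak demand
print(f"monthly bill: ${chip.bill():.2f}")  # (8*700 + 48*20) * 0.02 = $131.20
```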

How is this new? (3, Informative)

lintux (125434) | more than 6 years ago | (#22047860)

IIRC this has been done in mainframes for *ages* already...

How is this [business model] new? (0)

Anonymous Coward | more than 6 years ago | (#22047904)

Well mainframe technology does migrate down. Why not their economic model?

Re:How is this [business model] new? (4, Informative)

ozmanjusri (601766) | more than 6 years ago | (#22048144)

Why not their economic model?

Because it's dumb.

In 1999 I paid about AU$600 for a midrange Pentium Pro CPU. In 2008, I bought a midrange Xeon Dual-core for the massively increased price of... AU$600.

In 2000, I bought a shiny new Intergraph TDZ2000 with two PII 350s for the bargain cost of just $5,000. Now, Apple is prepared to sell me a Mac Pro with two 2.8GHz, quad core Xeons for the stupefying price of $2,799.00.

Now, explain to me again why it would be in my best economic interest to buy a computer with cores that could be disabled if I don't pay my rent?

Re:How is this [business model] new? (3, Insightful)

argiedot (1035754) | more than 6 years ago | (#22048210)

Now, explain to me again why it would be in my best economic interest to buy a computer with cores that could be disabled if I don't pay my rent?
I suppose because you could just buy the ones with some cores disabled and get someone who knows stuff to enable them again, like the way people did for some of the older nVidia cards that had some things disabled. Or maybe I don't know anything about how the two things work.

Re:How is this [business model] new? (1)

cheater512 (783349) | more than 6 years ago | (#22048290)

I'm fully supportive of this.

Works well for your average user and we all know that everyone else will just find flaws to turn on the unpaid cores.

Re:How is this [business model] new? (3, Funny)

CheShACat (999169) | more than 6 years ago | (#22048428)

That was my first thought. Then my second thought was having to go through the "Intel Genuine Advantage" activation process every 45 minutes.

Re:How is this [business model] new? (0)

Anonymous Coward | more than 6 years ago | (#22048430)

I even bought a complete quad-core system from Fujitsu-Siemens for 700 EUR and it works absolutely awesome.

Yes, I should swap the 2 GB of RAM for faster memory (it's only capable of 4 GB/s right now), but the graphics card is OK - a GeForce 8600 GS.

So no - crippled CPUs don't make sense for me either.

Re:How is this new? (1)

ErroneousBee (611028) | more than 6 years ago | (#22047952)

Not only that, but the spare "cores" can become CPUs or IO channels. Also, with Parallel Sysplex, you can shunt work between boxes on the fly. This means you can steal your test or other non-essential systems for mission-critical work.

I don't know whether this is possible with zLinux partitions, as a lot of the moving about of stuff is very much a z/OS function, i.e. done by the OS, not the hardware or virtualisation.

Requires a near-monopoly (5, Insightful)

Ed Avis (5917) | more than 6 years ago | (#22047970)

In mainframes you have pretty much a single vendor (IBM). Even in the days of Amdahl and Hitachi, once you were committed to a single vendor they had a lot of market power over you. So the vendor can set its own price, and squeeze as much money out of each customer as possible by making variable prices that relate to your ability and willingness to pay, rather than to the cost of manufacturing the equipment.

In a competitive market where 100-core processors cost $100 to produce, a company selling 50-core crippled ones for $101 and 100-core processors for $200 would quickly be pushed out of business by a company making the 100-core processors for $100 and selling them, uncrippled, for $101. I expect the Intel-AMD duopoly leaves Intel some scope to cripple its processors to maintain price differentials (arguably they already do that by selling chips clocked at a lower rate than they are capable of). But they couldn't indulge in this game too much because customers would buy AMD instead (unless AMD agreed to also cripple its multicore chips in the same way, which would probably be illegal collusion).

Compare software where you have arbitrary limits on the number of seats, incoming connections, or even the maximum file size that can be handled. It costs the vendor nothing more to compile the program with MAX_SEATS = 100 instead of 10, but they charge more for the 'enterprise' version because they can. But only for programs that don't have effective competition willing to give the customer what he wants. Certainly any attempt to apply this kind of crippling to Linux has failed in the market because you can easily change to a different vendor (see Caldera).
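
The MAX_SEATS point amounts to a single constant gating an otherwise identical product. A minimal sketch (the constant and the check are hypothetical, not any real product's code):

```python
# The only difference between the "standard" and "enterprise" builds is this
# constant, which costs the vendor nothing to change (names are hypothetical).
MAX_SEATS = 10  # compiled/configured as 100 for the 'enterprise' SKU

active_seats = set()

def login(user):
    if user not in active_seats and len(active_seats) >= MAX_SEATS:
        raise RuntimeError(f"seat limit ({MAX_SEATS}) reached; buy the enterprise edition")
    active_seats.add(user)
```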

Re:Requires a near-monopoly (1)

Desipis (775282) | more than 6 years ago | (#22048094)

I thought nVidia and ATI/AMD have been doing this kind of thing for years with the number of parallel units activated on their GPUs.

Re:Requires a near-monopoly (3, Insightful)

Anonymous Coward | more than 6 years ago | (#22048154)

To be fair to the graphics companies, they sometimes at least did that because of relatively low yields. If you can take a chip that has ten pipes, two of which are faulty, and disable those two faulty pipes, you've effectively created an eight pipe chip for nothing. This reduces the overall cost of producing a single chip, because a partial failure is still usable.

This is also why Sony used a Cell with only seven SPUs instead of the eight designed on the chip: if a single SPU fails in testing (which is much more likely than none failing), the chip is still usable. It pushes up yields significantly.

IOW: you're comparing the wrong business model. The model you're describing is "oh, this chip isn't quite up to spec, let's put it in a lower spec card where it will meet the spec", rather than "let's sell a fully capable chip deliberately crippled, and re-enable the crippled part later if the customer pays for it."
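
The yield argument above is easy to put numbers on. A back-of-the-envelope calculation, with the failure probability invented purely for illustration:

```python
# Salvage-yield arithmetic for the 10-pipe GPU example above.
# Assume each pipe fails independently with probability p (number made up).
from math import comb

p, pipes = 0.05, 10
perfect = (1 - p) ** pipes                              # all 10 pipes good
one_bad = comb(pipes, 1) * p * (1 - p) ** (pipes - 1)   # exactly one pipe bad

print(f"sellable as the full 10-pipe part: {perfect:.1%}")                       # ~59.9%
print(f"salvageable as an 8-pipe part (1 bad + 1 good pipe disabled): {one_bad:.1%}")  # ~31.5%
print(f"usable dies overall: {perfect + one_bad:.1%}")                           # ~91.4%
```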

S/W licensed per processor (2, Insightful)

petes_PoV (912422) | more than 6 years ago | (#22048164)

In a competitive market where 100-core processors cost $100 to produce, a company selling 50-core crippled ones for $101 and 100-core processors for $200 would quickly be pushed out of business by a company making the 100-core processors for $100 and selling them, uncrippled, for $101.

And when your software is licensed per processor at (let's say) $100 per CPU, your extra, unwanted 50 processors quickly become a burden. I'd be willing to pay more for a crippled processor if it saved me money elsewhere and there was no way to slice up domains to reduce the liability.
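
A quick total-cost comparison makes the point, using the grandparent's hypothetical hardware prices plus the $100-per-CPU licence assumed above:

```python
# Cost of ownership when software is licensed per core/CPU.
# Hardware prices come from the grandparent post; the licence fee is the
# hypothetical $100 per CPU from the parent.
license_per_core = 100

chips = {
    "100-core, uncrippled": {"cores": 100, "hw": 101},
    "50-core, crippled":    {"cores": 50,  "hw": 101},
}

for name, c in chips.items():
    total = c["hw"] + c["cores"] * license_per_core
    print(f"{name}: ${c['hw']} hardware + ${c['cores'] * license_per_core} licences = ${total}")

# The crippled part comes out $5,000/box cheaper unless licensing can be
# scoped to the cores you actually use.
```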

Re:Requires a near-monopoly (1)

Dr. Spork (142693) | more than 6 years ago | (#22048186)

I take it that the idea would be this: for the $100 chip you sell at $200, you get some extra money to subsidize the $100 chips that you sell for $100 in order to maintain market share.

If there is performance parity along the entire product line of two processor competitors, like there had been until the Core2 era, that doesn't stop crippling. You don't need collusion - both companies could have parallel reasons to offer tiered prices for differently-crippled variants.

But here's what I think is interesting: If future processors overshoot our needs so much that we'll only be using a small fraction of their available power, it will no longer be so important to be in the technological lead. In fact, if the bulk of the sales are in the $200 range, I wonder who will make more money: A company that makes the greatest processor, worth $800, but mostly sells crippled versions of it, or another company that can't make $800 processors, but makes perfectly acceptable uncrippled $200 - $300 chips?

My point is that in the future, almost all of us will be "low end" customers, in terms of comparing our needs to what is available in the way of processor power. So even if AMD ends up being only a low-end chip manufacturer, maybe that will turn out ok.

Re:Requires a near-monopoly (1)

Gerzel (240421) | more than 6 years ago | (#22048352)

Didn't someone predict that we'd only ever need 128 MB of RAM and that more RAM would be superfluous for most consumers?

While in theory technology might outpace demand, and I think it may very well happen someday, in practice this is something that I'll believe when I see it.

Right now there are a lot of flashy games out there. Users may want to run many more applications at once (or, more likely, turn on M$'s poorly executed eye candy and not notice their computers slowing down).

I don't think this is something companies like AMD should be drastically steering their policies towards. It will probably happen gradually, as most customers are already low-end.

Re:Requires a near-monopoly (1)

asliarun (636603) | more than 6 years ago | (#22048240)

..I expect the Intel-AMD duopoly leaves Intel some scope to cripple its processors to maintain price differentials...
Interestingly, of late, it is AMD that is trying to create product differentiation by crippling their processors, or at least by selling processors with one core switched off. They're trying to do this by selling "tri-core" processors based on their Barcelona/Phenom cores, which are nothing but an actual quad-core with a core turned off, either deliberately or because it is defective. They probably want to position this as a mid-range offering, to make it more competitive with Intel's relatively cheaper quad-cores.

When Paul Otellini was asked about this, he said that he would prefer selling processors in which all the cores are working. Didn't expect Paul to have a funny bone.

Of course, Intel has done this in the past as well (remember 386DX/SX?). However, it looks like Intel is moving away from this strategy, which is evident in the fact that many of their cheaper CPUs with lower cache are actually different designs, and not just CPUs with defective caches. Again, this is probably because Intel is much more confident about their process than AMD is, and hence not too bothered about dealing with partially defective cores.

Having said this, all these examples are of CPUs with cores/caches permanently disabled. It would be an interesting marketing strategy for mass-production CPU companies such as AMD and Intel to deploy many-core processors and turn cores on and off via, say, the internet (BIOS updates?). Processing power, in this case, could easily become a subscription service instead of a fixed asset.

If you take this concept to the logical extreme, however, you again go back to the days of central processing and dumb terminals, similar to what Google is trying to do. After all, why go through the hassle of physically installing a many-core processor in your personal computing device when you can subscribe to the processing power from a service provider?

Re:How is this new? (0)

Anonymous Coward | more than 6 years ago | (#22048162)

Then you have software problems - like, oh, that's a 4-CPU box, so you pay for Windows or whatever 3.98 times, SAS 4 times, plus a fudge factor. 3rd-party software licensing generally kills off this golden screwdriver theme. Note Amdahl actually had a mainframe with an accelerator pedal.

Re:How is this new? (1)

catwh0re (540371) | more than 6 years ago | (#22048348)

In a way they already do something similar when they sell one chip with various clock speeds. Artificial limitations are nothing new to the tech industry.

Re:How is this new? (1)

marafa (745042) | more than 6 years ago | (#22048504)

Hear, hear.
We have an IBM p570 LPAR that can even partition each core.

Hardware DRM.... (5, Funny)

foobsr (693224) | more than 6 years ago | (#22047880)

In related news, an initiative of car manufacturers spearheaded by Ford has introduced an enabling 'cylinder per need' model. Car performance is wirelessly monitored in real time to give the customer the option to add additional power according to his needs if he has signed up for a plan designed to optimally fit his profile (composed from his overall lifestyle information). This also creates an exciting new opportunity for the consumer to reduce his individual carbon tyreprint.

CC.

Re:Hardware DRM.... (1, Informative)

Anonymous Coward | more than 6 years ago | (#22048028)

Actually, disabling cylinders has been around and in limited practice for a while now.

That's one of the driving factors (hahaha) behind electrically controlled valves. (It's much more complicated to do when you have to manipulate the camshaft to disable the valves.)

Re:Hardware DRM.... (1)

shmlco (594907) | more than 6 years ago | (#22048122)

Yeah, and equally dumb. In both cases the manufacturer had to build it and pay for parts and materials and processing and all of the other costs involved, and you have the entire end product sitting there, whether you're using it to its full potential or not.

I may only use four of my eight cores most of the time, but there are eight of them there, nonetheless.

Re:Hardware DRM.... (5, Funny)

peas_n_carrots (1025360) | more than 6 years ago | (#22048150)

Those 100-cylinder engines sure are light. After all, the metal necessary to build such an engine would only make up the majority of the weight of the car. Use 10 cylinders to drag around the rest of the 90, now that's efficiency.

Re:Hardware DRM.... (1)

eiapoce (1049910) | more than 6 years ago | (#22048188)

. Car performance is wirelessly monitored in real time to give the customer the option to add in additional power according to his needs if he has signed to a plan designed to optimally fit his profile
So, generally speaking, you wouldn't have a problem with Microsoft software monitoring and running your car? Seriously, get a grip on reality; there are already jokes about that.

Re:Hardware DRM.... (1)

Gordonjcp (186804) | more than 6 years ago | (#22048224)

It's a valid point. Certainly the European car manufacturers have a "gentleman's agreement" to limit their high-end sports cars to a maximum speed of 155mph (around 250km/h). Now, I know that I wouldn't use that kind of power every day, but it would annoy me to know that the car was capable of more but prevented from doing so by an artificial limitation. If I'm paying for a 500bhp car, I want it to run like a 500bhp car...

Re:Hardware DRM.... (1)

aproposofwhat (1019098) | more than 6 years ago | (#22048322)

Your 500 bhp car will still run like a 500 bhp car, up to the agreed 156 mph limit.

It'll still accelerate like shit off a chrome shovel, and if you really want the 200 mph or so that 500 bhp will give you, it's possible to remap the ECU to remove the limit.

The best use for disabling cylinders is when driving in traffic - to be able to run on half the normal number of cylinders at idle saves a hell of a lot of fuel, especially in a 500 hp behemoth.

Disclaimer - I drive a slightly tweaked Scorpio Cossie that knocks out around 240 bhp - 150 mph (ish - never trust the speedo at that speed) , and am insanely jealous of BMW M5 drivers :P

I propose a new moderation option: (0)

Anonymous Coward | more than 6 years ago | (#22048404)

-1 Car analogy

New form of overclocking - "over coring" (2, Insightful)

Xhris (97992) | more than 6 years ago | (#22047888)

This should lend itself to a whole new form of hacking - buy the 10-core system and tweak it to use all 100.

This is the real case for virtualization... (1)

compumike (454538) | more than 6 years ago | (#22047910)

I've been looking for a new web host recently, and I'm consistently attracted to ones based on the Virtual Private Server concept -- your own box within another box. The multicore economics argument is definitely tied in here, where we can balance demand not just within our own enterprise, but between different consumers of computing time.

Beyond that, I don't really get it... if I have a certain computational workload X, I'd probably prefer to use more cores temporarily rather than pace the work longer over a smaller number of cores. Can they really make the cost incentives enough to fight that? They're really trying to change the model from paying for hardware to paying for cycles, but it's not clear why that should imply a time factor.
--
Get your code outside the computer! Microcontroller kits for the digital generation. [nerdkits.com]

same old as software rental... (5, Insightful)

k-zed (92087) | more than 6 years ago | (#22047912)

I don't want to "rent" the processing power of my own computer, thank you. Nor do I want to "rent" my operating system, or my music, or movies. I buy those things, and I'm free to do with them as I wish.

Renting your own possessions back to you is the sweetest dream of all hardware, software and "entertainment" manufacturers. Never let them do it.

Re:same old as software rental... (0)

Anonymous Coward | more than 6 years ago | (#22047954)

I have to agree, this is one of the most stupid ideas I've heard, from the user's perspective. For a start, it's easy to occupy a multicore machine by running multiple applications. Do I really want to start paying once I hit my 5th raytracer instance?

It's also one of the stupidest ideas I've heard from the HW manufacturer's perspective. It costs MONEY to make chips; you can't just sell them for half price and hope for the best.

Furthermore, the reason the Intels and IBMs of this world are putting multiple cores onto one chip is because it's CHEAP. There's not even a problem to be solved here. You'll buy a "CPU" in 2009 that costs the same as it did in 2003, except it'll have 8 or 16 cores.

Seriously this is one of the most stupid ideas I've heard this century, if you exclude everything that comes out of Ray Kurzweil's mouth.

Re:same old as software rental... (2, Insightful)

markus_baertschi (259069) | more than 6 years ago | (#22048142)

For the individual, personal computer, such a model will not fly, as outlined.

However, in the enterprise market this is already here. IBM has been using such an 'on-demand' model for its System p hardware for a couple of years now. For a small fee, IBM installs a bigger configuration (CPU, memory) than the customer bought. The additional hardware is used automatically in case of a failure (built-in replacement parts) or can be unlocked by the customer on the fly.

In the enterprise case it makes sense:

  • In enterprise servers the hardware cost is small compared to the engineering cost, so installing additional hardware does not cost much. A GB of memory costs much more for a high-end Unix server than for a PC, even if the technology of the components (SIMMs) is the same. The difference is in the much lower number of these servers sold and the additional complex engineering needed to build these machines.
  • The additional hardware is already there and can be unlocked and added to the configuration online. For many enterprise applications this alone is a huge advantage, as maintenance windows are scarce. Typically you have a maintenance window four times a year between Sunday 23:00 and Monday 02:30.

Markus

Re:same old as software rental... (0)

Anonymous Coward | more than 6 years ago | (#22048140)

Sorry. If you have a copy of Windows XP, for example, you do NOT own it. You purchase a license to use a copy in the manner that Microsoft has dictated in its End User License Agreement... including:

Consent to Use of Data. You agree that Microsoft and its affiliates may collect and use technical information gathered in any manner as part of the product support services provided to you, if any, related to the Product. Microsoft may use this information solely to improve our products or to provide customized services or technologies to you.

Life just sucks that way

Re:same old as software rental... (4, Insightful)

ozmanjusri (601766) | more than 6 years ago | (#22048178)

Life just sucks that way

Microsoft != Life.

Re:same old as software rental... (1)

matt206 (642292) | more than 6 years ago | (#22048380)

No, software is owned by the purchaser, and you don't have to pay Microsoft every year to keep it running.

Re:same old as software rental... (1)

bms20 (827647) | more than 6 years ago | (#22048396)

GODDAMN RIGHT!
Well said.

Re:same old as software rental... (0)

Anonymous Coward | more than 6 years ago | (#22048416)

In a server that someone else owns? (Web hosting, remote processing, or distributed computing like a render farm or such.) Yes, I could see that making sense. In fact I think such business models are in place already.

My own PC sitting on my desk? No thanks. I'll pass on that one.

erm... (2, Interesting)

Anonymous Coward | more than 6 years ago | (#22047934)

So, Intel is going to charge us less for a processor with 4 cores because we can turn three off most of the time? Or is the power saving supposed to make the cost of the chip less prohibitive?

Maybe it'll be a subscription service: $9.99 per month and 99 cents per minute every time you turn another core on.

You know what I don't get? (3, Interesting)

Moraelin (679338) | more than 6 years ago | (#22047946)

You know what I still don't get? Why's everyone acting like dividing a CPU into several separate cores is a good thing?

Let me compare it to, say, a construction company having a number of teams and a number of resources, e.g., vehicles:

1. One team, 4 vehicles. That's classic single core. Downside, at a given moment it might only need 2 or 3 of those vehicles. (E.g., once you're done digging the foundation, you have a lot less need of the bulldozer.)

2. Two teams, can pick what they need from a common pool of 4 vehicles. That's classic "hyperthreading". Downside, you're not getting twice the work done. Upside, you still paid only for 4 vehicles, and you're likely to get more out of them.

3. Two teams, each with 4 vehicles of its own. They can't borrow one from each other. This is "dual core." Downside, now any waste from point 1 is doubled.

But the one I don't see is, say,

4. Two teams with a common pool of 8 vehicles. It's got to be more efficient than number 3.

Basically #4 is the logical extension of hyperthreading, and it seems to me more efficient any way you want to slice it. Even if you add HT to a dual-core design, you end up with twice #2 instead of #4 with 4 teams and a common pool. There is no reason why splitting the pool of resources (be it construction vehicles or execution pipelines) should be more efficient than having them all in a larger dynamically-allocated pool.

So why _are_ we doing that stupidity? Just because AMD at one point couldn't get hyperthreading right and had its marketers convince everyone that worse is better, and up is down?
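
A quick Monte Carlo makes the pooled-versus-split intuition concrete. The per-thread demand distribution below is invented purely for illustration; the point is only that a shared pool can never do worse than a fixed split:

```python
# Toy comparison of option #3 (two fixed sets of 4 pipelines) against
# option #4 (one shared pool of 8), with made-up per-cycle demand.
import random

random.seed(0)
CYCLES, POOL = 100_000, 8

split_done = pooled_done = 0
for _ in range(CYCLES):
    a, b = random.randint(0, 6), random.randint(0, 6)  # per-thread demand this cycle
    split_done  += min(a, 4) + min(b, 4)               # dual core: 4 pipes each, no borrowing
    pooled_done += min(a + b, POOL)                    # shared pool of 8

print(f"split 4+4 : {split_done / CYCLES:.2f} instructions/cycle")   # ~5.1
print(f"shared 8  : {pooled_done / CYCLES:.2f} instructions/cycle")  # ~5.6
```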

Re:You know what I don't get? (3, Insightful)

lintux (125434) | more than 6 years ago | (#22047972)

You know what I still don't get? Why's everyone acting like dividing a CPU into several separate cores is a good thing?

AFAIK adding more MHz was getting more and more complicated, so it was time to try a new trick.

You misunderstood my question (1)

Moraelin (679338) | more than 6 years ago | (#22048032)

Well, yes, obviously. But that's not what I was asking. My question was _not_ "why don't they stick to the MHz race?"

What I'm saying is: OK, so now they have to expand in width, so to speak, instead of in MHz. Fine. But why is (A) two separate sets of, say, 3 pipelines each better than (B) a single pool of 6 pipelines shared by two front-ends, allocated dynamically? It's still 6 pipelines in total, only in the second case they can be dynamically allocated, with better results. If one particular thread could use 4 while another used only 2, solution A results in wasted cycles; solution B does not.

Why are you even on Slashdot? (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#22048060)

If you'd been awake over the last few years, you'd know that processors are going multicore because they can't be clocked much faster using existing semiconductor technology, so the "clock faster" path to regular progress is closed. There is nowhere to go in that direction.

As a result, processor manufacturers are putting the extra transistors that result from shrinking geometries into separate on-chip cores. Since all modern operating systems run numerous processes concurrently, this is just as good a method of speeding up normal computation. With "extra" cores handling all the background chores, a primary application no longer loses part of its CPU resource through multitasking, so the user perceives an improvement.

So everything you wrote was irrelevant.

Heh. Why are YOU on Slashdot? (1)

Moraelin (679338) | more than 6 years ago | (#22048172)

So everything you wrote was irrelevant.


No, everything _you_ answered was irrelevant, because you don't even seem to understand the question. You just repeat the marketing line without even understanding what was asked.

Yes, we need to do more things in parallel. That much is clear, Captain Obvious. The question is how we do that the most efficiently, with the same amount of silicon.

The question was, yes, if other CPU architectures and designs could still do those background tasks, but make better use of the same number of transistors. Or does going into such detail go so far over your head as to not even make sense?

So, heh, yeah. I'll answer with your own question: why are _you_ on Slashdot? I mean, seriously, if thinking deeper than repeating the marketing line isn't your thing, shouldn't you be more at home on some other sites?

Re:You know what I don't get? (5, Informative)

RzUpAnmsCwrds (262647) | more than 6 years ago | (#22048148)

Your metaphor on multi-issue CPUs is interesting, but not necessarily valid.

Instruction scheduling is the biggest fundamental problem facing CPUs today. Even the best pipelined design issues only one instruction per clock, per pipeline (excluding things like macro-op fusion which combine multiple logical instructions into a single internal instruction). So we add more pipelines. But more pipelines can only get us so far - it becomes increasingly more difficult to figure out (schedule) which instructions can be executed on which pipeline at what time.

There are several potential solutions. One is to use a VLIW architecture where the compiler schedules instructions and packs them into bundles which can be executed in parallel. The problem with VLIW is that many scheduling decisions can only occur at runtime. VLIW is also highly dependent on having excellent compilers. All of these problems (among others) plagued Intel's advanced VLIW (they called it "EPIC") architecture, Itanium.

Another solution is virtual cores, or HyperThreading. HTT uses instructions from another thread (assuming that one is available) to fill pipeline slots that would otherwise be unused. The problem with HTT is that you still need a substantial amount of decoding logic for the other thread, not to mention a more advanced register system (although modern CPUs already have a very advanced register system, particularly on register-starved architectures like x86) and other associated logic. In addition, if you want to get benefits from pipeline stalls (e.g like on the P4), you need even more logic. This means that HTT isn't particularly beneficial unless you have code that results in a large number of data dependencies or branch mispredicts, or if pipeline stalls are particularly expensive.

Multicore CPUs have come about for one simple reason: we can't figure out what to do with all of the transistors we have. CPUs have become increasingly complex, yet the fabrication technology keeps marching forward, outpacing the design resources that are available. This has manifested itself in two main ways.

First, designers started adding larger and larger caches to CPUs (caches are easy to design but take up lots of transistors). But after a point, adding more cache doesn't help. The more cache you have, the slower it operates. So designers added a multi-level cache hierarchy. But this too only goes so far - as you add more cache levels, the performance delta between memory and cache decreases, because there's only a finite level of reference locality in code (data structures like linked lists don't help this). You may be able to get a single function in cache, but it's unlikely that you're going to get the whole data set used by a complex program. The net result is that beyond a certain point, adding more cache doesn't do much.

What do you do when you can't add more cache? You could add more functional units, but then you're constrained by your front-end logic again, which is a far more difficult problem to solve. You could add more front-end logic, which is what HyperThreading does. But that only helps if your functional units are sitting idle a substantial percentage of the time (as they did on the P4).

So you look at adding both functional units and more front-end logic. You'll decode many instruction streams and try to schedule them on many pipelines. This is what modern GPUs do, and for them, it works quite well. But most general-purpose code is loaded with data dependencies and branches, which makes it very difficult to schedule more than a very few (say, 4) instructions at a time, regardless of how many pipelines you have. So, now, effectively, you have one thread that is predominantly using 4 pipelines, and one that is predominantly using the other 4.

Wait, though. If one thread is mostly using one set of pipelines, and one is mostly using the other, we can split the pipelines into two groups. Each will take one thread. This way, our register and cache systems are simpler (because we only have to keep track of one set of registers and one PC, again ignoring things like register renaming). We get nearly the same efficiency, but with a simpler design.

Wait once again! Each "thread group" is almost like an entire core. Each has a decoding front-end, a register file, and a pipeline with a full set of functional units. We can make this simpler yet! We'll design a single complete core, that can decode a single instruction stream. Then we just slap two of them on a die, and tie them together using some kind of bus.

But, wait, this isn't particularly efficient, you say. Why not? We're wasting cache space. Despite the fact that there are two independent instruction streams (and two register sets), main memory is shared by both cores. Since cache mirrors what's in main memory (although in write-back systems it is sometimes more up-to-date), we only need one cache for both cores. So the designer designs two cores, with (relatively) small independent caches, which share a common second or third-level cache.

Guess what? This is exactly how Core 2 and Barcelona work. Is it the best architecture we could make with the number of transistors we have? No. But it's about the best we can do with the finite design resources and theoretical parallelism issues we have today. Multi-core processors aren't the most elegant or exciting solution, but they are cheaper to design and perform well enough in practice.

The designs we have are by no means the only approach. Many other designs have been tried, but none so far are as successful in practice at running today's general-purpose branch-happy code. That's why we have what we have.

Well, thanks for the answer (1)

Moraelin (679338) | more than 6 years ago | (#22048244)

Well, first of all, thanks for the in depth answer.

Another solution is virtual cores, or HyperThreading. HTT uses instructions from another thread (assuming that one is available) to fill pipeline slots that would otherwise be unused. The problem with HTT is that you still need a substantial amount of decoding logic for the other thread, not to mention a more advanced register system (although modern CPUs already have a very advanced register system, particularly on register-starved architectures like x86) and other associated logic. In addition, if you want to get benefits from pipeline stalls (e.g like on the P4), you need even more logic. This means that HTT isn't particularly beneficial unless you have code that results in a large number of data dependencies or branch mispredicts, or if pipeline stalls are particularly expensive.


Well, yes, that's what I was getting at.

Sure, each HT pseudo-core still has a decoder. So does a separate core. So IMHO 2x cores with 2x decoders and 4x pipelines each should really be roughly the same amount of silicon as 1x core with 4x decoders and 8x pipelines. It's still a total of 4 decoders, 4 register files, and 8 pipelines, right? The question is whether we could make better use of those in other ways than splitting it down the middle.

Wait, though. If one thread is mostly using one set of pipelines, and one is mostly using the other, we can split the pipelines into two groups. Each will take one thread. This way, our register and cache systems are simpler (because we only have to keep track of one set of registers and one PC, again ignoring things like register renaming). We get nearly the same efficiency, but with a simpler design.


If one thread is mostly using one set of pipelines and the other is mostly using the other, yes. But IMHO:

A) "mostly" doesn't mean all the time. If only 5% of the time one core could use one extra pipeline, while the other is idle, you'd still see slightly better speed out of a shared design.

B) That's already assuming you'll know exactly how many pipelines will each thread ever need. Unless you're also writing all the software for that CPU, that seems a bit less clear.

Case in point: look at all the CPUs with 2 or 4 cores _and_ 2 decoders per core. Are you sure that the two decoders on core 1 combined will never ever need an extra pipeline while core 2 has one that's currently stalling?

But, really, I'd buy the argument if they didn't also pack HT on those cores. Once they went that route, it tells me that they're already not entirely sure that 1 decoder will always need exactly X pipelines. Never more, never less.

That said, though, ok, I'll concede the point about simplicity. I can see how a multi-core design would be simpler.

Re:Well, thanks for the answer (1)

philipgar (595691) | more than 6 years ago | (#22048470)

One major problem you're missing is that having an extra decoder on the chip (one that is used by another core) is not, and cannot be, that useful to the other core. The problem is that accessing the other decoder will incur a huge latency penalty (20+ cycles). During those cycles, dependent instructions will generally stall in the main pipeline, and overall throughput could be decreased. Of course, the scheduling to choose the other one is also a nightmare.

Comparing it with the construction analogy: if you were building a highway that is 20 miles long, having all your construction vehicles in one spot would significantly slow down each vehicle. Having them spread out could be useful (if they're all doing useful work), but having different vehicles at different intervals doesn't make sense if it requires workers to constantly walk down the highway to get the other vehicle. What does make sense is to have separate work crews operating on different portions of the highway at the same time. This way each portion has its own functional units (vehicles) that are all somewhat close together.

Trying to share them all doesn't make sense unless one of the functional units is rarely used. This is actually done, or could be done, on some cores. For example, the Niagara processor has 8 cores but only a single FPU. This allows any code that needs it to use it (although at a degraded speed), and it makes a lot of sense for code where the FPU is rarely used (as replicating the FPU 8 times is expensive). Such a thing is also done with CPU cores when it comes to memory requests: there is a shared L2 that has limited bandwidth, and as long as only one of the 2 cores is requesting from it at a time, things should be okay. Of course, as we get more advanced these tradeoffs will only increase, resulting in what will likely be highly heterogeneous processors.

Phil

Re:You know what I don't get? (2, Informative)

peas_n_carrots (1025360) | more than 6 years ago | (#22048206)

"..because AMD at one point couldn't get hyperthreading right and had its marketers convince..."

Quick history lesson. Intel tried pawning off hyperthreading to the market. If you mean that AMD should have done hyperthreading, perhaps you should look at the reviews/benchmarks to see that it reduced performance in many cases. In the future, more software might be able to take advantage of increased thread parallelism, but that future is not now, at least in the x86 world.

Yes, yes it is. (1)

Nursie (632944) | more than 6 years ago | (#22048312)

That future is most certainly now. It's been here for a while.

Parallel processing is not some weird dream, way off in the future, that lots of people here on slashdot think it is. It's a reality and it's here now.

In fact it's been with us since the 70s in the form of multi-process software.
Multithreading has some idiots running scared ("It's so *hard*!" being their favourite lie), but it's been with us for quite some time. I've been writing multi-threaded server and workstation software for about 8 years now and I'm not any sort of pioneer.

The fact that GAMES currently don't usually have threads is not in any way the same thing. And even a game benefits from being able to run on one of the cores whilst the whole OS (and anything else running) gets shipped off to the other.

Yes and no (2, Insightful)

Moraelin (679338) | more than 6 years ago | (#22048324)

Quick history lesson. Intel tried pawning off hyperthreading to the market. If you mean that AMD should have done hyperthreading, perhaps you should look at the reviews/benchmarks to see that it reduced performance in many cases. In the future, more software might by able to take advantage of increased thread parallelism, but that future is not now, at least in the x86 world.


While I'll concede the point that Intel's first implementation was flawed, you can't judge and damn a technology for all eternity just by its first implementation. In the meantime even Intel's competitors (e.g., Sun) are implementing it, so it can't be that horribly worse than nothing.

Plus, then by the same kind of historical reasoning we should have said goodbye a long time ago to such stuff as:

- any kind of computing or calculating machines. After all, Babbage tried pawning off that idea to the market, and his implementation was never even finished.

- heavier than air airplanes. The first attempts with kites and bird wings were an outright disaster. We should have buried that idea right there and then.

- using rockets for space travel. There was this medieval Chinese dude who tried it first, with completely disastrous results.

- breech loaded guns. The first attempts had _major_ problems with sealing the barrel, because of poor tolerances.

- cavalry. It just wasn't that horribly good before it successively also got a good saddle, horseshoes, stirrups, and specially bred horses. There's a reason why the Romans created their empire with elite infantry, and the cavalry was just some specialized auxiliary.

- in fact, even earlier, we shouldn't have had even chariots. I mean, until someone invented a harness that allowed horses to pull one, it was pretty much useless. We know that the Sumerians tried using oxen there, and it couldn't have been that horribly effective. Should have discarded that idea right there and then.

- agriculture. Until the right plants, irrigation and cats became available, it was very much a losing proposition wherever it was tried.

Etc, etc, etc.

Re:You know what I don't get? (1)

eiapoce (1049910) | more than 6 years ago | (#22048232)

Basically I think (I am not an engineer) that building a multicore is easier than further development of hyperthreading. In other words, I suppose replicating 2 or more copies of the same "work" on a chip is faster and cheaper than continuously developing new architectures that share a common pool. Otherwise your comment makes much sense.

Would we know the difference? (5, Informative)

SanityInAnarchy (655584) | more than 6 years ago | (#22048302)

I know that on Linux, I cannot immediately tell the difference between an SMP-enabled kernel on a single-core Hyperthreading system, and an SMP-enabled kernel on a dual-core system with no hyperthreading.

In either case, I'm fairly sure I see at least two items in /proc/cpuinfo, I need an SMP kernel, etc. So if someone (Intel) suddenly decided to make a dual-core hyperthreaded design in which the "teams" actually shared a common pool, would I notice, short of Intel making an announcement?
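
For what it's worth, one rough way to tell the two apart on Linux without an announcement is to compare the "siblings" and "cpu cores" fields that x86 kernels expose in /proc/cpuinfo. A quick-and-dirty sketch (not robust parsing, and it assumes those fields are present):

```python
# Crude hyperthreading check via /proc/cpuinfo (x86 Linux).
# If 'siblings' > 'cpu cores' for a package, some logical CPUs are HT threads.
def ht_check(path="/proc/cpuinfo"):
    info = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, _, val = line.partition(":")
                info.setdefault(key.strip(), val.strip())  # keep first occurrence
    siblings = int(info.get("siblings", 1))
    cores = int(info.get("cpu cores", 1))
    print(f"logical CPUs per package: {siblings}, physical cores per package: {cores}")
    print("hyperthreading in use" if siblings > cores else "no hyperthreading detected")

ht_check()
```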

As for your assertion, a quick scan of Wikipedia suggests that you're a bit naively wrong here. (But then, I'm the one pretending to know what I'm talking about from a quick scan of Wikipedia; I suppose I'm being naive.) Wikipedia makes a distinction between Instruction level parallelism [wikipedia.org] and Thread level parallelism [wikipedia.org], with advantages and disadvantages for each.

One of the advantages of thread-level parallelism is that it's software deciding what can be parallelized and how. This is all the threading, locking, message-passing, and general insanity that you have to deal with when writing code to take advantage of more than one CPU. As I understand it, a pipelining processor essentially has to do this work for you, by watching instructions as they come in, and somehow making sure that if instruction A depends on instruction B, they are not executed together. One way of doing this is to delay the entire chain until instruction B finishes. Another is to reorder the instructions.

But even if you consider this a solved problem, it requires a bit of hardware to solve. I'm guessing at some point, it's easier to just throw more cores at the problem than to try to make each core a more efficient pipeline, just as it's easier to throw more cores at the problem than it is to try to make each core run faster.

There's also that user-level interface I talked about above. With multicore and no hyperthreading, the OS knows which core is which, and can distribute tasks appropriately -- idle tasks can take up half of one core, the gzip process (or whatever) can take up ALL of another core. With multicore and hyperthreading, the OS might not know -- it might simply see four cores. And with multicore, hyperthreading, and shared pipelines, it gets worse -- as I understand it, there's no longer any way, at that point, that an OS can specify which CPU a particular thread should be sent to. Threading itself may become irrelevant.

Well, anyway... What confuses me is that we still haven't adopted languages and practices that naturally scale to multiple cores. I'm not talking about complex threading models that make it easy to deadlock -- I'm talking about message-passing systems like Erlang, or wholly-functional systems like Haskell.

Hint: Erlang programs can easily be ported from single-core to multi-core to a multi-machine cluster. Haskell programs require extra work at the source code level to be made single-threaded, and can (like Make) use an arbitrary number of threads, specifiable at the commandline. They're not perfect, by far; Haskell's garbage collector is single-threaded, I think. But that's an implementation detail; most programs in C and friends, even Perl/Python/Ruby, will not be written with multiple cores in mind, and, in fact, have single-threaded implementations (or stupid things like the GIL).
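
The message-passing style the comment points to isn't limited to Erlang; even in Python the same shape is available through multiprocessing queues, which is enough to spread work over however many cores the machine reports. A toy sketch (the workload is invented purely for illustration):

```python
# Workers communicate only through queues, so the same code runs on 1 or N cores.
from multiprocessing import Process, Queue, cpu_count

def worker(inbox, outbox):
    for n in iter(inbox.get, None):            # None is the shutdown message
        outbox.put((n, sum(i * i for i in range(n))))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(cpu_count())]
    for p in procs:
        p.start()
    jobs = list(range(1, 33))
    for n in jobs:
        inbox.put(n)
    for _ in procs:
        inbox.put(None)
    results = dict(outbox.get() for _ in jobs)
    for p in procs:
        p.join()
    print(results[32])                          # sum of squares below 32
```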

Well, yes, but... (1)

Moraelin (679338) | more than 6 years ago | (#22048344)

Well, yes, the thing about thread level parallelism vs instruction level parallelism is very insightful and true, but it only says why we're leaving case #1 behind. Cases #2, #3 and #4 all had thread level parallelism.

As for the languages, good question. I guess because it's cheaper to use existing skills and libraries than to port everything to Erlang? No real idea, though. I'm sure someone is better qualified than me to answer that.

What's wrong with this picture? (1)

c.r.o.c.o (123083) | more than 6 years ago | (#22047992)

First of all, most people buy low to mid-range CPUs and other goods, and while this may be enough to cover the production costs, the manufacturers' largest profits are on the high end CPUs, cars, watches, etc. Currently the increased price tag is justified to some extent by the increased quality, performance and even status given by the high end goods. But under the proposed model, there would be no physical difference between the CPUs, other than artificial limitations imposed by the manufacturer. Suddenly the increased price would be seen not as a result of a better product, but of greed.

Which brings me to the next point. If there are no physical differences, what would stop someone from removing them? Intel has a long history of shipping higher-spec CPUs underclocked so as not to swamp the market with fast CPUs. And yet in all cases people found ways to overclock them. The major factor that prevented many from doing so was the uncertainty of whether their CPU could handle the increased speeds or not. But what about when they KNOW the chips are identical?

And how would the manufacturer upgrade or downgrade the CPU? Yet another Windows Genuine (dis)Advantage?

Re:What's wrong with this picture? (1)

Dr. Spork (142693) | more than 6 years ago | (#22048074)

AMD's upcoming 3-core chips were actually supposed to be 4-core chips, but one of the cores had a defect in it so it got turned off. Even if there were a way to reverse the "turning off" of the core, it wouldn't do you any good. In the future, AMD might turn off the fourth core not because of a defect, but just because it wants to sell a chip at a lower price point without diluting the margins on their high end. The point is, you wouldn't know which of these reasons are responsible for your chip being thrown into the "three-core" bin.

Why? (2, Interesting)

RuBLed (995686) | more than 6 years ago | (#22047994)

If one could make a 5-core processor for $300 and sell it with all 5 cores enabled to a customer for $600, why would he sell the same unit for $400 with only 2 cores enabled?

Wouldn't he profit more if he sold the 5-core processors all at $600 and made a separate 2-core processor costing $200 to produce and sold it for $400?

Well, if they're going to rent it (as parts of TFA suggest), it would make sense, but if they're not, then it's profit left unmaximized.

Re:Why? (0)

Anonymous Coward | more than 6 years ago | (#22048042)

The idea is that you want to give your demanding customers a reason to pay more than $400 for a processor, while also selling to customers who don't insist on having the highest available performance. If you don't offer them a $400 processor, they'll buy one from the competitor.

Re:Why? (4, Insightful)

Anne Thwacks (531696) | more than 6 years ago | (#22048128)

Because in reality, it costs $4.99 to make the chip, and $10,000,000 to design it.

The cost of designing one core is the same as the cost of designing 10 or 100 cores, because copy and paste was invented several years ago. The cost of adding a core to the design is about 1%.

There might be a case for powering down unused processors to save energy, and there is a case for selling cheaper processors with reduced core counts where some cores don't work, but there is no case for disabling working processors for economic reasons.

Sun's Niagara technology differs, 'cos it has "virtual cores" which give you more virtual cores, but slower. It's very good if you multi-thread (run Apache) and p*ss-poor if you don't (run Windows).
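
The first point above is plain fixed-cost amortization; a one-liner with the same (hypothetical) numbers shows why disabling working cores saves the vendor essentially nothing:

```python
# Amortizing the fixed design cost over volume, using the figures quoted above.
design_cost, unit_cost = 10_000_000, 4.99
for volume in (100_000, 1_000_000, 10_000_000):
    print(f"{volume:>10,} units: ${design_cost / volume + unit_cost:.2f} per chip")
# At real CPU volumes the per-chip cost is dominated by the $4.99, not the design.
```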

Re:Why? (1)

rm999 (775449) | more than 6 years ago | (#22048152)

"Wouldn't he profit more if he could sell the 5 core processors all at $600 and make a separate 2 core processor for the price of $200 and sell it for $400?"

Economy of scale says not necessarily. If you can build a factory that only builds one product, you can make it incredibly efficient. One possibility under this plan would be to intelligently disable cores. For example, let's say there is some failure rate in each core. The chips with high failure rates can have the failed cores disabled, and the company can still turn a profit (this ignores the lame renting idea).

I know graphics cards manufacturers do this - they disable pipelines and lower the clock rate in their cards that have high failure rates, and then sell it as a cheaper model.

Re:Why? (0)

Anonymous Coward | more than 6 years ago | (#22048168)

The only advantage I can see to this is that if you have all your fabs making 5-core processors you may be able to reduce your mfg costs.
Just like how nVidia makes chips with all the pipelines present but disables some of them - usually because they fail testing, but on occasion (very rarely) to fill demand for "cheaper" chips.

Though I still don't see it becoming common practice to disable perfectly good hardware just to fill a cheaper price point.

Re:Why? (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22048208)

Ahh, but if he sells the same unit for $400 with only 2 cores enabled, he can then come back and charge $100 per core to re-enable them, making it a net $700 for 5 cores working.

This is very common in the mainframe world: most mainframes shipped historically were shipped with all the CPUs and expansion boards already populated, just not turned on. This allowed the manufacturer to "upgrade" the system while it's still running. No need for hot-swap hardware.

This is a truly stupid idea (2, Interesting)

SmallFurryCreature (593017) | more than 6 years ago | (#22048002)

In theory it makes sense, and some of you might point at mainframes as an example. However, that would be like comparing cars to trucks (real trucks, not big cars); they are both vehicles and a company might use both, but their usage is totally different.

PCs just ain't upgraded: either they are good enough or they are replaced. I love building my own computer but am not so crazy as to replace the CPU whenever a new clock speed comes out, and this means that even a self-builder will often have to bite the bullet and just replace everything.

Be honest, how often in business do you upgrade your desktops by replacing the CPU?

We can test this easily: in the era of the P3, a lot of office systems were DUAL ready, so that when your needs increased you could add another P3 and have lots more power. How many of you did that with a P3 that had been in the office for more than a year?

This scheme seems like overthinking the problem. PCs in my experience either last until they die, by which time it's cheaper to buy new than to upgrade/repair, or they are simply replaced with the latest shiny model because tech moves so fast that upgrading just the CPU will turn everything else into a bottleneck. Just check how many different types of memory we have had over the years. Would you really want a quad core on your IDE-33 motherboard? Play DVDs on a single-speed CD-ROM?

Either you need all the cores now, or by the time you activate them because your apps need them everything else will need to be upgraded too and a brand new CPU will be available that is far better AND cheaper.

But in a way we have had this solution for a long time now, though instead of activating extra cores when paid for, chipmakers sell defective chips for a reduced price, so you've still got a 4-core inside your machine but only 2 actually function (not sure whether this happens with entire cores, but it is of course the case with cache memory).

I don't see this happening, especially if you consider that an army of nerds would be trying their best to break the enabling code to get their extra cores for free. Just see what happened with the "dual" P2s and cheapo P3s; Intel would have a heart attack.

Re:This is a truly stupid idea (1)

MidnightBrewer (97195) | more than 6 years ago | (#22048496)

I agree that their approach to the problem is based on a flawed understanding of how processor development works, not to mention the tech industry's marketing strategy ("People like to buy shiny new objects on a regular basis."(tm)). We are still a long way from reaching a design plateau where we have achieved some ultimate chip design that can no longer be improved on.

When you buy a computer, you buy it for the worst-case scenario. Your processing needs are probably not going to mysteriously increase over time; it's not like game developers are going to say, "Let's code for *ten* processors this year" while processor manufacturers rest on their laurels and cheer them on. Even if you could enable four new cores to keep up with more power-hungry applications, this does you no good whatsoever if the latest dual-core chip is ten times faster.

Finally, the idea of being comfortable with paying a certain price now only to see those costs continue, and worse, increase over time is just bad math. Once you buy a piece of hardware, you want it to remain yours. The only way this scenario seems remotely viable is in a shared computing environment such as a college campus, not in the home of a private citizen. It's just silly, I tell you, silly.

Re:This is a truly stupid idea (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22048498)

The thesis is so poor it's not even wrong.

Assumption 1: "the associated cost of buying unused computing power becomes more prohibitive."

Assumption 2: "the cost of manufacturing the processor does not rise linearly with the number of cores on the die"

Can you say "mutually contradictory assumptions"?

Do I understand this right? (5, Insightful)

Dr. Spork (142693) | more than 6 years ago | (#22048004)

TFA is written really badly, but from what I gather, the "more advanced" models for figuring out how much to charge for chips go like this:

1. Everybody gets the same chip, but it will be crippled unless you pay the highest price.

2. Everybody gets the same uncrippled chip, but there's a FLOPS meter on it that phones home, and you pay Intel according to the amount of numbercrunching your chip did for you.

Both of these models seem completely retarded to me, although the first is already sort of in use in the CPU/GPU market. Have modern processors overshot our needs by so much that our big worry now is to find innovative ways to cripple them? If so, maybe this processor war we're fighting is ultimately not even worth winning.
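
For what it's worth, a quick sanity check on model 2 with invented numbers (the up-front price, metered base price, and per-core-hour rate below are all hypothetical):

    # Hypothetical comparison of up-front vs. metered CPU pricing; every number is invented for illustration.
    UPFRONT_PRICE = 600.0    # pay once, all cores enabled forever
    METERED_BASE = 300.0     # cheaper chip up front...
    METERED_RATE = 0.02      # ...plus an assumed fee per core-hour actually used

    def metered_total(core_hours):
        # Total cost under the phone-home model after a given amount of use
        return METERED_BASE + METERED_RATE * core_hours

    break_even = (UPFRONT_PRICE - METERED_BASE) / METERED_RATE
    print(f"metered only stays cheaper below {break_even:.0f} core-hours of lifetime use")

With those made-up rates the break-even point is about 15,000 core-hours (very roughly two years of one busy core); past that, the meter has already cost you the difference, and it keeps running.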

Re:Do I understand this right? (1)

foobsr (693224) | more than 6 years ago | (#22048238)

If so, maybe this processor war we're fighting is ultimately not even worth winning.

Probably more a sign of a new kind of software gap; IMHO it's due to still-missing AI (not everyone is dealing with video/visual data), which in turn is caused by an imbalance in basic-research investment that favours 'hard science' (with the assumption that there is much more to AI than 'logic', even if it is 'fuzzy').

If there were 'intelligent' applications that could fix Joe Sixpack's everyday problems more autonomously (e.g., "write this letter to ...!"), you would need even more power than you have today.

CC.

ChipBricks? (0)

Anonymous Coward | more than 6 years ago | (#22048016)

All I can see is another fight between hackers and chip makers to "unlock" the chips.

Aaannnd (1)

AndGodSed (968378) | more than 6 years ago | (#22048046)

Will it work with Linux?

Seriously though, how will this be managed, and how will it tie in with open source business models? If anyone can see the source code of the program "managing" this, anybody can open up whatever cores they need.

I can't see this working.

Broken economics... (1)

Goalie_Ca (584234) | more than 6 years ago | (#22048086)

CPU economics are all about yields. They will design a chip, say with 8 cores. Some of the cores might have manufacturing problems so they disable them. The chips with all 8 working cores cost more while the chips with 4 or 6 working cores cost less.

Back in the "olden" days of two years ago the same would happen but with clock speed. The chips that could clock higher without problems got sold as the 1800+ while ones that failed under testing at higher frequencies would get sold as 1600+.

Chips use so much power that if all circuits were enabled for any given period of time the whole chip would fry. We already disable/undervolt quite a lot. There are chips out there already where entire cores get shut off.
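
The OS-level version of this already exists: Linux CPU hotplug lets you take individual cores offline through sysfs. A minimal Python sketch (assumes the usual /sys/devices/system/cpu layout, needs root to actually flip a core, and cpu0 typically can't be offlined):

    # Minimal sketch of Linux CPU hotplug via sysfs; needs root and a kernel with CONFIG_HOTPLUG_CPU.
    from pathlib import Path

    def set_core_online(cpu: int, online: bool) -> None:
        # Write "1" to bring a core online, "0" to take it offline
        Path(f"/sys/devices/system/cpu/cpu{cpu}/online").write_text("1" if online else "0")

    def online_cores() -> list[int]:
        # Parse the kernel's online-core list, e.g. "0-3,6"
        text = Path("/sys/devices/system/cpu/online").read_text().strip()
        cores = []
        for part in text.split(","):
            lo, _, hi = part.partition("-")
            cores.extend(range(int(lo), int(hi or lo) + 1))
        return cores

    if __name__ == "__main__":
        print("online cores:", online_cores())

The difference, of course, is that here the owner holds the switch, not the vendor.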

Calculators (3, Interesting)

Detritus (11846) | more than 6 years ago | (#22048108)

Someone already mentioned mainframes. Something similar is often done with calculators. Rather than design a new chip for each model, they design a single chip with all of the features. In mid-range and low-end models, it is crippled by the design of the keyboard and/or jumpers. It is often cheaper to dumb down a single hardware design than to produce unique designs for each segment of the market.

Come on, this "researcher" proposes DRM for CPUs (2)

eiapoce (1049910) | more than 6 years ago | (#22048158)

In the sense of Digital Restriction Management, part of the article states:

This can be accomplished with small pieces of logic incorporated into the processor that enables the vendor to disable/enable individual cores
Now think once or maybe twice about it. The situation would be that of a datacenter manager, who probably handles sensitive data, letting the vendor mess in real time with the CPUs (and possibly the data) driving the system, just because he wants to save a few hundred dollars on a digitally castrated chip. Though idiocy is a widespread illness, I don't see who could be such a moron. This could only be acceptable to the CIA or the vendors themselves; not long ago the Pentiums with the serial number on chip were rejected by the market, and this is much worse.

This business model is dead meat to me. I think the market will continue to offer processors classified by their maximum data processing rate and priced accordingly. I don't see a future for this Processor Restriction Management, nor for the career of the guy who wrote the article in the first place. Suggestion: after he's been dumped from the university, don't hire him as a datacenter manager.

Here's a better business model (2, Interesting)

WaZiX (766733) | more than 6 years ago | (#22048166)

1) Sell your super high-power 20-core CPU uncrippled.
2) Make a platform where researchers can rent CPU power.
3) Allow your customers to rent out their unused CPU power/cores.
4) Charge the researchers double what you pass on to your customers.
5) Profit! (From both the sale and the rental afterwards.)

And there is no ?...

Re:Here's a better business model (0, Funny)

Anonymous Coward | more than 6 years ago | (#22048406)

While that's a far brighter solution than what these idiots came up with, it still doesn't address my foremost concern on the subject: I don't want my damned computer to have another corporate backdoor recording everything my CPU touches on some chip manufacturer's server farm.

It's like this:

Regular assholes work like this:

1. You eat food.
2. You digest food.
3. You shit digested food.

Proposed assholes work like this:

1. You eat food.
2. A stranger sticks his grubby hands up your asshole and fondles your shit.
3. You digest food.
4. Said stranger accounts for how much food was digested.
5. You shit digested food.
6. Stranger makes you pay only for the food you ate.

See, I don't want a stranger fondling my shit while it's still inside me. I'd much rather just estimate beforehand how much food I need, and buy accordingly. If I only need 2 servings, I don't want to have to eat 5 servings and then let someone repossess my leftovers.

I think I'm going to use this metaphor for all of these retarded business models.

Nobody bought the original DivX idea.. (1)

Duncan Blackthorne (1095849) | more than 6 years ago | (#22048196)

..so why would they bite on this one? Here, you can buy this processor really cheap, but every time you want to use it, you have to call us and pay a rental fee.

Ridiculous. Besides which, it's a no-brainer that there'd be a zero-day hack to enable all the available processing power on a given chip.

This should be illegal (0)

daem0n1x (748565) | more than 6 years ago | (#22048200)

IMHO, this should be illegal. Compete in improving, not crippling.

Nice idea, then reality hits (2, Insightful)

JRHelgeson (576325) | more than 6 years ago | (#22048212)

So, what would happen if the Microsoft DRM update management and monitoring "feature" has a "bug" and hits 100% utilization as it tries to verify the authenticity of, and my right to possess, my entire music collection... do I have to pay a processor tax for that? What about a runtime condition? An app locks up and hits 100% utilization until it is killed. OOPS, I need to ante up for the TFLOP tax. Or when I file my annual procmon return I can apply for an earned op/sec credit, filing as head of household...

I'm not about to pay a tax on other people's poorly written software.

Try and sell this to open source geeks (1)

OeLeWaPpErKe (412765) | more than 6 years ago | (#22048218)

It just won't work.

Also in the news (0)

Anonymous Coward | more than 6 years ago | (#22048222)

This morning a manufacturer of toilets presented their new SecureProtect(tm) flushing technology. The new toilet range, sold at a 20% discount to toilets that lack SecureProtect(tm), will contain a level of flushing power that is enabled by default. For the occasion where the user is experiencing additional load, the user may swipe a credit card through the SecureProtect(tm) slot and double or even triple (by choice) the flushing power available.

Whyyyyyyyy (1)

Kashgarinn (1036758) | more than 6 years ago | (#22048242)

I've never understood how customers really benefit from a model of producing an x-core product and selling it as (x-y)-core, especially as we all know that they never sell any version of the product below what it actually costs to produce. So at minimum you're paying for what it cost them to make the full product anyway, and at maximum you're paying extra for having the product locked in some arbitrarily stupid way.

When you're already profiting from making the product at the lowest tier... with the highest-tier capability already in it...

In case you were still guessing, the model is:
1) Create a single production line for your product, making it cheap and affordable
2) Add in an arbitrary, stupid lock to increase the perceived value of the unlocked product
3) Profit!

I'll never buy anything sold with this model, it's stupid and meaningless.

K.

Re:Whyyyyyyyy (1)

Goffee71 (628501) | more than 6 years ago | (#22048314)

I'd buy into this if:
  • Each PC had a big panel on the front with a key needed to activate each processor and a big red ON switch like they have on ICBM launch desks in the movies.
  • There were big LEDs for each 25% of CPU power used on each core, to make you feel like you're in charge of something powerful.
  • The panel made a 'Starship Enterprise warp engaging' sound whenever you activate a core.
  • And made that Death Star 'tractor beam off' sound when you deactivate one.
Meet these demands and I might just think about it; until then, bog off with your silly ideas!

STOP TAGGING whatcouldbpossiblygowrong ALREADY (3, Insightful)

quitte (1098453) | more than 6 years ago | (#22048254)

really. STOP IT!

Re:STOP TAGGING whatcouldbpossiblygowrong ALREADY (1)

Arimus (198136) | more than 6 years ago | (#22048298)

Sadly, in this instance I think it's valid...

Pity it is about the only bloody occurrence where it is, but throw enough darts and one is bound to hit the board...

Re:STOP TAGGING whatcouldbpossiblygowrong ALREADY (0)

Anonymous Coward | more than 6 years ago | (#22048480)

I love the "whatcouldpossiblygowrong" tag.

Personally, I find it interesting how many Slashdot stories fall so easily into the category of "whatcouldpossiblygowrong". Maybe this is a by-product of the way stories are pitched here, with sensational, fearmongering write-ups getting the most play. Or maybe it's that nerds have an innate tendency to ask this question; that's the essence of the hack, right? We see the technological or societal holes and then we're compelled to exploit them, or figure out ways to fix them, or just point and say, "Oh, yeah. What could possibly go wrong?" and then watch as our skepticism is proven to be entirely justifiable.

I also find it interesting that, if there were a story about a Slashdot feature that allowed users to put whatever words they wanted on the front page, it would most certainly get the tag "whatcouldpossiblygowrong"... ;)

Crippleware... (4, Insightful)

Bert64 (520050) | more than 6 years ago | (#22048266)

This is crippleware, and a terrible idea for the average consumer...
Paying more for a product that costs the same to produce (or potentially even less, since they don't have to go to the trouble of disabling the extra cores) is a terrible rip-off, and it happens already...

The same people who currently overclock will buy the cheaper CPUs with cores disabled and re-enable them... You will also get third parties who make a business out of doing the same, though without the "exceeding design spec" risks of overclocking.

Personally, I will never pay more for a more expensive version of the same product. I will buy the cheapest available just as soon as people have worked out how to re-enable the disabled cores, and I will help my less technical friends do the same.

Graphics will use many cores (1)

G3ckoG33k (647276) | more than 6 years ago | (#22048300)

Search Google for "Intel" and "Larrabee" if you need to know slightly more. Rumors have floated around about that project for almost two years now, with a release date estimated at 2009 or so.

Also, if you need a job in the multicore business, check out http://www.intel.com/jobs/careers/visualcomputing/ [intel.com]

In short, visual computing (read: gaming) will use all those cores mentioned in the article; word processing will not. Be so sure.

Alternative (1)

ozbird (127571) | more than 6 years ago | (#22048326)

Use FOSS, and tell them where to stick their core tax.

Dude... wait, what? (4, Insightful)

The Master Control P (655590) | more than 6 years ago | (#22048332)

So Intel is going to design a CPU with N cores on it, then add hardware that disables half of them, then manufacture the chip with all N cores and sell it for less, even though it actually costs more to design/build because of the added hardware to cripple it, then try and make us pay for access to the other half of the cores and hope we don't notice that our computers have suddenly become a constant expense instead of a one-time purchase?

And moreover, they apparently forgot which problem they're trying to solve between paragraphs 4 and 5. They start talking about the real problem of many cores creating a very large space of core/memory architectures that would be difficult to choose between and support. Then they veer off into the rent-your-own-hardware-back-to-you idea and never finish reasoning out just how it would work before they come back. A few minor things they ignored:
  • How do they turn cores on? Difficulty level: No, you can NOT have a privileged link through my firewall onto my network.
  • How do they stop me from hacking it and enabling it all myself? Difficulty level: Mathematically impossible since you can't stop Eve from listening if Eve and Bob are the same person.
  • How do they propose to bill me? Difficulty: No, I will NOT let my CPU spy on me.
  • Why should I hand you everything you need to force me to upgrade against my will?
  • What happens if you go out of business and leave me stranded?
  • Even if you don't see what's wrong with charging me continually to access my own hardware, do you actually think I won't?
In conclusion, Profs. Sloan & Kumar of the University of Illinois, I believe the premises and reasoning behind your proposal to be flawed, and the proposal itself to be unworkable and contradictory to openness in computing. Or, as we say on the Internet, wtf r u doin???

Yes, but will it... (1)

hAckz0r (989977) | more than 6 years ago | (#22048438)

Will it automatically upgrade/downgrade our Microsoft licenses and make the proper deductions from our bank accounts too? Microsoft's accounting staff would love that feature!

Not only in mainframe (0)

Anonymous Coward | more than 6 years ago | (#22048442)

Also in Unix/Linux pSeries and AS/400 iSeries servers. It is called Capacity On Demand :) CoD [ibm.com]

I like "ohgodno" too. (0)

Anonymous Coward | more than 6 years ago | (#22048462)

I'm actually looking forward to when articles on /. will simply be tagged "whatcouldpossiblygowrong" and nothing else.

COOL FEATURE DEFINETLY (0)

Anonymous Coward | more than 6 years ago | (#22048464)

Just need to find a guy able to unlock such CPUs for $30.