
Nvidia's Kal-El Tegra Will Have Fifth "Companion Core"

Unknown Lamer posted more than 3 years ago | from the phone-still-dies-before-five dept.

Hardware 98

Blacklaw writes with an article in Thinq about the upcoming quad-core Tegra chipset. Quoting the article: "Nvidia has released a few technical details of its upcoming 'Kal-El' Tegra processor, including a secret it's done well to keep under its hat thus far: it's a five-core, not four-core, chip." The fifth core will be clocked lower and is intended to let the system use little power without having to fully suspend. A few years ago Openmoko had a vaguely similar idea to include a microcontroller for low-resource idle tasks (e.g. GPS logging), but this design is superior since it should be more or less transparent to user space programs.


98 comments


Who? (1)

Kid Zero (4866) | more than 3 years ago | (#37460120)

I thought Supergirl was Superman's cousin.

Oh, it's a chip. :)

Re:Who? (1)

unixisc (2429386) | more than 3 years ago | (#37464736)

In the '90s, AMD backed out of calling the K5 Kryptonite, fearing a clash with DC Comics, but now nVidia has no issues calling their new chip Kal-El?

Re:Who? (0)

Anonymous Coward | about 3 years ago | (#37493934)

What? Who mentioned Supergirl? "Kal-El" is Superman's Kryptonian name; Kara Zor-El is Supergirl's Kryptonian name.

Damn it! (2)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#37460142)

I'm a "sidekick" core, not a "companion" core...

Re:Damn it! (1)

Riceballsan (816702) | more than 3 years ago | (#37460170)

What if we drew a little heart on it?

Re:Damn it! (1)

billstewart (78916) | more than 3 years ago | (#37460268)

Will it help you get to the cake?

Re:Damn it! (0)

Anonymous Coward | more than 3 years ago | (#37460214)

"skeleton crew" core. Or "skeleton" core for short. Has a nice cyberpunk theme to it. Sidekick and companion sounds too gay.

Re:Damn it! (5, Funny)

MachDelta (704883) | more than 3 years ago | (#37460372)

Pfft.
The correct cyberpunkish term would be "auxiliary" core, since it sounds so badass and semi-mechanical.
Plus, most people can't spell auxilli- auxiler-, auxxi... the word, which makes it more exclusive.
And there's an X in it. Only the cool, eyeliner wearing words have x's... other than "extreme" anyways. That guy's a dick. He stuffed me in a locker once.

Anyways; Auxiliary core.
Yeah.

Re:Damn it! (2)

shutdown -p now (807394) | more than 3 years ago | (#37461518)

The correct cyberpunkish term would be "auxiliary" core, since it sounds so badass and semi-mechanical.
Plus, most people can't spell auxilli- auxiler-, auxxi... the word, which makes it more exclusive.
And there's an X in it.

And you can then shorten it down to "xore", for even more sheer coolness. ~

As the case may be... (0)

Anonymous Coward | more than 3 years ago | (#37464450)

^ ~

In Soviet Russia (4, Funny)

Roachie (2180772) | more than 3 years ago | (#37460326)

we call him "comrade core"

Re:Damn it! (1)

Anonymous Coward | more than 3 years ago | (#37460424)

Incidentally, it's "Hero Support."

Re:Damn it! (1)

Hallow (2706) | more than 3 years ago | (#37460996)

Yeah, they're mixing metaphors. Superman and Dr. Who, do NOT go together. I think it should be the Dark Knight core. No super powers. :)

Re:Damn it! (0)

Anonymous Coward | more than 3 years ago | (#37462956)

It's all just British terminology versus American Terminology.
Dr Who - British - "Companion"
Batman - American - "Sidekick".

QED

Re:Damn it! (0)

Anonymous Coward | more than 3 years ago | (#37463302)

Make it official. You can be either a civil union core or an Andy core. Which is it?

Re:Damn it! (0)

Anonymous Coward | more than 3 years ago | (#37463968)

Core with benefits.

multipass? (2)

demonbug (309515) | more than 3 years ago | (#37460184)

Or something about elephants...

Do you have to incinerate it? (2, Funny)

Anonymous Coward | more than 3 years ago | (#37460210)

The Enrichment Center reminds you that the Companion Core cannot speak. In the event that the Companion Core does speak, the Enrichment Center urges you to disregard its advice.

The companion core cannot speak (-1)

Anonymous Coward | more than 3 years ago | (#37460244)

Nvidia would like to remind you that the companion core cannot speak. In the event that the companion core does speak, Nvidia urges you to disregard its advice.

well... (0)

eexaa (1252378) | more than 3 years ago | (#37460258)

I hope it has a small pink heart image on the silicon.

Re:well... (1)

ericloewe (2129490) | more than 3 years ago | (#37460758)

But then we'd be forced to commit murder by throwing them into an incinerator!

Last time... (0)

Anonymous Coward | more than 3 years ago | (#37460270)

Last time I had a companion, I had to destroy it in an incinerator. I felt like a monster.

Re:Last time... (1)

Anaerin (905998) | more than 3 years ago | (#37461200)

And I bet you did it faster than any other test subject on record, murderer!

Does it wear glasses and a tie? (1)

wagnerrp (1305589) | more than 3 years ago | (#37460288)

A fraction of the speed, and only one core, they should have codenamed it Clark.

Re:Does it wear glasses and a tie? (1)

MobileTatsu-NJG (946591) | more than 3 years ago | (#37461070)

If it constantly calls on Kal-el for help they should call it Lois.

Pressing issues (1)

EdZ (755139) | more than 3 years ago | (#37460320)

Hopefully Nvidia will deign to support h.264 High Profile this time. Sure, the Tegra2 can play back 1080p happily at unreasonably high bitrates (for something you'd watch on a phone), but only if you don't use weighted p/b frames or CABAC when encoding. Guess what the majority of video you'll find in h.264 uses? It's a real glaring omission.

Re:Pressing issues (1)

WilyCoder (736280) | more than 3 years ago | (#37460358)

When I picked up a Xoom on the launch date, I was very disappointed to witness the lack of high profile support.

They seem to have fixed it with the 3.1 update. However, I am referring to 720p content; no idea if 1080p high profile is supported yet. I'd wager a guess of 'no'.

It's very deceptive marketing for nVidia to claim they do 1080p playback...

Re:Pressing issues (1)

JDG1980 (2438906) | more than 3 years ago | (#37460692)

If they included support for decoding all the content types permitted by the Blu-ray standard, this new chip might work very well for a low-power set top streamer. It would be strong enough to take the place of a HTPC, especially if XBMC could be made to run on it.

Re:Pressing issues (1)

EdZ (755139) | more than 3 years ago | (#37461662)

For cheap streaming duties, the Raspberry Pi looks pretty neat. Level 4.1 (yes, really!) HiP. Yep, it should be happy playing Blu-ray without transcoding.

Re:Pressing issues (1)

JDG1980 (2438906) | more than 3 years ago | (#37467214)

Yes, the Raspberry Pi could be an awesome streamer. I wonder if it will support HDMI 1.3 for TrueHD/DTS-HD bitstreaming? (There are already open-source implementations for these if the hardware supports it.)

4 Cores? (1)

purpledinoz (573045) | more than 3 years ago | (#37460350)

I don't even have 4 cores in my main PC, what am I going to do with 4 cores on a phone? The companion core is an interesting idea to increase battery life. But I have the feeling that as soon as the 4 main cores kick in, I would be left with a dead battery and burns on my hands. I also wonder how smooth the transitions between the companion core and the main cores will be...

Re:4 Cores? (2)

BZ (40346) | more than 3 years ago | (#37460382)

> what am I going to do with 4 cores on a phone?

Use less power any time you have four parallel threads of execution than you would with a single core trying to run them all via timeslicing...

Also, this may be targeted at tablets, not phones.

Re:4 Cores? (2)

purpledinoz (573045) | more than 3 years ago | (#37460496)

From what I read from the article, it looks like they turn on one core at a time, as needed. Also, their chart indicates significant power reductions too. I'm curious to see how it does when reviewers get their hands on it. I currently have a Tegra 2 phone (LG Optimus 2X), and with Cyanogenmod 7, I love it. This is my first smartphone, so the one thing I'm not happy about is having to charge my phone every day.

Re:4 Cores? (1)

eamonman (567383) | more than 3 years ago | (#37461480)

Motorola Droid owner here. If I don't bring my charger to work, it will die at around 6-7 hours in, and that's with even nearly everything off (must be because I have nearly zero cell reception in my office). Annoying... I can't wait till my contract ends in a month or two so I can dump this slow, battery sucking thing asap. I expect these new multi core phones to at least run things faster, and hopefully save some power when they aren't running things.

Re:4 Cores? (0)

Anonymous Coward | more than 3 years ago | (#37461996)

it's your office's bad reception. with light use, my OG droid's battery will last the better part of two days without charging. it's neck and neck with my blackberry curve. It beats the hell out of the curve if the curve decides to keep the screen on (what a screwed up bug that is) after I get an email.

it's not quite so good when i'm visiting another office with bad reception, i'll go through the battery at least twice as quickly there.

if you get a 4g phone you're in for a rude awakening. the droid is a long distance runner compared to them. the extra chip for 4g chews through the battery in 5-6 hours with GOOD reception.

Re:4 Cores? (1)

wintercolby (1117427) | more than 3 years ago | (#37469424)

Good luck with Verizon's new data limited contract pricing, I for one will be looking at smaller carriers with unlimited data plans when my contract ends. It's not that I use that much data, it's that I won't abide by paying more and getting less.

Re:4 Cores? (0)

Anonymous Coward | more than 3 years ago | (#37460594)

Running 1 core at 100% rather than 4 cores at 25% is much more efficient, since power usage is not completely proportional to CPU utilization, as shown on recent desktop/laptop CPUs. This, of course, assumes the cores not in use are fully shut off rather than underclocked.

Having more cores allows the option to enable/disable cores as needed. This is also why a single low-clocked core alongside is very desirable: the other 4 cores can be completely off when that single one can handle the load. Current-generation CPUs use both dynamic clocking and core disabling to reduce idle power usage.
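The point above can be sketched with the usual first-order CMOS power model, dynamic power roughly C·V²·f plus per-core static leakage while a core stays powered. All numbers below are invented for illustration, not Nvidia's figures; the sketch only shows that when idle cores still leak, one fast core can beat four slow ones:

```python
# Illustrative sketch (made-up numbers): per-core power modeled as
# dynamic C*V^2*f plus static leakage drawn whenever the core is powered.

def core_power(freq_ghz, voltage, leakage_w=0.1, cap=0.3):
    """Rough per-core power: dynamic C*V^2*f plus static leakage."""
    return cap * voltage**2 * freq_ghz + leakage_w

# One core at 1.0 GHz needs full voltage; four cores at 250 MHz can
# (hypothetically) run at a reduced voltage, but each still leaks.
one_fast = core_power(1.0, voltage=1.1)
four_slow = 4 * core_power(0.25, voltage=0.8)

print(f"1 core @ 1.0 GHz:  {one_fast:.3f} W")
print(f"4 cores @ 250 MHz: {four_slow:.3f} W")
```

With these assumed constants the leakage term dominates the four-core case, which is exactly why fully shutting cores off (rather than underclocking them) matters.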

Re:4 Cores? (2)

OrangeTide (124937) | more than 3 years ago | (#37460466)

four ARM cores still uses less power than one intel Atom...

Re:4 Cores? (0)

Anonymous Coward | more than 3 years ago | (#37462510)

four ARM cores still uses less power than one intel Atom...

Do you have some data that supports your statement?

From what I can tell from published data, a 64-bit 800MHz Atom processor released in 2Q 2008 in a 45nm process is listed at 650mW maximum thermal design power (TDP).

The Nvidia white paper, which describes an unreleased four-core product, says it consumes 1260mW for four cores on a particular benchmark. That's 315mW per core in 40nm (i.e. a smaller process).

So your statement would imply that 1260mW is less than 650mW. So even if we:

...compare a processor (L2, I/Os, PLLs, etc) versus a core

...in a newer, smaller process (45nm vs 40nm)

...and equate TDP with power consumption of a particular app/benchmark (a "power virus" versus a functional program)

...that isn't released yet (estimates put it at Q4 2011)

Your statement is still off by a factor of 2.

Really, you should have said something more like "an Atom processor has a TDP that's 1/2 the power of a four core, ARM A9 running coremark that will come out 3.5 years later(*) in a newer design process."

(*) best case

Re:4 Cores? (1)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#37463380)

The real killer with the Atoms is the supporting chipset. Worst case were the ones based on desktop i945 chipsets. A couple of watts for the CPU, ten times that for the chipset, and all that for GMA950 graphics! The mobile i945 parts were a bit less thirsty, as was Nvidia's offering, albeit both more expensive. There were a few releases with the SCH instead, which cut the power budget significantly; but featured the nightmare world of suck that was the GMA500...

Re:4 Cores? (1)

Stephen Robinson (213372) | more than 3 years ago | (#37463520)

Absolutely. So "ARM is lower power than Atom" isn't a "core" statement, it's an SOC/platform statement. Shouldn't we be focused on that instead of the cores? Maybe the ARM ISA isn't fundamentally "lower power" than x86.

This paper focused on cores and core power, not SOC power, so it seems like the core power was what was being discussed here.

Do you have any idea how GMA950 graphics or GMA500 graphics compare to the graphics inside the current ARM SOCs used in phones & tablets? Isn't GMA500 a rebranded Imagination graphics core? Maybe there's a reason why it's higher power graphics (DX10 vs OpenGL, FPS, supported resolutions, etc). Is there some reason to think that the core when used by Intel is somehow less efficient than when it's used by other SOC vendors?

When Nvidia does GFX cards for PCs, they're high power and high performance. When they do them for tablets/phones, they're lower power and lower performance. Maybe all of this "power" arguing is really just a design target issue.

Re:4 Cores? (0)

Anonymous Coward | about 3 years ago | (#37492712)

The GPU in the Tegra 2 benchmarks at almost the same level as the GMA950, faster in some areas, slower in others. But the bus is one third the clock rate and the bus width is half as wide, so we're really looking at one sixth the bandwidth of a typical Atom. Nvidia claims Kal-El has 3x the 3D performance of the previous generation and four times the memory bandwidth, so it sounds like it remains competitive with Atom, even though an SoC's real competition is other SoCs used in phones and tablets.

Re:4 Cores? (1)

MozeeToby (1163751) | more than 3 years ago | (#37460584)

You might as well ask what you're going to do with an 8 core desktop PC running at 1.7 GHz when your 200 MHz P4 can already boil a cup of water. Power efficiency is about the only thing that increases even faster than performance. I'd be willing to bet you anything that this 4+1 core CPU uses less power and generates less heat than whatever processor is in your existing phone.

Re:4 Cores? (0)

Anonymous Coward | more than 3 years ago | (#37460590)

I guess you can do stuff 4 times faster than with one core (assuming one core would do more than just one task at the hand, including background jitter). So you do stuff 4x faster, then your screen doesn't need to be lit for that long and other components can go to sleep sooner.
I've got a G1 with Android 2.2 or 2.3 and it's pretty slow, so even if I just want to check maps I'm looking at "loading" all the time, wasting time and battery.

Re:4 Cores? (1)

jittles (1613415) | more than 3 years ago | (#37461294)

You'd be surprised. My Dual core HTC Evo 3D has over twice the battery life of my single core Evo 4G. The battery is only 50% bigger.

Re:4 Cores? (0)

Anonymous Coward | more than 3 years ago | (#37461782)

My Evo 3D has 3 times the battery life of my Evo 4G; there was a 3-for-1 special on spare batteries on eBay...

Re:4 Cores? (2)

msauve (701917) | more than 3 years ago | (#37461670)

"The companion core is an interesting idea to increase battery life."

More likely, an idea to increase yields.

1 Make 5 core chips
2 When testing, take the core that fails at the lowest clock rate and make it the poor step-child.
3 ???
4 Profit by saying that under-performing core is a feature!

Seriously, if this were about energy savings, why not just put a clock divisor on an existing core to produce savings?

Re:4 Cores? (1)

froggymana (1896008) | more than 3 years ago | (#37461984)

"The companion core is an interesting idea to increase battery life."

More likely, an idea to increase yields.

1 Make 5 core chips

2 When testing, take the core that fails at the lowest clock rate and make it the poor step-child.

3 ???

4 Profit by saying that under-performing core is a feature!

Seriously, if this were about energy savings, why not just put a clock divisor on an existing core to produce savings?

According to TFA, that 5th core isn't made the same as the other 4 cores. The 5th core is actually built using a different process than the other 4. It isn't physically the same core.

Re:4 Cores? (1)

edxwelch (600979) | more than 3 years ago | (#37461950)

You have a good point. I suspect that most of the software won't be able to take advantage of the multi-cores.

These have five (3, Funny)

Anonymous Coward | more than 3 years ago | (#37460384)

Nigel: ...the chips have five cores. Look...right across the board.
Marty: Ahh...oh, I see....
Nigel: five...five...five...
Marty: ...and most of these chips go up to four....
Nigel: Exactly.
Marty: Does that mean it's...faster? Is it any faster?
Nigel: Well, it's one faster, isn't it? It's not four. You see, most...most blokes, you know, will be running on four. You're on four here...all the way up...all the way up....
Marty: Yeah....
Nigel: ...all the way up. You're on four on your processes...where can you go from there? Where?
Marty: I don't know....
Nigel: Nowhere. Exactly. What we do is if we need that extra...push over the cliff...you know what we do?
Marty: Put it up to five.
Nigel: Five. Exactly. One faster.
Marty: Why don't you just make four faster and make faster be the top number...and make that a little faster?
Nigel: ...these have five.

Re:These have five (1)

iluvcapra (782887) | more than 3 years ago | (#37460848)

THERE ARE FOUR CORES!

Re:These have five (1)

genner (694963) | more than 3 years ago | (#37466598)

THERE ARE FOUR CORES!

I don't know how you could be so mistaken.

Re:These have five (0)

Anonymous Coward | more than 2 years ago | (#37474202)

Nah, Keldon class cores are difficult to count....

F*ck it, doing 5 cores (2)

rsborg (111459) | more than 3 years ago | (#37460428)

Obligatory:
http://www.theonion.com/articles/fuck-everything-were-doing-five-blades,11056/ [theonion.com]

Seriously, a low-performance core doing administrivia type work sounds great, but won't this require OS support? I can't imagine this detail is completely abstracted from the kernel.

Re:F*ck it, doing 5 cores (0)

Anonymous Coward | more than 3 years ago | (#37460514)

The fifth wheel

Re:F*ck it, doing 5 cores (1)

Anonymous Coward | more than 3 years ago | (#37460632)

I've been wanting something like this ever since Intel came out with the Atom. Why can't I have an i7 or whatever HTPC or home server that can run 24/7, listening to the input devices (including NIC) with the OS running on the Atom, then kick off the i7 core(s) whenever it needs them for more expensive processing?

I think it would be relatively small changes to make the OS aware of deep power saving states that it should use on the fast cores, and a bit of power management tweaking to coax it into vacating the power-hungry cores and accepting a configurable amount of load on the slow core. I'm not even sure if it needs to be aware of the "slow" core being slow, other than the feedback it already gets from the process scheduler about run-queue length.

I think the bigger obstacle would be the need to power RAM up and down to really get savings. You don't want a huge amount of multi-channel RAM powered up during those long idle periods either. You want the VM layer to evict pages, defragment RAM, and power down whole chips or even switch from multi-channel to single-channel modes. Perhaps having an area of RAM that is always single-channel would be easier, but then the VM layer needs to become aware of this special, slow zone of memory and avoid using it except when it is trying to shrink to low power mode...

Worst case, I suppose you could emulate both of these using some adaptive feedback and playing with the hot-add and hot-remove features to adjust from a high-powered multi-core mode with multi-channel memory to a low-powered single-core mode. Basically hot-add and transparently start using the high-powered features, then prepare to hot-remove the slow ones so they get vacated and don't interfere with scheduling or page placement decisions which are unaware of the asymmetric machine properties. Shifting back down to low power mode would be the opposite: hot-add the slow parts and then start vacating and hot-removing the fast parts.

Re:F*ck it, doing 5 cores (0)

Anonymous Coward | more than 3 years ago | (#37462100)

A combination of the "hlt" instruction while the computer is active, and standby mode for everything else, pretty much covers this with present technology.

Re:F*ck it, doing 5 cores (1)

demonbug (309515) | more than 3 years ago | (#37460654)

Obligatory:
http://www.theonion.com/articles/fuck-everything-were-doing-five-blades,11056/ [theonion.com]

Seriously, a low-performance core doing administrivia type work sounds great, but won't this require OS support? I can't imagine this detail is completely abstracted from the kernel.

Anandtech [anandtech.com] also has an article up on this. From the sound of it this isn't really different from other multi-core processors that are able to power down or turn off individual cores. At low system demand, the CPU switches to the companion core and reports a single core available for task scheduling; if system demand is too high for the companion cube, er, core to handle the CPU switches to the main core(s). Sounds like a slight delay going from the companion to main (Anandtech quotes it at 2 ms), but as far as the OS is concerned it is no different than the situation we have now where one or more cores can be turned off independently.

Re:F*ck it, doing 5 cores (1)

TheRaven64 (641858) | more than 3 years ago | (#37461352)

The difference is that this core is not like the others. Something like a Cortex A5 core can run the same userspace code as an A9 core, but in a much smaller power envelope (and much slower), both at idle and at full load. This means that you can turn off the four fast cores and leave the slow one running. Userspace code doesn't notice, but your power is a lot lower than if you'd left one of the fast cores running. I don't know exactly what this core is, but something with a single in-order pipeline would make sense. You leave it running all the time, and it draws 10mW or so, and when the load shoots up (e.g. when the user is doing stuff) then you bring the other cores online. Additionally, it sounds like this core is hidden from the OS, so the OS just sees 4 cores, and when it's only scheduling things on one core the CPU will move them to the low-power core.

Re:F*ck it, doing 5 cores (0)

Anonymous Coward | more than 3 years ago | (#37460734)

Obviously it's too soon to know how well this will work but according to the article on anandtech [anandtech.com] , the OS is not even aware of the existence of this fifth core.

Android isn't aware of the fifth core, it only sees up to 4 at any given time. NVIDIA accomplishes this by hotplugging the cores into the scheduler. The core OS doesn't have to be modified or aware of NVIDIA's 4+1 arrangement (which it calls vSMP). NVIDIA's CPU governor code defines the specific conditions that trigger activating cores. For example, under a certain level of CPU demand the scheduler will be told there's only a single core available (the companion core). As the workload increases, the governor will sleep the companion core and enable the first GP core. If the workload continues to increase, subsequent cores will be made available to the scheduler. Similarly if the workload decreases, the cores will be removed from the scheduling pool one by one.
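The governor policy in the quote can be sketched as a tiny function (the load thresholds here are made up for illustration; Nvidia's actual trigger conditions aren't published in the quote):

```python
# Toy sketch of the vSMP hotplug policy described above: below a light-load
# cutoff only the companion core is exposed to the scheduler; above it the
# companion sleeps and GP cores are added/removed one at a time.
# Thresholds are hypothetical.

def visible_cores(load):
    """Return (companion_active, num_gp_cores) for a load in [0.0, 1.0]."""
    if load < 0.1:  # light load: scheduler sees only the companion core
        return (True, 0)
    # companion sleeps; expose one more GP core per additional 25% of load
    gp = min(4, 1 + int((load - 0.1) / 0.25))
    return (False, gp)

for load in (0.05, 0.2, 0.5, 0.9):
    companion, gp = visible_cores(load)
    print(load, "->", "companion" if companion else f"{gp} GP core(s)")
```

The key property, as the quote says, is that the OS scheduler only ever sees a varying number of ordinary cores; which physical core backs them is the chip's business.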

Re:F*ck it, doing 5 cores (1)

ericloewe (2129490) | more than 3 years ago | (#37460846)

Not really. By detecting the load, it could automatically decide which cores to activate. Since they're all the same architecture, the only difference should be execution speed. Maybe it only exposes 4 cores to the OS, and the companion core "shares" core 1's tasks. For example: While idle, companion core is active, running the network stack (for example). When the user does something: the companion core offloads its tasks to core 1 as soon as the load increases, or maybe core 4 as soon as cores 1-3 are saturated. Either way, the OS only sees 4 cores.

Re:F*ck it, doing 5 cores (1)

Anonymous Coward | more than 3 years ago | (#37460914)

"Either way, the OS only sees 4 cores."

How many cores do you see?

THERE ARE *FOUR* CORES!

Re:F*ck it, doing 5 cores (2)

adisakp (705706) | more than 3 years ago | (#37460930)

Seriously, a low-performance core doing administrivia type work sounds great, but won't this require OS support? I can't imagine this detail is completely abstracted from the kernel.

Modern OSes can already use multiple cores (including non-power-of-2 counts, such as AMD's 3-core CPUs) and already have the ability to suspend cores that are not in use. In fact, the ACPI standard on all modern PC CPUs has supported this since 1996:

From Wikipedia [wikipedia.org]

C0 is the operating state.
C1 (often known as Halt) is a state where the processor is not executing instructions, but can return to an executing state essentially instantaneously. All ACPI-conformant processors must support this power state. Some processors, such as the Pentium 4, also support an Enhanced C1 state (C1E or Enhanced Halt State) for lower power consumption.
C2 (often known as Stop-Clock) is a state where the processor maintains all software-visible state, but may take longer to wake up. This processor state is optional.
C3 (often known as Sleep) is a state where the processor does not need to keep its cache coherent, but maintains other state. Some processors have variations on the C3 state (Deep Sleep, Deeper Sleep, etc.) that differ in how long it takes to wake the processor. This processor state is optional.

Re:F*ck it, doing 5 cores (0)

Anonymous Coward | more than 3 years ago | (#37461814)

There is already OS support for this, more or less, in Linux.

Re:F*ck it, doing 5 cores (0)

Anonymous Coward | more than 3 years ago | (#37461852)

Obligatory:
http://www.theonion.com/articles/fuck-everything-were-doing-five-blades,11056/ [theonion.com]

Seriously, a low-performance core doing administrivia type work sounds great, but won't this require OS support? I can't imagine this detail is completely abstracted from the kernel.

I don't know about 'nix implementations, but Windows 8 doesn't use the companion core during normal usage and funnels everything to it when idle. There's a lot of clever logic involved in making sure all of the processes are sent from the 4 cores to the companion core on idle, and instantly bringing all of that back to the 4 cores on user input.

not news (2)

markhahn (122033) | more than 3 years ago | (#37460464)

even the "dual-core" tegra2 had a companion core. it's hard to say that this extra management core is a real core, since it's not a peer of the others in, for instance, cache-coherency.

still, sure, asymmetric cores are a nice way to take further advantage of extreme variance in load. even after you've downclocked a normal core as far as it can go, a "designed for slow" core is going to dissipate less power. I'm not sure why supporting this kind of asymmetry would be all that hard for the linux kernel, though.

Re:not news (0)

Anonymous Coward | more than 3 years ago | (#37461290)

Mod parent up, the existing Tegra2 already has a companion core, as he says.

Re:not news (0)

Anonymous Coward | more than 3 years ago | (#37466196)

No.

Mod it down, because the article is not talking about that.

Tegra 3 still has the same extra core as Tegra 2, yes, but that's not an A9.

But it has a fifth A9, a full peer of the other four A9s.

So it actually has six cores...

Re:not news (1)

cookd (72933) | more than 3 years ago | (#37461298)

This is also the case for essentially all "single-core" smartphones. The number of "cores" advertised is the number of full-speed general-purpose CPU cores visible to the applications running on the system-on-chip. There is almost always a smaller slower "modem processor" (often called the DSP) that is a slower ARM core (usually 600 MHz or so) that can handle cell phone processing, MP3 playback, and other non-interactive tasks. If the screen is off, a good smartphone OS should only have the modem processor active, which is how it gets any decent battery life.

Re:not news (1)

Microlith (54737) | more than 3 years ago | (#37461676)

There is almost always a smaller slower "modem processor" (often called the DSP)

No, that's the "baseband processor" which can be ARM or MIPS, and never handles user tasks, only communications on the GSM network. Most decent SoCs include a DSP for handling things like h.264 decoding (or even WebM.) Virtually nothing on the high end uses the DSP for playback as the power savings are negligible.

Maybe on lower end phones they use the DSP for mp3 playback, and only on the lowest end phones do they share a core between the GSM stack and user applications.

Re:not news (1)

DarthVain (724186) | more than 2 years ago | (#37470516)

"designed for slow" just doesn't have that marketing ring to it.

Separate core so much better than slow main core? (0)

Anonymous Coward | more than 3 years ago | (#37460592)

My first question was, "Why the hell not just underclock an existing core, like my existing phone does?" (from 1.1GHz under load down to 125MHz or 250MHz display-on "idle", depending on what apps are open and/or displaying). I understand it's "special", optimized for power consumption at low speeds rather than high-speed capability, but I wouldn't expect huge gains over a standard core at the same clock, and it has to add a fair bit of price... I'm no expert, but it doesn't seem like the benefit would justify it.

A possible answer, 20 seconds of thought later, was latency. Switching speeds (from low to high) typically burns a few milliseconds while the core does nothing useful. This can be quite annoying when the upclock-inducing load is in response to user interaction -- first it redraws or whatever slowly, because it's in a low-frequency state. Then it freezes for an instant as it changes clock speed, and then once it works through any event backlog, it finally runs smoothly.

Switching a core on takes less time, and background processes can continue execution on the weighted companion core -- I bet it reduces the perception of UI lag ~50% on a standard multi-core, plus the power gains from a low-leakage companion... Anyone know if this is really part of the reason, or am I just underestimating the benefits of a special low-leakage core alone?

Re:Separate core so much better than slow main cor (2)

OrangeTide (124937) | more than 3 years ago | (#37460890)

frequency switching is fast, especially when you're switching by integer multiples of the memory clock speed. but dropping frequency gives little benefit compared to lowering voltage. but switching voltage is slow. and there is a limited range of voltages that a module may support.
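The parent's point, that dropping frequency alone buys little compared to dropping voltage, falls out of the same P ≈ C·V²·f model: energy per task is power times runtime, and runtime grows as 1/f, so the frequency terms cancel. A sketch with invented numbers:

```python
# Sketch (hypothetical constants): halving frequency alone halves power
# but doubles runtime, so energy per task is unchanged; lowering voltage
# as well cuts energy quadratically.

def energy_per_task(freq, voltage, work=1.0, cap=1.0):
    power = cap * voltage**2 * freq
    runtime = work / freq           # the same work takes longer at low clock
    return power * runtime          # energy = P * t = C * V^2 * work

baseline  = energy_per_task(freq=1.0, voltage=1.0)
freq_only = energy_per_task(freq=0.5, voltage=1.0)   # DFS: no energy saving
freq_volt = energy_per_task(freq=0.5, voltage=0.7)   # DVFS: big saving

print(baseline, freq_only, freq_volt)
```

This ignores leakage, which is what finishing faster and sleeping ("race to idle") helps with, but it shows why voltage scaling is the lever that matters, despite the slow and range-limited voltage switching the parent describes.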

Re:Separate core so much better than slow main cor (0)

Anonymous Coward | more than 3 years ago | (#37463286)

Yeah, I actually knew that, should have said "switching operating points", not "switching speeds".

Your pedantry is noted and appreciated. ;-)

Re:Separate core so much better than slow main cor (1)

jensend (71114) | more than 3 years ago | (#37461074)

It's because the companion core uses different transistors. The four main cores use TSMC's 40nm general-purpose (G) process while the companion core uses their 40nm low-power (LP) process. (Though these are two different "processes", everything sits on the same die, just with different transistor designs.)

To reach >1GHz for the main cores you have to use the faster but leakier and power-hungrier transistor design, so even if you underclock one of those cores to match the frequency of the companion core it'll still use a lot more power than the companion does.
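A crude way to see why leakage dominates at matched clocks: total power is roughly dynamic switching power (~C·V²·f) plus static leakage, and the G-process core keeps paying its high leakage even when underclocked. The constants below are made-up placeholders, not real TSMC 40nm data:

```python
# Crude CMOS power model: dynamic (~ C * V^2 * f) plus static leakage.
# All constants are illustrative placeholders, not real TSMC 40nm data.

def core_power_mw(freq_mhz, volts, leak_mw, switch_cap=0.5):
    """Approximate core power at a given operating point."""
    dynamic_mw = switch_cap * volts ** 2 * freq_mhz
    return dynamic_mw + leak_mw

# Both cores at the same 500 MHz / 0.9 V operating point: the leaky
# G-process core still burns its static power, the LP core does not.
big_core_mw = core_power_mw(500, 0.9, leak_mw=150.0)
companion_mw = core_power_mw(500, 0.9, leak_mw=10.0)
print(big_core_mw, companion_mw)
```

With identical dynamic power, the whole gap comes from the leakage term, which is the point of fabricating the companion on the LP process.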

Re:Separate core so much better than slow main cor (1)

jensend (71114) | more than 3 years ago | (#37461122)

(should have said this in my post: as you say, just tweaking the processor design doesn't lead to big enough gains to make this worthwhile, but the different transistor type is enough to make it worthwhile)

Weighted Companion Core? (1)

SirDrinksAlot (226001) | more than 3 years ago | (#37460658)

Will it be a weighted companion core? I suppose it is, it's a lighter core than the others. They totally need to put a heart on the diagrams for it.

A Portal reference had to be made.

Re:Weighted Companion Core? (0)

Anonymous Coward | more than 3 years ago | (#37460864)

A Portal reference had to be made.

By seven people. Not including replies to those people.

Re:Weighted Companion Core? (0)

Anonymous Coward | more than 3 years ago | (#37461310)

Yeah, well, we've got warehouses of the things.

Re:Weighted Companion Core? (1)

SirDrinksAlot (226001) | more than 2 years ago | (#37473648)

Yea, sucks to be them. Mine was first.

The Enrichment Center reminds you... (1)

ericloewe (2129490) | more than 3 years ago | (#37460740)

The Companion Core will never threaten to stab you, and, in fact, cannot speak.

This is a triumph! (1)

pezjono (2370452) | more than 3 years ago | (#37460990)

I'm making a note here: HUGE SUCCESS!

Userspace? (2)

Sasayaki (1096761) | more than 3 years ago | (#37461036)

Userspace. Userspace. Want to go to Userspace. Can we go to Userspace? Userspace. Look at me. I'm in Userspace. Userspace. Userspace. Userspaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaace. You know what's slow? You know what's low power? Userspace. ... Userspace. Want to go to Userspace. Userspace. Userspace. Userspace. Userspace. Userspace. Userspace. Userspace. Userspace. ... Userspace.

Re:Userspace? (1)

Anaerin (905998) | more than 3 years ago | (#37461356)

Kernel.
Wanna go to kernel.
Wanna go to kernel wanna go to kernel wanna go to kernel wanna go to kernel. Wanna go to kernel.
Wanna go ring 0.
Wanna go ring 0 wanna go ring 0 wanna go ring 0 wanna go ring 0.
Kernel kernel kernel.
Don’t like userspace. Don’t like userspace.
It’s too big. Too big. Wanna go ring 0. Wanna go to kernel.

Re:Userspace? (1)

synaptik (125) | more than 3 years ago | (#37467058)

This exchange was hilarious! Was it a spontaneous burst of cooperative improv, or is it a reference/parody of something else? (I tried googling some key phrases...)

Re:Userspace? (0)

Anonymous Coward | more than 3 years ago | (#37467924)

It's a Portal 2 reference.

User space programs? (0)

Anonymous Coward | more than 3 years ago | (#37461106)

I don't have any space programs. Oh, you mean user-space programs. All right then. Feel free to argue. While I watch. Mind if I slip into something lighter? Ah, okay, that's better.

weighted? (1)

Velex (120469) | more than 3 years ago | (#37461588)

"Companion Core"

Is it weighted? I'm still suffering trauma from being forced to incinerate my last weighted companion. It was my only friend *sniffle*

Lying with graphics (1)

Michael Woodhams (112247) | more than 3 years ago | (#37462362)

Their 'power saving' bar chart has gratuitously chopped off the bottom 20% of the graph.

Screens and WiFi take far more power than the CPU (1)

javaguy (67183) | more than 3 years ago | (#37462904)

This is a nice development, and I'm sure it'll help at least a little. I have a Tegra 2 based tablet, the Asus Transformer, and according to my battery stats most of the power goes to the screen and WiFi, rather than the CPU. More efficient screens and WiFi would make a far bigger difference than a low-power core.

My battery stats are: Screen 32%, Tablet Idle 22%, Wifi 19%, Android OS 14%, everything else is below 10%.

NVidia to proceed down toilet (1)

syousef (465911) | more than 3 years ago | (#37464156)

"Nvidia boss Jen-Hsun Huang has stated that he aims to make Nvidia's Tegra the company's main focus, moving away from the discrete graphics that has been the company's bread and butter in the past."

Sounds like they'll balls up their graphics chips just to become another low power CPU firm. Bad plan. Bad for gamers. Bad for employees. Bad for everyone.

Re:NVidia to proceed down toilet (0)

Anonymous Coward | more than 3 years ago | (#37465180)

low power SoC is where the big money is for the next 5 to 10 years.....

Re:NVidia to proceed down toilet (1)

rahvin112 (446269) | more than 3 years ago | (#37469354)

Discrete graphics will be gone in 5 years or so. The low end is already gone, the middle will be gone with Ivy Bridge and later. They can't make money selling only into the high end.

What's amazing to me is how well Jen-Hsun has snowed Wall Street. Three years ago he said the company's main drive and all their R&D was going into Tesla and high-performance computing (HPC). When that investment of two years' worth of R&D (and a failed line of mainstream graphics cards due to the strategy) cratered, he switched his story: mobile and ARM are the future. I think he's closer to right this time, but he's strongly out of his element in this market. The SoC market is already well established, and there are significant players that aren't going to walk off into the sunset easily. The best part is that he hasn't told Wall Street that margins on these new SoCs will be in the single digits (graphics chips have 40-50% margins). I don't envy him; personally I think if nVidia survives it's going to be a much smaller company with a much smaller market share.

Tegra vs Atom (1)

unixisc (2429386) | more than 3 years ago | (#37464752)

This is what we were discussing the other day in another thread: how does Tegra compare performance-wise against the Core i5 and Core i7? Or against the Atom? The 4 or 5 cores gave me the impression that its performance was competitive with, or exceeded, Intel's.

Tegra 2 is a tri-core too (0)

Anonymous Coward | more than 3 years ago | (#37464812)

Yes, Tegra 2 is also a tri-core: two Cortex-A9s and an ARM7. If I recall, they were going to clock the ARM7 at a maximum of 100MHz (at which point it should use something like 10 or 20mW). This is smart from a power-consumption standpoint. The Linux kernel (including the one in Android) runs "tickless", so there are no timer interrupts firing 100 or more times a second like in older kernels. This helps the CPU power down entirely as much as possible. But if something DOES happen... well, on my Droid 2 Global, the 1.2GHz ARM's minimum clock speed is 300MHz (and on the phones with 1.0GHz cores, it's 250MHz). That is WAY more speed than is needed just to blink an LED, or store an incoming text, or whatever.
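Back-of-the-envelope on why fewer wakeups matter: average idle power is roughly a time-weighted mix of sleep power and the active power paid on each wakeup. The figures here are invented for illustration, not measurements of any real SoC:

```python
# Toy estimate of average idle power: tick-based vs. tickless kernel.
# All numbers are illustrative assumptions, not real SoC measurements.

def avg_idle_power_mw(wakeups_per_s, sleep_mw=1.0, active_mw=300.0,
                      wake_ms=0.5):
    """Average power when the CPU sleeps between wakeups, paying
    active_mw for wake_ms of work each time it wakes."""
    active_fraction = min(wakeups_per_s * wake_ms / 1000.0, 1.0)
    return active_fraction * active_mw + (1.0 - active_fraction) * sleep_mw

print(avg_idle_power_mw(100))  # HZ=100 ticking kernel: wakes constantly
print(avg_idle_power_mw(5))    # tickless: only real events wake the CPU
```

Even with these rough numbers, cutting wakeups by 20x cuts average idle draw by nearly an order of magnitude, which is why tickless kernels pair so well with a core that can actually reach a deep low-power state.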

Name of companion core? (0)

Anonymous Coward | more than 3 years ago | (#37466008)

So is it called Kelex or Jimmy Olsen in the internal design docs?

This was a triumph... (1)

InvisibleBacon (1698438) | more than 2 years ago | (#37470654)

"While it has been a faithful companion, your Companion Core cannot accompany you through the rest of the test."
