
Marvell Launches First Triple-Core Hybrid ARM Chip

kdawson posted about 4 years ago | from the just-a-trickle dept.

Hardware 117

Blacklaw passes along an excerpt from Thing.co.uk that begins "While other manufacturers are content to develop dual-core ARM processors, Marvell has gone one better — literally — with a new triple-core chip called the Armada 628. The system-on-chip design, based on ARM's v7 MP series, features two dedicated 1.5GHz processing cores plus a third 624MHz core in a single application processor — making Marvell the first company to bring such a beast to market. While two of the cores are a pretty standard SMP setup, as seen in other dual-core ARM implementations, the third is a standalone processor designed for ultra-low-power draw. The idea behind such a design is that when the system is idle, or only running a low-performance application on a single thread, it can shut off the dual-core portion and save oodles of power."


Iron Man (0, Offtopic)

geekoid (135745) | about 4 years ago | (#33677338)

Now with the power of ARM!

Re:Iron Man (2, Funny)

MobileTatsu-NJG (946591) | about 4 years ago | (#33677418)

Does it run on DC?

Re:Iron Man (2, Funny)

ajlitt (19055) | about 4 years ago | (#33677588)

No, Marvell.

Re:Iron Man (3, Informative)

kimvette (919543) | about 4 years ago | (#33680338)

Oh come on, that was FUNNY, not OFFTOPIC.

Attention uninformed moderator: DC and Marvel are both comic book publishers. In fact, they have been rival companies for decades, and each "universe" is known for certain superheroes. On the DC side you have Superman, Batman, Wonder Woman, Teen Titans, and so on. The Marvel universe features The Amazing Spider-Man, X-Men, Fantastic Four, Daredevil, Iron Man, and so on. Heck, with all the comic book-based movies over the last 20 years, even women know about comic books, and not just the geeky ones among us.

In other words, parent post was a topical JOKE (NOT OFFTOPIC), and you should focus on modding "insightful" and "informative" posts UP rather than modding posts you disagree with or jokes you don't get DOWN. Follow the moderation guidelines, and consider developing your sense of humor while you're at it.

HTH!

Re:Iron Man (0)

Anonymous Coward | about 4 years ago | (#33681448)

It's Slashdot; the day the mods don't look like a bunch of retards trying to hump a doorknob will be the same day that pigs fly, hell freezes over, and an openly gay communist gets elected POTUS...

Re:Iron Man (1)

RebootKid (712142) | about 4 years ago | (#33681910)

mod parent funny

Re:Iron Man (0)

Anonymous Coward | about 4 years ago | (#33679476)

Some mod definitely needs a cluestick or a funny bone. Iron Man is a Marvel Comics character.

Super Hero Chips? (-1, Offtopic)

Anonymous Coward | about 4 years ago | (#33677362)

I didn't know Stan Lee made computer chips. Do these ones have super powers? If you get it angry does it turn into an Itanium?

Wait... (3, Interesting)

CajunArson (465943) | about 4 years ago | (#33677370)

Why can't it just shut down one of the two normal cores, and run the other core at a highly reduced rate to get the same power savings? Additionally, I've seen plenty of benchmarks where a higher-power draw chip that can get done with a task quickly and drop back to low-power idle mode is actually more energy efficient than a lower-power chip that takes longer to get the task done. What sort of tasks is the third core intended to do that it would be so much more efficient than a regular ARM core?

Re:Wait... (1)

Mitchell314 (1576581) | about 4 years ago | (#33677420)

IDK. Maybe because it uses a modified architecture that is more efficient, but because of that nature it can't scale up as high? Maybe max speed itself also determines efficiency passively.

Re:Wait... (1)

Mitchell314 (1576581) | about 4 years ago | (#33677484)

I actually decided to read tfa to clarify.

the third is a standalone processor designed for ultra-low-power draw

Sounds like it was an architecture thing. IANA compute chipologist though.

Re:Wait... (0)

Anonymous Coward | about 4 years ago | (#33678542)

I just read the summary; it also said "the third is a standalone processor designed for ultra-low-power draw".
Sounds like there's no need to read tfa.

Re:Wait... (3, Informative)

santax (1541065) | about 4 years ago | (#33677492)

From the full article: While two of the cores are a pretty standard SMP setup, as seen in other dual-core ARM implementations, the third is a standalone processor designed for ultra-low-power draw. The idea behind such a design is that when the system is idle, or only running a low-performance application on a single thread, it can shut off the dual-core portion and save oodles of power. So, they have designed that third core from the ground up with power consumption in mind, whereas the 'normal' cores are designed with speed, processing power and scalability in mind.

Re:Wait... (2, Insightful)

CajunArson (465943) | about 4 years ago | (#33677572)

That doesn't really answer my question though. First, what does "standalone" mean? Any CPU that can access memory and run a process could be called "standalone". Second, you mentioned power draw, which is nice, but also not the important factor. The important factor is ENERGY efficiency. As an example: A 100 watt power draw from a CPU that takes 1 second to finish a task is more energy efficient than a 10 watt power draw that takes 12 seconds to finish the same task. In the case of the faster ARM cores, if they can get a task done quickly and then drop back into standby, they could very well be more energy efficient than this "standalone" processor.

I can take a guess, since the article doesn't say, but this "standalone" processor might actually be some sort of DSP that is doing things the normal ARM cores really can't do effectively, and the power savings come from using the DSP for the specialized tasks it is more efficient at... otherwise I think it's just Marvell trying to move silicon that didn't bin too well.
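
To put numbers on that example (the wattages and times are the post's hypothetical figures, not measurements), energy is just power times time:

    /* E = P x t for the two hypothetical chips in the parent post */
    #include <stdio.h>

    int main(void) {
        double fast_chip = 100.0 * 1.0;  /* 100 W for 1 s  = 100 J */
        double slow_chip = 10.0 * 12.0;  /*  10 W for 12 s = 120 J */
        printf("fast chip: %.0f J, slow chip: %.0f J\n", fast_chip, slow_chip);
        /* the hungrier chip finishes first and wins on energy for a
         * finite task -- the "race to idle" argument */
        return 0;
    }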

Re:Wait... (1)

Zerth (26112) | about 4 years ago | (#33677704)

The point of the third processor is not efficiency, it is low power. As you said, chips that are designed to be low power aren't necessarily efficient. Similarly, lots of chips have an idle/watchdog mode, but they aren't as low power as a chip designed to do that.

This way you can have the third core do event detection and last a long time on very little power, but then wake up the dual-core to process the events quickly and efficiently before going back to sleep.
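
In pseudo-C, that division of labor might look like the sketch below. The hal_* functions are hypothetical stand-ins (nothing here is Marvell's actual API); the point is the shape of the loop:

    #include <stdbool.h>

    typedef struct { int kind; } event_t;

    /* hypothetical HAL stubs, purely for illustration */
    static void    hal_sleep_until_event(void) {}  /* WFI on the slow core */
    static event_t hal_next_event(void)  { event_t e = {0}; return e; }
    static bool    is_heavy(event_t e)   { return e.kind > 0; }
    static void    hal_wake_smp_pair(void)  {}  /* power up the 1.5GHz cores */
    static void    hal_sleep_smp_pair(void) {}
    static void    run_on_smp(event_t e)  { (void)e; }
    static void    run_locally(event_t e) { (void)e; }

    int main(void) {
        for (;;) {
            hal_sleep_until_event();   /* slow core draws almost nothing here */
            event_t e = hal_next_event();
            if (is_heavy(e)) {
                hal_wake_smp_pair();   /* wake, work fast, sleep again */
                run_on_smp(e);
                hal_sleep_smp_pair();
            } else {
                run_locally(e);        /* trivial events never wake the pair */
            }
        }
    }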

Re:Wait... (1)

sunking2 (521698) | about 4 years ago | (#33677804)

Imagine an application that only logs keystrokes. Which would you rather run it on, from a power standpoint? Extreme and silly example, but the point is that this is probably geared toward things that are interaction-bound with plenty of idle time. And their belief seems to be that running on a slower, more power-efficient core saves power.

Re:Wait... (0)

Anonymous Coward | about 4 years ago | (#33677942)

The "standalone" part implies that there's no SMP handling present, i.e. no need to synchronize caches etc., so the whole cache snooping logic can be skipped. Additionally, there's a lot of efficiency to be gained by using an in-order design with inherently lower performance.

Re:Wait... (3, Insightful)

xouumalperxe (815707) | about 4 years ago | (#33678186)

The important factor is ENERGY efficiency. As an example: A 100 watt power draw from a CPU that takes 1 second to finish a task is more energy efficient than a 10 watt power draw that takes 12 seconds to finish the same task.

What if the task at hand is a continuous, undemanding one, like, say, basic mobile phone functions?

Re:Wait... (1)

medv4380 (1604309) | about 4 years ago | (#33678300)

It's like designing a car engine. You can build it for high horsepower, or you can build it for low horsepower to improve the mileage. It is very hard to get the highest horsepower and the best gas mileage from the same design, and even if it were possible, you'd have to redo the timing when you wanted to change from one to the other, and that's not something you do while the engine is running. They decided they could put two different engines in this chip: one designed for performance, with a dual core, and one designed to work only on the most basic tasks when the system is idle. So it's more like a hybrid car, where you switch to electricity when you're stopped at a light.

Re:Wait... (1)

espiesp (1251084) | about 4 years ago | (#33678596)

While I don't disagree with your point either in principle or in practice, there is something one of us is confused on.

You mention "timing"; I'm assuming it's in reference to the engine. Ignition timing changes with engine speed and load in virtually every EFI vehicle made in the past 30 years. Also, cam timing, on both intake and exhaust valves, is variable (whether in steps or 'infinitely') on-the-fly in a huge number of vehicles built today, and has been done in mass-produced vehicles in some form (VTEC) or another for the past 20 years.

So I'm really not sure what kind of timing you refer to that changes while running, but all the important ones I'm aware of are changed all the time.

Re:Wait... (1)

medv4380 (1604309) | about 4 years ago | (#33678862)

I'll admit I'm probably more than a little confused on timing. My primary experience with timing is my old friends from HS 15 years ago, who always wanted to mess with the timing belts on their old '57 Chevy classics to improve the horsepower.

Re:Wait... (1)

MightyYar (622222) | about 4 years ago | (#33678360)

As an example: A 100 watt power draw from a CPU that takes 1 second to finish a task is more energy efficient than a 10 watt power draw that takes 12 seconds to finish the same task.

This is only true if the "task" has a finite duration. If the task is continuous, then the lower-power chip wins, provided it is powerful enough. For example, if the "task" is "sit there and wait for the phone to ring", then whichever chip can consume the least power while doing almost nothing wins.

Marvell's press release has a little more (3, Informative)

PCM2 (4486) | about 4 years ago | (#33678878)

I think we're tripping up over the reporter's choice of language here. From Marvell's actual press release [prnewswire.com]:

The tri-core design integrates two high performance symmetric multiprocessing cores and a third core optimized for ultra low-power. The third core is designed to support routine user tasks and acts as a system management processor to monitor and dynamically scale power and performance.

Depending on what their definition of "routine user tasks" might be, it sounds like it doesn't actually shut off both cores and run exclusively off the third core, the way TFA makes it sound -- it only does that if the device isn't doing anything. More interesting stuff:

Marvell's ARMADA 628 tri-core CPU comprises a complete SoC design – a first for the industry. In addition to the tri-core CPU, there are six additional processing engines to support stunning 3D graphics, 1080p video encode/decode, ultra high fidelity audio, advanced cryptography, and digital photo data processing – for a total of nine dedicated core functions.

This sounds like a pretty cool chip.
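
If "acts as a system management processor to monitor and dynamically scale power and performance" means what it sounds like, one plausible shape for that logic is a governor loop running on the low-power core. Everything below (thresholds, function names, poll interval) is an illustrative assumption, not Marvell's design:

    #include <stdbool.h>

    #define WAKE_LOAD  0.75   /* assumed thresholds -- note the hysteresis gap */
    #define SLEEP_LOAD 0.20

    /* hypothetical hooks, stand-ins for whatever the real firmware does */
    static double read_load(void)          { return 0.0; }
    static void   power_up_smp(void)       {}
    static void   power_down_smp(void)     {}
    static void   migrate_work_here(void)  {}
    static void   sleep_ms(int ms)         { (void)ms; }

    int main(void) {
        bool smp_on = false;
        for (;;) {
            double load = read_load();
            if (!smp_on && load > WAKE_LOAD) {
                power_up_smp();          /* real demand: wake the fast pair */
                smp_on = true;
            } else if (smp_on && load < SLEEP_LOAD) {
                migrate_work_here();     /* pull remaining work to this core */
                power_down_smp();        /* then cut power to the SMP block */
                smp_on = false;
            }
            sleep_ms(100);               /* poll interval, also assumed */
        }
    }

The two thresholds give the loop hysteresis, so a load hovering near a single cutoff doesn't ping-pong the SMP pair on and off.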

Re:Wait... (2, Interesting)

david.given (6740) | about 4 years ago | (#33679938)

Looks to me like this is designed for the mobile market: two SMP processors to run the application OS, one lower-power processor to run the radio OS. Pretty much every smartphone on the market works like this, albeit with a single application processor.

The reason is that GSM (and most likely CDMA, although I've no experience of that) is critically real-time and needs a real-time OS to manage: Linux basically doesn't cut it. So you dedicate a complete processor with its own real-time operating system to it. I know that the G1, for example, runs L4/Iguana on its radio processor.

This allows you to run an ordinary desktop operating system, which isn't hard real time, on the application processor; Linux, Windows, etc.

There are also legal benefits in that since you can update each OS independently, you don't need to get your device (expensively) recertified by your local radio regulator in order to do an application OS update. If you were using a one-chip device, where there was a single processor that ran both the application layer and the radio stack combined, then you couldn't update one without updating the other.

This, BTW, is one of the reasons why so many people are interested in virtualisation on embedded devices; if you can run two operating systems at the same time on a single processor, it allows all the benefits of a two-chip device on vastly cheaper single-chip hardware.

Re:Wait... (0)

Anonymous Coward | about 4 years ago | (#33680290)

Most likely the third "standalone" core is not a core but a whole chip on the same multi-processor module as the dual-core processor. A setup like this could make it advantageous to shut off the whole processor instead of just a single core, which would otherwise leave things like large caches and their supporting circuitry still drawing power. It might be interesting to see if this type of processor select (much like we've seen in recent laptops that contain both chipset-based video and discrete video chips) will allow for a smoother transition between idle and full performance. In my experience none of the processor speed scaling technologies adjust quickly, and for the most part they simply don't work; either keeping the processor on full throttle all the time or failing to upclock at all under load.

Thinking of the speed scaling options, most don't provide an adequate voltage differential between a low and high clock, which means that the processor will use less power, but not significantly except under load. Since the goal of a speed-scaling processor is to raise the clock under load, this power savings scheme only saves power if the technology fails at its goal, and simply provides an experience akin to an underclocked processor.

Re:Wait... (0, Offtopic)

jd (1658) | about 4 years ago | (#33677532)

Because superheroes never work well on reduced power? Or maybe there's a patch they'll sell later (as per Intel's "upgradeable processor") that will let you run all three.

Re:Wait... (0)

Anonymous Coward | about 4 years ago | (#33677594)

This might be right for tasks that finish at some point (i.e. some calculation), but is it true for tasks that have to run a short routine every few [arbitrary timestep]? Something like: check where you are by GPS; if you are at A, power up wifi; if not, sleep for another 0.1 second.

Depending on the length of the time interval and the time (and extra power consumption) needed to send the chip into (and get it out of) idle mode, this new idea could prove useful.

Re:Wait... (2, Interesting)

pclminion (145572) | about 4 years ago | (#33677694)

*shrug* -- Even in seemingly low-requirements applications like a laser printer you'll typically find at least two or three microprocessors. It's common for those CPUs to be running at different clock speeds, be closer or farther away from certain kinds of memory, and dedicated to specific tasks. You might have the main core running Linux with a firmware RIP engine while some subsidiary core runs a custom RTOS to drive the print mechanism. This new CPU is just packaging a few of those heterogeneous cores up in a convenient way. In other words, these are the sorts of configurations the industry is using already. I think it sounds interesting, but not really earth shattering.

leakage (4, Interesting)

YesIAmAScript (886271) | about 4 years ago | (#33677788)

At current processes (44nm, 32nm, etc.), switching power isn't critical at low speeds; it's leakage that is the issue. So a fast (big) processor takes a fair amount of power even if you run it slow.

Whereas a slow core is smaller, so that means fewer transistors to leak. You also can make the gates out of lower-leakage cells, so that even when on they leak less. This limits top speed which would be a problem for the main core but isn't a problem for this non-main core.

Having additional low-power cores isn't that strange; many current phone SoCs do this. What is unusual is that most of those have one main core and many slower ones, while this one has two main cores and one slower one.
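
A toy model makes the point visible. With made-up constants (none of these numbers come from Marvell or ARM), dynamic power scales roughly as C*V^2*f while leakage is roughly V*I_leak, and a big core clocked down still pays its full leakage bill:

    #include <stdio.h>

    /* P = C_eff * V^2 * f  (dynamic)  +  V * I_leak  (static) */
    static double core_power(double c_eff, double volts, double mhz,
                             double i_leak) {
        return c_eff * volts * volts * (mhz * 1e6) + volts * i_leak;
    }

    int main(void) {
        /* big out-of-order core clocked down to 624MHz vs. a small core
         * built from low-leakage cells at the same clock; all constants
         * are invented for illustration */
        double big   = core_power(1.0e-9, 1.1, 624, 0.30);
        double small = core_power(0.3e-9, 1.0, 624, 0.03);
        printf("big core @624MHz:   %.2f W\n", big);    /* ~1.09 W */
        printf("small core @624MHz: %.2f W\n", small);  /* ~0.22 W */
        return 0;
    }

Even at the same clock the small core comes out far ahead, because the lower transistor count and low-leakage cells shrink both terms.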

Re:Wait... (1)

MooseMuffin (799896) | about 4 years ago | (#33677886)

Will it ever run all 3 cores at once?

Re:Wait... (1)

owlstead (636356) | about 4 years ago | (#33680918)

Good question; I haven't found out yet. But I presume that is all in the software, which might make it a tricky processor to program for. I cannot see it transferring all state from the slow core to one of the faster ones without some help from software in the operating system. And if it does run all three processors at the same time, you will basically have an ASMP. So it is an interesting design, even though it is rather obvious. It is not hard to see ASMPs becoming more common on SoCs (you just don't need each and every function X times on an X-core chip). This seems to be a first step; scheduling algorithms will be an interesting subject for the years to come.

Diff Cores: Higher Performance + Lower Power (1)

perpenso (1613749) | about 4 years ago | (#33677974)

Cores can be optimized for low power consumption or high performance. A core that can switch between a performance mode and a low-power mode is probably making compromises on both modes. If you have completely different cores, you can avoid such compromises and maximize performance in one core and minimize power consumption in the other. Having heterogeneous cores most likely yields better results than any mode-switching scheme.

Re:Wait... (1)

GWRedDragon (1340961) | about 4 years ago | (#33678034)

I've seen plenty of benchmarks where a higher-power draw chip that can get done with a task quickly and drop back to low-power idle mode is actually more energy efficient than a lower-power chip that takes longer to get the task done.

Maybe if you have to do some simple task very frequently? Seems like a realistic usage scenario for this kind of chip.

Re:Wait... (1)

j1m+5n0w (749199) | about 4 years ago | (#33678056)

Why can't it just shut down one of the two normal cores, and run the other core at a highly reduced rate to get the same power savings?

I'm no expert at hardware design, but I'd guess that the energy savings from reducing the clock on a high-speed chip aren't all that dramatic. If you have a 1.5GHz chip, it has to be designed around circuits that can reach a stable state in less than a nanosecond. A chip clocked at a third the speed can use longer wires and more complex circuits, and probably lower voltages, because it has a lot more time between clock cycles. The optimal design for the slower chip may differ considerably from the optimal design of the fast chip. Similarly, the fast cores might be simpler if they don't have to be capable of running at a slower clock.

Additionally, I've seen plenty of benchmarks where a higher-power draw chip that can get done with a task quickly and drop back to low-power idle mode is actually more energy efficient than a lower-power chip that takes longer to get the task done.

I'd guess that the slow core is designed for tasks that really aren't cpu constrained at all, but which might have real-time requirements, such as logging gps coordinates or accelerometer readings.

Re:Wait... (1)

bytta (904762) | about 4 years ago | (#33680306)

I don't have a car analogy for this, but can't this be compared to a gas stove that must always have a fire going?
The new chip is like a dual-burner model with the third one acting as a pilot light, which makes a lot more sense than running one of the burners at its lowest setting...

Re:Wait... (1)

Kagetsuki (1620613) | about 4 years ago | (#33678096)

Check the datasheets? I've been writing code on ARM for 6 years now and I can't recall any devices having a dynamic or software-set clock rate... I'm just blindly assuming there is a reason for that. I also think technologies like Intel SpeedStep shut down paths as well as dropping clock rates, and I have no idea just how much SpeedStep actually saves. If anyone has any numbers I'd love to see them, but I'm too lazy to try to remember login credentials to download datasheets and figure out W@[Hz].

Re:Wait... (1)

dmitrygr (736758) | about 4 years ago | (#33680540)

Seriously? What ARMs have you worked with? Every one I've seen has the ability to set its speed:

Intel PXA gen1 series have software settable MEMORY and CPU speeds with 12MHz granularity
Intel PXA gen2 series have software settable BUS, MEMORY, AND CPU speeds with 13MHz granularity
Freescale i.MX series have software settable MEMORY and CPU speeds, with sub-Hz granularity
TI's older OMAPs can set CPU speed with 6 MHz granularity
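
On an ARM Linux system those vendor clock registers are usually wrapped by the kernel's cpufreq framework, so from userspace the knob looks the same regardless of SoC. A minimal sketch, assuming the "userspace" governor is available, the requested step is one the SoC actually supports, and you're root:

    #include <stdio.h>

    static int write_sysfs(const char *path, const char *val) {
        FILE *f = fopen(path, "w");
        if (!f) return -1;
        fputs(val, f);
        return fclose(f);
    }

    int main(void) {
        /* switch cpu0 to the "userspace" governor so we can pick a speed */
        write_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
                    "userspace");
        /* request 624MHz; the value is in kHz and must be one of the steps
         * listed in scaling_available_frequencies */
        return write_sysfs(
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed",
            "624000");
    }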

Re:Wait... (0)

Anonymous Coward | about 4 years ago | (#33680354)

Why can't it just shut down one of the two normal cores, and run the other core at a highly reduced rate to get the same power savings?

Google for "hold time violations".

That is just... (1)

PmanAce (1679902) | about 4 years ago | (#33677398)

Maverllous!

Re:That is just... (1)

Smauler (915644) | about 4 years ago | (#33678708)

Erm... what? You've just managed to misspell marvellous, despite the company being called Marvell... all you had to do was add "ous" to the end. Marvel is the root of marvellous.

If I'm missing something here... well, it won't be the first time.

Re:That is just... (1)

PmanAce (1679902) | about 4 years ago | (#33679904)

Nah, you are not missing anything. I just wrote something early in the morning without having had my coffee. You just nit-picked the hell out of my comment.

Netbook market creep (1)

assemblerex (1275164) | about 4 years ago | (#33677410)

Intel should be seriously worried about ARM's creep into netbooks. Intel brought the notebook market to all-time low prices with Atom, and things like this chip might turn around and bite them in the ass.

Re:Netbook market creep (1)

SimonTheSoundMan (1012395) | about 4 years ago | (#33677512)

The same was said in the 1990s, when Acorn Computers had machines that were faster than Intel's own Pentium and 486 machines. RISC OS was pretty much better than Windows 3.1 at the time, too. *weeps*

Re:Netbook market creep (1)

John Hasler (414242) | about 4 years ago | (#33677620)

But nobody sees any need to run Windows legacy software on cellphones.

Re:Netbook market creep (1)

Lumpy (12016) | about 4 years ago | (#33678228)

Tell that to PHBs and Microsoft.

Re:Netbook market creep (1, Interesting)

Anonymous Coward | about 4 years ago | (#33677912)

You have it backwards. The Atom parts are a partial response to ARM, not a cause of it. ARM already ships many times more CPUs than x86 and is starting to eat into the traditional x86 market via smartphones, tablets, set-top boxes/appliances, and servers. Intel has responded by producing 'low power' Atoms and running around trying to convince phone manufacturers that 2 hours of battery life would be worth it for 'the full internet experience' (Flash and legacy Windows crap). It's also important to note that Intel is an ARM licensee and could trivially switch to that architecture if they wanted to. However, they want to preserve their x86 margins as much as possible rather than becoming a commodity manufacturer.

BWAHAHAHAHAHAHA! (1)

bornagainpenguin (1209106) | about 4 years ago | (#33678482)

Intel should be seriously worried about ARM's creep into netbooks.

Ahh...you....

Bwahahahahahahahahahahahahahaha! Heh...heh... And ..and..any day now there will be smartbooks everywhere!

--bornagainpenguin

PS: Wake me up when they FINALLY release these things in a smartbook device about the size of the HP Jornada handheld PCs... then maybe I'll be interested. Otherwise this is just more vaporware.

Imagine a (4, Funny)

DevConcepts (1194347) | about 4 years ago | (#33677422)

Beowulf Cluster of...... Never Mind......

Re:Imagine a (1)

jd (1658) | about 4 years ago | (#33677582)

You can cluster Dr Strange, but it can alter the mystical properties of the universe.

Re:Imagine a (3, Interesting)

SimonTheSoundMan (1012395) | about 4 years ago | (#33677600)

I saw 32 ARM chips running at 600MHz demonstrated in a desktop computer back in 1999. Check it out: http://www.acornuser.com/acornuser/year18/issue210.jpeg [acornuser.com]

Re:Imagine a (1)

EnglishTim (9662) | about 4 years ago | (#33680834)

*spooge*

Beowolf (0)

Anonymous Coward | about 4 years ago | (#33677440)

Obligatory Imagine a Beowolf cluster of these!

Fuck Everything, We're Doing Five Blades (4, Funny)

imag0 (605684) | about 4 years ago | (#33677534)

Reminds me of This old chestnut [theonion.com] from the Onion.

Stop. I just had a stroke of genius. Are you ready? Open your mouth, baby birds, cause Mama's about to drop you one sweet, fat nightcrawler. Here she comes: Put another core on that fucker, too. That's right. Three cores, one chip, and make the third one play MP3's or someshit. You heard me--the third core plays MP3's. It's a whole new way to think about computing. Don't question it. Don't say a word. Just key the music, and call the chorus girls, because we're on the edge--the razor's edge--and I feel like dancing.

Re:Fuck Everything, We're Doing Five Blades (2, Interesting)

Walterk (124748) | about 4 years ago | (#33677946)

Sometimes reality is stranger than fiction: http://www.amazon.co.uk/Gillette-Fusion-Manual-Razor-Replacement/dp/B000GE5712 [amazon.co.uk]

Re:Fuck Everything, We're Doing Five Blades (1)

WarwickRyan (780794) | about 4 years ago | (#33678406)

Every time I see those advertised I think back to that Onion article.

Stopped buying Gillette blades around that time too. It seemed to me that their 2-blade razors suddenly got a lot blunter, so I switched over to Wilkinson Sword's twin-blade system, which is much sharper and thus more comfortable.

+1 (1)

elsJake (1129889) | about 4 years ago | (#33678742)

I've switched to Wilkinson Sword classic double-edge blades: sharper than anything else and a lot cheaper too. The fact that they're cheaper also helps with comfort, because you feel more inclined to change them before they wear out.
Haven't had a problem yet, but I am considering switching to a straight razor as soon as I can buy myself a Dovo. Sure enough, you need to be careful, but the increase in comfort and quality of the shave is at least as big as the switch from crap razors to Wilkinson. What's more, you don't need to rinse the blade every centimeter if you let your stubble grow a little too much.
Either way I'm keeping the Wilkinson for when I'm too tired to be safe using a straight razor.

Re:Fuck Everything, We're Doing Five Blades (1)

Taxman415a (863020) | about 4 years ago | (#33681206)

If you like that, you'll really like double-edge razors. You can get a decent handle on Amazon for $9, and decent blades go as low as $0.17 apiece. Yeah, you'll spend a little more to get a decent brush, a sample pack of a variety of razor blade brands, soap and stuff, but you'll never regret it. Anyone who is sick of paying absurd prices for cartridge razors should consider it as well. It does take a little more time, but it's well worth the better shave and the knowledge that you're not supporting a ridiculous business model. Then save the disposable razors for travel. Double-edge/straight-razor shaving is the slashdot way to shave, for sure.

Re:Fuck Everything, We're Doing Five Blades (1, Funny)

Anonymous Coward | about 4 years ago | (#33678370)

There is also an old Saturday Night Live sketch, from either the second or third episode of the first season, that is pretty much the same joke, except the sketch was making fun of manufacturers of two-blade razors and one-upping them with then-fictitious three-blade razors.


Ahead of everyone else. (1)

sergeantsoda (1907546) | about 4 years ago | (#33677598)

I guess they're trying to get a LEG up on the competition.

Re:Ahead of everyone else. (0)

Anonymous Coward | about 4 years ago | (#33677666)

I don't get it.

Re:Ahead of everyone else. (0)

Anonymous Coward | about 4 years ago | (#33679504)

ARM -> leg

*winces*

Why 624? (0)

Anonymous Coward | about 4 years ago | (#33677662)

Why not a 50MHz or 200MHz core or something?

Re:Why 624? (0)

Anonymous Coward | about 4 years ago | (#33678164)

624 * 2.5 = 1560. It's probably a single 624 MHz clock which is multiplied up 2.5x for the faster cores.

Pay no attention to the CPU behind the curtain (4, Interesting)

Animats (122034) | about 4 years ago | (#33677664)

There's a tendency to put a little CPU in devices to handle activity when the device is "off". Something has to sit there and watch for the remote if the TV is to be turned on remotely. Many machines have a "wake on LAN" capability, and most servers have an extensive remote management capability built into the network controller. All of these imply some little CPU, invisible to the main operating system, doing things when the device is supposedly "off".

This isn't necessarily a bad thing, but it does provide an attack surface. Especially since those little machines tend to have very powerful access to the rest of the system, bypassing most security measures.

This new chip looks like an effort to integrate the "power off" CPU onto the same silicon as the main CPUs. That's a routine use of silicon real estate: putting more on one part. But the concept isn't new.

Re:Pay no attention to the CPU behind the curtain (1)

xonicx (1009245) | about 4 years ago | (#33678272)

Embedded system-on-chip = ARM cores + peripheral controllers (SDIO, USB, I2C...) + video/audio decoder + interrupt controller + power management unit + RTC + ... Most SoCs are designed to have different power rails for all (or a few combinations) of these modules. I am not aware of any SoC which keeps the ARM core running while playing music with the screen OFF. To service an interrupt, you don't need to keep the ARM running. You can always wake the ARM core through some custom circuitry and save power.

Re:Pay no attention to the CPU behind the curtain (1)

fatphil (181876) | about 4 years ago | (#33682256)

Regarding your 'I am not aware ...', here's a powertop summary of an mp3 playing on a high-end ARM-powered mobile phone with no other apps active:

Freq: 500MHz 3.4%, 250MHz 96.6%
IRQs in 30s: DMA 3200, [elided] 2150, programmable timer 1880, i2c 640
Core power domain: Off 0%, Retention 0%, Inactive 48%, On 51%
(other power domains like graphics-related stuff: 100% off)
Total wake-ups 10000 = 333/s, Total IRQs 8000 = 266/s, Timers 2000 = 66/s
H/W wakeups 100 = 3.3/s

(OK, that was on prototype hardware, but not significantly different from the production version. The kernel was an R&D kernel too, so it might have additional logging overhead, but I don't think so. However, I wasn't running it on a jig; it was a live system (i.e. my phone).)

So I guess you do now know of one.

Just for reference, this device will play mp3s for longer than a fully-charged iPhone (or what Apple claim in their marketing bumf). So I'm guessing Apple have an even less optimal architecture.

Note to $EMPLOYER - this is on the device that's already on the market, anyone could have obtained these results.

Re:Pay no attention to the CPU behind the curtain (1)

owlstead (636356) | about 4 years ago | (#33678294)

They market it as a 3-core CPU. This more or less implies that the slower core is still fully ARMed. It also implies that it will be a nice challenge for operating system engineers: what are you going to run on the slower core, and what will you run on the faster ones? Can you move a thread from one core to another, and how easy is that? How do you handle applications that don't play nice and keep using large quantities of CPU time after you've told them to go into sleep mode? This is rather different from having a separate "CPU" or dedicated logic for a specific purpose.

If you just look at the hardware it is a smart thing to do, but it might get tricky for OS vendors to support at full potential (getting just the slower core to run will probably be easy). Other CPUs probably inform the applications and adjust the frequency of the single or dual core, which seems easier to support.

As for the attack surface: sure thing, the more functionality (or, in this case, doubly implemented functionality), the larger the attack surface and the more vulnerable you are. But normally the attack surface of a CPU is more or less restricted to certain higher-level functions and of course memory access. It should not be too hard to get this right; I don't know of much malware that attacks a specific CPU implementation directly.
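
If the slow core simply shows up to Linux as another CPU (say cpu2 -- an assumption, since Marvell hasn't said how it's exposed), the crude version of that scheduling policy is ordinary CPU affinity:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t slow;
        CPU_ZERO(&slow);
        CPU_SET(2, &slow);   /* assumed: the OS numbers the 624MHz core cpu2 */
        /* pid 0 = the calling process; confine it (e.g. a housekeeping
         * daemon) to the slow core so the SMP pair can power down */
        if (sched_setaffinity(0, sizeof(slow), &slow) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        return 0;
    }

A real scheduler would migrate threads automatically; pinning housekeeping processes to the slow core by hand is just the proof of concept.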

Re:Pay no attention to the CPU behind the curtain (0)

Anonymous Coward | about 4 years ago | (#33678714)

There are a lot of ARM CPUs out there that run their code from flash memory, can limit code execution to flash, and require physical access to reprogram. The attack surface becomes pretty small when code injection is impossible.

Re:Pay no attention to the CPU behind the curtain (1)

amorsen (7485) | about 4 years ago | (#33678746)

How do you handle applications that don't play nice and keep using large quantities of CPU time after you've told them to go into sleep mode?

You're the bloody OS! You don't have to give the applications any CPU time at all, if you feel it's time better spent playing tic-tac-toe with imaginary opponents.

It's tricky to switch from the fast to the slow CPU and back at precisely the right point, but all you lose by getting the timing wrong is slightly worse performance or slightly worse battery life. Not the end of the world.

Re:Pay no attention to the CPU behind the curtain (1)

owlstead (636356) | about 4 years ago | (#33680816)

Sure thing, but you still have to program the OS to do it, and for now each and every mainstream CPU has evenly matched cores. If you look at the mainstream desktop operating systems, they all assume that every core is equal to every other core. Then you've got the Cell processor, which is a single general-purpose core plus multiple "SPE" cores. This chip is a different beast altogether: each and every program can be switched from one core to the other, but there is one special core that is much slower. So your scheduler has to be adapted for this kind of CPU. I'm not saying that cannot be done, but it *IS* going to add complexity nonetheless.

Of course, it gets even more interesting if (and maybe this is already so) cores have different capabilities (encryption, graphics, multimedia instructions, etc.) as well. In other words:

http://en.wikipedia.org/wiki/Asymmetric_multiprocessing [wikipedia.org]

yep (1)

bhcompy (1877290) | about 4 years ago | (#33677680)

You have the right to bear ARMS, don't let any authoritarian inner-city councilman tell you otherwise

Re:yep (2, Funny)

durrr (1316311) | about 4 years ago | (#33677828)

Well, there are agreements which prohibit the use of Multiple Independent RISC Vehicles in your devices.
Though I guess you're fine as long as the Russians don't find out.

Uhmm (0)

Anonymous Coward | about 4 years ago | (#33677710)

Haven't triple-core processors been around since at least the Xbox 360 came out?

Re:Uhmm (0)

Anonymous Coward | about 4 years ago | (#33677844)

@AnonymousCoward - #triplecore predates #xbox360. this is new 4 #arm

Re:Uhmm (1)

MightyYar (622222) | about 4 years ago | (#33678054)

I don't write much flamebait, but why are you even here? You have completely missed the point of the article - in fact the summary - in a way that makes it clear that you have absolutely no business being on a site "for nerds".

If you are a fledgling nerd, then I apologize for being a dick - but please, just lurk. As Abraham Lincoln and Lisa Simpson once said, "Better to remain silent and be thought a fool than to speak out and remove all doubt."

The triple-core is not what is interesting here - as you say, it has been done for years. The other aspects of the design are what merit a slashdot mention - specifically this weird bastardized ARM stuck on there in addition to the more-standard ARM chips and the graphics engines.

Pad pro please (0)

Anonymous Coward | about 4 years ago | (#33677782)

Imagine an iPad with this chip, a Retina display, and glasses-free stereoscopic 3D. Call it the Pad Pro and nerds everywhere will want one.

Anand Chandrasekher says... (1)

kkwst2 (992504) | about 4 years ago | (#33677954)

Fuck everything, we're doing 7 cores!

Re:Anand Chandrasekher says... (1)

fatphil (181876) | about 4 years ago | (#33682296)

You're behind the curve; the Cell had 9 cores.

Although in reality most smartphones will have about 5 ARM cores in them. There aren't as many ASICs as there used to be; many are just general-purpose processors, perhaps with that tell-tale sign, 'firmware'. Your wireless card? I bet you there's an ARM core in it. Your Bluetooth chip too? Ditto.

New power rating. OR "What is an oodle?" (1)

Chas (5144) | about 4 years ago | (#33677984)

Will we rate multiple oodles in binary or standard method? Kilooodles or Kibioodles? Megaoodles or Mebioodles? Gigaoodles or Gibioodles?

INQUIRING MINDS WANT TO KNOW!

Re:New power rating. OR "What is an oodle?" (0)

Anonymous Coward | about 4 years ago | (#33678166)

Meh, I'll wait 'til I can save at least one Poodle.

Re:New power rating. OR "What is an oodle?" (1)

suomynonAyletamitlU (1618513) | about 4 years ago | (#33678784)

I think I'll skip the Killapoodle chips and wait for the Megapoodle ones.

pity you are all unobservant prats (0)

Anonymous Coward | about 4 years ago | (#33678172)

Blacklaw passes along an excerpt from Thing.co.uk that begins ....


Pity it's thinq.co.uk, not thing.co.uk; otherwise this would be a good thread.

Power != Energy (5, Interesting)

Theovon (109752) | about 4 years ago | (#33678338)

It's common for people (myself included) to conflate Energy with Power, but it's often an important distinction. To begin with, technically, we don't consume power. We consume energy (to do work, which is in the same units), and power is the rate at which it is consumed.

An important factor often left on the floor is processing efficiency, meaning how fast we get work done for a given power level. If you reduce power by half, but the work takes twice as long, you've accomplished nothing. For the same amount of work, your battery will drain the same amount. Indeed, what we really want to do here is make systems take less energy, and within reasonable limits, it doesn't matter how much power you consume while you're doing it.

This has actually been one of the things that makes ARM processors energy-efficient. Not to say they're not also low power, but the strategy has always been to build event-driven systems. Something happens (user input, sensor reading, etc.), which causes the CPU to wake up in your embedded system. The ARM processor then blasts through the work to be done, and then goes to sleep, powering down completely until the next event. (Some systems will use intermediate "sleep" states that are less time-expensive to sleep and wake.) An ARM is more efficient than an Atom, in part because it uses less power, but also in part because it needs less time to complete the same task.

In today's technology, this is especially important. At 90nm and 65nm, the Intel Core and Core 2 used clock gating to save power. Functional units (e.g. floating point multiply) that are idle have their clock signals gated, which reduces power being used by that part of the clock distribution tree. This is important because in those technologies, dynamic (switching) power dominates. In the Core i7, Intel uses POWER gating. When a functional unit is idle, it's powered down completely. This is because in 45nm and 32nm CMOS, static (leakage) power is what dominates.

Going back to ARM, this is something being applied in the Cortex A9. They've made a more complex processor in order to execute out of order, but as a result, computation goes appreciably faster. During computation, leakage is constant. By getting the work done faster and powering down completely, more leakage power is saved. Less time translates into less energy, even if the A9 uses more power than the A8.
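
The clock-gating vs. power-gating difference is easy to put numbers on. With invented figures (nothing here is measured from a real A8 or A9), over a fixed window the faster core burns more power while running but power-gates away its leakage sooner:

    #include <stdio.h>

    int main(void) {
        double window = 0.010;  /* look at a fixed 10 ms interval */
        /* slower in-order core: 0.3 W busy for 8 ms, then clock-gated
         * idle that still leaks 0.1 W for the remaining 2 ms */
        double slow = 0.3 * 0.008 + 0.1 * (window - 0.008);
        /* faster out-of-order core: 0.5 W busy for 4 ms, then power-gated
         * to ~0 W for the remaining 6 ms */
        double fast = 0.5 * 0.004 + 0.0 * (window - 0.004);
        printf("slow core: %.1f mJ\n", slow * 1000.0);  /* 2.6 mJ */
        printf("fast core: %.1f mJ\n", fast * 1000.0);  /* 2.0 mJ */
        return 0;
    }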

Re:Power != Energy (2, Informative)

MorpheousMarty (1094907) | about 4 years ago | (#33679884)

Processing efficiency is ARM's whole focus. They usually talk about the performance per milliwatt of their chips. This is why they never compare themselves to Intel or AMD, which focus on performance, and why most hand-helds use ARM based processors (PSP, Nintendo DS, iPhone, basically every single Android device, etc). If your device runs on a battery this makes good sense unless you have to have x86 compatibility.

Re:Power != Energy (0)

Anonymous Coward | about 4 years ago | (#33681858)

If you reduce power by half, but the work takes twice as long, you've accomplished nothing. For the same amount of work, your battery will drain the same amount.

Not likely if you plan to use components from the non-ideal world. Since series resistance applies to everything, your non-ideal battery will behave differently at different drain rates. In all likelihood, your battery will last longer if you drain it slower.
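
Concretely (with illustrative numbers, not from any datasheet): the fraction of the cell's power wasted as heat in its internal resistance is roughly I*R/V, so a 10x higher drain rate wastes 10x the fraction:

    #include <stdio.h>

    int main(void) {
        double R = 0.15;               /* ohms, assumed internal resistance */
        double V = 3.7;                /* nominal Li-ion cell voltage */
        double rates[] = { 2.0, 0.2 }; /* amps: heavy vs. light drain */
        for (int i = 0; i < 2; i++) {
            double I = rates[i];
            /* power lost inside the cell is I^2*R; as a fraction of the
             * power drawn (I*V) that is I*R/V */
            printf("at %.1f A: %.1f%% wasted internally\n",
                   I, 100.0 * I * R / V);  /* ~8.1% vs ~0.8% */
        }
        return 0;
    }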

This is such bullshit (1)

pslam (97660) | about 4 years ago | (#33678388)

They're not the first, there's nothing different about this, and it's not even to market.

"Triple core" as they describe it is pretty standard stuff in the embedded/mobile world. You have one or two main "application" cores, and one or more I/O processing cores doing DSP, graphics, data processing, low rate data moves, etc. Just have a look at an NVidia Tegra 2, or even a Tegra 1 marketing slide and you'll see it has even more cores than this Marvell chip. Better yet, any Qualcomm 1GHz class cell phone chip, as those are at least actually shipping in quantity.

And hey, Tegra 2 hasn't even managed to get to market. The Marvell chip hasn't even got into a design! It's not even available as samples. How the fuck is that "to market"? There are even Tegra 2-based tablets being demoed at trade shows, even if it's not available in quantity. Marvell has nothing whatsoever to show, but that's not stopping them gloating about being first with something that blatantly isn't a first in any way.

Is there anything in this press release and submission which is accurate other than the clock speed? Is that even correct?

Re:This is such bullshit (1)

amorsen (7485) | about 4 years ago | (#33678778)

The difference is that this one seems to be able to run all three cores as SMP with a single kernel being able to run on all of them. Traditionally the cores either run completely different systems, or the kernel runs on one along with general system tasks, whereas the other is dedicated to DSP-like work.

Re:This is such bullshit (1)

pslam (97660) | about 4 years ago | (#33679790)

The difference is that this one seems to be able to run all three cores as SMP with a single kernel being able to run on all of them. Traditionally the cores either run completely different systems, or the kernel runs on one along with general system tasks, whereas the other is dedicated to DSP-like work.

Except they're not symmetric. If you go to Marvell's main site and look at the larger press release, it calls them (and note the use of "quote marks" here) "heterogeneous multiprocessing". As in, not symmetric. Being cache coherent is nothing special these days.

This is just definitions. They have a third core which is also ARMv7 and happens to be cache coherent, so they're making the grand claim that it's a "tri-core" (they "quote mark" that too, because it's not strictly true). This is despite it being a different design from the 1.5GHz parts. Why stop there? Why not claim it's a hexa-core or some other arbitrary number including all the other ARM cores in it?

I bet nobody even uses it in this configuration. Why would you want a third core which runs at a fraction of the speed of the other two? What happens when your thread is scheduled onto it? Why wouldn't you just use it as an offload core like everyone else on the planet does, and save the scheduling overhead compared to the marginal gains? Why not just run the other cores at low speed when idle? Is their MIPS-per-milliwatt rating so bad they actually need the third core for decent idling?

There are good reasons nobody else has done this. It's not because nobody has thought of it.

Re:This is such bullshit (1)

fatphil (181876) | about 4 years ago | (#33682426)

"Why would you want a third core which runs at a fraction the speed of the other two"

Presumably because it's a simpler core with fewer optional modules (Jazelle, (vector) floating point, Thumb modes, ...), a much lower transistor count, smaller caches, less leakage, and lower-voltage OPPs, so that at 600MHz it consumes less than 600/1500 of what the big Cortex cores use.

But that's a WSITD.

Re:This is such bullshit (1)

PCM2 (4486) | about 4 years ago | (#33678984)

"Triple core" as they describe it is pretty standard stuff in the embedded/mobile world. You have one or two main "application" cores, and one or more I/O processing cores doing DSP, graphics, data processing, low rate data moves, etc.

"One or two" isn't the same as three. This chip has three application cores and six I/O or DSP cores like you describe.

Re:This is such bullshit (1)

pslam (97660) | about 4 years ago | (#33679846)

"One or two" isn't the same as three. This chip has three application cores and six I/O or DSP cores like you describe.

It has two symmetric cores and a third of a different design which just happens to comply with the same spec (ARMv7) and be cache coherent. That doesn't make it "tri-core". It's arbitrary how many cores they pick to claim. Hell, NVidia was claiming Tegra 2 was the world's first 7-core processor a while back, because they simply counted the number of cores. You could probably run the OS on most of those cores, but you wouldn't. I suspect all customers of this Marvell chip will reject it for this purpose too, because they'll spot it for what it is: not a true tri-core.

Re:This is such bullshit (1)

owlstead (636356) | about 4 years ago | (#33681134)

Personally, I read it as a three-core chip where one of the cores is the controller (since the others can be powered down). I'm not certain of it, though, since there is just not enough information about it on the internet.

Furthermore, I think their press release says they are sampling to OEMs. That does not sound like vaporware, even though it is not produced in quantity just yet. It does normally mean that their processor design is, for the most part, done.

But I'm probably completely wrong about this, since you can read a hell of a lot /from just a few fuckin' quotation marks/.

Re:This is such bullshit (1)

PCM2 (4486) | about 4 years ago | (#33681570)

Well look, maybe it's not as revolutionary as the press release makes it sound. It's probably best seen as an incremental innovation in a very crowded market. But if you look at it another way, the ARM market is so crowded that it really wouldn't make any sense to put out a new chip that wasn't actually innovative in some way. If it's not cheaper, it has to be better, or else OEMs will pick someone else's chip -- it's not like there's any shortage of them.

Why ARM7 not ARM9? (1)

MobyDisk (75490) | about 4 years ago | (#33678894)

Why do I keep seeing articles about new super-powerful ARM7 chips when ARM9 has been out for a long while? Even my Nintendo DS has an ARM9, so I can't imagine that ARM9 is too big or complicated or inefficient.

Re:Why ARM7 not ARM9? (2, Informative)

Halo1 (136547) | about 4 years ago | (#33679408)

ARM is very good at confusing numbering schemes. There are basically two separate numbers: the architecture version and the cpu model number. The ARMv7 (note the *v*) in the article is about the architecture version (ARMv7 is currently the latest version), while the ARM9 you are talking about is a core that implements the ARMv5 instruction set. See http://en.wikipedia.org/wiki/ARM_architecture#ARM_cores [wikipedia.org] for a list of ARM cores and the corresponding architecture version they implement.

Re:Why ARM7 not ARM9? (1)

MobyDisk (75490) | about 4 years ago | (#33680274)

Thank you. That makes a lot more sense now.

_C_PU, Duh! (1)

rawler (1005089) | about 4 years ago | (#33679370)

With only two processing units, you'd have no CENTRAL processing unit, duh!

Coincidentally, in other news... (1)

hahn (101816) | about 4 years ago | (#33679460)

Blackberry will be announcing [engadget.com] a new tablet supposedly powered by a Marvell chip. Looks like we might see this chip in action soon enough.