
The Fight Against Dark Silicon

timothy posted more than 3 years ago | from the can't-we-all-just-get-along? dept.


An anonymous reader writes "What do you do when chips get too hot to take advantage of all of those transistors that Moore's Law provides? You turn them off, and end up with a lot of dark silicon — transistors that lie unused because of power limitations. As detailed in MIT Technology Review, researchers at UC San Diego are fighting dark silicon with a new kind of processor for mobile phones that employs a hundred or so specialized cores. They achieve an 11x improvement in energy efficiency by doing so."


Darkies (-1)

Anonymous Coward | more than 3 years ago | (#35982256)

Why is it dark silicon they fight against? This represents the struggle of the black man to overcome racial prejudice and retake the word "nigger". The parallels are deep, man.

Re:Darkies (3, Funny)

Anonymous Coward | more than 3 years ago | (#35982446)

Why is it dark silicon they fight against? This represents the struggle of the black man to overcome racial prejudice and retake the word "nigger". The parallels are deep, man.

Exactly. Why do you think green olives are in glass jars and black olives are in tin cans? So the black olives can't look out. It's subliminal racism I tell you.

Donald Trump - Insperational White Man (-1)

Anonymous Coward | more than 3 years ago | (#35982530)

I agree with Donald Trump. The only way a nigger should ever go to Harvard is through the delivery entrance.

A black man wouldn't know what to do in court, if not for his white lawyer. What's a dark mystery like Barack Obama doing heading the Harvard law review.

It must be affirmative action. Otherwise that nigger would be President and have a birth certificate by now!

Re:Donald Trump - Insperational White Man (-1, Troll)

sulphurlad (772436) | more than 3 years ago | (#35982748)

God damn fucking white people, give him a little cock and they build the biggest guns to make up for it............. Racist Bitches.... it's ok though, i just keep fucking your woman..... they love this 'nigger' cock.

Re:Donald Trump - Insperational White Man (-1)

Anonymous Coward | more than 3 years ago | (#35984268)

God damn fucking white people, give him a little cock and they build the biggest guns to make up for it............. Racist Bitches.... it's ok though, i just keep fucking your woman..... they love this 'nigger' cock.

Ha ha ha. You think youre making us mad. When you go for our fat women really we don't mind. If you think a nasty pasty fat white bitch is some kind of special trophy just because she's white well that's even funnier since its kind of racist on your part. But any time I see those fat bitches with a man he's always a nigger. Anytime I see them waddling along and pushin a baby carriage the baby is always half black. So yeah you guys have fun with that.

huh ? (0)

Anonymous Coward | more than 3 years ago | (#35982276)

Yeah, what a great idea. Just like the PS3 with its specialized processors. 100+ should be easy to program for... NOT.

Re:huh ? (2)

the_humeister (922869) | more than 3 years ago | (#35982326)

From the article, it seems like the processor usage would be transparent such that you don't need to explicitly target each processing element directly.

Re:huh ? (1)

DurendalMac (736637) | more than 3 years ago | (#35983066)

Well, unless you really, really like assembly. Then you're probably a masochist and this would be right up the ol' alley.

That's not the solution, this is (5, Informative)

thisisauniqueid (825395) | more than 3 years ago | (#35982340)

Language support for ubiquitous and provably thread-safe implicit parallelization -- done right -- is the answer to using generic dark silicon, rather than building specialized silicon. See The Flow Programming Language, an embryonic project to do just that: http://www.flowlang.net/p/introduction.html [flowlang.net]

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35982374)

#include <stdio.h>

int main(void)
{
        char x = getc(stdin);

        if (x == 1)
        {
                printf("one");
        }
        else
        {
                printf("somethingelse");
        }
        return 0;
}

Are we done?

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35982632)

That isn't the solution either. Function getc() returns an int, not a char.

Re:That's not the solution, this is (2)

oliverthered (187439) | more than 3 years ago | (#35982512)

programmer-safe language.

That's just asking for trouble; that's like saying a keyboard is safe from illiterate people because it has letters printed on the keys.

Re:That's not the solution, this is (2)

FatdogHaiku (978357) | more than 3 years ago | (#35982706)

...that's like saying a keyboard is safe from illiterate people because it has letters printed on the keys.

Sadly, that statement is true. An illiterate person will shy away from a keyboard, an on-screen (TV) menu, a newspaper, etc. the same way someone who is broke is embarrassed by the sight of a checkbook or wallet... it becomes a reflex. I know someone who is a good intuitive mechanic, but somehow managed to get to adulthood with less than third-grade reading and writing skills. Left to himself, a typical 5-page job application takes a couple of hours and many phone calls to complete. Now he has a 2-year-old son, and it is beginning to dawn on him that by the time the son is 8 or 9 he will be left in the dust... I can only hope he chooses to grow rather than try to retard the advancement of his son...

Re:That's not the solution, this is (2)

oliverthered (187439) | more than 3 years ago | (#35982732)

Cats are illiterate; they walk all over the bloody keyboard causing all kinds of havoc.

"I know someone who is a good intuitive mechanic, but somehow managed to get to adulthood with less than third grade reading and writing skills.",
quite possibly down to the way that he learns things (ergo... schools are crap)

I have/had that problem, in that language is generally poorly designed and people like to fuck with other people's heads. But I worked out how they do that now and it kind of, mostly, started to sort itself out.

Re:That's not the solution, this is (1)

Anonymous Coward | more than 3 years ago | (#35982800)

My Dad was a bit like that; Mum introduced him to science fiction and made him read enough to get hooked on the story. He now has no problem with reading.
It took me longer than most kids to pick up reading, so she used the same technique on me; a couple of years later my reading comprehension was far above my age group.
Writing and spelling have never really caught up, though. I still have problems with those at 30.

Re:That's not the solution, this is (1)

Kjella (173770) | more than 3 years ago | (#35983770)

Left to himself, a typical 5 page job application takes a couple of hours and many phone calls to complete.

Not so many phone calls, but job applications can take me a while. The "spray and pray" variety may be useful if you're unemployed, but if you already have a job and it's one of those rare opportunities, I could easily spend 2 hours on one -- not because of language problems, but to make the best possible application for the position. It's usually time well spent.

Re:That's not the solution, this is (2)

hairyfeet (841228) | more than 3 years ago | (#35983606)

Not to mention, what is so bad about "dark silicon" (ooh, scary name) anyway? That is how AMD Radeon chips deal with their huge numbers of stream processors: just turn off the ones that aren't being used to save power, and kick them on when you need the muscle. Or how both AMD and Intel use the thermal headroom from turning off unneeded cores to allow a speed boost with Turbo Core.

Personally I'd rather have a device with plenty of "dark silicon", so that it can kick everything on long enough to finish big jobs quickly and then drop back down, than a device with tons of specialized chips all sucking power -- especially if they don't turn off when not needed (thus making them dark silicon anyway) and constantly trickle away juice.

Besides, didn't we go through this in the '80s with machines like the Amiga, in the '90s with the Sega Saturn, and in the '00s with the PS3? Didn't they all turn out to be more of a PITA to program? And reading TFA, it seems they're saying that sometime in the future you'll hit a wall on heat transfer, and the lower down the nanometer chain you go, the worse things will get. Well, when we get to that point, wouldn't the smart answer simply be to stop trying to go lower?

Re:That's not the solution, this is (2)

Samantha Wright (1324923) | more than 3 years ago | (#35982586)

That would be so much better if it weren't in the "early design" stage. Their "no garbage collection" plan seems particularly worthwhile.

Re:That's not the solution, this is (5, Interesting)

Anonymous Coward | more than 3 years ago | (#35982622)

Uuum, no need to learn some obscure weird language that doesn't even exist yet, when you can learn a (less) obscure weird language that already exists. ;)

Haskell already has provably thread-safe implicit parallelization. In more than one form, even. You can just tell the compiler to make the resulting binary "-threaded". You can use thread sparks. And that's only the main implementation.
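
To make the sparks concrete, here is a minimal sketch using the standard Control.Parallel module from GHC's parallel package (naive Fibonacci chosen purely as an illustration):

import Control.Parallel (par, pseq)

-- Evaluate the two recursive calls in parallel: `par` sparks x for
-- evaluation on another core while `pseq` makes sure y is computed first.
parFib :: Int -> Int
parFib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = parFib (n - 1)
    y = parFib (n - 2)

main :: IO ()
main = print (parFib 30)

-- Build with: ghc -threaded -O2 ParFib.hs
-- Run with:   ./ParFib +RTS -N4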

Plus, it is a language of almost orgasmic elegance at the forefront of research that is still as fast as old hag grandma C and its ugly cellar mutant C++.

Requires the programmer to think on a higher level though. No pointer monkeys and memory management wheel reinventors. (Although you can still do both if you really want to.)

Yes, good sir, you can *officially* call me a fanboy.
But at least I'm a fan of something that actually exists! ;))

(Oh, and its IRC channel is the nicest one I've ever been to. :)

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35982890)

old hag grandma C and its ugly cellar mutant C++.

In that context, what would C# and Java be?

Re:That's not the solution, this is (4, Funny)

DurendalMac (736637) | more than 3 years ago | (#35983092)

Java is a lumbering, bloated behemoth that everyone seems to know, but far less know well. C# is what happened when Microsoft knew Java in a Biblical sense.

Re:That's not the solution, this is (1)

Nimatek (1836530) | more than 3 years ago | (#35983306)

You're right about Haskell being a beautiful language, but it is not as fast as C/C++. Even Java is usually faster. It's still pretty fast for a declarative language and has a C interface for when you need to speed up certain parts of code.

Re:That's not the solution, this is (2)

m50d (797211) | more than 3 years ago | (#35983462)

You're right about Haskell being a beautiful language, but it is not as fast as C/C++.

Depends on the problem. My previous company found the Haskell proxy we wrote for testing could handle 5x the load of the best (thread-based) C++ implementation.

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983766)

That simply shows that it doesn't matter if a compiler produces the world's fastest binaries if the language and the environment make finding performance bottlenecks too difficult.

Re:That's not the solution, this is (1)

Intron (870560) | more than 3 years ago | (#35985100)

You're right about Haskell being a beautiful language, but it is not as fast as C/C++. Even Java is usually faster. It's still pretty fast for a declarative language and has a C interface for when you need to speed up certain parts of code.

Who cares? CPUs are 1000X as fast as they were 12 years ago, but I/O speed has barely changed. There are no CPU-bound problems anymore.

The only thing that matters is programmer efficiency. It takes 5X as long to write C code as to solve the same problem in a modern language.

Re:That's not the solution, this is (1)

serviscope_minor (664417) | more than 3 years ago | (#35983316)

Plus it is a language of almost orgasmic elegance on the forefront of research that still is as fast as old hag grandma C and its ugly cellar mutant C++.

People always claim this, and always against C and C++. It's essentially never true except for a) FORTRAN and b) occasional synthetic benchmarks. While it is undeniably elegant, the lack of for-loops is anything but elegant in scientific computation, image processing, etc.

Does Haskell allow you to parameterize types with integers yet? It didn't last time I looked and that is one feature of C++ I really couldn't live without.

for vs. map (1)

tepples (727027) | more than 3 years ago | (#35984052)

the lack of for-loops is anything but elegant in scientific computation, image processing etc.

What's the difference between imperative "for" and functional "map" for iterating through a collection? Python has both, and I end up using generator expressions (which use syntax not unlike "for" and the semantics of "map") at least as often as an ordinary for-loop.
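
Since the subthread is about Haskell, the same contrast in Haskell terms (a sketch of my own, not from the post): "map" states only the transformation, while the hand-written loop also pins down a traversal with per-step sequencing, which is exactly the order-dependence an implicit parallelizer has to rule out.

squares :: [Int] -> [Int]
squares = map (\x -> x * x)

-- The explicit "loop" version: same result, but written as a step-by-step
-- traversal that carries the rest of the list through each step.
squaresLoop :: [Int] -> [Int]
squaresLoop []       = []
squaresLoop (x : xs) = x * x : squaresLoop xs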

Re:That's not the solution, this is (1)

Jimbookis (517778) | more than 3 years ago | (#35983828)

Haskell reminds me of glossolalia. Every time I take a look at it all I see is gobbledygook with the proponents of it claiming they have seen God.

Re:That's not the solution, this is (1)

tepples (727027) | more than 3 years ago | (#35984040)

Haskell [...] is as fast as old hag grandma C and its ugly cellar mutant C++.

As I understand it, purely functional languages use a lot of short-lived immutable objects and therefore generate a lot more garbage than languages that rely on updating objects in place. If your target machine is a handheld device with only 4 MB of RAM, this garbage can mean the difference between your design fitting and not fitting. And for a design on the edge of fitting, this garbage can mean the difference between being able to keep all data in RAM and slowing down to read the flash over and over.

Re:That's not the solution, this is (1)

JamesP (688957) | more than 3 years ago | (#35984382)

I like Haskell but it has its warts.

The main problem of Haskell is going "full functional", with monads, etc. Monads are very difficult to understand and master.

Still, I think Haskell is much closer to "the solution" than Lisp, for example. (Or maybe Scala gets better.)

Not to mention it's great to play with Hugs or GHC's interactive console (GHCi).

Re:That's not the solution, this is (1)

TheRaven64 (641858) | more than 3 years ago | (#35985504)

The main problem of Haskell is going "full functional", with monads, etc. Monads are very difficult to understand and master.

Monads are what makes Haskell interesting. Take a look at some of the stuff like the STM implementation from Simon Peyton-Jones's group, for example. Functional programming without monads is just imperative programming with a bunch of irritating and pointless constraints.

Re:That's not the solution, this is (1)

geekpowa (916089) | more than 3 years ago | (#35982722)

No need to reinvent the wheel. Plenty of stuff out there based on the functional programming model, which by design can be set up to parallelise well. I know some folk messing around with this: not my particular area of interest, but it demonstrates that this is a well-understood problem space, with a lot of clever people having already committed a lot of hours of brainwork over long periods of time to progress solutions. Mercury Programming Language [wikipedia.org]

Re:That's not the solution, this is (1)

istartedi (132515) | more than 3 years ago | (#35982768)

What's your plan of attack on GC? Reference counting doesn't pause, but fails if you create cyclic references. Mark-and-sweep doesn't have that problem, but creates the dreaded pause. The state of the art, AFAIK, is to collect recently created objects early (generational GC). There are heuristics to avoid a full mark-and-sweep, but AFAIK there aren't any airtight algorithms.

Now I wonder, is it possible to do a static analysis on a parse tree for some language and determine whether or not it could create cyclic references at runtime? I'm inclined to think that it's generally impossible... but I'm not even sure how to set that equation up. At any rate, rejecting a program that creates cycles would be cheating. Also, there might be some valid reason to create a ring-like data structure.

Anyway, good luck with your project. It reminds me a lot of the independent language development project I hack on, though I'm less concerned with parallelism. I admire the Erlang message-passing and lightweight process approach for that.
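
To make the cycle failure concrete, a small Haskell example (mine, not the poster's) of exactly the legitimate ring structure mentioned above:

data Node = Node { label :: Int, next :: Node }

-- Two nodes that reference each other ("tying the knot").
ring :: Node
ring = a
  where
    a = Node 1 b
    b = Node 2 a

-- Naive reference counting can never free a or b: each holds the other's
-- count above zero even after the last outside reference is gone. A tracing
-- (mark-and-sweep) collector reclaims both.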

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983262)

It's possible to design a language that doesn't allow cyclical data structures, or allows only limited cyclical structures (which can easily be GC'd without overhead). This is possible in a single-assignment language with borrow/own semantics. A reference passed to a function is considered either borrowed (the function can't capture it, only read it or pass it to another function taking a borrowed reference) or owned (the passed reference is considered consumed, and the function can read, modify or capture it).

A limited cyclic structure is one that supports cyclical structures within an object but not between several objects. In this case GC can be done whenever the object isn't referenced anymore (ignoring the internal cycles).

-- Megol

Re:That's not the solution, this is (1)

istartedi (132515) | more than 3 years ago | (#35984584)

It took me a bit longer to parse your answer than the one further down.

As long as I've been studying languages, this is the first I've heard of the term "single assignment". After looking at the wiki for it, it seems to be nothing more than a consequence of being "purely functional". Maybe that's why I haven't heard it until now, the latter term being quite common in things I've read. Borrow/own is obvious enough.

I'm not sure if limiting yourself to purely functional programming qualifies as a "cheat" or not. Every practical "functional" programming language I've looked at has some kind of hack to let you do non-functional programming. Some even have object systems (CLOS, for example). It seems to be human nature to not do purely functional programming. The real world is not purely functional. Some of the things you need to interact with the real world (like Monads) are difficult to grasp and/or not explained well. In some ways that's a good thing--it means there's still interesting work for language designers.

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983732)

Not to be a dick, but the phrase in your sig should be, "for all intents and purposes". I'm definitely not a grammar nazi, hell mine ain't all that great, but I figured it would be nice to let ya know.

Re:That's not the solution, this is (2)

Jesus_666 (702802) | more than 3 years ago | (#35983888)

Whooosh. The whole sig is a collection of language abuses commonly seen on the internet.

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983870)

> Now I wonder, is it possible to do a static analysis on a parse tree for some language and determine whether or not it could create cyclic references at runtime?

You can take the conservative approach and look for cycles in the type system. If an object of type A can only contain references to objects of type B and objects of type B can only contain references to objects of type C and so on, then there can't be any cycles.

And it doesn't have to be all-or-nothing. If you can identify specific types which don't allow cycles, you can prune the garbage collection at that point. The net result is that garbage collection may only need to traverse a fraction of the total data.
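
A sketch of that conservative type-based argument in Haskell (illustrative types of my own, not from the post): if the "may reference" relation between types is acyclic, no value cycle can exist at runtime.

data C = C Int    -- C holds no references
data B = B C      -- a B may only reference a C
data A = A B      -- an A may only reference a B

-- Every A is a finite chain A -> B -> C, so a collector could reclaim these
-- by reference counting, or prune a mark-and-sweep traversal at them.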

Re:That's not the solution, this is (1)

istartedi (132515) | more than 3 years ago | (#35984506)

Now this is what Slashdot should be. Your answer is a bit clearer to me than the "borrow/own semantics" the other poster described. Since Haskell is "type oriented", would anybody know if it does this kind of analysis?

Re:That's not the solution, this is (1)

thisisauniqueid (825395) | more than 3 years ago | (#35985036)

You can create memory in "arenas" (an overloaded word, unfortunately) where the entire arena is freed at once. E.g., you can create a graph as a collection of nodes that can have arbitrary inter-connectivity. When the last reference to anything in the collection is dropped, the whole collection is freed. This will cover a lot of cases with circular deps.

Re:That's not the solution, this is (1)

parlancex (1322105) | more than 3 years ago | (#35982792)

I checked it out and it sounds really interesting, but at the moment all they seem to have is the idea. Not to say it isn't a very good idea, but I think the main challenge will be making such a language intuitive and human readable whenever they get that far.

Re:That's not the solution, this is (-1)

Anonymous Coward | more than 3 years ago | (#35982796)

I think it's safe to say anything with "flow" in the name can be disregarded as hippy nonsense -- this is certainly no different. Let's fight tech issues with a language solution -- never mind the non-existent nature of the cited "language" -- when this is flawed for the simple reason that IT'S A HARDWARE ISSUE CAUSED BY OVERHEATING FROM USING THE THING. Simply changing the language doesn't change how much data goes through the chip, unless you're talking about throttling it -- in which case it's just as worthless as the solution in this article (making more cores on the chip and cutting them off when they overheat so you can use others on the same chip does nothing to stop the "dark silicon" issue). You need better thermal conductance to remove the heat, or lower-power transistors; otherwise you have exactly what this claims to fix, just in a more ordered manner (probably the latter, since people tend to dislike being burnt or having their batteries explode from too much heat). For fuck's sake -- how is the parent post marked +5 Informative? Where's the -5 Uneducated Self-Righteous Hippy Propaganda button?

Re:That's not the solution, this is (1)

Interoperable (1651953) | more than 3 years ago | (#35983008)

I don't think that's an answer to the same problem. The problem is that it simply isn't possible to keep shrinking a general-purpose processor and still power all of it, due to power dissipation. You can parallelize all you want; you still might not hit the performance for specific tasks that optimizing the processor architecture itself will. Quite clever, if chips customized to particular phones can be made cost-effective.

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983146)

Your post suggests you don't even know what dark silicon is. However, I do admire your attempt to hijack the link with your own vaporware pet project!

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983168)

The "manifesto" seems like a lot of words for saying the language is single assignment. It also mentions a lot of implementation stuff that's not really language related ("no stack"? As an example C semantics can be supported by a variety of methods without requiring a stack).

-- Megol

Re:That's not the solution, this is (0)

Anonymous Coward | more than 3 years ago | (#35983188)

How does this solve the problem that your processing unit becomes too hot? It just lets you do parallelisation right. While that's important, TFS describes a problem with heat.

(fun fact: CAPTCHA = flowed)

Re:That's not the solution, this is (1)

NoSig (1919688) | more than 3 years ago | (#35983966)

That helps, but it will still require powering up all the silicon you are using. In this approach you only power up the part of the special-purpose silicon you need, but in return get much greater speed out of that piece of silicon. This is more power-efficient if you need some of what is on the chip.

Not required.. (5, Informative)

willy_me (212994) | more than 3 years ago | (#35982382)

The CPU in a cell phone does not use much power so there is little to gain. Now if you can make more efficient radio transceivers - that would be something. Or the display, that would also significantly reduce power consumption. But adopting a new, unproven technology for minimal benefits.... That's not going to happen.

Re:Not required.. (1)

ChronoReverse (858838) | more than 3 years ago | (#35982488)

Agree 100%

The two biggest power draws are the screen and the radios. Those are what need to be made more efficient.

With proper GUI design and AMOLED screens, the screen power draw can be drastically reduced but things like the 3G radio drain power like mad if the signal isn't perfectly strong (while 4G radios gobble power under all circumstances).

Re:Not required.. (2)

gman003 (1693318) | more than 3 years ago | (#35982542)

Hell, even in laptops that's the case. I've got a high-end (well, medium-end now, but two years have gone by) gaming laptop. I've noticed that the biggest power draw is unquestionably the display: just turning the brightness down triples my battery life. Then comes turning off the wifi/bluetooth (there's a handy switch to do so), which gives me an extra half-hour. And this is with the CPU and graphics card running Crysis on normal gaming settings. Setting the CPU to half the clock speed barely gives me five more minutes.

Re:Not required.. (1)

FishOuttaWater (1163787) | more than 3 years ago | (#35982644)

So, what we really need to be working on is connecting the output directly to your brain and skipping all that wasted light.

Re:Not required.. (1)

artor3 (1344997) | more than 3 years ago | (#35982742)

Well, some companies are working on HUD goggles for personal computers, so I guess that's a step in the right direction, even if it does make you look like a total dork.

Re:Not required.. (0)

Anonymous Coward | more than 3 years ago | (#35982808)

even if it does make you look like a total dork

Come now, that ship sailed long ago.

Re:Not required.. (1)

SuricouRaven (1897204) | more than 3 years ago | (#35983058)

WANT!

Staring into someone's eyes, you'll be able to see the porn they are viewing reflected off their cornea.

Re:Not required.. (0)

Anonymous Coward | more than 3 years ago | (#35983846)

Just think of the possibilities during speed dating. No need for that later discussion about the funny and possibly magical clothes the other person is about to wear during an intimate moment..

Re:Not required.. (1)

Jesus_666 (702802) | more than 3 years ago | (#35983916)

Of course CSI Miami would do this through a surveillance camera, zooming into the reflection and determining the eye color of the porn actress.

Re:Not required.. (0)

Anonymous Coward | more than 3 years ago | (#35983980)

The CPU/GPU uses far more power @ 100% than wifi... the difference is you can't turn it off. And as you discovered, slowing it down doesn't have much effect. Run SETI@Home while playing a game and see how long your battery lasts if you need to see this to believe it. That's what the article is talking about in regards to dark silicon... in modern laptops, it's about turning off as much of the CPU as possible when in an idle state (which is most of the time) and using the CPU for graphics instead of more power-hungry external GPUs.

As for phones, I've discovered that when my Droid is idling, it can go for almost 2 days without a charge, but if I'm using the GPS while playing Pandora, I can drain the battery in about 3 hours. I think everything has an effect on this one... CPU usage is high, the screen stays on for navigation, and Pandora keeps the radios constantly in use.

Re:Not required.. (1)

artor3 (1344997) | more than 3 years ago | (#35982640)

Unfortunately, there's a pretty fundamental problem with making more efficient transceivers. They have to operate, by their very nature, at high frequencies. High-frequency signals inevitably draw more current, because they see capacitors as having a low impedance. Basic EE stuff: Z = 1/(jωC). And how do we generate the radio frequency? With a VCO that invariably involves big capacitors (big for an IC, at any rate). Those VCOs typically end up drawing at least 50-60% of your operating current.

Another third or so is the amplification, no real way around that unless you find a way to cool the universe a couple hundred Kelvin so we can lower the noise floor. Then you have all the DACs and ADCs and comparators and whatnot that do all the mod/demod and soak up the rest of your current, but they're pretty small potatoes.

Build a better VCO and the world (or at least Silicon Valley) will beat a path to your door.
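
To put rough numbers on that formula (illustrative values, not from the post): at a 1 GHz carrier driving 1 pF of on-chip capacitance,

    |Z| = 1/(2*pi*f*C) = 1/(2*pi * 1e9 Hz * 1e-12 F) ≈ 159 ohms

so each tenfold increase in frequency cuts the impedance tenfold, and the current drawn for a given voltage swing rises to match.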

Re:Not required.. (1)

timeOday (582209) | more than 3 years ago | (#35982760)

I wonder if you could make a high gain steerable antenna to track the dish on the cell tower while you're transmitting? Or, if it *really* tracked accurately, how much power would be needed to transmit the signal over the same range with a laser?

Re:Not required.. (1)

artor3 (1344997) | more than 3 years ago | (#35982828)

Lots of problems with the laser... how does the phone know where the closest tower is, especially if you turn it off while flying across the country or the world? What happens if something gets in the way of the laser? Can you even use it inside?

A directional, directable antenna might be possible, but it still presents some problems. You'd need a moving part with three axes of motion that can respond as quickly as a person swings a phone from one ear to another, and you'd need the motors behind it to operate on low enough current for it to be worthwhile. And moving parts tend to break. Maybe there are directional antennae that can be steered electrically (as opposed to mechanically).... that's really outside the scope of my knowledge.

A better solution, which I think we'll have to move to regardless as the wireless spectrum becomes more crowded, is to focus on short range wireless with ubiquitous minitowers connected up with fiber. Spend as little time in the air as possible. Less signal is lost, fiber transmits faster anyway, and you only need to compete for spectrum with people in your immediate vicinity. Rural areas would still be problematic though.

Unfortunately, that still leaves the bulk of the power getting used up in your LO. Maybe some physicist will come up with something awesome, like an electrically-controllable phononic oscillator or who knows what. But as long as we're using the current method, I don't see us getting more than a 30% improvement in energy efficiency out of transceivers.

Re:Not required.. (1)

SuricouRaven (1897204) | more than 3 years ago | (#35983072)

You can do a directional, steerable antenna with no moving parts using software radio and an array. But it'd still be impractical for phones -- too big, and it'd take even more power. Perhaps at the base stations it might be of more use. The better the SNR you receive by excluding sources of interference, the less power the phone needs to transmit. With an array and the right software, a base station could have a thousand virtual directional antennas, all turning to track individual handsets in real time.

Re:Not required.. (1)

willy_me (212994) | more than 3 years ago | (#35983128)

I wonder if you could make a high gain steerable antenna to track the dish on the cell tower while you're transmitting?

What you described could be done without physically moving the antennas. I read a paper on it a few years ago (sorry, no link) where some researchers built an antenna on a chip that consisted of hundreds of different physical antennas. By applying the signal to different antennas at different times, a directional beam can be formed, much like with a Yagi. But unlike a Yagi, the beam can be sent in any direction; one just has to alter the timing. I believe it is similar to how modern RADAR works.
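
A back-of-the-envelope sketch of the timing involved (my own toy model in Haskell; the names and numbers are illustrative, not from the paper recalled above): for a uniform linear array, steering the beam to angle theta just means delaying element n by n*d*sin(theta)/c.

-- Per-element firing delays (seconds) to steer a uniform linear array
-- toward angle theta (radians from broadside), with element spacing in metres.
steeringDelays :: Int -> Double -> Double -> [Double]
steeringDelays nElems spacing theta =
  [ fromIntegral n * spacing * sin theta / c | n <- [0 .. nElems - 1] ]
  where
    c = 3.0e8  -- free-space propagation speed, m/s

-- e.g. steeringDelays 8 0.075 (pi / 6): the eight firing delays for a
-- half-wavelength-spaced array at 2 GHz, steered 30 degrees off broadside.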

Re:Not required.. (1)

currently_awake (1248758) | more than 3 years ago | (#35985316)

Google "phased array antenna"; it's a non-mechanical directional antenna. I'd also suggest looking up ceramic resonators; they allow building much smaller radio transceivers with lower power use. If the VCO uses half the power, then don't use one; just have fixed frequencies using crystals (trade larger size for lower power).

Re:Not required.. (1)

Anonymous Coward | more than 3 years ago | (#35983212)

And how do we generate the radio frequency? With a VCO that invariably involves big capacitors (big for an IC, at any rate). Those VCOs typically end up drawing at least 50-60% of your operating current.

I can tell you from direct experience that the VCO (voltage controlled oscillator) used to generate the high frequency carrier is not the issue. The radio I'm messing with right now, in standby with the xtal and VCO running, draws 200uA; 50uA with just the xtal. The problems with radios are:

1) The demodulation circuitry is computationally expensive, though as dies shrink this becomes less of an issue.
2) Transmit power. Here you are up against a wall: you need to transmit at a high enough power that the far-end receiver has enough signal-to-noise to work with. Directional antennas at the cell tower help in a lot of ways.

And then you have the network stack, link management and things that deal with the ether.

Re:Not required.. (1)

artor3 (1344997) | more than 3 years ago | (#35985094)

What frequency are you working in? Obviously something like AM/FM radio, RFID, or TV will draw low current because the frequency is relatively low. But once you start getting up to the GHz range, there's no way a VCO could draw that little current, unless your entire radio had less than 30 fF of capacitance. Are you sure it's the RF VCO that's running, and not some IF one?

Re:Not required.. (2)

rrohbeck (944847) | more than 3 years ago | (#35982726)

The CPU in a cell phone does not use much power so there is little to gain.

Except when it's running Flash video or similar crap.

Re:Not required.. (1)

SuricouRaven (1897204) | more than 3 years ago | (#35983080)

I found two things will drain my phone quickly: Angry Birds and ebuddy. The latter I assume because it keeps the radio on continually.

Re:Not required.. (0)

Anonymous Coward | more than 3 years ago | (#35983166)

I think you kind of missed the point. The article is saying that without the ability to reduce voltage, power consumption will double with each chip generation, i.e. grow exponentially with time. So even if CPU power consumption isn't a problem now, it will be very soon. If you accept that power consumption must stay about the same, then we are left with no alternative but to turn off 1/2 the chip in the first gen, 3/4 in the next, 7/8 in the next, etc. So now we have to choose which 1/8 to run, which we probably want to be the 1/8 best suited to the task at hand. The article is suggesting just one way to do this.
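
In closed form, under the comment's own assumptions (constant chip power budget, no further voltage scaling), the usable fraction after n generations is

    f(n) = (1/2)^n, so the dark fraction 1 - (1/2)^n runs 1/2, 3/4, 7/8, ... for n = 1, 2, 3, ...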

Re:Not required.. (0)

Anonymous Coward | more than 3 years ago | (#35985126)

"The CPU in a cell phone does not use much power"

Except that users of smart phones are complaining about short battery life.

Not to mention that many other computing devices (mobile and non-mobile) would also benefit from a 10-fold increase in energy efficiency.

"minimal benefits"

wut?

Link to attached Paper about specialized cores... (3, Informative)

file_reaper (1290016) | more than 3 years ago | (#35982386)

http://cseweb.ucsd.edu/users/swanson/papers/Asplos2010CCores.pdf [ucsd.edu]

They call the specialized cores "c-cores" in the paper. I took a quick skim through it. C-cores seem like a bunch of FPGAs: they take stable apps and synthesize them down to FPGA cells on the fly, with help from the OS. The c-core-to-hardware chain has Verilog and Synopsys in it.

Cool tech; I guess they could add clock gating and all the other things taught in the classroom to further turn off these c-cores when needed.

cheers.

Re:Link to attached Paper about specialized cores. (0)

Anonymous Coward | more than 3 years ago | (#35982934)

Well, that makes some sense: when you implement functions in FPGA/silicon you usually get a 10X/100X reduction in the number of gates needed and power consumed compared to compiled code. I will point out, though, that "writing" FPGA code is 10X harder than programming in C/C++, let alone Java/C#/Python, so it's more likely that such cores would be used for OS code and drivers.

Nice idea... (1)

LongearedBat (1665481) | more than 3 years ago | (#35982414)

A couple of thoughts:

1. The common functionalities would surely include OS APIs, as they seem pretty stable. But would they include common applications such as social networking apps, office apps, etc.?

2. If a patch is necessary, then upgrading hardware might be a little tricky. This will become a serious issue with the invasion of malware.

ZX81 logic array 100% used... (1)

Anonymous Coward | more than 3 years ago | (#35982418)

The Sinclair ZX81 replaced fourteen of the chips used on the ZX80 with one big programmable logic array chip that was only supposed to have 70% of the gates programmed in it. However, Sinclair used up all the gates on the chip and it ran nice and hot because of that. I suppose that the design could have used two chips instead, leading to lots of dark silicon and a cost implication.

Benchmarks in June. (1)

ChunderDownunder (709234) | more than 3 years ago | (#35982426)

I realise OpenJDK's is a stack-based VM and Dalvik is register-based. But aren't they essentially mapping virtual machine instructions to hardware instructions? In a rudimentary manner this was tried a decade ago with Java. It was found that general-purpose processors would spank a Java CPU in performance, due to the way a VM interacts with a JIT instead of processing raw instructions.

[Aside: ARM does include instructions for JVM-like code -- Jazelle/ThumbEE. Can/does Dalvik even take advantage?]

The extent to which this idea can escape from a research lab depends on relative performance. Quite interesting from the power consumption aspect, though.
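
For reference, the stack/register distinction in miniature (a toy Haskell encoding of my own, not real JVM or Dalvik bytecode): the same statement a = b + c takes four stack-machine instructions but a single three-address register instruction.

data StackOp = Load Int | Store Int | Add   -- operands live on an implicit stack
data RegOp   = AddReg Int Int Int           -- dst, src1, src2 named explicitly

-- a = b + c, with a, b, c in slots/registers 0, 1, 2:
stackVersion :: [StackOp]
stackVersion = [Load 1, Load 2, Add, Store 0]

regVersion :: [RegOp]
regVersion = [AddReg 0 1 2]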

Re:Benchmarks in June. (1)

jensend (71114) | more than 3 years ago | (#35982612)

A quick Google search only turns up one serious discussion [mail-archive.com] about the possibility of a ThumbEE-oriented Dalvik. The only reply wasn't very optimistic about it, saying that a 16-cycle mode switch between ThumbEE and regular instructions makes it unlikely to be worth it.

More's the pity -- I really think the VM guys and the processor design folks need to get their heads together.

Re:Benchmarks in June. (1)

ChunderDownunder (709234) | more than 3 years ago | (#35982650)

Cheers. I'm assuming the original instructions were concocted for Sun's proprietary Java ME/SE embedded platforms -- AFAIK, support for none of them has made it into phoneME or OpenJDK.

Maybe if MIPS had "won" on phones we'd have greater synergy, e.g. the reverse of NestedVM.

If they can get my phone to last a week or more (1)

sandytaru (1158959) | more than 3 years ago | (#35982460)

- then I'll be impressed. Currently I sit at 3 days with very heavy usage, and 5-6 days with low to moderate usage. If this sort of multi-core stuff breaks the all-important one-week barrier, then it'll be a welcome technology.

Re:If they can get my phone to last a week or more (0)

Anonymous Coward | more than 3 years ago | (#35982490)

My Pre is more like 3 hours of heavy usage, one day of moderate usage, and maybe two if I don't use any of the smartphone functions or use it as a phone.

Re:If they can get my phone to last a week or more (0)

Anonymous Coward | more than 3 years ago | (#35984334)

Same with my Samsung Galaxy Captivate android phone. Terrible battery life.

Re:If they can get my phone to last a week or more (5, Insightful)

RobbieThe1st (1977364) | more than 3 years ago | (#35982872)

They can, they just don't want to. All they have to do is make it slightly thicker and double the size of the battery.
Heck, I want to see a phone where the battery is the back cover (like the old Nokia dumbphones), and which also has a small second battery inside, something that can power the RAM/CPU for 5 minutes.
Then you can just yank the dead battery and plug a new one in /without rebooting/.
It would also allow for multiple battery sizes: Want a slim phone? OK, use a small battery. Need two weeks of life? Use a large battery.

Easy solution.

Re:If they can get my phone to last a week or more (1)

Sloppy (14984) | more than 3 years ago | (#35984156)

They can, they just don't want to.

This is one of the great mysteries of the phone market, a situation where it seems to my ignorant amateur eyes that they're doing the same thing as the MPAA companies: saying, "No, we don't want your money. Fuck off, customers. Go find someone else to do business with."

Wouldn't a 2011 phone whose battery lasts as long as a 2006 phone sell like hotcakes? Is "slim" really all that "cool?"

I just want a new Nokia 5190 (0)

Anonymous Coward | more than 3 years ago | (#35985402)

As in, rolled out of the factory within the last few months, with the warranty starting to tick when I open the box. Hell, I'd be happy just to be able to buy new batteries for the one I have.

Re:If they can get my phone to last a week or more (1)

perryizgr8 (1370173) | more than 3 years ago | (#35983604)

Long ago I saw phones from Philips(!) that they claimed had a battery life measured in months. The main problem with your request is that most people are OK with 5-6 days of battery on a smartphone.

cost inefficient (1)

aepurniet (995777) | more than 3 years ago | (#35982730)

I guess using unused space is a good thing, but will it be cost-effective to make these huge low-nm chips? It might be more cost-efficient to include two larger-process chips. Also, batteries are always getting a little better (albeit very slowly). I think Android phones especially would benefit from more cores; there are hundreds of threads running on that OS with just a few apps open.

Drama (1)

SnarfQuest (469614) | more than 3 years ago | (#35982788)

Dark Silicon: Luke, I am your father.

Re:Drama (0)

Anonymous Coward | more than 3 years ago | (#35983922)

Luke, the lukewarm circuit outputs an agonized answer: "NOOoooP, that assertion holds no voltage. I just power gated!"

asynchronous design ? (1)

cats-paw (34890) | more than 3 years ago | (#35982874)

The claim is that this is the most power-efficient design route.

The problem is that the sophisticated tool sets you need for design and analysis just aren't there.

Of course, I've never been clear on why you couldn't just use the asynchronous design ideas and substitute very low clock speeds in place of disables or some such thing.

I'm not a digital designer, so I can't get too far into the details.

But what do you put in a specialized core? (4, Insightful)

Animats (122034) | more than 3 years ago | (#35982972)

Specialized CPU elements have been tried. The track record to date is roughly this:

  • Floating point - huge win.
  • Graphics support - huge win, mostly because graphics parallelizes very well.
  • Multiple parallel integer operations on bytes of a word - win, but not a huge win.
  • Hardware subroutine call support, such as register saving - marginal to negative. Many CPUs have had fancy CALL instructions that were slower than writing out the register saves, etc.
  • Hardware call gates - in x86, not used much.
  • Hardware context switching - in some CPUs, not used much.
  • Array instructions - once popular at the supercomputer level, now rare.
  • Compression/decompression (MPEG, etc.) - reasonable win, but may be more useful as part of a graphics device than a CPU.
  • List manipulation, LISP support, Java stack support - usually a lose over straight code.
  • Explicit parallelism, as in Itanium - usually a lose over superscalar machines.
  • Filter-type operations (Fourier transform, convolution, wavelets, etc.) - very successful, but usually more useful as part of a signal processing part than as part of a CPU.
  • Inter-CPU communication - useful, but very hard to get right. DMA to shared memory (as in the Cell) seems to be the wrong way. Infiniband, which is message passing hardware, is useful but so far only seen in high end machines.
  • Hardware CPU dispatching - has potential if connected to "hyperthreading", but historically not too successful.
  • Capability-based machines - successful a few times (IBM System/38 being the best example) but never made it in microprocessors.

A lot of things which you might think would help turn out to be a lose. Superscalar machines and optimizing compilers do a good job on inner loops today. (If it's not in an inner loop, it's probably not being executed enough to justify custom hardware.)

Re:But what do you put in a specialized core? (0)

Anonymous Coward | more than 3 years ago | (#35983074)

  • Hardware call gates - in x86, not used much.
  • Hardware context switching - in some CPUs, not used much.

Except with every system call on any machine that uses protected memory...

Given that system calls are somewhat expensive, the "not used much" is rather a feature.

Re:But what do you put in a specialized core? (1)

siride (974284) | more than 3 years ago | (#35985492)

I think he's referring to the x86 task feature, whereby the hardware would actually handle a context-switch instead of having the software set everything up manually. Neither Linux, nor Windows, nor the BSDs (and thus Mac OS X) use this feature, although the very earliest versions of Linux did. It's faster and less error-prone to do it in software.

Call gates are also not used, if they indeed ever really were. Old days: interrupts for system calls. These days: syscall or sysenter instructions, which are specially designed to transfer to ring 0 and do it fast.

Re:But what do you put in a specialized core? (0)

Anonymous Coward | more than 3 years ago | (#35983164)

Most of these efforts do not consider energy.

CPU power usage, really? (1)

SuperSlug (799739) | more than 3 years ago | (#35983718)

So here is the power usage breakdown from my Samsung Galaxy S running Froyo:

Display: 89%
Maps: 5%
Wifi: 3%
Cell Standby: 2%

So how is enabling "Dark Silicon" going to help the power usage on my phone when the display uses the vast majority of the power?

Dark Silicon? (2)

orkysoft (93727) | more than 3 years ago | (#35983812)

We can't see it, we can only detect it by its power draw, and it makes up 95% of your chips!

Re:Dark Silicon? (2)

Jesus_666 (702802) | more than 3 years ago | (#35983984)

No, we can detect its mass but it doesn't interact electromagnetically with the rest of the device.

Watch out for programmers (1)

Sloppy (14984) | more than 3 years ago | (#35984130)

"If you fill the chip with highly specialized cores, then the fraction of the chip that is lit up at one time can be the most energy efficient for that particular task,

You can't win, because when a performance hacker reads this, he thinks, "Ooh, such waste! I need to parallelize all my stuff to increase utilization. Light 'em up!"

I'm forced to ask (1)

RogueWarrior65 (678876) | more than 3 years ago | (#35984274)

Why do we need ever more powerful phones? I don't think people are going to want to run CFD, protein folding, or SETI@home on their phone.
On the other hand, if the phone consumed 11 times less power, you could go a few months without charging it, which would be good.

Re:I'm forced to ask (1)

currently_awake (1248758) | more than 3 years ago | (#35985344)

If you can minimize standby current, you could have a thermocouple generate power from your body heat to charge the phone, giving unlimited battery life.

What if... (0)

Anonymous Coward | more than 3 years ago | (#35985090)

a protocol changes. Hardware solutions are nice if the inputs and outputs never change. Why not many computers?

http://greenarrays.com/

CPU more like a database look up table of gates (0)

Anonymous Coward | more than 3 years ago | (#35985250)

Putting memory super small makes it possible to emulate a whole chip with a database lookup table, where each pass through a loop does nothing more than look up what the next gate would be on a real chip, tracing a virtual path through the chip.
My idea is that one would create chips that are massively parallel in their structure, an OS on top of them that is massively parallel in its structure, and programs on top of those that are massively parallelized.
The chips would be simpler, with massive lookup tables instead of many gates (just enough gates to run a loop and query a well-indexed database of the gates).
So you would have a loop, as I explained earlier, that based on its inputs searches the tiny memory database and emulates a NAND, AND, NOR, XOR, etc. on each pass through the loop; the chips would be simple, and you would have hundreds or thousands or millions of them running in parallel.
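
The gate-table idea can be sketched in a few lines of Haskell (a toy model built on the comment's assumptions; every gate is a table row holding an operation and the indices of its two inputs):

data Op = AND | OR | NAND | NOR | XOR

applyOp :: Op -> Bool -> Bool -> Bool
applyOp AND  a b = a && b
applyOp OR   a b = a || b
applyOp NAND a b = not (a && b)
applyOp NOR  a b = not (a || b)
applyOp XOR  a b = a /= b

data Gate = Gate Op Int Int  -- operation, index of input A, index of input B

-- "Trace a virtual path through the chip": walk the table in order, each
-- gate looking up earlier signals by index; one table lookup per pass.
evalNetlist :: [Bool] -> [Gate] -> [Bool]
evalNetlist inputs gates = drop (length inputs) signals
  where
    signals = inputs ++ [ applyOp op (signals !! a) (signals !! b)
                        | Gate op a b <- gates ]

-- A half adder: evalNetlist [x, y] [Gate XOR 0 1, Gate AND 0 1] == [sum, carry]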
