
Asynchronous Logic: Ready For It?

Hemos posted about 12 years ago | from the is-now-the-time dept.

Hardware 192

prostoalex writes "For a while academia and R&D labs have explored the possibilities of asynchronous logic. Now Bernard Cole from Embedded.com tells us that asynchronous logic might receive more acceptance than expected in modern designs. The main advantages, as the article states, are 'reduced power consumption, reduced current peaks, and reduced electromagnetic emission', to quote a prominent researcher from Philips Semiconductors. Earlier, Bernard Cole wrote a column on self-timed asynchronous logic."


New improved FP version 21.0 (-1, Troll)

Anonymous Coward | about 12 years ago | (#4495642)

Get your own!!!

Re:New improved FP version 21.0 (-1)

macksav (602217) | about 12 years ago | (#4495757)

interesting, but irrelevant. instead of hawking your pathetic fp shit, you should, instead, be humping a whale for jesus. asslicker.

alright, question... (2, Interesting)

Anonymous Coward | about 12 years ago | (#4495645)

so how long do you think this will take before it's implemented on any sort of a large scale?

Re:alright, question... (1, Funny)

Anonymous Coward | about 12 years ago | (#4496006)

The brain contains ten billion neurons, each with its own independent timing. Neighboring neurons can send pulsed inputs that provide relative time coordination to a given neuron. All of this amounts to a fairly reliable asynchronous logic system.
Considering that there are 6 billion humans on Earth, we already have an installed base rivaling that of currently deployed synchronous designs.

MiniMips, Philips Pager (5, Informative)

darn (238580) | about 12 years ago | (#4496247)

The largest asynchronous project (to my knowledge) is the MiniMIPS [caltech.edu], developed at Caltech in 1997 with 1.5 M transistors. It was modelled after the MIPS R3000 architecture.
The best-selling large-scale asynchronous circuit seems to be a microcontroller that Philips [philips.com] developed and used in a pager series.

Asynchronous Logic? (5, Funny)

scott1853 (194884) | about 12 years ago | (#4495651)

Isn't that when your boss gives you several conflicting ideas on how he wants a product to be implemented, all at the same time?

Re:Asynchronous Logic? (0)

Anonymous Coward | about 12 years ago | (#4495683)

yes

then it will be parallel async logic, because he
will expect all of them to be done at the same time.

Re:Asynchronous Logic? (5, Funny)

dubiousmike (558126) | about 12 years ago | (#4495902)

Scott,
This is your boss...

Please shut off that damn computer and get back to coding!

- Scott's boss

No (2)

Catskul (323619) | about 12 years ago | (#4496112)

That's synchronous dislogic

Re:Asynchronous Logic? (1, Funny)

Anonymous Coward | about 12 years ago | (#4496797)

Asynchronous Logic: Ready For It?

Part of me thinks I am ready for it.

Asink what? (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495653)

my logic isn't good enough to understand

I hope slashdot goes under (-1)

IAgreeWithThisPost (550896) | about 12 years ago | (#4495656)

and linux follows in a burning trail of death.
follow the titanic, lemmings!!

Some further information (-1, Redundant)

PhysicsScholar (617526) | about 12 years ago | (#4495663)

Here's some fairly interesting skiddaddle about the paradigm of an asynchronous machine -->

Asynchronous (also called event driven and self-timed) logic is defined here as logic that operates without the co-ordination of a global clock. That being said, here are today's Top 3 interesting details about this kind of computing.

1. When completion detection is used in asynchronous circuit design, the computation rate tends towards the average rate of the system components rather than the worst-case rate of components, as in clocked systems.
2. Because asynchronous components only begin processing data when it becomes available they will only consume dynamic power when doing useful work; as compared with a clocked system which consumes dynamic power on every clock cycle regardless of the work done. This reduces system power consumption which is especially important for portable equipment.
3. Asynchronous circuits are more modular because they rely only on local communication between components as compared to circuits with global clocking. The modularity claim leads to arguments that asynchronous circuits may be easier to design and formally verify.
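The completion detection mentioned in point 1 is commonly built from Muller C-elements, the canonical asynchronous primitive: the output changes only when both inputs agree, and otherwise holds its previous state. A minimal Python sketch of the behavior (the input trace is just an illustration):

```python
# Muller C-element: output follows the inputs only when they agree;
# otherwise it holds state. This is how an async circuit detects that
# two parallel paths have both finished.

def c_element(a, b, prev_out):
    if a == b:
        return a       # both inputs agree: output follows them
    return prev_out    # inputs disagree: hold previous state

out = 0
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    print(a, b, "->", out)  # output flips only after BOTH inputs flip
```

Note the hysteresis: after (1, 1) drives the output high, the mixed input (0, 1) leaves it high until both inputs return to 0.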

Jesus. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495710)

I want to say that you are a crappy troll but, you keep getting the moderators to fall for it so I guess that proves me wrong.

Still I just shake my head. Pathetic.

Re:Jesus. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4496008)

Thou shalt not criticize thy moderator, lest he modth thee down as offtopic.

Re:Some further information (4, Informative)

Anonymous Coward | about 12 years ago | (#4495796)

"PhysicsScholar", don't karma-whore and post plagiarizing. You copy-and-pasted from http://www.cis.unisa.edu.au/~cisdak/nResearch/Asyn c.html [unisa.edu.au] . This gets you moderated "Redundant."

It'll be enlightening for people to just go there and read your information in context anyway, plus there are links to papers and stuff. You shoulda posted the link!!

Re:Some further information (-1, Offtopic)

PhysicsScholar (617526) | about 12 years ago | (#4495820)

I seem to have forgotten the quotes.

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4496577)

You seem to have "forgotten the quotes" [bbc.co.uk] there, too.

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495835)

what you say is all true, but how does it make it redundant? the post is informative and it is a good post, why do you care if he gets karma for it? copy-n-pasting is usually rewarded here.

don't be bitter because he's working the system because that's how the system is designed.

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495867)

not to mention that by modding it down, you are depriving the readers of reading a good post in your zeal to stop his karma whoring.

so who really loses in this fight? the readers.
the winners? the jealous zealots.

Re:Some further information (1, Informative)

Milican (58140) | about 12 years ago | (#4495908)

Because posting without giving credit to original author is wrong.

JOhn

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495942)

this aint english lit class here, it's fuckin slashdot. you are abusing the moderation system if you mod down posts just because of who the author is, and not looking at the actual content of the post.

if he doesn't credit it, so fucking what? the moderation guidelines don't say you must provide sources or credit your material. you are using criteria to mod that isn't there.

btw, i like how these discussions about the stupid moderation and karma system get modded down so quickly. seems like someone is rather insecure about people talking about it.

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4496002)

interesting, why is this post modded down to -1, yet its parent is untouched at -1? if one is offtopic, shouldn't the other one be? why is this not so?

this is simply moderation abuse. whoever modded these posts to -1 is clearly using their mod points to punish for expressing the "wrong" opinion. this person should be removed from ever being able to moderate again.

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4496157)

i love this editor modbombing. more proof that this secret unaccountable system sucks, and what hypocritical censors they are.

you people are no better than the old communist censors who silently and secretly wipe off opinions who don't toe the party line.

Re:Some further information (1, Offtopic)

Milican (58140) | about 12 years ago | (#4496212)

It's not about English class; we don't need an MLA-style credit here. A simple URL with a couple of quotes will do. It's not cool to copy-and-paste without giving credit, and it's straight-up misrepresentation to do so. Misrepresentation should not be encouraged or tolerated on this forum or any other.

Also, I agree it's rather funny we got modded down. Do your worst, moderators; my karma is still excellent and these small posts won't hurt it!!

JOhn

Re:Some further information (0)

Anonymous Coward | about 12 years ago | (#4496115)

I've said it before, I'll say it again. Shut up! I don't know who you're trying to impress. Your comments don't add to the discussion. If I want to know about your research (mentioned twice in the past week I believe!) I'll go to your friggin homepage. You seem to post on every 5th story! Looking up that much information on the net to copy and paste into a slashdot comment must take up an enormous amount of time. Do yourself (and us) a favor and use this time to search for a girlfriend or at least go buy a fleshlight [fleshlight.com] (warning, not safe for work or kids.)

Re:Some further information (2, Insightful)

dgmartin98 (576409) | about 12 years ago | (#4496153)

This guy has been plagiarizing for a while. I looked at a few of his posts, did a Google search on the text, and found that this isn't his first cut-and-paste job. If he really is at Imperial College, he'd know that plagiarism is frowned upon in the academic community, and would probably get you expelled (if you're a student) or demoted (if you're staff).

BTW, he goes by different names, usually those with the word "Physics" in it.

Here's another example of his copy and pasting:
This post: http://developers.slashdot.org/comments.pl?sid=42699&cid=4486740 [slashdot.org]
is copied from this web page:
http://www.intuitor.com/moviephysics/mpmain.html [intuitor.com]

Take a look for yourself at his post history, the wide range of topics, and supposed knowledge.

Dave

Re:Some further information (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4496202)

so on simply the basis of all this circumstantial evidence and past history, you simply throw out whatever he says? how about you read his posts first, then decide?

don't judge a book by its cover, as my momma told me. but slashbots here seem to have a deep insecurity about the evil trolls.

just because he makes the system and the moderators look completely stupid, don't take it out on him.

Re:Some further information (4, Insightful)

JayBat (617968) | about 12 years ago | (#4496155)

Those of us that have been around the block more than twice know that asynchronous design has been the technology of the future for a long, long time. My personal experience goes back to the mid-seventies, but I'm sure there were asynch he-men doing their thing with vacuum tubes and RTL. :-)

The catch, then as now, is that asynch logic is just plain more difficult for our tiny little human brains to grok. This was true back in the days when humans designed their own logic, and it is even more true now when 99%+ of all logic is designed not by humans, but by logic synthesis software (Synopsys DC and Cadence PKS).

That said, there are always folks out there doing Cool Stuff w/asynch circuits. Hope that Ivan Sutherland's group [sun.com] at Sun Labs survives Sun's recent massive layoffs. [theregus.com]

Re:Some further information (2)

Anonvmous Coward (589068) | about 12 years ago | (#4496229)

"2. Because asynchronous components only begin processing data when it becomes available they will only consume dynamic power when doing useful work; as compared with a clocked system which consumes dynamic power on every clock cycle regardless of the work done. This reduces system power consumption which is especially important for portable equipment."


That rang a bell with me. When I heard about 'clockless computing' a couple of years ago, one of the first examples was the microprocessor inside a pager. They wanted to go clockless (which I assume is the same type of processor here, polite corrections invited...) so they could have a lower-power processor. The idea? Make pagers last longer on a single battery.

I'd say it worked. However, from that same article, it'll be quite a while before desktop PC's use a processor like that. I should probably go read the article heh.

timerless design elements in pentium4? (4, Interesting)

sp00nfed (518619) | about 12 years ago | (#4495665)

Don't the Pentium 4 and most CPUs nowadays have at least some elements of asynchronous logic? I'm pretty sure that certain circuits in most current CPUs are implemented asynchronously.

Isn't this the same as having a CPU without a timer? (i.e. no MHz/GHz rating)

Re:timerless design elements in pentium4? (1, Informative)

Anonymous Coward | about 12 years ago | (#4495711)

I think you are correct that modern microprocessors use asynchronous logic in certain parts and synchronous design in others.

Re:timerless design elements in pentium4? (4, Informative)

Xeger (20906) | about 12 years ago | (#4495865)

Within the space of a single clock cycle, the Pentium (or other designs) might make use of asynchronous logic, but (and this is the important bit) the asynchronicity only exists within the domain of the CPU. The external interface to the CPU is still governed by a clock: you supply the CPU with inputs, trigger its clock, and a short (fixed) while later it supplies you with outputs. Asynchronous logic removes the clock entirely.

Kurzweil (3, Insightful)

Anonymous Coward | about 12 years ago | (#4495681)

Brains use async logic elements. Maybe the only way to achieve good artificial intelligence with practical speeds is with async logic. With a cluster of async nodes you can build a physical simulation of neural nets. Consider having a small array of async nodes simulating parts of a neural net at a time. That would be a lot faster than what would be possible with ordinary sequential processing. Async logic might very well bring large neural net research into practicality.

Re:Kurzweil (3, Insightful)

pclminion (145572) | about 12 years ago | (#4496176)

Brains use async logic elements.

First off, there's no proof of this. The brain certainly appears to be asynchronous, but there's no evidence to suggest that there isn't some kind of internal, distributed clocking mechanism that keeps the individual parts working together. There's not enough evidence either way.

Async logic might very well bring large neural net research into practicality.

Why does everyone seem to think that ANNs are the way toward "true AI?" ANNs are superb pattern matching machines. They can predict, and are resilient to link damage to some degree. But they do not think. ANNs have nothing to do with what's really going on in a biological brain, except that they are made of many interacting simple processing elements. The biological brain inspired ANN, but that's all.

Re:Kurzweil (0)

Anonymous Coward | about 12 years ago | (#4496455)

To prove that a system is asynchronous, it is sufficient to show that even a small number of elements are independent of an absolute timer. There are a lot of central nervous system neurons that receive no input from other neurons, which leads to the conclusion that a synchronizing signal cannot originate from the rest of the network.
The only remaining way for an absolute time coordinate to exist in the brain is through intracellular processes or the chemical environment of the entire brain.
Now, the physiology of neurons is very diverse; there are more types of cells in neural tissue than in any other kind of tissue in the body. This means the chances that a population of neurons will not respond to a single common chemical signal are quite high.
Furthermore, and most importantly, chemical signals are most likely too slow to provide a high-resolution timer for the neuron.
As for intracellular mechanisms for timing, each neuron would need some common communication pathway to keep every neuron in sync. Many neurons have been shown to generate continuous pulses without synaptic stimuli, and no two neurons have ever been shown to be perfectly synchronized, or even to have exactly the same frequency.
This shows that it is unlikely that any synchronizing system exists for the brain as a whole. Of course, the neural nets in the brain work in concert with each other; this can be done with repetition of signals coming from each network, network redundancy, or linear pathways, in order to avoid time mismatches. The brain is most certainly an asynchronous system by the way we define asynchronous.

Re:Kurzweil (2, Interesting)

LordKane (582228) | about 12 years ago | (#4496659)

"ANNs have nothing to do with what's really going on in a biological brain, except that they are made of many interacting simple processing elements."

I'm not sure quite how this can be. ANNs are inspired by and based on the biological brain, but they are not related? ANNs are just pattern matchers, and our brains are nothing like that? I beg to differ. ANNs are very similar to our brains. Humans are giant pattern matchers. How do we learn? Does stuff just pop into our heads and !BANG! we know it? No, we discover it or are told first. Science is based on being able to reproduce the results of an experiment, matching a cause->effect pattern. Speech is matching sounds to meanings; not everyone equates a sound to the same meaning, since the patterns people have learned can differ. Animals are great examples of pattern machines. How are animals trained? Most often by positive and negative reinforcement, which is essentially conditioning. We do the same thing to our ANNs: you get a cookie if it's right, nothing if it's wrong. The close matches are kept, the others are thrown out. So in what way are ANNs nothing like a biological brain? Our ANNs today are tiny and don't do much compared to a brain, which is layers upon layers of interlaced patterns. ANNs use simple structures as the basis for their overall structure; are our brain cells not very similar? To me, it seems they are incredibly similar, just on different scales.

Re:Kurzweil (5, Interesting)

imadork (226897) | about 12 years ago | (#4496667)

Why does everyone seem to think that ANNs are the way toward "true AI?" ANNs are superb pattern matching machines. They can predict, and are resilient to link damage to some degree. But they do not think. ANNs have nothing to do with what's really going on in a biological brain, except that they are made of many interacting simple processing elements. The biological brain inspired ANN, but that's all.

I couldn't agree more. I remember reading a comparison between the current state of AI and the state of early Flight Technology. (it may have even been here, I don't recall. I make no claim to thinking this up myself. Perhaps someone can point me to a link discussing who first thought of this?)

One of the reasons that early attempts at flight did not do well is because the people designing them merely tried to imitate things that fly naturally, without really understanding why things were built that way in the first place. So, people tried to make devices with wings that flapped fast, and they didn't work. It wasn't until someone (Bernoulli?) figured out how wings work - the scientific principles behind flight - that we were able to make flying machines that actually work.

Current AI and "thinking machines" are in a similar state as the first attempts to fly were in. We can do a passable job at using our teraflops of computing power to do a brute-force imitation of thought. But until someone understands the basic scientific principles behind thought, we will never make machines that think.

Re:Kurzweil (2)

Yokaze (70883) | about 12 years ago | (#4496271)

Actually, it's the only practical way to create a fairly sophisticated computation unit.

The problem is, to have a synchronous chip, there has to be synchronicity.

Problem: the more transistors a chip has, the smaller the production process has to be to keep production profitable.
And the smaller the process, the slower a signal travels across the die relative to the clock.

So you get a system where the clock signal can't be kept synchronous across the whole chip (or it'll be terribly complicated to distribute the signal). Hence, it'll have to be asynchronous.

Not necessarily completion-based asynchronous logic; maybe multi-core, like the current IBM PowerPCs.
But async logic actually seems to be the easier way, as SMP doesn't scale perfectly.

Furthermore, the smaller the structures and the higher the clock, the larger the clock driver has to be.

IIRC, about a third of a current chip's power goes to driving the clock alone. But don't cite me on that.

Re:Kurzweil (1)

Chiggy_Von_Richtoffe (565992) | about 12 years ago | (#4496341)

>Maybe the only way to achieve good artificial intelligence with practical speeds is with async logic. With a cluster of async nodes you can build a physical simulation of neural nets.

Okay, new poll
The first successful neural-net AI should be named:
1. "Adam"
2. "Isaac"
3. "sky.net"
4. "Beowulf" -laugh it's funny
5. "Lore/Data"
6. What do you mean CowboyNeal isn't the first?

Re:Kurzweil (2)

naasking (94116) | about 12 years ago | (#4496423)

I have strong reservations about whether logic alone can achieve consciousness at all. The fact that it is asynchronous adds nothing that wasn't there before.

Re:Kurzweil (0)

Anonymous Coward | about 12 years ago | (#4496510)

There is a great dichotomy between synchronous and asynchronous quantum computers. Sync QCs cannot generate the superpositions that some theorize consciousness entails, because recycling a quantum state through a single processing unit is vastly different from having two equivalent processors acting on a superposition. Async QCs networked like the brain might very well be the physical basis for consciousness.

Re:Kurzweil (2)

brejc8 (223089) | about 12 years ago | (#4496631)

Well, let's see:
Average- (rather than worst-) case performance.
Lower latency.
Lower power consumption.
Zero power consumption in the static state.
Lower EMI.
Security, by being immune to clock-glitch attacks and some power attacks.
What else do you want?

What's wrong with synchronous? (5, Interesting)

phorm (591458) | about 12 years ago | (#4495693)

"On the flip side, the millions of simultaneous transitions in synchronous logic begs for a better way, and that may well be asynchronous logic."

The advantage outlined here seems to be independent functionality between different areas of the PC. It would be nice if the components could work independently and time themselves, but is there really a huge loss in sustained synchronous data transfer?

From what I've understood, in most aspects of computing, synchronous data communication is preferable, i.e. network cards, sound cards, printers, etc. Don't better models support bi-directional synchronous communication?

Re:What's wrong with synchronous? (4, Funny)

Hard_Code (49548) | about 12 years ago | (#4495840)

"but is there really a huge loss in sustained synchonous data transfer?"

I'll answer that question, right after I look up the answer in memory...

Re:What's wrong with synchronous? (5, Insightful)

Junks Jerzey (54586) | about 12 years ago | (#4495915)

From what I've understood, in most aspects of computing, synchronous data communication is preferable, i.e. network cards, sound cards, printers, etc. Don't better models support bi-directional synchronous communication?

You're just talking about I/O. Of course I/O has to be synchronous, because it involves handshaking.

I think there are some general misconceptions about what "asynchronous" means. Seriously, all I'm seeing are comments from people without a clue about chip design, other than what they read about at arstechnica.com or aceshardware.com. And if you don't know anything about the *real* internals of synchronous chips, then how can you blast asynchronous designs?

So-called asynchronous processors have already been designed and prototyped. Chuck Moore's recent (as in "ten years old") stack processors are mostly asynchronous, for example. Most people are only familiar with the x86 line, and to a lesser extent the PowerPC, and a much, much lesser extent the Alpha and UltraSPARC. Unless you've done some research into a *variety* of processor architectures, please refrain from commenting. Otherwise you come across like some kind of "Linux rules!" weenie who doesn't have a clue what else is out there besides (Windows, MacOS, and UNIX-variants).

Re:What's wrong with synchronous? (3, Informative)

default luser (529332) | about 12 years ago | (#4496486)

Actually, most popular communications formats are "asynchronous".

Don't confuse yourself. Synchronous communications involve a real-time shared clock between points.

Then you have asynchronous communications standards like RS-232. The sender and receiver choose a baud rate, and the receiver waits for a start bit, then starts sampling the stream using its local clock. So long as the clocks are close enough, and the packets are short enough, you'll never get an error.

Then you have standards like Fast Ethernet, which are also asynchronous. AFAIK, the clock used to decode the Ethernet packet is contained somewhere in the preamble, and a PLL is tuned to the packet's clock rate. This avoids the obvious problems of the simple async communications of RS-232.

A SAMPLE OF THE ACTUAL CLOCK used to encode the packet is available to the receiver, but the receiver can only use this to tune its local clock. It still has to do the decoding asynch.
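The "clocks close enough, packets short enough" condition above can be put in rough numbers. A back-of-the-envelope Python sketch, assuming RS-232-style 8N1 framing (start bit + 8 data bits + stop bit) and a receiver that samples each bit at its nominal centre, resynchronizing only on the start-bit edge:

```python
# Clock-mismatch budget for async serial framing (8N1 assumed).
# The receiver realigns on the start bit, then free-runs on its own
# clock; the accumulated drift at the LAST sample must stay under
# half a bit, or the wrong bit cell gets sampled.

bits_per_frame = 1 + 8 + 1          # start + 8 data + stop
last_sample = bits_per_frame - 0.5  # centre of the stop bit, in bit times

max_mismatch = 0.5 / last_sample    # fractional clock error tolerated

print(f"tolerable clock mismatch: {max_mismatch:.1%}")
```

This is why cheap, loosely-matched oscillators suffice for short serial frames: roughly 5% total mismatch is tolerable, while longer frames would shrink the budget proportionally.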

Re:What's wrong with synchronous? (5, Informative)

Orne (144925) | about 12 years ago | (#4496009)

The root problem is data transfer within the CPU, not data transfer between I/O devices.

The clock speed (now above 10^9 Hz) is capped by the chip's ability to move a voltage signal around the die. Modern CPUs are "staged" designs, where an instruction is broken into an opcode "decode" stage, "register load", "operation", and "register unload" stages. For a given stage, you cannot clock its output faster than the time the computation takes to complete, or you're outputting garble.

A synchronous design means every flip-flop on the chip is tied to the same clock signal, which can mean one HUGE amount of wiring just to get everything running at the same speed, which raises costs. On top of that, you have charging effects due to the switching between HI and LO, which can cause voltage problems (which is why capacitors are added to CPUs). Then add resistive effects, where current becomes heat, and you run the risk of circuit damage. All of this puts hard limits on how fast you can make a chip, and at what price.

Asynchronous chip design lets us throw away the clock circuitry, and every stage boundary becomes status polling (are you done yet, are you done yet, OK, let's transfer the results). With proper design you can save a lot of material, and you can decouple the dependence of one stage on another, so the maximum instructions/second can approach the raw rate of the material.
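The "are you done yet" stage boundary described above amounts to a blocking request/acknowledge channel between neighbouring stages. A toy Python model (the two stage functions and the ×2 "computation" are invented for illustration; a bounded queue of size 1 stands in for the handshake wires):

```python
from threading import Thread
from queue import Queue

# A size-1 queue models the req/ack channel: the producer stage blocks
# on put() until the consumer stage has taken the previous item (ack),
# so no global clock is needed to coordinate the two stages.

channel = Queue(maxsize=1)
results = []

def decode_stage(ops):
    for op in ops:
        channel.put(op)   # request: blocks until the next stage accepts
    channel.put(None)     # sentinel: no more work

def execute_stage():
    while True:
        op = channel.get()  # taking the item acts as the acknowledge
        if op is None:
            break
        results.append(op * 2)  # stand-in for the real computation

t = Thread(target=execute_stage)
t.start()
decode_stage([1, 2, 3, 4])
t.join()
print(results)  # [2, 4, 6, 8]
```

Each stage runs at its own pace; data moves exactly when both sides are ready, which is the essence of a self-timed pipeline.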

Re:What's wrong with synchronous? (3, Informative)

Rolo Tomasi (538414) | about 12 years ago | (#4496154)

AFAIK modern CPUs are already asynchronous internally to a large extent. This is because at today's clock frequencies the signal runtime difference becomes significant, i.e. by the time a signal moves across the whole die, several clock cycles would already have passed. So prefetch, ALU, instruction decoding, FPU, etc. all operate independently from each other. I'm no expert on this, though; maybe someone more knowledgeable can shed more light on it.

Pipelining (5, Informative)

Andy Dodd (701) | about 12 years ago | (#4496744)

In most modern CPUs, all of those occur independently in different units in the pipeline.

But they still do their function once per global clock cycle. After that, they pass their results on to the next stage.

As a result, the clock rate is limited by the longest propagation time across a given pipeline stage. A solution that allows for higher clock speeds is to increase the number of pipeline stages. This means that each stage has to do less. (The P4 one-ups this by having stages that are the equivalent of a NOP just to propagate the signal across the chip. But they're still globally clocked and synchronous.)

P4 has (I believe) a 20-stage pipeline (it's in that ballpark). The Athlon is sub-10, as are almost all other CPUs. This is why the P4 can achieve such a high clock rate, but its average performance often suffers. (Once you have a 20-stage pipeline, you have to guess which branch you're going to take. Mispredict, and you have to start over, paying a penalty of many clock cycles.)
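The misprediction cost above can be put into the usual effective-CPI rule of thumb: CPI = base CPI + (branch frequency × misprediction rate × flush penalty), where the flush penalty grows with pipeline depth. All numbers below are illustrative, not measured P4/Athlon figures:

```python
# Effective cycles-per-instruction with branch misprediction.
# The deeper the pipeline, the more cycles a flush throws away, which
# is why deep pipelines need very good branch predictors.

def effective_cpi(base_cpi, branch_freq, mispredict_rate, penalty):
    return base_cpi + branch_freq * mispredict_rate * penalty

deep    = effective_cpi(1.0, 0.2, 0.05, 20)  # ~20-stage pipeline
shallow = effective_cpi(1.0, 0.2, 0.05, 8)   # ~8-stage pipeline

print(f"deep pipeline: {deep:.2f} CPI, shallow pipeline: {shallow:.2f} CPI")
```

With the same predictor quality, the deep pipeline pays noticeably more per instruction, which it must win back through a higher clock rate.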

Shorter pipelines can get around the branch misprediction issue by simply dictating that certain instruction orders are invalid. (For example, the MIPS architecture states that the instruction in memory after a branch instruction will always be executed, removing the main pipeline dependency issue in MIPS CPUs.)

With asynch logic, each stage can operate independently. I see a MAJOR boon in ALU performance - Adds/subtracts/etc. take up FAR less propagation time than multiplies/divides - but in synch logic the ALU has to operate at the speed of the slowest instruction.

Most important is the issue of power consumption. CMOS logic consumes almost no power when static (i.e. not changing state); power consumption is almost exactly a linear function of how often the state changes, i.e. how fast the clock is going. With async logic, if there's no need for a state change (i.e. a portion of the CPU is idle), almost no power is consumed. It is possible to get some of this advantage simply by changing the clock speed (e.g. Intel SpeedStep lets you switch between two clock-multiplier values dynamically, Transmeta's LongRun gives you far more control points and saves even more power, and many Motorola microcontrollers such as the DragonBall series can adjust their clock speed in small steps; one Moto uC can go from 32 kHz to 16 MHz with a software command).
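The "power is linear in switching rate" point follows from the standard CMOS dynamic-power formula P ≈ α·C·V²·f. A back-of-the-envelope Python sketch; every number here is made up for illustration, not a measurement of any real chip:

```python
# CMOS dynamic power: P = alpha * C * V^2 * f
#   alpha : average fraction of nodes switching per cycle
#   C     : total switched capacitance (farads)
#   V     : supply voltage, f : clock frequency

def dynamic_power(alpha, cap_farads, vdd, freq_hz):
    return alpha * cap_farads * vdd**2 * freq_hz

# Clocked block: switches every cycle regardless of useful work.
clocked = dynamic_power(alpha=0.15, cap_farads=50e-9, vdd=1.5, freq_hz=2e9)

# Async block idle 80% of the time: nodes switch only when there's work,
# which shows up as a 5x smaller effective activity factor.
async_ = dynamic_power(alpha=0.15 * 0.2, cap_farads=50e-9, vdd=1.5, freq_hz=2e9)

print(f"clocked: {clocked:.2f} W, async (80% idle): {async_:.2f} W")
```

The quadratic V² term is also why voltage scaling (as in LongRun) saves more power than frequency scaling alone.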

Re:What's wrong with synchronous? (5, Informative)

anonymous loser (58627) | about 12 years ago | (#4496121)

The advantage outlined here seems to be independant functionality between different areas of the PC. It would be nice if the components could work independently and time themselves, but is there really a huge loss in sustained synchonous data transfer?


Yes, for many reasons which are somewhat glossed over in the article (I guess the author assumes you are an EE or CPE familiary with the subject). Here's a quick breakdown of the two major issues:


1. Power Distribution & Consumption - In a synchronous system, every single unit has a clock associated with it that runs at some multiple of the global clock frequency. In order to accomplish this you must have millions of little wires running everywhere which connect the global clock to the individual clocks on all the gates (a gate is a single unit of a logic function, sorta like a 0 or 1). Electricity does not run through wires for free except in superconductors. Real wires are like little resistors in that to push the current through them, you have to give up some of the power you are distributing (how much is a function of the cross-sectional area of the wire). The power which doesn't make it through the wire turns into heat. One of the reasons you can fry an egg on your P4 is because it's literally throwing away tons of power just trying to syncrhonize all the gates to the global clock. As stated in the article, in an asynchronous system, the clocks are divided up on a modular basis, and only the modules that are running need power at all. This design technique is already used to some degree in synchronous designs as well (sorta like the power saving feature on your laptop), but does not benefit as much since in a synchronous design must always trigger at the global clock frequency rather than only triggering when necessary.


2. Processor Speed - Much like the speed of an assembly line is limited by the slowest person on the line, so too is the speed of a CPU limited by its slowest unit. The problem with a synchronous design is that *everything* must run at the slower pace, even if some parts could theoretically move faster. In an asynchronous design, the parts that can go faster, will, so the total processing time can be reduced.
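The assembly-line analogy can be put in toy-model form (invented stage delays; this ignores the handshake overhead a real asynchronous pipeline would pay):

```python
# Compare total latency through a 4-stage pipeline when every stage is
# forced to the worst-case delay (synchronous) versus when each stage
# hands off as soon as it finishes (idealized asynchronous).

stage_delays = [0.8, 1.0, 0.6, 1.4]  # per-stage actual latencies, in ns

# Synchronous: the clock period must cover the slowest stage, so every
# stage effectively takes max(stage_delays).
sync_latency = len(stage_delays) * max(stage_delays)

# Asynchronous (ideal): each stage takes only its actual delay.
async_latency = sum(stage_delays)

print(sync_latency, async_latency)
```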


Hope that helps.

Re:What's wrong with synchronous? (2)

CvD (94050) | about 12 years ago | (#4496741)

But that doesn't change the fact that there's still a slowest person in the line. Granted, for some cycles this person might not be needed, but if they were needed for some calculation, it would not speed up the process.

It's not that I disagree with the asynchronous design... I see the benefits, just pointing out a little (IMHO) flaw in your logic. :-)

Re:What's wrong with synchronous? (2, Informative)

hamsterboy (218246) | about 12 years ago | (#4496331)

Actually, the biggest advantage is in routing.

On a synchronous design of any complexity, quite a bit of the routing (i.e. where the wires go) is due to clock distribution. The CLK signal is one of the few that needs to go to every corner of the chip. There are various strategies for doing this, but they all have difficulties.

One method is to lay a big wire across the center of the chip. Think of a bedroom, with the bed's headboard against one wall; you end up with a U-shaped space. Now, suppose you (some data) need to get from one tip of the 'U' (the decoder) to the other (an IO port). Either you have to walk around the entire bed (a long wire), or go over it (a shorter wire). The obvious choice is to go over, but when you have a wire with one voltage crossing a wire with a (potentially different) voltage, you get capacitance, and that limits the clock speed of the entire chip.

With an asynchronous design (lots of smaller blocks with their own effective clocks), you don't have this. Data can be routed wherever it needs to go, without fear of creating extra capacitance. The downside is that they're very difficult to design. This is partially because there are no tools for this - most of the mainstream hardware simulators slow waaaaaaayyy down once you get more than a few clock signals running around.

-- Hamster

What everybody's wondering now is... (-1, Redundant)

whereiswaldo (459052) | about 12 years ago | (#4495704)

What do you get when you mix a Beowulf cluster with this? Parallel computing plus asynchronous computing... drooolll.

Beowulf redundant once you have async chips (4, Interesting)

Xeger (20906) | about 12 years ago | (#4495834)

In a way, an asynchronous circuit design already is a parallel computer. An asynchronous machine contains many (largely) independent components that communicate with each other in order to solve computational problems more efficiently by breaking them down into small pieces and working on them in parallel.

In this context, your notions of parallel computing will change greatly. Currently, individual nodes in a cluster are islands of computation, separated by (comparably) vast distances. Messages between nodes take orders of magnitude more time than messages within a node.

When you set out to build a supercomputing cluster in the asynchronous world, ideally the entire cluster would be within a single die. Then the latency between nodes would be reduced to microseconds or nanoseconds, and nodes could split work more effectively. The high-speed buses and complex arbitration schemes required for asynchronous computing will be equally useful for designing massively parallel clusters-on-a-chip.

Re:What everybody's wondering now is... (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4496297)

Get off the crack, moderators. It's a valid point.

Problem with Async (3, Insightful)

adrox (206071) | about 12 years ago | (#4495705)

The problem with asynchronous logic is that even though it might seem faster in theory, you have to deal with the introduction of many new race conditions. Thus, to prevent the race conditions, you need to implement many handshaking methods. In the end it really becomes no faster than synchronous logic, due to the handshaking. This is especially true these days with 2.5 GHz CPUs.

Re:Problem with Async (2)

brejc8 (223089) | about 12 years ago | (#4496094)

Nowadays we have good software which ensures you don't have race conditions. In fact this is where async becomes great, as your data and clock form one big race condition: the clock must be slower than the data.

Re:Problem with Async (3, Informative)

taeric (204033) | about 12 years ago | (#4496390)

Not sure if you were serious or not...

Software will have next to nothing to do with the race conditions in the processor. Instead, the race condition you pointed out will be the difficulty. That is, how can you ensure the "ready" signal is indeed slower than the computation a module is performing? This is not an easy thing to do, especially if you want it to report as soon as it is done. Most likely, a signal will fire after the longest time a unit could take. You do not get a speedup for the fast cases, but you don't have to worry about complex logic on the ready path, either. Another solution would be handshaking, but then you may run into an explosion in the amount of logic.

Also, something I think would be a problem. Many of the current optimizations in out of order execution are almost custom fit to a clocked design. That is, the processor knows IO will take so many cycles, branches take a certain amount, etc. Now, currently (especially with hyperthreading) the processor is coming closer to keeping the execution units busy at all times. Do people really expect some magical increase when the clock is taken out? The scheduler will have to change dramatically. Right?

Re:Problem with Async (2)

brejc8 (223089) | about 12 years ago | (#4496550)

This is done using handshaking. It's a method of communication that ensures both parties are happy before moving on to the next piece of data.
As for logic, there are several methods to ensure that the result is ready before the latch switches. Using matched delays involves races, but is safer, as it's more local than a global clock.
A better method is using things like dual rail and delay insensitivity. This uses two wires to communicate each bit of data: wiggle one for a one and the other for a zero. No races.
Asynchronous isn't that weird, you know. Fine, an instruction might take 1ns or 1.2ns depending on the data. It still follows the rules of sequencing.

Read first chapter of this [man.ac.uk] for more details of race free computation.

I even made a method of converting synchronous designs into async ones automatically.

Re:Problem with Async (2)

brejc8 (223089) | about 12 years ago | (#4496775)

Oh wait, I see what you're getting at. The software is used in the design process rather than at run time. If you use a tool like "balsa" to design, then you can get race-free implementations.

Cyclic History (5, Interesting)

nurb432 (527695) | about 12 years ago | (#4495725)

Isn't this where the idea of digital logic really got started? At least its how it was taught when I was in school.

We even did some design work in async. Cool stuff. Easy to do, fast as hell...

Never did figure out why it never caught on, except for the difficulty in being general purpose - so easy a job with sync logic. And I guess it does take a certain mind-set to follow it.

Re:Cyclic History (4, Interesting)

Anonvmous Coward (589068) | about 12 years ago | (#4496335)

"Never did figure out why it never caught on."

I think the internet is a good metaphor for this technology. Take Quake 3 for example. Think about what it takes to get several people playing over the net. They all have to respond within a certain time-out window, for adequate performance they have to respond in a fraction of the timeout time, and there's a whole lot of code dedicated to making sure that when I fire my gun, 200ms later it hits the right spot and dings the right player for it.

It works, but the logic to make that work is FAR more complex than the logic it takes to make something like a 'clocked internet' work. The downside, though, is that if you imagine what a clocked internet would be like, you'd understand why Q3 wouldn't work at all. In other words, the benefits would probably be worthwhile, but it's not a simple upgrade.

Intel and asynch clocks (4, Interesting)

catwh0re (540371) | about 12 years ago | (#4495739)

A while back I read an article about Intel making clockless P2 chips that performed roughly 3 times faster (in MHz terms, not overall performance).

Intel recognise clockless as the future, and hence the P4 actually has portions that are designed clockless.

Before the know-it-alls follow this up with "but it runs at 2.xx GHz", let them please read an article about how much of the chip is actually oscillating at that immense speed.

As it's said in the EE industry, "oh god imagine if that clock speed was let free on the whole system"

asynchronous logic (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495743)

1)My prostate is infected.

2)Money is the root of all evil.

Thus, my prostate is made of gold.

I've had this for years... (5, Funny)

Call Me Black Cloud (616282) | about 12 years ago | (#4495745)

and it doesn't work all that great.

It usually goes like this: little head decides to take some action that big head later decides wasn't such a good thing to do.

Fortunately I've invested in a logic synchronization device, which I like to call "wife". Wife now keeps little head from failing to sync with big head through availability (not use) of tools "alimony", "child support", and "knife" (aka "I'll chop that damn thing off while you sleep!")

Re:I've had this for years... (2)

Hard_Code (49548) | about 12 years ago | (#4495863)

Yes, but then you lose the benefits of branch prediction...

Doing it already... (2, Informative)

Sheetrock (152993) | about 12 years ago | (#4495752)

Technically speaking, if you're not using a SMP system you're processing logic asynchronously.

But more to the point: while asynchronous logic may appear to offer a simple tradeoff (slower processing time for more efficient battery life), recent advances in silicon design make the argument for asynchronous components moot. For one thing, while two synchronous ICs take twice the power of one asynchronous IC (not quite, because of the impedance caused by the circuit pathway between two chips, but that's negligible under most circumstances), they will in general arrive at a result twice as quickly as their serial pal. Twice as quick, relatively equal power consumption.

The real reason for the drive towards asynchronicity is to cut down on the costs of an embedded design. Most people don't need their toaster to process the 'Is the bread hot enough' instruction with twice the speed of other people's toasters. But for PDAs (Personal Data Assistants) or computer peripherals, I wouldn't accept an asynchronous design unless it cost half as much.

Are you confused? (2)

Jhan (542783) | about 12 years ago | (#4496372)

Eh... "Asynchronous" means "without synchronization" (ie. "without clock"). It has nothing to do with serial vs. parallell operation.

HIBT?

might be a while... (2, Interesting)

jaredcoleman (616268) | about 12 years ago | (#4495764)

These chips are great for battery powered devices, such as pagers, because they don't have to power a clock. Extends the batt life at least 2x. But even if the advantages are superior to clocked chips for larger markets, how do you market something like this to people who want to see "Pentium XXXVIV 1,000,000 Ghz" on the packaging?

va linux annual report (-1, Offtopic)

larry bagina (561269) | about 12 years ago | (#4495766)

Hey, I know this is offtopic, but it is important, and, for obvious reasons, the story submission got rejected.

VA Linux released their annual report [yahoo.com] Friday.

The future looks bleak:

IF WE FAIL TO ADEQUATELY MONITOR AND MINIMIZE OUR USE OF EXISTING CASH, WE MAY NEED ADDITIONAL CAPITAL TO FUND CONTINUED OPERATIONS BEYOND FISCAL YEAR 2003.
Since becoming a public company, we have experienced negative cash flow from operations and expect to experience negative cash flow from operations for at least the foreseeable future. Unless we monitor and minimize the level of use of our existing cash, cash equivalents, marketable securities and credit facilities, we may require additional capital to fund continued operations beyond our fiscal year 2003. We may require additional funding within this time frame, and this additional funding, if needed, may not be available on terms acceptable to us, or at all. A continued slowdown in technology spending as compared to the general economy, as well as other factors that may arise, could affect our future capital requirements and the adequacy of our available funds. As a result, we may be required to raise additional funds through private or public financing facilities, strategic relationships or other arrangements. Any additional equity financing would likely be dilutive to our stockholders. Debt financing, if available, may involve restrictive covenants on our operations and financial condition. Our inability to raise capital when needed could seriously harm our business.

WE HAVE A HISTORY OF LOSSES AND EXPECT TO CONTINUE TO INCUR NET LOSSES FOR THE FORESEEABLE FUTURE. We incurred a loss of $18.8 million for our fiscal fourth quarter ended July 27, 2002, primarily due to the continued slowdown in technology spending as compared to the general economy, restructuring charges, long-lived asset impairments and our ramp up of our software business, and we had an accumulated deficit of $725.9 million as of July 27, 2002. We expect to continue to incur significant product development, sales and marketing and administrative expenses. We expect to continue to incur net losses for at least the foreseeable future. If we do achieve profitability, we may not be able to sustain it. Failure to become and remain profitable may materially and adversely affect the market price of our common stock and our ability to raise capital and continue operations.

Re:va linux annual report (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#4495982)

*Gasp* Slashdot might die *before* BSD? :-)

Oh, good lord, now you've gone and done it! (-1, Troll)

Anonymous Coward | about 12 years ago | (#4496684)

Stephan Hawking, found dead at 60

Noted comsmologist and gansta rapper Stephan "M.C." Hawking was found dead in his Cambridge crib early this morning, the apparent victim of a drive by function collapse. You may not have understood "A Brief History of Time", but there is no denying his impact upon the British gangsta rap scene. Truly an English icon, he will be missed.




(dammit, I'm gonna turn this meme yet!)

Asynchronous logic vs radiation ? (4, Interesting)

renoX (11677) | about 12 years ago | (#4495838)

I'm wondering how asynchronous logic stands up against transient errors induced by a cosmic ray?

On a synchronous circuit, most of the time such a glitch won't do anything, because it won't occur at the same time the clock "rings", so the incorrect transient value will be ignored.

As the "drawing size" of circuits gets lower and lower, every circuit must be hardened against radiations, not only circuits which must go on space or in planes..

Re:Asynchronous logic vs radiation ? (2)

Xeger (20906) | about 12 years ago | (#4495965)

You could mitigate this problem somewhat by including redundant computation as part of your asynchronous workload. If radiation only causes local transients, then any sensitive operations could be performed by two different units, and their results compared in a third unit.

The disadvantage is that a glitch in any of the three units would result in a computation being detected as invalid. And, of course, it adds even more complexity to an already staggeringly complex intra-unit communication problem.

The advantage is that you don't need to spend as much time radiation-hardening your chips. Also, they become more naturally fault tolerant. For the longest time, system design in the space exploration field has been dominated by multiple redundancy; I think they would really dig multiple redundancy within a single chip.
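The compare-in-a-third-unit scheme extends naturally to triple modular redundancy with a majority voter; a toy software sketch (real TMR voters are of course built in hardware):

```python
# Run the same computation in three independent units and majority-vote.
# With three copies a single transient fault is simply outvoted; the
# two-copies-plus-comparator variant can only detect a mismatch, not
# decide which result was right.

def vote(a, b, c):
    """Majority vote over three redundant results; None if all disagree."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None  # triple disagreement: flag the computation for retry

print(vote(42, 42, 42))  # 42 (no fault)
print(vote(42, 99, 42))  # 42 (one glitched unit outvoted)
print(vote(1, 2, 3))     # None (unrecoverable this round)
```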

Re:Asynchronous logic vs radiation ? (3, Informative)

brejc8 (223089) | about 12 years ago | (#4496392)

There are two factors here.
Firstly, on a glitch the synchronous part will take a certain period to return a wire low/high and resume its operation. By then it would be too late, as the clock edge has gone. An asynchronous property called delay insensitivity, which some designs have, allows any wire to have any delay to rise or fall. So you can pick off any wire from, let's say, your ALU, reroute it outside the chip through a telephone line to the other side of the world and back into the chip, and the design would still work (maybe at 1 ips, but nevertheless the result would be correct).
Secondly, async releases much less EMI. The inside of your computer is riddled with radiation much nastier than cosmic rays. Most chips are composed of millions of aerials which pick up all these rays and make your chip malfunction. Fine, you can slow down your clock and hope for the best, but it's better not to create them in the first place.

Re:Asynchronous logic vs radiation ? (2)

pmz (462998) | about 12 years ago | (#4496769)

I'm wondering how asynchronous logic stands up against transient errors induced by a cosmic ray?

What about ECC on each internal bus? It works well for external buses (RAM, etc.).
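To show how little logic single-error correction actually needs, here is a toy Hamming(7,4) model (a software sketch only; real bus and memory ECC typically uses wider SEC-DED codes such as 72/64):

```python
# Hamming(7,4): 4 data bits plus 3 parity bits let the receiver locate
# and correct any single flipped bit in the 7-bit codeword.

def hamming_encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(code):
    """Locate and fix at most one flipped bit; return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode([1, 0, 1, 1])
word[5] ^= 1                   # a cosmic ray flips one bit in transit
print(hamming_correct(word))   # [1, 0, 1, 1] -- corrected
```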

ok, but... (1, Insightful)

Anonymous Coward | about 12 years ago | (#4495892)

... where are the design tools?

We all know about the advantages async logic has in many respects over clocked logic. The problem is, the async logic *design* tools are nowhere near as good or as numerous as the tools available for designing clocked logic.

Chicken and egg problem? Maybe, or maybe just another untapped opportunity for those crazy software people...

Asynchronous Logic (0)

Anonymous Coward | about 12 years ago | (#4495894)

Ready? Not right now.

More info: (5, Informative)

slamden (104718) | about 12 years ago | (#4495930)

There was an article [sciam.com] in Scientific American about this just recently...

What if? (5, Insightful)

bunyip (17018) | about 12 years ago | (#4495975)

I'm sure that many /. readers, like me, are wondering if asynchronous chips get faster if you pour liquid nitrogen on them.

Seriously though, does the temperature affect the switching time? Or does the liquid nitrogen trick just prevent meltdown of an overclocked chip?

Re:What if? (3, Funny)

brejc8 (223089) | about 12 years ago | (#4496174)

Yes they do. We had a demonstration board where if you sprayed some CFC spray on it, it would increase in speed. Only a little, because of the plastic packaging, but it was quite cool.
When testing it I left it running a dhrystone test overnight, logging the results, and as the office cooled down at night the chip went a little bit faster, then slowed down by the morning. I think I might have invented the most complex thermometer ever.

Read the article (5, Informative)

Animats (122034) | about 12 years ago | (#4495994)

Read the cited article: "Asynchronous Logic Use -- Provisional, Cautious, and Limited". The applications being considered aren't high-end CPUs. Most of the stuff being discussed involves low-duty-cycle external asynchronous signals. Think networking devices and digital radios, not CPUs.

In synchronous circuits, there are power spikes as most of the gates transition at the clock edge. It's interesting that this issue is becoming a major one. ICs are starting to draw a zillion amps at a few millivolts and dissipate it in a small space while using a clock rate so high that speed of light lag across the chip is an issue. Off-chip filter capacitors are too far from the action, and on-chip filter capacitors take up too much real estate. Just delivering clean DC to all the gates is getting difficult. But async circuitry is not a panacea here. Just because on average, the load is constant doesn't help if there are occasional spikes that cause errors.

One of the designers interviewed writes: "I suspect that if the final solution is asynchronous, it will be driven by a well-defined design methodology and by CA tools that enforce the methodology." That's exactly right. Modern digital design tools prevent the accidental creation of race conditions. For synchronous logic, that's not hard. For async logic, the toolset similarly has to enforce rules that eliminate the possibility of race conditions. This requires some formal way of dealing with these issues.

If only programmers thought that way.

Re:Read the article (2)

naasking (94116) | about 12 years ago | (#4496497)

For synchronous logic, that's not hard. For async logic, the toolset similarly has to enforce rules that eliminate the possibility of race conditions. This requires some formal way of dealing with these issues.

aka, Design Patterns.

asynchronous logic? (2, Interesting)

snatchitup (466222) | about 12 years ago | (#4496012)

Sounds like my wife.

But seriously, isn't that an oxymoron?

At first, I thought it meant that we take a program, break it up into logic elements and scramble them like an egg. That won't work.

But after reading, I see it means that everything isn't pulsed by the same clock. So, if a circuit of 1,000 transistors only needs 3 picoseconds to do its job, while another 3,000 transistors actually need 5 picoseconds, then the entire 4,000 transistors are clocked for 5 picoseconds. So, 1,000 transistors are needlessly powered for 2 picoseconds.

This adds up when we're talking 4 million transistors and living in the age of the Gigahertz.

Re:asynchronous logic? (2)

Kragg (300602) | about 12 years ago | (#4496353)

But if the 2nd circuit takes somewhere between 2 and 5 picoseconds, depending on the operation being executed, then half the time you're more efficient, the other half the same.
Full-chip synchronized clock-ticks bring the average operation execution time down to the speed of the slowest, every time.

am i ready? (4, Funny)

cheesyfru (99893) | about 12 years ago | (#4496022)

Am I ready for asynchronous logic? It doesn't really matter -- it can come along whenever it wants, and I'll come use it when I have some spare cycles.

re: Ready for it? (3, Funny)

tomhudson (43916) | about 12 years ago | (#4496063)

Definitions for the real world: Asynchronous logic: anything you think about before your first cup of coffee...

Second real-world definition: When someone else (usually of the opposite sex) answers your question with an accusation that's completely off-topic.

Third real-world definition: Many slashdot posts (sort of including this one :-)

1963 PDP-6 had it, surely? (4, Funny)

dpbsmith (263124) | about 12 years ago | (#4496081)

Surely Digital Equipment Corporation's PDP-6 [mit.edu] had it in 1963?

Or is this modern "asynchronous" logical some totally different concept?

Notice the article title? (2)

GlobalEcho (26240) | about 12 years ago | (#4496082)

For those who may have missed it (as I did the first time)...the article title itself is a bit playful.

UARTs? (2)

Florian Weimer (88405) | about 12 years ago | (#4496120)

Isn't a UART at least partly an asynchronous chip? So you've probably got one in your PC today...

And Chuck Moore's description of an asynchronous Forth chip is available in Google's cache [216.239.39.100] (I don't know why he pulled it from the web site).

Sun likes this area. (2)

AtariDatacenter (31657) | about 12 years ago | (#4496152)

Sun has talked quite a bit about async logic in their own designs. I forget if it is in their current generation of chips or not, but they've talked about putting 'islands' of async logic into their chips, with an eventual goal of using it throughout.

The article at embedded.com talks about 'security'. What they really mean here is, for example, the smart access cards in a DirecTV box. They say a clockless design makes it harder to figure out what is going on. So, it is a DRM monster, they say.

Ready Am I (3, Funny)

istartedi (132515) | about 12 years ago | (#4496225)

No problem asynchronous logic will be. To program some say difficult but they weak minded people are. Excuse me, I have to post a response to the story on Slashdot about logic asynchronous now.

This is *New*?!?!?! (2, Funny)

Chiggy_Von_Richtoffe (565992) | about 12 years ago | (#4496245)

Aww hell, My SO has been doing this for *Years*, i mean she is the queen of one-sided logic for ages ;-)
p.s. Kylie, if you're reading, j/k love ya!
~what was that? I dunno, but you've got its license plate number stamped on your forehead ~*ouch*~

Code samples? (2)

rocjoe71 (545053) | about 12 years ago | (#4496322)

All I want to know is:
  1. how this would change the appearance of code that I can write?
  2. Would there be any difference at all?
  3. Would I need an entirely new programming language, replete with syntax to leverage asynchronous logic?
  4. Are there (sensible) examples of this for me to gawk at?
This really sounds interesting but being just a dumb programmer, I'd be interested in seeing this concept in terms I can understand (if it exists...).

Re:Code samples? (2)

brejc8 (223089) | about 12 years ago | (#4496425)

1. It doesn't. The processor might be async, but the code is just your original code. Amulet have been making asynchronous ARMs, and I made an async MIPS, with no need to alter the code.
2. Well, it would use less power, if that's what you want. Or go faster. Or if the chip detects a freeze instruction, it will statically sit there waiting for an interrupt.
3. Nope.
4. C

Async research group at Caltech (3, Interesting)

mfago (514801) | about 12 years ago | (#4496414)

Here at Caltech [caltech.edu] the CS department is into this kind of thing.

They've even built a nearly complete MIPS 3000 compatible processor [caltech.edu] using async logic.

Seems pretty cool, but I'm waiting for some company to expend the resources to implement a more current processor (such as the PowerPC 970 perhaps) in this fashion.

Re:Async research group at Caltech (2)

brejc8 (223089) | about 12 years ago | (#4496459)

Here at Manchester [man.ac.uk] the CS department is into this kind of thing.

They've even built complete ARM-compatible processors using async logic.

We did make one with an external company to use in their products.

I am currently working on making a nice fast MIPS design myself

Excellent Overview article in Scientific American (0, Redundant)

MarkedMan (523274) | about 12 years ago | (#4496427)

Windows... (0)

Anonymous Coward | about 12 years ago | (#4496823)

Since Windows sometimes seems asynchronous, perhaps it would crash less on such a machine?