
Clockless Computing?

timothy posted more than 13 years ago | from the proc.-detected:-P9-"around-eightish" dept.

Hardware 225

ContinuousPark writes: "Ivan Sutherland, father of computer graphics, has spent the last ten years designing chips that don't use a clock. He's been proposing a method called asynchronous logic, in which no clock signal is distributed to regulate every part of the chip. The article doesn't give many technical details (greatly needed), but Sutherland, now doing research for Sun, says that significant breakthroughs have been made recently to make this technology viable for mass production. It is estimated that 15% of a chip's circuitry is dedicated to distributing the clock signal and as much as 20% of the power is consumed by the clock. This is indeed intriguing; what unit will replace the familiar megahertz?"


No clock cycle?? Hmmm.... (1)

grape jelly (193168) | more than 13 years ago | (#384374)

Even without a set clock cycle, any CPU must have some sort of regulatory system that coordinates the execution of instructions (this is, of course, the primary function of the system clock). Without such a system, all parts of the CPU could execute instructions at random, making performance-improving techniques such as pipelining useless. So where would the regulatory circuitry be on such a chip? Surely adding it to the CPU itself would counteract the supposed gains from ditching the clock.

Measuring Speed (1)

IRNI (5906) | more than 13 years ago | (#384378)

You would measure this form of speed with names like

Intel Pentium V Fast
Intel Pentium V Really Fast
Intel Pentium V Yeah we know this one costs the same as the fast one last year but it is so much faster.
AMD Thunderbird Oh my god did you see how fast that was
AMD Thunderbird Seriously ya'll this is quick


Re:Units (1)

matlhDam (149229) | more than 13 years ago | (#384380)

You forgot the all-important CowboyNeal. I don't know about you, but I wouldn't buy a computer with less than 50 giga-CowboyNeals of processing power.

Re:This sounds like a dataflow machine (4)

Cenotaph (68736) | more than 13 years ago | (#384382)

Along with the comment below about these problems being moved to the compiler/assembler writers, I'd like to add that you can have a machine that is very much like a dataflow machine, but uses conventional instructions. It's been done at Sun labs and is called the CounterFlow Pipeline Processor (CFPP). The original paper that proposed it, coauthored by Sutherland, can be found here [] in PDF and PS formats. I did a presentation on this architecture for a class a few years ago. If you're interested, the slides for that presentation can be found here [] in PowerPoint format. There was also a research group at Oregon State, but their web page is MIA.

So, what is a CFPP? It is a processor with a pipeline where data and instructions flow in opposite directions, with the instructions usually thought of as moving "up" and data as moving "down". The functional units (FU) are attached as sidings to the main pipeline. Each FU launches from a single pipeline stage and writes its results to a different stage, further "up" the pipeline. The main goals of this architecture were to make the processor simple and regular enough to create a correctness proof and to achieve purely local control.

If Sun ever produces a processor that is asynchronous, it will likely look similar to this.
"You can put a man through school,
But you cannot make him think."

Re:Units (not floating point operations) (2)

Relic of the Future (118669) | more than 13 years ago | (#384383)

I wouldn't want the manufacturer optimizing for that over other, useful things.

You mean, like the way they optimize for MHz over other, useful things, like flops? Remember when AMD did that little ad campaign of "Our 800 MHz chip is faster than Intel's 766 MHz chip!" How many "normal" people followed that one? Today, MHz is the standard rating of speed, and is misleading. mflops would be a much better measure (although you're right that, with different ops taking different amounts of time, you'd have to carefully define what you mean by an operation).

Secondly, I don't think it will take "several years" of experimentation to figure out how much faster your add is than your multiply. We already know the answer to that question, and it depends on how you decide to implement your circuit. If you decide to do multiplication with shift/add you could get a tiny little multiplier that's freaking slow, or you can go hog-wild with 7,3 counters, Wallace trees, fast adders, etc. etc., and have a gigantic circuit that's really fast, but that's how hardware design has always worked and the options for solutions will be unchanged. Now, though, you have a few more choices to make since your ops don't all have to fit into equal-length pipeline stages, and also each op doesn't have to take the same amount of time for each set of inputs (for example, 7 + 1 might take x gate-delays of time, whereas 7 + 0 could take much less.)

It's all very exciting.

God does not play dice with the universe. Albert Einstein

Philips Async DCC Chip (1)

Ion Berkley (35404) | more than 13 years ago | (#384385)

The point most people miss about chip design in general is that, whatever the methodology being discussed, modern chip designs would be utterly infeasible without the CAD software that goes with them. The complexity of any IC design, synchronous or asynchronous, is utterly beyond any manual design methods at this point, and the main reason that synchronous design predominates today is that CAD tools for synchronous synthesis came to market (Synopsys in particular) and have dominated the field for nearly 10 years now. However, research on async CAD tools continues, one notable effort being the European OMI project EXACT ( Detail.cfm?ID=6&Project=6143), which yielded a commercial error correction chip used by Philips in DCC players. If groups such as the AMULET group can automate their methodologies then async design can quickly gain ground in various power- and performance-sensitive niches.

Re:How about the human brain? (1)

Steeltoe (98226) | more than 13 years ago | (#384387)

Try to count seconds _asynchronously_ with your heart... It's not easy, I've tried :-)

- Steeltoe

Re:Units (1)

Stipe (35684) | more than 13 years ago | (#384388)

> [can measure how many operations it can do per second]

Yes, but the point is that even the same processor may take a different amount of time to do the same operation, albeit on different data.

Re:How about the human brain? (1)

Tiroth (95112) | more than 13 years ago | (#384389)

There are structures in your brain that basically function as a 24 hour clock, and some people can use them so effectively that they can tell time. This "clock" is not distributed to every functional component though. Ditto in your hearing example.

Asynchronous computers will have a timing clock, just not a clock signal that controls the gates on each functional component. This is the difference between attaching a few thousand parts to the clock and attaching 37 million+.
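The local coordination that replaces the distributed clock is typically built from handshake primitives such as the Muller C-element, whose output changes only when its inputs agree. A toy software model (Python, purely illustrative, not a circuit simulation):

```python
class CElement:
    """Toy Muller C-element, a basic async building block: the output
    follows the inputs only when both agree, otherwise it holds state.
    Chains of these implement request/acknowledge handshakes."""
    def __init__(self):
        self.out = 0

    def update(self, a, b):
        if a == b:
            self.out = a
        return self.out

c = CElement()
print(c.update(1, 0))  # inputs disagree -> holds 0
print(c.update(1, 1))  # both high -> output rises to 1
print(c.update(0, 1))  # disagree again -> holds 1
```

The state-holding behaviour is what lets neighbouring stages agree "data is ready / data is consumed" without any global tick.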

Design Logic (4)

Alien54 (180860) | more than 13 years ago | (#384390)

As I recall, this story has been around for a few years. But this does not make it less relevant.

What makes it interesting is that you have to fundamentally redesign your whole logical design so that you have a general-purpose design.

With clocked computing, it is easy to see how you would flush buffers, etc. Clockless computing would be more problematic, and of course, would probably be proprietary.

My initial reaction is that it would work easiest in things like embedded processing. I also wonder if there would have to be some sort of evolution similar to what we have seen over the past few years with Intel, Motorola, etc.

One must not forget that the increase in performance for an awful lot of these chips has to do with clock speed increases, as well as code designed to take advantage of certain coding features in the hardware.

An early example of this is when the Pentiums first came out. For a while you had 486 boxes and Pentiums with the same clock speeds on the market. You could compare performance between systems with the same video cards, same RAM, same cache, etc., even though the chip sets were not the same. This was educational. As I recall, the performance boost for software not taking advantage of the Pentium feature set was about 20 - 25% (?) I may have this wrong, of course.

But at a time when Pentium systems cost twice as much as a 486, it was definitely buying for the future.

Re:Would this really work? (1)

uberchicken (121048) | more than 13 years ago | (#384399)

...but only when it needs to synchronize with those components.

Waste of time.. (2)

Edge (640) | more than 13 years ago | (#384400)

Is ten years of research really worth a 20% decrease in power consumption and a 15% decrease in overall chip size? I can't see how it could be. Chances are, by the time this technology is ready for prime-time (if ever), chips will be utilizing vastly different technology than they are now.

It's becoming increasingly hard to shrink chip sizes and increase speeds. Even using different metals such as copper and shrinking trace widths, we are eventually going to hit a brick wall with current technology. After that, taking away 15% of the chip complexity is not going to go far in creating the next generation of faster chips.

It's time to look to new technologies: carbon nanotubules and buckyballs, quantum computing, etc.

Re:Asynchronous Logic (1)

onion2k (203094) | more than 13 years ago | (#384402)

Have you been peeking at my source?

We can fix that (2)

leonbrooks (8043) | more than 13 years ago | (#384406)

The last time I checked, there wasn't any sort of high frequency clock signal running down my spine.

I can introduce you to one of several cousins who generally have the effect of sending high-frequency signals not only along your spine, but along nerves you never knew you had before - if the ``mike'' in your email address does stand for michael. Youngest candidate is about 15, oldest is about 30. Warning: they're more likely to stop your clock than start it, if the old ticker isn't in good shape or the old blood supply is a bit lean... (-:

The "Simpler Design" (1)

jjr (6873) | more than 13 years ago | (#384408)

Most of the time wins. From what people are saying, asynchronous logic is much harder to create. Or is it just that everyone was taught more about clock-based circuits than asynchronous logic circuits? Well, I would personally like to see how this plays out.

Re:Units (offtopic rant) (1)

unperson (223869) | more than 13 years ago | (#384417)

I don't think flops will be able to replace GHz, what with the idea of a floating point operation recently becoming "obsolete". To quote the Matlab version 6 help file for the function FLOPS (available in Matlab 5.3.11 and prior):

This is an obsolete function. With the incorporation of LAPACK in MATLAB version 6, counting floating-point operations is no longer practical.

While I have to admit that FLOPS didn't give a 100% accurate picture of what's going on, it was nice to test an algorithm and see if the actual flop count matched the theoretical count...or to get a rough idea of what constants are associated with the order of the algorithm. Thank God for Octave!

Units (4)

Zordok (90071) | more than 13 years ago | (#384419)

FLOPS, of course.
Even without a processor clock, you should still be able to measure how many operations it can do per (real-time) second.
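Rating throughput against wall-clock time instead of processor cycles is straightforward; a minimal sketch (Python, purely illustrative, and the rate obviously depends on the machine and the operation you pick):

```python
import time

def ops_per_second(fn, reps=1_000_000):
    """Rate a unit of work against the wall clock rather than
    against any processor clock: reps / elapsed real time."""
    start = time.perf_counter()
    for _ in range(reps):
        fn()
    elapsed = time.perf_counter() - start
    return reps / elapsed

# One floating-point multiply as the unit of work:
print(f"{ops_per_second(lambda: 3.0 * 7.0):.3g} ops/sec")
```

The same measurement works whether or not the hardware underneath has a clock.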

Null Convention Logic (1)

spatula (26874) | more than 13 years ago | (#384420)

While clockless digital circuit design is not new, it's not as well analyzed as conventional synchronous circuit design. There are still many improvements to be made to this design technique. It will also take time before it will be generally accepted by the industry.

I saw a presentation about Null Convention Logic (NCL) just last week. It seems that there are companies out there already manufacturing chips using this particular asynchronous design technique. Overall, the advantages don't seem to be great enough to make a big impact on the computer industry, but with active research in this area we might see something within the next few years.

Re:Units (1)

SEWilco (27983) | more than 13 years ago | (#384422)

The retail standard is not FLOPS, it is FPS (frames per second) at various screen resolutions. :-)

No Login (1)

voidzero (85458) | more than 13 years ago | (#384423)

5IVAN.html?pagewanted=all

Re:How Is The Human Brain Organized? (1)

sgt101 (120604) | more than 13 years ago | (#384424)

Have a look at (randomly from google) en_eng/3rd_gen_eng.html Spike train networks... Been around for a bit, never understood the attraction (geddit!) personally. OK, here is the view from research mountain: neural networks (shock horror) are nothing special; in fact, if you have a look at the work on Support Vector Machines it will dawn on you that NNs are basically generalised n'NN algorithms trained by gradient descent.

Research themes for AI:

* representation
* understanding the representation
* reintegrating new things (learning)
* deciding how to act on what you know

So that would be no change there for 40 years then... but, I have to say, the spin offs are spectacular. To answer your comments one at a time:
* Neural nets are probably not capable of consciousness because beings with a digital matrix cannot conceivably operate in a linear fashion. What will time mean when you can live in any "when" that you, or anyone else, recorded? So no "self", no "stream of being".
* Subsystems - good research theme - look at sekaran1988-1.html for a summary of Chandrasekaran's seminal paper.

Hope that is of interest,


I love slashdot (1)

AxelBoldt (1490) | more than 13 years ago | (#384425)

Why I love slashdot.
The best part of slashdot is the hypocrisy. Slashdot has a definite "do as I say, not as I do" policy.

Example 1: Censorship
Slashdot claims to be anti-censorship. They make prominent figures in the anti-censorware movement authors. I'm talking about Michael Sims [] and Jamie [] . They claim to promote free speech. But do they really?
I'm not going to bore you with tales of the dreaded bitchslap [] .
Here's an article [] you might find interesting. It's about Michael's real position [] on censorware.
Also, here's a charming article [] .

Example 2: Auctions
Taco and Hemos find the idea of auctioning virtual property to be interesting. Here's a story by Hemos [] , and here's one by taco [] .
But what happens when someone tries to auction a slashdot account? Here's a snippet from an IRC log:
[22:25:58] [Questions] JustSomeGuy asks: How do you feel about the recent sale of user accounts on ebay?
[22:26:06] [CmdrTaco] should we fess up?
[22:26:11] [CmdrTaco] we fucked with the first guys karma.
[22:26:14] [CmdrTaco] it was funny as hell.
[22:26:28] [CmdrTaco] we wrote a script to give him random karma from 0.. number of seconds until ebay auction ends.
[22:26:35] [CmdrTaco] so he had 0 karma when the sale ended.
[22:26:41] [CmdrTaco] he updated his account to cry.
[22:26:44] [CmdrTaco] it was so funny.

What's this? Taco writing a script just to fuck with a user? Say it isn't so.
You can view the complete IRC log here [] .
Oddly enough, this never gets mentioned in any story on virtual property auctions.
Why is that?

Example 3: Community
Slashdot is a community oriented website. They win webbys for this. It's the community that helped Taco and Hemos to a big pile of VA Linux [] stock.
But they don't really give a fuck about the community.
Here's a quote from an email Taco sent to Shoeboy:

> Anyway, to go back to my original point, I think a fair
> number of readers are interested in who the trolls are
> and why they post what they do.
That may be, but I don't care. I post Slashdot stories that *I* want to read.

You can get the whole email thread here [] .
(Shoeboy kicks Taco's ass hardcore)

Want more? How about the theft of user accounts?
Famous slashdot poster Signal 11 grew tired of this site. So he gave away his account. Dear beloved free speech advocate Michael discovered this and used his authorial privileges to steal the account. No warning was given. No explanation either. The account was simply stolen and that was that.

These are all reasons I love this site. If I wanted a site that wasn't run by assholes, I'd read kuro5hin [] .

NOTE: this post is entirely factual. If you have any doubts about the veracity of these claims, feel free to contact Taco [mailto] .



It's not a dataflow machine (4)

Pseudonym (62607) | more than 13 years ago | (#384426)

Well, it's not exactly a dataflow machine, anyway.

The old E&S machines were dataflow architectures at the equivalent of the "machine code" level. Newer architectures are using similar ideas, but in a way that does not require details of the dataflow model leaking outside the chip.

Look at the Pentium 3, for example. It exploits dataflow ideas at the microcode level by prefetching several machine code instructions, splitting them into a larger set of "micro-instructions" and then scheduling them together. That's not really a dataflow architecture, but it does use ideas from it: the idea of deciding how to schedule the instructions at run-time.

The new clockless CPUs will exploit dataflow ideas by implementing a kind of dataflow machine between the functional units of the CPU itself. The CPU, remember, is like an interpreter for machine code. Since the "program" for that interpreter does not change, it can be implemented in a "plugboard" kind of way and people or programs producing machine code will never know the difference, apart from speed.
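The fire-when-all-inputs-arrive idea can be sketched in a few lines. A toy Python model (purely illustrative; real hardware schedules nothing this way):

```python
def run_dataflow(nodes, tokens):
    """Toy dataflow evaluation: a node fires as soon as all of its
    input tokens are present, in no predetermined order."""
    pending = dict(nodes)           # name -> (op, input names)
    while pending:
        ready = [n for n, (op, ins) in pending.items()
                 if all(i in tokens for i in ins)]
        if not ready:
            raise RuntimeError("deadlock: no node can fire")
        for name in ready:
            op, ins = pending.pop(name)
            tokens[name] = op(*(tokens[i] for i in ins))
    return tokens

# (a + b) * (a - b), expressed as firing rules rather than a sequence:
result = run_dataflow(
    {"sum":  (lambda x, y: x + y, ("a", "b")),
     "diff": (lambda x, y: x - y, ("a", "b")),
     "prod": (lambda x, y: x * y, ("sum", "diff"))},
    {"a": 5, "b": 3})
print(result["prod"])  # 16
```

Note that "sum" and "diff" become ready in the same step and could fire in either order; only "prod" has to wait, and nothing else imposes a sequence.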

Huh! (2)

julesh (229690) | more than 13 years ago | (#384429)

But if it doesn't have a clock, how do you overclock it?

Asyncronous neural nets... (1)

OpCode42 (253084) | more than 13 years ago | (#384431)

... with self-aware subsystems, not governed by a clock, 15% smaller, 20% more power efficient....

Yeah, but can I run Linux on it? ;-)


Re:How about the human brain? (1)

Chirs (87576) | more than 13 years ago | (#384432)

Actually, the human brain does have a clock signal of sorts in it. For interpretation of audio and such, there is a "gate" of a few hundredths of a second at the onset of a sound when the brain can figure out where it's coming from. After that onset, it's almost impossible to figure out the direction of a sound.

Also, how about people that can tell time without a watch? I knew someone who was accurate to the minute, day or night. There must have been some clock signal in his brain to govern that.

Re:FP (1)

telstar (236404) | more than 13 years ago | (#384433)

Yeah, I think your sig says it all.

Re:Asynchronous Logic (1)

scroy65 (135141) | more than 13 years ago | (#384435)

There's been a good deal of research on this under the topic of wave-pipelined systems. The biggest problems are in the area of design tools, e.g. accurate timing checks and debugging methods. On the flip side, in 1993 a 32-bit wave-pipelined multiplier built by a grad student at NC State ran at 200 MHz in 2.0 micron CMOS.

It may be difficult, but... (2)

Anonymous Coward | more than 13 years ago | (#384436)

there aren't many ways around it. There's no way to sustain a global clock over a whole chip at the sizes they're growing to today. The skew is killing us.

Either we have to have separately clocked parts in smaller domains or we have to go asynchronous. Both are insanely difficult, but the latter has the possibility of generating speeds unheard of. There are transistors capable of 250 GHz (not in CMOS Si technology, but anyway), and with some reduction in the feedback, a back-fed inverter could generate 50-100 GHz, locally. Imagine small parts of a chip operating at that speed and using level-triggered handshaking... difficult, but mindblowing. :-)

Another thing: we would get rid of the power consumption. CMOS consumes power proportional to the frequency even when it's not doing anything (at least the clocked parts). Asynchronous logic would not waste any charge in the on-off switches.... Some real power saving!
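The frequency-proportional consumption mentioned here is the usual first-order rule P ≈ α·C·V²·f. A back-of-envelope sketch (Python; the numbers are made up for illustration, not from the article):

```python
def dynamic_power(alpha, c_farads, vdd, freq_hz):
    """First-order CMOS dynamic power: P = alpha * C * V^2 * f.
    alpha is the switching activity -- near 1.0 for a clock net
    that toggles every cycle, much lower for idle data logic."""
    return alpha * c_farads * vdd**2 * freq_hz

# Illustrative numbers: a 1 nF clock network at 1.5 V and 1 GHz
clock_net = dynamic_power(1.0, 1e-9, 1.5, 1e9)
print(f"{clock_net:.2f} W burned just toggling the clock")  # 2.25 W

# The same net with clock gating / async idling (alpha ~ 0.1):
print(f"{dynamic_power(0.1, 1e-9, 1.5, 1e9):.3f} W")
```

Since the clock toggles whether or not useful work happens, killing the global clock attacks exactly the α that never goes to zero in a synchronous chip.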

The next step is adiabatic computation: after the logic has reached its result, the process is reversed with no energy or charge loss.

However, the quantum computers will not happen during my life time. If ever.

Re:Waste of time.. (1)

Tiroth (95112) | more than 13 years ago | (#384448)

And do you know WHY we are having to shrink die sizes? It's because, as clock periods get shorter, the clock signal can't propagate to all of the gates if they are physically too distant. In a clockless design, this hard limit on die size does not exist. It may not be important now, but in 6 years when we are using 24GHz computers on 8mm dies it is going to be a huge problem.

What will replace megahertz? (2)

Yebyen (59663) | more than 13 years ago | (#384450)

How about an equally meaningless number, like BogoMIPS?


nVidia pixel shaders (1)

MrMeanie (145643) | more than 13 years ago | (#384452)

nVidia pixel shaders work like this; see the information on their site wrt coding for the GeForce / GeForce2 with OpenGL. Judging by John Carmack's comments wrt GeForce3 pixel shaders, they haven't changed much since then.
They are quite usable, but then the "circuits" you build are not in excess of 25 stages long (most likely less than that).

Re:What will replace the megahertz rating? (1)

Wattsman (75726) | more than 13 years ago | (#384455)

Well, the moderators seem to agree with you.
What, then, will replace the megahertz rating? Flops?

Re:How about the human brain? (3)

Steeltoe (98226) | more than 13 years ago | (#384456)

The story is about asynchronous computing, not about clocks in general. Asynchronous computing is to synchronous computing as functional programming is to imperative programming. Sure you may have methods of synchronizing with external entities, but the internal processes are (mainly) asynchronous.

The brain is an excellent example of parallel asynchronous computing, since a neuron will only fire when its input threshold has been reached. However, many internal processes in the brain may in fact be more or less synchronous, due to the fact that maybe it's an evolutionary advantage :-) So the basic idea is that a neuron is asynchronous in principle, but groups of them may find it easier to communicate synchronously.
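The fire-at-threshold behaviour is easy to model as an event-driven unit. A toy Python sketch (purely illustrative; no claim about how real neurons work):

```python
class Neuron:
    """Toy integrate-and-fire unit: it produces output only when
    accumulated input crosses its threshold -- event-driven, with
    no global clock stepping it along."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.level = 0.0

    def receive(self, amount):
        """Deliver some input; returns True if the neuron fires."""
        self.level += amount
        if self.level >= self.threshold:
            self.level = 0.0   # reset after firing
            return True
        return False

n = Neuron(threshold=1.0)
print(n.receive(0.4))  # False: below threshold, nothing happens
print(n.receive(0.7))  # True: threshold crossed, the neuron fires
```

Inputs can arrive at any time and in any order; the unit reacts only when the data warrants it, which is the asynchronous principle in miniature.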

- Steeltoe

Mod the Parent Up! (1)

Tiroth (95112) | more than 13 years ago | (#384459)

I agree completely. I think the clock skew is the big (enormous?) incentive to go async, and I'm surprised that no one mentioned this until Comment 59.

Will be home soon (1)

duvel (173522) | more than 13 years ago | (#384460)

At least this thing will give you an excellent excuse while you're at work to say at 10AM: 'Hey I'm going home, my computer's telling me it's 5PM.'

This sounds like a dataflow machine (5)

Thagg (9904) | more than 13 years ago | (#384461)

From what little I could glean from the NY Times article, this sounds like a dataflow machine; that is, a machine in which the various units 'fire' when all of their inputs are present. The idea is that each functional unit of the machine could be running in parallel, asynchronously, without any of the complexity that EPIC, say, imposes.

Unfortunately for Sutherland, there's something called the PS300.

Back in the late 70's and early 80's, his company, Evans and Sutherland, ruled the world of computer graphics with their very slick Picture System machines. These were peripherals to PDP-11s and VAXes, and were wonderfully programmable machines. There was a fast interface between host memory and Picture System memory, letting you mess with the bits to your heart's content. We had a couple of them at NYIT's computer graphics lab, and did a lot of great animation with them.

E&S's next machine, though, was the PS300. This was a far more powerful machine, its first machine with a raster display. It was an advance in every way, except that it imposed a dataflow paradigm on programming the machine. You could only write programs by wiring up functional units. It was astonishingly difficult to write useful programs using this technology. Everybody I know that tried (and this was the early 80s, when people were used to having to work very hard to get anything on the screen at all) gave up in frustration and disgust.

ILM got the most out of the machine; but that was by imposing their will on E&S to provide them with a fast direct link to the PS300's internal frame buffer.

Basically, dataflow ideas killed the PS300, which destroyed the advantage that E&S had as the pioneer graphics company, and they have never recovered from it. While the idea is charming, and to a hardware engineer it makes a lot of sense, programming them takes you back to the plugboard era of the very first WW-II machines. Nobody wants to do that.


What's new here? (1)

ernop (111430) | more than 13 years ago | (#384462)

Hmm. I've understood that several computers of the '60s and '70s were asynchronous. I'm sure the PDP11/20 was, and some early-model PDP10 (KA10) machines were too, and most likely the PDP6 was also asynchronous - and certainly there were many other asynchronous designs from manufacturers other than DEC.

Re:Chip would still have a clock... (2)

levendis (67993) | more than 13 years ago | (#384463)

So this is similar to CDROM data and other serial data that is "self-timing"? Do you have any more in depth articles or whitepapers to back this up?


Re:I love slashdot (1)

grazzy (56382) | more than 13 years ago | (#384464)

wtf are you posting this here?

its so darn offtopic, take this somewhere else.

Re:Would this really work? (1)

Ella the Cat (133841) | more than 13 years ago | (#384465)

Synchronous design (using a clock) makes it a lot easier to structure your design. It makes it into a big state machine in effect.

There's nothing to stop you changing the clock period on a cycle by cycle basis - slow instruction, use a longer clock period. There used to be an AMD chip to do just that for bitslice machines. Even with a fixed clock period, only the critical path needs a full clock period, some bits of logic have made their minds up well before a clock tick, so the bottlenecks you mention apply here as well.

It all falls apart on the assumption that the clock ticks at various points in your circuit are all happening at the same time; speed of light is finite, so you can understand why that's not true in the real world. (Clock skew)

It also falls apart because you can't enforce clocking on the entire universe - a signal coming into your system can (in theory) hang it up if it comes at just the right time, but there's maths to show that the likelihood is so small that it doesn't matter in practice. (Metastability)
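The maths alluded to here is usually the standard synchronizer MTBF estimate, MTBF = e^(t/τ) / (T₀·f_clk·f_data). A hedged sketch (Python; the constants below are purely illustrative, not measured process values):

```python
import math

def metastability_mtbf(t_resolve, tau, t0, f_clk, f_data):
    """Standard synchronizer MTBF estimate:
    MTBF = exp(t_resolve / tau) / (t0 * f_clk * f_data).
    tau and t0 are process-dependent constants; t_resolve is the
    time allowed for the flip-flop to settle."""
    return math.exp(t_resolve / tau) / (t0 * f_clk * f_data)

# Illustrative constants only:
mtbf = metastability_mtbf(t_resolve=2e-9, tau=50e-12,
                          t0=1e-10, f_clk=100e6, f_data=1e6)
print(f"about {mtbf / 3.15e7:.3g} years between failures")
```

The exponential in t_resolve is the point: a little extra settling time buys an astronomically small failure rate, which is why metastability "doesn't matter in practice".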

That said, last time I looked into asynchronous design, not very deeply I admit, the cure looked worse than the disease. Which isn't to say it's a bad idea, maybe its time has come?

We don't need no steenkin' clocks! (1)

imadork (226897) | more than 13 years ago | (#384466)

It is estimated that 15% of a chip's circuitry is dedicated to distributing the clock signal and as much as 20% percent of the power is consumed by the clock.
Yeah, and I'll bet that traffic lights and intersections take up a fair amount of road space in any city, and if we get rid of them, think about how much more traffic we can handle!

Seriously, one of the reasons that clocked logic is used everywhere is that it's (relatively) portable across process tweaks. If Intel used asynchronous logic on the new Octium... err... Pentium VII, then every time it tweaked its process there is the potential for the critical logic path to change, and the design would have to be re-optimized, re-laid out, and Oops! There goes the schedule again. Not to mention the fact that Logic Synthesis for Asynchronous circuits is a pain. (All of my designs are clocked...)
IANASPE, (I'm not a Semiconductor Process Engineer), so I'm probably getting this explanation all wrong. But I am a rather lame Digital Designer, FWIW.

On the other hand, there are some games you can play in a fast clocked design. At 1GHz+ speeds, the clock skew across a die is so great in comparison to the period that you have very little time (if any!) left over for logic. I think companies like Intel and AMD must have a way to "schedule" and plan for clock skew, so they can make tighter designs that actually work... This technique kind of looks like Asynchronous Design if you look at it sideways.

old idea (2)

peter303 (12292) | more than 13 years ago | (#384482)

People have been worrying about this since the 1980s.
Speeds get faster; chip dies get larger;
far-off units get out of sync.

Mips, Flops, and lack of Clocks... (5)

stevew (4845) | more than 13 years ago | (#384483)

The problem with Mips:

Not all Mips are created equal. For example: is it fair and reasonable to compare a CISC Mips to a RISC Mips? The CISC may be doing something like a string move with one instruction while the RISC machine does it with a series of instructions in a loop. Obviously this is an apples-and-oranges comparison.

Okay - next you look at Flops - aren't Flops the same on every machine? Well, no, though that is probably less of an issue for comparing IEEE based implementations. The question comes up (and it has already been mentioned) that Flops don't compare useful work loads! The vast majority of computer work loads don't involve significant floating point operations. (Yes, you can find workloads where that is the case - but it isn't the majority situation.)

So it comes down to this: comparing computer "systems" is a tricky business. Even Mhz in the same architecture family doesn't work because you don't know how efficiently the machine is designed - the hardware might be capable of greater than one instruction per clock!

Finally - I don't believe the estimate of up to 15% for clock distribution. It's more like 1%-2%. (I do chip design, so at least I have an educated opinion on this!) The clocks ARE a significant part of the power issue though. CMOS burns power when signals move. The clock moves. Simple enough analysis there.

Asynch design methods have been around forever, but present a number of problems for traditional design tools that depend on the clock to do their work. Further, there are a lot of chip designers who throw up their hands if you just mention the words "asynchronous design" to them. Any push to this kind of design would be traumatic to say the least ;-)

Re:An end of judging the speed in MHZ WhooHoo! (1)

Tiroth (95112) | more than 13 years ago | (#384484)

Hmm, but I'd bet that my 550 MHz system with 128k of cache beats yours when it comes to rendering/2d graphics/etc. It all depends on what is important to you...since I rarely use Office apps I'd rather see raw MHz than larger caches. (Your memory bandwidth argument is generally being affected by the fact that whatever program you are running fits into your cache, but not the 512k one...something that doesn't hold for large programs or ones that work on large datasets)


mirko (198274) | more than 13 years ago | (#384485)

During the last few years, a group has been working on an asynchronous processor: AMULET [] .
This CPU uses the ARM [] core.
It is so power-efficient that it could run on nothing more than the induction power from its pins transmitting information.
The current status lists delivery of the AMULET3i.

Re:Design Logic (2)

Alien54 (180860) | more than 13 years ago | (#384486)

My initial reaction is that it would work easiest in things like embedded processing. I also wonder whether there would have to be some sort of evolution similar to what we have seen over the past few years with Intel, Motorola, etc.

An added thought: since, according to the article, a lot of the research is being done on the Sun side, this will have interesting implications for the Wintel crowd.

It seems that it would make its way into the market first via the UNIX crowd. This makes for interesting opportunities. The last two paragraphs of the article are interesting in this regard:

Mr. Sutherland, in fact, says a new magic is precisely what he has found. He draws an analogy to the first steel bridges, which were built like stone bridges, with arches. It took some time, he said, for designers to recognize the possibilities of the suspension bridge -- a form impossible to create with stone alone but which was perfectly suited to the properties of steel.

The same is true with asynchronous logic, he said. His research shows that it will be possible to double the switching speed of conventional clock-based circuits, he said, and he is confident that Sun in particular will soon begin to take advantage of that speed. "A 2X increase in speed makes a big difference," he said, "particularly if this is the only way to get that fast."


Different Kind of Clock (2)

nadador (3747) | more than 13 years ago | (#384487)

The clock that the OS uses to wake itself up to make scheduling decisions is completely separate from the clock signal that is distributed on a CPU chip. The clock signal on a chip is what permits data and control signals to advance from one stage of the pipeline to the next, and that's the clock signal that async logic gets rid of. The clock that the OS uses to wake itself is a hardware interrupt from a completely separate source.

Re:Units (1)

Colvin Burgess (146596) | more than 13 years ago | (#384488)

Yeah, but what if it is going so fast that space/time is warped within the environment of the processor? Appropriate calibration techniques would be required to accurately measure the speed the processor is calculating with reference to the speed the measuring device is operating at. "Simple, shunt the measuring device to a Flux Capacitor via a wheatstone bridge!", I hear you say. Well, in theory this would artificially induce the measuring equipment to increase at the same unknown rate.

Theory is great, however, Einstein questioned the theory that a straight line between two points is the shortest distance. He was correct, a straight line is not the shortest distance between two points - but it is damn close to a straight line. Thus the theory of using a Flux Capacitor may be flawed. We may never be able to measure the true speed of the device - it could be fooling us by taking as long as it wants to perform a calculation, then warp time to its fancy to come back from the future and return the correct result. Either way, it would be fast. :)

Re:Asynchronous Logic (1)

eXtro (258933) | more than 13 years ago | (#384489)

I'm an EE and I disagree. You're thinking of combinational logic, wiring together bunches of NAND and NOR gates as a heap of random logic. This leads to lots of fun with race conditions that introduce logic glitches.

Asynchronous logic uses a handshake signal to indicate completion of an operation. This is often done by adding an extra signal along with the result, the signal is asserted when the operation of the gate is completed.

There are other problems that have stopped asynchronous circuits from wide adoption. The handshake signalling adds overhead: more circuitry and more wiring, which means more silicon area used and higher wiring congestion.
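
The completion-signal idea can be modelled crudely in software. In this toy sketch (names and structure are purely illustrative, not from any real async design tool), each stage produces a result together with the time its "done" signal would be asserted, and the next stage starts only when that signal arrives - no global clock anywhere:

```python
# Toy model of self-timed pipeline stages: each stage emits a result
# plus a completion time, and the next stage waits on that completion
# instead of on a shared clock edge. Illustrative only.

def stage(func, delay):
    """Wrap a combinational function with a completion timestamp."""
    def run(value, ready_at):
        # The stage may start only once its input is valid (ready_at),
        # and asserts completion 'delay' time units later.
        return func(value), ready_at + delay
    return run

# Two stages with different propagation delays; no shared clock.
double = stage(lambda x: x * 2, delay=3)
inc    = stage(lambda x: x + 1, delay=1)

value, t = double(5, ready_at=0)   # completes at t = 3
value, t = inc(value, t)           # starts only when stage 1 is done
print(value, t)                    # -> 11 4
```

The per-stage `ready_at` bookkeeping is the software analogue of the extra handshake wiring the comment describes: it is exactly the overhead that clocked designs avoid.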

Re:Asynchronous Logic (1)

artg (24127) | more than 13 years ago | (#384490)

I agree that you need a clock to indicate when data becomes available, but it's increasingly difficult to design complex circuits with a global clock - the clock distribution itself becomes a major part of the silicon.

Asynchronous logic offers the opportunity to avoid this problem. Self-clocked might be a better description than Un-clocked - essentially, every data object has a related 'clock', or arrival indicator.

Think of a massively parallel pipelined processor (at the register, or even gate level) rather than some sort of unclocked anarchy. Or a very finely divided set of Communicating Sequential Processes, to use your software analogy.

Re:Units (1)

Anonymous Coward | more than 13 years ago | (#384491)

FLOPS only gives a measure of floating-point operations. Prolog systems used to be measured in kLIPS (thousands of Logical Inferences Per Second). That seems to be a much more relevant unit of measurement in this case too.

Has anyone actually measured the clock rate of CPUs (1)

Colvin Burgess (146596) | more than 13 years ago | (#384492)

CPU clock speeds, notably in x86 architectures, are increasing at astounding rates. I still have a 286 in the shed with 10MHz silk-screened on the main board. To my knowledge the bus on some of the newer CPUs runs at 100MHz, but it is claimed that there are internal clock multipliers, e.g. 4x, 6.5x, etc. Has anyone actually measured, or would anyone be able to measure, these internal clock rates? The benchmarks certainly don't demonstrate that an 800MHz unit as a whole performs 80 times faster than a 10MHz unit, and I question whether the user experience is 80x quicker anyway.

The question I'm really looking for an answer to is: could CPU manufacturers be fooling us?
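
One crude way to probe this empirically - at least for delivered throughput rather than the internal clock - is to time a fixed amount of work. This is only a sketch; the numbers depend heavily on the machine, the runtime, and interpreter overhead, so it measures effective speed, not the multiplier itself:

```python
import time

def ops_per_second(n=1_000_000):
    """Time a fixed loop of integer additions and report the rate.
    This measures delivered throughput, not the internal clock rate,
    so pipelining, caches, and interpreter overhead all show up in it."""
    t0 = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i
    elapsed = time.perf_counter() - t0
    return n / elapsed

rate = ops_per_second()
print(f"{rate:.0f} additions/second")
```

Comparing this rate across two machines gives a far more honest speed ratio than comparing the MHz printed on the box.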


The Truth (1)

Ryan Koppenhaver (322154) | more than 13 years ago | (#384493)

If you believe the parent, you're an idiot.

I kid you not, the trolls actually sit around for hours fabricating "evidence" to support their position. For a while I was inside the troll cabal. It disgusted me, though, so I left...and now they hound me.

They can't handle the fact that anyone would not want to be a part of their little circle jerk.

Re:This sounds like a dataflow machine (1)

Graham_Thomas (255229) | more than 13 years ago | (#384494)

While you're right about the difficulty in programming dataflow engines manually, what you aren't taking into consideration is that this does not have to be done manually anymore. It's merely a different paradigm of object/symbolic assembly. So unlike in the WW2 days (or the early 80s for that matter), problems like this are only problems for guys writing the assemblers and compilers.

Re:Different Kind of Clock (1)

gle (215268) | more than 13 years ago | (#384505)

Yes, but you sometimes have an active wait, where the CPU will just run through a series of NOPs, just to wait for a very short time (say, a few ns or µs).
BTW, how long is a NOP on a clockless computer?

Take off every .sig

Units (1)

Fr05t (69968) | more than 13 years ago | (#384511)

Have a poll on it
2. Trolls/Sec
3. Katz/Sec
4. HP
5. Porn/Sec

Re:So THAT'S what happened (1)

iainl (136759) | more than 13 years ago | (#384512)

It's good to see you can find the funny side and don't hold a grudge; while slightly juvenile of the /. crew, it did make me smirk when I read it.

Of course, on a serious note, I could completely sympathise if e.g. Verant took a similar stance on eBay trading of virtual property - it's understandable to slap down circumvention of the rules in place for allocating the stuff. Coming back to this example, allowing a goat troll to post with a +2 bonus simply because they bought an account would be irritating, to say the least.

Re:Units (not floating point operations) (3)

JWhitlock (201845) | more than 13 years ago | (#384513)

What kind of floating point operation? Addition will be faster than multiplication, which will be faster than division. Operations will no longer be tied to the slowest possible operation, so they may not even be even multiples of each other.

I think in such a system, other features (code optimization, use of 3D accelerators, etc) will be more important than the speed of an add. It will even take several years of experimentation to determine what optimizations to make (how many times is it better to add than multiply, how should loops be unrolled, etc).

I think many traditional measurements will become worse than useless - actively misleading, in fact. Since a lot of your repetitive math operations may be offloaded to your 3D accelerator, it is questionable whether floating-point operations per second would be a real indicator, even if you could decide how to measure it. I wouldn't want the manufacturer optimizing for that over other, useful things.

A better question is, how long does a NOP last? Won't this system optimize it out? How can you time a NOP without a clock?

Re:I love slashdot (1)

AxelBoldt (1490) | more than 13 years ago | (#384514)

"wtf are you posting this here? its so darn offtopic, take this somewhere else." Where? I think this is something the community should be aware of. I can't post it anywhere ontopic, as there will never be a story about what a hypocrite taco is. I've been on this site since before there were registered nicks, and I can't sit by while all this crap goes on behind closed doors. Cheers, ~Axel~


The killer with asynchronous logic may be testing (3)

hqm (49964) | more than 13 years ago | (#384515)

One problem with asynchronous systems is testing. If you have a chip where some of the units are slower than expected, you might get curious interactions and "race conditions" that are very hard to test for before you put the chip into production.

Also, designing asynchronous logic has been difficult - designing clocked and even pipelined systems is a breeze compared to dealing with asynchronous design. A lot of the structured methods that have been developed for conventional clocked circuits cannot be used, and so designers have a lot of trouble building complex systems.

Asynchronous CPUs exist (1)

riedquat (226343) | more than 13 years ago | (#384516)

Asynchronous CPUs exist. Have a look here [] . It's a commercial 32-bit system-on-chip with an Amulet asynchronous core. Even that article's a year old.

Clockless Computing (1)

herwin (169154) | more than 13 years ago | (#384517)

That's how brains seem to work. When synchronization is seen in the brain, it is suspected to reflect multiple regions attending to a single object. Note that synchronization in the brain is self-organized, rather than driven by a clock.

Re:Asynchronous Logic (3)

Salamander (33735) | more than 13 years ago | (#384518)

To use a software analogy, how easy would it be to debug a program where half of the code consisted of conditional branches ?

A little bit of a pain, but far from impossible. Anyone who works on software for a multithreaded, multiprocessor, or distributed environment solves asynchrony-related problems all the time. We do it by having locks instead of clocks; hardware folks can do, and on occasion have done, just the same. I'm sorry to hear that such basically simple problems are considered unsolvable by garden-variety EEs.
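
The "locks instead of clocks" point is just the standard producer/consumer handshake. A minimal Python sketch: the blocking queue's internal lock and condition variable play the role a clock edge would, in that the consumer proceeds exactly when data is actually ready, not on any timing assumption:

```python
import threading
import queue

# Producer/consumer with no timing assumptions: the consumer blocks
# until data actually arrives - the software analogue of waiting on
# an asynchronous completion signal rather than a clock.
q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put(i * i)      # "operation complete" is signalled by the put
    q.put(None)           # sentinel: no more data

def consumer():
    while True:
        item = q.get()    # blocks, clocklessly, until the producer is done
        if item is None:
            break
        results.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # -> [0, 1, 4, 9, 16]
```

The result is deterministic despite the two threads running at arbitrary relative speeds, which is precisely the property asynchronous hardware handshakes aim for.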

Re:What's the point? Read the article! (1)

funkman (13736) | more than 13 years ago | (#384519)

It says radio interference is less, but isn't the radio signal strength just spread over a larger spectrum? Instead of the whole chip broadcasting one strong signal (really 3 or 4, because of the buses), we'd actually have an unpredictable spread across multiple frequencies. Might that be more of a problem than a predictable frequency?

Re:Would this really work? (2)

jovlinger (55075) | more than 13 years ago | (#384520)

Just design it as a data-flow chip, with functional units propagating answers when they are available and halting on partial inputs (you can even envisage a system which allows out-of-order execution of ALU operations). The main difficulty is likely to be getting in-order commit to work out.

Apart from that, it is basically an exercise in bookkeeping - tag all values as belonging to a subinstruction, so that you are able to get the data dependencies right.

I could go on, but I think you get the idea. However, let me emphasize that the whole chip waiting on the slowest component is exactly what we AVOID by going asynchronous; that waiting is the very reason Intel needs to pipeline so deep to get the clock rate up. They need to split the pipeline into steps small enough that each step can be done in one clock. Async circuitry wouldn't have that problem.

another group (1)

Anonymous Coward | more than 13 years ago | (#384521)

the Asynchronous VLSI and Architecture Group [] is also active in this field

Re:So THAT'S what happened (1)

Ryan Koppenhaver (322154) | more than 13 years ago | (#384522)

You can have this one for $3.50

Re:What's the point? Read the article! (1)

stilwebm (129567) | more than 13 years ago | (#384523)

The radio interference comes from the clock's rising and falling edges occurring, say, 1,000,000 times per second (at 1MHz). Without a clock signal being driven throughout the entire circuit constantly, there are fewer transitions, for shorter periods of time. So not only is less of the circuit causing radio interference at any one time, but the amount of time it produces that interference is also reduced.

No Login (2)

voidzero (85458) | more than 13 years ago | (#384526)

Is here [] Sorry

The homepage for the group (5)

hammy (22980) | more than 13 years ago | (#384530)

Here's the URL for the asynchronous design group's homepage [] There's more info there.

What's the point? (2)

MSBob (307239) | more than 13 years ago | (#384535)

I'm not trying to troll here, but people have been trying to design asynchronous computers for decades now. For a while the British government sponsored some intensive research into asynchronous logic, and what did they get out of it? That's right, nothing. The problem with asynchronous circuits is that you are still only as fast as the slowest gate in your circuit. But the real issue here is obviously the race conditions that kill any non-trivial asynchronous chip. Debugging such a race monster is a task beyond the capability of a human brain.

With all the suffering and poverty in the world, we should really question whether some "scientists" deserve the money they get, or whether those same funds could be utilised elsewhere.

Would this really work? (1)

MajroMax (112652) | more than 13 years ago | (#384536)

Although I have limited knowledge of CPU design, I fear that this will fall apart in application. At the very least, the lack of a clock signal will mean that some instructions finish marginally faster than others; anything that's running 'behind' will have to wait for the next transistor-equivalent to be free. In more coherent terms, I fear that this chip will waste a good deal of time as, without the regulation of a clock, the CPU will hit bottlenecks in slower components.

Anyone with experience in chip design want to make an evaluation of the possibility?

Re:How about the human brain? (1)

ConsumedByTV (243497) | more than 13 years ago | (#384544)

I remember hearing about Jews in WW2 being tortured if they couldn't tell the guard when a minute was up. They had a problem: no clock. So what did they do? They counted the beats of their hearts, and with enough practice they were able to get in sync with the Nazis' test. They would be spared if they could tell when a minute was up (give or take a few seconds). I personally think that this approach does work, and that the clock doesn't always need to be in the processing area, e.g. the heart (sure, you're counting with your brain...). Perhaps I am wrong, but wasn't there a way to put the clock off-CPU?

Fight censors!

Re:Units (1)

popular (301484) | more than 13 years ago | (#384545)

Is that maximum or average FLOPS? I'm thinking of variable power techs like SpeedStep and PowerNow!, but especially Transmeta, which has the added variable of OS optimization.

WRT clocks, the strongest argument against them is their tendency to remind me that I'm late for work (again).


An end of judging the speed in MHZ WhooHoo! (1)

jellomizer (103300) | more than 13 years ago | (#384546)

I always hated giving the speed of a computer in MHz. MHz is an almost meaningless measure of speed, since on most systems 90% of the CPU cycles are generally wasted. The cache, RAM, hard disk speed, pipelining, and bandwidth (network and bus) are the real speed of the computer, since the computer will generally only go as fast as it can retrieve memory. If you haven't realized it yet, many chip makers are just boosting the MHz and cutting back on all the more expensive components that really improve speed. The only case where a really fast clock is needed is when you are doing a lot of complex math, such as 3D rendering or vector computing - and that is now mainly done on parallel computers, where MHz counts even less because you have hundreds or thousands of processors. Today's programs require more memory usage than processor usage, so we should judge speed on what counts, not on meaningless MHz, which alone doesn't count for much anyway. My 440MHz system with 2MB of cache can beat a 1GHz system with 512K of cache on most apps.

Fundamental mode (1)

LowneWulf (210110) | more than 13 years ago | (#384547)

Fundamental mode is a basic circuits concept, and is the fastest possible design for any specific set of functional components. The only problem is that race conditions are a pain in the butt to solve, but surely computers can design these things these days.

Re:Chip would still have a clock... (1)

ooze (307871) | more than 13 years ago | (#384548)

There was a longer article about this in the infamous German c't [] magazine; sorry, but that article is not online. The main idea is to let different (very small) sections of the chip work at whatever speed they want and to provide several join points. The problem is that every section has to provide a signal about its processing state, which makes even an extended NAND take more components. So the size of the new chips is hard to predict: the small units provide the synchronisation themselves, which makes them bigger, but also makes external synchronisation components obsolete.
The new, smaller parts of the chip can go faster, as the paths are shorter, but the join points have to be synchronised in a way that doesn't make the speed advantage within the sections vanish.


riedquat (226343) | more than 13 years ago | (#384549)

Nearly right; some of the Amulet processors are code-compatible with the ARM cores; the core itself is completely new. The ARM cores (ARM6-ARM10) themselves aren't asynchronous.

Re:Waste of time.. (1)

spannerboy (312310) | more than 13 years ago | (#384550)

Chances are, by the time this technology is ready for prime-time (if ever), chips will be utilizing vastly different technology than they are now.


"By the time this slightly different technology is ready, it will be irrelevant because we'll already be using vastly different technology"

Of course! why didn't I think of that?

It's time to look to new technologies: carbon nanotubules and buckyballs, quantum computing, etc.

Isn't it lucky for us that these different technologies won't take nearly so long to develop!

After all, designing logic circuits from scratch with entirely new types of materials will be much faster than improving what already exists. And besides, using new materials will mean that none of the same design considerations come up.
Or will it?

Understating the problem (1)

nerdbert (71656) | more than 13 years ago | (#384551)

I believe that he's rather understating the problem: in CPUs the clock is the source of major headaches. I suggest you look at any of the recent ISSCC digests to see the lengths to which we have to go to design clocks for these beasts. You'll see H-trees to manage power distribution effectively. You'll see 43 different clock domains with custom alignment circuitry available on test points to accurately align clocks (as the Intel P4 does). You'll see arrays of PLLs in clock subdomains to align clocks and minimize clock skew across the chip (as the IBM POWER3 does). To say that clock distribution is a major headache is an understatement, and he underestimates the amount of power that the clock tree consumes, since it can be more than half the chip power in some MPUs. The numbers he gave are appropriate for a normal ASIC, not a CPU.

All that said, async circuits have been around a long time and they have yet to prove viable. The overhead of the additional "computation completed" signal usually more than cancels out any area and speed savings you might get from getting rid of the clock. Besides, you still have to clock the output to be able to talk to any modern bus. I'll believe this when I see the details of the "new magic" in operation, but right now I put this in the same class as "cold fusion."

E&S Not Dead Yet (1)

Samedi1971 (194079) | more than 13 years ago | (#384552)

Evans and Sutherland may not be a household name, but they're still a world-class image generation company, at least in the real-time arena. E&S image generators are very common in flight simulation. They may not have a large market share, but from my experience that would mainly be due to potential customers ordering cheaper (and inferior) systems.

What will replace the megahertz rating? (1)

Wattsman (75726) | more than 13 years ago | (#384553)

Personally, I think benchmarks will. It might be something similar to those run by ZDNet. However, I'd probably separate them out.
I'd have a 'how fast does it compile kernel 2.x.0' test, a 'how fast does it render scene x' test, and a few others - enough to touch on what people consider important. Office applications would be done using the same sort of set-up ZDNet uses (unless somebody can think of a better one).
You'd have to use the same compiler/renderer/whatever across all the platforms if possible; otherwise the ratings would be really unfair.
With this set-up, you could get a processor that would be great for what you're doing. The processor that works great for compiling may not be the best choice for office applications.

OS design? (1)

cxreg (44671) | more than 13 years ago | (#384555)

I'm not an OS guru, but AFAIK you NEED the CPU clock to have an accurate timing loop in your OS. Especially with faster processors, the only reasonable way to figure out how long each "tick" is, is to divide the CPU clock. Are there other alternatives for a system like this?

Nothing new - Amulet, frex (5)

Stipe (35684) | more than 13 years ago | (#384556)

The Amulet [] project has been going for over 10 years (it's an asynchronous ARM-like core, IIRC). I remember seeing a circuit that did asynchronous addition (or was it multiplication?) in a lecture about 2 years ago.

Besides power, another advantage is speed: execution time is no longer fixed by the worst case of the most expensive instruction. (e.g. adding 0 and 1 can be done a lot quicker than adding (2^31)-1 and 1, because there is no long carry to propagate)
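
The 0+1 versus (2^31)-1 + 1 contrast comes down to the carry chain: a self-timed ripple adder can assert completion as soon as the last carry settles, so its latency is data-dependent. A rough model of how far the carry actually propagates (illustrative only, not any real adder design):

```python
def carry_chain_length(a, b, bits=32):
    """Length of the longest run of carry propagation in a ripple-carry
    addition of a and b - a crude proxy for how long a self-timed adder
    would take on this particular pair of operands."""
    carry, longest, run = 0, 0, 0
    for i in range(bits):
        x = (a >> i) & 1
        y = (b >> i) & 1
        new_carry = (x & y) | (carry & (x ^ y))  # standard full-adder carry-out
        run = run + 1 if new_carry else 0
        longest = max(longest, run)
        carry = new_carry
    return longest

print(carry_chain_length(0, 1))           # -> 0  (no carries at all)
print(carry_chain_length(2**31 - 1, 1))   # -> 31 (carry ripples through 31 bits)
```

A clocked adder must budget for the 31-bit ripple on every cycle; a self-timed one pays only for the chain each operand pair actually produces.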

Re:What's the point? Read the article! (2)

nysus (162232) | more than 13 years ago | (#384558)

The following is a direct quote from the article you did not read. Asynch circuits are already being used:

For example, Royal Philips Electronics has built a pager using asynchronous electronics, taking advantage of the fact that the circuits produce far less radio interference than do clock-driven circuits. This makes it possible to operate a radio receiver that is directly next to the electronic circuit, greatly increasing the unit's operating efficiency.

Philips has also actively pursued research into asynchronous logic. Two small start-ups, Asynchronous Digital Design in Pasadena, Calif., and Theseus Logic in Orlando, Fla., are developing asynchronous chips for low-end consumer markets and high-performance computing systems.

How about the human brain? (3)

Snard (61584) | more than 13 years ago | (#384560)

So far most of the comments here are along the lines of "this won't work, it's too hard to debug, etc." But it seems to me that the human brain is a pretty good example of asynchronous computing. The last time I checked, there wasn't any sort of high-frequency clock signal running down my spine.

nothing new (1)

El Cabri (13930) | more than 13 years ago | (#384564)

There is nothing new in studying asynchronous design; there is already some asynchrony in leading-edge circuit design. Note that asynchronous design also brings its own overhead in "self-synchronization" circuitry, such as handshakes. It allows for higher throughput, but is a pain to design and to debug.

Re:Units (not floating point operations) (3)

wowbagger (69688) | more than 13 years ago | (#384570)

A floating point operation is usually taken to mean a floating point multiply followed by a floating point addition, also known as a Multiply/Accumulate Cycle (MAC).

A MAC is a very important operation in digital signal processing. For example, to implement a digital lowpass filter (to remove tape hiss, for example), you define a finite impulse response filter (FIR filter) of some number of taps. You might need 256 taps to implement the needed low pass filter (this is a shot from the hip, the actual number of taps may be more or less). That means for every sample of audio (88.2kSamples/second for stereo audio) you need to do 256 MACs, or 22.6MFLOPS.

Re:I love slashdot (1)

atrowe (209484) | more than 13 years ago | (#384571)

From my encounters with Shoeboy, the AC pretty much hit the nail on the head. He's an ass.

Mips (1)

Anonymous Coward | more than 13 years ago | (#384574)

Years ago we used to talk about the speed of processors in MIPS (millions of instructions per second). I don't really see the problem with this measurement to this day. FLOPS aren't really that relevant for the majority of computer activities: with the exception of scientific applications, most things are done using integers.

Of course, this kind of measurement worked nicely on the ARM processor, which executed most instructions in a single clock cycle; I suspect it may be somewhat more difficult on other processors, which may take varying time periods to execute different instructions.

Re:What's the point? Read the article! (1)

josecanuc (91) | more than 13 years ago | (#384575)

If the radio signal strength remained the same but was spread over a larger spectrum, then the *interference* would definitely be less, because most receivers are tuned to a small bandwidth section of the radio spectrum.

You could also say that since the receiver could be placed closer to the electronic bits then the wires would be shorter and would act less like antennas, making the total radiated energy less than in "traditional" pagers.

Re:This sounds like a dataflow machine (2)

jreynold (56969) | more than 13 years ago | (#384577)

Actually it sounds like a verification NIGHTMARE. ASICs are hard enough to validate today with the few async pieces we have to put into them.

Async logic may look nice, but unless we get some major breakthroughs in verification tools, don't look for it anytime in the near future.

Re:Has anyone actually measured the clock rate ofC (1)

riedquat (226343) | more than 13 years ago | (#384578)

I know 'certain chip manufacturers' put in extra pipeline stages which increase the clock rate of the chip but actually degrade performance.

Most people use the clock rate as a measure of a chip's performance, so if you're designing a chip for end users it makes more marketing sense to make a chip with a higher clock rate than one with better performance.

Obligatory "This Is Not New" (1)

Orne (144925) | more than 13 years ago | (#384579)

For anyone who's gone through advanced chip design, you learn that there are advantages and disadvantages to designing chips in asynchronous modes. But then again, isn't that the case with all technologies?

The most common "clockless" model is designing the circuit as a Finite State Machine, where the circuit constantly checks "inputs" to determine when to move to the next step. This solves many timing issues: you send out a trigger pulse that activates a sub-circuit, and wait for a pulse on a return line to tell you when that piece is done computing. What you end up with is a complex system of strobes and acknowledgments, and lots of edge-sensitive circuitry (as opposed to the level-sensitive Hi and Lo that is the basis of CMOS).

Also, some types of designs run better as async circuits: FFT and division circuits, if I recall - you just trigger them to start and let the whole bundle cascade itself to completion. And bus circuitry in modern motherboards works in async mode already (there's this nifty thing called an IRQ...)

chip hippies == chippies (1)

OlympicSponsor (236309) | more than 13 years ago | (#384583)

We must free the CPU from the oppressive overlordship of the CPU Clock! Let the CPU work as it is wont to work, beholden to no one! Let Nature be our guide!

CPU 1: Hey, I think we have a calculation due in a few microseconds.
CPU 2: Dude, don't sweat it. We'll get by.
CPU 1: No, really, my mom said if we don't do some work she's gonna put a clock in here.
CPU 2: Dude, that'd suck. ... Hey, we'd be living in SINchrony. Get it? Heh heh heh.
CPU 1: Heh heh heh. I've got the munchies.
Non-meta-modded "Overrated" mods are killing Slashdot

Naughty Timothy (1)

voidzero (85458) | more than 13 years ago | (#384586)

Forgot the link explaining that you can replace www with partners, channel for NYT.

How Is The Human Brain Organized? (1)

cybrpnk (94636) | more than 13 years ago | (#384591)

I have long been fascinated by neural nets and their potential to develop true machine consciousness. Currently most neural nets are step-by-step simulations on clock-based CPUs, which somehow seems vulnerable to missing some key factor that makes the natural neural nets in the brain conscious. Most research into actual neural nets (not digital simulations) seems to concentrate on maximizing the number of connections and on what training algorithm to use. Can Slashdot readers suggest something beyond these three familiar themes in neural net research (digital simulation, maximizing connections, training algorithms) that gets into how neural nets might be organized to achieve true machine consciousness? If this is achieved in a manner based on the human brain, I imagine numerous neural net "subsystems", each wired differently internally and connected to the other subsystems like a patchwork quilt. What are these subsystems? How do their internal wiring schemes differ? How are they connected? How do these subsystems become self-aware?

Example asynchronous CPUs (4)

helge (67258) | more than 13 years ago | (#384593)

Asynchronous CPUs have existed for a while. They don't seem to have become very popular, though. Apparently, they don't give the power/speed advantage that you would expect at first glance. A quick search with Google gave this:

Asynchronous ARM core nears commercial debut (1998) []
ARM researches asynchronous CPU design (feb 1995) []
AMULET3: A High-Performance Self-Timed ARM Microprocessor (1998) []

Asynchronous Logic (1)

TM22721 (91757) | more than 13 years ago | (#384595)

Any EE knows the idiocy of designing complex logic without a clock. It is impossible to guarantee correct operation given latency variations due to slight temperature and voltage changes. To use a software analogy, how easy would it be to debug a program where half of the code consisted of conditional branches ?