
Clockless Computing

michael posted about 12 years ago | from the beat-of-a-different-drummer dept.

Hardware 342

ender81b writes "Scientific American is carrying a nice article on asynchronous chips. In general, the article advocates that eventually all computer systems will have to move to an asynchronous design. The article focuses on Sun's efforts but gives a nice overview of the general concept of asynchronous chip design." We had another story about this last year.




First? (-1, Troll)

Cmdr Taco posted (582492) | about 12 years ago | (#3903602)


Re:First? (-1, Offtopic)

klerk (593764) | about 12 years ago | (#3903652)

Being the mighty Klerck, I claim this post. Taco can snot himself!

Re:First? (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903744)

sorry, klerk, you are not the real Klerck

CLAIMED (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903670)

My clock is gone.

Cockless computing?! (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903607)

Why would you want to do that?

FP, by the way.

Re:Cockless computing?! (-1, Offtopic)

NorthDude (560769) | about 12 years ago | (#3903783)

Wasn't that a justified question? Why was it moderated as being a troll?

Pist Frost (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903608)

I own the CLiT!

HOW???? (2, Funny)

Anonymous Coward | about 12 years ago | (#3903609)

But how will we be able to tell time with our computers! Dear god no!

fp (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903612)

... bitch.

1 Million reward (5, Funny)

nick-less (307628) | about 12 years ago | (#3903613)

for the first guy who overclocks it ;-)

Re:1 Million reward (5, Funny)

sketchkid (555690) | about 12 years ago | (#3903621)

larry ellison? is that you? :)

Re:1 Million reward (3, Insightful)

npietraniec (519210) | about 12 years ago | (#3903660)

You actually could "overclock it" because such computers would have a maximum speed... Instead of spinning their wheels like today's computers do, they would only clock when they needed to. They'd be able to achieve quicker bursts because all that wheel spinning wouldn't melt the processor.

Spinning? (0)

BlackMesaResearchFac (593320) | about 12 years ago | (#3903780)

With all that talk of spinning you make it sound like there's a little gerbil in my CPU.

Re:1 Million reward (5, Informative)

ZeLonewolf (197271) | about 12 years ago | (#3903932)

You actually could "overclock it" because such computers would have a maximum speed... Instead of spinning their wheels like today's computers do, they would only clock when they needed to. They'd be able to achieve quicker bursts because all that wheel spinning wouldn't melt the processor.

That's one of the key benefits of clockless computing: an instruction runs through the processor as quickly as the electrons can propagate through the silicon. In other words, the processor is ready to accept the next instruction at the exact instant it's available. You just can't pump it any faster...


Electricity propagates through silicon faster when the temperature drops. Thus, the COOLER an asynchronous chip runs, the FASTER it gets! This opens up a lot of exciting doors... and will certainly ignite hordes of development in the CPU cooling industry if async chips ever get off the ground. For an async chip, overclocking = overcooling.

Re:1 Million reward (5, Funny)

Midnight Thunder (17205) | about 12 years ago | (#3903815)

Intel marketing wouldn't like clockless chips, as it would cause them massive headaches in the MHz FUD. For once, real-world performance comparisons would have to matter.

Re:1 Million reward (-1, Flamebait)

Anonymous Coward | about 12 years ago | (#3903916)

"has learnt" - learnt? are you from Kentucky?

Here's a hint. When words you speak always follow "also" in the dictionary, that's short for, "but some idiots might say"

Re:1 Million reward (3, Insightful)

Mike_K (138858) | about 12 years ago | (#3903946)

Simple. Crank up the voltage.

One huge advantage of asynchronous circuits is that you can turn the power down, and the chip simply slows down (up to a point, but you see the point). You turn power up (increase Vcc) and the chip runs faster. Same principles apply in overclocking your desktop chip, except here you don't need to crank voltage AND clock :)

Of course doing this could ruin your chip.


Re:1 Million reward (1)

cybermace5 (446439) | about 12 years ago | (#3903949)

Not impossible.

The speed of the chip will still be limited by certain factors, which can be changed by increasing voltage and cooling.

Communication interfaces will still need clocks for the time being, including peripheral busses. Unless, of course, a clockless computer has a redesigned peripheral base as well.

I see asynchronous processors making their first mainstream appearance in high-performance graphics cards. This is an application where specialized functions will benefit vastly, as well as the ability to pack in more processing power without heating up. The graphics processing will get done as fast as possible, and the chips will be designed to process in a massively parallel fashion.

BS (0)

Anonymous Coward | about 12 years ago | (#3903618)

That will never happen. Users are already too used to having a clock in the computer to tell them the time & to remind them of their appointments. :)

obligatory... (-1, Redundant)

Anonymous Coward | about 12 years ago | (#3903619)

wow! Imagine a Beowulf cluster of those!


Anonymous Coward | about 12 years ago | (#3903653)

The above post is not authorized and has not been certified by me, Anonymous Coward, as a 100% authentic beowulf post. The reader is hereby advised that the imaginization of a beowulf cluster of the items to which this post pertains, in whole, in part, or in any combination thereof, is not endorsed or sanctioned by me, Anonymous Coward.

That is all.

First REAL post (-1, Troll)

klerk (593764) | about 12 years ago | (#3903632)

This is the first REAL POST! All the other fags don't count! Acs can fuck off and die like the bastards they are!

Yeah and... (2, Funny)

xbrownx (459399) | about 12 years ago | (#3903634)

...maybe some day we'll actually get 64 bit processors for home use

Re:Yeah and... (0)

Anonymous Coward | about 12 years ago | (#3903692)

uh... why not just go pick up a PS2 Linux box? if nothing else, it's fun (and good experience?) to program for the 128bit processor core (in asm, of course. all mips compilers suck, sorry guys)

cheap too.

Re:Yeah and... (0)

Anonymous Coward | about 12 years ago | (#3903751)

You mean like next year?

The earliest Hammers will become a new line of Athlons. Athlons are generally for home use.

Just because Intel wants to control tight supply of IA-64 so it'll cost more doesn't mean we won't have 64-bit processors.

I h4x0r3d t3h G1bs0n. j00 4r3 l4m3. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903636)

This is true.

cockless?!? (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903639)

cockless computers, but what about all the porn?!?! damn MPAA + RIAA + Microsoft + Government!!

Pretty cool (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903646)

I've written some papers on this subject myself. Check 'em out here []

I don't understand (0)

Gabreal (592076) | about 12 years ago | (#3903648)

If you have a clockless computer, then doesn't that mean everything is running at the same speed? And if that is true, doesn't that mean good things for the computer world? If you think about it, that sort of means that everything is going at the same speed, which means no more lag in any of the programs we run. On the other hand, when something on the computer goes down, it may cause major side effects on the system's timing, sending the whole thing out of whack. Oh well, it does seem for the most part a good idea; every system will run a hell of a lot smoother and faster...

The Pure Fantasy Approach (1)

GodInHell (258915) | about 12 years ago | (#3903651)

If only it was truly external to a need for time.

"Why yes, doctor, we will write that program!"
What makes you say that?
"The printer just spit out the results."

Or: why can't I set a global to tell myself not to run that divide by zero after the crash has occurred, before I execute...?

Loopy does not even begin to describe my current mental state.

One problem with asynchronous logic (2, Interesting)

Mifflesticks (473216) | about 12 years ago | (#3903667)

For any large project (such as an MPU), using asynchronous logic instead of synchronous for the entire thing means it goes from being "merely" really-really-hard to damn-near-impossible.

Re:One problem with asynchronous logic (0)

Anonymous Coward | about 12 years ago | (#3903749)

It's not impossible at all; it simply takes a lot more transistors to produce the same logic. That means the design takes more time and, more importantly, the size of the chip becomes much larger, and that is the real problem.

jewless computing (-1)

Ralph JewHater Nader (450769) | about 12 years ago | (#3903671)

I want a chip free of zionist influence.

clockless computing? (4, Funny)

gyratedotorg (545872) | about 12 years ago | (#3903672)

they can't take away the clock. how will i know what time it is?

Re:clockless computing? (0, Flamebait)

tripletwentie (312452) | about 12 years ago | (#3903785)

they can't take away the clock. how will i know what time it is?
or when it's time to patch up XP

A similar subject, in an old thread (-1, Redundant)

Soul-Burn666 (574119) | about 12 years ago | (#3903673)

Soo... Lazy... (-1, Redundant)

abusimple (588765) | about 12 years ago | (#3903736)

Clockless Chips []

Re:A similar subject, in an old thread (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903765)


This poster is a KARMA WHORE!!

The link the poster provided is the EXACT same link provided by michael in his posting of this story.

Please kill off this thread quickly and promptly by tossing nothing but Redundant mods. Or use Overrated and get away with not having to be subjected to Metamod.

The Amiga Zorro Bus was Asynchronous (4, Interesting)

Anonymous Coward | about 12 years ago | (#3903680)

Yet another old idea revived. The Amiga's Zorro expansion bus was asynchronous and plug-n-play in the 80s (although the rest of the machine was clocked).

Re:The Amiga Zorro Bus was Asynchronous (5, Funny)

Joel Ironstone (161342) | about 12 years ago | (#3903809)

I thought the clock on the zorro was just masked.

Re:The Amiga Zorro Bus was Asynchronous (1)

ZaneMcAuley (266747) | about 12 years ago | (#3903827)

oooh the memories..

It was also:
AMP - Asymmetric Multi Processing...
Memory mapped devices...

Now its just...

I want...
Transputers :D Want more power, just plugin more CPUs...
Unified bus... Modular design (lego computing)

Re:The Amiga Zorro Bus was Asynchronous (2, Insightful)

mikehoskins (177074) | about 12 years ago | (#3903895)

Are you just talking about a passive backplane? If so, we're talking about something VERY different here. If it's a passive backplane, you're talking about power and connections, little else.

Re:The Amiga Zorro Bus was Asynchronous (2)

alphaseven (540122) | about 12 years ago | (#3903940)

Poor Amiga, that reminds me of a quote [] from John C. Dvorak:

The hapless Amiga, a machine a decade ahead of its time (there's a lesson in there somewhere)

Explanation, sorta (3, Interesting)

McCart42 (207315) | about 12 years ago | (#3903681)

To clear a few things up, just because a processor/motherboard is "clockless" does not mean it won't be able to tell time. They can still use the 60 Hz AC signal for ticks.

This is really cool. I was learning a little about asynchronous systems in my Logic Design and Computer Organization class last fall...they seemed pretty cool on a small scale, however they could get really difficult to work with when you're dealing with something as complex as a processor.

Re:Explanation, sorta [--OT?] (1)

abusimple (588765) | about 12 years ago | (#3903776)

Always sorta been curious but never bothered to take any initiative and actually look it up myself:

How precise is the frequency of AC from a typical power company? And how much does it change over time (if any)?

Re:Explanation, sorta [--OT?] (2, Interesting)

AstroJetson (21336) | about 12 years ago | (#3903925)

It's dead accurate. By law, the number of zero crossings on an AC line must be 10368000 every day. If there are too many in the morning, they have to make up for it that afternoon.

But I think an asynchronous computer would still use a RTC to keep track of calendar time. It has to keep time even when it's turned off.
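The 10,368,000 figure is just arithmetic: a 60 Hz supply crosses zero twice per cycle, for 86,400 seconds a day. A quick sanity check (illustrative only, not anyone's actual metering code):

```python
# Zero crossings per day for a 60 Hz mains supply:
# each cycle crosses zero twice (once rising, once falling).
FREQ_HZ = 60
CROSSINGS_PER_CYCLE = 2
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

crossings_per_day = FREQ_HZ * CROSSINGS_PER_CYCLE * SECONDS_PER_DAY
print(crossings_per_day)  # 10368000
```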

Oh, God, NO! (0)

Anonymous Coward | about 12 years ago | (#3903810)

Where the fuck did you get that one from?

1. The so called 60 Hz sine wave hasn't got a 60.0 Hz frequency, since deviations from that are acceptable, so your clock wouldn't be precise. It's not guaranteed to be a perfect sine wave either.

2. Despite being relatively simple, it's much more complicated to measure the AC supply's frequency than to use a crystal for the time-keeping clock.

And this doesn't even start addressing issues such as possible power outages.

In the future, please refrain from posting about matters you know nothing about.

Re:Oh, God, NO! (1, Informative)

barawn (25691) | about 12 years ago | (#3903859)

Most alarm clocks use the 60 Hz AC to generate time. Yes. It's not precise at any given point, but it's accurate over time. Even some digital clocks use this instead of crystals, which is why, if you plug them into outlets in other countries that use 220 V @ 50 Hz, they'll run slow. Not ALL digital clocks use this, but some (not sure how common it is, actually) - this I know from personal experience.

And if a power outage occurs, then yes, the computer may have a bit of trouble determining the time. It also may have a bit of trouble running at all. :)

As per the second comment, I don't know where you got that from: you wouldn't measure the frequency (which... would only need a frequency-to-voltage converter anyway: any basic EE text will have that) - you just need to use a bit of electronics to clean up the sine wave, turn it into a square wave, and then clock some logic with it. It's just as easy as using a crystal.
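The slow-running effect is easy to quantify: a mains-locked clock divides incoming cycles by its design frequency to get seconds, so on the wrong mains frequency it runs off by the ratio of the two. A hypothetical sketch (not any particular clock's circuit):

```python
# A mains-locked clock counts cycles and divides by its design
# frequency. Built for 60 Hz but fed 50 Hz, it accumulates only
# 50/60 of the expected ticks and falls behind real time.
DESIGN_HZ = 60   # frequency the clock was designed for
ACTUAL_HZ = 50   # frequency of the outlet it's plugged into

real_seconds = 24 * 60 * 60                      # one real day
cycles_counted = ACTUAL_HZ * real_seconds        # pulses actually received
displayed_seconds = cycles_counted / DESIGN_HZ   # what the clock shows

print(displayed_seconds / 3600)  # 20.0 -- four hours slow per day
```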

Damn it, not again (-1, Flamebait)

Anonymous Coward | about 12 years ago | (#3903953)

1. Just because a shitty alarm clock uses the AC signal to generate its clock pulse it doesn't mean this is good. It is definitely not good enough for a computer.

2. Computers should remember the time whether or not a power outage occurs, which is a completely trivial matter.

3. The circuit you suggest will not be cheaper to produce, but of much inferior quality.

4. Have you any idea of how much more precise a crystal generated clock is?

5. As per the second comment, I should know because I'm an electrical engineer.

Re:Explanation, sorta (5, Informative)

barawn (25691) | about 12 years ago | (#3903812)

er... doubt they'd use that, to be honest. :) It's extremely unlikely that the entire system would be clockless: you'd have to redesign almost every peripheral. In any case, there'd be a clock SOMEWHERE for them to use.

For me, this is kind of amusing: asynchronous logic is where you start out - it's more basic (shove signal in, get signal out). You then move to synchronous logic to eliminate glitches and possible race conditions (clocked flipflops, etc.). Apparently now you move BACK to asynchronous logic to gain performance. I can't disagree: working with synchronous systems, I've always been annoyed that certain combinations couldn't be used because they were too close to a clock edge, and it could miss the latch. If you can eliminate the glitches and race conditions, asynchronous logic would be faster. Of course, that's like saying "if software bugs didn't occur, you'd never need to look over code." Yes, true, valid, but not gonna happen.

Of course, they're not talking about a true full asynchronous design: just getting rid of external clock slaving. The external clock would still BE there, for the external architecture - it's just that the internal architecture wouldn't all move to the beat of a clock.

For instance, I'm pretty sure that no one is suggesting getting rid of synchronous memory design: it's just easier that way.

Re:Explanation, sorta (1)

MikeD83 (529104) | about 12 years ago | (#3903822)

Because all motherboards have a direct link to 120V 60 Hz AC power... Your computer knows what time it is because of the real-time clock, which is a component of the motherboard. It runs when the PC is off by getting its power from the small watch battery.

Huh? (2)

autopr0n (534291) | about 12 years ago | (#3903876)

Well, it's not like they are going to have no clock; obviously they'll have some crystals in there to keep other things synchronized. And lots of things, especially interactive stuff, need accurate timekeeping (not to mention most of the chips in the overall system will need a clock).

I am a Sovereign. (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903685)

I revoked my Social Security Trust...6 weeks and $17.
I revoked my Taxpayer Identification Number...4 weeks and $15.
I filed UCC-1, UCC-2, and UCC-3...7 weeks each and $13.
My "STRAW MAN" now belongs to me and am a Sovereign and 0wn my stock...6 months and $3,000 later.

Stomping every United States employee (police) and Britain employee (judge)...priceless.

You see, there are some things that us Sovereigns enjoy more than Asyncronous computing. bbye

But what will they use... (0)

Anonymous Coward | about 12 years ago | (#3903695)

as a dic...err performance measurement unit then?

Apple will certainly be pleased if it is still around, since it will obviously end the megahertz myth, which giga-hurts the Mac.

But imagine: they will need to find real, actual benchmarks to compare the performance. This does not sound reasonable...

Or should these chips be called cockless...

Think of a water clock (2, Informative)

imta11 (129979) | about 12 years ago | (#3903697)

I have read this article, and it is a very cool technology. Portions of the UltraSparc IIi [] have gates (capacitance) in the datapath that let the chip capture state of a computation and finish the result later. This lets the chip do other calculations during the time it would usually be in the wait state. Like compile the bytecode to optimize a loop after iterations have begun, but before it terminates.

Return of the 68000? (2, Interesting)

vanyel (28049) | about 12 years ago | (#3903700)

Wasn't the 68000 asynchronous?

Re:Return of the 68000? (2)

leucadiadude (68989) | about 12 years ago | (#3903740)


It was the original CPU chosen for the Amiga 1000 and several of the Atari machines. It was the first consumer 32 bit device (16/24 bit address bus, 32 bit for everything else).

It had a clock speed of 7.14 MHz, or 0.000714 GHz.

Re:Return of the 68000? (0)

MORTAR_COMBAT! (589963) | about 12 years ago | (#3903825)

isn't 7.14 MHz equals to 0.00714 GHz, not 0.000714 GHz?

1000 MHz == 1.0 GHz
100 MHz == 0.1 GHz
10 MHz == 0.01 GHz
7.14 MHz == 0.00714 GHz
1 MHz == 0.001 GHz

i dunno, maybe my brain is dead today and i can't do basic math...

Re:Return of the 68000? (1)

pmz (462998) | about 12 years ago | (#3903828)

It had a clock speed of 7.14 MHz, or 0.000714 GHz.

Actually, it's 0.00714 GHz.

Re:Return of the 68000? (1)

leucadiadude (68989) | about 12 years ago | (#3903901)

Ooops. Stuttered on the "0".....

Re:Return of the 68000? (3, Interesting)

Baki (72515) | about 12 years ago | (#3903799)

In a way, yes. If I remember well, its memory addressing and I/O bus system was asynchronous (not the clock of the CPU itself), meaning no 'wait states'. It would request a memory location and react as soon as the memory came up with the result. I forgot the details though.

Re:Return of the 68000? (3, Informative)

martinde (137088) | about 12 years ago | (#3903922)

> It would request a memory location and react as soon as the memory came up with the result.

Well, kind of. A bus cycle completed when someone signaled "data transfer acknowledge" (DTACK) - then the CPU would read the data off of the bus. Most systems understood where in the address space the memory request was going and how fast that device was, and had logic to count system clocks to trigger DTACK when the data "should be" ready. (In fact, most memory devices have no way of signaling when a read has completed - they just guarantee it within a certain amount of time.)

On the other hand, if you didn't hit DTACK in time, a bus error was generated and an exception routine triggered. Ahhh, the good old days ;-)

Re:Return of the 68000? (4, Funny)

isorox (205688) | about 12 years ago | (#3903800)

Wasn't the 68000 asynchronous?

No, it was so slow it just seemed that way.

Browsing at +1... (-1, Offtopic)

talks_to_birds (2488) | about 12 years ago | (#3903701)

( Read More... | 1 of 19 comments )

Only one of nineteen posts makes the cut.

The usual /. content: AC diarrhea, a la carte...


Mirror? (-1, Troll)

sbeitzel (33479) | about 12 years ago | (#3903710)

Tried to RTFA, but the site's Slashdotted.

Intel will be pissed (0, Redundant)

JimE Griff (534058) | about 12 years ago | (#3903712)

How will they market stuff now? They might have to think of real architecture solutions! Woe is them.

Re:Intel will be pissed (5, Funny)

isorox (205688) | about 12 years ago | (#3903841)

  1. ClocklessCPU
  2. ClocklessCPU 2
  3. Super ClocklessCPU 2
  4. Super ClocklessCPU Turbo
  5. Super ClocklessCPU Turbo 2
  6. Super ClocklessCPU Turbo !!!
  7. Super Duper ClocklessCPU Turbo MAX
  8. Super Duper ClocklessCPU Turbo MAX 2
  9. etc. etc.
Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition. Comment aborted.

The next step (1)

goofy183 (451746) | about 12 years ago | (#3903733)

Looking at the progression of programming languages, it makes sense that the rest of the computer would follow. I don't know as much about older languages as I really need to make a strong argument, but it seems that they are for the most part procedural, and very few have any means for events or being driven by input or other actions. New incarnations of languages such as Java, C#, and even C++ (from my limited experience) are completely event driven or moving in that direction.

From a programmer's standpoint, the code is generally easier to write, and there is less of it, with an event-driven system. The other bonus is you aren't wasting cycles by waiting in a programmer-created loop (I know there is a loop in there somewhere). With an asynchronous chip (and hopefully a whole PC) one could design a truly event-driven system. It would use very little in the way of resources when nothing was going on, instructions would only take as long as they need instead of having to wait for the clock in many cases, and the logical design seems like it would be much simpler.

I hope asynchronous computing technologies become more available. The biggest obvious bonus is reduced energy consumption & heat output, but I think an entirely asynchronous computer would be a marvelous piece of equipment and extremely powerful for its complexity.

Re:What about games running too fast? (0)

Anonymous Coward | about 12 years ago | (#3903899)

That sounds nice, except that it would mean that games would run unplayably fast, since there is no clock to regulate the game speed. How would you write a game (or even a video/audio player) so that it would only run or play at a given speed? I wouldn't want to watch a movie at 20x normal speed just because the hardware is capable of it (though running Final Fantasy 8 for the PSX at 5 times normal would be nice).

combine clocked/-less sections on same chip? (3, Insightful)

The Fun Guy (21791) | about 12 years ago | (#3903738)

The article talks about an advantage of clockless chips being the fact that you can do away with all of the overhead in sending out the clock signal to the various parts of the chip. It also discusses what kinds of data processing activities are more suited for clocked vs. clockless chips. To get a best-of-both-worlds chip design, what about farming out various responsibilities on the chip to clockless sub-sections?

The analogy I have in mind is when I drop my laundry off at the dry cleaners. I am on a tight schedule, and I have a lot of things to do in a certain sequence, while the dry cleaners collects laundry and does it at various rates during the course of the day. This particular laundry of mine can be done at any point over the next 4 days, and held afterwards, just so long as I have the finished product exactly when I need it, Thursday at 4:15 p.m. Different people assign different limits on the time-sensitivity of their laundry. The clocked sections can drop off their data for processing, and pick it up when they need it; what happens in between is up to the clockless subchip, which does more-or-less FIFO, but can be flexible based on the time-sensitivity of the task.

Scientific American (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903739)

(Un)Scientific (Un)American Magazine is a liberal, socialist, communist piece of shit rag.

Re:Scientific American (-1, Flamebait)

Anonymous Coward | about 12 years ago | (#3903774)

Unscientific == Religious == Right Wing == American. Wake up you luser.

Re:Scientific American (-1, Offtopic)

CreamOfWheat (593775) | about 12 years ago | (#3903843)

Religion == Virtue == Republican Party == American == Truth == Science

Re:Scientific American (-1, Offtopic)

Anonymous Coward | about 12 years ago | (#3903921)

there are many flaws with your logic.

Religion == Virtue == Republican Party == American == Truth == Science

the only one which passes is "Truth == Science", and that one is using a very, very large fudge factor on the equality check.

Religion == Virtue. Hrm. how about the fact that you just said that "Militant Islam Extremism == Virtue". Maybe that's what you had in mind.

Virtue == Republican Party. Uh... yeah. Way to go, W. and Co., enough investment scandals to start a new white collar prison resort.

Republican Party == American. Okay, maybe that passes through the same fudge factor allowance as "Truth == Science", but only maybe. But that makes the fudge factor as +/- 50, on a scale of 0 to 100, with American being 100, and Republican Party being 50.

American == Truth. If we "Americans" knew half of what our government does in our name outside (and inside) our borders... but we don't.

Funny title (0, Offtopic)

nkyad (589067) | about 12 years ago | (#3903745)

My eyes just crossed through the words without really reading. Then my brain started puzzling itself about why were people taking their macho feelings about their hardware so far as to have a company explicitly launching a "Cockless" computer.

Uh, ok (-1, Offtopic)

BlackMesaResearchFac (593320) | about 12 years ago | (#3903813)

How that comment managed to get by w/o getting modded down is beyond me.

Re:Uh, ok (-1, Offtopic)

BlackMesaResearchFac (593320) | about 12 years ago | (#3903870)

And his lame post about misreading the title isn't offtopic? Ooook.

Re:Funny title (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903856)


Thisisahorribleidea... (5, Funny)

zulux (112259) | about 12 years ago | (#3903764)

withoutaclocksignal,howcanyoutellwhenoneinstructio nstopsandanotherbegins?


Re:Thisisahorribleidea... (0, Offtopic)

sharkey (16670) | about 12 years ago | (#3903865)

withoutaclocksignal,howcanyoutellwhenoneinstructio nstopsandanotherbegins?

When the Slashcode Lameness Injector inserts a space for you.

Re:xxxxx Thisxxxx isxxxxxx horrible (5, Informative)

A nonymous Coward (7548) | about 12 years ago | (#3903902)

withoutxxxx axxxxxxxxxx clockxxxxxx signalxxxxx ,xxxxxxxxxx howxxxxxxxx canxxxxxxxx youxxxxxxxx tellxxxxxxx whenxxxxxxx onexxxxxxxx instruction stopsxxxxxx andxxxxxxxx anotherxxxx beginsxxxxx ?xxxxxxxxxx

Because rephrasing your question as above is what synchronous looks like; every word has to be padded to the longest word's length. Asynchronous is like normal written language; words end when they end, not when some 5-char clock says so. Another crude analogy is sync vs async serial comm, except using Huffman-encoded chars, so async can use variable-length chars, but sync has to pad the short ones out to the length of the longest.

I tried underline instead of x but the stupid lameness filter objected.
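The padding analogy is easy to make concrete: compare the total length of a message when every word is padded to the longest word's length (synchronous framing) versus sent as-is with a one-character separator (asynchronous framing). A rough sketch:

```python
# Synchronous framing: every word padded to the longest word's length.
# Asynchronous framing: words sent at natural length, plus a separator.
words = "without a clock signal how can you tell when one instruction stops".split()

longest = max(len(w) for w in words)
sync_len = len(words) * longest                      # fixed-width slots
async_len = sum(len(w) for w in words) + len(words)  # each word + 1 separator

print(sync_len, async_len)  # 132 67 -- async needs about half the symbols
```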

Haven't we seen a similar story before? (-1, Troll)

Anonymous Coward | about 12 years ago | (#3903766)

Here [] . Seems like that system has to be running on some sort of clockless architecture for sure.

obligatory request for google cache link (0)

Anonymous Coward | about 12 years ago | (#3903777)

since the article is completely slashdotted... can someone who can figure out the google cache link post a link for the lazy ones among us?


Small scale, and then larger (3, Interesting)

McCart42 (207315) | about 12 years ago | (#3903787)

After reading the article, I have to wonder why asynchronous processors (or smaller logic devices, such as ALUs) haven't been considered before. The ideas have certainly been around for a while--and in fact, asynchronous logic is intrinsically simpler than synchronous logic. The only conclusion I can reach is that while asynchronous designs may be "simpler" in theory, in that they don't include a clock pulse, they are much more difficult to work with in practice. Here's an example for those of you who have worked with logic design: try creating the logic for a simple vending machine that dispenses the product whenever a combination of coins (triggered by 3 switches: quarter, dime, and nickel) adds up to $0.50. Which would you prefer to use--synchronous or asynchronous logic? I know when I did this example I got myself stuck by using asynchronous logic, because while it meant fewer memory states (all states above $0.50 were treated the same), it also meant lots of added complexity, which I didn't need for the problem at hand.

I foresee lots of bugs, but if they can pull this off, more power to them.
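The synchronous version of that vending-machine exercise is straightforward to sketch: the state is just the total deposited so far, saturated at 50 cents, and each "clock tick" consumes one coin event. A toy illustration (not the poster's actual coursework):

```python
# Synchronous coin-counting FSM: state = cents deposited, capped at 50
# so all totals above $0.50 collapse into one "dispense" state.
COIN_VALUES = {"nickel": 5, "dime": 10, "quarter": 25}

def vend(coins):
    state = 0
    for coin in coins:                              # one coin per "tick"
        state = min(50, state + COIN_VALUES[coin])  # saturate above $0.50
        if state >= 50:
            return True                             # dispense the product
    return False

print(vend(["quarter", "quarter"]))      # True
print(vend(["dime", "dime", "nickel"]))  # False (only 25 cents in)
```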

Re:Small scale, and then larger (2, Informative)

el bastardo (12041) | about 12 years ago | (#3903905)

It's also tough to test and verify. I do ASIC timing analysis for a living, and most test methodologies (including IEEE JTAG boundary scan) rely on some sort of test clocks to pre-load test patterns into the chip before throwing on the system clock. I'm sure you could do this with global async. control signals, but it would be harder.

I also haven't seen or heard of any large-scale software tools for doing this sort of analysis (as opposed to classic synchronous design, where one can pick from at least half a dozen static timing analyzers on the market today). This is probably at least as big a gate as anything else.

But... (3, Insightful)

ZaneMcAuley (266747) | about 12 years ago | (#3903791)

... won't the bus and storage devices be a bottleneck still?

Bring on the solid state storage.

Asynchronous Slashdotting (0)

Anonymous Coward | about 12 years ago | (#3903797)

Perhaps they need asynchronous web servers -
Or maybe asynchronous users...

Didn't Intel already develop one? (1)

Yold (473518) | about 12 years ago | (#3903816)

If memory serves... isn't Intel already utilizing asynchronous technology in their StrongARM [] line of processors? I thought they developed an asynchronous processor back in the early Pentium days, but the costs of mass-producing it were pretty steep.

Tools (3, Insightful)

loli_poluzi (593774) | about 12 years ago | (#3903844)

Kevin Nermoyle (Sun VP) advocated asynch at the 2001 uProcessor Forum. The biggest and most daunting objection I heard in response was that tool support would be a killer: there is no tool support for asynch design at the complexity level needed to do a processor. You're left with a bunch of Dr. Crays using the length of their forearm to resolve race conditions with wiring distance. Since a large portion of the industry would have to make the leap before the tool guys would invest in development, this kills any realistic possibility of an overnight asynch revolution. Small niche applications will have to get the ball rolling. Even then, designers would need to get a lot smarter to think asynch. Think of how many chip protocols rely on a clock. How do you do even simple flow control in a queue, for example? Pipelining goes to pot--it's a whole different world. My two cents. Loli
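On the queue flow-control question: the usual asynchronous answer is a request/acknowledge handshake instead of a shared clock. Here's a software sketch (my own analogy, not a real circuit) of a 4-phase handshake for a single queue stage:

```python
# Software model of a 4-phase request/acknowledge handshake, the usual
# clockless substitute for clocked flow control. Illustrative only --
# in hardware these would be wires, not Python attributes.
class HandshakeStage:
    def __init__(self):
        self.data = None
        self.req = False   # sender raises req when data is valid
        self.ack = False   # receiver raises ack once data is consumed

    def send(self, value):
        # Sender must wait until the previous transfer fully completed.
        assert not self.req and not self.ack, "previous transfer incomplete"
        self.data = value
        self.req = True            # phase 1: assert request

    def receive(self):
        assert self.req, "no data pending"
        value = self.data
        self.ack = True            # phase 2: assert acknowledge
        self.req = False           # phase 3: sender withdraws request
        self.ack = False           # phase 4: receiver withdraws acknowledge
        return value               # both wires back to zero: ready again
```

The point is that ordering comes from the protocol itself--no stage moves until its neighbor says so--which is exactly why so many clock-centric habits (and tools) don't carry over.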

Why? (2)

nuggz (69912) | about 12 years ago | (#3903846)

Why not have several smaller sections passing messages to each other? They could be clocked, unclocked, or just picking their noses.

Right now a network is a bunch of arbitrary-speed systems just passing messages; can't we scale that down to the computer level? It wouldn't even involve anything overly revolutionary.
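A toy model of this idea (my illustration): independent blocks, each running at its own pace, exchanging messages through bounded buffers--essentially what the hardware world calls "globally asynchronous, locally synchronous" (GALS) design.

```python
# Independent blocks coupled only by buffered channels, networking-style.
# The bounded queue provides the flow control a shared clock would.
from queue import Queue

def producer(out_channel, items):
    for item in items:
        out_channel.put(item)      # runs at its own rate; buffer decouples it

def consumer(in_channel, count):
    # Blocks until data arrives -- no global clock needed to stay in step.
    return [in_channel.get() for _ in range(count)]

channel = Queue(maxsize=4)         # bounded buffer between the two "domains"
producer(channel, [1, 2, 3])
consumer(channel, 3)               # -> [1, 2, 3]
```

Nothing revolutionary, as the poster says: it's message passing with backpressure, scaled down from the LAN to the chip.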

Re: It's been proposed. (1)

kcm (138443) | about 12 years ago | (#3903945)

Try this paper []. There are others [], of course; this is only part of the architecture.

No MHz!?!? The world will end. (2)

Dutchmaan (442553) | about 12 years ago | (#3903851)

...but what scale will I use to justify how cool I'm trying to be!?!? How will people be able to judge just how much more successful they are than their fellow human beings?

I guess I'll just go back to lifting weights, overcompensating cars, and the ruler. *sigh* I hate analog.

The problem with asynchronous logic (0)

Anonymous Coward | about 12 years ago | (#3903877)

What I see wrong with asynchronous logic and the progression to faster technology is that it requires timing wires to measure the time it takes for a signal to propagate. That's a problem because the assumption is then that there should only be one signal on a wire at a time, which means the fastest speed between two circuits depends on the distance between them. That will severely limit the speed scalability of asynchronous circuitry! I'd like to see a high-speed CPU design take an example from networking--have more than one bit on a line at a time. For example, suppose you have a high-speed network (e.g., 100 Mbps or something similar). Along several meters of network wire there are multiple bits of data. I'm too lazy to calculate actual numbers, but several bits at least, each spread out by some distance along the wire--remember, it takes time for electrical signals to propagate. I believe whoever develops an architecture allowing multiple bits on a CPU bus (like a LAN network) will win the race for speed.
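A back-of-the-envelope check of the bits-in-flight idea (my own numbers, assuming signal propagation in copper at roughly two-thirds the speed of light):

```python
# How many bits are "in flight" on a wire at once?
SIGNAL_SPEED = 2.0e8        # m/s, approximate propagation speed in a cable
BIT_RATE = 100e6            # 100 Mbps, as in the example above

bit_length = SIGNAL_SPEED / BIT_RATE        # meters occupied by one bit
# -> 2.0 m per bit, so over a few meters only a few bits fit at 100 Mbps

wire_length = 5.0                           # several meters of network wire
bits_in_flight = wire_length / bit_length   # -> 2.5 bits
```

At 100 Mbps each bit stretches about 2 m, so a few-meter cable holds only a couple of bits; the idea gets interesting at gigabit-and-up rates, where a bit shrinks to centimeters.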

Heard about this stuff in class (2, Informative)

Montag2k (459573) | about 12 years ago | (#3903885)

I have a professor here who swears by this asynchronous stuff. He told us that Intel actually developed an asynchronous version of the Pentium (I) processor. It worked just fine, and used less power than a normal clocked processor. Unfortunately, all of the processes and designs for everything else they were doing had to be redesigned for it, and it would have ended up costing a bundle in retraining and redesign in order to mass market the chip.

It seems to me that clockless chips like these would work very well with MIPS-style processors--where you have lots of little instructions. However, you can't take advantage of the extreme pipelining features that chips like the Pentium 4 use when you don't have a clocked design. It would take a lot of research and a lot of re-education to get the design engineers to start thinking asynchronously instead of clocked, but my professor seems to think that eventually there will be no other way to speed things up.

It's also like you'd be trading one problem for a host of others. I remember doing tests on 1 GHz clock chips, and those things had to be absolutely PERFECT in order to work correctly on the motherboard. They ate up a lot of power too. However, an asynchronous design would have its own traps. You can design a state machine for it and then minimize the states, but glitches will do a lot more harm on a chip that is running asynchronously. Plus you have to take into account that chips run at different speeds at different temperatures. I think we have a long way to go in the quality of individual electronic components before we can actually implement a modern processor that is asynchronous.

By the way, that Professor's name is John McDonald, and he's here at Rensselaer Polytechnic Institute.


Armada (2, Funny)

Shadow Wrought (586631) | about 12 years ago | (#3903897)

Hopefully Sun's "ships" and "flotillas" won't go the way of the Spanish Armada. ;-) Will this be the new way to measure? "Sure, this one has 10k ships, but they're only frigates. Even though this one only has 5k ships, they're all ships of the line."

Intel, AMD, etc and marketing (5, Insightful)

Alien54 (180860) | about 12 years ago | (#3903904)

So ...

if we have clockless computers for the desktop, HOW will Intel and AMD market them?

After all, the quick-and-dirty rating they have used for decades is clock speed. Throw that away and what do you have?

I can see the panic in their faces now...

Would like to see some real-world results (2)

acordes (69618) | about 12 years ago | (#3903911)

It would be interesting to see some real-world speed results comparing an asynchronous and synchronous circuit with identical functionality, fab process, transistor size, transistor switching speed, etc.

What's this look like to the programmer? (2)

mcc (14761) | about 12 years ago | (#3903929)

I am just curious. Would the way an asynchronous chip worked dictate anything about the instruction set of the chip? Would it be possible to use today's instruction sets in an asyn chip? Would you have to come up with something different? Would someone writing an asyn compiler have any special issues or optimisation techniques they would have to be aware of that would be inherent to the concept of the asyn chip itself?

Are there any "features" related to the asynchronicity of the chip that it would be possible to add to the assembly language of an asyn chip? Because individual sectors of the chip can function independently and don't have to synchronize, can you get a multiprocessing-within-a-single-chip effect? I.e., can you create a single asyn chip split up into separate sectors, each of which functions as if it were an autonomous processor? Can you have one chip concurrently execute separate threads?

If the answer to this last question is "yes", do you have to do this by organizing the chip such that the different sectors are basically separate chips on the same cast, or can you just have it so that the exact borders of the chip area working on a certain thread at a certain moment are reconfigured dynamically? Would it be possible someday to create a microchip whose internal execution model is somewhat like that of Cilk []?

How does asynchronous design fit in with atomic-execution technologies like VLIW and EPIC?

Low-Power Async Procs (2, Interesting)

Dielectric (266217) | about 12 years ago | (#3903936)

A while back I saw a whitepaper on an asynchronous design, but it was being done for low-power applications. Basically, you had two lines for each bit. Condition 00 wasn't allowed and could be used to detect faults, 10 was one, 01 was zero, and 11 was idle. Nothing would happen until one of the lines dropped, so there was no clock, but the CPU still knew when it was time to do something. It was a fully static design where no power was being used unless there was some user interaction. You could run it off a few nanoamps, so a piece of citrus fruit would run it until the fruit rotted. Simple chemistry.
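The dual-rail encoding described above is easy to sketch in software (my own illustration of the scheme as the poster remembers it, with 11 as idle and 00 as a fault; the helper name is hypothetical):

```python
# Dual-rail bit decoder for the scheme described above:
# (1, 0) -> logical 1, (0, 1) -> logical 0,
# (1, 1) -> idle (no event), (0, 0) -> illegal, flags a fault.
def decode_dual_rail(hi, lo):
    if (hi, lo) == (1, 0):
        return 1
    if (hi, lo) == (0, 1):
        return 0
    if (hi, lo) == (1, 1):
        return None          # idle: nothing happens until a line drops
    raise ValueError("fault: 00 is not an allowed dual-rail state")
```

The nice property is visible even in the sketch: data and "data is valid" travel on the same wire pair, so no clock is needed to tell the receiver when to look.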

I think this was from Seiko-Epson. I might have the states screwed up, but that's the idea.

(-1, Troll)

Anonymous Coward | about 12 years ago | (#3903941)

It's better than sex!