
Building a 32-Bit, One-Instruction Computer

timothy posted more than 4 years ago | from the some-things-weren't-meant-for-post-its dept.

Programming 269

Hugh Pickens writes "The advantages of RISC are well known — simplifying the CPU core by reducing the complexity of the instruction set allows faster speeds, more registers, and pipelining to provide the appearance of single-cycle execution. Al Williams writes in Dr. Dobb's about taking RISC to its logical conclusion by designing a functional computer called One-Der with only a single simple instruction — a 32-bit Transfer Triggered Architecture (TTA) CPU that operates at roughly 10 MIPS. 'When I tell this story in person, people are usually squirming with the inevitable question: What's the one instruction?' writes Williams. 'It turns out there's several ways to construct a single instruction CPU, but the method I had stumbled on does everything via a move instruction (hence the name, "Transfer Triggered Architecture").' The CPU is implemented on a Field Programmable Gate Array (FPGA) device and the prototype works on a 'Spartan 3 Starter Board' with an XS3C1000 device available from Digilent that has the equivalent of about 1,000,000 logic gates, costing between $100 and $200. 'Applications that can benefit from custom instruction in hardware — things like digital signal processing, for example — are ideal for One-Der since you can implement parts of your algorithm in hardware and then easily integrate those parts with the CPU.'"
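The "move triggers hardware" idea in the summary is easy to sketch in software. Below is a toy Python model of a transfer-triggered machine; the port addresses and the single memory-mapped adder are invented for illustration and are not One-Der's actual register map.

```python
# Toy transfer-triggered machine: the only operation is "move src -> dst".
# Functional units are memory-mapped, and writing to a trigger port fires them.
# Addresses and the adder layout are invented, not One-Der's actual design.

ADD_A, ADD_B, ADD_RESULT = 0x100, 0x101, 0x102  # hypothetical adder ports

class TTA:
    def __init__(self):
        self.mem = {}

    def read(self, addr):
        return self.mem.get(addr, 0)

    def write(self, addr, value):
        self.mem[addr] = value
        if addr == ADD_B:  # writing operand B triggers the addition
            self.mem[ADD_RESULT] = self.mem.get(ADD_A, 0) + value

    def move(self, src, dst):  # the single instruction
        self.write(dst, self.read(src))

cpu = TTA()
cpu.mem[0] = 2
cpu.mem[1] = 3
cpu.move(0, ADD_A)  # move 2 into the adder's first operand port
cpu.move(1, ADD_B)  # moving 3 into the second port triggers the add
assert cpu.read(ADD_RESULT) == 5
```

Note that the "program" is nothing but moves; all the interesting behavior lives in which addresses are wired to hardware.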


That instruction is .......... (4, Insightful)

140Mandak262Jamuna (970587) | more than 4 years ago | (#30161312)

-------------drum roll

0x2A

That is the ultimate instruction.

Re:That instruction is .......... (1)

snspdaarf (1314399) | more than 4 years ago | (#30161432)

So, the mice beat everyone to it?

HP 1000 (1)

mollog (841386) | more than 4 years ago | (#30162346)

Is this not a state machine design? A network switch ought to implement this sort of design. In fact, the design of TCP would also provide functionality for parallel processing, multiple cores, etc. That would make for a variable word size, too. The work would be in the implementation of the various functions such as add, subtract, etc.

Re:That instruction is .......... (2, Funny)

MozeeToby (1163751) | more than 4 years ago | (#30161728)

Unless of course, the ultimate question really is 'What is 6 times 9?' as some people believe (meaning 42 is base 13 for some unknown reason). Which would of course make the ultimate instruction 0x36.

Re:That instruction is .......... (1, Funny)

ksemlerK (610016) | more than 4 years ago | (#30161816)

6 times 9 is 54.

Re:That instruction is .......... (4, Informative)

MozeeToby (1163751) | more than 4 years ago | (#30161866)

Hence the '42 is in base 13' part of my comment. 42(base 13) == 54(base 10) == 36(base 16). Of course, Adams himself denied this was the case... "No one writes jokes in base 13" but after this theory emerged he did work it into some of his later jokes, probably just to keep people wondering.
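The base-13 arithmetic checks out directly:

```python
# "42" read as a base-13 numeral is 4*13 + 2 = 54 = 6 * 9,
# and 54 in hexadecimal is 0x36.
assert int("42", 13) == 54 == 6 * 9
assert 54 == 0x36
```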

Re:That instruction is .......... (0)

Anonymous Coward | more than 4 years ago | (#30161870)

6 times 9 is 54.

Wow, I never knew that!!! mod parent informative!!!!

Re:That instruction is .......... (2, Funny)

dgatwood (11270) | more than 4 years ago | (#30161902)

Appropriate that the ultimate instruction would also be a wildcard (*) in ASCII.

And speaking of your drums, on Apple II, it's rotate accumulator left, the ROL instruction.

How curious.

Re:That instruction is .......... (0)

Anonymous Coward | more than 4 years ago | (#30161988)

Rolling On the Laugh?

Yeah, I can see that.

Re:That instruction is .......... (1)

gblues (90260) | more than 4 years ago | (#30161938)

Fail! That's not even a 32-bit instruction. Everyone knows the ultimate instruction is 0xDEADBEEF!

Re:That instruction is .......... (2, Funny)

EkriirkE (1075937) | more than 4 years ago | (#30161992)

But that's just 0xBAADF00D

Re:That instruction is .......... (2, Interesting)

mwvdlee (775178) | more than 4 years ago | (#30162106)

It's got only one instruction. ...and the first parameter to that instruction controls what the instruction does with the rest of the parameters.

(p.s. I wish this was just a joke, but this is pretty much what it seems to be doing)

Re:That instruction is .......... (0)

Anonymous Coward | more than 4 years ago | (#30162164)

move along nothing to see on this planet.

But can it (0)

Anonymous Coward | more than 4 years ago | (#30161354)

But can it run Vista?

"ideal for One-Der"? (4, Insightful)

mpoulton (689851) | more than 4 years ago | (#30161362)

It seems specious to say that One-Der is optimal for a task because it offers the flexibility of the underlying FPGA hardware. If you have the FPGA hardware present to run the One-Der implementation, then you could just configure a more optimally designed processor out of it for whatever task you are actually performing.

Re:"ideal for One-Der"? (2, Interesting)

Bakkster (1529253) | more than 4 years ago | (#30162514)

But most FPGAs utilize a CPU core, which is often hard-wired and has ports to access the programmable elements. Assuming the single-instruction MIPS runs faster than the common 'standard' CPUs such as PowerPC, then there would be a benefit. The CPU could be smaller (leaving more space for programmable elements) and more easily expanded upon (run additional functions by address rather than by opcode).

That's a big 'if', but there's merit in exploring it. The biggest barrier I can think of right now is with programming time, and that's the most expensive part of most FPGA projects already.

Re:"ideal for One-Der"? (1)

mattdm (1931) | more than 4 years ago | (#30162670)

The sentence from the summary which you're replying to makes more sense in its full context in the article:

Even so, One-Der is imminently usable as it is. Unlike many other FPGA CPU cores, this one is very simple to customize even if you aren't an expert on its internals. Applications that can benefit from custom instruction in hardware -- things like digital signal processing, for example -- are ideal for One-Der since you can implement parts of your algorithm in hardware and then easily integrate those parts with the CPU.

In other words, it's an ideal starting point for these applications.

nihilist (4, Insightful)

Nadaka (224565) | more than 4 years ago | (#30161388)

vaguely reminds me of the nihilist language joke. A language that realizes that ultimately all things are futile and irrelevant, thus allowing all instructions to be reduced to a no-op.

Re:nihilist (4, Funny)

Anonymous Coward | more than 4 years ago | (#30162302)

... and then it does dead code elimination, right?

He's Building a One-Der, Stop Him (5, Funny)

eldavojohn (898314) | more than 4 years ago | (#30161410)

Everyone attack him before he wins this round of Age of Empires. Quickly, he's probably low on resources right now.

Cheating? (4, Insightful)

happy_place (632005) | more than 4 years ago | (#30161416)

So the one instruction is essentially a move command that has multiple modes... Ahem. Isn't that cheating? Isn't a move already considered two instructions, a load and a store? I guess this really depends on how you define what is and isn't an instruction.

Re:Cheating? (2, Interesting)

Anonymous Coward | more than 4 years ago | (#30161522)

Erm, no. The canonical single instruction machine uses "subtract and branch if negative" and that's not considered to be three instructions (subtract, test, branch) but one.
Using memory-mapped facilities to perform operations like addition...now THAT is cheating.
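For reference, the canonical machine mentioned here is usually written as SUBLEQ (subtract and branch if the result is less than or equal to zero, a close relative of "subtract and branch if negative"). A minimal interpreter is only a few lines; the halting convention (any out-of-range branch target stops the machine) is one common choice, not the only one:

```python
# Minimal SUBLEQ one-instruction machine. Each instruction is three
# operands in memory: source a, destination b, branch target c.
def subleq(mem, pc=0):
    while 0 <= pc <= len(mem) - 3:          # any out-of-range pc halts
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                    # the subtract
        pc = c if mem[b] <= 0 else pc + 3   # branch only when result <= 0
    return mem

# Tiny demo: compute mem[3] - mem[4]; the negative result takes the
# branch to -1, which halts.
prog = [4, 3, -1,   # subleq a=4, b=3, c=-1
        2, 7]       # data: mem[3]=2, mem[4]=7
subleq(prog)
assert prog[3] == -5   # 2 - 7
```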

Re:Cheating? (2, Informative)

quickOnTheUptake (1450889) | more than 4 years ago | (#30161610)

Using memory-mapped facilities to perform operations like addition...now THAT is cheating.

Isn't that what it does?
Strikes me that that is just complicating things, insofar as you still effectively have multiple instructions, there is just another semantic level tacked on to hide them.

Re:Cheating? (2, Informative)

Talennor (612270) | more than 4 years ago | (#30162368)

So the one instruction is essentially a move command that has multiple modes... Ahem. Isn't that cheating?

Yes, it is cheating. He basically took the instruction bits of the program and said, "Behold, for they are now address bits!" With the caveat that the address bits happen to address INSTRUCTIONS. It's all pretty brain-dead.

And for Slashdot users, that one instruction is... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30161446)

Troll

GOTO ... (4, Funny)

gstoddart (321705) | more than 4 years ago | (#30161464)

I vote for GOTO as the only instruction.

That would be hilarious.

Cheers

Re:GOTO ... (1)

jeffmeden (135043) | more than 4 years ago | (#30161598)

I think I was in a high school 'comp sci for dummies' class based on that principle. You would be surprised how much qBasic can do with generous use of GOTO.

Well, at least, it seemed like an impressive program at the time. Good thing I don't write code for a living!

Re:GOTO ... (1)

ctrl-alt-canc (977108) | more than 4 years ago | (#30162366)

A NOP is better. Further developments will make available an indexed NOP, so that the CPU will jump and do nothing at the same time.

The first language I ever saw was GOTO only (1)

istartedi (132515) | more than 4 years ago | (#30162548)

The language of naughty schoolboys was goto-only. However, it never fulfilled its promise of naked chicks if you turned to page 69. Some of the programs written in said language were, however, quite humorous and complex. You could implement loops in that language, of course, and perhaps even keep an idiot busy for hours. I'm not sure it was Turing complete, though.

Can be a bit tricky to program... (5, Interesting)

nokiator (781573) | more than 4 years ago | (#30161468)

I built a single-instruction microprocessor in grad school. The only instruction moved 32 bits of data from one address to another. All the ALU and I/O functions were memory-mapped. For example, you could have an adder where address A was operand #1, address B was operand #2, and address C was the result. Branches were handled through ALU units where the result of the operation changed the instruction pointer for some future instruction. It was very easy to implement and notoriously difficult to program.
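The branching trick described here — an operation that rewrites the instruction pointer — can be sketched by memory-mapping the program counter itself. Everything below (the PC address, the (src, dst) program encoding) is invented for illustration, not the poster's actual design:

```python
# Move-only machine where the program counter is just another address.
# A program is a list of (src, dst) pairs; moving a value into PC jumps.
PC = 0xFF                                  # hypothetical memory-mapped PC

def run(program, mem, max_steps=100):
    mem[PC] = 0
    steps = 0
    while mem[PC] < len(program) and steps < max_steps:
        src, dst = program[mem[PC]]
        mem[PC] += 1                       # advance first, so a write to PC wins
        mem[dst] = mem.get(src, 0)         # the single move instruction
        steps += 1
    return mem

mem = {1: 99, 2: 3}         # mem[2] holds a jump target (instruction index 3)
program = [
    (1, 50),    # 0: mem[50] = 99
    (2, PC),    # 1: move mem[2] into PC -> jump to instruction 3
    (1, 60),    # 2: skipped by the jump
    (1, 70),    # 3: mem[70] = 99
]
run(program, mem)
assert mem[70] == 99 and 60 not in mem
```

A conditional branch falls out the same way: point an ALU unit's result port at PC, as the comment describes.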

Re:Can be a bit tricky to program... (4, Interesting)

purpledinoz (573045) | more than 4 years ago | (#30161942)

For a few seconds there, I thought you said grade school. Made me feel very inferior :) Wouldn't the complexities of programming it be handled by a compiler? If someone managed to write one for a 1 instruction processor?

Re:Can be a bit tricky to program... (1)

Chris Burke (6130) | more than 4 years ago | (#30162320)

So... how did you encode what operation the ALU should perform? And wouldn't that then be the ISA? Couldn't you then make a "one-instruction microprocessor" where the only instruction is "move bytes to x86 processor instruction cache"? ;)

Or was each possible ALU operation a different memory-mapped address? Was writing the operands to the addresses what caused the operation, or did you have to write to a "do-it" ?

Not that making such a processor isn't cool. Cus it's cool. Making just about any kind of processor in school is cool. :)

Just when I read "one instruction processor" I was thinking of something more traditional, where instructions map to execution units. So like a machine where the only instruction is NAND with memory arguments. Now that would be a bitch to program. ;)

Wrong part number in summary (5, Insightful)

mako1138 (837520) | more than 4 years ago | (#30161530)

It's XC3S1000, not XS3C1000. Been working with these parts too long...

One instruction... (3, Insightful)

hey (83763) | more than 4 years ago | (#30161652)

... whose first operand is the task to perform. Followed by the necessary operands for that task.

Re:One instruction... (5, Interesting)

pz (113803) | more than 4 years ago | (#30162256)

... whose first operand is the task to perform. Followed by the necessary operands for that task.

Exactly. It isn't a single instruction computer.

And the idea isn't new.

If a single instruction architecture is designed, then there is only one instruction (duh), and there's no reason to encode that instruction in the instructions themselves. All that will be left is encoding for operands. There's a tempting but brief foray into semantics where you can argue that the first handful of bits in TFA's instruction set are operands to the execution control unit, but that is, in fact, what most would consider defining a set of instructions where each distinct value in that first handful of bits describes more-or-less a distinct instruction. One quickly realizes, however, that there is a fundamental difference between data operands and instruction operands, and, by stating that it is a single instruction architecture, the implication is that there are no instruction operands. Therefore, TFA's architecture is not single instruction.

It's well known that there are universal logic elements (like the two-input NOR gate), and, by extension, you can create single instruction architectures that compute the universal logic element operation on two arguments, writing the results to a third. Instructions in such architectures are just memory locations -- source A, source B and destination. While incredibly simple, such a machine is going to have a very, very low instruction set density. It's an interesting project for intellectual curiosity (like in an introductory graduate level machine architecture course) but hardly worthy of a Slashdot front page mention.
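The universal-logic-element machine described above is simple enough to sketch. Control flow is omitted, so this fragment on its own is not Turing complete; it only shows how an "instruction" reduces to three addresses:

```python
# One-instruction machine whose sole operation NORs two memory words into
# a third. An instruction is just (source A, source B, destination).
def nor_machine(mem, program, width=32):
    mask = (1 << width) - 1
    for a, b, dest in program:
        mem[dest] = ~(mem[a] | mem[b]) & mask   # the universal NOR
    return mem

# Build AND out of NOR: NOT x = x NOR x, and a AND b = (NOT a) NOR (NOT b).
mem = [0b1100, 0b1010, 0, 0, 0]
prog = [(0, 0, 2),   # mem[2] = NOT mem[0]
        (1, 1, 3),   # mem[3] = NOT mem[1]
        (2, 3, 4)]   # mem[4] = mem[0] AND mem[1]
nor_machine(mem, prog)
assert mem[4] == 0b1000
```

Three NOR instructions for one AND gives a feel for why the instruction set density is so low.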

...to rule them all! (0)

Anonymous Coward | more than 4 years ago | (#30162644)

But, err, there are no instructions for it to rule. Oh well.

Microblaze (0)

Anonymous Coward | more than 4 years ago | (#30161654)

It occurs to me that the Microblaze would be 10 times faster, much easier to program and probably of a similar size.

Old news (1)

Al Kossow (460144) | more than 4 years ago | (#30161674)

Mike Albaugh did this in 1986. Google for "urisc macro package" in the net.arch archives.
His instruction was "Reverse subtract and skip if borrow".

Memory of this from Engineering School (3, Funny)

systemeng (998953) | more than 4 years ago | (#30161726)

I remember hearing about building a one instruction computer back in engineering school. The one I heard about was based on Subtract and Branch if Not Equal. My roommate at the time figured it ought to be a way to get a very high clock rate. It seems like he found a proof in a hoary old book that such a computer was in fact Turing complete. I'm sure I'll get flamed for posting a vague recollection but... here it is.

Ummmm (1, Insightful)

Sycraft-fu (314770) | more than 4 years ago | (#30161744)

"The advantages of RISC are well known -- simplifying the CPU core by reducing the complexity of the instruction set allows faster speeds, more registers, and pipelining to provide the appearance of single-cycle execution."

Is it just me, or does this sound like RISC fanboyism from the 1990s? The "advantages" of RISC are not nearly so clear these days. Indeed, it is getting rather hard to find real RISC chips. While there are chips based on RISC ISA idea (like being load/store and such), they are not RISC. RISC is about having few instructions and instructions that are simple and only do one thing. Those concepts are pretty much thrown out when you start having SIMD units on the chip and such.

These days complex processors are the norm. They have special instructions for special things and that seems to work well. RISC is just not very common, even in systems with a RISC heritage.

I'm just not seeing what this processor is supposed to accomplish, especially being on an FPGA. If you can implement a CPU to do what you need on an FPGA, you can probably implement a dedicated solution on the FPGA that is faster. That is rather the idea of an FPGA over a CPU. You can implement things in hardware that are faster.

Re:Ummmm (0)

Anonymous Coward | more than 4 years ago | (#30162020)

In my opinion RISC was not about a reduced number of instructions (the name was a bad choice for what I guess was a backronym). It was more about having instructions encoded in a regular way, hence the fixed-size instructions and the reduced number of instruction formats.

Your example of SIMD units, if you take this into account, is completely opposite: SIMD units have really minimized instruction sets encoded in a very regular way.

Btw, I don't agree at all with the notion that a processor based on operations triggered by a move instruction is RISC at all. One of the design goals of traditional RISC processors was increased orthogonality. I can't see the orthogonality of triggering operations by writing to magic addresses.

Re:Ummmm (2, Informative)

Anonymous Coward | more than 4 years ago | (#30162108)

This isn't true. Modern processors are highly RISCy -- they just have front-ends that translate from CISC ISAs. The last genuinely CISC processor was, I believe, the Pentium (non-pro edition).

Re:Ummmm (5, Informative)

julesh (229690) | more than 4 years ago | (#30162174)

Is it just me, or does this sound like RISC fanboyism from the 1990s? The "advantages" of RISC are not nearly so clear these days. Indeed, it is getting rather hard to find real RISC chips. While there are chips based on RISC ISA idea (like being load/store and such), they are not RISC. RISC is about having few instructions and instructions that are simple and only do one thing. Those concepts are pretty much thrown out when you start having SIMD units on the chip and such.

I wouldn't say that's what RISC was about at all; the basic idea was to have only instructions that could be implemented using a few simple pipeline stages. This is a substantial improvement over the microcoded architectures that were prevalent prior to RISC, because it can be much more easily pipelined (or, indeed, pipelined at all). I don't see SIMD as incompatible with RISC in any fashion; it just happens that the instruction operates on very wide data, but it's still a relatively simple instruction that should be able to complete quite quickly.

These days complex processors are the norm. They have special instructions for special things and that seems to work well. RISC is just not very common, even in systems with a RISC heritage.

I'd say it's more the other way around. Even in systems with a CISC ISA (e.g. x86), you tend to find that under the hood the CISC instructions are translated into a series of microops that are then dispatched in a system that is somewhat RISC-like. The most common processor family in the world is the ARM family, and all of those processors subscribe pretty well to the original principles of RISC, from instruction set to internal design of the processor core.

All of these are much more faithful to the principles of RISC than the chip described in TFA, whose instruction performs two memory accesses on each execution -- note that the removal of such instructions and consequent simplification of the execution pipeline (by having only a single pipeline stage that could access memory) was the original motivation behind RISC architectures.

Re:Ummmm (1)

Dunbal (464142) | more than 4 years ago | (#30162584)

Is it just me, or does this sound like RISC fanboyism from the 1990s?

The good thing about fashion is that if you wait long enough, everything will be in vogue again. I'm just waiting for the day when my crates of punch cards will be in demand by everyone. My great-grandchildren will surely respect me THEN.

Not new, and not too useful (5, Interesting)

Animats (122034) | more than 4 years ago | (#30161782)

That's an old idea. [wikipedia.org] The classic "one instruction" is "subtract, store, and branch if negative". This works, but the instructions are rather big, since each has both an operand address and a branch address.

Once you have your one instruction, you need a macroassembler, because you're going to be generating long code sequences for simple operations like "call". Then you write the subroutine library, for shifting, multiplication, division, etc.

It's a lose on performance. It's a lose on code density. And the guy needed a 1,000,000 gate FPGA to implement it, which is huge for what he's doing. Chuck Moore's original Forth chip, from 1985 [ultratechnology.com] had less than 4,000 gates, and delivered good performance, with one Forth word executed per clock.
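To make the macroassembler point concrete: with standard SUBLEQ semantics (subtract, then branch when the result is less than or equal to zero), even ADD expands into three instructions plus a scratch cell. A minimal sketch, with the interpreter and memory layout invented for illustration:

```python
# Standard SUBLEQ semantics: mem[b] -= mem[a]; branch to c if result <= 0.
def subleq_run(mem, pc=0):
    while 0 <= pc <= len(mem) - 3:          # out-of-range pc halts
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# ADD mem[9] into mem[10] takes three triples and a scratch cell Z (mem[11],
# assumed to start at zero):
prog = [9, 11, 3,     # Z -= a   -> Z = -4; result <= 0 so branch to 3 (fallthrough)
        11, 10, 6,    # b -= Z   -> b = 5 - (-4) = 9
        11, 11, -1,   # Z -= Z   -> 0; branch to -1 halts with Z cleared
        4, 5, 0]      # data: a = 4, b = 5, Z = 0
subleq_run(prog)
assert prog[10] == 9  # 4 + 5
```

Multiply or shift expands far worse, which is exactly why the code density is a lose.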

RISC vs CISC - sigh (2, Informative)

peter3125 (1117319) | more than 4 years ago | (#30161790)

"The advantages of RISC are well known — simplifying the CPU core by reducing the complexity of the instruction set allows faster speeds, more registers, and pipelining to provide the appearance of single-cycle execution." I know this has been argued to death already - but it just isn't completely true that a RISC has advantages over a CISC. The gain in speed is usually negated by the lack of expressiveness and the number of registers would help a CISC just as much as a RISC. Why is this being dragged up again?

"One-der" (4, Insightful)

porges (58715) | more than 4 years ago | (#30161846)

The hyphen is there so everyone doesn't call it "The O-need-er", as in That Thing You Do.

This is not Computer Science (0)

Anonymous Coward | more than 4 years ago | (#30161904)

We'll see lots of joke replies here. Computer Science is more concerned with O() notation--they have enough problems wrapping their heads around something like floating point numbers.

Used to work for someone doing this (1)

JKR (198165) | more than 4 years ago | (#30161946)

The idea of offloading software functions onto custom hardware built around a TTA is interesting - 5 years ago I used to work for Critical Blue [criticalblue.com] who were writing software to design and build those custom processors and optimise an ISA for them. Worth a look.

Fastest Processor (-1)

Anonymous Coward | more than 4 years ago | (#30162054)

I have built the fastest processor in the world using this technique. Its one instruction is NOP. It does nothing faster than anything else on the planet.

One command? (2, Interesting)

HockeyPuck (141947) | more than 4 years ago | (#30162090)

Reminds me of this old saying,

"Every program can be reduced by one instruction, and every program has at least one bug. Therefore, any program can be reduced to one instruction which doesn't work."

I just wish I knew who came up with it.

I think it's misleading to call it 1 instruction (2, Insightful)

shoor (33382) | more than 4 years ago | (#30162212)

There can be different architectures for computers, but nowadays, for many of us, there is one particular model that is likely to be the only one we're really familiar with, and that automatically comes to mind when one speaks of a computer architecture. It's a rather compartmentalized architecture in which the CPU is the place where opcodes are executed and memory is just a big flat address space for data, including instructions. This "transfer triggered" architecture strikes me as being not so much a one-instruction computer as one where instructions are implemented in a less compartmentalized fashion, spread out among special units activated by addresses, as opposed to the plainer architecture where bit patterns on the address bus simply activate individual generic memory cells along with a read/write signal. More than that may happen (cache memory comes into play with all its complications, for instance), but the 'model' for the programmer is that simple one.

Re:I think it's misleading to call it 1 instructio (1)

DerekLyons (302214) | more than 4 years ago | (#30162594)

There can be different architectures for computers, but, nowadays, for many of us, I'd say there is one particular model of an architecture that is likely to be the only one we're really familiar with, and that automatically comes to mind when one speaks of a computer architecture.

Which just goes to show how shockingly ignorant 'many of us' are.
 
Now if 'many of us' (brighter than the norm, or so the theory goes) can be so ignorant - why do we laugh at Joe Sixpack?

Actually the real point (0)

Anonymous Coward | more than 4 years ago | (#30162322)

If you read all the way through, he talks about how the bus-like architecture would let you reconfigure the CPU, although he doesn't have any tools for that. So you could scan your program, decide on an optimal architecture for it (I need 4 accumulators, 2 stacks, and 4 floating point units) and then compile a program just for that "new" CPU. You could do that with other schemes, I guess, but it becomes hard because the data path is usually pretty much wired one way. This is just a bus.

Also, it looks like it would be nothing to add new "instructions" just by plugging relatively simple boxes onto the bus.

There are a couple of old CPUs that did this (Burroughs maybe? I forget).

Oneders? (1, Funny)

Anonymous Coward | more than 4 years ago | (#30162336)

AH the ONEDERS! - didn't that band have to change their name to the wonders? silly movies.... GO ONEDERS! pronounced (oh-knee-ders)

It's not working (1)

rewt66 (738525) | more than 4 years ago | (#30162342)

The point was supposed to be speed. So, he gets 10 MHz? That's not very impressive...

Re:It's not working (1)

Rockoon (1252108) | more than 4 years ago | (#30162654)

No matter how you look at this type of computer, it's just not going to compete in the general computing space because you can't design a memory cache that can cope with the forced random access to memory.

Isn't it cheating? (1)

HexaByte (817350) | more than 4 years ago | (#30162384)

Isn't it cheating to have a CPU with one instruction that relies on custom hardware to do the rest of the instructions? You're just re-defining the CPU and adding more hardware to 'simplify' the CPU!

This is progress? (1)

pedantic bore (740196) | more than 4 years ago | (#30162488)

2009: one million gates, one instruction, RISC, gnarly to program = 10 MIPS.

1984: 200,000 gates, gobs of instructions, CISC, easy to program = 10 MIPS.

We should have more to show for the last twenty-five years in microprocessor design.

Perfect for running CP/M-86... (0)

Anonymous Coward | more than 4 years ago | (#30162672)

... the OS only had one instruction - "PIP" ("Peripheral Interchange Program").

I've seen this before. (0)

Anonymous Coward | more than 4 years ago | (#30162716)

It seems to me like all they're trying to do is reduce risc.
